We are hiring a Data Scientist and a Data Engineer to join our client's growing data team. You'll play a key role in shaping the future of AI, data, and digital innovation, working on impactful projects such as consumer apps, EV charging, grid monitoring, asset optimization, and advanced analytics.
Role: Data Scientist
Key Responsibilities
- Design, develop, and deploy AI/ML/Gen AI solutions across business domains (apps, EV charging, grid monitoring, asset optimization, etc.).
- Engage stakeholders to gather requirements, translate them into scalable AI solutions, and ensure delivery of business value.
- Collaborate with engineers, analysts, product owners, and UX designers in an agile framework.
- Implement AI governance practices (documentation, explainability, monitoring, ethics, security).
- Research and evaluate new AI/Gen AI frameworks, tools, and vendor solutions for adoption.
Key Requirements
- Proven experience in developing and deploying AI/ML models into production.
- Hands-on experience with Gen AI frameworks (LangChain, Hugging Face, LlamaIndex).
- Strong understanding of ML/DL (CV, NLP, time-series, predictive maintenance, etc.).
- Bachelor's degree in Computer Science, Data Analytics, AI, or a related field.
- Bonus: stakeholder engagement, data storytelling, Docker, cloud (AWS/Azure/GCP), AI governance frameworks.
Role: Data Engineer
Key Responsibilities
- Build and maintain robust, high-performance data pipelines across cloud, hybrid, and on-prem ecosystems.
- Implement data governance processes including data quality, profiling, remediation, and lineage.
- Process large and complex datasets from diverse sources.
- Develop and maintain microservices, REST APIs, and reporting services.
- Design and optimize infrastructure to support scalability and performance.
- Collaborate with Data Scientists, ML Engineers, and business stakeholders to deliver actionable insights.
Key Requirements
- Experience building and managing data lakes, warehouses, and MDM platforms (Informatica, Talend, Semarchy, etc.).
- Strong expertise with big data tools (Hadoop, Spark, Kafka) and cloud platforms (AWS, Azure, GCP).
- Proficient in SQL/NoSQL databases and programming languages (Python, Java, Scala).
- Familiarity with data governance, data quality, and metadata management.
- Bonus: ETL tools (Talend, ADF), Databricks/Delta Lake, Cloudera, and large-scale distributed systems.