Lead end-to-end delivery of data and AI/GenAI solutions across AWS, Azure, and GCP environments.
Develop and implement analytics techniques and applications that transform raw data into meaningful information, using data-oriented programming languages and visualisation software.
Architect and implement resilient, scalable data pipelines and cloud-based data platforms (Databricks Lakehouse).
Apply data mining, data modelling, natural language processing, and machine learning to extract and analyse information from large structured and unstructured datasets.
Define and enforce data governance, security, and compliance frameworks.
Drive platform strategy, roadmap development, and innovation in AI/ML and analytics capabilities.
Oversee integration of heterogeneous data sources and enable advanced analytics and ML-driven insights.
Implement cloud-native architectures (microservices, containers, serverless) and IaC (Terraform, CloudFormation).
Manage cross-functional stakeholders and lead high-performing engineering teams.
Establish best practices in MLOps, data modelling, and real-time/event-driven architectures.
Requirements
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
10+ years in data engineering and analytics, with 5+ years in leadership roles (stakeholder management, people management, and project management).
Strong expertise in Databricks, data lakes and data warehousing, ETL, and distributed data systems.
Hands-on experience with AWS and Azure (multi-cloud exposure required; GCP is a plus).
Proven track record delivering AI/ML or GenAI-powered platforms.
Proficiency in Java/.NET, Python, SQL, and R for data processing and statistical computing.
Experience with BI tools (Power BI, Tableau) and advanced analytics workflows.
Strong knowledge of cloud security, data governance, and access control.
Certifications (AWS, Databricks, PMP) are highly desirable.