
Job Responsibilities:
- Build and manage data pipelines (batch and real-time) using Databricks / Snowflake (see the pipeline sketch after this list)
- Implement data ingestion frameworks (API, streaming, batch) for external sources (e.g., Moody's)
- Develop data transformation, standardisation, and enrichment pipelines
- Implement data quality checks and CDQ rules within pipelines
- Integrate pipelines with metadata and lineage platforms
- Automate workflows and integrate with ServiceNow for issue management
- Support the implementation of data governance controls and audit traceability
- Build reusable data engineering components for scalability
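A minimal sketch of the kind of batch pipeline described above, assuming a Databricks workspace with Delta Lake enabled; the landing path, table names, and quality rule are hypothetical illustrations, not part of the role's actual codebase:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("moodys_batch_ingest").getOrCreate()

# Batch ingestion from a hypothetical landing zone for an external feed
raw = spark.read.json("/mnt/landing/moodys/ratings/")

# Standardisation and enrichment: normalise types and key text fields
standardised = (
    raw.withColumn("rating_date", F.to_date("rating_date"))
       .withColumn("entity_name", F.trim(F.upper(F.col("entity_name"))))
)

# Data quality rule: rows must carry a non-null entity identifier
valid = standardised.filter(F.col("entity_id").isNotNull())
rejects = standardised.filter(F.col("entity_id").isNull())

# Persist curated rows to Delta; quarantine failures for triage
valid.write.format("delta").mode("append").saveAsTable("curated.moodys_ratings")
rejects.write.format("delta").mode("append").saveAsTable("dq.moodys_ratings_rejects")
```

Splitting valid and rejected rows into separate Delta tables is one common way to make quality failures auditable rather than silently dropped.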
Job Requirements:
- Minimum of 10 years of relevant experience
- Strong hands-on experience with Databricks (Spark, Delta Lake) / Snowflake
- Experience with ETL/ELT pipelines, API integrations, and streaming (Kafka / Event Hub) (see the streaming sketch after this list)
- Knowledge of data quality frameworks and validation techniques
- Familiarity with data lineage integration and metadata tagging
- Programming skills in Python, SQL, and Spark
- Experience with cloud platforms (Azure / AWS / GCP)
- Understanding of governance integration (Collibra / Atlan / Purview preferred)
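For the streaming requirement, a minimal Spark Structured Streaming sketch reading from Kafka into Delta, assuming the Kafka connector is available (it is bundled on Databricks); the broker address, topic, and checkpoint path are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ratings_stream").getOrCreate()

# Continuous ingestion from a hypothetical Kafka topic
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "ratings-events")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to text for downstream parsing
events = stream.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

# Append to a Delta table; the checkpoint gives restart/exactly-once semantics
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/ratings-events")
          .outputMode("append")
          .toTable("raw.ratings_events")
)
query.awaitTermination()
```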
Job ID: 145223631