Our client, a private European investment bank, is looking for a Data Engineer to design, build, and maintain the data pipelines, backend services, and governance frameworks that power real-time decisioning and analytics across the organisation. This role suits someone who enjoys solving complex data problems, building robust backend systems, and working with modern cloud and streaming technologies in a regulated environment.
Responsibilities
- Design, develop, and maintain real-time data pipelines, backend services, and workflow orchestration for decisioning, reporting, and data processing.
- Define data requirements, models, and transformation logic across structured and unstructured data sets.
- Write high-quality, secure, well-tested code in Python, Scala, or similar languages.
- Build software and processes that strengthen data governance, data quality, and data security.
- Support scalable data architectures using Azure, Delta Lake/Delta Live Tables, and distributed computing frameworks.
- Troubleshoot and resolve data issues across pipelines, storage layers, and downstream systems.
Requirements
- BS/MS in Computer Science, Data Engineering, Information Systems, or equivalent experience.
- 5+ years of experience building end-to-end data systems, ETL/ELT pipelines, and workflow management using tools such as Airflow, Databricks, or similar.
- Strong SQL skills for diagnosing data issues and validating complex transformations.
- Hands-on experience with large-scale data processing using Databricks, PySpark, Spark Streaming, and Delta Tables.
- Experience in Azure cloud data environments, including Data Lake Storage and CI/CD deployment workflows.
- Familiarity with microservices platforms (Kubernetes, Docker) and event-driven systems (Kafka, Event Hubs, Event Grid, Flink).
- Experience developing in Python, Scala, JavaScript, Java, or C#.
- Knowledge of dbt, Data Vault, or Microsoft Fabric is a plus.
- Prior experience in banking or financial services is highly preferred.
(12-month contract, convertible to permanent)