Client Summary
Our client is a global investment firm with a strong reputation for innovation and stability. They manage a diverse portfolio across asset classes and markets in more than 37 countries, consistently driving value creation through data-driven insights and advanced technology.
What You'll Do
- Design and build robust data pipelines to ingest, clean, transform, and aggregate large volumes of structured and unstructured data
- Enable analytics and AI use cases by ensuring datasets are reliable, accessible, and efficiently modeled
- Collaborate with data scientists, analysts, and business units to deliver usable datasets for portfolio management, risk, operations, and compliance
- Implement monitoring, logging, and alerting frameworks to maintain high data quality and pipeline uptime
- Contribute to data governance practices, including documentation, lineage tracking, and compliance with internal and regulatory requirements
- Optimize the performance and scalability of pipelines to handle growing data volumes and use cases across the investment lifecycle
What We're Looking For
Technical skills:
- Proficiency in Python for data engineering tasks
- Strong knowledge of SQL and relational databases
- Experience with ETL frameworks (e.g., Airflow, Luigi, Spark)
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data warehousing (Snowflake, Redshift, BigQuery)
Data practices:
- Experience with data ingestion, transformation, and modeling
- Solid grasp of data quality, governance, and compliance principles
Analytics & visualization:
- Exposure to BI tools such as Tableau or Power BI
Date Posted: 13/09/2025
Job ID: 126044011