Job Responsibilities:
Lead the design, development, and optimization of data frameworks for complex, large-scale data systems.
Oversee the implementation and management of Big Data technologies (Apache Spark, Hive, Presto, Iceberg) to support scalable and efficient architectures.
Drive the development and governance of metadata frameworks, ensuring data lineage, consistency, and quality across platforms.
Architect and implement Iceberg for enhanced data lake performance, scalability, and storage optimization.
Own the data governance strategy, ensuring compliance with applicable regulations and standards (e.g., GDPR, PCI DSS, SOX).
Mentor and lead a team of data engineers, driving the adoption of engineering best practices and growing the team's technical expertise.
Collaborate with senior leadership and business stakeholders to define and align data strategies with business goals.
Continuously improve data pipelines and technologies to optimize operational efficiency.
Requirements:
10+ years of experience in data engineering, with expertise in Big Data technologies (e.g., Apache Spark, Hive, Presto, Kafka).
Strong experience in metadata management (e.g., Collibra, Axon, IDMC) and data lineage processes.
Deep expertise in Iceberg and data lake architecture, focusing on performance, scalability, and cost-efficiency.
Advanced experience with cloud platforms (AWS, GCP, Azure).
In-depth knowledge of data governance, security, and compliance frameworks (e.g., GDPR, PCI DSS, SOX).
Proven leadership experience managing technical teams and delivering complex data solutions.
Strong problem-solving and strategic-thinking skills, with the ability to communicate effectively with both technical and non-technical stakeholders.
Date Posted: 04/09/2025
Job ID: 125465465