Job Responsibilities:
Design, implement, and optimize data frameworks for complex data ingestion, processing, and governance.
Leverage Big Data technologies (Apache Spark, Hive, Presto, Iceberg) to manage large-scale data operations efficiently.
Oversee metadata frameworks to ensure proper data lineage, quality, and compliance across pipelines.
Utilize Apache Iceberg within the data lake architecture for optimized data storage, query performance, and versioning.
Drive data governance initiatives to ensure compliance with regulations (GDPR, PCI-DSS, internal standards).
Collaborate with data engineers, data scientists, and business teams to build data solutions that meet business needs.
Continuously improve and optimize data pipelines and frameworks for operational efficiency.
Requirements:
5-10 years of hands-on experience with Big Data technologies (Apache Spark, Hive, Presto, Kafka).
Proven experience with metadata management and data lineage.
Expertise in Apache Iceberg and data lake architecture.
Strong experience with cloud platforms (AWS, GCP, Azure).
In-depth knowledge of data governance and compliance (GDPR, PCI-DSS, SOX).
Experience mentoring junior engineers and leading data engineering projects.
Strong problem-solving and analytical skills.
Excellent communication skills for cross-team collaboration.
Bachelor's degree in Computer Science, Information Technology, Programming & Systems Analysis, Science (Computer Studies), or a related field.
5-10 years of experience as a Data Engineer.
Date Posted: 04/09/2025
Job ID: 125465481