Job Responsibilities:
Build, maintain, and optimize data frameworks for ingestion, processing, and governance.
Work with Big Data technologies (Apache Spark, Hive, Presto, Iceberg) to manage large-scale data.
Implement and manage metadata frameworks, ensuring proper data lineage across platforms.
Gain hands-on experience with Iceberg in data lake architectures for scalable data processing.
Support data governance and ensure compliance with regulations (e.g., GDPR, PCI-DSS).
Collaborate with teams to optimize data solutions for business needs.
Contribute to the improvement of data pipelines and frameworks.
Develop analytics applications and visualize data to inform business decisions.
Requirements:
3-5 years of experience with Big Data technologies (Apache Spark, Hive, Presto).
3-5 years of experience as a Data Engineer.
Basic knowledge of metadata management and data lineage.
Familiarity with Iceberg and cloud platforms (AWS, GCP, or Azure).
Understanding of data governance and compliance regulations.
Strong problem-solving and collaboration skills.
Bachelor's degree in Computer Science, Information Technology, Programming & Systems Analysis, Science (Computer Studies), or a related field.
Date Posted: 04/09/2025
Job ID: 125465479