

Data Engineer

8-11 Years
SGD 9,500 - 12,500 per month
  • Posted 4 days ago
  • Be among the first 10 applicants

Job Description

Key Responsibilities

  • Design, implement, and maintain ETL/ELT pipelines for structured and unstructured data.

  • Develop and optimize data models, data marts, and analytical datasets for reporting and analytics use cases.

  • Build and manage data integration frameworks using batch and real-time processing technologies.

  • Implement data governance, data quality checks, cataloging, lineage, and metadata management.

  • Collaborate with cross-functional stakeholders to understand analytical requirements and translate them into technical data specifications.

  • Ensure performance, scalability, and security of enterprise data infrastructure.

  • Support cloud deployments and modern data architectures (e.g., data lakes, lakehouses, data warehouses).

  • Troubleshoot and resolve data-related technical issues, performance bottlenecks, and pipeline failures.

  • Maintain documentation for data workflows, schemas, and internal processes.

Required Skills & Competencies

  • Strong proficiency in SQL and database concepts (transactional and analytical).

  • Programming skills in Python, Scala, or Java for data processing.

  • Hands-on experience with ETL/ELT tools and frameworks (e.g., Airflow, dbt, Informatica, SSIS).

  • Experience with distributed data processing frameworks (e.g., Spark, Hadoop, Flink).

  • Cloud data services exposure (e.g., AWS - Redshift/Glue/S3, GCP - BigQuery/Dataflow, Azure - Synapse/Data Factory).

  • Data modeling expertise (dimensional models, star schemas, normalization).

  • Familiarity with CI/CD integration for data pipeline automation.

  • Understanding of data security, compliance, and governance principles.

  • Strong collaboration, stakeholder communication, and problem-solving capabilities.

Preferred / Advantageous Skills

  • Experience with streaming platforms (Kafka, Kinesis, Pub/Sub).

  • Exposure to MLOps and ML pipeline orchestration tools.

  • Knowledge of containerization & orchestration (Docker, Kubernetes).

  • Familiarity with data cataloging platforms (e.g., Collibra, Alation, Glue Catalog).

  • Experience in building dashboards (Tableau, Power BI, Looker) is a plus.

Education & Certifications

  • Bachelor's degree in Computer Science, Engineering, Mathematics, Information Systems, or related discipline.

  • Certifications in AWS, GCP, Azure, or Databricks preferred.

More Info


Job ID: 138500797
