
Data Engineer

1-4 Years
SGD 1,000 - 5,000 per month
  • Posted 21 hours ago

Job Description

Responsibilities

Data Pipeline Engineering

  • Design, implement, and maintain data pipelines for ingestion, transformation, and storage.
  • Build and operate streaming and batch processing pipelines using technologies such as Kafka and Apache Spark.
  • Develop robust systems for data reconciliation and validation to ensure consistency across multiple data sources.

Data Quality & Reconciliation

  • Build automated processes that detect and resolve discrepancies between datasets.
  • Design data integrity checks, monitoring, and alerting systems.
  • Implement reconciliation workflows to ensure correctness across distributed pipelines.

System Architecture

  • Contribute to the design of scalable data architectures that support high-throughput data ingestion and processing.
  • Work on event-driven data systems and streaming architectures.
  • Participate in discussions around system design, reliability, and performance optimization.

Data Processing & Wrangling

  • Clean, normalize, and transform raw datasets into structured, analysis-ready formats.
  • Build reusable tooling and utilities for data transformation and validation.
  • Work with large datasets and ensure pipelines remain performant and scalable.

Required Qualifications

  • Strong programming skills in Python, Scala, or C++
  • Experience working with Kafka, Spark, or other distributed data processing systems
  • Understanding of data pipeline architecture and distributed systems
  • Experience with data wrangling, cleaning, and transformation
  • Ability to design systems with reliability, scalability, and maintainability in mind
  • Strong problem-solving ability and attention to detail
  • Hands-on experience building dashboards and reports

Preferred Qualifications

  • Experience with stream processing systems (Kafka Streams, Flink, Spark Streaming)
  • Familiarity with data lake or warehouse architectures
  • Experience working with large-scale datasets
  • Familiarity with containerization (Docker) and workflow orchestration tools

What We Look For

  • Strong systems thinking - the ability to reason about how components interact within a larger architecture
  • Ability to own projects end-to-end
  • Curiosity about building robust and reliable data infrastructure
  • A pragmatic mindset toward solving real-world engineering problems

More Info


Job ID: 143871597
