Senior Data Engineer

Experience: 5-8 Years
Salary: SGD 8,000 - 13,000 per month

Job Description

About the Role

We are seeking a highly skilled Senior Data Engineer to design, build, and optimize large-scale data platforms and pipelines. The ideal candidate will have deep expertise in big data ecosystems, distributed processing frameworks, and scalable ETL architecture. This role is critical to enabling advanced analytics, data warehousing, and data-driven decision-making across the organization.

Key Responsibilities

  • Architect, develop, and maintain enterprise-grade data pipelines for batch and real-time processing.
  • Implement scalable ETL/ELT solutions using modern distributed processing tools and frameworks.
  • Collaborate with Data Science, Analytics, and Business teams to translate requirements into high-performance data solutions.
  • Design data models, schemas, and storage strategies for optimal performance and cost efficiency.
  • Ensure data quality, consistency, and reliability across environments.
  • Integrate diverse data sources (transactional systems, event streams, logs, external feeds).
  • Drive performance tuning, query optimization, and platform scalability.
  • Document system design, standards, and best practices.
  • Provide technical mentorship and leadership to data engineering teams.

Required Skills & Experience

Technical Skills

  • Big Data Frameworks: Strong experience with Apache Hadoop ecosystem (HDFS, YARN).
  • Distributed Processing: Expertise in Apache Spark and writing high-performance jobs for ETL/ELT.
  • Databricks: Databricks certification or hands-on experience with Databricks development and deployment.
  • Databases & SQL: Advanced proficiency in SQL (including optimization) across relational and analytical systems.
  • Programming Languages: Python and/or Scala for data processing.
  • Data Warehousing / Query Engines: Exposure to Hive, Impala, or similar SQL engines on data lakes.
  • Cloud & Storage: Experience with cloud storage (S3/ADLS) or on-prem HDFS.
  • Data Formats: Parquet, ORC, Avro, JSON processing.
  • Data Orchestration: Experience with workflow tools (Airflow, Oozie, etc.).

Soft Skills

  • Strong communication across technical and business stakeholders.
  • Problem-solving with a data-centric mindset.
  • Ability to deliver under ambiguity and drive cross-team alignment.

Preferred Qualifications

  • Bachelor's or Master's in Computer Science, Engineering, or related discipline.
  • Prior work on real-time streaming platforms (Kafka, Spark Structured Streaming).
  • Familiarity with data governance, security, and compliance best practices.
  • Experience mentoring junior engineers.

What Success Looks Like in 90 Days

  • Production-ready ETL pipelines deployed with automated testing.
  • Documented data processing standards and schemas.
  • Measurable improvements in data latency, query performance, or pipeline reliability.
  • Successful onboarding of internal analytics teams onto the platform.

More Info


Job ID: 138804929