Data Engineer (Immediate Starter)

Experience: 2-5 years
Salary: SGD 5,000 - 6,900 per month
Posted 9 days ago

Job Description

Key Responsibilities

  • Design, build, and maintain ETL/ELT data pipelines across cloud or on-prem environments (a minimal sketch of such a pipeline follows this list).

  • Develop automated workflows to ingest, clean, validate, and transform structured and unstructured data.

  • Build and maintain data marts, data warehouse tables, and analytical datasets to support business intelligence and reporting.

  • Implement data quality checks, monitoring dashboards, and alerting mechanisms for pipeline reliability.

  • Work with APIs, streaming data, or batch processes to integrate data from multiple internal and external sources.

  • Support troubleshooting, incident investigation, and optimisation of pipeline performance.

  • Collaborate with analysts, product teams, and business units to understand data requirements and deliver usable datasets.

  • Manage cloud resources, storage layers, and compute workloads (AWS/Azure/GCP).

  • Participate in documentation, version control, code reviews, and CI/CD practices.
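
As a rough illustration of the pipeline and data quality work described above, here is a minimal batch ETL sketch in Python using pandas and SQLAlchemy. The file path, table name, column names, and connection string are hypothetical placeholders, not details of this role's actual stack.

    # Minimal batch ETL sketch: extract a CSV, run basic quality checks,
    # and load the result into PostgreSQL. All names below (orders.csv,
    # fact_orders, the connection URI) are hypothetical.
    import pandas as pd
    from sqlalchemy import create_engine

    def extract(path: str) -> pd.DataFrame:
        return pd.read_csv(path)

    def validate(df: pd.DataFrame) -> pd.DataFrame:
        # Data quality checks: required columns present, no null or duplicate keys.
        required = {"order_id", "customer_id", "amount"}
        missing = required - set(df.columns)
        if missing:
            raise ValueError(f"missing columns: {missing}")
        return df.dropna(subset=["order_id"]).drop_duplicates(subset=["order_id"])

    def load(df: pd.DataFrame, table: str, conn_uri: str) -> None:
        engine = create_engine(conn_uri)
        df.to_sql(table, engine, if_exists="append", index=False)

    if __name__ == "__main__":
        frame = validate(extract("orders.csv"))
        load(frame, "fact_orders", "postgresql://user:pass@localhost:5432/warehouse")

In production such a job would typically run under an orchestrator with retries, monitoring, and alerting rather than as a standalone script.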

Skills & Experience We're Looking For

Core Requirements

  • Proficiency in SQL for data manipulation, modelling, and performance-optimised queries.

  • Hands-on experience with Python for ETL scripting, data transformation, or API integrations.

  • Experience with at least one cloud platform: AWS, Azure, or GCP.

  • Familiarity with data orchestration or workflow tools (e.g., Airflow, Azure Data Factory, Step Functions, Cloud Composer, cron); a DAG sketch follows this list.

  • Experience with relational databases (MySQL, PostgreSQL, SQL Server, etc.).

  • Ability to design and maintain data pipelines across batch or near-real-time processes.
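
To show what the orchestration requirement above typically looks like in practice, here is a sketch of a daily Airflow DAG, assuming Airflow 2.4 or later; the DAG id and task bodies are hypothetical.

    # Sketch of a daily DAG wiring extract -> transform -> load (Airflow 2.4+).
    # The dag_id and the task bodies are hypothetical placeholders.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from source systems")

    def transform():
        print("clean and reshape the extracted data")

    def load():
        print("write the result to the warehouse")

    with DAG(
        dag_id="daily_orders_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load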

Good To Have (Bonus)

  • Experience with Spark / Databricks (PySpark or Scala).

  • Exposure to data lake architecture (bronze/silver/gold layers), Delta Lake, or Snowflake; a PySpark sketch of this layering follows the list.

  • Experience with web scraping tools (BeautifulSoup, Selenium) or API integrations.

  • Knowledge of BI tools such as Power BI, Tableau, QuickSight, or Looker.

  • Understanding of data modelling (star schema, fact/dimension tables).

  • Familiarity with CI/CD pipelines, Git, Docker, or serverless functions.

  • Experience handling large datasets and optimising performance.
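
As a rough sketch of the bronze/silver/gold layering mentioned above, the PySpark snippet below moves data through the three layers. Paths and column names are hypothetical, and plain Parquet stands in for Delta Lake (which would use format("delta") instead).

    # Bronze -> silver -> gold sketch in PySpark; all paths and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

    # Bronze: raw data ingested and stored as-is.
    bronze = spark.read.json("s3://lake/raw/orders/")
    bronze.write.mode("overwrite").parquet("s3://lake/bronze/orders/")

    # Silver: deduplicated and cleaned records.
    silver = bronze.dropDuplicates(["order_id"]).filter(F.col("amount").isNotNull())
    silver.write.mode("overwrite").parquet("s3://lake/silver/orders/")

    # Gold: aggregated, analytics-ready table for BI tools.
    gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))
    gold.write.mode("overwrite").parquet("s3://lake/gold/customer_spend/")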

More Info

Job ID: 133807631