

Data Engineer (Big Data / ETL) 12 Months Contract

8-10 Years
SGD 5,500 - 7,500 per month
  • Posted 18 days ago

Job Description

We are looking for an experienced Data Engineer to support an enterprise data platform and big data initiatives in a large banking environment.

The role involves designing and implementing data integration pipelines, supporting data lake platforms, and ensuring data quality across analytics and reporting systems.

This is a 12-month contract with NTT DATA, deployed onsite at Changi Business Park.

Salary Budget: Up to SGD 7,500 per month

Key Responsibilities

  • Design and develop ETL/ELT data integration pipelines for data warehouse and data lake platforms.

  • Develop and maintain big data processing frameworks using Spark, Hive, and Hadoop ecosystem tools.

  • Build and optimize data ingestion pipelines for batch and streaming data.

  • Write complex SQL / SparkSQL / HiveQL queries for data transformation and validation.

  • Perform data reconciliation and data quality validation between source systems and data warehouse/data lake environments.

  • Collaborate with data architects, analysts, and application teams to implement data integration solutions.

  • Support data migration, data transformation, and data analytics projects.

  • Participate in SDLC activities including design, development, testing, deployment, and support.

  • Implement best practices in data management, scheduling, monitoring, and performance tuning.
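As a rough illustration of the reconciliation and data quality validation work described above, the sketch below compares a source extract against its warehouse copy on row count and simple per-column checksums. It uses plain Python for portability; on the actual platform this logic would typically run as SQL/SparkSQL against the source and target tables. All data, column names, and function names here are hypothetical.

```python
# Minimal data-reconciliation sketch: check that a warehouse copy matches
# its source extract on row count and per-column numeric checksums.
# All datasets and column names are hypothetical illustrations.

def column_checksum(rows, column):
    """Sum a numeric column; a cheap proxy for content equality."""
    return sum(row[column] for row in rows)

def reconcile(source_rows, target_rows, numeric_columns):
    """Return a dict of mismatches; an empty dict means the sets reconcile."""
    issues = {}
    if len(source_rows) != len(target_rows):
        issues["row_count"] = (len(source_rows), len(target_rows))
    for col in numeric_columns:
        src = column_checksum(source_rows, col)
        tgt = column_checksum(target_rows, col)
        if src != tgt:
            issues[col] = (src, tgt)
    return issues

source = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
target = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
print(reconcile(source, target, ["amount"]))  # {} -> fully reconciled
```

In practice, checksums would be computed inside each platform (e.g. via aggregate queries) rather than by pulling rows out, so only small summaries cross system boundaries.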


Requirements

  • 5-10 years of experience in data engineering, data warehouse, or big data projects.

  • Hands-on experience designing and developing ETL/ELT pipelines.

  • Strong experience working with Hadoop ecosystem technologies such as:

    • HDFS
    • Hive
    • Spark
    • Kafka
    • Sqoop
    • NiFi
    • Impala
    • HBase
    • Cassandra

  • Experience working with Cloudera or Hortonworks distributions.

  • Strong experience with RDBMS databases such as Oracle, SQL Server, PostgreSQL, MySQL, or DB2.

  • Hands-on programming experience in Python or Scala with the Spark framework.

  • Experience in data warehouse design and data modeling concepts.

  • Familiarity with NoSQL databases such as Cassandra, MongoDB, or HBase.

  • Experience working with Agile / Scrum methodologies.


Nice to Have

  • Experience with stream processing frameworks such as Kafka Streams or Spark Structured Streaming.

  • Knowledge of DevOps / CI-CD tools such as Jenkins, Docker, GitHub, Puppet, or Chef.

  • Exposure to data visualization tools such as Tableau, Power BI, or QlikView.

  • Experience in banking or financial services environments.


Interested candidates are kindly requested to email their CV, with details of their experience, to

We look forward to your application!

More Info


Job ID: 143961209