Data Engineer - Digital & Cloud

5-8 Years
SGD 6,000 - 9,000 per month

Job Description

We are seeking a Data Engineer to join our team in Singapore and play a key role in advancing our cloud-based data capabilities. Working onsite as part of the Cloud Center of Excellence (CCoE) team, you will collaborate closely with the customer and cross-functional stakeholders to design and implement scalable, secure, and high-performance data solutions.

In this role, you will contribute to building a modern data platform aligned with data mesh principles, empowering business teams to own and manage their data as a product. You'll be responsible for developing reliable data pipelines that support both batch and streaming workloads, ensuring consistent and efficient data delivery across domains.

Your work will directly support the organization's data transformation goals by enabling reusable, self-service data products and seamless integration across cloud and hybrid environments. This role is ideal for someone who values clean architecture, automation, and collaboration, and who is passionate about building data systems that deliver measurable business value.

The main responsibilities are:

  • Design, build, and maintain scalable data pipelines for batch and real-time data processing using ETL frameworks and scripting languages (e.g., Python, SQL); a minimal pipeline sketch follows this list.
  • Participate in end-to-end data project delivery, following traditional SDLC, agile, or hybrid methodologies.
  • Work closely with business and technology stakeholders to gather and analyze data requirements, particularly around banking products, transactions, customer analytics, and regulatory reporting.
  • Translate complex business use cases into efficient normalized and denormalized data models, optimized for both operational and analytical workloads.
  • Design, implement, and manage data warehouses, data marts, and data integration layers that align with enterprise data architecture.
  • Collaborate with development teams to deploy physical data models and optimize performance for large-scale financial datasets.
  • Ensure all data solutions adhere to data governance, quality, privacy, and metadata management standards.
  • Develop and maintain data documentation such as data dictionaries, lineage diagrams, and technical specifications to support business, audit, and compliance needs.
  • Support data lineage, metadata, and data quality initiatives to improve data transparency and trust.
  • Provide data-driven support to business users, while proactively identifying and communicating IT-related challenges or issues.
  • Communicate insights and technical concepts effectively to both technical and non-technical stakeholders through presentations, reports, and documentation.
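
As a concrete illustration of the pipeline work described above, here is a minimal batch ETL sketch in PySpark (Python and Spark both appear in the technical skills below). It is a sketch only: the bucket, table, and column names are hypothetical placeholders, and a production pipeline would add logging, data-quality checks, and orchestration.

```python
# Minimal batch ETL sketch in PySpark. All bucket, table, and column
# names are hypothetical placeholders, not taken from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-transactions-etl").getOrCreate()

# Extract: read raw transaction records from a (hypothetical) landing zone.
raw = spark.read.parquet("s3://example-raw-zone/transactions/dt=2024-01-01/")

# Transform: deduplicate, drop incomplete rows, derive a date column,
# then aggregate to one row per customer per day.
clean = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("txn_date", F.to_date("txn_timestamp"))
)
daily = clean.groupBy("customer_id", "txn_date").agg(
    F.sum("amount").alias("total_amount"),
    F.count("*").alias("txn_count"),
)

# Load: write an analytics-ready, partitioned table to the curated zone.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-curated-zone/daily_customer_totals/"
)
```

Broadly the same transformation logic can be reused with spark.readStream against a streaming source to cover the real-time workloads the role mentions.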

Experience

  • Experience working with large datasets in platforms such as Greenplum, Hadoop, Oracle, DB2, or MongoDB.
  • Familiarity with dashboarding tools like Tableau, Power BI, or SAS VA.
  • Experience in shell/batch scripting, application packaging, and deployment from DEV through PROD environments.
  • Skilled in change management, service request handling, and maintenance reporting.
  • Strong capabilities in data modelling (logical and physical) for banking, risk, compliance, and analytics use cases.
  • Deep understanding of relational and dimensional modelling, data warehousing concepts, and data integration techniques (a star-schema sketch follows this list).
  • Solid SQL skills, with experience in managing large, complex financial datasets.
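
To illustrate what the dimensional modelling expectation typically looks like in practice, the sketch below defines a hypothetical star schema (one fact table keyed to a conformed customer dimension) as Spark SQL DDL. Every table and column name in it is invented for illustration.

```python
# Hypothetical star schema for transaction analytics: a fact table keyed
# to a conformed customer dimension. All names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,    -- surrogate key
        customer_id  STRING,    -- natural (business) key
        segment      STRING,
        valid_from   DATE,      -- slowly-changing-dimension validity window
        valid_to     DATE
    ) USING parquet
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_transaction (
        transaction_id STRING,
        customer_key   BIGINT,         -- foreign key to dim_customer
        txn_date       DATE,
        amount         DECIMAL(18, 2)
    ) USING parquet
    PARTITIONED BY (txn_date)
""")
```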

Technical skills

  • Proficient in Python and experienced in building scalable ETL/ELT pipelines using Spark.
  • Strong SQL skills with hands-on experience working with large-scale datasets and data warehousing solutions.
  • Familiar with big data technologies such as Hadoop, Hive, and Spark, and with data modelling (including dimensional and schema design).
  • Experience working with cloud platforms, especially AWS (e.g., Glue, Redshift, RDS, S3), and basic knowledge of IAM, VPC, and security configurations.
  • Hands-on with Linux environments, shell scripting, and the AWS CLI for automation and resource management (a short automation sketch follows this list).
  • Comfortable working with a variety of database types: SQL, NoSQL, and data lakes.
  • Exposure to tools like Terraform or Talend is a plus.
  • Familiar with data visualization tools such as QuickSight, Qlik, or Tableau.
  • Experience writing clean, production-grade code and maintaining clear, structured data pipeline documentation.
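
For the AWS automation skills above, a minimal sketch using boto3 (the AWS SDK for Python, standing in here for the AWS CLI) might look like the following. The bucket, key, and Glue job names are hypothetical, and credentials are assumed to be configured in the environment.

```python
# Minimal AWS automation sketch with boto3. Bucket, key, and job names
# are hypothetical; credentials are assumed to be configured already
# (environment variables, instance profile, or ~/.aws/credentials).
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Stage a locally produced extract into the (hypothetical) raw zone.
s3.upload_file("daily_extract.csv", "example-raw-zone",
               "transactions/daily_extract.csv")

# Kick off a (hypothetical) Glue ETL job and keep the run id for monitoring.
run = glue.start_job_run(JobName="daily-transactions-etl")
print("Started Glue job run:", run["JobRunId"])
```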

Education and certifications
  • Bachelor's degree in Software Engineering or a related field, or equivalent work experience
  • Professional cloud certifications (AWS/Azure/GCP)

Relevant certifications are preferred, such as:

  • AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate
  • Microsoft Certified: Azure Data Engineer Associate
  • Google Cloud Professional Data Engineer
  • Databricks Data Engineer Associate/Professional
  • Cloudera Certified Data Engineer (if applicable)

More Info

Job ID: 135387411