Data Engineer (Quantexa_Alteryx_SQL_ELK_DevOps_AWS)

5-8 Years
SGD 10,000 - 13,000 per month
Posted 5 hours ago

Job Description

Maltem Asia is seeking a Data Engineer for a Banking Client based in Singapore.

The Data Engineer will support the design and implementation of Quantexa-based solutions for Financial Crime (AML/Fraud). The role will bridge business requirements and data/technology teams, translating risk and compliance needs into scalable data-driven solutions.

Responsibilities:

  • Gather and translate business requirements (AML, Fraud, KYC) into functional and data specifications
  • Define entity resolution, matching and network linking logic aligned to business use cases
  • Perform data analysis and mapping across multiple source systems (customer, account, transaction)
  • Design logical data pipelines (ingestion, standardization, matching, network generation, scoring)
  • Collaborate with Data Engineers to ensure feasibility and alignment of data transformations
  • Support data quality assessment, cleansing rules, and standardization approaches
  • Validate outputs including entity resolution results, network generation, and risk scoring
  • Assist in UAT, defect triage, and business validation of Quantexa outputs
  • Prepare functional documentation (BRD, FRD, mapping documents, data dictionaries)
  • Work closely with Compliance, Risk, and Operations stakeholders
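The entity-resolution and matching responsibilities above can be illustrated with a minimal sketch. This is not Quantexa's actual matching engine; the record layout, the normalization rules, and the 0.85 threshold are all hypothetical, and it uses only Python's standard-library difflib for fuzzy name comparison:

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Standardize a raw party name before matching (lowercase, strip commas, collapse spaces)."""
    return " ".join(name.lower().replace(",", " ").split())

def match_score(a: str, b: str) -> float:
    """Similarity score in [0, 1] between two normalized names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def resolve_entities(records, threshold=0.85):
    """Greedy entity resolution: group records whose names score above the threshold."""
    clusters = []  # each cluster is a list of records believed to be the same entity
    for rec in records:
        for cluster in clusters:
            if match_score(rec["name"], cluster[0]["name"]) >= threshold:
                cluster.append(rec)
                break
        else:  # no existing cluster matched: start a new one
            clusters.append([rec])
    return clusters

# Hypothetical customer records from two source systems
customers = [
    {"id": "C1", "name": "Acme Trading Pte Ltd"},
    {"id": "C2", "name": "ACME Trading Pte. Ltd"},
    {"id": "C3", "name": "Borneo Logistics"},
]
clusters = resolve_entities(customers)
# C1 and C2 resolve to one entity; C3 stands alone
```

A production matcher would add blocking keys, multi-attribute comparison (address, date of birth, identifiers), and transitive cluster merging rather than this greedy first-match grouping.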

Required Skills & Experience:

  • 5 to 10 years of experience in Financial Services / Capital Markets / Banking Technology
  • Hands-on experience with Quantexa implementations or similar platforms (Actimize, SAS AML, Featurespace)
  • Alternatively, relevant experience with Alteryx, Linkurious, or DataWalk is an added advantage
  • Strong understanding of Entity Resolution and data matching techniques
  • Understanding of customer and transaction data models
  • Knowledge of network / graph-based analytics concepts
  • Solid SQL skills (joins, aggregations, data validation)
  • Good understanding of data engineering concepts (ETL pipelines, data modeling, data quality)
  • In-depth understanding of Apache Spark architecture, RDDs, DataFrames, and Spark SQL
  • Strong expertise in designing and developing data infrastructure using Hadoop, Spark, and related tools (HDFS, Hive, Pig, etc.)
  • Experience with containerization platforms such as OpenShift Container Platform (OCP) and container orchestration using Kubernetes
  • Proficiency in programming languages commonly used in data engineering, such as Python, Scala, or Java, including their Spark APIs
  • Knowledge of DevOps practices, CI/CD pipelines, and infrastructure automation tools (e.g., Docker, Jenkins, Ansible, Bitbucket)
  • Experience with Grafana, Prometheus, or Splunk is an added benefit
  • Experience integrating and working with Elasticsearch for data indexing and search applications
  • Solid understanding of Elasticsearch data modeling, indexing strategies, and query optimization
  • Experience with distributed computing, parallel processing, and working with large datasets
  • Proficient in performance tuning and optimization techniques for Spark applications and Elasticsearch queries
  • Strong problem-solving and analytical skills with the ability to debug and resolve complex issues
  • Familiarity with version control systems (e.g., Git) and collaborative development workflows
  • Excellent communication and teamwork skills with the ability to work effectively in cross-functional teams
  • Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services is a plus
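The "solid SQL skills (joins, aggregations, data validation)" bullet can be illustrated with a small, self-contained example. The schema and the 10,000 review threshold are hypothetical, and SQLite (via Python's standard library) stands in for the bank's actual warehouse:

```python
import sqlite3

# Hypothetical mini-schema: customers and their transactions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cust_id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE txn (txn_id TEXT, cust_id TEXT, amount REAL);
INSERT INTO customer VALUES ('C1', 'Acme Trading'), ('C2', 'Borneo Logistics');
INSERT INTO txn VALUES
  ('T1', 'C1', 9500.0),
  ('T2', 'C1', 9800.0),
  ('T3', 'C2', 120.0);
""")

# Join + aggregation: transaction count and total per customer,
# keeping only customers above a (hypothetical) review threshold.
rows = conn.execute("""
SELECT c.cust_id,
       c.name,
       COUNT(t.txn_id)            AS txn_count,
       COALESCE(SUM(t.amount), 0) AS total_amount
FROM customer c
LEFT JOIN txn t ON t.cust_id = c.cust_id
GROUP BY c.cust_id, c.name
HAVING total_amount > 10000
""").fetchall()
```

The LEFT JOIN keeps customers with no transactions in the aggregation, and COALESCE guards the sum against NULL for those rows, which is the kind of data-validation detail the role calls out.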

More Info

Job ID: 146575599
