Design and develop robust data solutions on SQL Server, ensuring optimal performance through advanced query optimization, indexing strategies, and partition management
Design and optimize large-scale ETL pipelines using Apache Spark to process high-volume data from diverse sources across the bank
Ensure data quality and reliability across the platform through comprehensive validation frameworks and proactive monitoring
Collaborate with analytics teams using SAP BusinessObjects and other systems to deliver regulatory reports and business intelligence
Drive continuous improvement by evaluating and adopting emerging technologies and best practices in big data engineering
Troubleshoot and resolve complex data pipeline issues, performance bottlenecks, and system failures
Requirements
Around 8 years of data engineering experience, preferably in financial services or high-volume data environments
Advanced SQL Server expertise including:
Query optimization, execution plan analysis, and performance tuning
Index design, maintenance, and partitioning strategies for large tables
Deadlock analysis, resolution, and troubleshooting
Database design, normalization, and experience managing large-scale databases (multi-TB environments)
Solid understanding of data engineering principles: data modeling, ETL best practices, and data quality frameworks
Good to have: proficiency with Apache Spark (DataFrames, RDDs, Spark SQL) for building and optimizing production-grade ETL pipelines that handle large-scale distributed data processing
GMP Recruitment Services (S) Pte Ltd | EA Licence: 09C3051 | VO UYEN AI LINH | Registration No: R22109232