Responsibilities
- Design and build scalable data pipelines to process large volumes of structured and unstructured data
- Develop and optimize ETL workflows using modern data processing frameworks
- Work with large datasets to ensure data quality, integrity, and performance
- Write efficient and optimized SQL queries for data transformation and analysis
- Collaborate with cross-functional teams to support reporting and analytics requirements
- Troubleshoot data issues, performance bottlenecks, and pipeline failures
- Contribute to best practices in data engineering and continuous improvement initiatives
Required Skills & Experience
- 5+ years of experience in Data Engineering or related fields
- Strong SQL fundamentals (data transformation, query optimization, working with large datasets)
- Hands-on experience with Python and/or Spark (PySpark preferred)
- Experience building and maintaining ETL/data pipelines
- Exposure to cloud platforms such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform
- Understanding of data modelling and data engineering best practices
- Experience working with large-scale or high-volume data environments
Ikas International (Asia) Pte Ltd
Sanderson-iKas is the brand name of iKas International (Asia) Pte Ltd
EA Licence No: 16S8086 | EA Registration No. R1988468
We regret that only shortlisted candidates will be notified/contacted.