Responsibilities
- Design and build scalable data pipelines to process large volumes of structured and unstructured data
- Develop and optimize ETL workflows using Apache Spark (PySpark preferred)
- Work with large-scale distributed data systems to ensure performance and reliability
- Write efficient SQL queries for data transformation and analysis
- Ensure data quality, consistency, and integrity across data pipelines
- Troubleshoot and resolve data pipeline issues and performance bottlenecks
- Collaborate with cross-functional teams to support analytics and reporting requirements
- Contribute to best practices in data engineering and continuous improvement
Required Skills & Experience
- 5-10 years of experience in Data Engineering or Big Data roles
- Strong hands-on experience with Apache Spark (DataFrames, Spark SQL, PySpark)
- Strong SQL skills working with large datasets
- Experience with Python for data processing
- Experience building and maintaining ETL/data pipelines
- Familiarity with distributed data processing and large-scale data environments
iKas International (Asia) Pte Ltd
Sanderson-iKas is the brand name for iKas International (Asia) Pte Ltd
EA Licence No.: 16S8086 | EA Registration No.: R1988468
We regret that only shortlisted candidates will be notified/contacted.