Responsibilities
- Design, build, and maintain large-scale pipelines for processing high-volume data
- Develop and optimize data workflows using distributed data processing frameworks
- Work with structured and unstructured data from multiple sources
- Write efficient and optimized SQL queries for data transformation and analysis
- Ensure data quality, reliability, and performance across pipelines
- Troubleshoot and resolve data-related issues and performance bottlenecks
- Collaborate with cross-functional teams including analytics and business stakeholders
Skills & Experience
- 5-10 years of experience in Data Engineering or Big Data roles
- Strong experience with Apache Spark (PySpark preferred)
- Strong SQL skills (data transformation, query optimization, large datasets)
- Experience with Python for data processing
- Experience building ETL/data pipelines
- Exposure to cloud platforms such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform
- Experience working with large-scale or distributed data environments
iKas International (Asia) Pte Ltd
Sanderson-iKas is the brand name for iKas International (Asia) Pte Ltd
EA Licence No.: 16S8086 | EA Registration No.: R1988468
We regret that only shortlisted candidates will be notified/contacted.