Job role: AWS Data Engineer
Experience: 5+ Years
Location: Singapore
Job Overview
We are seeking an experienced AWS Data Engineer with strong expertise in building, optimizing, and managing large-scale data pipelines on the AWS cloud. The ideal candidate will have deep hands-on experience with Python, PySpark, SQL, and AWS data services, along with a solid understanding of modern data engineering and data warehousing practices.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and data processing frameworks using Python and PySpark (an illustrative sketch follows this list).
- Implement and optimize ETL/ELT workflows for structured and semi-structured data.
- Write, optimize, and tune complex SQL queries to support analytics and reporting needs.
- Design and manage cloud-based data solutions using AWS services such as S3, EMR, Glue, and Lambda.
- Ensure data quality, reliability, security, and performance across data platforms.
- Collaborate with cross-functional teams including data analysts, data scientists, and business stakeholders.
- Monitor and troubleshoot production data pipelines and improve system performance and cost efficiency.
- Follow best practices in coding standards, documentation, and version control.
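
For context, the sketch below shows the general shape of the PySpark pipeline work described above: reading raw data from S3, applying basic transformations, and writing curated, partitioned output. The bucket paths, dataset, and column names are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal sketch of an S3-to-S3 PySpark pipeline; all paths and
# column names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Read semi-structured JSON events from a raw S3 zone.
raw = spark.read.json("s3://example-raw-zone/events/")

# Basic cleaning: drop malformed rows, derive a partition column,
# and de-duplicate on the event identifier.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Write curated Parquet, partitioned by date for efficient downstream queries.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-zone/events/"
)
```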
Required Skills & Qualifications
- 5+ years of experience as an AWS Engineer / Data Engineer.
- Strong proficiency in Python for data processing, scripting, and application development.
- Extensive hands-on experience with PySpark for large-scale data transformation and optimization.
- Advanced SQL skills, including complex joins, window functions, performance tuning, and schema design (see the example after this list).
- Proven experience with AWS data services such as S3, EMR, Glue, and Lambda.
- Solid understanding of data warehousing concepts, ETL/ELT principles, and data pipeline best practices.
- Strong analytical, problem-solving, and communication skills.
- Ability to work independently and in a team-oriented environment.
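
As a flavor of the window-function SQL referenced above, the sketch below uses ROW_NUMBER() in Spark SQL to keep each customer's highest-value order. The orders dataset, its S3 path, and the column names are hypothetical.

```python
# A minimal window-function example run through Spark SQL; the "orders"
# dataset, its path, and all column names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-window-example").getOrCreate()

# Expose a curated dataset to SQL as a temporary view.
spark.read.parquet("s3://example-curated-zone/orders/") \
    .createOrReplaceTempView("orders")

# Rank each customer's orders by value and keep only the top one.
top_orders = spark.sql("""
    SELECT customer_id, order_id, order_total
    FROM (
        SELECT customer_id, order_id, order_total,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY order_total DESC
               ) AS rn
        FROM orders
    ) ranked
    WHERE rn = 1
""")
top_orders.show()
```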