Job Responsibilities:
- Design and develop data ingestion solutions for big data.
- Build efficient and reliable data processing solutions.
- Design and implement data storage solutions.
- Develop scalable data pipelines for the ingestion, transformation, and storage of large datasets.
- Optimize data pipelines for both real-time and batch processing.
- Ensure data quality and integrity throughout the pipeline by implementing effective data validation and monitoring strategies.
Job Requirements:
- 5-8 years of experience designing and implementing ETL solutions.
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Familiarity with data ingestion and processing tools on AWS, such as Fluent Bit, Kinesis, and Glue.
- Strong expertise in big data technologies such as Apache Spark.
- Experience with AWS data storage solutions, including S3, Redshift, Aurora, and Apache Iceberg.
- Proficiency in programming languages including Python, Scala, and Java.
- Certification and/or hands-on experience with AWS data services is preferred.
Professional Skills:
- Attention to detail and a strong commitment to delivering high-quality solutions.
- Strong problem-solving skills and the ability to work effectively in a fast-paced environment.
- Ability to work well in a team.
- Excellent communication and interpersonal skills.
Law Ka Yan
EA Licence No. 91C2918 | Personnel Registration No. R1981563