Roles & Responsibilities
- Work closely with the Product Owner to develop, implement, and maintain data engineering solutions for BI and Data Warehouse projects.
- Design, develop, and deploy data solutions (data tables, views, marts) across data pipelines, data warehouses, data lakes, and operational data stores.
- Perform data extraction, cleaning, transformation, and processing (including web scraping where required).
- Build and maintain scalable batch and real-time data pipelines using modern data processing frameworks.
- Integrate and manage data from multiple sources in a scalable and compliant manner.
- Ensure data quality across the full lifecycle (development, deployment, validation, change management, documentation).
- Implement data governance, security, and regulatory compliance standards.
- Collaborate with cross-functional teams (Project Manager, Data Architect, Business Analysts, Developers, Designers).
- Develop backend APIs and manage databases to support applications.
- Work in an Agile environment with CI/CD practices.
- Participate in code reviews and pair programming.
- Gather and translate business requirements into technical data solutions.
- Support the development of dashboards, reports, and data visualisations.
- Provide maintenance, troubleshooting, and operational support.
- Manage incidents and service requests within SLA timelines.
- Support audits and provide training where required.
Requirements (Knowledge & Skills)
- Proficient in data cleaning and transformation (SQL, Python, pandas, R, etc.).
- Strong experience building ETL/ELT pipelines (e.g. SSIS, AWS DMS, AWS Glue, AWS Lambda, Python).
- Experience with relational and NoSQL databases (SQL Server, PostgreSQL, MySQL, MongoDB, Cassandra, etc.) and cloud storage/query services (AWS S3, Athena).
- Familiar with cloud platforms (AWS, Azure, Google Cloud).
- Experience in big data environments and production-grade data pipelines.
- Understanding of system design, data structures, and algorithms.
- Knowledge of data modelling and storage architectures (Data Warehouse, Data Lake, Data Mart).
- Familiar with REST APIs and web protocols.
- Exposure to big data tools (Hadoop, Spark, Kafka, RabbitMQ).
- Experience with web scraping tools (BeautifulSoup, Selenium, etc.).
- Understanding of data governance, access control, and security best practices.
- Proficient in at least one of SQL or Python.
- Comfortable working in Windows and Linux environments.
- Strong communication skills and ability to work with stakeholders.
- Interest in bridging engineering and analytics.
Additional Information
- Due to project requirements, applicants should be Singapore Citizens or Permanent Residents.