About the Project
Building and maintaining large-scale data systems in a big data environment.
Responsibilities
- Design, develop, and maintain Big Data solutions for both structured and unstructured data environments.
- Work with traditional structured databases such as Teradata and Oracle, and perform SQL and PL/SQL development.
- Manage and process large datasets using the Hadoop ecosystem, including Hive, Impala, HDFS, HBase, and Spark with Scala (see the illustrative sketch after this list).
- Develop and implement modern data transformation workflows using tools such as DBT.
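
For orientation, the sketch below shows the kind of Spark/Scala batch job this role involves: reading structured data from HDFS, aggregating it, and publishing the result as a Hive table. It is a minimal illustration only; the paths, database, table, and column names (`events.csv`, `analytics.daily_event_counts`, `event_date`, `event_type`) are hypothetical placeholders, not part of the project.

```scala
// Illustrative sketch: a minimal Spark batch job in Scala.
// All paths and table/column names below are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailyEventCounts")
      .enableHiveSupport()          // allows writing results as a Hive table
      .getOrCreate()

    // Read raw structured data from HDFS (placeholder path)
    val events = spark.read
      .option("header", "true")
      .csv("hdfs:///data/raw/events.csv")

    // Aggregate: count events per type per day
    val counts = events
      .groupBy(col("event_date"), col("event_type"))
      .agg(count("*").as("event_count"))

    // Persist the result as a Hive table for downstream querying (e.g., via Impala)
    counts.write
      .mode("overwrite")
      .saveAsTable("analytics.daily_event_counts")

    spark.stop()
  }
}
```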
Skills/Requirements
- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Experience in Data Engineering, Big Data solutions, and analytics functions.
- Strong background in traditional structured database environments such as Teradata and Oracle, including SQL and PL/SQL development.
- Fluent in the management of structured and unstructured data, as well as modern data transformation methodologies and tools like DBT.
- Proficient in Hadoop ecosystem components (e.g., Hive, Impala, HDFS, Spark, Scala, HBase).
- Hands-on experience with real-time data and streaming applications using Kafka or similar tools (see the sketch after this list).
- Hands-on experience building automated processes, ETL pipelines, and scheduled jobs using data integration tools such as Talend.
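
As a rough illustration of the streaming experience described above, the sketch below shows a minimal Spark Structured Streaming job in Scala that consumes a Kafka topic and lands the data on HDFS. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic name, and output/checkpoint paths are hypothetical placeholders.

```scala
// Illustrative sketch: a minimal Spark Structured Streaming consumer for Kafka.
// Broker, topic, and paths below are hypothetical placeholders.
import org.apache.spark.sql.SparkSession

object KafkaStreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("KafkaStreamIngest")
      .getOrCreate()

    // Subscribe to a Kafka topic (requires the spark-sql-kafka connector)
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")   // placeholder broker
      .option("subscribe", "events")                      // placeholder topic
      .load()

    // Kafka delivers binary key/value pairs; cast the payload to strings
    val messages = stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Write micro-batches to HDFS as Parquet, with checkpointing for fault tolerance
    val query = messages.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/streams/events")            // placeholder output path
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```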