Key Responsibilities:
- Design, develop, and maintain ETL jobs using the Talend ETL toolset.
- Build and optimize ETL processes for extracting and transforming data from diverse sources including Hive, PostgreSQL, and SQL Server.
- Design and develop database tables with appropriate constraints based on business requirements.
- Collaborate with team members to understand source system structures, data retrieval methods, and organizational tools.
- Support the development of data transformation logic using ETL tools and scripting languages such as SQL and Python.
- Perform data cleaning, validation, and transformation to align with the target schema and quality standards (a minimal sketch follows this list).
- Contribute to data quality improvement initiatives.
- Participate in troubleshooting activities to ensure data integrity and process efficiency.
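For illustration only, here is a minimal PySpark sketch of the kind of cleaning, validation, and transformation step described above. The table names (staging.customer_raw, curated.customer), column names, and rules are hypothetical placeholders for whatever the actual source systems and target schema require.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support is assumed to be available on the cluster.
spark = (
    SparkSession.builder
    .appName("cleaning_sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical raw table landed from a source system.
raw = spark.table("staging.customer_raw")

cleaned = (
    raw
    # Validation: drop rows missing the business key.
    .filter(F.col("customer_id").isNotNull())
    # Deduplicate on the business key.
    .dropDuplicates(["customer_id"])
    # Cleaning: normalize free-text fields.
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    # Transformation: cast to the target schema's types.
    .withColumn("signup_date", F.to_date(F.col("signup_date"), "yyyy-MM-dd"))
)

# Write to the hypothetical curated target table.
cleaned.write.mode("overwrite").saveAsTable("curated.customer")

In a Talend job the same logic would typically live in components rather than hand-written code; the sketch only illustrates the transformation pattern itself.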
Required Skills & Experience:
- Strong hands-on experience with Talend, Python, and Spark.
- Solid knowledge and working experience with Data Lake and Hadoop ecosystem tools (Hive, Impala, HDFS).
- Proven experience in designing, developing, and optimizing Talend Big Data jobs leveraging the Spark engine.
- Good understanding of the Spark Catalyst Optimizer and Spark executor parameters for query optimization (see the configuration sketch after this list).
- Strong foundation in data warehousing and data modeling techniques.
- Familiarity with industry-standard visualization and analytics tools.
- Strong interpersonal skills, a proactive attitude, and the ability to work effectively in a team environment.
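As a rough illustration of the executor-parameter and Catalyst points above, the sketch below sets a few common executor settings and prints the optimized query plan. The values shown are placeholders; appropriate sizing depends entirely on the cluster and workload.

from pyspark.sql import SparkSession

# Executor sizing shown here is illustrative, not a recommendation.
spark = (
    SparkSession.builder
    .appName("tuning_sketch")
    .config("spark.executor.instances", "4")
    .config("spark.executor.cores", "4")
    .config("spark.executor.memory", "8g")
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

df = spark.range(1_000_000)
# explain() surfaces the Catalyst Optimizer's logical and physical plans,
# which is the usual starting point for query optimization.
df.filter("id % 2 = 0").explain(mode="extended")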
Date Posted: 28/08/2025
Job ID: 125005359