
HCLTech

Data Engineer

4-6 Years
  • Posted 13 hours ago

Job Description

Key Responsibilities

  • Design, build, and maintain scalable data pipelines using AWS Glue.
  • Implement ETL/ELT processes for ingesting data from multiple internal and external sources.
  • Optimize data workflows for performance, reliability, and scalability.
  • Monitor, troubleshoot, and resolve data pipeline failures and performance issues.
  • Manage and optimize AWS Redshift data warehouse operations.
  • Configure and maintain data storage solutions, including AWS S3 and data lake environments.
  • Implement data partitioning, indexing, and compression strategies to improve performance.
  • Support Infrastructure as Code (IaC) practices for deploying and managing data infrastructure.
  • Develop and maintain CI/CD pipelines for data workflows using GitLab.
  • Implement automated testing for data pipelines and data quality validation.
  • Support version control, release management, and deployment of data-related assets.
  • Configure and manage AWS Lambda functions for automated data processing tasks.
  • Set up and maintain monitoring and alerting systems for data pipeline health and performance.
  • Provide technical support for data-related incidents and troubleshooting.
  • Collaborate with technical teams to refine and enhance data architecture requirements.
  • Optimize query performance and database operations within data warehouse environments.
  • Document data pipeline architecture, workflows, and technical specifications.
  • Maintain runbooks and standard operating procedures.
  • Conduct monthly progress meetings (1 hour) to report on system health and status.
  • Track engineering tasks through SHIPHATS Jira.
  • Maintain technical documentation on SHIPHATS Confluence.
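The automated data-quality validation mentioned above could, in its simplest form, look like the following sketch. All names here (`validate_batch`, the rule set, the sample records) are hypothetical illustrations, not part of the actual role's codebase:

```python
# Minimal sketch of a data-quality gate, as might run inside a pipeline
# before records are loaded into a warehouse. Checks completeness of
# required fields and uniqueness of a key column.

def validate_batch(rows, required_fields, key_field):
    """Return (valid_rows, errors) after basic completeness and uniqueness checks."""
    errors = []
    seen_keys = set()
    valid_rows = []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            errors.append(f"row {i}: missing {missing}")
            continue
        key = row[key_field]
        if key in seen_keys:
            errors.append(f"row {i}: duplicate key {key!r}")
            continue
        seen_keys.add(key)
        valid_rows.append(row)
    return valid_rows, errors

batch = [
    {"id": 1, "name": "alpha", "ts": "2024-01-01"},
    {"id": 2, "name": "", "ts": "2024-01-01"},       # fails completeness
    {"id": 1, "name": "gamma", "ts": "2024-01-02"},  # fails uniqueness
]
valid, errs = validate_batch(batch, required_fields=["id", "name", "ts"], key_field="id")
print(len(valid), len(errs))  # → 1 2
```

In a production pipeline, checks like these would typically run as a step in the Glue job or CI/CD workflow, with failures routed to the alerting system described above rather than printed.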

Requirements

  • Minimum of 4 years of relevant working experience preferred.
  • Strong background in data engineering and data pipeline development.
  • Proficiency in SQL, Python, and shell scripting.
  • Extensive experience with AWS data services (Redshift, S3, Glue, Lambda, CloudWatch).
  • Data warehouse design and optimization experience.
  • Strong CI/CD pipeline knowledge (GitLab preferred).
  • Infrastructure as Code (IaC) experience (Terraform, CloudFormation).
  • Knowledge of data modeling and database design principles.
  • Strong troubleshooting and performance optimization skills.
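The SQL indexing and query-optimization skills listed above can be illustrated with a small self-contained example using Python's standard-library `sqlite3` (the `events` table and its columns are hypothetical; a real warehouse like Redshift uses sort/dist keys rather than B-tree indexes, but the planner-driven reasoning is the same):

```python
# Illustrative sketch: adding an index changes the query plan from a full
# table scan to an index search. Hypothetical schema, in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, source TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events (source, payload) VALUES (?, ?)",
    [("api", "a"), ("batch", "b"), ("api", "c")],
)

# Without an index, filtering by `source` scans every row; with one, the
# planner can do a direct index lookup instead.
conn.execute("CREATE INDEX idx_events_source ON events(source)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE source = ?", ("api",)
).fetchall()
print(plan)  # plan should mention idx_events_source

rows = conn.execute(
    "SELECT COUNT(*) FROM events WHERE source = ?", ("api",)
).fetchone()
print(rows[0])  # → 2
```

Inspecting `EXPLAIN QUERY PLAN` output before and after an index change is the same habit that the Redshift query-tuning work in this role relies on.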



Job ID: 144472067
