TDCX

Data Python Engineer

Fresher
  • Posted 13 hours ago

Job Description

Commencement date: Immediate

We are seeking an experienced Data Python Engineer to build robust data pipelines, develop vectorization workflows, and support advanced analytics initiatives across public-sector projects. You will collaborate closely with data and platform teams to deliver scalable, high-quality engineering solutions that enable reliable data processing, automation, and AI-driven capabilities.

Job Responsibilities

Develop and maintain ETL/ELT pipelines, data workflows, and analytics engineering processes to support enterprise data needs.

Build and enhance vectorization workflows to support advanced search, retrieval, and AI/LLM applications.

Collaborate with data, AI/ML, cloud, and product teams to translate technical requirements into scalable data solutions.

Automate deployment, monitoring, and operational workflows following DevOps best practices.

Optimise data storage, processing performance, and reliability across distributed data systems.

Troubleshoot pipeline issues, conduct root-cause analysis, and implement long-term fixes.

Implement and maintain unit testing and integration testing frameworks for data workflows.

Contribute to documentation, architecture discussions, and continuous improvement of engineering standards.
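The responsibilities above pair classic ETL work with vectorization for search and AI/LLM use cases. As a minimal illustrative sketch only (all function names are hypothetical, and the toy embedding stands in for a real embedding model and vector database), the shape of such a pipeline might look like:

```python
# Minimal ETL + vectorization sketch (illustrative; all names are hypothetical).
import hashlib

def extract(rows):
    # Extract: pull raw records (an in-memory stand-in for a source system).
    return [r.strip() for r in rows if r.strip()]

def transform(records):
    # Transform: normalise casing and deduplicate via a content hash.
    seen, out = set(), []
    for r in records:
        key = hashlib.sha256(r.lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            out.append(r.lower())
    return out

def vectorize(records):
    # Vectorize: toy bag-of-characters vectors; a real workflow would call an
    # embedding model and upsert the results into a vector database.
    vocab = sorted({c for r in records for c in r})
    return [[r.count(c) for c in vocab] for r in records]

def load(vectors):
    # Load: return the row count as a stand-in for a warehouse/vector-store write.
    return len(vectors)

raw = ["Alpha ", "alpha", "Beta"]
count = load(vectorize(transform(extract(raw))))
```

Each stage is a small, independently testable function, which is what makes the unit- and integration-testing requirements below tractable.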

Job Requirements

Strong experience in software engineering or data engineering with solid proficiency in Python.

Hands-on experience building data pipelines, ETL/ELT processes, and automation scripts.

Experience in unit testing and integration testing for data or backend services.

Familiarity with DevOps practices such as CI/CD, Git, and automated deployments.

Experience working in Agile or iterative delivery environments.

Strong understanding of distributed data processing and workflow orchestration concepts.

Experience with AWS cloud services and Kubernetes (K8s) deployments.

Familiarity with modern data platforms such as Snowflake, Databricks, Apache Spark, Apache Hive, Delta Lake, Iceberg, and vector databases.

Experience using orchestration tools such as Airflow, Dagster, Prefect, or Temporal.

Exposure to monitoring/observability tools (e.g., Datadog, CloudWatch).

Strong analytical and problem-solving skills with the ability to navigate complex technical environments.
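Orchestration tools like Airflow, Dagster, and Prefect all express a workflow as a DAG of dependent tasks. The core concept can be sketched in plain Python (hypothetical task names; no cycle detection, unlike a real orchestrator):

```python
# Toy DAG runner illustrating workflow-orchestration concepts: tasks run only
# after their upstream dependencies, in depth-first topological order.
def run_dag(tasks, deps):
    # tasks: name -> callable; deps: name -> list of upstream task names.
    done, order = set(), []
    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)  # ensure upstream tasks complete first
        tasks[name]()
        done.add(name)
        order.append(name)
    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "load": lambda: log.append("load"),
    "extract": lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
}
deps = {"transform": ["extract"], "load": ["transform"]}
order = run_dag(tasks, deps)
```

Even though "load" is requested first, its dependency chain forces extract → transform → load, which is exactly the guarantee a production orchestrator provides at scale.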

By submitting any application or résumé to us, you will be deemed to have agreed and consented to our disclosing your personal information to prospective employers for their consideration.

We regret that only short-listed candidates will be notified.

More Info

Job ID: 135924939