Epergne Solutions

Cloud Data Engineer

6-8 Years

Job Description

Job Role: Cloud Data Engineer

Job Location: Singapore

Experience: 6-8 Years

Roles & Responsibilities:

  • Design and architect data storage solutions such as databases, data lakes, and data warehouses using AWS (S3, RDS, Redshift, DynamoDB) and Databricks Delta Lake.
  • Build, manage, and optimize data pipelines for ingestion, processing, and transformation using AWS Glue, AWS Lambda, Databricks, and Informatica IDMC (a minimal PySpark sketch follows this list).
  • Integrate data from various internal and external sources into AWS and Databricks environments while ensuring data quality and consistency.
  • Develop ETL processes using Databricks (Spark) and Informatica IDMC for cleansing, transforming, and enriching data.
  • Monitor and optimize data processing performance and queries to meet scalability and efficiency requirements.
  • Implement security best practices, encryption standards, and compliance controls across AWS and Databricks environments.
  • Automate routine data workflows using AWS Step Functions, Lambda, Databricks Jobs, and Informatica IDMC.
  • Maintain clear documentation for data infrastructure, pipelines, and configurations.
  • Work closely with cross-functional teams (data scientists, analysts, engineers) to support data needs.
  • Troubleshoot and resolve data-related issues to ensure high data availability and integrity.
  • Optimize resource usage across AWS, Databricks, and Informatica IDMC to manage costs effectively.
  • Stay updated with the latest industry practices and emerging technologies in cloud data engineering.
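
As flagged in the pipeline bullet above, here is a minimal sketch of an ETL job of the shape this role describes: ingest raw CSV from S3, cleanse and enrich it, and write the result to a Delta Lake table. It assumes a Databricks/PySpark environment with Delta Lake available; all bucket paths, column names, and the job name are invented for illustration, not taken from the posting.

```python
# Minimal ETL sketch: S3 landing zone -> cleanse/enrich -> Delta Lake.
# Paths and column names below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")  # hypothetical job name
    .getOrCreate()
)

# Ingest: raw CSV files from a hypothetical S3 landing zone
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-landing-zone/orders/")
)

# Transform: drop rows missing keys, normalise types, add a load timestamp
cleaned = (
    raw.dropna(subset=["order_id", "customer_id"])
    .withColumn("order_amount", F.col("order_amount").cast("double"))
    .withColumn("load_ts", F.current_timestamp())
)

# Load: append to a Delta table, partitioned by a date column assumed
# to exist in the source data
(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save("s3://example-curated-zone/orders_delta/")
)
```

In practice a job like this would be scheduled as a Databricks Job or triggered via AWS Step Functions, per the automation bullet above.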

Requirements / Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • Minimum 5 years of experience in data engineering with strong expertise in AWS, Databricks, and/or Informatica IDMC.
  • Proficiency in Python, Java, or Scala for developing data pipelines.
  • Strong SQL and NoSQL database knowledge.
  • Experience in evaluating and optimizing performance for complex data transformations.
  • Good understanding of data modeling and schema design (a star-schema sketch follows this list).
  • Strong analytical, problem-solving, communication, and collaboration skills.
  • Relevant certifications (AWS, Databricks, Informatica) are an advantage.
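
As a quick illustration of the schema-design point above, the sketch below defines a star schema (one fact table keyed into a dimension) as Spark SQL DDL. All table and column names are hypothetical, and `USING DELTA` assumes a Databricks runtime or a session configured with delta-spark.

```python
# Star-schema sketch: a customer dimension plus an orders fact table.
# Names are illustrative; USING DELTA assumes Delta Lake is configured.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-design-sketch").getOrCreate()

# Dimension: one row per customer (slowly changing attributes omitted)
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING,
        country      STRING
    ) USING DELTA
""")

# Fact: one row per order, with a foreign key into the dimension and
# partitioning on the date column for query performance
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_id     STRING,
        customer_key BIGINT,
        order_date   DATE,
        order_amount DOUBLE
    ) USING DELTA
    PARTITIONED BY (order_date)
""")
```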

Preferred Skills:

  • Hands-on experience with big data technologies such as Apache Spark and Hadoop.
  • Knowledge of containerization/orchestration (Docker, Kubernetes).
  • Familiarity with visualization tools (Tableau, Power BI).
  • Understanding of DevOps concepts for deploying and managing data pipelines.
  • Experience with version control (Git) and CI/CD pipelines (a pipeline-test sketch follows this list).
  • Knowledge of data governance and cataloging tools, especially Informatica IDMC.
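
For the CI/CD point above, one common pattern is to keep transformations as pure functions and test them with pytest on a local Spark session, so the suite can run in any CI runner. The function and column names here are invented for illustration.

```python
# Pipeline-test sketch: a pure transformation plus a pytest check that
# can run locally or in CI. Names are illustrative assumptions.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def cleanse_orders(df: DataFrame) -> DataFrame:
    """Drop rows missing an order_id and cast amounts to double."""
    return (
        df.dropna(subset=["order_id"])
        .withColumn("order_amount", F.col("order_amount").cast("double"))
    )

def test_cleanse_orders():
    # local[1] keeps the test self-contained for a CI runner
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [("o-1", "10.5"), (None, "3.0")],
        ["order_id", "order_amount"],
    )
    out = cleanse_orders(df)
    assert out.count() == 1  # the row with a null order_id is dropped
    assert dict(out.dtypes)["order_amount"] == "double"
```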

Job ID: 135690095
