
Modernize: Refactor legacy codebases into high-performance PySpark pipelines.
Build: Design end-to-end data workflows using Databricks Jobs, Workflows, and Delta Lake.
Optimize: Fine-tune Spark job performance and cluster configurations for cost and speed.
Lead: Implement CI/CD, data quality frameworks, and mentor junior engineers.
Expertise: 8+ years in Data Engineering with 2-3 years of hands-on Databricks experience.
Technical Core: Mastery of PySpark (DataFrames/SQL), Python, and Delta Lake.
Architecture: Strong understanding of Dimensional Modeling, Data Vault, or Lakehouse patterns.
Modern Ops: Experience with Git, CI/CD, and robust monitoring/alerting.
Certification: Must hold the Databricks Certified Data Engineer Associate or Professional certification.
Job ID: 145451373