Job Overview:
We are seeking a skilled Databricks Developer with 4-6 years of experience in data engineering and big data development. The role focuses on developing, optimizing, and maintaining data pipelines on Databricks using Apache Spark, supporting lakehouse architectures, and delivering high-quality datasets for analytics and downstream consumption.
Key Responsibilities:
- Develop, optimize, and maintain data pipelines using Databricks and Apache Spark (PySpark)
- Build and manage Delta Lake tables for reliable and scalable data storage
- Implement data transformations using Spark SQL and PySpark
- Develop and schedule pipelines using Databricks Workflows (Jobs)
- Apply data quality, performance tuning, and optimization best practices
- Work closely with data engineers, analysts, and business teams
- Monitor, troubleshoot, and support Databricks production workloads
Requirements:
- 4-6 years of experience in data engineering or Databricks development
- Strong hands-on experience with Databricks and Apache Spark (PySpark)
- Good understanding of Delta Lake and lakehouse architecture
- Strong SQL and Python skills for data processing and transformation
- Experience with performance tuning and troubleshooting in Databricks
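Pipelines like the one described in the responsibilities above are typically scheduled with Databricks Workflows (Jobs). A minimal job definition in the Jobs API JSON format might look like the following; the notebook path, cluster sizing, and cron schedule are all illustrative values, not prescribed ones.

```json
{
  "name": "orders-clean-daily",
  "tasks": [
    {
      "task_key": "clean_orders",
      "notebook_task": {
        "notebook_path": "/Repos/data-eng/pipelines/clean_orders"
      },
      "job_cluster_key": "etl_cluster"
    }
  ],
  "job_clusters": [
    {
      "job_cluster_key": "etl_cluster",
      "new_cluster": {
        "spark_version": "14.3.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2
      }
    }
  ],
  "schedule": {
    "quartz_cron_expression": "0 0 6 * * ?",
    "timezone_id": "UTC"
  }
}
```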