Data Pipeline Development & Operations
- Design, build, and operate scalable and reliable data pipelines on the Databricks platform
- Develop end-to-end data workflows from ingestion through transformation to consumption (a minimal sketch follows this list)
- Implement robust error handling, monitoring, and alerting mechanisms
- Ensure data pipeline reliability, performance, and maintainability
- Optimize pipeline performance through efficient Spark job design and cluster configuration
- Manage and orchestrate complex data workflows using Databricks Jobs and Workflows
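For illustration, a minimal sketch of an end-to-end pipeline of this kind: ingest raw files, apply transformations, write to a Delta table, and alert on failure. The paths, table name, and notify() helper are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal ingestion-to-consumption sketch in PySpark. Paths, table names,
# and the alerting hook are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

def notify(message: str) -> None:
    # Placeholder for a real alerting integration (email, webhook, etc.)
    print(f"ALERT: {message}")

try:
    # Ingest: read raw JSON from a hypothetical landing zone
    raw = spark.read.json("/mnt/landing/orders/")

    # Transform: deduplicate, drop bad records, stamp the load date
    cleaned = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("order_ts").isNotNull())
           .withColumn("ingest_date", F.current_date())
    )

    # Consume: append to a Delta table for downstream use
    cleaned.write.format("delta").mode("append").saveAsTable("silver.orders")
except Exception as exc:
    # Alert, then re-raise so the job run is marked failed
    notify(f"orders_pipeline failed: {exc}")
    raise
```

Re-raising after the alert keeps the run in a failed state, so Databricks job retries and downstream task dependencies behave as expected.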
Legacy Code Modernization
- Refactor legacy code and data pipelines to PySpark for improved performance and scalability (a before/after sketch follows this list)
- Migrate traditional ETL processes to modern ELT patterns on Databricks
- Assess existing codebases and identify opportunities for optimization and modernization
- Ensure backward compatibility and data integrity during migration processes
- Document refactoring approaches and create migration playbooks
- Collaborate with stakeholders to minimize disruption during code transitions
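To make the refactoring work concrete, a hedged before/after sketch that moves a single-machine pandas aggregation to the PySpark DataFrame API; the dataset and column names are illustrative, not drawn from any real codebase.

```python
# Before/after sketch: legacy pandas aggregation refactored to PySpark.

# Legacy (pandas): loads the entire file into driver memory.
# import pandas as pd
# df = pd.read_csv("sales.csv")
# result = df.groupby("region")["amount"].sum().reset_index()

# Refactored (PySpark): the same logic, distributed across the cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_refactor").getOrCreate()

result = (
    spark.read.option("header", True).csv("sales.csv")
         .withColumn("amount", F.col("amount").cast("double"))
         .groupBy("region")
         .agg(F.sum("amount").alias("total_amount"))
)
result.show()
```

Keeping the output identical between the two versions is what allows row counts and aggregates to be compared directly when verifying data integrity during migration.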
Data Engineering Excellence
- Implement data quality checks and validation frameworks (an example follows this list)
- Design and maintain Delta Lake tables with appropriate optimization strategies
- Develop reusable code libraries and frameworks for common data engineering tasks
- Follow software engineering best practices, including version control, testing, and CI/CD
- Participate in code reviews and provide constructive feedback to team members
- Troubleshoot and resolve data pipeline issues in production environments
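A minimal example of the kind of data quality gate these bullets describe, assuming a hypothetical silver.orders Delta table; the checks are illustrative and not a substitute for a full validation framework.

```python
# Lightweight data quality gate in PySpark. The table name and the
# specific checks are assumptions for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.table("silver.orders")  # hypothetical table

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupe_keys = total - df.dropDuplicates(["order_id"]).count()

# Fail fast so bad data never propagates to downstream consumers
if total == 0:
    raise ValueError("silver.orders is empty")
if null_keys > 0 or dupe_keys > 0:
    raise ValueError(f"DQ failure: {null_keys} null keys, {dupe_keys} duplicate keys")
```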
Collaboration & Knowledge Sharing
- Work closely with data architects, analysts, and business stakeholders
- Collaborate with Infrastructure (Infra), Applications (Apps), and Cyber teams
- Share knowledge and best practices with Team NCS
- Mentor junior data engineers on PySpark and Databricks technologies
- Document technical solutions and maintain comprehensive documentation
Qualifications
- Minimum of 7 years' experience in data engineering or related roles
- 2-3 years of hands-on experience with the Databricks platform
- Proven track record of refactoring legacy code to modern frameworks
- Experience building and maintaining production data pipelines at scale
- Background working across multiple data sources and formats
- Experience in Agile development environments
Technical Skills
- Data Engineering: Strong foundation in data engineering principles, ETL/ELT processes, and data pipeline design patterns
- PySpark: Proven hands-on experience developing data pipelines using PySpark, including DataFrames API, Spark SQL, and performance optimization
- Databricks Platform: Practical experience with Databricks workspace, cluster management, notebooks, and job orchestration
- Workspace AI Agent: Knowledge of Databricks Workspace AI Agent capabilities and integration
- Data Modeling: Experience implementing data models, including dimensional modeling, data vault, or lakehouse architectures
- Delta Lake: Understanding of Delta Lake features, including ACID transactions, schema evolution, and optimization techniques (a short sketch follows this list)
- Python: Strong Python programming skills for data processing and automation
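As a short sketch of the Delta Lake features listed above (ACID appends, schema evolution, and optimization), assuming a placeholder silver.orders table on a Databricks cluster:

```python
# Delta Lake feature sketch: ACID append, schema evolution, compaction.
# The table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_demo").getOrCreate()

updates = spark.createDataFrame(
    [(1, "2024-01-01", "web")], ["order_id", "order_date", "channel"]
)

# ACID append; mergeSchema evolves the table to absorb the new 'channel' column
(updates.write.format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .saveAsTable("silver.orders"))

# Compact small files and co-locate rows on the lookup key
spark.sql("OPTIMIZE silver.orders ZORDER BY (order_id)")
```

OPTIMIZE with ZORDER BY clusters frequently filtered columns into fewer files, reducing the data scanned by selective queries.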
Additional Certifications (Preferred)
- Databricks Certified Data Engineer Associate or Databricks Certified Data Engineer Professional
- Databricks Certified Associate Developer for Apache Spark
- Cloud platform certifications (Azure Data Engineer Associate, AWS Certified Data Analytics – Specialty, or Google Cloud Professional Data Engineer)
- Relevant data engineering or big data certifications