Required Qualifications
Essential Technical Skills
- Databricks Platform: Proven hands-on experience with the Databricks platform, including workspace administration, cluster management, and workflow orchestration
- Unity Catalog: Demonstrated experience implementing Unity Catalog for unified governance, including metastore configuration, catalog/schema design, and access control policies
- Data Modeling: Strong expertise in designing and implementing data models, including dimensional modeling, data vault, and Lakehouse architectures
- Delta Lake: Deep understanding of Delta Lake features, including ACID transactions, time travel, and optimization techniques
- Apache Spark: Proficiency in Spark SQL, DataFrames, and performance tuning
- Programming: Strong skills in Python, SQL, and/or Scala for data processing
- Cloud Platforms: Experience with Azure, AWS, or GCP cloud services
Additional Technical Skills
- Data pipeline development and ETL/ELT processes
- Data governance frameworks and metadata management
- Performance optimization and troubleshooting
- CI/CD practices for data platforms
- Data quality and validation frameworks
- Understanding of data security and compliance requirements
Professional Experience
- Minimum 8-10 years in data architecture, engineering, or analytics roles
- At least 3-5 years of hands-on experience with Databricks platform
- Proven track record implementing Unity Catalog in enterprise environments
- Experience designing and implementing complex data models for large-scale systems
- Background working with Databricks Professional Services or similar partnerships preferred
- Experience across multiple industry sectors (Public Service, Financial Services, Healthcare, etc.) is advantageous
Certifications (Preferred)
- Databricks Certified Data Engineer Professional
- Databricks Certified Associate Developer for Apache Spark
- Cloud platform certifications (Azure Data Engineer, AWS Data Analytics, or GCP Data Engineer)
- Relevant data management or analytics certifications