We are looking for a versatile Software Engineer to join our Data Innovation Team. In this role, you won't just be managing data; you will be building the brain of our organization. You will own the entire product lifecycle: architecting high-performance data pipelines, implementing AI-driven logic, and developing the full-stack applications that bring these insights to life. If you are a builder who thrives at the intersection of Big Data and systems engineering, we want you to help us turn complex data signals into our most competitive assets.
Key Responsibilities
- Solution Architecture: Translate high-level business requirements into technical specifications, utilizing Lakehouse architecture to ensure scalability and performance.
- End-to-End Software Development: Build and maintain full-stack applications, ensuring seamless integration between backend services and frontend components.
- Frontend Development: Design and develop responsive, intuitive web interfaces and data visualizations that translate complex analytics into actionable user experiences.
- Data & AI Pipelines: Build and optimize declarative pipelines using Delta Live Tables (DLT) and orchestrate complex workflows to serve Machine Learning models (see the sketch after this list).
- Operational Excellence: Manage the full application lifecycle, including governance via Unity Catalog, model tracking via MLflow, and CI/CD for both code and data.
- Cross-Functional Collaboration: Work closely with stakeholders to ensure successful project delivery, focusing on code modularity and maintainability within the Databricks ecosystem.
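To give a flavor of the pipeline work described above, here is a minimal sketch of a declarative DLT pipeline in Python. The storage path and table names are hypothetical placeholders, and the `spark` session is provided by the DLT runtime; treat this as an illustration of the style of work, not a spec for our pipelines.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events ingested incrementally from cloud storage.")
def raw_events():
    # Auto Loader picks up new JSON files as they land; the path is a placeholder.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/landing/events")
    )

@dlt.table(comment="Cleaned events with a basic data-quality expectation.")
@dlt.expect_or_drop("valid_user", "user_id IS NOT NULL")  # drop rows failing the constraint
def clean_events():
    return dlt.read_stream("raw_events").withColumn("ingested_at", F.current_timestamp())
```

The declarative style matters here: you define tables and expectations, and DLT handles orchestration, retries, and lineage, which is what keeps these pipelines maintainable at scale.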
Technical Requirements
The Essentials:
- Education: Degree in Computer Science, Software Engineering, or a related field.
- Software Engineering: Mastery of Python (PySpark), Java, or C++. You should understand data-oriented design as well as you understand OOP.
- Full-Stack Mindset: Comfort moving from backend data logic to frontend API consumption.
The Lakehouse Stack (Highly Desired):
- Databricks Ecosystem: Hands-on experience with Delta Lake, Spark SQL, and Databricks Workflows.
- AI Operations: Experience using MLflow or Mosaic AI to deploy and monitor models in production (see the sketch below).
- DevOps for Data: Familiarity with Databricks Asset Bundles (DABs), Git integration, and containerization (Docker).
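For candidates newer to the MLOps side of the stack, here is a minimal sketch of the kind of MLflow tracking and registration workflow this role involves. The model, metric, and the Unity Catalog model name are placeholders, and the registry URI line assumes a Databricks workspace with Unity Catalog enabled.

```python
import mlflow
from mlflow.models import infer_signature
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy training data and model; a real run would use governed Delta tables.
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
model = RandomForestClassifier(n_estimators=50, random_state=42).fit(X, y)

mlflow.set_registry_uri("databricks-uc")  # target Unity Catalog (workspace assumption)

with mlflow.start_run():
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        signature=infer_signature(X, model.predict(X)),
        registered_model_name="main.default.example_classifier",  # hypothetical UC name
    )
```

Logging a signature alongside the model is what lets downstream serving and monitoring validate inputs automatically, which is central to how we keep models production-safe.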