You will architect, develop, and test large-scale, efficient solutions that provide senior management with accurate data for making business decisions.
Description:
- Design and implement scalable and high-quality methods of consuming data from a diverse set of sources with variable quality and predictability
- Create data products that enable self-service and predictable consumption
- Build libraries and frameworks that increase the team's leverage and productivity
- Optimize and maintain solutions, driving improvements in efficiency, data quality, and operational excellence
Requirements:
- BS or MS in Engineering/Computer Science
- 6+ years of software engineering experience, including strong SQL and data skills
- Expertise in languages such as Python, Java, or Scala and technologies such as Airflow, Spark, Trino, Kafka, Docker, and Iceberg
- Ability to analyze complex datasets and design high-quality, efficient solutions
- Familiarity with SDLC best practices, version control, CI/CD
- Experience with cloud services such as AWS, GCP, or Azure for data infrastructure and storage
- Knowledge of infrastructure as code (e.g., Terraform) and container orchestration tools (e.g., Kubernetes)