Explore new external data sources to assess their availability and quality, and make them easily accessible to different stakeholders for different use cases
Develop solutions that enable investment professionals and data science teams to efficiently extract insights from data, including owning data ingestion and transformation
Work with multiple stakeholders (internal end users, external technical consultants) to drive the development of the firm's enterprise data catalogue system
Partner with the investment professionals, quantitative researchers, and data scientists to design, develop and deploy solutions that answer fundamental questions about companies, sectors, countries and financial markets
Build tools and automation capabilities for data pipelines that improve the efficiency, quality and resiliency of our data platform
Drive the evolution of our data strategy by challenging the status quo and identifying opportunities to enhance our platform
Requirements
7+ years of experience in data engineering with a passion for data and software development
Proficient in data warehousing, machine learning, and CI/CD tooling such as Jenkins
Skilled in managing multiple Docker deployments and deploying microservices for backend and ML model hosting
Experience with cloud platforms (AWS, Azure) and Kubernetes; Snowflake or similar data warehouse technology preferred
Programming proficiency in Python and SQL with knowledge of data caching and cloud data reuse strategies
Background in quantitative hedge funds or technology companies preferred; strong communication skills required
Bachelor's or Master's in Computer Science or equivalent experience
Bonus skills include experience with Generative AI and rapid prototyping frameworks like Streamlit