About the Role
We're looking for a driven AI/Data Scientist with at least 2 years of experience to join our team. You'll play a key role in developing and deploying machine learning models, working on real-world AI applications, and turning data into actionable insights. This role is ideal for someone who has moved beyond entry-level work and is ready to take ownership of projects while continuing to grow technically.
What You'll Do
- Design, build, and optimize machine learning and AI models
- Work with large, complex datasets to extract insights and improve model performance
- Develop AI-powered solutions such as recommendation systems, NLP models, or predictive analytics tools
- Collaborate with engineering and product teams to deploy models into production environments
- Evaluate and monitor model performance, ensuring reliability and scalability
- Contribute to data pipelines and feature engineering processes
- Stay current with emerging AI trends, including generative AI and LLMs
- Build solutions for document image processing using tools like Google Cloud Vision, AWS Textract, and OCR libraries
- Perform advanced image processing tasks, including manipulating and enhancing images while preserving original metadata, resolution, and archival integrity
What We're Looking For
- Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field
- 2+ years of hands-on experience in data science or AI roles
- Strong proficiency in Python
- Experience with machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch)
- Solid understanding of statistical methods and machine learning concepts
- Experience working with SQL and large datasets
- Ability to take ownership of tasks and deliver end-to-end solutions
- Strong communication skills and ability to work cross-functionally
Preferred
- Experience with NLP, computer vision, or recommendation systems
- Familiarity with LLMs, prompt engineering, or generative AI applications
- Experience deploying models using cloud platforms (AWS, GCP, Azure)
- Knowledge of MLOps practices (CI/CD, model monitoring, versioning)
- Experience with big data tools (Spark, Hadoop)