Collaborate with developers and QA engineers to review software requirements
Learn how to identify key elements that need testing
Building AI Models with LLMs
Use LLMs (e.g., Gemma, served via Hugging Face Transformers or Ollama) to interpret software requirements
Fine-tune or prompt LLMs to generate relevant and context-aware test cases
Experiment with different model types and techniques to improve accuracy
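The prompting step above can be sketched in Python. This is a minimal, illustrative example: the prompt template, the `input -> expected` output format, and the parsing logic are assumptions, and the hand-written completion stands in for a real response from a model such as Gemma served via Hugging Face Transformers or Ollama.

```python
def build_prompt(requirement: str) -> str:
    """Wrap a software requirement in a test-generation instruction.

    The template is an illustrative assumption, not a prescribed format.
    """
    return (
        "You are a QA engineer. For the requirement below, list test cases, "
        "one per line, in the form 'input -> expected outcome'.\n"
        f"Requirement: {requirement}\n"
        "Test cases:"
    )


def parse_test_cases(completion: str) -> list[dict]:
    """Parse 'input -> expected' lines from a model completion."""
    cases = []
    for line in completion.splitlines():
        if "->" in line:
            given, expected = line.split("->", 1)
            cases.append({"input": given.strip(), "expected": expected.strip()})
    return cases


# Hand-written completion standing in for real model output:
completion = (
    "empty password -> login rejected\n"
    "valid credentials -> user session created"
)
cases = parse_test_cases(completion)
```

Keeping prompt construction and response parsing as separate functions makes it easy to experiment with different models and prompt variants, as the bullet above suggests, without touching the downstream automation.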
Automation Integration
Map AI-generated test cases to models in the Test Automation Framework
Write scripts in Python to automate test execution
Ensure test cases can run automatically and validate outcomes
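The execution-and-validation step might look like the following sketch. The `system_under_test` function is a stand-in invented for illustration; a real integration would dispatch each generated case to the team's Test Automation Framework instead.

```python
def system_under_test(x: int) -> int:
    """Stand-in for real functionality under test: doubles its input."""
    return 2 * x


def run_cases(cases: list[tuple[int, int]]) -> dict:
    """Run (input, expected) pairs automatically and tally the outcomes."""
    results = {"passed": 0, "failed": 0}
    for given, expected in cases:
        if system_under_test(given) == expected:
            results["passed"] += 1
        else:
            results["failed"] += 1
    return results


# Two correct expectations and one deliberately wrong one:
report = run_cases([(1, 2), (3, 6), (4, 9)])
```

A pass/fail tally like `report` is the kind of structured outcome the validation and feedback steps below can act on.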
Validation & Feedback
Collaborate with manual testers to review test cases
Run tests and analyze results to improve model performance
Refine models based on feedback and test outcomes
Continuous Improvement
Monitor model performance and retrain with new data
Implement feedback loops for ongoing learning and optimization
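One simple form such a feedback loop could take is tracking how many generated test cases reviewers accept and flagging the model for retraining when the rate drops. The 0.8 threshold is an assumed value for illustration, not from the source.

```python
def should_retrain(accepted: int, total: int, threshold: float = 0.8) -> bool:
    """Flag retraining when the reviewer acceptance rate falls below threshold.

    The 0.8 default is an assumed cutoff; tune it from observed results.
    """
    if total == 0:
        # No feedback collected yet, so there is nothing to act on.
        return False
    return accepted / total < threshold
```

Wiring a check like this into the test pipeline turns reviewer feedback into an automatic retraining trigger rather than a manual judgment call.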
Documentation & Reporting
Maintain clear documentation of models, processes, and results
Share insights and findings with the team
Expected Outcomes:
LLM-powered models that generate test cases from software requirements
Automated execution of test cases using the Test Automation Framework
Improved test coverage and efficiency
A framework for continuous learning and model improvement
Basic Qualifications:
Currently enrolled in a Bachelor's or Master's program in Computer Science, Electrical/Electronic Engineering, or a software-related discipline at an accredited college or university
Experience and interest in software development; familiarity with AI models and Python is an advantage
Able to work independently and creatively
Familiarity with software tools such as Visual Studio, Git, Bitbucket, and LLMs