Razer Inc.

Algorithm Engineer - Agentic AI

2-4 Years
  • Posted 4 days ago

Job Description

Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact globally while working with a global team spanning five continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will accelerate your growth, both personally and professionally.

This AI Algorithm Engineer role sits within the Agentic AI Pod, focused on researching, designing, and scaling multimodal agent systems within Razer's internal AI platform. You will play a critical role in developing autonomous and semi-autonomous multimodal AI agents that integrate large language models (LLMs), multimodal foundation models (vision, speech, audio), retrieval systems, fine-tuned models, and tool-based orchestration to enable intelligent, real-time, and context-aware capabilities across Razer's gaming and platform experiences.

The ideal candidate is a strong AI systems and applied research engineer with hands-on experience in multimodal agent architectures, RAG pipelines, LLM and multimodal model fine-tuning, and production deployment. You will work across the full lifecycle, from data preparation and multimodal model adaptation to system integration, deployment, and continuous optimization, while collaborating closely with AI Software Engineers, Research Scientists, Platform Engineers, and DevOps teams.

Key Responsibilities

  • Design, implement, and maintain multimodal agentic AI architectures, including perception, planning, tool use, memory, and multi-step reasoning
  • Research and build multimodal agents that combine text, vision, audio, and speech models for grounded understanding and interaction
  • Build, operate, and optimize multimodal Retrieval-Augmented Generation (RAG) pipelines using embeddings, vector databases, and internal multimodal knowledge sources (text, images, video, audio)
  • Perform LLM and multimodal model fine-tuning and adaptation (e.g., supervised fine-tuning, instruction tuning, PEFT methods such as LoRA) to improve reasoning, perception, and task performance
  • Develop internal agent frameworks, multimodal tooling, and orchestration layers for LLM- and multimodal-model-driven workflows
  • Integrate and adapt 3rd-party multimodal AI services (LLMs, vision models, speech/audio models, agent platforms) into agent-based systems
  • Prototype, evaluate, and productionize multimodal agent frameworks and research ideas, balancing model capability, latency, cost, and system complexity
  • Deploy and operate production-grade multimodal AI systems, addressing scalability, latency, reliability, observability, and cost controls
  • Conduct benchmarking and evaluation of multimodal models, agent behaviors, fine-tuning strategies, and retrieval approaches
  • Collaborate with platform, infrastructure, and security teams to ensure secure, compliant, and maintainable AI systems
  • Stay current with advances in multimodal foundation models, agentic AI research, RAG methods, and deployment patterns
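To illustrate the retrieval step behind the RAG pipelines mentioned above, here is a minimal, hypothetical sketch: nearest-neighbor lookup by cosine similarity over toy embedding vectors. The document IDs, vectors, and helper names are invented for illustration; a production system would use a real embedding model and a vector database rather than an in-memory list.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, k=2):
    # corpus: list of (doc_id, embedding) pairs; return the top-k doc IDs
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

corpus = [
    ("doc_gaming", [0.9, 0.1, 0.0]),
    ("doc_audio",  [0.1, 0.9, 0.1]),
    ("doc_vision", [0.0, 0.2, 0.9]),
]
print(retrieve([0.8, 0.2, 0.1], corpus, k=2))  # → ['doc_gaming', 'doc_audio']
```

The retrieved documents would then be passed to the generation model as context; multimodal variants apply the same idea with image, audio, or video embeddings in a shared vector space.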

Pre-Requisite Technical Skills

  • At least 2 years of experience in AI systems engineering, agentic AI development, multimodal AI, or applied ML research in production
  • Strong proficiency in Python and solid software engineering fundamentals (API design, testing, modular architecture)
  • Strong proficiency in prompt design for multimodal agents, including instruction design, role prompting, tool-use prompting, multimodal input/output handling, and evaluation
  • Hands-on experience with LLM APIs (e.g., OpenAI, Claude, Gemini) and multimodal models (vision-language, speech, audio)
  • Practical experience with LLM and multimodal model fine-tuning workflows, including data preparation, training, evaluation, and deployment
  • Experience with agent and RAG frameworks such as LangChain, LlamaIndex, AutoGen, or similar
  • Experience deploying and operating AI systems with multimodal inputs, with attention to latency, throughput, and reliability
  • Familiarity with cloud platforms (AWS, GCP, Azure) and AI deployment / MLOps workflows (CI/CD, monitoring, versioning)

Preferred Qualifications

  • Experience with parameter-efficient fine-tuning (PEFT) techniques such as LoRA, QLoRA, or adapters
  • Hands-on experience with multimodal foundation models (e.g., vision-language, speech-language, audio-language models)
  • Experience with vector databases (e.g., Pinecone, Weaviate, Milvus, FAISS), including multimodal embeddings
  • Strong understanding of multimodal prompt engineering, retrieval strategies, and RAG evaluation
  • Experience operating and debugging multimodal agent systems in production or research environments
  • Ability to clearly communicate research insights, architectural decisions, and trade-offs
  • Passion for gaming and interest in intelligent, interactive, and immersive AI experiences
  • Comfortable working in a fast-paced, research-driven, agile environment

Education & Experience

  • Master's degree or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a closely related technical discipline

Travel Requirements

  • Role based in the Singapore office, with occasional travel (up to 1 trip per year) for conferences, research collaborations, or business meetings.


Job ID: 139176127