AI Agent Security Engineer (OS Security / Access Control / Trust Framework)

5-8 Years
SGD 10,000 - 15,000 per month
  • Posted 4 days ago

Job Description

We are building a next-generation intelligent computing platform that integrates advanced AI capabilities to power seamless user and enterprise experiences. As AI agents and large language models (LLMs) become core components of modern systems, they also introduce new and complex security challenges, such as prompt injection, unauthorized API access, and data leakage.

Given that AI agents often operate with elevated privileges, improper access control or excessive data exposure can lead to system compromise, privacy risks, and unintended behaviors.

To address these challenges, we are expanding our AI Security R&D team to design and implement robust, system-level protections for AI-driven environments across mobile, desktop, and IoT platforms.

Key Responsibilities

  • Design and implement security mechanisms to safeguard AI agents and LLM-powered systems
  • Develop and enhance access control frameworks, including dynamic least-privilege models and sandboxing mechanisms
  • Secure AI system interactions, including APIs, plugins, and tool integrations
  • Identify, analyze, and mitigate emerging threats in AI systems (e.g., prompt injection, adversarial attacks)
  • Collaborate with cross-functional teams to integrate security into system architecture and AI workflows
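As a concrete illustration of the dynamic least-privilege model mentioned above, the sketch below shows a deny-by-default capability check gating agent tool calls. The capability names and the `agent_ctx` structure are hypothetical, invented for this example rather than taken from the posting; a real framework would back `agent_grant` with a policy engine.

```c
/* Hedged sketch: dynamic least-privilege gating for AI agent tool calls.
 * A session starts with an empty grant mask; capabilities are granted
 * only when needed and revoked when the task finishes. All names here
 * are illustrative assumptions, not part of any existing framework. */
#include <stdint.h>
#include <stdbool.h>

enum agent_cap {
    CAP_READ_FILES  = 1u << 0,
    CAP_WRITE_FILES = 1u << 1,
    CAP_NET_ACCESS  = 1u << 2,
    CAP_EXEC_TOOLS  = 1u << 3,
};

struct agent_ctx {
    uint32_t granted;   /* capabilities currently granted to this session */
};

/* Widen privileges; production code would consult a policy engine first. */
static void agent_grant(struct agent_ctx *ctx, uint32_t cap) {
    ctx->granted |= cap;
}

/* Shrink privileges back down once a task completes. */
static void agent_revoke(struct agent_ctx *ctx, uint32_t cap) {
    ctx->granted &= ~cap;
}

/* Deny-by-default check run before every tool invocation:
 * all required capability bits must be present in the grant mask. */
static bool agent_allowed(const struct agent_ctx *ctx, uint32_t need) {
    return (ctx->granted & need) == need;
}
```

The design choice worth noting is that the check requires *all* needed bits, so a tool that both reads files and touches the network is denied unless both capabilities were explicitly granted.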

Required Technical Expertise

We are looking for candidates with strong experience in one or more of the following areas:

System Programming

  • Proficiency in C/C++ for low-level or system programming (e.g., kernel modules, system services)

Operating System Security

  • Deep understanding of OS security mechanisms, including:
      • Mandatory Access Control (e.g., SELinux, AppArmor)
      • Kernel hardening (memory protection, syscall filtering)
      • Secure API design and enforcement
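The syscall filtering mentioned here is commonly done on Linux with seccomp-BPF. The sketch below installs a minimal allow-list filter (read/write/exit only) for a hypothetical sandboxed agent helper; it assumes a Linux kernel new enough for `SECCOMP_RET_KILL_PROCESS` (4.14+), and a production filter would additionally validate `seccomp_data.arch` before trusting the syscall number.

```c
/* Hedged sketch: a minimal seccomp-BPF syscall allow-list for a
 * sandboxed helper process. Assumes Linux >= 4.14; a real filter
 * must also check seccomp_data.arch against the expected ABI. */
#include <stddef.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>
#include <linux/filter.h>

static int install_filter(void) {
    struct sock_filter filter[] = {
        /* Load the syscall number from the seccomp data. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* Allow only the syscalls the sandboxed code still needs. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read,       4, 0),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write,      3, 0),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit,       2, 0),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
        /* Anything else kills the process. */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len    = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    /* Required so an unprivileged process may install a filter. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0)
        return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}
```

After `install_filter()` returns 0, plain `write(2)` still works but any filtered syscall terminates the process; privileges can only narrow, never widen, which is the property the posting's least-privilege bullet asks for.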

AI / LLM Security

  • Familiarity with AI-specific security risks, including:
      • Prompt injection attacks
      • Adversarial machine learning techniques
      • Risks from over-privileged AI agents

Platform Security

  • Experience with security frameworks across mobile, desktop, or Linux-based systems

Qualifications

  • Master's or PhD in Computer Science, Cybersecurity, Artificial Intelligence, or related fields
  • Industry experience in operating system or platform security (e.g., Linux, Android, iOS)
  • Strong problem-solving skills and a security-first mindset
  • Publications or patents in cybersecurity are a plus (not required)

Preferred Experience

  • Hands-on experience working with AI agents or AI-enabled systems
  • Contributions to system-level or open-source security projects (e.g., Linux kernel, Android AOSP)
  • Experience designing or implementing access control or sandboxing frameworks

More Info

Job ID: 145641127