Job description
Overview
We are partnering with a leading global technology company to hire an AI Agent Security Researcher as part of their expanding security R&D team.
This role focuses on building system-level security mechanisms that safeguard AI-driven features and ensure secure deployment across mobile, PC, and IoT environments.
Responsibilities
- Design and implement security frameworks for AI agents within an OS environment
- Develop advanced access control models (e.g., Dynamic Least Privilege, intent-based sandboxing)
- Build and enhance security mechanisms such as AI Fence, MCP (Model Context Protocol) safeguards, and skill-level protection systems
- Identify and mitigate AI/LLM-related risks (e.g., prompt injection, adversarial attacks, over-privileged behaviors)
- Strengthen secure API interactions between AI agents and system services
- Collaborate on system-level security across mobile, PC, and IoT platforms
Requirements
- Master's or PhD in Computer Science, Cybersecurity, AI, or a related field
- Proficiency in C/C++ for system-level programming (e.g., kernel modules)
- Strong understanding of OS security mechanisms, including:
  - Mandatory Access Control (e.g., SELinux, AppArmor)
  - Kernel hardening (memory protection, syscall filtering via seccomp)
  - Secure API gateway design
- Knowledge of AI/LLM security risks, such as prompt injection, adversarial ML, and privilege misuse
- Familiarity with mobile/PC security frameworks (e.g., SEAndroid, iOS sandbox, Linux Security Modules)
- Experience in OS security (Android/iOS/Linux) is preferred
- Exposure to AI agents or contributions to system security projects (e.g., Linux kernel, Android AOSP) is a plus