About the Team
TikTok's Trust & Safety Policy team develops, reviews, and implements the policies and processes that underpin our Community Guidelines, promoting a positive and safe environment where all of our users and content creators can enjoy and express themselves. The LIVE Product Policy Team Lead owns the intersection of content policy and machine learning enforcement - translating principled safety frameworks into high-quality model inputs, scalable enforcement mechanisms, and rigorous evaluation standards for TikTok LIVE. This is a dual-facing leadership role: you will deliver AI-powered solutions that achieve content governance goals while also crafting policy solutions that empower model-based enforcement at scale.
Responsibilities:
- Lead and mentor a team of policy managers and specialists, setting objectives and priorities and fostering an AI-first culture that safeguards platform principles.
- Translate content policies into precise, model-ready specifications - including labeling guidelines, taxonomy frameworks, prompts, and enforcement boundaries - that directly inform model training and fine-tuning.
- Design and maintain rigorous evaluation standards - including benchmark datasets and quality metrics - to measure and continuously improve AI enforcement performance (see the sketch after this list).
- Monitor the LIVE content ecosystem for emerging trends, evasion behaviors, and enforcement gaps, and proactively update model-digestible policy inputs to maintain platform resilience.
- Partner with Algo and Product teams to systematically identify model capacity ceilings and explore how different training approaches can achieve intended governance goals.
- Represent the team in cross-functional forums, communicating technical policy trade-offs and enforcement strategies to non-technical stakeholders including legal, communications, and product leadership.
- Champion an AI-first mindset across all policy operations - proactively identifying opportunities to integrate AI solutions at the team level.
- Contribute to global safety standards by staying current with regulatory developments, academic research, and industry benchmarks relevant to AI-powered content moderation.
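
To make the evaluation-standards responsibility concrete, here is a minimal sketch of how a benchmark harness might score an enforcement model per policy category against gold labels from expert reviewers. The policy names and sample data are hypothetical illustrations, not actual TikTok categories or guidelines.

```python
from collections import Counter

# Hypothetical policy categories for a LIVE moderation benchmark.
POLICIES = ["regulated_goods", "adult_content", "harassment", "none"]

def per_policy_metrics(gold, predicted):
    """Compute one-vs-rest precision, recall, and F1 for each policy category."""
    counts = Counter()  # (policy, "tp"/"fp"/"fn") -> count
    for g, p in zip(gold, predicted):
        for policy in POLICIES:
            if p == policy and g == policy:
                counts[(policy, "tp")] += 1
            elif p == policy:
                counts[(policy, "fp")] += 1
            elif g == policy:
                counts[(policy, "fn")] += 1
    metrics = {}
    for policy in POLICIES:
        tp = counts[(policy, "tp")]
        fp = counts[(policy, "fp")]
        fn = counts[(policy, "fn")]
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        metrics[policy] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics

# Gold labels from expert review vs. model predictions (illustrative data only).
gold = ["regulated_goods", "none", "harassment", "regulated_goods"]
pred = ["regulated_goods", "harassment", "harassment", "none"]
for policy, m in per_policy_metrics(gold, pred).items():
    print(f"{policy:16s} P={m['precision']:.2f} R={m['recall']:.2f} F1={m['f1']:.2f}")
```

In practice a harness like this would run over a versioned benchmark set, so that metric movements can be attributed to model changes versus policy changes.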
Minimum Qualifications:
- 5+ years of experience in Trust & Safety, content policy, AI/ML operations, or platform risk management, with at least 2 years in a lead role.
- Demonstrated experience developing model-friendly policy guidance and scalable safety frameworks - including prompt engineering, AI-assisted evaluation methodologies, and building AI agents for ML workflows - across emerging product surfaces and rapidly evolving risk domains (see the sketch after this list).
- Strong working knowledge of supervised learning concepts, model evaluation metrics, risk measurement methodologies, and the policy-to-data pipeline, with experience designing safety evaluation or risk classification approaches for complex user-generated or AI-generated content ecosystems.
- Experience identifying, measuring, and mitigating platform integrity and commercial ecosystem risks, including sensitive content, regulated goods, counterfeit goods, misleading commercial behavior, or other high-risk policy areas, while balancing business growth and user safety objectives.
- Familiarity with global regulatory and compliance considerations related to online safety, privacy, and harmful content domains (e.g. DSA, GDPR, CSAM obligations).
- Excellent cross-functional collaboration and influencing skills, with experience working with both technical teams (algorithm engineering, data science, product management) and non-technical stakeholders (policy, operations, legal).
- Proven track record of driving cross-functional AI operations, policy, or safety initiatives from strategy through execution in fast-paced and ambiguous environments.
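
As one illustration of what "model-friendly policy guidance" can look like in practice, the sketch below renders a policy taxonomy into a classification prompt for an LLM labeler. The taxonomy entry and prompt wording are illustrative assumptions, not actual TikTok policy text.

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    """One node of a moderation taxonomy, written for model consumption."""
    label: str          # machine-readable label used in training data
    definition: str     # precise, testable description of the violation
    in_scope: str       # example that SHOULD receive this label
    out_of_scope: str   # near-miss that must NOT (the enforcement boundary)

# Hypothetical taxonomy entry for a LIVE commerce policy.
TAXONOMY = [
    PolicyRule(
        label="counterfeit_goods",
        definition="Host offers branded goods while stating or implying they are replicas.",
        in_scope="Seller shows a logo handbag and calls it a '1:1 copy'.",
        out_of_scope="Seller compares their own unbranded bag to a famous brand.",
    ),
]

def render_labeling_prompt(rules, transcript):
    """Render the taxonomy into a single classification prompt."""
    blocks = []
    for r in rules:
        blocks.append(
            f"LABEL: {r.label}\nDEFINITION: {r.definition}\n"
            f"IN SCOPE: {r.in_scope}\nOUT OF SCOPE: {r.out_of_scope}"
        )
    return (
        "You are labeling a LIVE transcript against the policy taxonomy below.\n\n"
        + "\n\n".join(blocks)
        + f"\n\nTRANSCRIPT:\n{transcript}\n\n"
        + "Answer with exactly one label from the taxonomy, or 'none'."
    )

print(render_labeling_prompt(TAXONOMY, "Host: these watches are exact replicas..."))
```

The structural point is that every enforcement boundary is expressed as a contrastive in-scope/out-of-scope pair, a format that tends to transfer cleanly to both human annotators and model-based labelers.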
Preferred Qualifications:
- Hands-on familiarity with AI/ML concepts, data annotation, and model evaluation processes in a policy development and enforcement context.
- Growth mindset with a demonstrated eagerness to adopt and champion AI tools in daily work, continuously seeking opportunities to leverage emerging technologies to enhance personal productivity, workflow efficiency, and team output quality.
- Exposure to AI governance, responsible AI frameworks, or regulatory compliance (e.g. DSA).
- Proficiency in data analysis and research methodologies to inform data-driven policy decisions.
- Experience setting global or multi-market content standards across diverse regulatory environments.
- Experience working with livestream content ecosystems subject to rapid change and monetization integrity challenges.