Responsibilities:
Product Strategy & Roadmap
- Define and maintain the product vision and roadmap for selected digital mental health and AI-enabled initiatives.
- Translate strategic objectives into prioritised backlogs, epics, and user stories with clear acceptance criteria.
- Continuously review and refine the roadmap based on data insights, user feedback, and system constraints.
AI Product Design & Governance
- Define use cases for AI-enabled features (e.g., co-pilots, triage support, recommendation engines) grounded in clear problem statements and measurable outcomes.
- Work with data science and engineering teams to translate model capabilities and limitations into safe, usable product features.
- Embed human-in-the-loop safeguards, escalation logic, explainability elements, and UX disclosures into product design.
- Define model evaluation metrics (e.g., precision/recall, latency, response quality benchmarks) and acceptance thresholds before deployment.
- Oversee responsible AI governance, including bias assessment, data provenance, auditability, and monitoring for model drift.
Delivery & Cross-Functional Coordination
- Lead sprint planning, backlog refinement, and prioritisation with engineering, data science, UX, and operations teams.
- Develop detailed product requirements documents (PRDs), including data flows, edge cases, risk scenarios, and non-functional requirements.
- Ensure alignment across frontend, backend, analytics, and machine learning components.
- Coordinate UAT, stakeholder reviews, and release readiness sign-offs.
Data, Metrics & Impact Tracking
- Define operational KPIs (e.g., engagement, completion rates, workflow efficiency, response times, escalation accuracy).
- Work with analytics teams to measure user behaviour and AI system performance.
- Interpret data to guide iterations, feature sunsetting, or scaling decisions.
- Ensure product decisions are evidence-informed and evaluation-ready.
Documentation, Risk & Transition
- Maintain comprehensive documentation of product requirements, workflows, system dependencies, and governance controls.
- Prepare knowledge-transfer materials to support long-term sustainability and ownership transitions.
- Identify operational, clinical, and reputational risks for AI-enabled features and implement mitigation plans.
- Support audit, compliance, and security review processes.
User Research & Validation
- Conduct user research across help-seekers, counsellors, clinicians, and administrators to validate needs and usability.
- Design and oversee pilots and controlled rollouts of new AI-enabled capabilities.
- Translate qualitative and quantitative findings into actionable product improvements.
- Ensure continuous feedback loops throughout the product lifecycle.
- Coordinate with research partners to run evaluation studies of deployed systems.
Additional Responsibilities
- Perform other ad-hoc tasks related to the ongoing development, evaluation, scaling, and responsible deployment of digital mental health and AI-enabled innovations.
Requirements:
- Bachelor's degree in Computer Science, Engineering, Psychology, Public Health, Business, or a related field.
- Minimum 2 years of experience in product management, digital product development, AI/ML product work, or a related role.
- Ability to translate complex AI/technical concepts into user-centred product requirements.
- Understanding of AI/ML concepts, evaluation metrics, and responsible AI principles.
- Experience working with cross-functional teams in an agile environment.
- Ability to manage multiple workstreams and priorities within fast-paced environments.
- Experience in healthcare, mental health, or other regulated environments is a plus.