Design, maintain, and update organizational AI ethics principles to ensure fair, transparent, and responsible AI use.
Establish guidelines that integrate ethical considerations into AI development, deployment, and operational processes.
Monitor and ensure adherence to relevant AI-related laws, regulations, and standards (e.g., GDPR, EU AI Act, Algorithmic Accountability Act).
Ensure AI systems comply with applicable regulatory guidelines.
Establish and execute processes for AI model validation, approval, and monitoring.
Develop and deliver training programs to educate employees and stakeholders about AI ethics, risks, and compliance requirements.
Contribute to the development and implementation of AI governance frameworks, policies, and standards.
Provide guidance on responsible AI practices across the organization.
Stay up to date with global AI regulations and governance trends.
Requirements
Bachelor's or Master's degree in Law, Computer Science, Data Science, or a related field.
7–10 years of professional experience in a relevant field, with at least 5 years of AI-specific experience.
Experience in governance, risk management, and compliance within an AI/ML environment.
Strong understanding of:
AI/ML concepts and lifecycle
Data governance and privacy regulations
Risk management frameworks
Excellent analytical, documentation, and communication skills.
Experience with responsible AI frameworks.
Familiarity with AI-related regulations.
Knowledge of model risk management practices.
Certification in risk, compliance, or data governance, such as the Artificial Intelligence Governance Professional Certificate or the Risk and AI (RAI)™ Certificate.
Knowledge of and experience with data privacy laws (e.g., GDPR, PDPA) and ethical AI principles.
Experience in regulated industries (e.g., finance, healthcare, public sector).
Experience with governance tools, auditing frameworks, and monitoring platforms for AI systems.