A journey of persistence, courage, and perspective—shaping international standards in AI security while bringing Pacific voices to global AI safety discourse.
Co-Lead
Co-leading the development of the Top 10 machine learning security vulnerabilities framework, used globally by security practitioners and organisations.
Associate Editor
Peer review and editorial oversight for cutting-edge research at the intersection of technology and society.
Professional Standards Board Member
Shaping professional standards and ethics for the computing industry across Australia.
AI Working Group Volunteer
Contributing to global cloud security standards and AI-specific security frameworks.
Domain Expert Member
Providing expert guidance on AI safety, security, and governance to OpenAI's research community.
Security best practices for AI/ML systems, organisational frameworks, and MLSecOps implementation. Developed the CIPHER framework for harmonising emerging risks.
Novel approaches using chaos theory, dynamical systems, and game theory to model and quantify AI security risks with mathematical rigour.
Challenging Western-centric approaches by bringing Pacific and Indigenous perspectives to AI governance, emphasising sustainability over growth and sovereignty over control.
"Current AI safety discourse in Oceania operates under colonial misconceptions, given the prioritisation of growth and control over sustainability and sovereignty. More sustainable and inclusive frameworks for AI safety can be developed using Indigenous knowledge."
From Brookings Institution research on AI infrastructure and Pacific Island nations
Charles Sturt University
Recognised for shaping global standards in AI security. The citation highlights the journey from starting university without a bachelor's degree to becoming an international leader in AI security.
Australian AI Awards 2025
AISA 2024
Telstra, 2023
Telstra: earned "Leads The Way" and "Team Performer" credentials
Whether you're building AI systems, establishing governance frameworks, or researching ML security, let's work together to create safer, more equitable AI.