AI Safety matters.

Safe & ethical advanced AI could bring unprecedented benefits to humanity. But advanced AI is not safe by default. Australians have a crucial role to play in addressing the catastrophic risks from unsafe AI.

Learn about AI Safety

AI safety encompasses work to understand and address risks from increasingly capable AI systems. This includes both current harms and potential catastrophic risks from misaligned AI systems.

AI governance focuses on institutions, policies, and decision-making processes that shape how AI is developed and deployed safely.

Nobel laureate and “Godfather of AI” Geoffrey Hinton explains AI risks in this 60 Minutes television interview, including warnings about loss of human control (15 mins). Hinton has researched AI for decades and left Google in 2023 so he could speak more freely about AI risks.

Nobel laureate Yoshua Bengio presents the case for taking catastrophic risks from AI seriously in this TED talk (15 mins).

Global synthesis from 100 experts across 30 countries on AI capabilities, risks, and technical safety measures as of early 2025.

Explore the comprehensive report

Current Australian government guidance for organisations using AI, including 10 guardrails focused on accountability, risk management, evaluations, and transparency.

Read the safety standard

Contribute Your Skills

Apply your professional expertise to advance AI safety.

Career advice website 80,000 Hours introduces AI safety issues, risks, and career paths in the field (~30 mins).

Read the 80,000 Hours AI Safety Guide

BlueDot Impact, an education non-profit focused on global challenges, offers an accessible introduction to the rapid increase in AI capabilities and the challenges this brings (2 hours). It is a good starting point for getting oriented in AI developments and their implications. BlueDot also offers more in-depth courses on technical AI safety, AI governance, and the economics of AI.

Enroll in the 2-hour course

Contribute Your Voice

Group of experts publicly advocating for government action on AI risks. They publish open letters and policy recommendations for Australian policymakers and government agencies.

Sign an open letter to the Australian Government

Connect with Community

Community for people in Australia or New Zealand interested in preventing existential risk from AI.

Join the mailing list

Policy organisation developing and advocating for solutions to this century’s most challenging problems, including AI governance, biosecurity, and institutional reform.

Join the newsletter

Additional Resources

Explore additional Australian organizations conducting research and developing solutions for responsible AI systems.

Independent non-profit research institute building safety, ethics, accountability, and transparency into AI systems. They train organisations operating AI systems and provide technical guidance on AI policy development.

Australia’s national science agency team focusing on responsible AI engineering, including AI system safety. They develop frameworks and methodologies for safe & trustworthy AI systems.