AI safety encompasses work to understand and address risks from increasingly capable AI systems, spanning both current harms and potential catastrophic risks from misaligned systems.
AI governance focuses on institutions, policies, and decision-making processes that shape how AI is developed and deployed safely.
Nobel laureate and “Godfather of AI” Geoffrey Hinton explains AI risks, including warnings about loss of human control, in this 60 Minutes television interview (15 mins). Hinton has researched AI for decades and left Google in 2023 so he could speak more freely about AI risks.
Turing Award winner Yoshua Bengio presents the case for taking catastrophic risks from AI seriously in this TED talk (15 mins).
Global synthesis from 100 experts across 30 countries on AI capabilities, risks, and technical safety measures as of early 2025.
Current Australian government guidance for organisations using AI, including 10 guardrails focused on accountability, risk management, evaluations, and transparency.
Apply your professional expertise to advance AI safety.
Career advice website 80,000 Hours introduces AI safety, its key risks, and career paths in the field (~30 mins).
Global challenges education non-profit BlueDot Impact offers an accessible introduction to the rapid increase in AI capabilities and the challenges this creates (2 hours). It is a good starting point for getting oriented in AI developments and their implications. BlueDot also runs more in-depth courses on technical AI safety, AI governance, and the economics of AI.
Group of experts publicly advocating for government action on AI risks. They publish open letters and policy recommendations for Australian policymakers and government agencies.
Community for people in Australia or New Zealand interested in preventing existential risk from AI.
Policy organisation developing and advocating for solutions to this century’s most challenging problems, including AI governance, biosecurity, and institutional reform.
Explore additional Australian organisations conducting research and developing solutions for responsible AI systems.
Independent nonprofit research institute building safety, ethics, accountability, and transparency into AI systems. They train organisations operating AI systems and provide technical guidance on AI policy development.
Australia’s national science agency team focusing on responsible AI engineering, including AI system safety. They develop frameworks and methodologies for safe and trustworthy AI systems.