AI safety encompasses work to understand and address risks from increasingly capable AI systems. This includes both present-day harms and potential catastrophic risks from misalignment.
AI governance focuses on the institutions, policies, and decision-making processes that shape whether AI is developed and deployed safely.
Nobel laureate and “Godfather of AI” Geoffrey Hinton explains AI risks, including warnings about loss of human control, in this 60 Minutes television interview (15 mins). Hinton has researched AI for decades and left Google in 2023 so he could speak about AI risks more freely.
Turing Award laureate Yoshua Bengio presents the case for taking catastrophic risks from AI seriously in this TED talk (15 mins).
Career advice website 80,000 Hours has a guide that explains why preventing an AI catastrophe is a pressing problem and introduces technical safety issues, risks, and career paths in the field (~30 mins).
Global challenges education non-profit BlueDot Impact offers an accessible introduction to the rapid increase in AI capabilities and the challenges it raises (2 hours). It is a good starting point for getting oriented in AI developments and their implications. BlueDot also offers more in-depth courses on technical AI safety, AI governance, and the economics of AI.
Global synthesis from 100 experts across 30 countries on AI capabilities, risks, and technical safety measures as of early 2025.
“Introduction to AI Safety, Ethics and Society” by Dan Hendrycks. A comprehensive free online textbook, also available as an audiobook.
Official Australian government guidance providing 10 practical guardrails for organizations using AI, focusing on transparency, accountability, and risk management.
Government proposals for mandatory AI safety requirements in high-risk settings, outlining regulatory approaches and implementation options.
Detailed policy submission on mandatory guardrails, providing comprehensive recommendations for Australian AI governance frameworks.
Comprehensive policy framework for Australia’s AI governance from 2025 to 2028. Reports that 78% of Australians are concerned about negative AI outcomes and that 86% support creating a new AI regulatory body. Recommends launching an Australian AI Safety Institute, introducing an AI Act, and hosting an AI Safety Summit.
Connect with the growing AI safety community across Australia. From research institutes to advocacy groups, there are many ways to get involved.
Community for people in Australia or New Zealand interested in preventing existential risk from AI. Organizes regular online events and maintains active discussions on current developments.
Group of experts publicly advocating for government action on AI risks. They publish open letters and policy recommendations for Australian policymakers and government agencies.
Policy organization developing and advocating for solutions to this century’s most challenging problems, including AI governance, biosecurity, and institutional reform.
Independent nonprofit research institute building safety, ethics, accountability, and transparency into AI systems. They develop ethically aware AI algorithms and provide technical guidance on AI policy development.
Australia’s national science agency team focusing on responsible AI engineering. They develop frameworks and methodologies for trustworthy AI systems across the entire AI lifecycle.