About AI safety

The development and deployment of increasingly capable AI systems involve novel risks and opportunities. Alongside current and pressing risks, there are also potentially catastrophic risks from human misuse of capable AI systems, AI systems acting in ways that are misaligned with human goals, or the intensification of other risks through competition or conflict over AI’s benefits and advantages.

AI safety is an umbrella term for work that tries to understand and address these risks. Although the field originally focused on technical solutions to catastrophic risks from AI, work in this area now recognises the importance of human decision-making, whether by individuals or embedded in organisations and institutions.

AI governance is an umbrella term for non-technical approaches to improving AI safety: how decisions about AI are made, and what institutions and arrangements help those decisions be made well. It includes norms, international agreements and treaties, shared beliefs and practices, standards, and ‘ways of doing things’.

For an introduction to possible catastrophic risks from AI and pathways to safety, watch the YouTube recording of Ben Garfinkel’s talk “Catastrophic risks from unsafe AI”, given at Effective Altruism Global: London in May 2023, or read the article summary.

AI safety in Australia

In 2023, the most advanced AI systems (“frontier AI”) are being developed by companies in the US and UK. However, Australia and Australians have a role to play in safely navigating the transition to a world with advanced AI systems.

At a minimum, Australian technical, policy, and governance talent could be deployed to address global issues (e.g., through research) or to directly address issues in the jurisdictions where frontier AI is being developed, governed, and regulated (e.g., by working in those jurisdictions).

However, it’s also worthwhile to build an Australian community of people who care about AI risks and work to address them. This is because the most capable systems of 2023 are likely to proliferate globally; policy and governance arrangements must be made for the impacts of AI on Australians, just as they must be in other jurisdictions; and Australia, as a government and a community (of businesses, organisations, civil society, academics, etc.), has a role to play in supporting effective international arrangements to reduce catastrophic risks from AI.

For information about what Australian governments could do to address AI risks, you can read the open letter from Australians for AI Safety, or the detailed policy submission (PDF) by Good Ancestors Policy to the Commonwealth Department of Industry, Science and Resources consultation on Safe and Responsible AI.

Get involved

AI Governance Slack
Join the Slack channel to talk about AI governance in Australia.
AI governance opportunities

Alexander Saeri is seeking interested people to participate in the AI policy & governance community in Australia. He is interested in collaborating on research, training, and community-building activities, as well as raising funding to support this work.

AI Safety Brisbane

AI Safety Brisbane is convened by Jay Bailey.

AI Safety Sydney

AI Safety Sydney is convened by Yanni Kyriacos and Chris Leong.

AI Safety Melbourne

AI Safety Melbourne is convened by EJ Watkins and Justin Olive.

AI Safety Australia & New Zealand

This community, convened by Yanni Kyriacos and Chris Leong, was created for people in Australia or New Zealand who are interested in preventing existential risk from AI. AI Safety ANZ organises online events and has active Facebook chat discussions.

Australians for AI Safety

This group of experts has publicly advocated for government action on risks from AI.


Contact

Contact Alexander Saeri for more information on AI governance in Australia.

Contact Chris Leong for more information on technical AI safety in Australia.