AI safety field-building in Australia should accelerate. My rationale:
* OpenAI opened a Sydney office in Dec 2025, and Anthropic plans to open one in 2026. These offices may hire safety staff from local talent, or partner with local auditing, evaluation, and security companies, including Harmony Intelligence, Good Ancestors, and Gradient Institute.
* An Australian AISI was announced for early 2026 and is currently hiring. The UK AISI has benefited from close partnerships with Apollo Research, METR, and the LISA office community. There is a community space in Sydney, the Sydney AI Safety Space, and two field-building organizations, AI Safety ANZ and TARA, but these could expand substantially.
* Australia seems like a prime location for datacenter build-out. OpenAI published an "AI blueprint" for Australia calling for datacenter build-out, and started building a $4.6B datacenter in Sydney in Dec 2025. Australia is a NATO partner, a Five Eyes member, and a member of the AUKUS security partnership with the US and UK; it is much more secure and aligned with US/UK interests than Saudi Arabia. Australia is the second-largest exporter of thermal coal, has vast solar and wind resources, and holds the largest uranium reserves on Earth. Public sentiment is currently quite anti-nuclear, but the country has no earthquakes or tsunamis to disrupt power plants. Janet Egan (CNAS) recently called for the development of US military AI projects in Australia, similar to the Pine Gap facility in the Northern Territory. AI safety & security research, and political pressure for safety standards, should focus on countries with frontier AI companies and datacenters.
* Several AI safety researchers have come from Australia, including Shane Legg, Marcus Hutter, Buck Shlegeris, Jan Leike (PhD only), Dan Murfet, and Daniel Filan, and a decent number of MATS fellows are Australian.
* Australia has a history as a middle power between the US and China, two of its largest trading partners.