(Cross-posted from speaker's notes of my talk at DeepMind today.)
Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Ambassador and first Digital Minister (2016-2024). It is an honor to be here with you all at DeepMind.
When we discuss "AI" and "society," two futures compete.
In one—arguably the default trajectory—AI supercharges conflict.
In the other, it augments our ability to cooperate across differences. This means treating differences as fuel and inventing a combustion engine to turn them into energy, rather than constantly putting out fires. This is what I call ⿻ Plurality.
Today, I want to discuss an application of this idea to AI governance, developed at Oxford’s Ethics in AI Institute, called the...
Hi! If I understand you correctly, the risk you identify is an AI gardener with its own immutable preferences, pruning humanity into compliance.
Here, the principle of subsidiarity would entail BYOP: Bring Your Own Policy. A concrete example is our team at ROOST working with OpenAI to release its gpt-oss-safeguard model. Its core feature is not a fixed set of rules, but the ability for any community to evolve its own code of conduct in plain language.
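The BYOP pattern above can be sketched in a few lines: the policy is per-community data passed alongside the content, rather than a rule set baked into the model. This is a minimal illustration only; the message shape, model name, and helper function are my assumptions, not the actual gpt-oss-safeguard interface.

```python
# Sketch of BYOP ("Bring Your Own Policy"): each community supplies its
# own plain-language policy; the same safeguard model serves them all.
# The request shape and model name below are illustrative assumptions.

def build_safeguard_request(community_policy: str, content: str,
                            model: str = "gpt-oss-safeguard") -> dict:
    """Assemble a classification request carrying the community's own
    plain-language code of conduct as the system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": community_policy},
            {"role": "user", "content": content},
        ],
    }

# Two communities, two policies, one model: when a community's values
# change, only its policy text changes -- nothing is retrained.
gardening_policy = "Allow frank critique of pruning techniques; flag spam."
request = build_safeguard_request(gardening_policy, "Buy cheap shears now!!!")
```

Because the policy travels with every request, a community that rewrites its code of conduct tonight is governed by the new text tomorrow, which is exactly the responsiveness the next paragraph describes.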
If a community decides its values have changed — that it wants to pivot away from its current governance to pursue a transformative future — the gardener is immediately responsive. In an ecosystem of communities empowered with kamis, each community preserves the option to let its garden grow wild if it so chooses.