Audrey Tang

Cyber Ambassador 🇹🇼 Taiwan (2024-) and founding Minister of Digital Affairs (2016-2024).
AFP Fellow, Institute for Ethics in AI, Oxford University.

Comments

⿻ Plurality & 6pack.care
Audrey Tang · 1h

Hi! Great to hear from you. “Optimize for fun” (‑Ofun) is still very much the spirit of this 6pack.care work.

On practicality (bending the market away from arms‑race incentives): here are some levers, inspired by Taiwan's tax‑filing case, that have worked to shift returns from lock‑in toward civic care:

  1. Interoperability: Make “data portability” the rule. Mandate fair protocol‑level interop so users and complements can exit without losing their networks. Platforms must compete on quality of care, not captivity.
  2. Public options: Offer simple public options (and shared research compute) so there’s always a baseline service that is easy, safe, and non‑extractive. Private vendors must beat it on care, not on lock‑in.
  3. Provenance for Paid Reach: For ads and mass reach in political/financial domains, require verifiable sponsorship and durable disclosure. Preserve anonymity for ordinary speech via meronymity. (A minimal sketch follows this list.)
  4. Mission‑Locked Governance: Through procurement rules, ensure steward‑ownership/benefit structures and board‑level safety duties, so "civic care" is a fiduciary obligation, not a marketing slogan.
  5. Alignment Assemblies: Institutionalize Alignment Assemblies and localized evals; pre‑commit vendors to adopt outcomes or explain deviations. Federate trust & safety so threat intel flows without central chokepoints.
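
To make lever 3 concrete, here is a minimal sketch of what a verifiable sponsorship record could look like. Everything in it is an illustrative assumption rather than an existing standard: the field names, the meronymous sponsor ID (a salted hash that stays stable in public while remaining resolvable by the registry), and the HMAC, which stands in for a real registry signature:

```python
# Minimal sketch of a verifiable-sponsorship record for paid reach (lever 3).
# Field names, the meronymous-ID scheme, and the HMAC stand-in for a real
# registry signature are illustrative assumptions, not an existing standard.
import hashlib
import hmac
import json
import time

REGISTRY_KEY = b"registry-signing-key"  # a real system would use asymmetric signatures
REGISTRY_SALT = b"registry-salt"        # held by the registry; keeps the pseudonym stable

def meronymous_id(legal_identity: str) -> str:
    """Stable public pseudonym; only the registry can resolve it to a legal identity."""
    return hashlib.sha256(REGISTRY_SALT + legal_identity.encode()).hexdigest()[:16]

def issue_disclosure(ad_text: str, legal_identity: str) -> dict:
    """The registry issues a durable disclosure record bound to the exact ad content."""
    record = {
        "ad_hash": hashlib.sha256(ad_text.encode()).hexdigest(),
        "sponsor": meronymous_id(legal_identity),
        "issued_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_disclosure(ad_text: str, record: dict) -> bool:
    """Check that the disclosure travels with the ad unaltered."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(record["sig"], expected)
            and body["ad_hash"] == hashlib.sha256(ad_text.encode()).hexdigest())

rec = issue_disclosure("Vote for clean water", "ACME PAC, registry #123")
assert verify_disclosure("Vote for clean water", rec)
```

The design intent: anyone handling the ad can confirm it carries a durable, registry‑backed disclosure, while the sponsor's legal identity stays meronymous, visible in full only to the registry.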
Reply
⿻ Plurality & 6pack.care
Audrey Tang · 1h

On symbiosis: the kami view is neither egalitarian sameness nor fixed hierarchy. It's a bounded, heterarchical ecology: many stewards with different scopes, coordinating without a permanent apex. (Heterarchy = overlapping centers of competence; authority flows to where the problem lives.)

Egalitarianism would imply interchangeable agents. As capabilities grow, we’ll see a range of kami sizes: a steward for continental climate models won’t be the same as one for a local irrigation system. That’s diversity of scope, not inequality of standing.

Hierarchy would imply command. Boundedness prevents that: each kami is powerful only within its scope of care and is designed for “enough, not forever.” The river guardian has neither mandate nor incentive to run the forest.

When scopes intersect, alignment is defined by civic care: each kami maintains the relational health of the shared ecosystem at the speed of the garden. Larger systems may act as ephemeral conveners, but they don’t own the graph or set permanent policy. Coordination follows subsidiarity and federation: solve issues locally when possible; escalate via shared protocols when necessary. Meanwhile, procedural equality (the right to contest, audit, and exit) keeps the ecology plural rather than feudal.
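
To see the shape of that coordination, here is a toy sketch; the steward names, scopes, and the escalate() convener are all hypothetical, not a spec:

```python
# Toy sketch of subsidiarity among bounded stewards ("kami").
# Steward names, scopes, and the escalate() convener are hypothetical.
from dataclasses import dataclass

@dataclass
class Kami:
    name: str
    scope: set[str]  # the only issues this steward may act on ("enough, not forever")

    def handle(self, issue: str) -> str:
        if issue in self.scope:
            return f"{self.name} resolves '{issue}' locally"
        return escalate(issue)  # outside its care: hand off, never overreach

def escalate(issue: str) -> str:
    """Ephemeral convener: authority flows to where the problem lives."""
    for kami in STEWARDS:
        if issue in kami.scope:
            return kami.handle(issue)
    return f"no steward for '{issue}'; convene the affected parties"

STEWARDS = [
    Kami("river guardian", {"irrigation", "flooding"}),
    Kami("forest guardian", {"wildfire", "logging"}),
]

print(STEWARDS[0].handle("wildfire"))
# -> "forest guardian resolves 'wildfire' locally" (the river guardian defers)
```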

Reply
Early Chinese Language Media Coverage of the AI 2027 Report: A Qualitative Analysis
Audrey Tang · 4mo

I wrote a summary in Business Weekly Taiwan (April 24):

https://sayit.archive.tw/2025-04-24-%E5%95%86%E5%91%A8%E5%B0%88%E6%AC%84ai-%E6%9C%AA%E4%BE%86%E5%AD%B8%E5%AE%B6%E7%9A%84-2027-%E5%B9%B4%E9%A0%90%E8%A8%80

https://sayit.archive.tw/2025-04-24-bw-column-an-ai-futurists-predictions-f

https://www.businessweekly.com.tw/archive/Article?StrId=7012220

An AI Futurist’s Predictions for 2027

When President Trump declared sweeping reciprocal tariffs, the announcement dominated headlines. Yet inside Silicon Valley’s tech giants and leading AI labs, an even hotter topic was “AI‑2027.com,” the new report from ex‑OpenAI researcher Daniel Kokotajlo and his team.

At OpenAI, Kokotajlo had two principal responsibilities. First, he was charged with sounding early alarms—anticipating the moment when AI could hack systems or deceive people, and designing defenses in advance. Second, he shaped research priorities so that the company’s time and talent were focused on work that mattered most.

The trust he earned as OpenAI’s in‑house futurist dates back to 2021, when he published a set of predictions for 2026, most of which have since come true. He foresaw two pivotal breakthroughs: conversational AI—exemplified by ChatGPT—captivating the public and weaving itself into everyday life, and “reasoning” AI spawning misinformation risks and even outright lies. He also predicted U.S. limits on advanced‑chip exports to China and AI beating humans in multi‑player games.

Conventional wisdom once held that ever‑larger models would simply perform better. Kokotajlo challenged that assumption, arguing that future systems would instead pause mid‑computation to “think,” improving accuracy without lengthy additional training runs. The idea was validated in 2024: dedicating compute to reasoning at inference time, rather than only to training, can yield superior results.
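
A toy way to see the intuition (my illustration, not Kokotajlo's model): hold a fixed, imperfect model constant and spend more compute at inference time by sampling several attempts and taking a majority vote; accuracy climbs with no additional training:

```python
# Toy illustration: more inference-time compute, no retraining.
import random

def noisy_model(p_correct: float = 0.6) -> int:
    # A fixed, imperfect "model": returns the correct answer (1) 60% of the time.
    return 1 if random.random() < p_correct else 0

def answer(k: int) -> int:
    # Spend inference compute: sample k attempts, then take a majority vote.
    votes = [noisy_model() for _ in range(k)]
    return 1 if sum(votes) * 2 > k else 0

for k in (1, 9, 99):
    trials = 2000
    acc = sum(answer(k) for _ in range(trials)) / trials
    print(f"inference samples = {k:3d}, accuracy ≈ {acc:.2f}")
# Accuracy rises from ~0.6 toward ~1.0 as k grows, with the model unchanged.
```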

Since leaving OpenAI, he has mapped the global chip inventory, density, and distribution to model AI trajectories. His projection: by 2027, AI will possess robust powers of deception, and the newest systems may take their cues not from humans but from earlier generations of AI. If governments and companies race ahead solely to outpace competitors, serious alignment failures could follow, allowing AI to become an independent actor and slip human control by 2030. Continuous investment in safety research, however, can avert catastrophe and keep AI development steerable.

Before the tariff news, many governments were pouring money into AI. Now capital may be diverted to shore up companies hurt by the tariffs, squeezing safety budgets. Yet long‑term progress demands the opposite: sustained funding for safety measures and the disciplined use of high‑quality data to build targeted, reliable small models—so that AI becomes a help to humanity, not an added burden.

Reply
AI Views Snapshots
Audrey Tang · 2y

Based on my personal experience in pandemic resilience, additional wakeups can proceed swiftly as soon as a specific society-scale harm is realized.

Specifically, waking up to over-reliance harms and addressing them (esp. within security OODA loops) would buy time for good-enough continuous alignment.

Reply
AI Views Snapshots
Audrey Tang · 2y

Based on recent conversations with policymakers, labs and journalists, I see increased coordination around societal evaluation & risk mitigation — (cyber)security mindset is now mainstream.

Also, imminent society-scale harm (e.g. contextual integrity harms caused by over-reliance & precision persuasion since ~a decade ago) has proven effective in getting governments to consider risk reasonably.

Reply
AI Views Snapshots
Audrey Tang · 2y

Well, before 2016, I had no idea I'd serve in the public sector...

(The vTaiwan process was already modeled after CBV in 2015.)

Reply
AI Views Snapshots
Audrey Tang · 2y

Yes. The basic assumption (of my current day job) is that good-enough contextual integrity and continuous incentive alignment are solvable well within the slow takeoff we are currently in.

Reply
AI Views Snapshots
Audrey Tang · 2y

Something like a lightweight version of the off-the-shelf Vision Pro will do. Just as nonverbal cues can transmit more effectively with codec avatars, post-symbolic communication can approach telepathy with good-enough mental models facilitated by AI (not necessarily ASI).

Reply
AI Views Snapshots
Audrey Tang · 2y

Safer than implants is to connect at scale "telepathically," leveraging only full sensory bandwidth and much better coordination arrangements. That is the ↗️ direction of the depth-breadth spectrum here.

Reply
AI Views Snapshots
Audrey Tang · 2y

Yes, that, and a further focus on assistive AI systems that excel at connecting humans — I believe this is a natural outcome of the original CBV idea.

Reply
Posts

Coherent Blended Volition · 2y
⿻ Plurality & 6pack.care · 7h