Comments

Based on my personal experience in pandemic resilience, additional wake-ups can happen swiftly once a specific society-scale harm materializes.

Specifically, waking up to over-reliance harms and addressing them (esp. within security OODA loops) would buy time for good-enough continuous alignment.

Based on recent conversations with policymakers, labs, and journalists, I see increased coordination around societal evaluation & risk mitigation; the (cyber)security mindset is now mainstream.

Also, imminent society-scale harm (e.g., the contextual-integrity harms caused by over-reliance & precision persuasion over the past decade or so) has proven effective in getting governments to consider risk reasonably.

Well, before 2016, I had no idea I'd serve in the public sector...

(The vTaiwan process was already modeled after CBV in 2015.)

Yes. The basic assumption (of my current day job) is that good-enough contextual integrity and continuous incentive alignment are solvable well within the slow takeoff we are currently in.

Something like a lightweight version of the off-the-shelf Vision Pro will do. Just as nonverbal cues can transmit more effectively with codec avatars, post-symbolic communication can approach telepathy with good-enough mental models facilitated by AI (not necessarily ASI).

A safer option than implants is to connect at scale "telepathically," leveraging only full sensory bandwidth and much better coordination arrangements. That is the ↗️ direction on the depth-breadth spectrum here.

Yes, that, and a further focus on assistive AI systems that excel at connecting humans — I believe this is a natural outcome of the original CBV idea.

Nice to meet you too & thank you for the kind words. Yes, same person as AudreyTang. (I posted the map at audreyt.org as a proof of sorts.)

Thank you for this. Here's mine:

  1. It's physically possible to build STEM+ AI (though it's OK if we collectively decide not to build it).
  2. STEM+ AI will exist by the year 2100, but not by 2035 (human scientists will improve significantly thanks to coherent blended volition, aka ⿻Plurality).
  3. If STEM+ AI is built, within 10 years AIs will be able to disempower humanity (but won't do it).
  4. The future will be better if AI does not wipe out humanity.
  5. Given sufficient technical knowledge, humanity could in principle build vastly superhuman AIs that reliably produce very good outcomes.
  6. Researchers will produce good enough processes for continuous alignment.
  7. Technical research and policy work are ~equally useful.
  8. Governments will generally be reasonable in how they handle AI risk.

Always glad to update as new evidence arrives.

https://audreyt.org/ai-views-snapshot.png