A related thread from Emmett Shear that I've appreciated on this topic: https://x.com/eshear/status/1660356496604139520, esp. the framing:
> “power corrupts” is actually shorthand for “power corrupts your world model”
Tangent: I was curious about your top 1% and top 0.1% estimates, so I looked into it. I was also partly thinking of Erik Hoel's essay arguing that successful authors are about as rare as billionaires. [link]
What are all the high-level answers to "What should you, a layperson, do about AI x-risk?" I'm happy to receive a link to an existing list.
Mine, from 5 minutes of recalling answers I've heard:
Labor magnification as a measure of AI system capability.
Cursor is Mag(Google SWE, 1.03) if Google would rather have access to Cursor than 3% more SWEs at median talent level.
A Mag(OpenAI, 10) system is one that OpenAI would rather have than 10x more employees at its median talent level.
A time-based alternative is useful too, for cases where it's a little hard to envision that many more employees.
A tMag(OpenAI, 100) system is one that OpenAI would rather have than a 100x time acceleration for its current employee pool.
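One way to see how the definition cashes out: Mag is the indifference point of a (presumably monotone) preference over the headcount multiplier k, so you can bisect for it. Here's a minimal sketch; the `prefers_system` oracle is a hypothetical stand-in for whatever elicitation you'd actually run on the org, and the bounds and stub values are mine, not from the definition.

```python
from typing import Callable

def mag(prefers_system: Callable[[float], bool],
        lo: float = 1.0, hi: float = 2.0, iters: int = 50) -> float:
    """Estimate Mag(org, k) given a preference oracle.

    prefers_system(k) -> True if the org would rather have the AI system
    than k-times its current headcount at median talent level. Assumes the
    preference is monotone in k: True for small k, False for large k.
    """
    # Grow the upper bound until extra headcount wins (so Mag lies below hi).
    while prefers_system(hi):
        hi *= 2
        if hi > 1e9:  # give up: the system beats any plausible headcount
            return hi
    # Bisect on a log scale, since k can span orders of magnitude.
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if prefers_system(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Stub oracle with a "true" Mag of 7: the org prefers the system iff k < 7.
print(round(mag(lambda k: k < 7.0), 2))  # ~7.0
```

tMag is the same search with the oracle quantified over time acceleration instead of headcount.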
Given these definitions, some notes:
1. Mag(OpenAI, 100) is my preferred watermark for self-improving AI, one where we'd expect takeoff unless there's a sudden & exceedingly hard scaling wall.
2. ...
Related: Rule Thinkers In, Not Out