LESSWRONG

Venki

Comments
Before LLM Psychosis, There Was Yes-Man Psychosis
Venki · 11d

A related thread from Emmett Shear that I've appreciated on this topic: https://x.com/eshear/status/1660356496604139520, especially the framing:
> “power corrupts” is actually shorthand for “power corrupts your world model”

The Inkhaven Residency
Venki · 21d

Tangent: I was curious about your top 1% and top 0.1% estimates, so I looked into it. I was also thinking of Erik Hoel's essay arguing that successful authors are about as rare as billionaires. [link]

  • Some estimate ~45 Substacks at >$1M/y [link], which roughly fits with Substack's reported total subscription revenue of ~$450M/y. (Officially reported as 30 in 2024 [link])
  • Reasonable to estimate 100-1000 Substacks at >$100k/y
  • How many "bloggers" exist?
    • Substack reports 50,000 Substacks w/ at least one paid sub [link]
      • Very closely supports your 0.1% and 1% estimates TBH!
    • There are ~500M WordPress blogs on the Internet (???)
    • I'm not sure what the right vibe is here: I could buy anything from 50,000 to 5M.
  • There are good numbers from Substack; this probably gets a lot weirder off-Substack.
  • It does seem reasonable to say that "of Substackers that get to their first paid subscription, 1% get to ~$100k/y, and 0.1% get to ~$1M/y"
  • There might still be some other weirdness here: top Substacks often don't look much like blogs.
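The bullets above can be sanity-checked with a quick Fermi estimate. This sketch takes the comment's numbers as inputs and assumes writer incomes follow a Pareto tail (the tail shape and the $100/y floor are my assumptions, not figures from the comment):

```python
import math

# Inputs from the comment above.
n_paid = 50_000      # Substacks with at least one paid subscriber
n_over_1m = 45       # estimated Substacks earning > $1M/y

# Assumption: incomes follow a Pareto tail, so the count of writers
# earning above x scales as x^(-alpha). Fit alpha so that ~45 of the
# 50,000 paid Substacks exceed $1M/y, using a hypothetical $100/y floor.
x_min = 100.0
alpha = math.log(n_paid / n_over_1m) / math.log(1e6 / x_min)

# Implied count above $100k/y under the same tail.
n_over_100k = n_paid * (x_min / 1e5) ** alpha
print(f"alpha ≈ {alpha:.2f}, Substacks > $100k/y ≈ {n_over_100k:,.0f}")
```

Under these assumptions the implied count above $100k/y lands in the low hundreds, consistent with the "100-1000" range above.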
Venki's Shortform
Venki · 7mo

What are all the high-level answers to "What should you, a layperson, do about AI x-risk?". Happy to receive a link to an existing list.

Mine, from five minutes of recalling answers I've heard:

  • Don’t work for OpenAI
  • Found or work for an AI lab that gains a lead on capabilities, while remaining relatively safe
  • Maybe work for Anthropic, they seem least bad
  • Don’t work for any AI lab
  • Don’t take any action which increases revenue of any AI lab
  • Mourn
  • Do technical AI alignment
  • Don’t do technical AI alignment
  • Do AI governance & advocacy
  • Donate to AI x-risk funds
  • Cope
  • Don't perform domestic terrorism
Venki's Shortform
Venki · 7mo

Labor magnification as a measure of AI systems.

Cursor is Mag(Google SWE, 1.03) if Google would rather have access to Cursor than 3% more SWEs at median talent level.

A Mag(OpenAI, 10) system is a system that OpenAI would rather have than 10x more employees at median talent level.

A time-based alternative is useful too, in cases where it's hard to envision that many more employees:

A tMag(OpenAI, 100) system is a system that OpenAI would rather have than 100x time-acceleration for its current employee pool.

Given that definition, some notes: 

1. Mag(OpenAI, 100) is my preferred watermark for self-improving AI: one where we'd expect takeoff unless there's a sudden and exceedingly hard scaling wall.

2. I prefer this framework over t-AGI in some contexts, as I expect we'll see AI systems able to do plenty of 1-month tasks before they can do all 1-minute tasks. OpenAI's Deep Research already comes close to confirming this. Magnification is more helpful because it measures shortcomings by whether they can be cheaply shored up by human labor.
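As a minimal sketch, the Mag and tMag definitions above can be encoded as preference comparisons. The `Org` type, its `value` function, and the toy numbers below are hypothetical placeholders, not anything from the post:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Org:
    headcount: int
    # value(headcount_multiplier, time_multiplier, has_system) -> perceived value.
    # Hypothetical stand-in for the org's actual preferences.
    value: Callable[[float, float, bool], float]

def is_mag(org: Org, k: float) -> bool:
    """The system is Mag(org, k) if the org prefers it to k× more median-talent staff."""
    return org.value(1.0, 1.0, True) >= org.value(k, 1.0, False)

def is_tmag(org: Org, k: float) -> bool:
    """The system is tMag(org, k) if the org prefers it to k× time-acceleration."""
    return org.value(1.0, 1.0, True) >= org.value(1.0, k, False)

# Toy example: an org whose output scales with headcount × time, and for
# which the system is worth a 1.5× boost (hypothetical numbers).
toy = Org(headcount=1000,
          value=lambda h, t, has_sys: h * t * (1.5 if has_sys else 1.0))
print(is_mag(toy, 1.03))   # True: the system beats a 3% headcount increase
print(is_mag(toy, 10.0))   # False: but not 10× more employees
```

The point of the encoding is that Mag and tMag are revealed-preference claims about the org, not intrinsic properties of the system.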
