Surveys · AI · Personal Blog

Poll on De/Accelerating AI

by denkenberger · 9th Aug 2025 · 1 min read

38 comments, sorted by top scoring

denkenberger · 3mo · karma 1 · agreement -34

Not ok to use AI

denkenberger · 3mo · karma 1 · agreement 0

Ok to use free AI (but not to pay for AI, or you need to offset your payment for AI)

denkenberger · 3mo · karma 1 · agreement 23

Ok to pay for AI (but not to invest)

denkenberger · 3mo · karma 1 · agreement 3

Ok to invest in AI companies

denkenberger · 3mo · karma 1 · agreement 21

Ok to be a safety employee at a safer lab

denkenberger · 3mo · karma 1 · agreement 19

Ok to be a safety employee at a less safe lab

denkenberger · 3mo · karma 1 · agreement -19

Ok to be a capabilities employee at a safer lab for career capital/donations

denkenberger · 3mo · karma 1 · agreement -38

Ok to be a capabilities employee at a less safe lab for career capital/donations

denkenberger · 3mo · karma 1 · agreement -20

Ok to be a capabilities employee at a safer lab (direct impact)

StanislavKrym · 3mo · karma 1 · agreement 0

A capabilities employee at a safer lab could also, say, propose an architecture with interpretability worse than CoT but better than neuralese, and with capabilities per compute between the two.

denkenberger · 3mo · karma 1 · agreement -42

Ok to be a capabilities employee at a less safe lab (direct impact)

denkenberger · 3mo · karma 1 · agreement -22

Never build AGI (Stop AI)

StanislavKrym · 3mo · karma -1 · agreement -2

Alas, this is the proposal that requires coordination with China as well...

denkenberger · 3mo · karma 1 · agreement 16

Shut AI down for decades until something changes radically, such as genetic enhancement of intelligence

denkenberger · 3mo · karma 1 · agreement 2

Pause AI now unilaterally (one country)

denkenberger · 3mo · karma 1 · agreement 47

Pause AI now if it is done globally

denkenberger · 3mo · karma 1 · agreement -3

Pause AI if there is mass unemployment (say >20%)

denkenberger · 3mo · karma 1 · agreement 35

Make AI progress very slow (heavily regulate it)

denkenberger · 3mo · karma 1 · agreement -29

Restrict access to AI to few people (like nuclear)

denkenberger · 3mo · karma 1 · agreement 32

Pause AI if it causes a major disaster (e.g. like Chernobyl)

StanislavKrym · 3mo · karma 1 · agreement 1

I agree that it would be easier (think of the Rogue Replication Timeline). But what if there is no publicly visible Chernobyl? While the AI-2027 forecast has Agent-3 catch Agent-4 and reveal its misalignment to OpenBrain's employees, even the forecast's authors doubt that Agent-4 will be caught. If Agent-4 goes uncaught, mankind races ahead without even realizing that the AI could be misaligned.

denkenberger · 3mo · karma 1 · agreement 2

Ban AI agents

denkenberger · 3mo · karma 1 · agreement 15

Ban a certain level of autonomous code writing

denkenberger · 3mo · karma 1 · agreement 22

Ban training above a certain size

denkenberger · 3mo · karma 1 · agreement 31

Pause AI if AI is greatly accelerating the progress on AI (e.g. 10x)

denkenberger · 3mo · karma 1 · agreement 20

SB-1047 (liability, etc)

denkenberger · 3mo · karma 1 · agreement 9

Responsible scaling policy or similar

denkenberger · 3mo · karma 1 · agreement -17

Neutral (no regulations, no subsidy)

denkenberger · 3mo · karma 1 · agreement -21

Accelerate AGI in safer lab in US (subsidy, no regulations)

denkenberger · 3mo · karma 1 · agreement -29

Accelerate ASI in safer lab in US (subsidy, no regulations)

denkenberger · 3mo · karma 1 · agreement -37

Accelerate AGI in less safe lab in US (subsidy, no regulations)

denkenberger · 3mo · karma 1 · agreement -41

Accelerate ASI in less safe lab in US (subsidy, no regulations)

denkenberger · 3mo · karma 1 · agreement -44

Accelerate AGI everywhere (subsidy, no regulations)

denkenberger · 3mo · karma 1 · agreement -45

Accelerate ASI everywhere (subsidy, no regulations)

denkenberger · 3mo · karma 0 · agreement 0

Only donating/working on pausing AI is ok

Rafael Harth · 3mo · karma 9 · agreement 4

Not sure what this means? What is not okay if you agree-vote this?

denkenberger · 3mo · karma 3 · agreement 0

This is the extreme deceleration end of the personal action spectrum (so it is not ok to use AI, pay for AI, invest in AI, work at labs, etc).

the gears to ascension · 3mo · karma -2 · agreement 6

(This question is self-downvoted to keep it at the bottom.)

Develop methods for user's-morality-focused alignment of the kind that open-weights AI users would still want for their decensored model.


Recently, people on both ends of the de/accelerating-AI spectrum have been claiming that rationalists are on the opposite end. So I think it would be helpful to have a poll to get a better idea of where rationalists stand. Since there is no built-in poll function, I'm putting the positions in comments. Please vote with agree/disagree votes only. This should preserve the order of the questions, which reduces cognitive load, as they are approximately ordered from more accelerating to more decelerating within the two categories of big picture and individual actions.

Update: the order is not being maintained by the magic sorting despite people only using agree/disagree votes, but the poll seems to be functioning adequately. You can sort by "oldest" to get the intended ordering of the questions. I think people are agreeing with positions they think are good, even if not sufficient.

Update 2: Now that it's been 3 days and voting has slowed, I thought I would summarize some of the interesting results.
Big picture: the strongest support is for pausing AI now if it is done globally, but there is also strong support for making AI progress very slow, pausing after a major disaster, and pausing if AI greatly accelerates AI progress. There is only moderate support for shutting AI down for decades, and near-zero support for pausing at high unemployment, pausing unilaterally, and banning AI agents. There is strong opposition to never building AGI. Of course there could be large selection bias (only ~30 people voted), but it does appear that the critics at one extreme, who say rationalists want to accelerate AI in order to live forever, are incorrect, and so are the critics at the other extreme, who say rationalists don't want any AGI. Overall, rationalists seem to prefer a global pause either now or soon.
Individual actions: the positions with relatively strong agreement are that it's okay to pay for AI (but not to invest in it) and that it's okay to be a safety employee at both safer and less safe labs.