Forecasting & Prediction · AI · World Modeling
AGI Predictions

by Amandango, Ben Pace
21st Nov 2020
5 min read
35 comments, sorted by top scoring
[-]Jacob Pfau5y150

Great post! I am very curious about how people are interpreting Q10 and Q11, and what their models are. What are prototypical examples of 'insights on a similar level to deep learning'? 

Here's a break-down of examples of things that come to my mind:

Historical DL-level advances: 

  • the development of RL (Q-learning algorithm, etc.)
  • Original formulation of a single neuron i.e. affine transformation + non-linearity

Future possible DL-level:

  • a successor to back-prop (e.g. the way biological neurons learn)
  • a successor to the Q-learning family (e.g. neatly generalizing and extending 'intrinsic motivation' hacks)
  • full brain simulation
  • an alternative to the affine+activation recipe

Below DL-level major advances:

  • an elegant solution to learn from cross-modal inputs in a self-supervised fashion (babies somehow do it)
  • a breakthrough in active learning
  • a generalizable solution to learning disentangled and compositional representations
  • a solution to adversarial examples

Grey areas: 

  • breakthroughs in neural architecture search
  • a breakthrough in neural Turing machine-type research

I'd also like to know how people's thinking fits in with my taxonomy: Are people who leaned yes on Q11 basing their reasoning on the inadequacy of the 'below DL-level advances' list, or perhaps on the necessity of the 'DL-level advances' list? Or perhaps people interpreted those questions completely differently, and don't agree with my dividing lines?

Reply
[-]Daniel Kokotajlo5y20

Thank you for asking this question and for giving that break-down. I was wondering something similar. I am not an AI scientist but DL seems like a very big deal to me, and thus I was surprised that so many people seemed to think we need more insights on that level. My charitable interpretation is that they don't think DL is a big deal.

Reply
[-]Zack_M_Davis5y100

At time of writing, I'm assigning the highest probability to "Will AGI cause an existential catastrophe?" at 85%, with the next-highest predictions at 80% and 76%. Why ... why is everyone so optimistic?? Did we learn something new about the problem actually being easier, or our civilization more competent, than previously believed?

Should—should I be trying to do more x-risk-reduction-relevant stuff (somehow), or are you guys saying you've basically got it covered? (In 2013, I told myself it was OK for dumb little ol' me to personally not worry about the Singularity and focus on temporal concerns in order to not have nightmares, and it turned out that I have a lot of temporal concerns which could be indirectly very relevant to the main plot, but that's not my real reason for focusing on them.)

Reply
[-]AnnaSalamon5y260

IMO, we decidedly do not "basically have it covered."

That said, IMO it is generally not a good idea for a person to try to force themselves on problems that will make them crazy, desperate need or no.

I am often tempted to downplay how much catastrophe-probability I see, basically to decrease the odds that people decide to make themselves crazy in the direct vicinity of alignment research and alignment researchers.

And on the other hand, I am tempted by the HPMOR passage:

"Girls?" whispered Susan. She was slowly pushing herself to her feet, though Hermione could see her limbs swaying and quivering. "Girls, I'm sorry for what I said before. If you've got anything clever and heroic to try, you might as well try it."

(To be clear, I have hope. Also, please just don't go crazy and don't do stupid things.)

Reply
[-]TurnTrout5y100

For me, it's because there's disjunctively many ways that AGI could not happen (global totalitarian regime, AI winter, 55% CFR avian flu escapes a BSL4 lab, unexpected difficulty building AGI & the planning fallacy on timelines which we totally won't fall victim to this time...), or that alignment could be solved, or that I could be mistaken about AGI risk being a big deal, or... 

Granted, I assign small probabilities to several of these events. But my credence for P(AGI extinction | no more AI alignment work from community) is 70% - much higher than my 40% unconditional credence. I guess that means yes, I think AGI risk is huge (remember that I'm saying "40% chance we just die to AGI, unconditionally"), and that's after incorporating the significant contributions which I expect the current community to make. The current community is far from sufficient, but it's also probably picking a good amount of low-hanging fruit, and so I expect that its presence makes a significant difference.

EDIT: I'm decreasing the 70% to 60% to better match my 40% unconditional, because only the current alignment community stops working on alignment. 

Reply
[-]Rohin Shah5y40

why is everyone so optimistic??

Some reasons.

Reply
[-]Rafael Harth5y40

I've gone from roughly 2/3 to 1/2 on existential catastrophe (I've put 58% here, was feeling pessimistic) based on the big projects having safety teams who I think are doing really good work. That probably falls under our civilization being more competent than previously believed.

Reply
[-]Veedrac5y100

There is a huge difference in the responses to Q1 (“Will AGI cause an existential catastrophe?”) and Q2 (“...without additional intervention from the existing AI Alignment research community”), to a point that seems almost unjustifiable to me. To pick the first matching example I found (and not to purposefully pick on anybody in particular), Daniel Kokotajlo thinks there's a 93% chance of existential risk without the AI Alignment community's involvement, but only 53% with. This implies that there's a ~43% chance of the AI Alignment community solving the problem, conditional on it being real and unsolved otherwise, but only a ~7% chance of it not occurring for any other reason, including the possibility of it being solved by the researchers building the systems, or the concern being largely incorrect.

What makes people so confident in the AI Alignment research community solving this problem, far above that of any other alternative?
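
For concreteness, here is the arithmetic behind those two figures as a minimal sketch (the variable names are illustrative):

```python
# Decomposition implied by the two stated numbers (93% and 53%).
p_doom_without = 0.93  # P(existential catastrophe | community stops working)
p_doom_with = 0.53     # P(existential catastrophe | community keeps working)

# Probability the catastrophe doesn't happen for any reason other than the
# community's work (solved by the labs themselves, concern mistaken, etc.):
p_fine_anyway = 1 - p_doom_without                                    # 0.07

# Probability the community's work averts it, conditional on the problem
# being real and otherwise unsolved:
p_community_saves = (p_doom_without - p_doom_with) / p_doom_without   # ~0.43

print(round(p_fine_anyway, 2), round(p_community_saves, 2))
```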

Reply
[-]Ben Pace5y70

I also noticed Daniel’s difference in probabilities there, and thought it was substantial. But it doesn’t seem unreasonable to me. The existing AI x-risk community has changed the global conversation on AI and also been responsible for much in the way of funding and direct research on many related technical problems. I could talk about the specific technical work, or the impact that things like the AI FOOM Debate had on Superintelligence had on OpenPhil, or CFAR on FLI on Musk on OpenAI. Or I could go into detail about the research being done on topics like Iterated Amplification and Agent Foundations and so on and ways that this seems to me to be clear progress on subproblems. I’m not sure exactly what alternatives you might have in mind.

Reply
[-]Veedrac5y*60

To emphasize, the clash I'm perceiving is not the chance assigned to these problems being tractable, but to the relative probability of ‘AI Alignment researchers’ solving the problems, as compared to everyone else and every other explanation. In particular, people building AI systems intrinsically spend a degree of their effort, even if completely unconvinced about the merits of AI risk, trying to make systems aligned, just because that's a fundamental part of building a useful AI.

I could talk about the specific technical work, or the impact that things like the AI FOOM Debate had on Superintelligence had on OpenPhil, or CFAR on FLI on Musk on OpenAI. Or I could go into detail about the research being done on topics like Iterated Amplification and Agent Foundations and so on and ways that this seems to me to be clear progress on subproblems.

I have a sort of Yudkowskian pessimism towards most of these things (policy won't actually help; Iterated Amplification won't actually work), but I'll try to put that aside here for a bit. What I'm curious about is what makes these sort of ideas only discoverable in this specific network of people, under these specific institutions, and particularly more promising than other sorts of more classical alignment.

Isn't Iterated Amplification in the class of things you'd expect people to try just to get their early systems to work, at least with ≥20% probability? Not, to be clear, exactly that system, but just fundamentally RL systems that take extra steps to preserve the intentionality of the optimization process.

To rephrase a bit, it seems to me that a worldview in which AI alignment is sufficiently tractable that Iterated Amplification is a huge step towards a solution, would also be a worldview in which AI alignment is sufficiently easy (though not necessarily easy) that there should be a much larger prior belief that it gets solved anyway.

Reply
[-]Daniel Kokotajlo5y40

FWIW, I made these judgments quickly and intuitively and thus could easily have just made a silly mistake. Thank you for pointing this out.

So, what do I think now, reflecting a bit more?

--The 7% judgment still seems correct to me. I feel pretty screwed in a world where our entire community stops thinking about this stuff. I think it's because of Yudkowskian pessimism combined with the heavy-tailed nature of impact and research. A world without this community would still be a world where people put some effort into solving the problem, but there would be less effort, by less capable people, and it would be more half-hearted/not directed at actually solving the problem/not actually taking the problem seriously.

--The other judgment? Maybe I'm too optimistic about the world where we continue working. But idk, I am rather impressed by our community and I think we've been making steady progress on all our goals over the last few years. Moreover, OpenAI and DeepMind seem to be taking safety concerns mildly seriously due to having people in our community working there. This makes me optimistic that if we keep at it, they'll take it very seriously, and that would be great.

Reply
[-]habryka5y30

I interpreted the question as something like "if nobody cares about safety and there isn't a community that takes a special interest in it, will we be safe". I don't think it's specifically this AI Alignment community solving it, it's just that if nobody tries to solve the problem, the problem will stay unsolved.

Edit: And I do now see that I misinterpreted the question. Updated my second estimate downwards because of that. Thanks for pointing this out!

Reply
[-]TurnTrout5y100

In the following, an event is "catastrophic" if it endangers several human lives; it need not be an existential catastrophe.

Edit: I meant to say "deceptive alignment", but the meaning should be clear either way.

Reply
[-]Ben Pace5y20

“Catastrophic” is normally used in the term “global catastrophic risk” and means something like “kills 100,000s of people”, so I do think “doesn’t necessarily kill but could’ve killed a couple of people” is a fairly different meaning. In retrospect I realize that I put my answer to the second question far too high — if it just means “a deceptively aligned system nearly gives a few people in hospital a fatal dosage but it’s stopped and we don’t know why the system messed up” then it’s quite plausible nothing this substantial will happen as a result of that.

Reply
[-]TurnTrout5y50

“Catastrophic” is normally used in the term “global catastrophic risk” and means something like “kills 100,000s of people”, so I do think “doesn’t necessarily kill but could’ve killed a couple of people” is a fairly different meaning.

Agreed. In retrospect, I might have opted for "pre-AGI nearly-deadly accident caused by deceptive alignment." 

In retrospect I realize that I put my answer to the second question far too high — if it just means “a deceptively aligned system nearly gives a few people in hospital a fatal dosage but it’s stopped and we don’t know why the system messed up” then it’s quite plausible nothing this substantial will happen as a result of that.

I intended the situation to be more like "we catch the AI pretending to be aligned, but actually lying, and it almost or does kill at least a few people as a result of that." 

With #1, I'm trying to have people predict the scenario "deception is robustly instrumental behavior, but AIs will be bad at it at first and so we'll catch them." #2 is trying to operationalize whether this would be viewed as a fire alarm.

Some ways you might think scenario #1 won't happen:

  • You don't think deception will be incentivized
  • Fast takeoff means the AI is never both smart enough to deceive and dumb enough to get caught
  • Our transparency tools won't be good enough for many people to believe it was actually deceptively aligned
Reply
[-]Rohin Shah5y20

Some ways you might think scenario #1 won't happen:

Also: we solve alignment really well on paper, and that's why deception doesn't arise. (I assign non-trivial probability to this.)

Reply
[-]Rafael Harth5y90

I suspect this is intentional, but the set {1,6,7,8} of predictions is redundant, in the sense that the probabilities for any three of them mathematically imply the probability of the fourth due to the law of total probability.

In particular, if #1 is A and #6 is B, then #7 and #8 are A|B and A|¬B, and we have the equality

P(A)=P(A|B)P(B)+P(A|¬B)P(¬B)

The probability I would assign to #8 intuitively is about 0.41. Math based on my other three predictions yields (doing the calculation now) 0.476. I am going to predict the math output rather than my intuition.

Did anyone else calculate their level of inconsistency?
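
A minimal sketch of the check, with illustrative placeholder numbers rather than anyone's actual predictions:

```python
# Solve P(A) = P(A|B)P(B) + P(A|~B)P(~B) for the fourth quantity,
# given the other three (illustrative numbers only).
def implied_p_a_given_not_b(p_a, p_b, p_a_given_b):
    return (p_a - p_a_given_b * p_b) / (1 - p_b)

p_a, p_b, p_a_given_b = 0.60, 0.30, 0.80
print(round(implied_p_a_given_not_b(p_a, p_b, p_a_given_b), 3))  # 0.514
```

If the implied value falls outside [0, 1], the other three answers are themselves jointly inconsistent.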

Reply
[-]Pablo5y90

The probability I would assign to #8 intuitively is about 0.41. Math based on my other three predictions yields (doing the calculation now) 0.476. I am going to predict the math output rather than my intuition.

I think the correct response to this realization is not to revise your final answer so as to make it consistent with the first three. It is to revise all four answers so that they are maximally intuitive, subject to the constraint that they be jointly consistent. Which answer comes last is just an artifact of the order of presentation, so it isn't a rational basis for privileging some answers over others.
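
One rough way to do that, as an illustrative sketch (the intuition values below are placeholders): treat P(B), P(A|B), and P(A|¬B) as the free quantities, derive P(A) from them so the constraint holds by construction, and move everything toward the intuitive answers.

```python
# Pick P(B), P(A|B), P(A|~B) to minimize squared distance from the intuitive
# answers; P(A) is derived from them, so consistency is automatic.
import itertools

intuition = {"p_a": 0.60, "p_b": 0.30, "p_a_b": 0.80, "p_a_nb": 0.41}  # placeholders

def loss(p_b, p_a_b, p_a_nb):
    p_a = p_a_b * p_b + p_a_nb * (1 - p_b)   # law of total probability
    return ((p_a - intuition["p_a"]) ** 2 + (p_b - intuition["p_b"]) ** 2 +
            (p_a_b - intuition["p_a_b"]) ** 2 + (p_a_nb - intuition["p_a_nb"]) ** 2)

grid = [i / 100 for i in range(1, 100)]
p_b, p_a_b, p_a_nb = min(itertools.product(grid, repeat=3), key=lambda t: loss(*t))
print(p_b, p_a_b, p_a_nb, round(p_a_b * p_b + p_a_nb * (1 - p_b), 3))
```

A smooth optimizer would do the same job; the coarse grid just keeps the sketch dependency-free.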

Reply
[-]Veedrac5y10

This is only true if, for example, you think AI would cause GDP growth. My model assigns a lot of probability to ‘AI kills everyone before (human-relevant) GDP goes up that fast’, so questions #7 and #8 are conditional on me being wrong about that. If we can last even a small multiple of a year with AI smart enough to double GDP in that timeframe, then things probably aren't as bad as I thought.

Reply
[-]Amandango5y90

How to add your own questions:

  1. Go to elicit.org/binary
  2. Type your question into the field at the top
  3. Click on the question title, and click the copy URL button
  4. Paste the URL into the LessWrong editor

See our launch post for more details!

Reply
[-]Kevin Lacker5y70

I suspect this question is misworded:

Will there be a 4 year interval in which world GDP growth doubles before the first 1 year interval in which world GDP growth doubles?

Do you mean in which world GDP doubles? World GDP growth doubles when it goes from, say, 0.5% yearly growth to 1% yearly growth.

Personally, I suspect world GDP is most likely to next double in a period after a severe war or depression, so you might want to rephrase to avoid that scenario if that isn't what you're thinking about.
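
For a rough sense of scale: doubling world GDP itself within a 4 year interval requires roughly 19% annual growth, whereas "world GDP growth doubling" could be satisfied by growth merely rising from, say, 1.5% to 3% per year.

```python
# Annual growth rate needed to double GDP over a 4-year interval:
print(2 ** (1 / 4) - 1)  # ≈ 0.189, i.e. about 19% per year
```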

Reply
[-]Amandango5y60

This was a good catch! I did actually mean world GDP, not world GDP growth. Because people have already predicted on this, I added the correct questions above as new questions, and am leaving the previous questions here for reference:

Reply
[-]supposedlyfun5y40

I really appreciate the effort that went into collecting all of these questions, framing them clearly, and coding the clickable predictions.

Reply
[-]Davidmanheim5y30

"Will > 50% of AGI researchers agree with safety concerns by 2030?"

From my research, I think they mostly already do; they just use different framings and care about different time frames.

Reply
[-]Rohin Shah5y50

Fwiw, I think the operationalization of the question is stronger than it appears at first glance, and that's why estimates are low.

Reply
[-]NunoSempere5y30

That was fun. This time, I tried not to update too much on other people's predictions. In particular, I'm at 1% for "Will we experience an existential catastrophe before we build AGI?" and at 70% for "Will there be another AI Winter (a period commonly referred to as such) before we develop AGI?", but would probably defer to a better aggregate on the second one.

Reply
[-]Mati_Roy5y20

So the following, for example, don't count as "existential risk caused by AGI", right?

  • many AIs
    • an economy run by advanced AIs amplifying negative externalities, such as pollution, leading to our demise
    • an em world with minds evolving to the point of being non-valuable anymore ("a Disneyland without children")
    • a war by transcending uploads
  • narrow AI
    • a narrow AI killing all humans (e.g., by designing grey goo, a virus, etc.)
    • a narrow AI eroding trust in society until it breaks apart
  • intermediary cause by an AGI, but not ultimate cause
    • a simulation shutdown because our AI didn't have a decision theory for acausal cooperation
    • an AI convincing a human to destroy the world
Reply
[-]adamShimi5y20

Thanks a lot for the feature and this post! I'll be really interested in an analysis after a lot of answers are in.

Reply
[-]Emiya5y10

Wouldn't it be better to have the other votes visible only after voting? People could be highly influenced by seeing how many people have voted and what they predicted.

Reply
[-]Measure5y10

I've been seeing an intermittent bug on a few of these where tapping to record an answer causes the question text to disappear. Sometimes scrolling away and back fixes it.

Chrome browser on Android phone.

Reply
[-]habryka5y40

This is intentional. The question text shares space with the list of users and their respective predictions. On mobile, this means when you tap on a section, you see the users who voted in the corresponding range, until you tap away.

Reply
[-]Measure5y10

Ah, makes sense. I guess I just need to get used to the interface.

Reply
[-]habryka5y30

Yeah, we had to make some tradeoffs because I really wanted them to fit into a small space, and also to never resize when you interact with them, while also not dominating any post they appear in. Not sure whether we hit the perfect balance of the tradeoffs.

Reply
[-]Rana Dexsin5y10

What level of background in AI alignment are you assuming/desiring for respondents? Is it just “all readers” where the assumption is that any cultural osmosis etc. is included in what you're trying to measure?

Reply
[-]Ben Pace5y30

Yeah, any LWer is welcome to record their predictions :)

Reply
[Prediction widget: Will AGI cause existential catastrophe conditional on there being a 1 year period of doubling of world GDP growth without there first being a 4 year period of doubling?]
[Prediction widget: Will there be a 4 year interval in which world GDP growth doubles before the first 1 year interval in which world GDP growth doubles?]
[Prediction widget: Will AGI cause existential catastrophe conditional on there being a 4 year period of doubling of world GDP growth before a 1 year period of doubling?]
[Prediction widget: Before AGI, will we learn of an example of catastrophic deceptive misalignment?]
[Prediction widget: Conditional on the AI community learning of pre-AGI catastrophic deceptive misalignment, will the ($ spent on AI alignment research)/($ spent on AI research) ratio increase by more than 50% over the two years following the catastrophe?]

This post is a collection of key questions that feed into AI timelines and AI safety work where it seems like there is substantial interest or disagreement amongst the LessWrong community. 

You can make a prediction on a question by hovering over the widget and clicking. You can update your prediction by clicking at a new point, and remove your prediction by clicking on the same point. Try it out:

 

Add questions & operationalizations

This is not intended to be a comprehensive list, so I’d love for people to add their own questions – here are instructions on making your own embedded question. If you have better operationalizations of the questions, you can make your own version in the comments. If there's general agreement on an alternative operationalization being better, I'll add it into the post.

Questions

AGI definition

We’ll define AGI in this post as a unified system that, for almost all economically relevant cognitive tasks, at least matches any human's ability at the task. This is similar to Rohin Shah and Ben Cottier’s definition in this post.

Safety Questions

 

Timelines Questions

See Forecasting AI timelines, Ajeya Cotra’s OP AI timelines report, and Adam Gleave’s #AN80 comment, for more context on this breakdown. I haven’t tried to operationalize this too much, so feel free to be more specific in the comments.

The first three questions in this section are mutually exclusive — that is, the probabilities you assign to them should not sum to more than 100%.

 

Non-technical factor questions

 

Operationalizations

Safety Questions

1. Will AGI cause an existential catastrophe?

  • Existential catastrophe is defined here according to Toby Ord’s definition in the Precipice: “An event that causes extinction or the destruction of humanity’s long-term potential”.
  • This assumes that everyone currently working on AI alignment continues to do so.

2. Will AGI cause an existential catastrophe without additional intervention from the AI Alignment research community?

  • Roughly, the AI Alignment research community includes people working at CHAI, MIRI, current safety teams at OpenAI and DeepMind, FHI, AI Impacts, and similar orgs, as well as independent researchers writing on the AI Alignment Forum.
  • “Without additional intervention” = everyone currently in this community stops working on anything directly intended to improve AI safety as of today, 11/20/2020. They may work on AI in a way that indirectly and incidentally improves AI safety, but only to the same degree as researchers outside of the AI alignment community are currently doing this.

4. Will there be an arms race dynamic in the lead-up to AGI?

  • An arms race dynamic is operationalized as: 2 years before superintelligent AGI is built, there are at least 2 companies/projects/countries at the cutting edge each within 2 years of each others' technology who are competing and not collaborating.

 5. Will a single AGI or AGI project achieve a decisive strategic advantage?

  • This question uses Bostrom’s definition of decisive strategic advantage: “A level of technological and other advantages sufficient to enable it to achieve complete world domination” (Bostrom 2014).

 6. Will > 50% of AGI researchers agree with safety concerns by 2030?

  • “Agree with safety concerns” means: broadly understand the concerns of the safety community, and agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it (Rohin Shah’s operationalization from this post).

7. Will there be a 4 year interval in which world GDP growth doubles before the first 1 year interval in which world GDP growth doubles?

  • This is essentially Paul Christiano’s operationalization of the rate of development of AI from his post on Takeoff speeds. I’ve used this specific operationalization rather than “slow vs fast” or “continuous vs discontinuous” due to the ambiguity in how people use these terms. (See the sketch after this list.)

8. Will AGI cause existential catastrophe conditional on there being a 4 year period of doubling of world GDP growth before a 1 year period of doubling?

  • Uses the same definition of existential catastrophe as previous questions.

9. Will AGI cause existential catastrophe conditional on there being a 1 year period of doubling of world GDP growth without there first being a 4 year period of doubling?

  • For example, we go from current growth rates to doubling within a year.
  • Uses the same definition of existential catastrophe as previous questions.
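
To make the doubling-interval comparison in question 7 concrete, here is an illustrative sketch (not an official resolution criterion, and the function names are made up for illustration), using the corrected reading from the comments, i.e. world GDP itself doubling rather than the growth rate doubling:

```python
# Does a complete 4-year doubling of world output finish before the first
# complete 1-year doubling?
def first_doubling_end(series, window):
    """First index t with series[t] >= 2 * series[t - window], else None."""
    for t in range(window, len(series)):
        if series[t] >= 2 * series[t - window]:
            return t
    return None

def slow_takeoff(series):
    four_year = first_doubling_end(series, window=4)
    one_year = first_doubling_end(series, window=1)
    if four_year is None:
        return False
    return one_year is None or four_year < one_year

# Example: annual growth accelerating smoothly from 10%/yr in 5-point steps.
gdp = [100.0]
for rate in [0.10 + 0.05 * t for t in range(10)]:
    gdp.append(gdp[-1] * (1 + rate))
print(slow_takeoff(gdp))  # True: a 4-year doubling completes before any 1-year doubling
```

On this reading, an abrupt jump that doubles output within a single year comes out False, since no 4-year doubling completes strictly before it.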

 

Timelines Questions

9. Will we get AGI from deep learning with small variations, without more insights on a similar level to deep learning?

  • An example would be something like GPT-N + RL + scaling.

10. Will we get AGI from 1-3 more insights on a similar level to deep learning?

  • Self-explanatory.

11. Will we need > 3 breakthroughs on a similar level to deep learning to get AGI?

  • Self-explanatory.

12. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling?

  • This includes: 1) We are unable to continue scaling, e.g. due to limitations on compute, dataset size, or model size, or 2) We can practically continue scaling but the increase in AI capabilities from scaling plateaus (see below).

13. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because we are unable to continue scaling?

  • Self-explanatory.

14. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because the increase in AI capabilities from scaling plateaus?

  • Self-explanatory.

 

Non-technical factor questions

15. Will we experience an existential catastrophe before we build AGI?

  • Existential catastrophe is defined here according to Toby Ord’s definition in the Precipice: “An event that causes extinction or the destruction of humanity’s long-term potential”.
  • This does not include events that would slow the progress of AGI development but are not existential catastrophes.

16. Will there be another AI Winter (a period commonly referred to as such) before we develop AGI?

  • From Wikipedia: “In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.”
  • This question asks whether people will *refer* to a period as an AI winter: for example, whether Wikipedia and similar sources would describe it as a third AI winter.

 

Additional resources

  • We’ve also collected many of the predictions on AI we could find on the internet and compiled them here.
  • For a more comprehensive set of questions on AI alignment, see Ben Cottier and Rohin Shah’s Alignment Forum post.

 

Big thanks to Ben Pace, Rohin Shah, Daniel Kokotajlo, Ethan Perez, and Andreas Stuhlmüller for providing really helpful feedback on this post, and suggesting many of the operationalizations.

[Prediction widget: Will AGI cause an existential catastrophe without additional intervention from the existing AI Alignment research community?]
[Prediction widget: Will a single AGI or AGI project achieve a decisive strategic advantage?]
[Prediction widget: Will > 50% of AGI researchers agree with safety concerns by 2030?]
[Prediction widget: Will there be an arms race dynamic in the lead-up to AGI?]
[Prediction widget: Will AGI cause an existential catastrophe?]
[Prediction widget: Will more than 50 people predict on this post?]
Mentioned in
  • Against GDP as a metric for timelines and takeoff speeds
  • SETI Predictions
  • [AN #128]: Prioritizing research on AI existential safety based on its application to governance demands
  • AI Winter Is Coming - How to profit from it?
1%
2%
3%
4%
5%
6%
7%
8%
9%
Glen-Andrew Thomson (9%)
10%
11%
12%
13%
14%
15%
16%
17%
18%
19%
digital_carver (13%),Blippo (15%),Dach (15%)
20%
21%
22%
23%
24%
25%
26%
27%
28%
29%
Veedrac (20%),Chris van Merwijk (23%),Vetterbot (26%)
30%
31%
32%
33%
34%
35%
36%
37%
38%
39%
tenthkrige (33%),RyanCarey (33%),Rafael Harth (34%),Ben Pace (35%),Adam Scholl (35%)
40%
41%
42%
43%
44%
45%
46%
47%
48%
49%
bohaska (42%)
50%
51%
52%
53%
54%
55%
56%
57%
58%
59%
Pialgo (50%),Noa Nabeshima (50%),this.is.patrick (50%),Heido Nõmm (50%),Nathan Simons (50%),adamShimi (50%),Foyle (50%),eapi (50%),Noosphere89 (52%),jean rene tournecuillert (55%),janus (57%)
60%
61%
62%
63%
64%
65%
66%
67%
68%
69%
Bogdan Ionut Cirstea (60%),matthew.vandermerwe (60%),SamuelKnoche (60%),SoerenMind (60%),Teerth Aloke (60%),Felipe Calero Forero (62%),Alex K. Chen (parrot) (64%),Matthew Barnett (65%),Mikhail Samin (65%),stuhlmueller (66%),riceissa (67%),peterbarnett (68%)
70%
71%
72%
73%
74%
75%
76%
77%
78%
79%
alokja (70%),Jeffrey Yun (70%),A Ray (70%),meanderingmoose (70%)
80%
81%
82%
83%
84%
85%
86%
87%
88%
89%
Wolfgang Buchmaier (80%),Yair Halberstadt (80%),Rohin Shah (80%),Dawn Drescher (80%),Gunnar_Zarncke (81%),Aaron Graifman (85%),akaTrickster (86%)
90%
91%
92%
93%
94%
95%
96%
97%
98%
99%
dregntael (90%)
1%
Will there be a 4 year interval in which world GDP doubles before the first 1 year interval in which world GDP doubles?
99%
1%
2%
3%
4%
5%
6%
7%
8%
9%
alokja (1%),Rohin Shah (3%),matthew.vandermerwe (7%),stuhlmueller (8%)
10%
11%
12%
13%
14%
15%
16%
17%
18%
19%
Matthew Barnett (10%),digital_carver (10%),Heido Nõmm (10%),peterbarnett (14%),Noosphere89 (18%),SoerenMind (18%),Felipe Calero Forero (18%),Pialgo (19%)
20%
21%
22%
23%
24%
25%
26%
27%
28%
29%
SamuelKnoche (20%),Veedrac (20%),Dach (20%),Teerth Aloke (25%),Aaron Graifman (25%),Blippo (25%),Noa Nabeshima (28%)
30%
31%
32%
33%
34%
35%
36%
37%
38%
39%
dregntael (30%),akaTrickster (35%),Jeffrey Yun (38%)
40%
41%
42%
43%
44%
45%
46%
47%
48%
49%
A Ray (40%),Wolfgang Buchmaier (46%),Hjalmar_Wijk (48%)
50%
51%
52%
53%
54%
55%
56%
57%
58%
59%
meanderingmoose (50%),Mikhail Samin (50%),adamShimi (50%),bohaska (57%)
60%
61%
62%
63%
64%
65%
66%
67%
68%
69%
Ben Pace (60%),janus (62%),Glen-Andrew Thomson (64%),tenthkrige (66%)
70%
71%
72%
73%
74%
75%
76%
77%
78%
79%
Dawn Drescher (79%)
80%
81%
82%
83%
84%
85%
86%
87%
88%
89%
Rafael Harth (80%),Adam Scholl (84%),eapi (85%),Foyle (85%),riceissa (87%)
90%
91%
92%
93%
94%
95%
96%
97%
98%
99%
1%
Will AGI cause existential catastrophe conditional on there being a 4 year period of doubling of world GDP before a 1 year period of doubling?
99%
1%
2%
3%
4%
5%
6%
7%
8%
9%
10%
11%
12%
13%
14%
15%
16%
17%
18%
19%
digital_carver (10%),akaTrickster (11%),alokja (12%),matthew.vandermerwe (15%),Noosphere89 (17%),stuhlmueller (18%)
20%
21%
22%
23%
24%
25%
26%
27%
28%
29%
Veedrac (20%),Rohin Shah (20%)
30%
31%
32%
33%
34%
35%
36%
37%
38%
39%
Felipe Calero Forero (30%),Dach (35%)
40%
41%
42%
43%
44%
45%
46%
47%
48%
49%
A Ray (40%),tenthkrige (42%),Rafael Harth (48%)
50%
51%
52%
53%
54%
55%
56%
57%
58%
59%
Blippo (50%),Heido Nõmm (50%),SamuelKnoche (50%),adamShimi (50%),Teerth Aloke (50%),Aaron Graifman (51%),Jeffrey Yun (55%),SoerenMind (55%),Hjalmar_Wijk (56%),peterbarnett (56%),bohaska (57%)
60%
61%
62%
63%
64%
65%
66%
67%
68%
69%
Pialgo (60%),Mikhail Samin (60%),teradimich (65%)
70%
71%
72%
73%
74%
75%
76%
77%
78%
79%
dregntael (70%),Ben Pace (70%),meanderingmoose (70%),Glen-Andrew Thomson (74%)
80%
81%
82%
83%
84%
85%
86%
87%
88%
89%
Adam Scholl (80%),Dawn Drescher (81%),janus (88%)
90%
91%
92%
93%
94%
95%
96%
97%
98%
99%
Foyle (90%),riceissa (95%)
1%
Will AGI cause existential catastrophe conditional on there being a 1 year period of doubling of world GDP without there first being a 4 year period of doubling?
99%
1%
2%
3%
4%
5%
6%
7%
8%
9%
Fede R (0%),akaTrickster (1%),Glen-Andrew Thomson (1%),Jonas (2%),alokja (4%),Blippo (4%),meanderingmoose (5%),Yair Halberstadt (5%),fiddler (5%),hitobashira.counter (6%),Oskar Press Mathiasen (9%)
10%
11%
12%
13%
14%
15%
16%
17%
18%
19%
Kevin Lacker (10%),Zolmeister (10%),Measure (10%),howtodowtle (11%),digital_carver (11%),eapi (14%),Jsevillamol (14%),Self_Optimization (15%),__nobody (15%),Heido Nõmm (15%),Rana Dexsin (15%),Gunnar_Zarncke (17%),Zack_M_Davis (18%)
20%
21%
22%
23%
24%
25%
26%
27%
28%
29%
Mikhail Samin (20%),Vanilla_cabs (20%),SinguLarry (20%),Jeffrey Yun (20%),RowanE (20%),Amandango (20%),HunterJay (23%),Pablo (23%),NunoSempere (25%),teradimich (25%),Foyle (25%),Ethan Perez (25%),Davidmanheim (25%),Alibi (25%),Tamay (25%),RyanCarey (27%),Raemon (28%),habryka (28%),Bird Concept (29%)
30%
31%
32%
33%
34%
35%
36%
37%
38%
39%
Noa Nabeshima (30%),Ozyrus (30%),matthew.vandermerwe (30%),elifland (30%),Alexei (31%),Alex K. Chen (parrot) (31%),seed (33%),peterbarnett (33%),Mati_Roy (35%),nealeratzlaff (35%),technicalities (35%),Max_Daniel (35%),Felipe Calero Forero (36%),stuhlmueller (37%)
40%
41%
42%
43%
44%
45%
46%
47%
48%
49%
A Ray (40%),Rafael Harth (40%),Gurkenglas (40%),janus (40%),avturchin (42%),Adam Scholl (42%),TurnTrout (43%),jimrandomh (45%),Noosphere89 (45%),this.is.patrick (45%),adamShimi (45%),Bogdan Ionut Cirstea (45%)
50%
51%
52%
53%
54%
55%
56%
57%
58%
59%
Ben Pace (50%),Dach (50%),Logan Z (50%),riceissa (50%),Wolfgang Buchmaier (50%),Pialgo (50%),Vermillion (50%),Lukas Finnveden (51%),plex (52%),SoerenMind (57%)
60%
61%
62%
63%
64%
65%
66%
67%
68%
69%
SamuelKnoche (60%),Matthew Barnett (60%),Mark Xu (64%),Veedrac (65%),MrThink (67%)
70%
71%
72%
73%
74%
75%
76%
77%
78%
79%
Rohin Shah (75%)
80%
81%
82%
83%
84%
85%
86%
87%
88%
89%
Hjalmar_Wijk (80%),Daniel Kokotajlo (85%)
90%
91%
92%
93%
94%
95%
96%
97%
98%
99%
1%
Will we get AGI from deep learning with small variations, without more insights on a similar level to deep learning?
99%
1%
2%
3%
4%
5%
6%
7%
8%
9%
Daniel Kokotajlo (3%),plex (6%),Hjalmar_Wijk (7%),SinguLarry (8%)
10%
11%
12%
13%
14%
15%
16%
17%
18%
19%
Gurkenglas (10%),riceissa (10%),Noosphere89 (10%),Bogdan Ionut Cirstea (10%),SamuelKnoche (10%),Vermillion (10%),Rohin Shah (10%),Kevin Lacker (10%),Matthew Barnett (10%),this.is.patrick (10%),Felipe Calero Forero (12%),nealeratzlaff (13%),Mark Xu (14%),Veedrac (14%),Foyle (14%),Ethan Perez (15%),Heido Nõmm (15%),Ben Pace (15%),Rana Dexsin (15%),Self_Optimization (15%),SoerenMind (15%),Alibi (15%),Dach (15%),Pablo (17%),Wolfgang Buchmaier (17%),TurnTrout (18%),Bird Concept (19%)
20%
21%
22%
23%
24%
25%
26%
27%
28%
29%
Noa Nabeshima (20%),Vanilla_cabs (20%),Zolmeister (20%),RowanE (20%),Lukas Finnveden (20%),Pialgo (20%),Ozyrus (20%),A Ray (20%),Mikhail Samin (20%),Raemon (20%),akaTrickster (24%),jimrandomh (25%),Davidmanheim (25%),stuhlmueller (25%),Rafael Harth (27%),Adam Scholl (29%)
30%
31%
32%
33%
34%
35%
36%
37%
38%
39%
adamShimi (30%),Measure (30%),janus (30%),Alexei (31%),peterbarnett (32%),matthew.vandermerwe (33%),seed (33%),DanielFilan (33%),Max_Daniel (35%),elifland (35%),habryka (36%),HunterJay (38%)
40%
41%
42%
43%
44%
45%
46%
47%
48%
49%
Alex K. Chen (parrot) (40%),digital_carver (44%),RyanCarey (45%)
50%
51%
52%
53%
54%
55%
56%
57%
58%
59%
alokja (50%),NunoSempere (50%),Blippo (50%),Tamay (50%),Oskar Press Mathiasen (53%),__nobody (55%)
60%
61%
62%
63%
64%
65%
66%
67%
68%
69%
Yair Halberstadt (60%)
70%
71%
72%
73%
74%
75%
76%
77%
78%
79%
eapi (76%)
80%
81%
82%
83%
84%
85%
86%
87%
88%
89%
meanderingmoose (80%),Jonas (80%),fiddler (80%),Glen-Andrew Thomson (86%)
90%
91%
92%
93%
94%
95%
96%
97%
98%
99%
1%
Will we need > 3 breakthroughs on a similar level to deep learning to get AGI?
99%
1%
2%
3%
4%
5%
6%
7%
8%
9%
Blippo (1%),janus (2%),digital_carver (2%),Alexei (3%),Hjalmar_Wijk (5%),Felipe Calero Forero (6%),Rohin Shah (7%),SamuelKnoche (8%)
10%
11%
12%
13%
14%
15%
16%
17%
18%
19%
Logan Zoellner (10%),Bogdan Ionut Cirstea (10%),fiddler (10%),this.is.patrick (10%),plex (10%),Foyle (10%),Max_Daniel (10%),Adam Scholl (12%),Rafael Harth (12%),seed (12%),Oskar Press Mathiasen (13%),HunterJay (13%),Gurkenglas (15%),Ethan Perez (15%)
20%
21%
22%
23%
24%
25%
26%
27%
28%
29%
Pialgo (20%),riceissa (20%),SoerenMind (20%),Mikhail Samin (20%),adamShimi (20%),Daniel Kokotajlo (20%),Vermillion (20%),Zolmeister (20%),Noosphere89 (22%),TurnTrout (22%),RyanCarey (22%),habryka (24%)
30%
31%
32%
33%
34%
35%
36%
37%
38%
39%
Yair Halberstadt (30%),stuhlmueller (30%),Veedrac (33%),Mark Xu (33%),Rana Dexsin (34%),eapi (34%),nealeratzlaff (35%),Jsevillamol (39%)
40%
41%
42%
43%
44%
45%
46%
47%
48%
49%
RowanE (40%),Self_Optimization (40%),meanderingmoose (40%),Bird Concept (40%)
50%
51%
52%
53%
54%
55%
56%
57%
58%
59%
NunoSempere (50%),elifland (50%),Measure (50%)
60%
61%
62%
63%
64%
65%
66%
67%
68%
69%
peterbarnett (63%),__nobody (65%)
70%
71%
72%
73%
74%
75%
76%
77%
78%
79%
Ozyrus (70%)
80%
81%
82%
83%
84%
85%
86%
87%
88%
89%
A Ray (80%),Jonas (80%),alokja (80%),Noa Nabeshima (84%)
90%
91%
92%
93%
94%
95%
96%
97%
98%
99%
Glen-Andrew Thomson (98%),akaTrickster (99%)
1%
Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because the increase in AI capabilities from scaling plateaus?
99%
Individual predictions: Alexei (5%), meanderingmoose (5%), Hjalmar_Wijk (7%), Noosphere89 (8%), Adam Scholl (10%), NunoSempere (10%), Bogdan Ionut Cirstea (10%), RowanE (10%), alokja (10%), Zolmeister (10%), Self_Optimization (10%), __nobody (10%), digital_carver (10%), Noa Nabeshima (12%), Rohin Shah (13%), RyanCarey (13%), peterbarnett (13%), akaTrickster (14%), SamuelKnoche (15%), Pialgo (15%), Measure (15%), Mikhail Samin (15%), Foyle (15%), Oskar Press Mathiasen (16%), Bird Concept (16%), habryka (17%), nealeratzlaff (18%), Jonas (20%), elifland (20%), adamShimi (20%), Logan Zoellner (23%), TurnTrout (25%), Max_Daniel (25%), eapi (25%), HunterJay (27%), riceissa (30%), Gurkenglas (30%), stuhlmueller (30%), Mark Xu (33%), Rana Dexsin (34%), this.is.patrick (34%), Daniel Kokotajlo (35%), Kevin Lacker (35%), Felipe Calero Forero (37%), plex (39%), fiddler (40%), Yair Halberstadt (40%), janus (40%), jimrandomh (45%), Matthew Barnett (45%), Ozyrus (50%), Jsevillamol (60%), seed (60%), Gunnar_Zarncke (61%), Blippo (64%), Ethan Perez (75%), A Ray (80%), Glen-Andrew Thomson (85%)
Question: Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because we are unable to continue scaling?

Individual predictions: Blippo (1%), Logan Zoellner (10%), Foyle (10%), Jonas (10%), digital_carver (12%), Alexei (13%), Noa Nabeshima (14%), Hjalmar_Wijk (15%), SamuelKnoche (20%), Bogdan Ionut Cirstea (20%), Ethan Perez (20%), Rohin Shah (20%), Adam Scholl (22%), Noosphere89 (24%), Pialgo (25%), Zolmeister (25%), Heido Nõmm (25%), Oskar Press Mathiasen (26%), RyanCarey (30%), HunterJay (32%), Dach (33%), Max_Daniel (35%), this.is.patrick (40%), meanderingmoose (40%), adamShimi (40%), nealeratzlaff (40%), Jsevillamol (40%), habryka (41%), janus (42%), Felipe Calero Forero (43%), Mikhail Samin (45%), Gurkenglas (45%), plex (49%), Vanilla_cabs (50%), Self_Optimization (50%), Kevin Lacker (50%), stuhlmueller (50%), Ben Pace (50%), Yair Halberstadt (50%), Ozyrus (50%), Daniel Kokotajlo (50%), riceissa (50%), fiddler (50%), Matthew Barnett (50%), Raemon (50%), Mark Xu (50%), RowanE (50%), eapi (54%), Alex K. Chen (parrot) (57%), TurnTrout (57%), Vermillion (60%), NunoSempere (60%), jungofthewon (60%), peterbarnett (64%), howtodowtle (65%), jimrandomh (65%), Measure (65%), Rana Dexsin (68%), elifland (70%), alokja (70%), seed (72%), Gunnar_Zarncke (75%), __nobody (75%), A Ray (80%), akaTrickster (89%), Glen-Andrew Thomson (99%)
Question: Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling?

Individual predictions: Pialgo (1%), Foyle (1%), Blippo (3%), janus (4%), Veedrac (4%), Logan Z (5%), seed (5%), digital_carver (5%), Ethan Perez (5%), _vk_ (6%), Bogdan Ionut Cirstea (10%), Perhaps (10%), Vermillion (10%), nealeratzlaff (10%), Daniel Kokotajlo (10%), Rana Dexsin (12%), Rafael Harth (13%), plex (14%), Mark Xu (14%), Measure (15%), this.is.patrick (15%), Mikhail Samin (15%), SamuelKnoche (15%), Rohin Shah (15%), Hjalmar_Wijk (15%), Self_Optimization (15%), HunterJay (15%), Oskar Press Mathiasen (17%), Ben Pace (18%), Noa Nabeshima (20%), Jonas (20%), riceissa (20%), adamShimi (20%), RyanCarey (20%), TurnTrout (20%), Amandango (20%), stuhlmueller (20%), Noosphere89 (21%), akaTrickster (22%), Alexei (24%), SoerenMind (25%), Dach (25%), Gurkenglas (25%), Chris van Merwijk (26%), Bird Concept (26%), habryka (26%), Pablo (27%), avturchin (27%), Felipe Calero Forero (28%), Alex K. Chen (parrot) (29%), Adam Scholl (30%), A Ray (30%), matthew.vandermerwe (30%), Vanilla_cabs (30%), Zolmeister (30%), MrThink (33%), __nobody (33%), Max_Daniel (33%), Raemon (33%), eapi (35%), Matthew Barnett (35%), Lukas Finnveden (35%), RowanE (40%), elifland (40%), Tamay (44%), Kaveh-Sedghi (45%), Alibi (50%), hitobashira.counter (51%), howtodowtle (55%), DanielFilan (55%), meanderingmoose (60%), Jeffrey Yun (60%), fiddler (60%), alokja (65%), peterbarnett (65%), Mr Axilus (70%), Ozyrus (70%), NunoSempere (70%), Kevin Lacker (75%), Yair Halberstadt (80%), Glen-Andrew Thomson (89%)
Question: Will there be another AI Winter (a period commonly referred to as such) before we develop AGI?

Individual predictions: Jeffrey Yun (1%), Felipe Calero Forero (1%), Bogdan Ionut Cirstea (1%), seed (1%), Pialgo (1%), Foyle (1%), Vermillion (1%), Raemon (1%), Mark Xu (1%), Noa Nabeshima (1%), SamuelKnoche (1%), NunoSempere (1%), Adele Lopez (2%), adamShimi (2%), Rohin Shah (2%), Matthew Barnett (2%), Mikhail Samin (2%), Zolmeister (2%), janus (2%), digital_carver (2%), Vanilla_cabs (2%), stuhlmueller (2%), Adam Scholl (3%), Anirandis (3%), matthew.vandermerwe (4%), elifland (4%), Noosphere89 (4%), alokja (5%), Pablo (5%), HunterJay (5%), _vk_ (5%), Dach (5%), riceissa (5%), Measure (5%), Lukas Finnveden (5%), Bird Concept (5%), Amandango (5%), Logan Z (5%), Hjalmar_Wijk (5%), SoerenMind (5%), MrThink (6%), Yair Halberstadt (6%), Daniel Kokotajlo (7%), nealeratzlaff (7%), Oskar Press Mathiasen (7%), peterbarnett (8%), DanielFilan (8%), this.is.patrick (9%), TurnTrout (9%), RyanCarey (10%), nathanpmyoung (10%), Max_Daniel (10%), Veedrac (10%), Perhaps (10%), teradimich (10%), Jonas (10%), habryka (12%), Emiya (12%), Ben Pace (12%), Blippo (13%), plex (13%), eapi (13%), Gurkenglas (15%), __nobody (15%), meanderingmoose (20%), RowanE (20%), A Ray (20%), Self_Optimization (20%), fiddler (20%), howtodowtle (21%), Alexei (21%), Alex K. Chen (parrot) (22%), Rafael Harth (27%), frankybegs (30%), Glen-Andrew Thomson (39%), akaTrickster (44%), Kevin Lacker (50%), Ozyrus (50%), avturchin (54%), Wolfgang Buchmaier (55%), descent (60%), Rana Dexsin (73%), hitobashira.counter (99%)
Question: Will we experience an existential catastrophe before we build AGI?

Individual predictions: akaTrickster (10%), Kevin Lacker (10%), Hjalmar_Wijk (12%), Daniel Kokotajlo (12%), eapi (14%), Glen-Andrew Thomson (14%), Rohin Shah (15%), Max_Daniel (15%), fiddler (15%), technicalities (17%), Veedrac (17%), Jonas (18%), howtodowtle (19%), kairos_ (19%), Mark Xu (20%), meanderingmoose (20%), alokja (20%), adamShimi (25%), NunoSempere (25%), Davidmanheim (25%), RyanCarey (28%), SoerenMind (28%), Adam Scholl (29%), Gurkenglas (30%), __nobody (30%), Pialgo (30%), janus (30%), Logan Z (30%), SamuelKnoche (30%), A Ray (30%), Wolfgang Buchmaier (30%), Lukas Finnveden (30%), Matthew Barnett (30%), Noosphere89 (32%), TurnTrout (33%), Rafael Harth (33%), seed (33%), Alexei (34%), elifland (35%), Yair Halberstadt (35%), Ben Pace (35%), habryka (35%), Dach (35%), Vermillion (35%), Noa Nabeshima (36%), stuhlmueller (37%), HunterJay (38%), riceissa (40%), plex (42%), digital_carver (45%), Bogdan Ionut Cirstea (45%), Oskar Press Mathiasen (46%), this.is.patrick (50%), Tamay (50%), matthew.vandermerwe (50%), Jsevillamol (57%), Alibi (60%), Pablo (60%), Rana Dexsin (60%), RowanE (60%), Jacob Pfau (60%), Heido Nõmm (60%), Ethan Perez (60%), Vanilla_cabs (60%), Mikhail Samin (60%), Measure (60%), avturchin (61%), Bird Concept (62%), DanielFilan (63%), NaiveTortoise (63%), Raemon (63%), Foyle (64%), jimrandomh (65%), nealeratzlaff (65%), peterbarnett (66%), Blippo (66%), Zolmeister (70%), Felipe Calero Forero (72%), Self_Optimization (80%), Ozyrus (80%), SinguLarry (90%)
Question: Will we get AGI from 1-3 more insights on a similar level to deep learning?