The fast takeoff motte/bailey

by lc · 24th Feb 2023 · 1 min read

7 comments, sorted by top scoring
Eli Tyre · 3y

Downvoted, because even though I think this is a reasonable point worth considering, I'm not excited about a LessWrong dominated by snarky memes that make points instead of essays.

lc · 3y

I started an essay version, but decided the meme version was concise without much loss of detail. I see your point though. I'll go ahead and remove my upvote of this and post the essay instead.

Eli Tyre · 3y (edited)

I do appreciate the conciseness a lot.

It seems like I maybe would have gotten the same value from the essay (which would have taken 5 minutes to read?) as from this image (which maybe took 5 seconds).

But I don't want to create a culture that rewards snark even more than it already does. That seems to be the death of discourse in a bunch of communities.

So I'm interested in whether there are ways to get the benefits here without the costs.

Viliam · 3y

What about the essay first, with the image at the bottom?

lc · 3y

Agreed.

Mateusz Bagiński · 3y

Can you give an example of somebody making that move?

niplav · 3y

I got the impression of this happening on MIRI's side in the 2021 conversations.

Soares 14:43:

Nate's attempted rephrasing: EY's model might not be confident that there's not big GDP boosts, but it does seem pretty confident that there isn't some "half-capable" window between the shallow-pattern-memorizer stuff and the scary-laserlike-consequentialist stuff, and in particular Eliezer seems confident humanity won't slowly traverse that capability regime

Yudkowsky 11:16:

In particular, I would hope that - in unlikely cases where we survive at all - we were able to survive by operating a superintelligence only in the lethally dangerous, but still less dangerous, regime of "engineering nanosystems".
Whereas "solve alignment for us" seems to require operating in the even more dangerous regimes of "write AI code for us" and "model human psychology in tremendous detail".

Yudkowsky 11:41:

It won't be slow and messy once we're out of the atmosphere, my models do say. But my models at least permit - though they do not desperately, loudly insist - that we could end up with weird half-able AGIs affecting the Earth for an extended period.

Yudkowsky 11:09:

There are people and organizations who will figure out how to sell AI anime waifus without that being successfully regulated, but it's not obvious to me that AI anime waifus feed back into core production cycles.
When it comes to core production cycles the current world has more issues that look like "No matter what technology you have, it doesn't let you build a house" and places for the larger production cycle to potentially be bottlenecked or interrupted.
I suspect that the main economic response to this is that entrepreneurs chase the 140 characters instead of the flying cars - people will gravitate to places where they can sell non-core AI goods for lots of money, rather than tackling the challenge of finding an excess demand in core production cycles which it is legal to meet via AI.
Even if some tackle core production cycles, it's going to take them a lot longer to get people to buy their newfangled gadgets than it's going to take to sell AI anime waifus; the world may very well end while they're trying to land their first big contract for letting an AI lay bricks.

Yudkowsky 17:01:

Physics is continuous but it doesn't always yield things that "look smooth to a human brain". Some kinds of processes converge to continuity in strong ways where you can throw discontinuous things in them and they still end up continuous, which is among the reasons why I expect world GDP to stay on trend up until the world ends abruptly; because world GDP is one of those things that wants to stay on a track, and an AGI building a nanosystem can go off that track without being pushed back onto it.

Perhaps those could be operationalised as an unconditional and a conditional statement: unconditionally, we expect a very fast takeoff + takeover with advanced technology; conditional on that not happening, we will still be surprised by AI, because TAI will not happen (due to regulation) before these systems completely take over.
