MichaelDickens

Comments

"Shut It Down" is simpler than "Controlled Takeoff"
MichaelDickens · 8d

Most normal people would not take a 5% risk of destroying the world in order to greatly improve their lives and the lives of their children.

Polls suggest that most normal people expect AGI to be bad for them and that they don't want it. I'm speculating more here, but I think the typical expectation is something like "AGI will put me out of a job; billionaires will get even richer and I'll get nothing."

This is a review of the reviews
MichaelDickens · 9d

It's not only a feint. You don't want to go to war, and you hope that the treaty will prevent war from happening, but you are prepared to go to war if the treaty is violated. This is the standard way treaties work.

Buck's Shortform
MichaelDickens · 10d

The relevant way in which it's analogous is that a head of state can't build [dangerous AI / nuclear weapons] without risking war (or sanctions, etc.).

Buck's Shortform
MichaelDickens · 10d

My preferred mechanism, and I think MIRI's, would be an international treaty in which every country implements AI restrictions within its own borders. That means a head of state can't build dangerous AI without risking war. It's analogous to nuclear non-proliferation treaties.

I don't think I would call it low risk, but my guess is it's less risky than the default path of "let anyone build ASI with no regulations".

Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
MichaelDickens · 10d

This seems like a good thing to advocate for. I'm disappointed that they don't make any mention of extinction risk, but I think establishing red lines would be a step in the right direction.

This is a review of the reviews
MichaelDickens · 10d

"A lukewarmer who believes in, say, a 30% chance of dystopia just isn't on the same page as an extremist who believes in 98% certain doom. They are not going to support nuking data centers."

Eliezer doesn't support nuking data centers, either. He supports an international treaty that, like all serious international treaties, is backed by a credible threat of violence.

(I suppose someone with a 98% P(doom) might hypothetically support unconditionally nuking data centers, but that is not Eliezer's actual position. I assume it's not the position of anyone at MIRI but I can only speak for Eliezer because he's written a lot about this publicly.)

This is a review of the reviews
MichaelDickens · 10d

I feel like every time I write a comment, I have to add a caveat about how I'm not as doomy as MIRI and I somewhat disagree with their predictions. But like, I don't actually think that matters. If you think there's a 5% or 20% chance of extinction from ASI, you should be sounding the alarm just as loudly as MIRI is! Or maybe 75% as loudly or something. But not 20% as loudly—how much you should care about raising concern for ASI is not a linear function of your P(doom).

This is a review of the reviews
MichaelDickens · 10d

I'm glad you wrote this, I very much feel the same way but I wasn't sure how to put it. It feels like many reviewers—the ones who agree that AI x-risk is a big deal, but spent 90% of the review criticizing the book—are treating this like an abstract philosophical debate. ASI risk is a real thing that has a serious chance of causing the extinction of humanity.

Like, I don't want to say you're not allowed to disagree. So I'm not sure how to express my thoughts. But I think it's crazy to believe AI x-risk is a massive problem, and then spend most of your words talking about how the problem is being overstated by this particular group of people.

Buck's Shortform
MichaelDickens · 10d

"I disagree with this position, but if I held it, I would be saying somewhat similar things to Zach (even having read the book)."

I wouldn't. I roughly agree with Zach's background position (i.e. I'm quite uncertain about the likelihood of extinction conditional on YOLO-ing the current paradigm*) but I still think his conclusions are wild. Quoting Zach:

"First, it leaves room for AI's transformative benefits. Tech has doubled life expectancy, slashed extreme poverty, and eliminated diseases over the past two centuries. AI could accelerate these trends dramatically."

The tradeoff isn't between solving scarcity at a high risk of extinction vs. never getting either of those things. It's between solving scarcity now at a high risk of extinction, vs. solving scarcity later at a much lower risk.

"Second, focusing exclusively on extinction scenarios blinds us to other serious AI risks: authoritarian power grabs, democratic disruption through misinformation, mass surveillance, economic displacement, new forms of inequity. These deserve attention too."

Slowing down / pausing AI development gives us more time to work on all of those problems. Racing to build ASI means not only are we risking extinction from misalignment, but we're also facing a high risk of outcomes like ASI being developed so quickly that governments don't have time to get a handle on what's happening, and we end up with Sam Altman as permanent world dictator. (I don't think that particular outcome is that likely, it's just an example.)

*although I think my conditional P(doom) is considerably higher than his

Contra Collier on IABIED
MichaelDickens · 12d

I find it very strange that Collier claims that international compute monitoring would “tank the global economy.” What is the mechanism for this, exactly?

I am continually confused by how often people make this claim. We already monitor lots of things. Monitoring GPUs and restricting sales would be no harder than doing the same for guns or prescription medication. Easier, maybe, because GPU manufacturing is highly centralized and you'd need a lot of GPUs to do something dangerous.

Posts

Outlive: A Critical Review · 64 karma · 3mo · 4 comments
How concerned are you about a fast takeoff due to a leap in hardware usage? [Q] · 9 karma · 4mo · 7 comments
Why would AI companies use human-level AI to do alignment research? · 24 karma · 5mo · 8 comments
What AI safety plans are there? · 16 karma · 5mo · 3 comments
Retroactive If-Then Commitments · 7 karma · 8mo · 0 comments
A "slow takeoff" might still look fast · 5 karma · 3y · 3 comments
How much should I update on the fact that my dentist is named Dennis? [Q] · 2 karma · 3y · 3 comments
Why does gradient descent always work on neural networks? [Q] · 15 karma · 3y · 11 comments
MichaelDickens's Shortform · 2 karma · 4y · 133 comments
How can we increase the frequency of rare insights? · 19 karma · 4y · 10 comments