Remmelt

Research coordinator of Stop/Pause area at AI Safety Camp.

See explainer on why AGI could not be controlled enough to stay safe:
lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable


Sequences

Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments


Looks like I summarised it wrong. It’s not about ionising radiation directly from ions bombarding us from outer space. It’s about the interaction of the ions with the Earth’s magnetic field, which, as you stated, “induced large currents in long transmission lines, overloading the transformers”.

Here is what Bret Weinstein wrote in a scenario of his I just found:

In 2013, a report had warned that an extreme geomagnetic storm was almost inevitable, and would induce huge currents in Earth’s transmission lines. This vulnerability could, with a little effort, have been completely addressed for a tiny sum of money — less than a tenth of what the world invested annually in text messaging prior to the great collapse of 2024.

Will correct my mistake in the post now. 

One question still on my mind is whether and how a weakened Earth magnetic field makes things worse. Would the electromagnetic interactions on the whole occur closer to Earth, therefore causing larger currents in power transmission lines? Does that make any sense?
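To make the induction mechanism concrete, here is a rough back-of-envelope sketch in Python (all numbers are my own illustrative assumptions, not figures from the report or this thread): a storm-driven geoelectric field at ground level, integrated along a long grounded transmission line, acts like a quasi-DC voltage source driving current through the transformer windings.

```python
# Back-of-envelope estimate of geomagnetically induced current (GIC)
# in a long transmission line. All numbers are illustrative assumptions.

geoelectric_field_v_per_km = 5.0  # extreme-storm geoelectric field (assumed)
line_length_km = 500.0            # long high-voltage line (assumed)
loop_resistance_ohm = 5.0         # line + transformer + grounding resistance (assumed)

# The storm-driven electric field integrated along the line acts like
# a DC voltage source between the grounded transformer neutrals.
induced_voltage_v = geoelectric_field_v_per_km * line_length_km

# Quasi-DC current through the windings; transformers are not designed
# for DC, so currents of this size can saturate and overheat them.
gic_amps = induced_voltage_v / loop_resistance_ohm

print(f"Induced voltage: {induced_voltage_v:.0f} V")  # 2500 V
print(f"Quasi-DC current: {gic_amps:.0f} A")          # 500 A
```

The point of the sketch is the length dependence: the same field over a short line induces far less voltage, which is why long transmission lines (rather than, say, household wiring) are the vulnerable part of the grid.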

But it’s weird that I cannot find even a good written summary of Bret’s argument online (I do see lots of political podcasts).

I found an earlier scenario written by Bret that covers just one nuclear power plant failing and that does not discuss the risk of a weakening magnetic field.

This was an interesting read, thank you. 

Remmelt

Good question! Will look into it and check more if I have the time.

Remmelt

Ah, thank you for the correction. I didn’t realise it could easily be interpreted that way.

Remmelt

I also suggest exploring what it may mean if we are unable to solve the alignment problem for fully autonomous learning machinery.

There will be a [new AI Safety Camp project](https://docs.google.com/document/d/198HoQA600pttXZA8Awo7IQmYHpyHLT49U-pDHbH3LVI/edit) about formalising a model of AGI uncontainability. 

Remmelt

Fixed it! You can use either link now to share with your friends.

Remmelt

To clarify for future reference: I do think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will last for at least three months.

I.e. I think we are heading for an AI winter. It is not sustainable for the industry to invest 600+ billion dollars per year in infrastructure and teams in return for relatively little revenue and no resulting profit for the major AI labs.
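As a rough sanity check on that sustainability claim, here is a small arithmetic sketch (all figures are my own illustrative assumptions, not reported financials): GPU-heavy infrastructure depreciates quickly, so sustained capex at that level implies a very large revenue requirement just to break even.

```python
# Rough sanity check on AI infrastructure economics.
# All figures are illustrative assumptions, not reported financials.

annual_capex_usd = 600e9       # claimed yearly infrastructure investment
hardware_lifetime_years = 5    # assumed useful life of GPUs / data centres
gross_margin = 0.6             # assumed gross margin on AI services

# One year's spend, depreciated straight-line over the hardware lifetime:
cohort_depreciation = annual_capex_usd / hardware_lifetime_years

# If spending continues at the same rate, overlapping cohorts mean the
# steady-state yearly depreciation charge approaches the full annual capex.
steady_state_depreciation = annual_capex_usd

# Revenue needed so gross profit covers depreciation alone
# (ignoring energy, staff, and the cost of capital):
breakeven_revenue = steady_state_depreciation / gross_margin

print(f"Single-cohort depreciation: ${cohort_depreciation / 1e9:.0f}B per year")
print(f"Steady-state break-even revenue: ${breakeven_revenue / 1e9:.0f}B per year")
```

On those assumptions the industry would need on the order of a trillion dollars of AI revenue per year just to cover hardware depreciation, which is the gap between investment and revenue that I am pointing at.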

At the same time, I think that within the next 20 years tech companies could develop robotics that self-navigate across multiple domains and thereby automate major sectors of physical work. That would put society on a path toward causing the total extinction of current life on Earth. We should do everything we can to prevent that.
