LESSWRONG

Tarnish
Tarnish's Shortform · 2mo · 2
Is there a known method to find others who came across the same potential infohazard without spoiling it to the public?
Tarnish · 10mo · 10

As far as I know, there is unfortunately no system for this. I think what people typically do is contact MIRI leadership, but I'm not aware of MIRI leadership having put silent people in touch with other silent people as a result.

Most arguments for AI Doom are either bad or weak
Tarnish · 11mo · 2 · -6

Note that some of the best arguments take the shape "AI will cause doom because it's not that hard to build the following...", followed by insights about how to build an AI that causes doom. Such arguments are best rederived privately rather than shared publicly, so by asking publicly you're filtering out the strongest arguments you might be exposed to.

Provably Safe AI: Worldview and Projects
Tarnish · 1y · 17 · 16

Unfortunately, that does not appear to be a stable solution. Even if the US paused its AI development, China or other countries could gain an advantage by accelerating their own work.

Arguing for pausing does not need to be a stable solution to help. If it buys time, that's already helpful. If the US pauses AI development but China doesn't, that's still fewer people working on AI that might kill everyone.
