LESSWRONG
James_Miller (3557 karma)

Comments (sorted by newest)
AI Doomers Should Raise Hell
James_Miller · 2h · 30

That is a reasonable point about extinction risks motivating some people on climate change. But Republicans detest the Extinction Rebellion movement, and current environmental activism seems to anti-motivate them to act on climate change. Given their control of the US government and likely short AI timelines, influencing Republicans is a top priority.

AI Doomers Should Raise Hell
James_Miller · 3h · 30

The target audience needs to include the Trump administration, so connections to religion might strengthen the case. Altman told the Senate that AI might kill everyone, and he was misinterpreted as talking about job loss. Something about human extinction causes powerful people to tune out. The students at my college hate Elon, but are completely unaware that he went on Joe Rogan and said the tech he is helping to build might annihilate everyone. We see concerns about AI using up water getting more play than AI extinction risks.

Rejecting Violence as an AI Safety Strategy
James_Miller · 1mo · 30

That is a valid point. I did ask two AIs to point out mistakes in the article, so I got some criticism. One AI wanted me to steelman the position in favor of violence, which I didn't do because I feared it being taken out of context, and because some might think I was really advocating for violence and including the anti-violence positions as cover.

Rejecting Violence as an AI Safety Strategy
James_Miller · 1mo · 20

Doomers are claiming that those building AI are threatening the lives of everyone, so that is already an attempt to put a lot of guilt on the builders.

Dear Paperclip Maximizer, Please Don’t Turn Off the Simulation
James_Miller · 4mo · 20

Yes. It is running trillions upon trillions of simulations and ignoring almost all of the details from them. Our hope is that writing this letter slightly increases the odds that it learns about the contents of this post. Also, there are multiple acausal trade equilibria, and this version of me talking about them could favorably alter which equilibrium we are in. Finally, agency has value, so writing this letter might by itself slightly increase the expected value of working with us.

Our Reality: A Simulation Run by a Paperclip Maximizer
James_Miller · 5mo · 41

An implicit assumption of the post (which should have been made explicit) is that the cost per simulation is tiny. This is like in WWII, when the US would send a long-range bomber to take photos of Japan. I agree with your last paragraph, and I think it gets at what consciousness is. Is the program's existence enough to generate consciousness, or does the program have to run to create conscious observers?

Our Reality: A Simulation Run by a Paperclip Maximizer
James_Miller · 6mo · 20

Great example!

Our Reality: A Simulation Run by a Paperclip Maximizer
James_Miller · 6mo · 20

I'm assuming the cost of this simulation is tiny compared to the value of learning about potential enemies and trading partners.

Our Reality: A Simulation Run by a Paperclip Maximizer
James_Miller · 6mo · 30

Yes, I agree that you can't give too much weight to my saying I'm in pain, because from your viewpoint I could be non-conscious. Assuming all humans are conscious and pain is as it appears to be, there seems to be a lot of unnecessary pain, but, yes, I could be missing the value of having us experience it.

Our Reality: A Simulation Run by a Paperclip Maximizer
James_Miller · 6mo · 81

I'm in low-level chronic pain, including as I write this comment, so while I think the entire Andromeda galaxy might be fake, I think at least some suffering must be real; at the least, I have the same confidence in my suffering as I do in my consciousness.

Posts (sorted by new)

-14 · AI Doomers Should Raise Hell · 3h · 4
58 · Rejecting Violence as an AI Safety Strategy · 1mo · 5
7 · Dear Paperclip Maximizer, Please Don’t Turn Off the Simulation · 4mo · 6
22 · Our Reality: A Simulation Run by a Paperclip Maximizer · 6mo · 65
13 · Quantum Immortality: A Perspective if AI Doomers are Probably Right · 1y · 55
35 · Adam Smith Meets AI Doomers · 2y · 10
14 · Cortés, AI Risk, and the Dynamics of Competing Conquerors · 2y · 3
33 · Will Artificial Superintelligence Kill Us? · 2y · 2
41 · An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 3y · 75
9 · James_Miller's Shortform · 4y · 16
Wikitag Contributions

Intelligence explosion · 14 years ago