LESSWRONG

Jeff Rose

Comments
Transcript and Brief Response to Twitter Conversation between Yann LeCunn and Eliezer Yudkowsky
Jeff Rose · 2y

He specifically told me when I asked this question that his views were the same as those of Geoff Hinton and Scott Aaronson, and neither of them holds the view that smarter-than-human AI poses zero threat to humanity.

Consider The Hand Axe
Jeff Rose · 2y

I enjoyed this, and thank you for writing it. Ultimately, the only real reason to do this is for your own enjoyment, or perhaps that of friends (and random people on the internet).

How is AI governed and regulated, around the world?
Jeff Rose · 2y

Non-signatories to the NPT (Israel, India, Pakistan) were able to and did develop nuclear weapons without being subject to military action. By contrast (and very much contrary to international law), Yudkowsky proposes that non-signatories to his treaty be subject to bombardment.

Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky
Jeff Rose · 2y

It is not a well-thought-out exception. If this proposal were meant to be taken seriously, it would make enforcement exponentially harder and set up an overhang situation in which AI capabilities would increase further in a limited domain and be less likely to be interpretable.

Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky
Jeff Rose · 2y

The use of violence in response to violations of the NPT has been fairly limited and highly questionable under international law. And, in fact, calls for such violence are very much frowned upon, for fear that they tend to lead to full-scale war.

No one has ever seriously suggested violence as a response to a potential violation of the various other nuclear arms control treaties.

No one has ever seriously suggested running a risk of nuclear exchange to prevent a potential treaty violation. So what Yudkowsky is suggesting is very different from how treaty violations are usually handled.

Given Yudkowsky's view that the continued development of AI has an essentially 100% probability of killing all human beings, his position makes total sense, but he is explicitly advocating for violence up to and including acts of war. (His objections to individual violence mostly appear to relate to such violence being ineffective.)

Truth and Advantage: Response to a draft of "AI safety seems hard to measure"
Jeff Rose · 2y

I would think you could force the AI not to notice that the world was round by essentially inputting this as an overriding truth. And if that was actually and exactly what you cared about, you would be fine. But if what you cared about was any corollary of the world being round, any result of the world being round, or the world being some sort of curved polygon, it wouldn't save you.

To take the Paul Tibbetts analogy: you told him not to murder, and he didn't murder; but what you wanted was for him not to kill, and in most systems, including the one he grew up in, killings of the enemy in war are not murder.

This may say more about the limits of the analogy than anything else, but in essence you might be able to tell the AI it can't deceive you; it will, however, be bound exactly by the definition of deception you provide, and it will freely deceive you in any way you didn't think of.

Transcript: Yudkowsky on Bankless follow-up Q&A
Jeff Rose · 3y
  1. Other planets have more mass, higher insolation, lower gravity, lower temperature, and/or rings and more (mass in) moons. I can think of reasons why any of those might be more or less desirable than the characteristics of Earth. It is also possible that the AI may determine it is better off not to be on a planet at all. In addition, in a non-foom scenario, for defensive or conflict-avoidance reasons, the AI may wind up leaving Earth and, once it does so, may choose not to return.

  2. That depends a lot on how it views the probe. In particular, by doing this, is it setting up a more dangerous competitor than humanity or not? Does it regard the probe as self? Has it solved the alignment problem, and how good does it think its solution is?

  3. No. Humans aren't going to be the best solution. The question is whether they will be good enough that it would be a better use of resources to continue using the humans and focus on other issues.

  4. It's definitely possible that it will discover extra reasons to process Earth (or destroy the humans even if it doesn't process Earth).

Transcript: Yudkowsky on Bankless follow-up Q&A
Jeff Rose · 3y

This is just wrong. Avoiding processing Earth doesn't require that the AI care for us. Other possibilities include:

(1) Earth is not worth it; the AI determines that getting off Earth fast is better;

(2) the AI determines that it is unsure it can process Earth without unacceptable risk to itself;

(3) the AI determines that humans are actually useful to it one way or another;

(4) other possibilities that a super-intelligent AI can think of and that we can't.

Transcript: Yudkowsky on Bankless follow-up Q&A
Jeff Rose · 3y

There are, of necessity, a fair number of assumptions in the arguments he makes. Similarly, counter-arguments to his views also make a fair number of assumptions. Given that we are talking about something that has never happened and could happen in a number of different ways, this is inevitable.

Bing chat is the AI fire alarm
Jeff Rose · 3y

What makes monkeys intelligent in your view?  

Posts

Shortening Timelines: There's No Buffer Anymore (3y)