VCM
www.sophia.de

Comments, sorted by newest
Is the argument that AI is an xrisk valid?
VCM · 4y · 10

One more consideration about "instrumental intelligence": we left that somewhat under-defined, more like "if I had that utility function, what would I do?" ... but it is not clear that this image of "me in the machine" captures what a current or future machine would do. In other words, people who use instrumental intelligence as an image of AI owe us a more detailed explanation of what that would be, given the machines we are actually creating - not just given the standard theory of rational choice.
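
In the standard theory, that image is roughly expected-utility maximization. A textbook sketch, in our own shorthand (A = the available actions, o = the outcome, U = the utility function; none of this notation is from the paper):

\[
a^{*} = \arg\max_{a \in A} \, \mathbb{E}\left[ U(o) \mid a \right]
\]

The "me in the machine" move assumes that a machine handed U would act according to this schema; whether actual machines would is exactly what remains to be explained.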

Is the argument that AI is an xrisk valid?
VCM · 4y · 10

Thanks, it's useful to bring these out - though we mention them in passing. Just to be sure: we are looking at the XRisk thesis, not at some thesis that AI can be "dangerous", as most technologies will be. The Omohundro-style escalation is precisely the issue in our point that instrumental intelligence is not sufficient for XRisk.

Is the argument that AI is an xrisk valid?
VCM · 4y · 10

... we aren't trying to prove the absence of XRisk - we are probing the best argument for it?

Is the argument that AI is an xrisk valid?
VCM · 4y · 10

We tried to find the strongest argument in the literature. This is how we came up with our version:

"
Premise 1: Superintelligent AI is a realistic prospect, and it would be out of human control. (Singularity claim)

Premise 2: Any level of intelligence can go with any goals. (Orthogonality thesis)

Conclusion: Superintelligent AI poses an existential risk for humanity.
"

====
A more formal version with the same propositions might be this:

1. IF there is a realistic prospect that there will be a superintelligent AI system that is a) out of human control and b) can have any goals, THEN there is existential risk for humanity from AI

2. There is a realistic prospect that there will be a superintelligent AI system that is a) out of human control and b) can have any goals

->

3. There is existential risk for humanity from AI

====

And now our concern is whether a superintelligence can satisfy both a) and b) - given that a) must be understood in a way that is strong enough to generate existential risk, including "widening the frame", and b) must be understood as strong enough to exclude reflection on goals. Perhaps that will work only if "intelligent" is understood in two different ways? Thus Premise 2 is doubtful.
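
Spelled out in propositional shorthand (our notation, not the paper's: S = a superintelligent AI system is a realistic prospect, C = it is out of human control, G = it can have any goals, X = there is existential risk for humanity from AI), the form is plain modus ponens:

\[
\begin{array}{l}
\text{P1: } (S \land C \land G) \rightarrow X \\
\text{P2: } S \land C \land G \\
\hline
\text{C: } \; X
\end{array}
\]

So validity is not the problem; the doubt is whether C and G can be true together under readings strong enough to do their work in P1.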

Delta variant: we should probably be re-masking
VCM · 4y · -10

Even if that is true, you would still a) get a lot of sickness and suffering, and b) infect a lot of other people (who then infect others in turn). So some people would be seriously ill and some would die as a result of this experiment.

Is the argument that AI is an xrisk valid?
VCM · 4y · 10

Can one be a moral realist and subscribe to the orthogonality thesis? In which version of it? (In other words, does one have to reject moral realism in order to accept the standard argument for XRisk from AI? We had better be told! See section 4.1)

Is the argument that AI is an xrisk valid?
VCM · 4y · 10

But reasoning about morality? Is that a space governed by logic, or one where anything goes?

Is the argument that AI is an xrisk valid?
VCM · 4y · 70

Thanks. We are actually more modest. We would like to see a sound argument for XRisk from AI and we investigate what we call 'the standard argument'; we find it wanting and try to strengthen it, but we fail. So there is something amiss. In the conclusion we admit "we could well be wrong somewhere and the classical argument for existential risk from AI is actually sound, or there is another argument that we have not considered."

I would say the challenge is to present a sound argument (valid + true premises) or at least a valid argument with decent inductive support for the premises. Oddly, we do not seem to have that.

Is the argument that AI is an xrisk valid?
VCM · 4y · 10

... plus we say that in the paper :)

Is the argument that AI is an xrisk valid?
VCM · 4y · 10

"
Maximal overall utility is better than minimal overall utility. Not sure what that means. The NPCs in this simulation don't have "utility". The real humans in the secret prison do.
"

This should have been clearer. We meant this in Bentham's good old way: minimal pain and maximal pleasure. Intuitively: a world with a lot of pleasure (in the long run) is better than a world with a lot of pain. - You don't need to agree; you just need to agree that this is worth considering, but on our interpretation the orthogonality thesis says that one cannot consider this.
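
A crude way to put that Benthamite reading in symbols (our sketch, not a formula from the paper): the overall utility of a world w is

\[
U(w) = \sum_{i} \left( \mathrm{pleasure}_i(w) - \mathrm{pain}_i(w) \right)
\]

and world w_1 is better than w_2 whenever U(w_1) > U(w_2). The point is only that this comparison is worth considering - which, on our interpretation, the orthogonality thesis says a superintelligence need not do.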
