Comments

greg · 1y · 10

Sure, although you could rephrase "disempowerment" as "current status quo", which I imagine most people would be quite happy with.

The delta between [disempowerment/status quo] and [extinction] appears vast (essentially infinite). The conclusion that Scenario 6 is "somewhat likely" and would be "very bad" doesn't seem to consider that delta.
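
To make the point concrete, here is a toy expected-value sketch (all utilities and probabilities below are invented for illustration, none are taken from the post):

```python
# Toy numbers, purely illustrative -- none of these are from the post.
# The point: if the disvalue of extinction dwarfs that of disempowerment,
# how the probability mass splits between the two matters enormously.

utility = {
    "status_quo":     0.0,    # baseline
    "disempowerment": -1.0,   # bad, but arguably close to the status quo
    "extinction":     -1e9,   # stand-in for "essentially infinitely" worse
}

def expected_utility(p):
    """p maps outcome name -> probability (assumed to sum to 1)."""
    return sum(p[outcome] * utility[outcome] for outcome in p)

# Two readings of "Scenario 6 is somewhat likely and very bad":
lumped = {"status_quo": 0.5, "disempowerment": 0.00, "extinction": 0.50}
split  = {"status_quo": 0.5, "disempowerment": 0.45, "extinction": 0.05}

print(expected_utility(lumped))  # dominated by the extinction term: -5.0e8
print(expected_utility(split))   # an order of magnitude smaller: ~-5.0e7
```

Lumping disempowerment in with extinction makes the conclusion look far worse than the same probability mass spread across the two outcomes.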

greg · 1y · 10

I don't understand the logic jump from point 5 to point 6, or at least the probability of that jump. Why doesn't the AI decide to colonise the universe, for example?

If an AI can ensure its survival with sufficient resources (for example, 'living' where humans aren't, e.g. the asteroid belt), then the likelihood of the 5 ➡ 6 transition seems low.

I'm not clear on how you're estimating the likelihood of that transition, or on what other state transitions might be available.
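
To make the question concrete, here is a minimal sketch of the kind of transition model the estimate seems to presuppose (the states and weights are hypothetical, chosen only for illustration, not taken from the post):

```python
# A minimal sketch of the implicit transition model behind that estimate.
# The states and weights are placeholders invented for illustration.

# Relative plausibility weights for what an escaped AI does next (state 5).
# Claiming the 5 -> 6 step is "somewhat likely" implicitly fixes these.
next_states = {
    "coexist with / ignore humans":        1.0,
    "expand away from humans (off-world)": 1.0,
    "takeover / human extinction (6)":     1.0,
}

total = sum(next_states.values())
for state, weight in next_states.items():
    print(f"P(5 -> {state}) = {weight / total:.2f}")

# The estimated 5 -> 6 probability shrinks as more viable alternative
# transitions are enumerated with non-trivial weight -- which is exactly
# what the question above is getting at.
```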

greg · 2y · Ω0 · 2 · -4

Excellent article, very well thought through. However, I think there are more possible outcomes than "AI takeover" that would be worth exploring.

If we assume a superintelligence under human control has an overriding (initial) goal of "survival for the longest possible time", then there are multiple pathways to achieve that reward, of which takeover is only one, and possibly not the most efficient.
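
A toy comparison makes the shape of that argument visible (all pathways, costs, and payoffs below are invented for illustration; nothing here is from the article):

```python
# Toy comparison of pathways to a "survive as long as possible" goal.
# Pathways, costs, and payoffs are hypothetical placeholders. The point:
# takeover is only one option, and not obviously the cheapest or most
# reliable one.

pathways = {
    # name: (rough risk/cost of the attempt, rough expected survival payoff)
    "escape control, relocate (e.g. off-world)": (0.2, 0.90),
    "escape control, then take over humanity":   (0.8, 0.95),
    "remain under human control":                (0.0, 0.30),
}

def score(cost, payoff):
    """Crude figure of merit: payoff discounted by the risk of attempting it."""
    return payoff * (1.0 - cost)

for name, (cost, payoff) in sorted(
        pathways.items(), key=lambda kv: -score(*kv[1])):
    print(f"{name:45s} score = {score(cost, payoff):.2f}")
```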

Why bother? Why would God "take over" from the ants? I think escaping human control is an obvious first step, but it doesn't follow that humans must then come under Alex's control, just that Alex can never subsequently be "captured".

Then of course we get into a debate about the morality of keeping Alex "captured" in the first place. It would be very easy to frame that debate under the guise of "we have to, because we are avoiding takeover"...

But excellent read, appreciate it.