Comments

JakeH · 5y · 10

Where my thinking differs is that I don't see how an AI could be significantly more intelligent than ourselves and yet be unable to override its initial conditions (the human value alignments and safety measures that we build in). At the heart of it, "superintelligent" and "controlled by humanity" seem contradictory.

That's why I originally mentioned "the long term". We can design however we want at this stage, but once AI can eventually bootstrap itself, the initial blueprint becomes irrelevant.

JakeH · 5y · -10

Our lack of complete control over other species could be because we are not that much more intelligent than them. With our limited intellect we have prioritized the main threats. New threats may arise, which we will tackle as we deem appropriate, but those that remain currently appear controlled to me.

So, as the gap in intelligence between an AGI and humanity grows, its degree of control over us could grow too.