LESSWRONG

JakeH

Comments
Is AI safety doomed in the long term?
JakeH · 6y · 10

Where my thinking differs is that I don't see how an AI can be significantly more intelligent than ourselves and yet be unable to override its initial conditions (the human value alignments and safety measures that we build in). At heart, "superintelligent" and "controlled by humanity" seem contradictory.

That's why I originally mentioned "the long term". We can design however we want at this stage, but once an AI can eventually bootstrap itself, the initial blueprint becomes irrelevant.

Is AI safety doomed in the long term?
JakeH · 6y · −10

Our limited control over other species could be because we are not that much more intelligent than them. With our limited intellect we have prioritized the main threats. New threats may arise, and we will tackle them as we deem appropriate, but those that remain currently appear controlled to me.

So as the gap in intelligence between an AGI and humanity grows, so could its degree of control over us.

Posts

6 · Is value amendment a convergent instrumental goal? [Question] · 6y · 5 comments
−1 · Is AI safety doomed in the long term? [Question] · 6y · 11 comments