Peter Kuhn

Analytic philosopher with a background in physics, now working in AI.

Comments

Taming the Fire of Intelligence
Peter Kuhn · 2y · 10

I think the interesting discussion is not about exactly how certain our predictions of doom are or can be.

Let me put the central point another way: However pessimistic you are about the success of alignment, you should become more pessimistic once you realize that alignment requires predicting an AI's actions. Any notion that we could circumvent this by engineering values into the system is illusory.

Taming the Fire of Intelligence
Peter Kuhn · 2y · 21

The primary thing I am arguing, as Max has already said, is that the AI alignment paradigm obscures the most fundamental problem of AI safety: that of prediction. It does this by conflating various interpretations of what values or utility functions are.

One of the most fundamental insights entailed by a move from an alignment paradigm to a predictive one is that it becomes far from clear whether the relevant problems are solvable.

Nothing in this shows that AGI is "guaranteed to be destructive to human preferences", of course. Rather, it shows that the various paradigms one may choose in trying to make AI safe, like RLHF, should not actually make us any more confident in our AGI systems, because they address the wrong questions: we can never know in advance whether they will work, and this holds for every paradigm that tries to sidestep the prediction problem by appealing to values (bracketing hard-wired values, of course).

Taming the Fire of Intelligence
Peter Kuhn · 2y · 20

I have trouble getting the first point. If bugs are hard to find, shouldn't this precisely entail that dangerous AI is hard to differentiate from benign AI?! Any literature you can suggest on the subject?

Regarding the second point: I don't find Eliezer's idea entirely convincing. But I don't think the fire thesis hinges on his view. Rather, it is built on the much weaker and simpler view that if we don't know the utility function of some AGI system, then that system is dangerous - I find it very hard to see any convincing reasons for thinking this is false. Eliezer thinks doom is the default. I just assume that ignorance makes it rational to err on the side of caution.

Wikitag Contributions

No wikitag contributions to display.

Posts

Children of War: Hidden dangers of an AI arms race · 4 karma · 3mo · 0 comments
Taming the Fire of Intelligence · 0 karma · 2y · 7 comments
Pessimism about AI Safety · 4 karma · 3y · 1 comment