Comments

So the first one is an "AGSI", and the second is an "ANSI" (general vs narrow)?

If I understand correctly... one type of alignment (required for the "AGSI") is what I'm referring to as alignment: the system is conscious of all of our interests and tries to respect them, like a good friend. The other applies to a system that is narrow enough in scope that it literally just does that one thing, way better than humans could, but where the scope is narrow enough that we can hopefully reason about it and have some idea that it's safe.

Alignment is kind of a confusing term when applied to ANSI, because to me at least it seems to suggest agency and aligned interests, whereas in the case of ANSI, if I understand correctly, the idea is to prevent it from having agency and interests in the first place. So it's "aligned" in the same way that a car is aligned, i.e. it doesn't veer off the road at 80 mph :-)

But I'm not sure I've understood correctly; thanks for your help...

Thanks for your response! Could you explain what you mean by "fully general"? Do you mean that alignment of narrow SI is possible? Or that partial alignment of general SI is good enough in some circumstances? If it's the latter, could you give an example?

The same game-theoretic dynamics that have all the players racing to improve their models in spite of ethics and safety concerns will have them getting the models to self-improve if that provides an advantage.