AI alignment open problem

Edited by Eliezer Yudkowsky, last updated 11th Apr 2015

A tag for pages that describe at least one major open problem that has been identified within the theory of value-aligned advanced agents, powerful artificial minds such that the effect of running them is good / nice / normatively positive ('value alignment').

To qualify as an 'open problem' for this tag, the problem should be relatively crisply stated, unsolved, and considered important.

Parents:
AI alignment