AI alignment open problem

Edited by Eliezer Yudkowsky last updated 11th Apr 2015

A tag for pages that describe at least one major open problem identified within the theory of value-aligned advanced agents: powerful artificial minds such that the effect of running them is good / nice / normatively positive ('high value').

To qualify as an 'open problem' for this tag, the problem should be relatively crisply stated, unsolved, and considered important.

Parents:
AI alignment