FR_Max
Comments

7 traps that (we think) new alignment researchers often fall into
FR_Max · 3y · 21

Thank you for your post! It really is helpful. One trap that new alignment researchers often fall into is assuming that AI systems can be aligned with human values by optimizing for a single metric. Thanks again for the deep insight into the topic and the recommendations.
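A minimal toy sketch of that failure mode (my own illustration, not from the post): when selection pressure comes from a single proxy metric, the policy chosen by the proxy can score noticeably worse on the underlying value the metric was supposed to track. All names and numbers here are made up for illustration.

```python
# Toy Goodhart-style illustration: optimizing one proxy metric vs. the true objective.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 candidate policies described by two features.
features = rng.normal(size=(100, 2))

# What we actually care about depends on both features...
true_value = features[:, 0] + features[:, 1]
# ...but the single measurable metric only tracks the first one (plus noise).
proxy_metric = features[:, 0] + 0.1 * rng.normal(size=100)

best_by_proxy = np.argmax(proxy_metric)   # policy selected by optimizing the metric
best_by_true = np.argmax(true_value)      # policy we would have wanted

print("chosen by proxy metric -> true value:", round(true_value[best_by_proxy], 2))
print("actually best policy   -> true value:", round(true_value[best_by_true], 2))
```

Running this, the policy that maximizes the proxy typically has a clearly lower true value than the best available policy, which is the gap the post's warning is about.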
