From Arbital's Mild Optimization page:

Mild optimization relates directly to one of the three core reasons why aligning at-least-partially superhuman AGI is hard - making very powerful optimization pressures flow through the system puts a lot of stress on its potential weaknesses and flaws.

I'm interested in this taxonomy of core reasons. Unfortunately this page doesn't specify the other two. What are they?

Also, this page is part of the AI alignment domain -- was it written by Eliezer? (Surprisingly, "10 changes by 3 authors" is a link to the edit page and does not show author information or edit history.)

Three core reasons why

I'd have called this question "The two missing core reasons why...", or "Arbital's two missing core...", etc.

Thanks Pattern -- I've taken your advice and updated the title.