[ Question ]

The two missing core reasons why aligning at-least-partially superhuman AGI is hard

by Joel Burget · 19th Apr 2022 · 1 min read
From Arbital's Mild Optimization page:

Mild optimization relates directly to one of the three core reasons why aligning at-least-partially superhuman AGI is hard - making very powerful optimization pressures flow through the system puts a lot of stress on its potential weaknesses and flaws.

I'm interested in this taxonomy of core reasons. Unfortunately this page doesn't specify the other two. What are they?

Also, this page is part of the AI alignment domain -- was it written by Eliezer? (surprisingly, "10 changes by 3 authors" is a link to edit and does not show author information or edit history)

2 comments, sorted by top scoring

Pattern · 3y

Three core reasons why

I'd have called this question, The Two missing core reasons why..., or Arbital's two missing core, etc.

Joel Burget · 3y

Thanks Pattern -- I've taken your advice and updated the title.