apollonianblues
Hello, I am Ella Markianos, an undergraduate student trying to understand alignment. If you like or dislike any of my posts, please email me at apollonianblues@gmail.com; I've always wanted a pen pal.

Comments
My Plan to Build Aligned Superintelligence
apollonianblues · 3y

TBH my naive thought is that if John's project succeeds, it'll solve most of what I think of as the hard part of alignment, so it seems like one of the more promising approaches to me. But on my model of the world, it seems quite unlikely that there are natural abstractions in the way John seems to think there are.

My Plan to Build Aligned Superintelligence
apollonianblues · 3y

I have LOL, thanks tho

My Plan to Build Aligned Superintelligence
apollonianblues · 3y

My assumption is that it would do this to prevent other people from building unaligned superintelligences. At least Eliezer thinks you need to do this (see bullet point 6 in this post), and it generally comes up in conversations people have about pivotal acts. Some people think that if you come up with an alignment solution that's good and easy to implement, everyone building AGI will use it, so you won't have to prevent other people from building unaligned AGI; but this seems unrealistic and risky to me.

Posts

My Plan to Build Aligned Superintelligence · 3y · 18 karma · 7 comments
Can We Align AI by Having It Learn Human Preferences? I’m Scared (summary of last third of Human Compatible) · 3y · 19 karma · 3 comments