PeterC

Comments
Naive Hypotheses on AI Alignment
PeterC · 3y · 10

Enjoyed this.

Overall, I think that framing AI alignment as a problem is ... erm ... problematic. The best parts of my existence as a human do not feel like the constant framing and resolution of problems. Rather, they are filled with flow, curiosity, wonder, and love.

I think we have to look in a different direction than trying to formulate and solve the "problems" of flow, curiosity, wonder, and love. I have no simple answer - and stating a simple answer in language would reveal that there was a problem, a category, that could "solve" AI and human alignment problems.

I keep looking for interesting ideas - and I find yours among the most fascinating to date.

Naive Hypotheses on AI Alignment
PeterC · 3y · 10

I believe that cognitive neuroscience has little to say about how any experience at all is implemented in the brain - but I just read this book, which has some interesting ideas: https://lisafeldmanbarrett.com/books/how-emotions-are-made/
