Each person is special. Amanda is in a class of her own; Fred is in a class of his own. Amanda has many different properties: where she came from, what she looks like, what behaviors she habitually does, what she knows and doesn't know, what her plans are, how she...
There is an argument that although humans evolved under pressure to maximize inclusive genetic fitness (IGF), humans don't actually try to maximize their own IGF. This, as the argument goes, shows that in the one case we have of a process creating general intelligence, it was not the case that...
YouTube link: https://www.youtube.com/watch?v=fZlZQCTqIEo From the description: "Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on earth will die. Listen as Yudkowsky speaks with EconTalk's Russ Roberts on why we should be very, very afraid and why we're not prepared or able to manage the terrifying risks...
Let's try this again... The problem of aligning superhuman AGI is very difficult. We don't have access to superhuman general intelligences. We have access to superhuman narrow intelligences, and human-level general intelligences. There's an idea described here that says: (some of) the neocortex is a mostly-aligned tool-like AI with respect...
[Edit: for reasons I still don't understand, people dislike this post. Here is a version of the post that people like, you may want to read that one instead.] There's an idea described here that says: (some of) the neocortex is a mostly-aligned tool-like AI with respect to the brain...
[Content note: this is off-the-cuff, maybe nothing new, better to publish than not. I checked Nanda's posts and found this post, which is basically the same idea, and it points to tools and other stuff.] In imitation learning, the AI is trained to imitate the way that a person is...