Interesting idea, but I found this harder to read than it needed to be. The fungal planet stuff is fun, but it takes a long time to get to the actual point.
Slightly off-topic:
It’s a pleasant surprise to see Nick Bostrom posting here.
His perspective is unusually valuable. Whether or not one agrees on all points, having him in the conversation feels like a meaningful update.
Thanks for sharing this, Nick. I hope we’ll see more.
Let's make some assumptions about Mark Zuckerberg:
Given these assumptions, it's reasonable to expect Zuckerberg to be concerned about AI safety and its potential impact on society.
Now, the question that has been bugging me for weeks since reading LeCun's arguments:
Could it be that Zuckerberg is not informed abo...
I have been using the same images from Tim's post for years (literally since it first came out) to explain the basics of AI alignment to the uninitiated. It has worked wonders. On the other hand, I have shared the entire post many times and no one has ever read it.
I would imagine that a collaboration between Eliezer and Tim explaining the basics of alignment would strike a chord with many people out there. People are generally more open to discussing this kind of graphical explanation than reading a random post for 2 hours.
For those of us who internalized these ideas years ago, there's not much new here. You mostly find yourself nodding along. But that's not a criticism. It's actually refreshing to see this kind of essay on LessWrong again. This is what made the site magnetic in the first place: staring at the actual scale of what's at stake.
@Nick Bostrom's line about our great common endowment of negentropy being irreversibly degraded into entropy on a cosmic scale still hits like nothing else. Once you see it, you can't unsee it. Every second of delay has a cost measured i...