Solving death is one of those things you don't want to do if you want the species to prosper.
Why not just avoid creating AGI altogether and set up civilization ourselves in such a way that we don't doom ourselves? Why even focus on the alignment problem?
So far, the process of trying to solve it has only increased the chances of doom...
In other words, I am not convinced that focusing on the alignment problem decreases our chances of doom more than focusing on the "stable civilization without AGIs" problem.
It's very hard to imagine humans prospering in a world with AGIs. What's the point of solving the alignment problem? Is it so that one ASI can create a civilization where AGIs will never exist and then self-destruct?
Not as nice as the map corresponding to the territory, though.
I don't like the term "map", because it communicates something static. But the real world is not static; it evolves dynamically. Sure, there are things in life for which I guess it is good to use the term, for instance mathematical theorems, physical laws, etc. But does anyone have a better term for the real world? Something like current-world-state, so that I could still say "my idea of the current world state corresponds to the real world state", but shorter?
Suppose you'd like to improve at chess as a total beginner so that, training one hour a day, your rapid rating is as high as possible within a month. What do you do? Watch a YouTube video? Read a book? Ask ChatGPT? What about learning a new language in the shortest amount of time, with 20 minutes to spare a day? Or creating a startup full-time for 3 years to maximize your net worth?
The advice is scattered all over the internet, and you don't even have a way of accurately telling which pieces of it have any merit.
In this blog, I propose a platform solution that properly provides incentives and combines beliefs...
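As a rough illustration of what a platform that "provides incentives and combines beliefs" might do, here is a minimal sketch assuming advisors are rewarded via a proper scoring rule (Brier) and their probability estimates are pooled with accuracy-based weights; the function names and example numbers are hypothetical, not taken from the post itself.

```python
# Hypothetical sketch: score advisors with a proper scoring rule (incentive to
# report honest probabilities) and pool their beliefs weighted by past accuracy.

def brier_score(forecast: float, outcome: int) -> float:
    """Lower is better; proper, so honest reporting is optimal in expectation."""
    return (forecast - outcome) ** 2

def accuracy_weight(past_scores: list[float]) -> float:
    """Turn an advisor's historical Brier scores into a pooling weight."""
    if not past_scores:
        return 1.0  # uninformative default weight for new advisors
    mean_score = sum(past_scores) / len(past_scores)
    return max(1e-6, 1.0 - mean_score)  # better (lower) scores -> higher weight

def pool_beliefs(forecasts: dict[str, float], weights: dict[str, float]) -> float:
    """Linear opinion pool: weighted average of the advisors' probabilities."""
    total = sum(weights[name] for name in forecasts)
    return sum(weights[name] * p for name, p in forecasts.items()) / total

# Example: three advisors estimate "this training plan reaches the stated goal
# in a month"; their estimates are weighted by how well they forecast before.
history = {"alice": [0.04, 0.09], "bob": [0.25, 0.36], "carol": []}
weights = {name: accuracy_weight(scores) for name, scores in history.items()}
forecasts = {"alice": 0.7, "bob": 0.4, "carol": 0.55}
print(f"pooled belief: {pool_beliefs(forecasts, weights):.2f}")
```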
I agree with what you're saying, but I don't see how it relates to my take...
To me, it seems like both the people in AI labs and the people trying to solve alignment are trying to create and control "god".
What about not creating it in the first place?
Like MIRI dedicated its whole existence to "solving alignment" instead of "stable civilization". Honestly, the latter seems like an easier problem.