Ultimately, rationalism should help people win. Scott Alexander claims that the surge in the price of Bitcoin was a test of that:

...suppose God had decided, out of some sympathy for our project, to make winning as easy as possible for rationalists. He might have created the biggest investment opportunity of the century, and made it visible only to libertarian programmers willing to dabble in crazy ideas. And then He might have made sure that all of the earliest adopters were Less Wrong regulars, just to make things extra obvious.

This was the easiest test case of our "make good choices" ability that we could possibly have gotten, the one where a multiply-your-money-by-a-thousand-times opportunity basically fell out of the sky and hit our community on its collective head. So how did we do?

I would say we did mediocre.

Five years later, suppose God wanted to give rationalists another test. But instead of the opportunity to win big, He wanted to test whether they could avoid losing hard. He might create the largest workforce disruption of the century, driven by an unpredictable technology (which the rationalists happen to know the most about) and primarily affecting white-collar workers (which most rationalists are). If rationalism truly helps people predict the future and make better decisions, rationalists who work should survive the incoming wave of AI-driven job automation better than everyone else.

Of course, this only applies to those whose jobs are truly in danger. I'm pursuing a career in AI alignment research – if that becomes automated, none of this matters. But suppose, like many of us, you're a software engineer. You should be paying careful attention to forecasts related to your future compensation and learning to use the latest tools that accelerate your productivity relative to your competitors. And if things start looking really grim, then, since you know about Moravec's paradox and status quo bias, you'd start learning valuable blue-collar skills earlier than your developer friends who are also at risk of getting laid off.

Most artists didn't expect robots to become creative and thus didn't prepare accordingly. Then, DALL-E 2 happened, rendering many of the skills they spent years mastering economically useless. This should be a warning shot for everyone else who thinks they're rational: think ahead, don't get caught off guard, and act on your beliefs, and maybe you'll remain economically valuable up until the singularity.


Moravec's paradox doesn't really apply anymore, so it's worth updating in that direction. "Reasoning" as envisioned in 1976 was a very narrow thing that they thought would generalize with very little compute. They were wrong.

Even moderately accurate reasoning about the real world appears to require more compute than most people have access to today; even GPT-4, with an inference cost of probably 10^14 FLOP per answer, can't do it well.
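That 10^14 figure can be sanity-checked with the common rule of thumb that transformer inference costs roughly 2 FLOP per active parameter per generated token. The parameter count and answer length below are my assumptions (the active-parameter figure is an unconfirmed rumor), not claims from the comment:

```python
# Back-of-envelope check of the 10^14 FLOP-per-answer estimate.
# Rule of thumb: inference costs ~2 FLOP per active parameter per token.
active_params = 2.8e11   # assumed active parameters per token (unconfirmed rumor)
answer_tokens = 500      # assumed length of one answer, in tokens
flop_per_answer = 2 * active_params * answer_tokens
print(f"{flop_per_answer:.1e}")  # same order of magnitude as 10^14
```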

On the other hand, complex sensorimotor tasks can be handled in portable computing devices that can be embedded in robots. The expensive part of a robot isn't the compute, it's all the sensors and actuators and the reasoning required to apply those sensorimotor capabilities.

That's a good point. I conflated Moravec's Paradox with the observation that so far, it seems as though cognitive tasks will be automated more quickly than physical tasks.

Isn't weathering technological unemployment really easy if you have a good job now, through the magic of investing? The hard part would be figuring out what to do if you're currently in a low-paying job, but I doubt that's common here.
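As a rough illustration of that "magic of investing" point, here's a sketch of how long aggressive saving could cover living expenses after a layoff. Every number is an assumed input, and taxes, inflation, and post-layoff income are all ignored:

```python
def years_of_runway(salary, savings_rate, annual_return, years_saving, annual_spend):
    """Accumulate a portfolio over `years_saving`, then count how many years
    it covers `annual_spend` (ignoring taxes, inflation, and new income)."""
    portfolio = 0.0
    for _ in range(years_saving):
        # end-of-year contribution, after the year's investment return
        portfolio = portfolio * (1 + annual_return) + salary * savings_rate
    runway = 0
    # cap at 100 years: above ~21x annual spend the portfolio never depletes
    while portfolio >= annual_spend and runway < 100:
        portfolio = (portfolio - annual_spend) * (1 + annual_return)
        runway += 1
    return runway

# e.g. $150k salary, 40% saved, 5% real return, 10 years of saving, $60k/yr spending
print(years_of_runway(150_000, 0.40, 0.05, 10, 60_000))  # prints 18
```

Under these (made-up) assumptions, a decade of high-savings work buys roughly eighteen years of runway, which is the sense in which a well-paid job now cushions automation later.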

rationally, automating more tasks in my life should make for an easier life that’s subject to fewer demands. rationally, when this isn’t the case — when individuals each working to automate more things causes them to instead be subjected to more demands (learn new skills, else end up on the street), you shouldn’t expect doubling down on this strategy to be long-term viable.

rationally, if you’re predicting the proportion of people able to stay afloat to be always decreasing up to the singularity — a point at which labor becomes valueless — you shouldn’t expect to still be afloat come that moment.

“rationally”, you’re doomed unless you can slide into a different economic system wherein you do observe the benefits of automation. idly watching your peers get rolled over by that bus is bad for your future as it further separates you from the potential exit ramps. the viable solutions to your problem require collective action. that doesn’t put it entirely out of league with rationality, but if it’s not clear from my tone (i apologize if it reads too strong) i believe you’re thinking of this way too narrowly. i think you’re leaning too far toward a Spock type of rationality for what is increasingly a social problem.

I don't think this serves as a good test for whether rationality helps us win.

Suppose for the sake of argument that there are 1,000 areas where rationality can help you win. If you observe a failure in one area, there might still be success in the 999 others, so the one failure doesn't point very strongly against rationality. It would if you knew that the one area was strongly correlated with the others, but I don't think that's the case with technological unemployment.