A few facts:

- Solving intelligence is the most important problem we face, because intelligence can be used to solve everything else.
- We know it's possible to solve intelligence, because evolution has already done it.
- We've made enormous progress toward solving intelligence in the last few years.

Given...
However, if the button had another option, which was a nonzero chance (literally any nonzero chance!) of a thousand years of physical torture, I wouldn't press that button, even if its chance of utopia was 99.99%.
I often wonder if any AGI utopia comes with a nonzero chance of eternal suffering. Once you have a godlike AGI that is focused on maximizing your happiness, are you then vulnerable to random bitflips that cause it to minimize your happiness instead?
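As a toy illustration of the worry (not a claim about how any real AGI would represent its objective), a single flipped bit really can negate a stored value: in the IEEE-754 double format, bit 63 is the sign bit, so one corrupted bit turns a score an optimizer is maximizing into its exact negative.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 double representation flipped."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    (y,) = struct.unpack(">d", struct.pack(">Q", bits ^ (1 << bit)))
    return y

happiness_score = 0.87  # hypothetical quantity the optimizer maximizes
# Bit 63 is the sign bit of a double: flipping it negates the value, so a
# process "maximizing" this score would now push it as negative as it can.
corrupted = flip_bit(happiness_score, 63)
print(corrupted)  # -0.87
```

Real systems use error-correcting memory and redundancy precisely because single-bit faults like this do occur, though whether that suffices at the stakes described here is the open question.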
Even if money saved through the AGI transition converts 1:1 into post-singularity money, it will probably be worth less in utility to you:
- You'll probably be able to buy planets post-AGI for the price of houses today. More generally, your selfish, local, and personal preferences will be fairly easy to satisfy even with small amounts of money; to put it another way, there are massive diminishing returns.
No one will be buying planets for the novelty or as an exotic vacation destination. The reason you buy a planet is to convert it into computing power, which you then attach to your own mind. If people aren't explicitly prevented from using planets for that purpose, then planets are going to be in very high demand, and very useful for people on a personal level.
This post and many of the comments are ignoring one of the main reasons that money becomes so much more critical post-AGI. It's because of the revolution in self-modification that ensues shortly afterwards.
Pre-AGI, a person can use their intelligence to increase their money, but not the other way around; post-AGI it's the opposite. The same applies if you swap intelligence for knowledge, health, willpower, energy, happiness set-point, or percentage of time spent awake.
This post makes half of that observation: that it becomes impossible to increase your money using your personal qualities. But it misses the other half: that it becomes possible to improve your personal qualities using your money.
The value of capital...
Relevant quote from Altman after the firing:
“I think this will be the most transformative and beneficial technology humanity has yet invented,” Altman said, adding later, “On a personal note, four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we push … the veil of ignorance back and the frontier of discovery forward.”
However, uploading seems to offer a third way: instead of making alignment researchers more productive, we "simply" run them faster.
When I think about uploading as an answer to AI, I don't think of it as speeding up alignment research necessarily, but rather just outpacing AI. You won't get crushed by an unaligned AI if you're smarter and faster than it is, with the same kind of access to digital resources.
The breeding process would adjust that if it was a limiting factor.
The problem with this is that one day you'll see someone who has the same flaw you've been trying to suppress in yourself, and they just completely own it, taking pride in it, focusing on its advantages, and never once trying to change it. And because they are so self-assured about it, the rest of the world buys in and views it as more of an interesting quirk than a flaw.
When you encounter that person, you'll feel like you threw away something special.
How about this one? A small group or a single individual manages to align the first very powerful AGI to their interests. They conquer the world in a short amount of time and either install themselves as rulers or wipe out everyone else.
Oh, I see your other graph now. So it just always guesses 100 for everything in the vicinity of 100.
Given our situation, it's surprising that the broader world doesn't share LessWrong's fascination with AGI. If people were giving it the weight it deserved, it would overshadow even the global culture war which has dominated the airwaves for the last 5 years and pulled in so many of us.
And forget, for a moment, about true intelligence. The narrow AIs that we already have are going to shake up our lives...
Kind of funny to stumble on this in 2026 and notice that the other conspicuous number in his tweet, besides 14 and 88, is 67. If there wasn't before, there is certainly now a surprising density of meaningful numbers in that tweet.