A short reading list that should be required before one has permission to opine. You can disagree, but step 1 is to at least make an effort to understand why some of the smartest people in the world (and 100% of the top 5 AI researchers, the group historically most skeptical about AI risk) think that we're dancing on a volcano. [Flo suggests: There's No Fire Alarm for Artificial General Intelligence, AGI Ruin: A List of Lethalities, Superintelligence by Nick Bostrom, and Superintelligence FAQ by Scott Alexander]
But Bostrom estimated the probability of extinction within a century at less than 20%, and Scott Alexander estimated the risk from AI at 33%.
They may have changed their forecasts since. But it seems strange to cite them as justification for confident doom.
I would expect that the absence of a global catastrophe for ~2 years after the creation of AGI would increase most people's chances of survival, especially in a scenario where alignment turns out to be easy.
After all, there would then be time for political and popular action. We can expect something strange once politicians and their voters finally understand the existential horror of the situation!
I don't know. Attempts to ban all AI? The Butlerian Jihad? Nationalization of AI companies? Revolutions and military coups? Anything seems possible.
If AI respects property rights, why shouldn't it respect a right to UBI if such a law is passed? Rapid economic growth would make it possible to feed many.
In fact, a world in which someone shrugs their shoulders and allows 99% of the population to die seems obviously unsafe for the remaining 1%.
It's possible that we won't get something that deserves the name ASI or TAI until, for example, 2030.
And a lot can change in more than 5 years!
The current panic seems excessive. We do not live in a world where all reasonable people expect the emergence of artificial superintelligence in the next few years and the extinction of humanity soon after that.
The situation is very worrying, and AI is the most likely cause of death for all of us in the coming years, yes. But I don't understand how anyone can be so sure of a bad outcome as to consider people's survival a miracle.
Then what is the probability of extinction caused by AI?
Of course, capital is useful for exerting influence now. Though I would suggest that a noticeable impact on events requires capital or power on a scale inaccessible to the vast majority of the population.
But can we end up in a world where the richest 1% or 0.1% survive and the rest die? Unlikely. Even if property rights were respected, such a world would have to descend into a mad hell.
Even a world in which only people like Sam Altman and their entourage survive the singularity seems more likely.
But the most likely outcomes seem to be either the extinction of everyone or the survival of almost everyone, without a strong correlation with current well-being. Am I mistaken?
Most experts do not believe that we are almost certainly (>80%) doomed. It would be an overreaction to give up at the news that politicians and CEOs are behaving like politicians and CEOs.
It still surprises me that so many people agree on most issues yet arrive at very different P(doom) estimates. Even long, patient discussions do not bring people's views closer. It will probably be even harder to convince a politician or a CEO.
So what's your P(doom)?
If only one innovation separates us from AGI, we're fucked.
It seems that if OpenAI or Anthropic agreed with you, they would have even shorter timelines.