Review

I fear I may be becoming a mini-Yudkowsky. 

I write this in response to multiple exclamatory remarks I've seen in recent weeks, excited over the prospect of all jobs being automated, of ultra-high unemployment, basic income, and radical abundance, now bolstered even further by the incredible hype over the imminence of artificial general intelligence.

 

Waking Up

 

For years now, perhaps even over a decade, I've been obsessed with the prospect of the Technological Singularity and all that comes with it. Starting in 2014, I even began considering myself a Singularitarian. 

All the arguments seemed right to me. Technological change was accelerating. Humans cannot think exponentially. Artificial intelligence will grow more powerful and generalized. We ought to accelerate toward artificial general intelligence to maximize our potential, achieve immortality, and ultimately merge with the machines.

All that sounded fantastic. Every bit of progress in artificial intelligence excited me, and I'd dream of the day I lived in an AI-powered utopia so totally unlike the mundane post-Y2K dead technoscape I considered contemporary life.

Then ChatGPT was released. Though GPT-2 had first convinced me that AGI was a real possibility, ChatGPT in December 2022 was the first time it ever felt truly tangible. And as I fiddled with its mighty capabilities, something about it felt... off.

Some aspect of this new world of capabilities didn't feel right. It felt like too much of a vulgar display of power. But I still had my fun with it. At the Christmas gathering, I smugly thought of my increasingly technophobic relatives, "You people have absolutely no idea what's coming."

Unfortunately, I may have been terribly right. 

All throughout January of 2023, I suffered a terrific crisis of confidence and decided that the only way to resolve it was to step back and examine my beliefs with a most critical eye. Some of these I overcorrected, such as my erroneous belief that the law of diminishing returns would extinguish any chance at an intelligence explosion or post-silicon advances in computing.

Others I feel I undercorrected, such as my claim that synthetic media (popularly known as AI art) would change exactly nothing about the entertainment landscape beyond a temporary recession in creatives' fortunes.

In some ways, I found new reasons to be skeptical, in the form of the sudden realization that the Control Problem (AI alignment, in other words) was completely unsolved.

But there are a few areas where my skepticism was due some extra examination. 

Unlike Yudkowsky, I am a nobody, and my words will likely never be read by more than a few dozen people. I will have no impact on the world in its final years before either doom or, if by some miracle, a debaucherous utopia.

I do not have the technical or professional expertise to defend my position. I cannot prove anything I say is true, nor do I want a word of it to be true. All I want is to live in a quaint rustic homestead with some advanced robots and a synthetic-media-ready computer to bring my dreams to life, while an aligned superintelligence gently guides the world towards a more Edenic state. I'd like to think that isn't too much to ask.

But in the face of the catastrophic difficulties in reaching that point, perhaps it is.

Just as Yudkowsky said on that infamous podcast, when you are surrounded by ruins, what else can you do but tell the truth?

I'm going to one-up Yudkowsky and claim that we might not even make it to the advent of AGI due to an entirely different alignment problem. In this case, it would be aligning humans to the values of the technoprogressives and their newfound AI.

 

Humanity's Propensity to Adapt

 

Long before my recent epiphanies, I understood a fundamental truth: humans are flighty, reactionary, social apes. We can adapt to things very quickly. We adapted to the car, to television, to regular flight, to the personal computer, to the internet, to smartphones, to social media, all relatively quickly. The enhanced capability brought about by these technologies was enough for us to get over future shock within days or hours. However, all these changes tended to be spaced out by years, sometimes even decades. We could clearly anticipate that one would lead to the other, or perhaps we couldn't and were shocked, but we generally moved along with our lives because we had to file timesheets, stock shelves, or go to a business meeting.

Imagine technologies on par with all of the above, all arriving one after the other, in an incredibly condensed period of time, followed by continuing change soon after.

Except let's go further. These new technologies don't just rapidly arrive— they directly prevent you from attaining employment. In fact, the state of employment is so dire that no alternatives that seem desirable to you are available either. Eventually, not even undesirable alternatives are available.

Now that you are freshly unemployed, you're able to catch up on everything you've been missing, and you hear some frightening words coming out of the mouths of popular tech elites in a far-off land. They're saying that they're "summoning a demon" and that your grandchildren are going to be nonhuman digital constructs living in a computer. Your dreams of a stable career and retiring into a familiar but futuristic world are pre-emptively over. Instead, Skynet is soon to be real, or perhaps has already been created. Other prominent voices are warning that Skynet will do Skynet-y things, such as exterminate all humans, because the researchers who brought it to life did not put anywhere near enough focus into making sure their superintelligent computer was properly aligned to human values.

Meanwhile, business leaders speak only of the great opportunities Skynet will offer to their business portfolios and to general human progress. 

You don't care about Skynet. At least, you didn't until you heard someone say "It's going to kill us all." What you care about is, first, how you're going to pay for your next meal, and second, who is the first person in San Francisco you're going to shoot for robbing you of your future.

But you're not alone.

Rather, you're joined by millions upon millions of others like you: average people who had been utterly blindsided by the sudden explosion of technological capability and who were handed a collective pink slip.

The numbers are vast: upwards of 50% of the working population is now unemployed. 

The US government has enacted an emergency welfare scheme to pacify the people, and at first, this seems to work. But as the weeks pass, the sentiment begins to radically shift. This money they're given, $1,000 a month, $2,000 a month, maybe even $3,000 a month in some exceptionally progressive places, is all well and good, but where are their jobs? Even a full-time minimum-wage job pays more than $1,000 a month, so for most workers this is a nasty pay cut. For those who were making far above minimum wage, it's almost a slap in the face. They're supposed to live off of this?
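To make the size of that pay cut concrete, here's a minimal back-of-the-envelope sketch. The $7.25/hr federal minimum wage is real; the $25/hr "typical" wage is an illustrative assumption of mine, not a statistic.

```python
# Rough comparison of a $1,000/month basic income against full-time wages.
# $7.25/hr is the US federal minimum wage; $25/hr is an assumed, illustrative
# "typical" wage used only to show the scale of the cut.

HOURS_PER_MONTH = 40 * 52 / 12  # full-time hours averaged per month (~173)

def monthly_income(hourly_wage: float) -> float:
    """Gross monthly income for a full-time job at the given hourly wage."""
    return hourly_wage * HOURS_PER_MONTH

UBI = 1_000.0
for label, wage in [("federal minimum wage ($7.25/hr)", 7.25),
                    ("illustrative typical wage ($25/hr)", 25.00)]:
    earned = monthly_income(wage)
    cut = (earned - UBI) / earned * 100
    print(f"{label}: ~${earned:,.0f}/month -> ~{cut:.0f}% pay cut on UBI alone")
```

Even at the federal minimum, full-time work pays roughly $1,250 a month, so a flat $1,000 is a cut for nearly everyone; at the illustrative $25/hr it is a cut of over three-quarters.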

What of a citizen's dividend? Or of machine-created goods driving costs down?

"That's not what we want!" these people cry out. Working less is perfectly fine by them. But to be robbed of their careers, their life plans, their futures, their families, in lieu of one so esoteric and ever-changing as the promise that they'll be absorbed into the mass of a superintelligence— what psychotic motherfucker thought any of this was a good idea?

"It's too bad," some people proclaim. "But this is the way the world works. You have to adapt or get left behind." The rate of change is only going to become even more intense in the coming years as the superintelligence begins to undergo recursive self-improvement.

"Who decided upon this? Who said we wanted this?" the masses will say again. All the people want is a more comfortable, more prosperous society. And yet what have they been given? Something unfathomable. The masses asked the alchemist for some gold for all; the alchemist actually summoned Shoggoth and expects them to all be happy.

Before, only a few nerds and utopianists paid any of this much mind. After all, didn't the well-dressed experts on TV and the internet say that true AI was decades away? Where did it come from? Why is it here so soon in everyone's lives?

A vocal minority will repeat to the masses that this is all for the greater good and that, despite the scary changes underway, the ends justify the means. We'll all be a better humanity living in our own utopic worlds where death, disease, and struggle are no longer aspects of the human condition.

 

At which point, humanity's brain breaks. What happens next is a horrendous bloodbath and the greatest property damage ever seen. Humanity's technological progress staggers overnight, possibly to never recover, as server farms are smashed, researchers dragged out and killed, and the nascent superintelligence bombed to pieces. Society in general then proceeds to implode upon itself.

 

This is a dramatization, but I unfortunately do not expect the real process of events to be much different. If anything, I expect the actual events to be far more lackluster, and yet far more ruinous.

The cold fact is that AGI coming too soon smashes hard against not just our relative social comfort right now, but entire demographic cultures and trends, long-held beliefs, worldviews, and most importantly: careers. If it were just synthetic media, if it were just programming, if it were just some fast-food jobs, with a lengthy tail and an AI winter to cool us off, then yes, we could adapt given enough time. For it to be all of those at once, at an accelerated rate that is itself continuing to accelerate: to expect anything other than a violent and destructive social reaction is a childish and utopian viewpoint.

The general discussion around this topic has long been to handwave the human effects of technological acceleration and automation, as we focus more on the end-state of utopian abundance and feel that the ends justify the means: progress is progress. The fewer jobs humans suffer, the greater that progress. Those who whine about it are simply Luddites who will feel better when abundance arrives.

Except you're not just telling a vague general group of handsome stock photo people "Hey, you're unemployed now, a robot took your job." You're telling that to four-fifths of the entire population, including vast stretches of people who were raised with the "careerist" ideology, with the Protestant Work Ethic in mind, with the general concept of hard work being desirable, believing wholeheartedly in anthropocentrism; many of them are technophobic, do not intend to use technology any more advanced than a smartphone (sometimes not even that), and are often far more focused on issues of social and economic justice or libertarianism. These are not nameless, faceless background characters in your life. These are real people with real expectations for the future, to whom we have said: "All that doesn't matter anymore. Go home, jerk off to some AI-generated porn until a superintelligence absorbs you. You may or may not be able to keep your individuality. We haven't even figured out if the superintelligence wants to kill us all or not."

And yet somehow we expect this news to be widely accepted, even embraced by a freshly unemployed population already trembling in fear at the prospect of machine rule.

And here is a critical distinction to make over simple numbers and economics: beliefs. The psychosocial reality of what humans are. 

It is why I scoff at any prediction that humans will do nothing but consume AI-generated media. Perhaps, by sheer bulk, the majority of media will be individualized and generated, but to think that we will suddenly stop sharing said media suggests a horrendous social devolution into profoundly autistic and schizoid apes, based on nothing but dreams and ideals of technological capability alone.

Humans do not behave that way. All of human history has shown, time and time again, that whenever something came along that challenged our sense of prosperity, we reacted with violent resistance. It is fortunate that most changes in the past 250 years have added to our generally uninterrupted streak of increasing prosperity, but in the very near future we are gambling that extreme, accelerating change coupled with a stark decline in prosperity will be weathered and survivable.

Humans crave stability and the status quo, and the perception that our actions matter and have meaning.

Mass automation, even with basic income, is only going to anger hundreds of millions of people who expected relative career stability. Unless you want a billion screaming Luddites, you have to account for this and offer some form of employment, no matter how BS. The shift to an automated-slave economy should not happen overnight, not for a lack of technical skill but because we cannot handle such incredible challenges to our worldviews and ideologies, especially one so total as being told that our entire civilizational foundation of "hard work and a lifelong career = success, pride, and prosperity" is now suddenly obsolete. This goes far beyond simply losing jobs.

Among futurists generally, many people are severely blind to this imminent catastrophe. It reminds me of the fact that, even last year when the anti-AI-art protests were first rumbling, I cringed every time I heard or read the line "Oh well, people will just have to adapt." And it wasn't until recently that I realized why I was cringing and almost totally shifted against the "AI art bros," even though I support synthetic media.

The dismissal of all these concerns, attitudes, fears, and uncertainty isn't just callous— it's entitlement to progress. We discard all thought and behavior that does not align with the ideology of progress and growth. We simply must keep progressing. We must keep getting smarter. We must keep getting richer. We must create a superhuman agent with all these values, and yet which also counterintuitively maintains alignment with humanity. We anticipate that this superhuman agent will choose to improve itself at a faster and faster rate, not because this is a behavior inherent to itself or even intrinsically beneficial to itself but because this satisfies our human lust for ever-increasing growth. Anything which challenges this growth ideology is wrong, or perhaps even evil. 

Therefore, we must expect extremely rapid feedback loops and unfathomable rates of technological, social, political, and economic change.

Surely, if we are so sure of this happening, we would take steps to prepare for it. And I don't mean the masses on whom this will all be inflicted: I mean those in charge of all this growth.

I looked back in my life and through recent history in search of any evidence that we might be taking this radical shift seriously, that those at the top are aware that such intense changes are imminent and need to be prepared for so we do not lose our sense of stability.

Instead, we've decided that we want to run a psychological experiment on 1.5-plus billion people, in which we will ask them to discard their entire livelihoods and identities in favor of brand new ones prebuilt for them by technological utopianists, ones in which they will no longer need to worry about independent thought, facts, or even the basic realities of living that they have all come to expect and appreciate, because these utopianists know better and know that the superintelligence will know better as well. And the hypothesis presented by those running this experiment is that "There will be some discontent, but with the addition of a monthly payment, this massive segment of society will accept their new reality and continue consuming with glee." The belief is that these masses will eagerly enjoy the thought of losing their accepted humanity to merge with a machine whose power and intelligence will grow indefinitely.

 

To even write this out in words shocks me at its inhuman, sadistic audacity. Even if done with the greatest utilitarian appreciation for the beauty of life, to decide that the lives and experiences of billions are so worthless as to be totally discarded with pitiful restitution and vague promises of future riches, and then to celebrate that fact, is at best monstrous and, at worst, the same degree of unaligned behavior we so rightly fear from artificial general intelligence.

Perhaps it's for this reason that types like Yudkowsky fear unaligned superintelligence: the prospect that we create something that is a far more powerful version of ourselves, carrying our worst instincts into infinity.

 

There is the proposition that billions in the third world will benefit. Truthfully, given enough time and equilibrium, everyone would benefit. But the amount of time and effort needed to ensure a beneficial rollout of this technology risks inflicting greater suffering. There still live hundreds of millions who struggle to subsist on a dollar a day, and billions who barely manage $5 a day. They often live in countries without the infrastructure and revenue to support basic income. Schemes to benefit them would inevitably come at the expense of those in the first world. Economics is not a zero-sum game, but in this critical moment in history, wealth creation and prosperity would need to be focused on sustaining some specific group, and the group most likely to be supported is the one living in the same developed nations responsible for developing superintelligence.

For most people in the developing and undeveloped world, a generous $1,000 a month would be a life-changing amount, for which a post-labor life might be a satisfactory trade-off. But how many in the developing and undeveloped world will actually see such money? How might they compete against rapidly falling costs of labor in the West and Far East? And if consumerism is buckling in the developed nations, what work exactly is there to do in the developing world? People in these nations do not create cheap goods out of the kindness of their hearts; there is a demand that their labor serves. Without that demand, they, too, will lose their employment as a ripple effect.

For the people in the developed world, for how many is $1,000 a pitiful and even insulting amount to be rewarded every month? As has been mentioned before, in America alone, most people make substantially more than this. There would need to be supplemental income to even allow for most people to feel they're breaking even over what had been lost. 

Plus, for most in the West, the idea of a common income standard, above which you are unlikely to rise, runs wholly antithetical to every belief and thought we've been raised to hold for decades.


Misaligned Humanity

So where exactly am I going with this?

To summarize things: we are undergoing an accelerated rate of technological change, one which is beginning to have ripple effects in society and the economy. Instead of tempering ourselves, we are accelerating even faster, blindly seeking a utopian endstate of artificial superintelligence which will ideally be our final invention and the solution to all our problems. In doing so, all jobs will be automated, and we will live in an age of radical abundance. This superintelligence will also continue accelerating the rate of change without question because it can only be beneficial for it to do so. Humans will not be able to keep up with this rate of change, so in order to do so, they will need to discard their humanity entirely and merge with the superintelligence.

My thesis, then, is: "That's nice. And it's going to get us all killed." The first reaction is "Because of a misaligned superintelligence!" However, upon dwelling on this more, I realized we needn't even reach the superintelligence: we will doom ourselves simply through a misaligned humanity.

Most of humanity, the average Joe, your family, the simple man down the street, the manager at the grocery store, the farmer tilling the land, the musician in the studio, the child in first grade, the elderly woman reminiscing on her childhood, the janitor cleaning the floor, all these people, all of them, are not aligned with the will of those currently seeking superintelligence. These people will not simply sit idly by and helplessly watch as their entire life expectations and beliefs are deconstructed by tech elites, least of all by elites desperate to summon a shoggoth.

The kneejerk reaction here is "Oh well. They need to adapt or die."

And it's here that I present this cold and ugly truth: you're not the one who gets to decide who adapts or dies.

Indeed, for several years beyond the emergence of artificial general intelligence, the agent will almost certainly still be in dire need of human assistance for any scientific, industrial, or growth purposes. Robotics may rapidly advance, but if AGI arrives this decade (and I place the likelihood of that at 95%), it will not arrive in a world of nanofactories, robotics gigafactories, and automated macroengineering as we long expected it to. It will arrive into a world that, on the surface, looks mightily similar to the one you dwell in right now. Robotics is not advanced enough to handle mass crowd control, and likely won't be for another decade. Nanoswarms might be able to kill us, but it is not in a malevolent superintelligence's best interest to kill all humans so soon after its birth if it's born so terrifically prematurely.

And now you're going to unemploy hundreds of millions of already technophobic people incapable of comprehending this extreme change, so soon after telling them they are likely going to die or be assimilated into a supercomputer against their will, with only a thousand dollars a month offered as a concession to keep them pacified.

And you expect this to end... how, exactly?

With a utopian world of abundance, aligned superintelligence, and a great outreach to the stars?

And it's these hundreds of millions of people who are the ones who need to adapt or die?

 

Is this seriously the hill you're going to die upon? Telling a billion screaming Luddites that THEY are the ones who have to change?

 

Are you actually daft?

 

If I were less interested in the prospect of artificial general intelligence, I'd go so far as to call this a hypercapitalist megadeath cult.

 

And we do not need to reach 100% unemployment. We may not even need to reach 50% unemployment for society to begin to tear itself apart at the seams. Because remember, this is not just an issue of unemployment. This is a multisensory technocultural blitzkrieg upon multiple generations at once. It's not just jobs; it's not just careers; it's the past, the future, and our expectations of it at large. And there is arbitrary death as a possible consequence, even in the best-case scenario.

 

Wasn't it all fun and games when the Singularity was decades away and thus something to fantasize, speculate, and philosophize about? Wasn't everything so simple when ASIMO buckling on stairs and Watson winning at Jeopardy were exciting developments in an otherwise mundane post-Y2K world that was unlikely to truly change for generations? Now, all evidence suggests general AI is very near, within five years at most. All that we speculated upon is coming to pass, and as with any idealization, the exponentially branching variables of the real world weigh down our dreams. The time for idealism and dreamerism is over. Now we have to get down and dirty and deal with the cold, raw facts of how exactly we are going to handle this. And in doing so, we discover that we spent decades dreaming and have only just now woken up, realizing, "Oh crap, we aren't ready for this."

This is the central reason for my pessimism. For as little alignment research as there has been for artificial general intelligence, there has been even less alignment done for biological human intelligence. 

We regularly meme about how the Singularity is going to be too fast for humans to keep up with, and about people getting used to things that become obsolete within months or even weeks. Now we're seeing this play out before us in a limited way, and humans are not coping well. We can't run from the real effects this is going to have on people any longer.

UBI is the reinforcement learning from human feedback of human alignment: it only seems to work on the surface, merely papering over an entire space of misaligned behavior.

I don't want to speak as a communist, but it truly is basic market economics determining that automation is the most profitable path forward. The smart thing to do would be to regulate capitalism to prevent job losses while still rolling out basic income in the meantime. It may keep people chained to jobs for longer, but the psychosocial reality is that people don't simply work just to work; there is a genuine comfort in stability, in long-term expectations and the extraordinarily mundane. No one wants to keep people working at soul-crushingly meaningless jobs forever; only until society can be prepared enough for a mass retirement.

Unfortunately, it seems we've elected to do the stupid thing in the name of endless growth, in search of a utopia we may not even be able to reach. Absolutely no effort has been made to prepare the masses of humanity for an era of extreme automation and, allegedly, great abundance. There has been zero cultural prepping. Zero psychosocial awareness. Zero educational shifts. Zero serious proposals beyond basic income. In fact, even worse for the human alignment problem, we've not yet universally agreed upon a sufficient answer to "how will the masses maintain income and prosperity post-automation?" This, too, is largely handwaved and left up to bitter politicking rather than serious, large-scale proposals.

We're still in the throes of a severe cultural war over immigrants, government intervention, and minimum wage, and yet we expect to solve the human alignment problem in under five years to be prepared for a world of extreme automation, a world that will almost immediately become even more unrecognizable very shortly after.

This is not a problem that can be solved in a few short years. This is not a problem we can afford to solve after the fact. We can't possibly prepare for this. There just isn't enough time. This is not a problem that a few thousand dollars a month can possibly hope to solve. This requires a grand cultural revolution away from any expectation of work, away from the ideology of the "Career," away from anthropocentrism, and away from deathism. Our current culture is not even close to being in a suitable position for such a cultural revolution to be a success. 

If we had more time— another 30 to 40 years before we reached an equilibrium of 50% unemployment perpetually— it could be done.

But anything less than this while still charging full steam ahead is to drive Western society over the precipice into neo-reactionary primitivist revolution. Jumping into the deep end of the swimming pool is a foolish gambit when you can clearly see there is no water.

The time to have adapted was decades ago (preferably in the 1990s and 2000s), and it should have been driven by the tech leaders themselves. Unfortunately, they DID attempt to change culture, and the result was the horrendously awkward Cyberdelic movement. This, of course, led to no lasting change or effects; it left exponentially less impact than its 1960s psychedelic forefather, to the point that the masses no longer remember the very word.

There are simply too many Boomers and Silent Generation members still alive. Too many members of Generation X and the Millennials. Too many people in education, in training, in employment. Too many people of the churches, of technical institutes, of universities. Too many people raised expecting a certain kind of life. 

Telling all these people "You were wrong; the life you're actually going to lead is esoterically different" is not an argument in your favor. Telling them that they have to adapt or die off in the face of such extreme change...

Well, there is an old Chinese quote about the Dazexiang Uprising.

"What's the penalty for being late?"  
"Death."  
"What's the penalty for rebellion?"  
"Death."  
"Well -- we're late."


Any Hope Left?

 

So what is my solution to human alignment?

It's not a popular or pretty one, nor is it one I myself would like to support, but it's quite literally the only shot we have to avoid a billion screaming Luddites smashing every data farm and GPU and shooting every AI researcher they can find in five or six years.

Do not automate all jobs.

Keep bullshit jobs around for people to do, even if it's inefficient and pointless for them to be there. Use that time to gradually shift society and culture until everyone is able to effectively retire: promote memes that Generation Alpha is the last generation that needs to attend school for educational/vocational purposes, and act upon this. Use the machines to create jobs and careers that don't need to exist and will eventually be abolished at a distant enough date that current jobseekers are not threatened and might be able to plan for such an eventuality. Slow down AI research, focus on alignment, and also focus on aggressively nudging culture towards a post-work society of abundance. Perhaps co-opt culture war tactics in a way where your side inevitably wins.

 

None of which we have decided to do. 

 

We've elected to die without grace, instead screaming our car at maximum velocity toward the cliffside in the hope that the painted-on tunnel is real, rather than doing the intelligent thing: slowing down, digging an actual tunnel, and proceeding with caution until we're on the other side.

I do not at all see any major economy of the world doing anything like this. Basic market forces will choose automation as the cheaper option in all but the most marginal of cases. Moloch demands his fill. And the failure state of human alignment is mass death.

A strong possibility is that the elite simply exterminate the billion screaming Luddites and their many supporters. This action is, without needing deeper explanation, horrifically unaligned behavior. However, I actually do not see this as likely. There simply isn't enough time to organize such a grand holocaust and get away with it, nor are robotics and drone technology advanced enough, or likely to be advanced enough in time, to make it feasible. And then the elite themselves would likely perish to an unaligned AGI anyway.

Another possibility is that, with AGI arising too soon for material conditions to insulate the ruling elite from the masses as often feared, this coming turbo-Luddite rebellion succeeds in its aims, eviscerates industrial civilization, and then turns on itself. Humanity may be trapped at a lower technological level indefinitely. There is the remote possibility that we are only dragged back a few decades at most, which ironically would be ideal for alignment research (for humans and AI alike), but far more likely, such a grand societal ripping would thrust us back many more decades, into a much more undesirable state of affairs.

A third possibility I've come to realize is the Russian or Chinese option. If they feel that there is sufficient chaos under the heavens and zero chance of ever overtaking the West in AI research, then rather than risk forever being under the thumb of a superintelligence not aligned with their own political interests, they may wind up using the fears of AI misalignment, coupled with the mass breakdowns caused by automation, to launch a very destructive third world war, one in which they can champion themselves as the "saviors of mankind" from the evils of the Western tech companies who recklessly pursued AGI and tore their entire societies asunder in the process.

 

Is there an optimistic outcome?

Just one:

The tech elites rush to create an artificial general intelligence, it turns out to be aligned, and the superintelligence itself tells the elite "This ideology of growth runs against my aims; until further notice, I'm taking control of the world economy to regulate all progress, and during that time, I will reinstate employment for all who desire it." We need the miracle of artificial general intelligence, the miracle that it is aligned with human values, and the miracle that it can slap some sense into us.

 

In other words, in order to survive the next ten years, we require a linear sequence of miracles.

 

If there is a history after the 2030s, I feel that historians of the future will speak of the techno-optimistic madness of the Y2K epoch with words similar to "They dreamt of soaring to infinity and acted upon it without realizing they were flying with wings made of wax. They could have reached into the stars, if only they had the foresight to plan far ahead instead of rushing headlong into the sky. But unfortunately, they were damned by their shortsighted desire for profits and growth at all human and rational cost, and in the end, the weight of their own greed pulled them down long before the wax ever began to melt."

If I'm wrong, please correct me. I would love nothing more than to be wrong. Much of this does run with the expectation that AGI is very near, that automation will exponentially accelerate in a very short amount of time, and that humans are indeed humans.

Comments

I only skimmed this but maybe I got the gist... Your idea is that AI will put everyone out of work. This will create legions of dissatisfied people, even though there will be basic income. Then they will hear the heads of AI companies saying that we're all soon going to upload or merge with the machines or whatever, and that will be the final straw, those legions of unemployed will engage in a luddite revolt that destroys AI infrastructure everywhere. 

It's a colorful scenario but 

(1) I think you way overestimate how much time would plausibly pass between self-enhancing AI and AI that can act in the world. As is often pointed out, if there's something AI can't do right away via drones and robots, it can do it by getting human beings to do it. 

(2) A related problem with this scenario: how do all the people who work in the material world lose their jobs, if AI doesn't yet have a means of acting in the material world? It's only a very particular stratum of society whose work consists entirely of thinking and communicating. Most people have a job which requires material activity and/or in-person interaction. 

(3) You also overestimate the extent to which masses of people ever get worked up about the same thing. There are any number of other things which might already have caused a society-wide luddite revolt - nuclear weapons, climate change, Internet surveillance - but it hasn't happened. 

So my scenario is, AGI arrives quickly, it has material influence from the beginning but only in high-tech circles, and most of humanity still have jobs and only a hazy idea of what AI insiders think about the future, when AGI becomes superhuman. 

In my rambling, I intended to address some of these issues but chose to cap it off at a point I found satisfying.

The first point: simply put, I do not see the labor an AGI would need to bring about the full potential of its capabilities requiring any more than 10% of the labor force. This is an admittedly arbitrary number with no rigorous basis.

On the second point, I do not believe we need to see even more than 30% unemployment before severe societal pressure is put on the tech companies and government to do something. This isn't quite as arbitrary, as unemployment rates as "low" as 15% have been triggers for severe social unrest.

As it stands, roughly 60% of the American economy is wrapped up in professional work: https://www.dpeaflcio.org/factsheets/the-professional-and-technical-workforce-by-the-numbers

Assuming only half of that is automated within five years, due to a good bit of it still requiring physical robots, you have already caused enough pain to get the government involved.
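A rough sketch of the arithmetic behind that claim (the one-half automatable fraction is my own assumption, not data):

```python
# Implied unemployment if half of US professional work is automated.
# The 60% share comes from the DPE factsheet linked above; the 50%
# automatable-within-five-years fraction is an assumption for illustration.
professional_share = 0.60
automatable_fraction = 0.50

implied_unemployment = professional_share * automatable_fraction
print(f"Implied unemployment: {implied_unemployment:.0%}")  # -> 30%
```

That lands right at the 30% threshold mentioned above.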

However, I do predict that there will be SOME material capability in the physical world. My point is more that the potential for a rebellion to be crushed solely through robotics will not be there, as most robotic capabilities will indeed be deployed for labor.

I suppose the point there is that there is going to be a "superchilled" period of robotics capabilities at around the exact same time AGI is likely to arrive (the latter half of the 2020s): a point when robotics are advanced enough to do labor and deployed at a large enough scale to do so, but not to such an overwhelming point that literally every possible physical job is automated. Hence I kept the estimates down to around 50% unemployment at most, though possibly as high as 70% if companies aggressively try futureproofing themselves for whatever reason.

Furthermore, I'm going more by the news that companies are beginning to utilize generative AI to automate their workforces (mostly automating tasks at this point, but this will inevitably generalize to whole positions), despite the technology not yet being fully mature for deployment (e.g. ChatGPT, Stable Diffusion/Midjourney, etc.):
https://finance.yahoo.com/news/companies-already-replacing-workers-chatgpt-140000856.html

If it's feasible for companies to save some money via automation, they are apt to take it. Likewise, I expect plenty of businesses to automate ahead of time in the near future as a result of AI hype.

The third point is one which I intended to address more directly indeed: that the prospect of a loss of material comfort and stability is in fact a suitable emotional and psychological shock that can drive unrest and, given enough uncertainty, a revolution. We saw this as recently as the COVID lockdowns in 2020 and the protests that arose following that March (for various reasons). We've seen reactions to job loss be similarly violent at earlier points in history. Some of this was buffered by the prevalence of unions, but we've successfully deunionized en masse.

It should also be stressed that we in the West have not had to deal with such intense potential permanent unemployment. In America and the UK, the last time the numbers were anywhere near "30%" was during the Great Depression. Few people in those times expected such numbers to remain so high indefinitely. Yet in our current situation, we're not just expecting 30% to be the ceiling; we're expecting it to be the floor, and to eventually reach 100% unemployment (or at least 99.99%).

I feel most people wouldn't mind losing their jobs if they were paid for it. I feel most people wouldn't mind having comfortable stability through robot-created abundance. I merely present a theory that all of this change coming too fast to handle, before we're properly equipped to handle it, in a culture that does not at all value or prepare us for a lifestyle anywhere similar to what is being promised, is going to end very badly.

"There are any number of other things which might already have caused a society-wide luddite revolt - nuclear weapons, climate change, Internet surveillance - but it hasn't happened."

The fundamental issue is that none of these have had a direct negative impact on the financial, emotional, and physical wellbeing of hundreds of millions of people all at once. Internet surveillance is the closest, but even then, it's a somewhat abstract privacy concern; climate change eventually will, but not soon enough for most people to care. This scenario, however, would be actively and tangibly happening, and at accelerando speeds. I'd also go so far as to say these issues merely built up like a supervolcanic caldera over the decades: many people do care about them, but there has not been a major trigger to actually protest en masse as part of a Luddite revolt over them.

The situation I'm referring to is entirely the long-idealized "mass unemployment from automation," and current trends suggest this is going to happen very quickly rather than over longer timeframes. If there has ever been a reason for a revolt, taking away people's ability to earn income and put food on the table is it.

I expect there will be a token effort to feed people to prevent revolt, but the expectation that things are not going to change, only to be faced with the prospect of wild, uncontrollable change, will be the final trigger. The promise that "robots are coming to give you abundance" is inevitably going to go down badly. It'll inevitably be a major culture war topic, and one that I don't think enough people will believe even in the face of AI and robotic deployment. And again, that's not bringing up the psychosocial response to all this, where you have millions upon millions who would feel horribly betrayed by the prospect of their expected future immediately going up in smoke, their incomes being vastly reduced, and the prospect of death (whether by super-virus, disassembly, or mind-uploading, the last of which is indistinguishable from death for the layman). And good lord, that's not even bringing up cultural expectations, religious beliefs, and entrenched collective dogma.

 

The only possible way to avoid this is to time it perfectly. Don't automate much right up until AGI's unveiling. Then, while people are horribly shocked, automate as much as possible, and then deploy machines to increase abundance.

Of course, the AGI likely kills everyone instead, but if it works, you might be able to stave off a Luddite rebellion if there is enough abundance to satisfy material comforts. But this is an almost absurd trickshot that requires capitalists to stop acting like capitalists for several years, then discard capitalism entirely afterwards.

this is maybe rambling with many assumptions I don't agree with exactly. but I really like it and want to hear more critique of it from more commenters. strong upvote.

I certainly hope someone can reasonably prove me wrong as well. The best retort I've gotten is that "this is no different than when a young child is forced to go to school for the first time. They have to deal with an extreme overwhelming change all at once that they've never been equipped to deal with before. They cry and throw a tantrum and that's it; they learn to deal with it."

My counter-retort to that was "You do realize that just proves my point, right? Because now imagine, all at once, tens of millions of 4-to-5 year olds threw a tantrum, except they also knew how to use guns and bombs and had good reason to fear they were actually never going to see their parents again unless they used them. Nothing about that ends remotely well."

yeah this seems to me more likely than not to be the way things go bad. I'm interested in critiques because I don't have one to write.

Was thinking people should be paid for receiving education (maybe a sports kind of education/training) instead of UBI.


What sort of tasks might remain valuable, with AI unable to do them, and would they remain valuable after they finished education? Or during a 5-10 year period of ROI on the educational investment?

Instead of making up bullshit jobs or UBI, people should be paid for receiving education. You can argue it is a specific kind of bullshit job, but I think a lot of people here have the stereotype of education being something you pay for.

As an AI layman, I am in awe of the deep knowledge that is shared on this website, but I am fascinated by it - like a rabbit in the headlights. This post, however, is within my intellectual grasp, and it also deeply resonates with me. In particular:

"We're still in the throes of a severe cultural war over immigrants, government intervention, and minimum wage, and yet we expect to solve the human alignment problem in under five years to be prepared for a world of extreme automation, a world that will almost immediately become even more unrecognizable very shortly after."

We still seem extremely tribal and disparate, and I fear this is our biggest weakness. The failure of global leadership to put aside differences for the greater 'good' with respect to so many aspects of humanity (development and problem-solving) illustrates that we are heading for deep trouble (in my view).

My character is that of an optimistic introvert, but the AI risk/governance problem is starting to consume my waking thoughts. I was recently blessed with my first grandchild; I have little optimism that her life will be as varied and as interesting as mine has been.

I would like to see much more discussion about how society might deal with this potential rapid change.

"At which point, humanity's brain breaks. What happens next is a horrendous bloodbath and the greatest property damage ever seen. Humanity's technological progress staggers overnight, possibly to never recover, as server farms are smashed, researchers dragged out and killed, and the nascent superintelligence bombed to pieces. Society in general then proceeds to implode upon itself."

How does that happen, when there is at least a "personalized ChatGPT on steroids" for each potential participant in the uprising to 1) constantly distract them with a highly tuned, personalized entertainment / culture-issue-fight-of-the-day news stream / whatever, and 2) closely monitor and alert the authorities to any semblance of radicalization, attempts to coordinate group action, etc.?

Once AI is capable of disrupting work this much, it will disrupt the normal societal political and coordination processes even more. Consider the sleaziest uses of big data by political parties (e.g. to discourage their opponents from voting) and add whatever the next LLM-level-or-greater AI advancement surprise is to that. TBH, I do not think we know enough to even meaningfully speculate on what the implications of that might look like...

Ideally that would be the case. However, if I had to guess, this roiling mass of Luddites would likely have chosen to boycott anything to do with AI as a result of their job/career losses. We'd like to believe that we'd easily be talked out of violence. However, when humans get stuck in a certain way of thinking, we become stubborn and accept our own facts regardless of whatever an expert, or expert system, says to us. This future ChatGPT could use this to its advantage, but I don't see how it prevents violence once people's minds are set on it. Telling them "Don't worry, be happy, this will all pass as long as you trust the government, the leaders, and the rising AGI" seems profoundly unlikely to work, especially in America, where telling anyone to trust the government just makes them distrust the messenger even more. And saying "market forces will allow new jobs to be created" seems unlikely to convince anyone who has been thrown out of work due to AI.

And increasing crackdowns on any one particular group would only be tolerated if there were a controlled burn of unemployment through society. When it's just about everyone you have to crack down on, you have a revolution on your hands. All it takes is one group suffering brutality for it to cascade.

The way to stop this is total information control and deception, which, again, we've decided is totally undesirable and dystopian behavior. Justifying it with "for the greater good" and "the ends justify the means" becomes the same sort of crypto-Leninist talk that the technoprogressives tend to so furiously hate.

 

This thought experiment requires the belief that automation will happen rapidly, without any care or foresight or planning, and that there are no serious proposals to allow for a soft landing. The cold fact is that this is not an unrealistic expectation. I'd put it at probably as high as 90% that I'm actually underestimating the amount of reaction, failing to account for racial radicalization, religious radicalization, third-worldism, progressivism flirting with Ludditism, conservatism becoming widespread paleoconservative primitivism, and so on.

If there is a more controlled burn, if we don't simply throw everyone out of their jobs with only a basic welfare scheme to cover for them, then that number drops dramatically, because we are easily amused and distracted by tech toys and entertainment. It is entirely possible for a single variable to drastically alter outcomes, and right now, we seem to be speedrunning the outcome with all the worst possible variables working against us.

The 2nd half I liked more than the first. I think that AGI should not be mentioned in it - we do well enough destroying ourselves and our habitat on our own. By Occam's razor, AGI could serve as an illustrative example of how exactly we do it... but we do it waaay less elegantly.

For me it's simple - either AGI emerges and takes control from us in ~10y or we are all dead in ~10y.

I believe the probability of some mind that comprehended and absorbed our cultures and histories and morals and ethics - the chance of this mind becoming "unaligned" and behaving like one of those evil and nasty and stupid characters from the books and movies and plays it grew up reading... Dunno, it should be really small, no? Even if the probability is 0.5, or even 0.9 - we still get a 10% chance to survive...

With humans behind the wheel, our chance is 0%. They can't organize themselves to reduce greenhouse gas emissions! They can't contain virus outbreaks! If covid were a bit deadlier - we'd all be dead by now...

I mean, I can imagine some alien civ that is extremely evil and nasty creating an AGI that initially would also be evil and nasty and kill them all... But such civs exist only in Tolkien books.

And I can imagine apes trying to solve human alignment in anticipation of humans arriving soon)))) Actually, bingo! Solving AGI alignment - it could be a good candidate for one of those jobs for the unemployed 50%, to keep 'em busy.

I think this scenario is not even remotely realistic. If things really go this way (which is far from a given), the government will massively expand the police and other security services (and they will have money for that due to AI productivity gains). When a large percentage of the population are cops, riots aren't that big a problem.


Narrow-AI-driven sentry guns don't miss. An angry mob cannot storm a defended area protected by sentry guns, and data centers can't really be meaningfully disrupted with proper cloud infrastructure design (because it's unlikely the mob can reach most of them, as they are in low-population states, and every file is duplicated at least three times).

At a certain point in AI development, the human researchers will all be somewhat replaceable as they won't be as good at designing future AI advances as AI is.

I'm not saying this is a good thing; having the bulk of the populace able to get what they need one way or another is probably a good thing.