cesspool

Comments
The Doomers Were Right
cesspool · 15h · 91

That's comparing apples to oranges.  There are doomers and doomers.  I don't think the "doomers" predicting the Rapture or some other apocalypse are the same thing as the "doomers" predicting the moral decline of society.  The two categories overlap in many people, but they are distinct, and I think it's misleading to conflate them.  (Which is kind of a critique of the premise of the article as a whole--I would put the AI doomers in the former category, but the article only gives examples from the latter.)

Historically, existential-risk doomers have usually been crazy, and they've never been right yet (in the context of modern society, anyway--I suppose if you were an apocalypse doomer in 1300s China saying that the Mongols were going to come and wipe out your entire society, you were pretty spot on), but that doesn't mean they are always wrong or totally off base.  It's completely rational to be concerned about doom from a nuclear war, for example, even though it hasn't happened yet.  Whether AI risk is crazy "Y2K/Rapture" doom or feasible "nuclear war" doom is the real debate, and this article doesn't really contribute anything to it.

What this article does a good job of is illustrating how "moral decline" doomers, as opposed to "apocalypse" doomers, are often proved technically correct by history.  I think what both they and this article miss is that they often see events as causes of the so-called decline, when they're actually milestones in an already-existing trend.  Legalizing gay marriage didn't cause other "degenerate" sexual behavior to become more accepted in society--we legalized gay marriage because we had already been moving away from the Puritanical sexual mores of the past towards a more liberated attitude, and this was just one more milestone in that process.  Now that's not always true--the invention of the book, and later the smartphone, absolutely did cause a devaluing of the ability to memorize and recite knowledge.  And sometimes it's a little bit of both, where an event is both a symptom of an underlying trend and also contributes to accelerating it.  But I really like how the article acknowledges that they could be right even if "doom" as we think of it today did not occur, because the values that were important to them were lost--

Probably the ancients would see our lives as greatly impoverished in many ways downstream of the innovations they warned against. We do not recite poetry as we once used to, sing together for entertainment, roam alone as children, or dance freely in the presence of the internet's all-seeing eyes. Less sympathetic would be ancient's sadness at our sexual deviances, casual blasphemy or so on. But those were their values.

We laugh at them as prudes for how appalled they would be at our society, where homosexuality, polyamory, weird fetishes, etc. are all more or less openly discussed and accepted.  But think about what it would feel like if you saw your own society trending towards one where, say, pedophilia was becoming less of a taboo.  It doesn't matter whether that's right or wrong; it's the visceral response that most people have to the idea that you need to understand.  That's what it feels like to be a culturally conservative doomer watching their society experience value drift.  People today like to think that our values are somehow backed up by reality in a way that isn't true of other past or present value systems, but guess what?  That's what it feels like to have a value system.  Everyone, everywhere, in all times and places has believed that, and the human mind excels at no task more than coming up with rationalizations for why your values are the right ones and opposing values are wrong.


Overall I think this article is pretty insightful about the "moral decline" type of doomers, just completely unrelated to the question of AI existential risk that brought it up in the first place.

How we'll make all world leaders work together to make the world better (Expert-approved idea)
cesspool · 13d · 10

Before you get too excited about the idea, let's think for a minute.  What would world leaders--notoriously a ruthless, sociopathic, and morally unscrupulous bunch, even when they're ostensibly in charge of liberal democracies--be able to reach across their cultural boundaries and agree on?

Peace?  No way.  Everyone has too many outstanding problems, like land disputes, that they want to reserve the option of correcting by war.

An end to poverty?  For whom?  For any leader in the developed world, agreeing on human plenty and prosperity as supreme values that transcend national borders would mean giving up some of their own resources to people in the third world who are clearly suffering more.  Everyone's resources are already stretched thin with their existing projects as it is, so that's a total non-starter.

Property rights?  Maybe, as long as you didn't get specific enough to make it mean anything.  Any language that implied it was wrong for a government to take property from its citizens on any pretext it liked is certainly out.

Technological advancement for the betterment of humanity?  Sure, but everyone's doing that already.  Even if all the world leaders got together and solemnly swore to focus their efforts on pushing the limits of science and disseminating their learning to the rest of the world, they would...  keep doing exactly what they're doing now, keeping secrets, only publishing what is convenient and making excuses about national security concerns every time they get called out about it.

So what could they agree on?

Law and order?  Now we're getting closer, but that phrasing is problematic.  What if this whole concept of "international law" gets applied to tell a world leader how they can treat their own people?  No one wants that.  All our world leaders are quick to call each other out on their various human rights abuses, but we've all got skeletons (or repressed minority groups, as the case may be) in our own closets.  So what part of "law and order" could all world leaders agree on?

I honestly believe that if they had a summit like this, the outcome would be for all the leaders of the world to come together and formally agree that the supreme moral value of humanity is obedience and submission to the state.  That's the one thing that is in line with all of their desires, whether they want to admit it or not.  The leaders of America and a few others with a freedom-loving image to keep up would have to make a show of complaint, but even they could rationalize it away.

A Medium Scenario
cesspool · 3mo · 41

Not to put words in the author's mouth, but when they said "We go gently...", I don't think they meant "go" as in become extinct, at least not any time soon.  I took it to mean "go" into obscurity and stagnation instead of continuing to advance technologically until we're building Dyson spheres, colonizing other planets, and doing all the science fiction stuff most people believe humanity will eventually do.  In that scenario, we would keep living on aimlessly for many millennia until some asteroid or other cosmic event took us out, because we had never advanced enough to handle it or to have colonies as a backup.

I agree with you that we're unlikely to stop reproducing just because many humans get addicted to watching and interacting with content fed to us by a perfect algorithm for most of our waking hours.  Raising a family seems to be one of those things that brings intrinsic meaning and pleasure to many people, so I'd expect to see more of it, not less--most people who choose not to have kids today do so because they don't have enough time or money in today's economy and work environment, and in this scenario all of those problems are solved.  The scenario assumes that the AI-fueled content machine would be so addictive that basically all humans would forsake all other pursuits and live like the people in WALL-E.  I don't think that's necessarily true, and if it isn't, we might see a population explosion that requires our AI-enabled oligarchic overlords to take control measures to keep it manageable.

Far from humanity going extinct, I think one possible catastrophe in this future, if AI advances roughly along these lines, is a Malthusian scenario: the population grows far beyond current levels thanks to AI optimizing the distribution of resources to make that possible, but becomes so dependent on complex AI logistics to provide everyone's needs that any slight hiccup in the distribution network can quickly cause a famine that kills millions.

This scenario seems to leave enough room for AI alignment, and for humans staying in the driver's seat on big-picture issues, that the AI wouldn't intentionally let us go extinct.  We can hope.

A Medium Scenario
cesspool · 3mo · 10

You could be right that the limit based on overall compute applies to other approaches to AI just as much as to LLMs.  Speculating about the future of AI is always a little frustrating because ultimately we won't know how to make AGI/ASI until we have it (and we can't even agree on how we will know it when we see it).  The way I approach the problem is by looking at what we do know: at this point in time, we only know of one system in existence that we can all agree meets the definition of "general intelligence", and that is the human brain.  Because of how little we still understand about how intelligence actually works, I think the most likely path to AGI--the one resting on the fewest assumptions about things we don't know--is a "brain-like AGI".  That's basically Steven Byrnes's view, and I think his arguments are very compelling.  If you accept that view, then I think we end up with something like your scenario anyway, at least for a while, until the brain-like AGI comes to fruition.

A Medium Scenario
cesspool · 4mo · 20

AI-2027 and a lot of other AI doom forecasts seem to rest on a big assumption--that LLMs are capable of achieving some form of AGI or superintelligence, and that the progress we see in LLMs getting better at doing LLM things is equivalent to progress towards humanity developing AGI or ASI as a whole.  This is not necessarily true, though it can be tempting to believe it is, especially when you're watching LLMs get better at conversing and coding and taking over people's jobs in real time.  I think a lot of that progress is totally tangential to the task of creating AGI/ASI.  Giving an LLM more reinforcement learning and more fine-tuned prompting so it says fewer politically incorrect things and makes fewer coding mistakes is a huge step towards making it useful in the workplace, but it is not necessarily a step towards general intelligence or superintelligence.

I really like this scenario because it does not make that assumption.  It is very conservative: every prediction it makes is well grounded in tech development trends we can see happening now, or in forces that already exist and motivate decision-makers today, rather than relying on assumptions about huge breakthroughs that haven't happened yet.  One of the strongest biases I see persistently in the tech community--and I'm no exception, I catch myself with it all the time--is a bias towards optimism* in believing a new technology will develop and radically transform society very soon, whether it's self-driving cars, virtual reality, cryptocurrency, or AI.  I think this model is as free of that bias as any AI-doom-prediction scenario can possibly be.

That's not to say I don't believe AGI/ASI is in our future, or that this model even rules it out.  I am no expert, but if I had to choose my most likely prediction based on what I know, it would be something like this model, with LLMs hitting a plateau before they are able to achieve general intelligence, except that at some unspecified point in the future--could be in 1 year, could be 10, could be 100--ASI gets dropped on humanity out of nowhere, because while we were all busy freaking out about ChatGPT 6 or 7 taking our jobs, someone else was quietly developing real AI in a lab using a "brain-in-a-box-in-a-basement" model that has nothing to do with today's LLMs.

It may be true that LLMs are going to radically transform society and the workforce, and also true that ASI is something humanity will build and carries the existential risks we're all familiar with, but those two things may turn out to be totally unrelated.  I don't think that possibility gets discussed enough.  If it is true, that makes AI alignment a much more difficult job, and most of our efforts to "align" the LLMs that we have completely futile.

*I mean making optimistic assessments of how fast technology will develop and transform the world--not necessarily optimistic about the outcomes being good.  Believing that ASI will be fully developed and kill us all a week from now would still be an example of that "optimistic" bias in this context.
