Luke Muehlhauser writes:

Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. However, I often detect what Bryan Caplan has called a “missing mood”; a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.

Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table, and takes a deep breath. He looks sad. Dejected. Fatalistic. Then he says:

Read more

21 comments

One of his main steps was founding OpenAI, which looks like a questionable decision now from an AI Safety standpoint (as they push the capabilities of language models and reinforcement learning forward while driving their original safety team away) and looked fishy to me even at the time (simply because more initiatives make coordination harder).

I agree that Musk takes AI risk seriously, and I understand the "try something" mentality. But I suspect he founded OpenAI because he didn't trust a safety project he didn't have his hands on himself; then later he realized OpenAI wasn't working as he hoped, so he drifted away to focus on Neuralink.

I feel like the linked post is extolling the virtue of something that is highly unproductive and self-destructive: using your internal grim-o-meter to measure the state of the world/future. As Nate points out in his post, this is a terrible idea. Maybe Musk can be constantly grim while being productive on AI Alignment, but from my experience, people constantly weighed down by the shit that happens don't do creative research -- they get depressed and angsty. Even if they do some work, they burn out way more often.

That being said, I agree that it makes sense for people really involved in this topic to freak out from time to time (happens to me). But I don't want to make freaking out the thing that every Alignment researcher feels like they have to signal. 

My best guess is that this "missing mood" effect is a defensive reaction to a lack of plausible actionable steps: upon first being convinced people get upset/worried/sad, but they fail to find anything useful to do about the problem, so they move on and build up some psychological defenses against it.

Maybe? But if they are doing AI research, it shouldn't be too hard for them to either (a) stop doing AI research, and thereby stop contributing to the problem, or (b) pivot their AI research more towards safety rather than capabilities, or at least (c) help raise awareness about the problem so that it becomes common knowledge, everyone can stop all together, and/or suitable regulation can be designed.

Edit: Oops, my comment shouldn't be a direct reply here (but it fits into this general comment section, which is why I'm not deleting it). I didn't read the parent comment that Daniel was replying to above and assumed he was replying in a totally different context (Musk not necessarily acting rationally on his non-missing mood, as opposed to Daniel talking about AI researchers and their missing mood.) 
 

--


Yeah. I watched a Q&A on YouTube after a talk by Sam Altman, roughly a year or two ago, where Altman implied that Musk had wanted some of OpenAI's top AI scientists because Tesla needed them. It's possible that the reason he left OpenAI was simply related to that, not to anything about strategic thinking about AI futures, missing moods, etc.

More generally, I feel like a lot of people seem to think that if you run a successful company, you must be brilliant and dedicated in every possible way. No, that's not how it works. You can be a genius at founding and running companies and making lots of money without necessarily being good at careful reasoning about paths to impact other than "making money." Probably these skills even come apart at the tails.

 

Agreed that it shouldn't be hard to do that, but I expect that people will often continue to do what they find intrinsically motivating, or what they're good at, even if it's not overall a good idea. If this article can be believed, a senior researcher said that they work on capabilities because "the prospect of discovery is too sweet".

[-] hg00

An interesting missing mood I've observed in discussions of AI safety: When a new idea for achieving safe AI is proposed, you might expect that people concerned with AI risk would show a glimmer of eager curiosity. Perhaps the AI safety problem is actually solvable!

But I've pretty much never observed this. A more common reaction seems to be a sort of an uneasy defensiveness, sometimes in combination with changing the subject.

Another response I occasionally see is someone mentioning a potential problem in a manner that practically sounds like they are rebuking the person who shared the new idea.

I eventually came to the conclusion that there is some level on which many people in the AI safety community actually don't want to see the problem of AI safety solved, because too much of their self-concept is wrapped up in AI safety being a super difficult problem. I highly doubt this occurs on a conscious level, it's probably due to the same sort of subconscious psychological defenses you describe, e.g. embarrassment at not having seen the solution oneself.


The idea of a missing mood, from following the link to Bryan Caplan's article, seems to amount to two ideas:

  1. "I think it has more costs than other people think, so even if someone thinks the benefits outweigh the costs, if they're not taking the costs seriously enough, they have a missing mood."
  2. "I think it has more benefits than other people think, so even if someone thinks the costs outweigh the benefits, if they're not taking the benefits seriously enough, they have a missing mood."

These are, of course, two sides of the same coin and have the same problem: You're assuming that the first half of your position (costs in case 1, benefits in case 2) is not only correct, but so obviously correct that nobody can reasonably disagree with it; if someone acts as if they don't believe it, there must be some other explanation. This is better than assuming your entire position is correct, but it's still poor epistemic hygiene. For instance, both the military hawks example (case 1) and the immigration example (case 2) fail if your opponent doesn't value non-Americans very much, so the costs or benefits, respectively, are lower.

Beware of starting with disagreement and concluding insincerity.

I'll make an analogy here so as to get around the AI-worship-induced gut reactions:

I think most people are fairly convinced there isn't a moral imperative beyond their own life. That is, even if behaving as though your own life is the ultimate driver of moral value is wrong and ineffective in practice, from a logical standpoint it is the ultimate driver: once your conscious experience ends, everything ends.

I'm not saying this is certain. It may be that the line between conscious states is so blurry that continuity between sleep and wakefulness is basically zero, or no greater than the continuity between you and other completely different humans (who will be alive even once you die and will keep on flourishing). It may be that there is a ghost in the machine under whatever metaphysical framework you want... but if I had to take a bet, I'd say something like a 15%, 40%, or 60% chance that once you close your eyes it's over, the universe is done for.

I think many people accept this viewpoint, but most of them don't spend even a moment thinking about anti-aging, and even those like myself who do aren't too concerned about death in a "mood" sense. Why would you be? It's inevitable. Like, yeah, your actions might contribute to averting death by 0.x% if you're very lucky, and so you should pursue that area because... well, nothing better to do, right? But it makes no sense to concern oneself with death in an emotional way, since it's likely coming anyway.

After all, the purpose of life is living, and if you're not living because you're worrying about death, you lost. Even in the case where you were able to defeat death, you still lost: you didn't live, or, less metaphorically, you lived a life of suffering, or of unmet potential.

Nor does it help to be paralyzed by the fear of death every waking moment of one's life. It will likely make you less able to destroy the very evil you are opposing.

Such is the case with every potential horrible inevitability in life: even if it is "absolute" in its badness, being afraid of it will not make it easier to avoid, and it might ultimately defeat the purpose of avoiding it, which is the happiness of you and the people you care about, since all of those will be more miserable if you are paralyzed by fear.

So even if whatever fake model you had assigned a 99.9% chance of being destroyed by HAL or whatever 10 years from now, it would still be the most sensible course of action not to get too emotional about the whole thing.

[-] [anonymous]

Is death by AI really any more dire than the default outcome, i.e. the slow and agonizing decay of the body until cancer/Alzheimer's delivers the final blow?

Senescence doesn't kill the world.

And it doesn't expand into the universe to kill every other life.

[-] [anonymous]

How strange for us to achieve superintelligence where every other life in the universe has failed, don't you think?

Well, that's just a variation of the Fermi paradox, isn't it? What's strange is that we don't observe any sign of alien sentience, superintelligence or not. I guess, if we're in the zoo hypothesis, then the aliens will probably step in and stop us from developing a rogue AI (anytime now). But I wouldn't pin my hopes for life in the universe on it.

[-] [anonymous]

It was a rhetorical question, there is nothing strange about not observing aliens. I'm an avid critic of the Fermi paradox. You simply update towards their nonexistence and, to a lesser extent, whatever other hypothesis fits that observation. You don't start out with the romantic idea that aliens ought to be out there, living their parallel lives, and then call the lack of evidence thereof a "paradox".

The probability that all sentient life in the observable universe just so happens to invariably reside in the limbo state between nonexistence and total dominance is vanishingly small, to a comical degree. Even on our own Earth, sentient life only occupies a small fragment of our evolutionary history, and intelligent life even more so. Either we're alone, or we're in a zoo/simulation. 

Either way, Clippy doesn't kill more than us.

But it is surprising that life could only appear on our planet, since it doesn't seem to have unique features. If we're alone, that probably means we're just first. If we just blow ourselves up, another sentient species will probably appear someday somewhere else with a chance not to mess up. But an expanding unaligned AI will wipe out all chance of life appearing in the future. That's a big difference.

[-] [anonymous]

But it is surprising that life could only appear on our planet, since it doesn't seem to have unique features.

What does "could appear" mean here? 1 in 10? 1 in a trillion? 1 in 10^50?

Remember we live in a tiny universe with only ~10^23 stars.

[-] [anonymous]

Moloch is to the world what senescence is to a person. It, too, dies by default.

To an individual human, death by AI (or by climate catastrophe) is worse than old age "natural" death only to the extent that it comes sooner, and perhaps in being more violent.  To someone who cares about others, the large number of looming deaths is pretty bad.  To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.

To someone who loves only abstract intelligence and quantifies by some metric I don't quite get, AI may be just as good as (or better than) people.

[-] [anonymous]

To an individual human, death by AI (or by climate catastrophe) is worse than old age "natural" death only to the extent that it comes sooner, and perhaps in being more violent. 

I would expect death by AI to be very swift but not violent, e.g. nanites releasing neurotoxin into the bloodstream of every human on the planet like Yudkowsky suggested.

To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.

Like I said above, I expect the human species to be doomed by default due to lots of other existential threats, so in the long term superintelligent AI has only upsides.