This is a linkpost[1], but I don't think the author would be happy to get commentary from people here on their blog, so I've copied the post here. The link is at the bottom, if you really really want to comment on the original; please don't, though.


red: i suppose i should at least give him credit for acting on his beliefs, but my god i am so tired of being sucked into the yudkowsky cinematic universe. no more of this shit for me. i am ready to break out of this stupid fucking simulation.

blue: what did he even think was going to happen after the time piece? this is the kind of shit that makes people either laugh at him or start hoarding GPUs. it’s not as if he’s been putting any skill points into being persuasive to normies and it shows. wasn’t he the one who taught us about consequentialism?

(i thought initially that blue was going to disagree with red but no, blue is just mad in a different way)

https://twitter.com/mattparlmer/status/1641232557374160897?s=20 

red: it’s just insane to me in retrospect how much this one man’s paranoid fantasies have completely derailed the trajectory of my life. i came across his writing when i was in college. i was a child. this man is in some infuriating way my father and i don’t even have words for how badly he fucked that job up. my entire 20s spent in the rationality community was just an endless succession of believing in and then being disappointed by men who acted like they knew what they were doing and eliezer fucking yudkowsky was the final boss of that whole fucking gauntlet.

[Note: link intentionally non-clickable, and author intentionally cropped [edit: whoops guess it wasn't? uh sorry], to respect author's separation from this community. If you want to see the whole tweet thread, which is in fact very interesting, here's the link: https://twitter.com/QiaochuYuan/status/1542781304621518848?s=20]

blue: speaking of consequentialism the man dedicated his entire life to trying to warn people about the dangers of AI risk and, by his own admission, the main thing his efforts accomplished was get a ton of people interested in AI, help both openAI and deepmind come into existence, and overall make the AI situation dramatically worse by his own standards. what a fucking clown show. openAI is his torment nexus.

yellow: i just want to point out that none of this is actually a counterargument to -

red: yellow, shut the FUCK up -

yellow: like i get it, i get it, okay, we need to come to terms with how we feel about this whole situation, but after we do that we also need to maybe, like, actually decide what we believe? which might require some actual thought and actual argument?

red: if i never have another thought about AI again it’ll be too soon. i would rather think about literally anything else. i would rather think about dung beetles.

yellow: heh remember that one tweet about dung beetles -

https://twitter.com/SarahAMcManus/status/1119021587561369602?s=20 

red, blue: NOT THE TIME.

yellow: it’s a good tweet though, you know i love a good tweet.

red: we all love a good tweet. now. as i was saying. the problem is eliezer fucking yudkowsky thinks he can save the world with fear and paranoia and despair. in his heart he’s already given up! the “death with dignity” post was a year ago! it’s so clear from looking at him and reading his writing that whatever spark he had 15 years ago when he was writing the sequences is gone now. i almost feel sorry for him.

blue: the thing that really gets my goat about the whole airstrikes-on-datacenters proposal is it requires such a bizarre mix of extremely high and extremely low trust to make any sense - on the one hand, that you trust people so little not to abuse access to GPUs that you can’t let a single one go rogue, and on the other hand, that you trust the political process so much to coordinate violence perfectly against rogue GPUs and nothing else. “shut down all the large GPU clusters,” “no exceptions for anyone, including governments and militaries” - none of the sentences here have a subject. who is supposed to be doing this, eliezer???

red: not that i should be surprised by this point but i think way too many people are being fooled by the fact that he still talks in the rationalist register, so people keep being drawn into engaging with his ideas intellectually at face value instead of paying attention to the underlying emotional tone, which is insane. there’s no reason to take the airstrikes-on-datacenters proposal at face value. all it does is communicate how much despair he feels, that this is the only scenario he can imagine that could possibly do anything to stop what he thinks is the end of the world.

blue: ugh i don’t even want to talk about this anymore, now i actually do feel sorry for him. if his inner circle had any capacity to stand up to him at all they’d be strong-arming him into a nice quiet retirement somewhere. his time in the spotlight is over. he’s making the same points in the same language now as he was 10 years ago. it’s clear he neither can nor wants to change or grow or adapt in any real way.

yellow: so what should everyone be doing instead? who should everyone be listening to if not eliezer?

red: i have no idea. that’s the point. eliezer’s fantasy for how this was gonna go was clearly explained in harry potter and the methods of rationality - a single uber-genius, either him or someone else he was gonna find, figuring out AI safety on their own, completely within the comfort of their gigantic brain, because he doesn’t trust other people. that’s not how any of this is gonna go. none of us are smart enough individually to figure out what to do. we do this collectively, in public, or not at all. all i can do is be a good node in the autistic peer-to-peer information network. beyond that it’s in god’s hands.

blue, yellow: amen.

  1. ^

I have to thank you. I was spiraling due to yudkowsky's writings. I even posted a question about what to do because I was paralyzed by the fear. This is a helpful post.

I will say - unfortunately, we are in a tight situation, and eliezer's approach to communicating it is a bit ... much. It is the case that humanity has to respond quickly, but I think that we can. Just don't think you're worried on your own, or that you need to be paralyzed. yudkowsky's approach to communicating causes that, and as I'm sure you know from your context in fighting, sometimes there are conflicts on earth. This conflict is between [humans and mostly-friendly ai] and [unfriendly ai], and there is in fact reason to believe that security resistance to unfriendly ai is meaningfully weak. I unfortunately do not intend to fully reassure, but we can survive this; let's figure out how. I think the insights about how to make computers and biology defensibly secure suddenly, well, do exist, but aren't trivial to find and use.

Also very relevant:


https://justathought.wiki/videos/2021/09/13/Lets_talk_about_soft_language_and_sensationalism 

https://www.youtube.com/watch?v=dUEQveTKH90 

Well, howdy there, Internet people. It's Beau again.

So today we're going to talk about words, the power of words,
words and their meaning, which we've kind of been doing most of last week.

And I'm going to give you some tips that I use to keep people engaged when they might otherwise shut down.

You know, I'm sure it's happened to you where you've been talking to somebody about a topic and then all of a sudden, boom, they just shut down, they're not listening anymore.

Over the last week, we have been talking a lot about words and what they mean. That's been the theme.

And even though the videos don't seem connected, they are by that element. The Biden mandate video, the video about the gas station, the word triage, the video about the word boy, the Bail Harris video, all of these had elements of it in it.

And it's in the middle of making these videos that I get this message.

"Appreciate your show. Thanks for being a voice of reason. One question, why are you so coy when talking about certain terms?"

And they actually list a few that they have realized are words I don't use.

"Why public health crisis? There are numerous public health issues that predate the current one," and they give a list of them.

"When using an indirect and imprecise term, you not only give the audience room to misunderstand your meaning, you also rule out discussing relevant details. When you say that law in Texas, when referring to the one deputizing vigilante bounty hunting citizens for a law too unconstitutional for the state to enforce, it could also refer to a new gun law or voter suppression measures.

"Sorry to be picky, but euphemisms undermine persuasive power and messages and limit discussion."

I used to 100% agree with this. I really did. I don't anymore, and I'm going to tell you why.

Yeah, there are words that I don't use on this channel, and they're not words you might expect. We talk about foreign policy and conflicts a lot on the channel. Ever heard the phrase "enemy troops"?

Opposition. That's what I say instead. It's not emotionally charged.

I use a lot of soft language.

Yeah, I say public health issue rather than the word that everybody else is using. So why? Because y'all told me to.

YouTube gives people who have channels a crazy amount of information in the analytics, like a whole lot of information, reams and reams of charts and graphs that you can go through. And if you do that, you can find out some really interesting stuff.

In fact, most people who complain about the algorithm, if they went through their analytics, they'd find the problem.

One of the most interesting graphs to me is one that shows the percentage of people who watch a video through minutes and seconds. It will give you the exact time stamp and the percentage of people watching at that point. I pay real close attention to this one.

What it tells me first is that if I can get you through the first 30 seconds of a video, you're probably going to watch the whole thing. The majority of people do. The overwhelming majority of people get through the "tell them" phase.

If you're writing an essay, you have an introduction, body, and conclusion, right? In normal conversation, tell them what you're going to tell them, tell them, tell them what you told them.

The main body - If you make it through the first 30 seconds, you will make it through the main body. Almost everybody, like 90%, something like that. And then it slowly starts to taper off during the conclusion. That's what a normal video looks like on that graph.

However, every once in a while, it'll be floating along at 90%, and then it will drop to 70 or high 60s. And invariably, I can go to that exact time in a video, and I will find an emotionally charged word. 

This list of words, it's always changing. If you go back to the beginning of our current public health issue, you'll see that I used the word pandemic back then. But then that "documentary" came out, and that word became divisive. It became polarized.

And if you used the wrong version of it, if you didn't use the play on words of it, people tuned out. In this case, quite literally, they would tune out. They'd turn off the video.

This happens in everyday conversation, too. If you use the wrong term in the wrong spot, people will shut down. So I use a whole lot of soft language, a whole lot. And theoretically, it keeps people engaged. It keeps people watching and therefore thinking, because I don't use emotional language.

I used to. I used to completely believe what this message said. I used a lot of inflammatory rhetoric, stuff like that. This was years ago.

And what I realized is that that was a really good way to get a group of people who already think the way you do. That's what that's useful for.

If you're trying to provoke independent thought, trying to provoke conversation, that strong rhetoric doesn't work. Doesn't normally work.

And I think that for the people it does work on, people who can be swayed by an emotional argument or strong rhetoric or whatever, it's superficial and they can be swayed back by another emotional argument.

However, if you appeal to reason, rationality, that critical thinking takes hold and it's harder for them to be moved back through an emotional argument. And yeah, I used to use a lot of inflammatory rhetoric.

And then one day I made an analogy. Yeah, go figure, right? I made an analogy and it was a bad one. It was bad. I mean, like the space laser lady bad. It was just all bad. It was very easy to take it in a way that I didn't mean it.

And somebody who actually agreed with the point that I was making was like, you know, that was pretty messed up, dude. I took the correction. I didn't do it again. So I stopped using it. I stopped using inflammatory rhetoric. I stopped using strong language.

And I started using soft language. And it got softer over the years. And this weird thing happened.

I stopped getting messages that said, "I agree with you," and started getting messages that said, "I changed my mind about something."

And that's more my goal. So that's why there's soft language. Again, that strong rhetoric, that appeal to emotion, it has a place. But it's for encouraging people who already believe what you do. I don't think you're going to successfully change anybody's mind or get them out of the information silo they're in by using strong rhetoric.

Now, it's easy for me to say that, pointing to two pieces of anecdotal evidence, my analytics and my inbox. Is there anything that's publicly available that would suggest that's true? Yes. Absolutely. There was a poll done. And it said that 63% of Americans support teaching our kids, K through 12, about the lingering effects of slavery.

Now, if you're going to do that, you have to teach about the power structures that kept those effects in place. And if you're cool with doing that, odds are you're probably also cool with examining other issues with race that have occurred in American history.

63% OK with talking about the lingering effects of slavery and the institutions that kept those in place. In the same poll, the same respondents, only 49% were OK with teaching critical race theory because it's a polarized term, because it's politicized. Strong language, right?

It's shifting. Five years ago, it wouldn't have been a politicized word. It wouldn't be a word that I would avoid, or at least not say until way past 30 seconds into a video. So there is evidence to suggest that if you're going to try to reach people, you want to use softer language. If you want to try to reach them through reason and rationality, you want to use softer language.

Because by that poll, you're talking about more than 10% of people that are kind of up for grabs as long as you're not using that strong language.

So that's why I do it.

And I think it's something that you could probably employ in your everyday life when you're talking to people who have fallen down into some echo chamber and they can't get out. You know, it might work.

Now, the danger of this is that this method, the soft language, it doesn't create people who think the way you do. It just creates people who are thinking on their own.

But I believe that if people are thinking reasonably and rationally, even if they don't think exactly like you, there's room for discussion and debate. And then you can move further if you can.

Anyway, it's just a thought.
Y'all have a good day.

and author intentionally cropped

The author is visible in the next screenshot, unless you meant something else (also, even if he wasn't, the name is part of the URL).

hello I'm an idiot

by his own admission, the main thing his efforts accomplished was get a ton of people interested in AI, help both openAI and deepmind come into existence, and overall make the AI situation dramatically worse by his own standards

I vaguely remember him saying things like that. This wasn't it, but there was some tweet like "Everyone was getting along and then Elon blew that all up".

I kinda get the impression he's completely wrong about that though; the fact that OpenAI and Deepmind (with its current culture) are the leaders of the field is way better than what I was expecting 10 years ago and seems like a clear win. It's not optimal, but it's better than the path we were on (AGI coming out of finance, or blind mercenary orgs like Google, Microsoft, or Facebook AI), and Eliezer is the one who moved the path.

And was he serious? Does he really not see that? Is that assessment of his beliefs really coming from a deep reading of a couple of tweets? How much of this is due to the effects that twitter has had on his writing and your perception of him?

I must admit as an outsider I am somewhat confused as to why Eliezer's opinion is given so much weight, relative to all the other serious experts that are looking into AI problems. I understand why this was the case a decade ago, when not many people were seriously considering the issues, but now there are AI heavyweights like Stuart Russell on the case, whose expertise and knowledge of AI are greater than Eliezer's, as proven by actual accomplishments in the field. This is not to say Eliezer doesn't have achievements under his belt, but I find his academic work lackluster when compared to his skills in awareness raising, movement building, and persuasive writing.

Isn't Stuart Russell an AI doomer as well, separated from Eliezer only by nuances? Are you asking why Less Wrong favors Eliezer's takes over his?

well it's more that eliezer is being loud right now and so he's affecting what folks are talking about a lot. stuart russell level shouting is the open letter; then eliezer shows up, goes "I can be louder than you!", and says to ban datacenters internationally by treaty as soon as possible, using significant military threat in negotiation.

Isn't Stuart Russell an AI doomer as well, separated from Eliezer only by nuances?

I'm only going off of his book and this article, but I think they differ in far more than nuances. Stuart is saying "I don't want my field of research destroyed", while Eliezer is suggesting a global treaty to airstrike all GPU clusters, including on nuclear-armed nations. Stuart seems to think the control problem is solvable if enough effort is put into it.

Eliezer's beliefs are very extreme, and almost every accomplished expert disagrees with him. I'm not saying you should stop listening to his takes, just that you should pay more attention to other people.

You know the expression "hope for the best, prepare for the worst"? A true global ban on advanced AI is "preparing for the worst" - the worst case being (1) sufficiently advanced AI has a high risk of killing us all, unless we know exactly how to make it safe, and (2) we are very close to the threshold of danger. 

Regarding (2), we may not know how close we are to the threshold of danger, but capabilities have already passed beyond our understanding (see the quote in Stuart Russell's article - "we have no idea" whether GPT-4 forms its own goals), and they are advancing monthly - ChatGPT, then GPT-4, now GPT-4 with reflection. Because performance depends so much on prompt engineering, we are very far from knowing the maximum capabilities of the LLMs we already have. Sufficient reflection applied to prompt engineering may already put us on the threshold of danger. It's certainly driving us into the unknown.

Regarding (1), the attitude of the experts seems to be, let's hope it's not that dangerous, and/or not that hard to figure out safety, before we arrive at the threshold of danger. That's not "preparing for the worst"; that's "hoping for the best". 

Eliezer believes that with overwhelming probability, creating superintelligence will kill us unless we have figured out safety beforehand. I would say the actual risk is unknown, but it really could be huge. The combination of power and unreliability we already see in language models gives us a taste of what that's like.

Therefore I agree with Eliezer that in a safety-first world, capable of preparing for the worst in a cooperative way, we would see something like a global ban on advanced AI; at least until the theoretical basis of AI safety was more or less ironclad. We live in a very different world, a world of commercial and geopolitical competition that is driving an AI capabilities race. For that reason, and also because I am closer to the technical side than the political side, I prefer to focus on achieving AI safety rather than banning advanced AI. But let's not kid ourselves; the current path involves taking huge unknown risks, and it should not have required a semi-outsider like Eliezer to forcefully raise, not just the idea of a pause, but the idea of a ban. 

Sorry, I should have specified: I am very aware of Eliezer's beliefs. I think his policy prescriptions are reasonable, if his beliefs are true. I just don't think his beliefs are true. Established AI experts have heard his arguments with serious consideration and an open mind, and still disagree with them. This is evidence that the arguments are probably flawed, and I don't find it particularly hard to think of potential flaws in them.

The type of global ban envisioned by yudkowsky really only makes sense if you agree with his premises. For example, setting the bar at "more powerful than GPT-5" is a low bar that is very hard to enforce, and only makes sense given certain assumptions about the compute requirement for AGI. The idea that bombing any datacentres in nuclear-armed nations is "worth it" only makes sense if you think that any particular cluster has an extremely high chance of killing everyone, which I don't think is the case. 

The type of global ban envisioned by yudkowsky really only makes sense if you agree with his premises

I think Eliezer's current attitude is actually much closer to how an ordinary person thinks or would think about the problem. Most people don't feel a driving need to create a potential rival to the human race in the first place! It's only those seduced by the siren call of technology, or who are trying to engage with the harsh realities of political and economic power, who think we just have to keep gambling in our current way. Any politician who seriously tried to talk about this issue would soon be trapped between public pressure to shut it all down, and private pressure to let it keep happening. 

setting the bar at "more powerful than GPT-5" is a low bar that is very hard to enforce

It may be hard to enforce, but what other kind of ban would be meaningful? Consider just GPT-3.5 and 4, embedded in larger systems that give them memory, reflection, and access to the real world, something which multiple groups are working on right now. It would require something unusual for that not to lead to "AGI" within a handful of years.

A big part of it is simply that he's still very good at being loud and sounding intensely spooky. He also doesn't do a very good job explaining his reasons: he has leveled up his skill in explaining why it seems spooky to him without ever explaining the mechanics of the threat, because he did a good job thinking abstractly but did not do a good job compiling that into a median-human-understandable explanation. Notice how oddly he talks - it's related to why he realized there was a problem, I suspect.

Notice how oddly he talks

I have seen him on video several times, including the Bankless podcast, and it has never seemed to me that he talks at all "oddly". What seems "odd" to you?

Talking like a rationalist. I do it too, so do you.

I don't know what you're pointing to with that, but I don't see any "rationalistic" manner that distinguishes him from, say, his interlocutors on Bankless, or from Lex Fridman. (I've not seen Eliezer's conversation with him, but I've seen other interviews by Fridman.)

I mean, he's really smart, and articulate, and has thought about these things for a long time, and can speak spontaneously and cogently to the subject, and field unprearranged questions. Being in the top whatever percentile in these attributes is, by definition, uncommon, but not "odd", which means more than just uncommon.


The people here, on lesswrong, give EY's opinion a lot of weight because LW was founded by EY, and functions as a kind of fan club.

https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts