EDIT: THIS IS NOT APRIL FOOLS RELATED

ALSO: This is specific to the LW scene in and around Berkeley, as this is the only place where e/acc exclusion is asserted to take place.

I haven't been around the LW scene for some time, but I understand it's common to exclude e/acc people from events. I further understand this to be exclusion on philosophical grounds, not just because LW-ites tend to view e/acc people individually as unlikeable.

I personally don't want to try to sneak into LW parties if I'm someone that the hosts are trying to exclude on philosophical grounds. So I'd rather clarify whether, in the opinion of various people, I count.

It's common among e/acc people to say things like "We're so close, just don't die" by which they mean that AGI is close. They also want to create AGI as soon as possible. By contrast, LW-ites typically believe that AGI is close, and therefore it is necessary to slow down or stop AGI development as soon as possible, in order to ensure that future development is done safely.

I part ways from both camps in believing that we're nowhere close to AGI, that the apparently-impressive results from LLMs are highly overrated, and that the X-risk from AI is 0 for the foreseeable future. If I didn't think this, I would be sympathetic[1] to the desire to stop AI until we thought we could do it safely. But I do think this, so AI safety seems like a Victorian Nuclear Regulatory Commission. The NRC is a good thing, but it's going to be a while before splitting the atom is even on the table.

As a result, in practice I think I'm functionally e/acc because I don't want to stop the e/acc people from trying to push AGI as fast as possible. I don't think they're actually an X-risk since they're not going to succeed any time soon. But I'm theoretically decel because if I thought anyone was anywhere close to AGI I'd be sympathetic to efforts to restrain it. As it is, I think the AI safety people can continue to study AI safety for years confident that they can finish all the theories off long before they actually become necessary for survival.

In light of that, if you're the sort of person who wants to exclude e/acc people from your party, should I just not show up? That's fine with me, I'd just as soon know ahead of time.

Actually, the fact that I have to even ask this question makes me disinclined to show up anyway, but I'm sort of curious what people would say.


  1. "Sympathetic" does not necessarily mean "in favor of." It's a practical question whether various strategies for controlling AI development are feasible or worth their risks. If you have to risk nuclear war to ensure the other players don't cheat, it might not be worth it. Thus I'm not comfortable saying in the abstract "I'm in favor of measures to control AI development" given that I'm not sure what those measures are. ↩︎


2 Answers

clone of saturn

Apr 02, 2024


I think the desire to exclude e/accs is mainly because of their attitude that human extinction is acceptable or even desirable, not because of the specifics of what regulatory actions they support. So how do you feel about human extinction?

I described my feelings about human extinction elsewhere.

However, unlike the median commenter on this topic, you seem to grant that e/acc exclusion is a real thing that actually happens. That is,

I think the desire to exclude e/accs is mainly because of their attitude that human extinction is acceptable or even desirable,

is a strange thing to say if there were not, in fact, an actual desire to exclude them among LW party hosts in Berkeley. So inasmuch as other respondents have raised doubts in my mind about whether this actually happens, would you mind clarifying

  1. If you do
... (read more)
clone of saturn, 15d
I'm not in Berkeley and I have no direct knowledge of Berkeley parties, but a certain level of contempt or revulsion toward e/acc seems pretty universal among the LW-aligned people I know. I have no reason to doubt that there's no explicit rule against e/accs showing up at Berkeley parties, as others have said. I personally wouldn't feel entirely comfortable at a party with a lot of e/accs.

Apr 02, 2024


Your view is compatible with the ideology of e/acc.  Dunno about house parties, I probably wouldn't be invited, but:

https://www.lesswrong.com/posts/mmYFF4dyi8Kg6pWGC/contra-ngo-et-al-every-every-bay-area-house-party-bay-area

Eventually, my Rabbi friend said “Okay, so what I’m hearing is: you’re expected to tithe 10% of your earnings to charity, you have pilgrimages a few times a year to something called EA Global, and you believe a superhuman power will usher in a new era in which humanity undergoes substantial transformations resulting in either total destruction or universal peace.” Heads nodded. “This… sounds a little like a cult,” he said. “Yes!!” multiple people excitedly shouted at once. 

So the partygoers invited a Rabbi and seem to be self-aware enough to admit that their own organization is reasonably defined as a cult. Sounds like you could score an invite if you're the kind of person who gets invited to other parties a lot.

 

Evidence on ideology: https://thezvi.substack.com/p/based-beff-jezos-and-the-accelerationists

@Zvi gives a list here, matching reasons bolded:

Those like Beff Jezos, who think human extinction is an acceptable outcome.

Those who think that technology always works out for the best, that superintelligence will therefore be good for humans.

**Those who do not actually believe in the reality of a future AGI or ASI, so all we are doing is building cool tools that provide mundane utility, let's do that.**

Related to previous: Those who think that the wrong human having power over other humans is the thing we need to worry about.

More specifically: Those who think that any alternative to ultimately building AGI/ASI means a tyranny or dystopia, or is impossible, so they’d rather build as fast as possible and hope for the best.

Or: Those who think that even any attempt to steer or slow such building, or sometimes even any regulatory restrictions on building AI at all, would constitute a tyranny or dystopia so bad it is instead that any alternative path is better.

Or: They simply don't think smarter-than-human, more-capable-than-human intelligences would be the ones holding the power; the humans would stay in control, so what matters is which humans that is.

Those who think that the alternative is stagnation and decline, so even some chance of success justifies going fast.

**Those who think AGI or ASI is not close, so let's worry about that later.**

Those who want to, within their cultural context, side with power.

Those who don’t believe they like being an edge lord on Twitter.

Those who personally want to live forever, and see this as their shot.

**Those deciding based on vibes and priors, that tech is good, regulation bad.** (At least a Victorian NRC would be bad, since it would decel the things that eventually made nuclear reactors possible.)

The degree of reasonableness varies greatly between these positions.

Thanks, this is extremely helpful. Having a clearer definition of how e/acc is understood to LW makes this much easier to think about.

Just for fun, I'll quibble: I would add to my list of e/acc heresies

Related to previous: Those who think that the wrong human having power over other humans is the thing we need to worry about.

Insofar as I genuinely believe that to some extent, various actors are trying to take advantage of sincerely-held beliefs by LWers in the importance of decel-until-alignment to craft rules which benefit them and their short-term in... (read more)

26 comments

I understand it’s common to exclude e/acc people from events.

Is… this actually true??

I've never heard of such a thing happening.

It happened at least at the (Allah may forgive me for uttering those words) Aella birthday gangbang:

With my approval, organizers rejected anyone associated with e/acc, because we don’t need to give nice things to people hastening our doom.

(But note that this is a man-bites-dog sort of mention: the way she highlights that choice implies that, far from being common, as far as Aella knows, it hardly ever happens and this party was highly unusual, and Aella disapproves of it being so unusual & so is making a point of publicly stating it happened at her party in the hopes of setting an example.)

This is a good point, but I don't intuitively see that it's particularly strong evidence that it must be unusual. I would expect an event like this to have more explicit rules than the average party.

I would prefer not to have people reply to me about people's personal sexual activities/events (without exceptional reason such as a credible accusation of rape or other criminality). 

I also do not think that attendees of people's personal sexual events should be policed by others (nor be included when publicly discussing attendance of LW events).

Would you describe yourself as plugged into the LW party scene in Berkeley?

I wouldn’t use that phrasing, but I live and work from Lighthaven, and a great number of large Berkeley x-risk network parties happen here, and I chat with the organizers, so I have a lot of interaction with events and organizers. I’m definitely more in contact with semi-professional events, like parties run by MATS and AI Impacts and Lightcone, and there’s of course many purely social events that happen in this extended network that I don’t know much about. I also go to larger non-organizational parties run by friends like 2x/month (e.g. 20-100 people).

This seems like good evidence and I don't think you would make it up.

I'm rapidly coming to the conclusion that Beff & co are exaggerating/full-of-it/otherwise-inaccurate.

Possibly the Aella thing was an anomaly, but also the thing that they actually really wanted to go to, and they're inaccurately (although not necessarily dishonestly) assuming it to be more widespread than it actually is.

Didn't Aella explicitly reject e/acc's from her gangbang on the grounds that they were philosophically objectionable?

Uh… does that really count as an event in “the LW scene”?

… are you sure this post isn’t an April 1st joke?

I think it counts. And while it's not the typical LW party, do you really think that prohibition says nothing about the scene? That seems like an odd opinion to me.

I don’t know, man. Like… yeah, “not the typical LW party”, but that’s a bit of an understatement, don’t you think? (What makes it an “LW party” at all? Is it literally just “the host of this party is sort of socially adjacent to some LW people”? Surely not everything done by anyone who is connected in any way to LW, is “an LW thing”?)

So, honestly, yeah, I think it says approximately nothing about “the scene”.

Would you describe yourself as familiar with the scene at all? You seem to imply that you doubt that e/acc exclusion is an actual thing, but is that based on your experience with the scene?

I'm not suggesting that you're wrong to doubt it (if anything I was most likely wrong to believe it), I just want to clarify what info I can take from your doubt.

Hmm… I suppose that depends on what you mean by “the scene”. If you’re including only the Bay Area “scene” in that phrase, then I’m familiar with it only by hearsay. If you mean the broader LW-and-adjacent community, then my familiarity is certainly greater (I’ve been around for well over a decade, and have periodic contact with various happenings here in NYC).

Yes, I meant specifically the Bay Area scene, since that's the only part of the LW community that's accused of excluding e/acc-ers.

It's interesting and relevant if you can say that in the NYC scene, this sort of thing is unheard of, and that you're familiar enough with that scene to say so, but it isn't 100% on point.

Yes, I meant specifically the Bay Area scene, since that’s the only part of the LW community that’s accused of excluding e/acc-ers.

In that case, I request that you edit your post to clarify this, please.

I would be more worried about getting kicked out of parties because you think "the NRC is a good thing".

More seriously, your opinion on this doesn't sound very e/acc to me. Isn't their position that we should accelerate AGI even if we know it will kill everyone, because boo government yay entropy? I think rationalists generally agree that speeding up the development of AGI (that doesn't kill all of us) is extremely important, and I think a lot of us don't think current AI is particularly dangerous.

I think rationalists generally agree that speeding up the development of AGI (that doesn't kill all of us) is extremely important

Didn't Eli want a worldwide moratorium on AI development, with data center airstrikes if necessary?

Granted, I understood this to be on the grounds that we were at the point that AGI killing us was a serious concern. But still, being in favor of "speeding up AGI that doesn't kill us" is kind of misleading if you think the plan should be

  1. Slow down AGI to 0.
  2. Figure out all of the alignment stuff.
  3. Develop AGI with alignment as fast as possible.

I mean, sure, you want all 3 steps to happen as fast as possible, but that's not why there's a difference of opinion. There's a reason why e/acc refer to the other side as "decels" and it's not unwarranted IMO.

I would be more worried about getting kicked out of parties because you think "the NRC is a good thing"

Let's say "An NRC would be a good thing (at least on the assumption that we don't intend to be 100% libertarian in the short run)". I'm not going to die on the hill of whatever they may have done recently.

I don't think there's anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.

You could imagine a similar situation in medicine: I think a virus engineered to spread rapidly among humans and rewrite our DNA to solve all of our health issues and make us smarter would be really good, and I might think it's the most important thing for the world to be working on; but at the same time, I think the number of engineered super-pandemics should remain at zero until we're very, very confident.

It's worth noticing that MIRI has been working on AI safety research (trying to speed up safe AI) for decades and only recently got into politics.

You could argue that Eliezer and some other rationalists are slowing down AGI and that's bad because they're wrong about the risks, but that's not a particularly controversial argument here (for example, see this recent highly-upvoted post). There are fewer (recent) posts about how great safe AGI would be, but I assume that's because it's really obvious.

I don't think there's anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.

I didn't say it wasn't sensible. I said describing it that way was misleading.

If your short-term goal is in fact to decelerate the development of AI, describing this as "accelerating the development of Friendly AI" is misleading, or at least confused. What you're actually doing is trying to mitigate X-risk. In part you are doing this in the hopes that you survive to build Friendly AI. This makes sense except for the part where you call it "acceleration."

Incidentally, people don't seem to say "Friendly AI" anymore. What's up with that?

I assume this is April Fools' related, but I can't really tell. Assuming you're serious, I think there's less cohesive leadership of LW parties than you seem to think, and you can't generalize about EITHER whether you qualify as e/acc, OR whether any given party would exclude you just for that.

Ask the host.  If they're unclear or unsure, go and see if you feel unwelcome or uncomfortable, and leave if so.  Or avoid it if the people you expect to be there don't give you good vibes in advance.  

All that said, if you're that worried about it, or so tied to your ideas that you'll be massively uncomfortable or loudly disagreeable with people who think otherwise, probably avoid it regardless of what the host says.

I assume this is April Fools' related, but I can't really tell

It's not.

I think there's less cohesive leadership of LW parties than you seem to think

That sounds likely. To be fair, I've mostly heard of this from

  • Beff Jezos & Beff-adjacent people whining about it on Twitter

  • Aella's gangbang

And I probably adjusted too hard off of this. Like nobody goes around prominently saying "actually we don't mind if e/acc show up, so long as they're nice" that I know of, but there's no reason to assume that they would.

Ask the host. If they're unclear or unsure, go and see if you feel unwelcome or uncomfortable, and leave if so.

So my initial reaction to this was to feel slightly insulted, but I realize that probably wasn't your intention.

I know how to go to parties. I'm not unusually awkward. I'm not a preacher. That's not the problem here. I can usually get along with people as long as they're not too nuts.

I asked this question because I believed that it was very common for LW parties to specifically want to exclude people on philosophical grounds. If they do, and if their objection applies to me, I want them to succeed. I've heard stories of e/acc people trying to sneak into Berkeley parties incognito. That's not my style.

Also, my model of the most likely circumstances in which I'd attend such a party was an acquaintance--who recognized that I seemed like an LW kinda guy but wasn't deeply familiar with my whole belief system--saying "Hey, there's this thing tonight we could go to." So asking the host might not be practical; said host might already be drinking and playing board games or whatever degeneracy usually goes on. Thus, if the e/acc ban was as widespread as I thought, it would make sense to know ahead of time.

I asked this question because I believed that it was very common for LW parties to specifically want to exclude people on philosophical grounds.

Whether or not someone wants you dead is not a difference on "philosophical grounds".

[anonymous], 18d

So there are certain aspects of the LessWrong 'consensus view' I have a lot of doubts about. But they are all doubts about speculative future capabilities, not about what's happening right now. (Nanotechnology; whether an ASI can just 'hack anything', 'persuade anybody', and 'coordinate with itself', or 'optimize itself to fit into cheap computers that are available'; whether 'it's pointless to try to control it'.)

I would have to ask, if AGI is defined as "median human ability", and the argument around the GPT-3.5 release was "it can only read and emit text", how do you explain:

  1. Above-median human ability on many tests with GPT-4.
  2. The 'context length' barrier was lifted with Gemini 1.5 and Claude.
  3. The multimodality limit (I had doubts on the integration of extra modalities) was lifted with Gemini and Claude.
  4. If it's just "hype", why does GPT-4 with plugins solve various basic engineering and physics problems about as well as an undergrad? How do you explain Claude writing fairly long chunks of code that works? (I have personally tried both.)
  5. How do you explain the rate of progress?
  6. Have you considered that all those "sciencedirect" news items you saw over the years were from tiny startups and single-professor university labs? That more is being pumped into AI every year than into fusion power since the beginning? Scale matters.
  7. Why are so many investors voting with their money? Are they just stupid and tricked by hype, or do they know something as a group that you don't?

It seems to me that 'all' we have to do to reach AGI is integrate:

  • online learning
  • robotics I/O (to a system 1 model that directly controls the robot)
  • video perception
  • audio perception
  • internal buffer for internal monologue

Note that all already exist in someone's lab.  

Such a machine would have about the breadth and skill of the median human. That's AGI. People keep pushing that definition further ("expert human level at everything", "can research AI autonomously and faster"), but a median human worker you can print is a game-changer.

I wrote a long reply to your points, but ultimately decided it was a derail to original topic. I'll PM you just for fun though.