Ben Pace

I'm an admin of LessWrong. Here are a few things about me.

  • I generally feel more hopeful about a situation when I understand it better.
  • I have signed no contracts nor made any agreements whose existence I cannot mention.
  • I believe it is good to take responsibility for accurately and honestly informing people of what you believe in all conversations; and also good to cultivate an active recklessness about the social consequences of doing so.
  • It is wrong to directly cause the end of the world, even if you are fatalistic about what is going to happen.

(Longer bio.)

Sequences

  • AI Alignment Writing Day 2019
  • Transcript of Eric Weinstein / Peter Thiel Conversation
  • AI Alignment Writing Day 2018
  • Share Models, Not Beliefs

Comments (sorted by newest)
A case for courage, when speaking of AI danger
Ben Pace · 4h · 62

Yeah that makes sense.

As an aside, I notice that I currently feel much more reluctant to name individuals about whose character there is no legal/civilizational consensus. I think I am a bit worried about contributing to the dehumanization of people who are active players, and to a drop in basic standards of decency toward them, even if I were to believe they were evil.

A case for courage, when speaking of AI danger
Ben Pace · 4h · 20

I think this domain needs clarity-of-thought more than it needs a social-conflict orientation

Openly calling people evil has some element of "deciding who to be in social conflict with", but insofar as it also has some element of "this is simply an accurate description of the world", FWIW I want to note that this consideration partially cuts in favor of just plainly stating what counts as evil, and even whether specific people have met that bar.

[Meta] New moderation tools and moderation guidelines
Ben Pace · 4h · 20

(This is a tangent to the thread and so I don't plan to reply further on this, but I just wanted to mention that while I view Greenblatt and Shlegeris as stakeholders in LessWrong, a space they've made many great contributions to and are quite active in, I don't view them as leadership of the rationality community.)

A case for courage, when speaking of AI danger
Ben Pace · 5h · 156

To be clear, I think it would probably be reasonable for some external body like the UN to attempt to prosecute & imprison ~everyone working at big AI companies for their role in racing to build doomsday machines. (Most people in prison are not evil.) I'm a bit unsure if it makes sense to do things like this retroactively rather than to just outlaw it going forward, but I think it sometimes makes sense to prosecute atrocities after the fact even if there wasn't a law against them at the time. For instance, my understanding is that the Nuremberg trials set precedents for prosecuting people for war crimes, crimes against humanity, and crimes against peace, even though legally these weren't crimes at the time they were committed.

I just have genuine uncertainty about the character of many of the people in the big AI companies, and I don't believe they're all fundamentally rotten people! And I think language is something that can easily get bent out of shape when the stakes are high, and I don't want to lose my ability to speak and be understood. Consequently, I care about not falsely calling people's character/nature evil when what I think is actually happening is that they are committing an atrocity; the two are similar but distinct.

A case for courage, when speaking of AI danger
Ben Pace · 5h · 132

I'm not quite sure what to make of this, so I'll take this opportunity to think aloud about it.

I often take a perspective where most people are born a kludgey mess, and then if they work hard they can become something principled and consistent and well-defined. But without that, they don't have much in the way of persistent beliefs or morals such that they can be called 'good' or 'evil'. 

I think of an evil person as someone more like Voldemort in HPMOR, who has reflected on his principles and will be persistently a murdering sociopath, than someone who ended up making horrendous decisions but wouldn't in a different time and place. I think if you put me under a lot of unexpected political forces and forced me to make high-stakes decisions, I could make bad decisions, but not because I'm a fundamentally bad person.

I do think it makes sense, in our civilization, to write some people off as bad people. There are people who have poor impulse control, who have poor empathy, who are pathological liars, who aren't salvageable by any of our current means, and who will always end up in jail or hurting people around them. I rarely interact with such people so it's hard for me to keep this in mind, but I do believe such people exist.

But evil seems a bit stronger than that; it seems a bit more exceptional. Perhaps I would consider SBF an evil person: he seems to me someone who knew he was a sociopath from a young age, didn't care about people, would lie and deceive, and was hyper-competent, and I expect that if you release him into society he will robustly continue to do extreme amounts of damage.

Is that who Eichmann was? I haven't read the classic book on him, but I thought the point of 'the banality of evil' was that he seemed quite boring and like many other people? Is it the case that you could replace Eichmann with like >10% of the population and get similar outcomes? 1%? I am not sure if it is accurate to think of that large a chunk of people as 'evil', as being the kind of robustly bad people who should probably be thrown in prison for the protection of civilization. My current (superficial) understanding is that Eichmann enacted an atrocity without being someone who would persistently do so in many societies. He had the capacity for great evil, but this was not something he would reliably seek out.

It is possible that somehow thousands of people like SBF and Voldemort have gotten together to work at AI companies; I don't currently believe that. To be clear, if we believe there are evil people, then the term must surely describe some of the people working at the big AI companies building doomsday machines, who are very resiliently doing so while knowing that they're hastening the end of humanity; but I don't currently think it describes most of the people.

This concludes my thinking aloud; I would be quite interested to read more of how your perspective differs, and why.

(cf. Are Your Enemies Innately Evil? from the Sequences)

Lightcone Infrastructure/LessWrong is looking for funding
Ben Pace · 6h* · 20

Office space in SF has continued to be extremely cheap; e.g., I heard the Frontier Tower was previously selling for $38M and was recently purchased for $11M.

I think the East Bay / Berkeley has returned to roughly pre-pandemic prices.

I didn't re-read the thread, so let me know if I'm not quite answering the right question, but regarding Lighthaven: our bookings are basically not for office space; they're for conferences or residencies. I just checked the last year: we had 11/52 weekends not booked, and that's only if you count Eternal September and the Christmas period (the latter of which is a generally hard time to book big events, and during the former of which we were still making money from people renting and paying for access and meals).

I think we definitely will have some free weekends, but most potential revenue will come from competition increasing prices, and from finding new clients with higher willingness-to-pay.

A case for courage, when speaking of AI danger
Ben Pace · 11h · 105

For one, I think I'm a bit scared of regretting my choices. Like, calling someone evil and then being wrong about it isn't something where you just get to say "oops, I made a mistake" afterwards; you did meaningfully move to socially ostracize someone, mark them as deeply untrustworthy, and say that good people should remove their power, and you kind of owe them something significant if you get that wrong.

For two, a person who has done evil and a person who is evil are quite different things. I think it's sadly not always the case that a person's character is aligned with a particular behavior of theirs. I think it's not accurate to think of all the people building the doomsday machines as generically evil, as people who will do awful things in lots of different contexts; I think there's a lot of variation in the people and their psychologies and predispositions, and some are screwing up here (almost unforgivably, to be clear) in ways they wouldn't screw up in different situations.

A case for courage, when speaking of AI danger
Ben Pace · 13h · 149

(This rhetoric is not quite my rhetoric, but I want to affirm that I do believe that ~most people working at big AI companies are contributing to the worst atrocity in human history, are doing things that are deontologically prohibited, and are morally responsible for that.)

[Meta] New moderation tools and moderation guidelines
Ben Pace · 13h · 20

I don't think the relevant dispute about rudeness/offensiveness is about one-place and two-place functions; I think it's about passive vs. overt aggression. With passive aggression you often have to read more of the surrounding context to understand what is being communicated, whereas with overt aggression it's clear if you just locally inspect the statement (or behavior). That sounds like one-place vs. two-place functions (because people with different information states look at the same message and get different assessments), but it isn't.

For instance, suppose Alice doesn't invite Bob to a party, and then Bob responds by ignoring all of Alice's texts and avoiding eye contact most of the time. Now, any single instance of "not responding to a text" isn't aggression, but in the context of a change in the relationship from same-day replies being typical to zero replies, it can be understood as retaliation. And of course, even then it's not provable; there are other possible explanations (such as that Bob is taking a GLP-1 agonist and is quite low-energy at the minute; don't think too hard about why I picked that example), which makes it a great avenue for hard-to-litigate retaliation.

The Best Tacit Knowledge Videos on Every Subject
Ben Pace · 1d · 20

Done.

Wikitag Contributions

  • Adversarial Collaboration (Dispute Protocol) · 6mo
  • Epistemology · 8mo · (-454)
  • Epistemology · 8mo · (+56/-56)
  • Epistemology · 8mo · (+9/-4)
  • Epistemology · 8mo · (+66/-553)
  • Petrov Day · 9mo · (+714)
Posts

  • LessOnline 2025: Early Bird Tickets On Sale · 37 karma · 4mo · 5 comments
  • Open Thread Spring 2025 · 20 karma · 4mo · 50 comments
  • Arbital has been imported to LessWrong · 281 karma · 4mo · 30 comments
  • The Failed Strategy of Artificial Intelligence Doomers · 135 karma · 5mo · 78 comments
  • Thread for Sense-Making on Recent Murders and How to Sanely Respond · 109 karma · 5mo · 146 comments
  • What are the good rationality films? [Q] · 83 karma · 7mo · 54 comments
  • 2024 Petrov Day Retrospective · 93 karma · 9mo · 25 comments
  • [Completed] The 2024 Petrov Day Scenario · 136 karma · 9mo · 114 comments
  • Thiel on AI & Racing with China · 55 karma · 10mo · 10 comments
  • Extended Interview with Zhukeepa on Religion · 53 karma · 10mo · 61 comments
  • Benito's Shortform Feed · 23 karma · Ω · 7y · 286 comments