Ben Pace

I'm an admin of LessWrong. Here are a few things about me.

  • I generally feel more hopeful about a situation when I understand it better.
  • I have signed no contracts nor made any agreements whose existence I cannot mention.
  • I believe it is good to take responsibility for accurately and honestly informing people of what you believe in all conversations; and also good to cultivate an active recklessness for the social consequences of doing so.
  • It is wrong to directly cause the end of the world. Even if you are fatalistic about what is going to happen.

Randomly: if you ever want to talk to me about anything you like, I am happy to be paid $1k for an hour of doing that.

(Longer bio.)

Benito's Shortform Feed (Ω · 23 karma · 7y · 333 comments)

Sequences

  • AI Alignment Writing Day 2019
  • Transcript of Eric Weinstein / Peter Thiel Conversation
  • AI Alignment Writing Day 2018
  • Share Models, Not Beliefs

Comments (sorted by newest)

kave's Shortform
Ben Pace · 15h

There is a strong force in web forums to slide toward news and inside baseball; the primary goal here is to fight against that. It is a bad filter for new users if a lot of what they see on first visiting the LessWrong homepage is discussion of news, recent politics, and the epistemic standards of LessWrong. Many good users are not attracted by these topics, and for those who aren't put off, it is bad for the culture to set them as the default topic of discussion.

(Forgive me if I'm explaining what is already known; I'm posting in case people hadn't heard this explanation before. We talked about it a lot when designing the frontpage distinction in 2017/8.)

leogao's Shortform
Ben Pace · 2d

I think most of the people involved like working with the smartest and most competent people alive today, on the hardest problems, in order to build a new general intelligence for the first time since the dawn of humanity, in exchange for massive amounts of money, prestige, fame, and power. This is what I refer to by 'glory'.

leogao's Shortform
Ben Pace · 2d

I think of it as 'glory'.

kave's Shortform
Ben Pace · 2d

Perhaps a react for "I wish this idea/sentence/comment was a post" would improve things.

Which side of the AI safety community are you in?
Ben Pace · 4d

I felt confused at first when you said that this framing is leaning into polarization. I thought "I don't see any clear red-tribe / blue-tribe affiliations here."

Then I remembered that polarization doesn't mean tying this issue to existing big coalitions (a la Hanson's Policy Tug-O-War), but simply that it is causing people to factionalize, creating a conflict and divide between them.

It seems to me that Max has correctly pointed out a significant crux about policy preferences among people who care about AI existential risk, and it also seems worth polling people and finding out who thinks what.

It does seem to me that the post is attempting to cause some factionalization here. I am interested in hearing whether this is a good or bad faction to exist (relative to other divides), rather than simply noting that division is costly (which it is). I am interested in some argument about whether this is worth it / whether this faction is a real one.

Or perhaps you/others think it should ~never be actively pushed for in the way Max does in this post (or perhaps not in this way in a place with high standards for discourse like LW).

Noah Birnbaum's Shortform
Ben Pace · 4d

That's right. One exception: sometimes I upvote posts/comments written to low standards in order to reward the discussion happening at all. As an example, I initially upvoted Gary Marcus's first LW post in order to be welcoming to him participating in the dialogue, even though I think the post is of very low quality for LW.

(150+ karma is high enough, and I've since removed the vote. Or there's some chance I'm misremembering and I never upvoted it because it was already doing well, in which case this serves as a hypothetical that I endorse.)

Noah Birnbaum's Shortform
Ben Pace · 4d (edited)

The effect seems natural and hard to prevent. Basically, certain authors get reputations for being high (quality × writing), and then it makes more sense for people to read their posts, because both the floor and the ceiling are higher in expectation. Then their worse posts get more readers (who vote) than posts of similar quality by another author, whose floor and ceiling are probably lower.

I'm not sure of the magnitude of the cost, or that one can realistically expect to ever prevent this effect. For instance, ~all Scott Alexander blogposts get more readership than the best post by many other authors who haven't built a reputation and readership, and this kind of just seems part of how the reading landscape works.

Of course, it can be frustrating as an author to sometimes see similar-quality posts on LW get different karma. I think part of the answer here is to do more to celebrate the best posts by new authors. The main thing that comes to mind is curation, where we celebrate and get more readership for the best posts. Perhaps I should also have a term here for "this is a new author, so I want to bias toward curating them for the first time so that they're more invested in writing more good content".

leogao's Shortform
Ben Pace · 5d

I'm not really clear that I should be worried on the scale of decades. If we're doing a calculation of expected future years of a flourishing, technologically mature civilization, then slowing down for 1,000 years here in order to increase the chance of success by like 1 percentage point is totally worth it in expectation.

Given this, it seems plausible to me that one should spend 200 years trying to improve civilizational wisdom and decision-making rather than specifically attempting to just unlock regulation on AI (of course the specifics here are cruxy).
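
As a minimal sketch of that expected-value comparison (V and T are illustrative symbols I'm introducing here, not from the comment: V is the value of one year of flourishing civilization, T is how many years it lasts):

\[
\Delta\mathrm{EV} \;\approx\; \underbrace{0.01 \cdot T \cdot V}_{\text{+1 pp chance of success}} \;-\; \underbrace{1000 \cdot V}_{\text{cost of the delay}} \;>\; 0
\quad\Longleftrightarrow\quad T > 10^{5},
\]

so the trade is positive in expectation whenever the flourishing future would last more than 100,000 years, a very weak requirement for a technologically mature civilization.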

Posts (sorted by new)

  • The Inkhaven Residency (134 karma · 3mo · 35 comments)
  • LessOnline 2025: Early Bird Tickets On Sale (37 karma · 7mo · 5 comments)
  • Open Thread Spring 2025 (20 karma · 8mo · 50 comments)
  • Arbital has been imported to LessWrong (281 karma · 8mo · 30 comments)
  • The Failed Strategy of Artificial Intelligence Doomers (141 karma · 9mo · 77 comments)
  • Thread for Sense-Making on Recent Murders and How to Sanely Respond (109 karma · 9mo · 146 comments)
  • What are the good rationality films? (Q · 83 karma · 1y · 54 comments)
  • 2024 Petrov Day Retrospective (94 karma · 1y · 25 comments)
  • [Completed] The 2024 Petrov Day Scenario (136 karma · 1y · 114 comments)
  • Thiel on AI & Racing with China (55 karma · 1y · 10 comments)

Wikitag Contributions

  • LessWrong Reacts (20 days ago)
  • LessWrong Reacts (20 days ago)
  • LessWrong Reacts (20 days ago)
  • LessWrong Reacts (20 days ago, +3354/-3236)
  • LessWrong Reacts (a month ago)
  • LessWrong Reacts (a month ago, +638/-6)
  • LessWrong Reacts (a month ago, +92)
  • LessWrong Reacts (a month ago, +248)
  • Adversarial Collaboration (Dispute Protocol) (9 months ago)