LESSWRONG
Ben Pace

I'm an admin of LessWrong. Here are a few things about me.

  • I generally feel more hopeful about a situation when I understand it better.
  • I have signed no contracts nor made any agreements whose existence I cannot mention.
  • I believe it is good to take responsibility for accurately and honestly informing people of what you believe in all conversations; and also good to cultivate an active recklessness about the social consequences of doing so.
  • It is wrong to directly cause the end of the world. Even if you are fatalistic about what is going to happen.

Randomly: If you ever want to talk to me about anything you like for an hour, I am happy to be paid $1k for an hour of doing that.

(Longer bio.)

Sequences

Posts


Wikitag Contributions

Comments

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs
Obligated to Respond
Ben Pace · 2d

Curated! I thought this walked through a lot of the relevant considerations helpfully, and I liked the reframing of ask and guess cultures (and the idea that there can be many levels of echoes being tracked).

Open Global Investment as a Governance Model for AGI
Ben Pace · 3d

I'm not sure I understand this part actually, could you elaborate? Is this your concern with the OGI model or with your salary-only for first-N employees idea?

This is a concern I am raising with my own idea.

Open Global Investment as a Governance Model for AGI
Ben Pace · 3d

But since the first N employees usually get to sign off on major decisions, why would they go along with such an agreement?

I'm imagining a world where a group of people step forward to take a lot of responsibility for navigating humanity through this treacherous transition, and do not want themselves to be corrupted by financial incentives (and wish to accurately signal this to the external world). I'll point out that this is not unheard of; Altman literally took no equity in OpenAI (though IMO he was eventually corrupted by the power nonetheless).

Open Global Investment as a Governance Model for AGI
Ben Pace · 3d

Couldn't we just... set up a financial agreement where the first N employees don't own stock and have a set salary?

My main concern is that they'll have enough power to be functionally wealthy all-the-same, or be able to get it via other means (e.g. Altman with his side hardware investment / company).

johnswentworth's Shortform
Ben Pace · 4d

Sounds like a great empirical test!

Shortform
Ben Pace · 5d

Appreciate the example. I remember reading that retweet! 

At the time it sounded plausible to me, and I assumed it was accurate about certain industries. 

I'm interested in understanding a bit more what's going on here. Are we sure you're talking about the same kinds of companies? I'd guess you're dealing with companies in the range of 2k-20k employees, and I think CrowdStrike was substantially affecting companies in the range of 20k-200k employees (or at least that's what I thought of when I saw this tweet), where I imagine auditors have to use much more broad-brush tools to do auditing.

The sorts of companies I imagine as having this kind of broad-strokes audit are extremely broad service industries – airlines, trains, grocery stores, banks, hospitals – where my impression is they often use very old software and buggy hardware due to their overwhelming size and sloth, and where I suspect that a lot of decisions get made by the minimum possible thing required to meet some formal requirements.

Mikhail Samin's Shortform
Ben Pace · 5d

Follow-up: Michael Trazzi wrapped up after 7 days due to fainting twice and two doctors saying he was getting close to being in a life-threatening situation.

(Slightly below my modal guess, but also his blood glucose level dropped unusually fast.)

FAO @Mikhail Samin.

MAGA speakers at NatCon were mostly against AI
Ben Pace · 6d

That's a healthy hypothesis to track.

Shortform
Ben Pace · 6d (edited)

I recall a rationalist I know chiding Eliezer for his bad tweeting; Eliezer asked him to show an example of a recent tweet that was bad, and the rationalist failed to find anything especially bad.

Perhaps this has changed in the 2-3 years since that event. But I'd be interested in an example of a tweet you (lc) thought was bad.

Obligated to Respond
Ben Pace · 6d

Oh okay. I don't find this convincing; consistent with my position above, I'd bet that in the longer term we'd do best to hit a button that ended all religions today, and then eat the costs and spend the decades/centuries required to build better things in their stead. (I think it's really embarrassing we don't have better things in their place, especially after the industrial revolution.) I don't think I can argue well for that position right now; I'll need to think on it more (and maybe write a post on it when I've made some more progress on the reasoning).

(Obvious caveat that actually we only have like 0.5-3 decades of being humans any more, so the above 'centuries' isn't realistic.)

23 · Benito's Shortform Feed · 7y · 304 comments
133 · The Inkhaven Residency · 2mo · 32 comments
37 · LessOnline 2025: Early Bird Tickets On Sale · 6mo · 5 comments
20 · Open Thread Spring 2025 · 7mo · 50 comments
281 · Arbital has been imported to LessWrong · 7mo · 30 comments
137 · The Failed Strategy of Artificial Intelligence Doomers · 8mo · 77 comments
109 · Thread for Sense-Making on Recent Murders and How to Sanely Respond · 8mo · 146 comments
83 · What are the good rationality films? [Question] · 10mo · 54 comments
93 · 2024 Petrov Day Retrospective · 1y · 25 comments
136 · [Completed] The 2024 Petrov Day Scenario · 1y · 114 comments
55 · Thiel on AI & Racing with China · 1y · 10 comments
Adversarial Collaboration (Dispute Protocol) · 8 months ago
Epistemology · 10 months ago · (-454)
Epistemology · 10 months ago · (+56/-56)
Epistemology · 10 months ago · (+9/-4)
Epistemology · 10 months ago · (+66/-553)
Petrov Day · a year ago · (+714)