
habryka

Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com. 

(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)

Comments

Sorted by Newest
Noah Weinberger's Shortform
habryka · 8h · 30

Welcome! Glad to have you around and hope you have a good time!

Annapurna's Shortform
habryka · 11h · Moderator Comment · 119

This comment, too, is not fit for this site. What is going on with y'all? Why is fertility such a weirdly mindkilling issue? Please don't presume your theory to be true, try to highlight cruxes, try to summon up at least a bit of curiosity about your interlocutors, all the usual things.

Like, it's fine to have a personally confident take on the causes of low fertility in Western countries, but man, you can't just treat your personal confidence as shared by and obvious to everyone else, at least not in this way.

Annapurna's Shortform
habryka · 11h · Moderator Comment · 1110

What... is going on in this comment? It has so much snark, and my guess is that's downstream of some culture war gremlins. Please don't leave comments like this.

The basic observation is a fine one: status might be a kind of conserved quantity, and as such, to advocate for raising the status of one thing you also need to be transparent about which things you would feel comfortable lowering in status. But this isn't the way to communicate that observation.

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
habryka · 12h · 60

If the show turns out to be, say, the annual panto at the Palladium, then the claim was very conclusively true.

It would make sense that you would like a show put on at the LW theaters.

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
habryka · 12h · 97

Come on, make your critiques in a straightforward way, and use normal words to express them. I think the critique that this is kind of socially focused is a valid one, but you are coating it in sneering language that feels obnoxious.

A case for courage, when speaking of AI danger
habryka · 1d · 136

ASI could kill about 8 billion.

The future is much, much bigger than 8 billion people. Causing the extinction of humanity is much worse than killing 8 billion people. This really matters a lot for arriving at the right moral conclusions here.

Mikhail Samin's Shortform
habryka · 1d · 100

I mean, it's not like OpenPhil hasn't been interfacing with a ton of extremely successful people in politics. For example, OpenPhil approximately co-founded CSET, and talks a ton with people at RAND, and has done like 5 bajillion other projects in DC and works closely with tons of people with policy experience. 

The thing that Jason is arguing for here is "OpenPhil needs to hire people with lots of policy experience into their core teams", but man, that's just such an incredibly high bar. The relevant teams at OpenPhil are like 10 people in total. You need to select on so many things. This is like saying that Lightcone "DOESN'T HAVE ANYONE WITH ARCHITECT OR CONSTRUCTION OR ZONING EXPERIENCE DESPITE RUNNING A LARGE REAL ESTATE PROJECT WITH LIGHTHAVEN". Like yeah, I do have to hire a bunch of people with expertise on that, but it's really very blatantly obvious from where I am that trying to hire someone like that onto my core teams would be hugely disruptive to the organization.

It seems really clear to me that OpenPhil has lots of contact with people who have lots of policy experience, frequently consults with them on stuff, and that the people working there full-time seem reasonably selected to me. The only way I see the things Jason is arguing for working out is if OpenPhil were to much more drastically speed up their hiring, but hiring quickly is almost always a mistake.

Daniel Kokotajlo's Shortform
habryka · 1d · 72

I mean, I find the whole "models don't want rewards, they want proxies of rewards" conversation kind of pointless, because nothing ever perfectly matches anything else. So I agree that in a common-sense way it's fine to express this as "wanting reward", but I also think the people who care a lot about the distinction between proxies of reward and actual reward would feel justifiably kind of misunderstood by this.

Daniel Kokotajlo's Shortform
habryka · 1d · 72

I didn't disagree-vote, but it seems kind of philosophically confused. Like, when people say that reward isn't the optimization target, they usually mean "the optimization target will be some potentially quite distant proxy of reward", and "going through the motions" sounds like exactly the kind of thing that points at a distant proxy of historical reward.

sam's Shortform
habryka · 1d · 74

In addition to the object-level problems with the post, it also just cites wrong statistics (claiming that 97% of years of animal life are due to honey farming if you ignore insects, which is just plainly wrong; shrimp alone are like 10%), and it also just randomly throws in insults at political figures, which is clearly against the norm on LessWrong ("having about a million neurons—far more than our current president" and "That’s about an entire lifetime of a human, spent entirely on drudgery. That’s like being forced to read an entire Curtis Yarvin article from start to finish. And that is wildly conservative.").

I have sympathy for some of the underlying analysis, but this really isn't a good post.

Sequences

A Moderate Update to your Artificial Priors
A Moderate Update to your Organic Priors
Concepts in formal epistemology
Posts

56 · Habryka's Shortform Feed (Ω) · 6y · 436
38 · Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity · 6h · 4
20 · Open Thread - Summer 2025 · 17d · 15
91 · ASI existential risk: Reconsidering Alignment as a Goal · 3mo · 14
346 · LessWrong has been acquired by EA · 3mo · 53
77 · 2025 Prediction Thread · 6mo · 21
23 · Open Thread Winter 2024/2025 · 6mo · 60
45 · The Deep Lore of LightHaven, with Oliver Habryka (TBC episode 228) · 7mo · 4
36 · Announcing the Q1 2025 Long-Term Future Fund grant round · 7mo · 2
112 · Sorry for the downtime, looks like we got DDosd · 7mo · 13
610 · (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 7mo · 270
Wikitag Contributions

Roko's Basilisk · 4d
Roko's Basilisk · 4d
AI Psychology · 6mo · (+58/-28)