LESSWRONG

habryka

Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com. 

(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)

Comments (sorted by newest)
56 · Habryka's Shortform Feed · 6y · 436 comments
A Review of Nina Panickssery’s Review of Scott Alexander’s Review of “If Anyone Builds It, Everyone Dies”
habryka · 20h

Aella is the partner of one of the authors! Of course she had advance access! I don't know about Grimes, seems plausible to me (though not super clear whether to count her as sympathetic or critical, I don't really know what she believes about this stuff, and also she has a lot of reach otherwise).

A Review of Nina Panickssery’s Review of Scott Alexander’s Review of “If Anyone Builds It, Everyone Dies”
habryka · 1d

I also don't think sympathetic people who aren't writing book reviews got copies in advance, so my best guess is things are relatively symmetric. I don't really know how copies were distributed, but my sense is that not many advance copies were distributed in total (and that this was largely downstream of publisher preference).

Obligated to Respond
habryka · 1d

For me the answer is "roughly the beginning of the 20th century?"

Like, seems to me that around that time humanity had enough of the pieces figured out to make a more naturalistic worldview work pretty well. 

It's kind of hard to specify what it would have meant to press that button some centuries earlier, since like, I think a non-trivial chunk of religion was people genuinely trying to figure out what reality is made out of, and what the cosmology of the world is, etc. Depending on the details of this specification I would have done it earlier. 

My AI Vibes are Shifting
habryka · 2d

Those companies are run by humans, so no, of course the world does not look like total human disempowerment to me? 

If practically all of the world's governments and corporations were run by AIs... well, then I expect we would be dead, but if for some reason we were not, it seems very likely that yes, that would constitute total human disempowerment.

LessWrong is migrating hosting providers (report bugs!)
habryka · 4d

Yeah, my model is that if someone does this once they'll waive the charges. We already had autoscaling with our previous hosting provider, and under both the current setup and the previous one people could DDoS us if they wanted to take us down. Within a week or so we could likely switch things around to be robust against most forms of DDoS (probably at some cost to user experience and development experience).

If someone does this a lot, we can just turn on billing limits, and then go down instead of going bankrupt, which is roughly the same situation we were in before.

LessWrong is migrating hosting providers (report bugs!)
habryka · 4d

It's true! May history judge who was right in the end.

New User's Guide to LessWrong
habryka · 4d

Thank you! Fixed.

Visionary arrogance and a criticism of LessWrong voting
habryka · 4d

Definitely! Requests are totally fine!

Visionary arrogance and a criticism of LessWrong voting
habryka · 4d

*** Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character comment prepended with "Downvote note:" on what to improve. ***

Sorry, to be clear, this is not a valid comment guideline on LessWrong. The current moderation system allows authors to moderate comments (assuming they have the necessary amount of karma). It does not allow authors to change how people vote. I can imagine at some point maybe doing something here, but it seems dicey, and is not part of how LessWrong currently works.

LessWrong is migrating hosting providers (report bugs!)
habryka · 4d

I might respond in more depth later, and I am sure other team members have opinions, but roughly: 

  • React, our frontend framework, has chosen a kind of weird path where if you want to use the latest set of features in React 19, you basically have to use NextJS (more concretely, server functions and server components are two features that would be extremely hard to use without framework support, and NextJS is the only framework with support).
  • We've been using NextJS for all of the other web projects that we've been building (including AI 2027, the new MIRI book website, our internal Lighthaven booking infrastructure, our conference scheduling software Writehaven, and our internal company infrastructure), and it's generally been a great experience in almost every respect (it's been a less great experience for LessWrong, which isn't surprising since it's a much much bigger and more complicated codebase).
    • Jim also has some not-great experience working on some non-Lightcone projects.
  • AWS Beanstalk was a kind of terrible deployment/hosting service, or at least we never figured out how to use it properly. Our deploys would routinely take 20+ minutes, and then take another 20+ minutes to roll back, which means we had multiple instances of ~1 hour downtime that could have instead been a 5-minute downtime if deploys and rollbacks had been fast.
  • NextJS is a serverless framework. There are some developer experience benefits you get from restructuring things in a serverless way. The one I am most excited about is having preview deployments. PR review is much easier if every pull request just has a link to a deployed version of the app attached to it that you can click to visit, click around in to find any bugs or visual issues, and leave comments on directly. 

There are some more reasons, but these are the big ones from my perspective. 
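To make the "server functions" point above concrete, here is a minimal sketch of what a React 19 server function looks like: a module marked with the "use server" directive whose async exports the framework exposes as server-only entry points callable from client code. The `saveComment` function and its validation logic are hypothetical, for illustration only, and not from the actual LessWrong codebase.

```typescript
// Hypothetical React 19 server function, as used with NextJS.
// The "use server" directive tells the framework that exports of this
// module run only on the server, even when invoked from client components.
"use server";

export async function saveComment(
  postId: string,
  body: string
): Promise<{ ok: boolean; error?: string }> {
  // In a real app this would write to a database; here we only validate,
  // so the sketch stays self-contained.
  if (body.trim().length === 0) {
    return { ok: false, error: "empty comment" };
  }
  return { ok: true };
}
```

In NextJS a client component can pass `saveComment` directly to a form's `action` prop and the client/server call boundary is wired up by the framework; without framework support for the directive, that boundary has to be built by hand, which is the gap the comment describes.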

Sequences

A Moderate Update to your Artificial Priors
A Moderate Update to your Organic Priors
Concepts in formal epistemology
Posts

246 · Banning Said Achmiz (and broader thoughts on moderation) · 26d · 379 comments
97 · Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity · 2mo · 43 comments
23 · Open Thread - Summer 2025 · 3mo · 68 comments
93 · ASI existential risk: Reconsidering Alignment as a Goal · 5mo · 14 comments
356 · LessWrong has been acquired by EA · 6mo · 52 comments
78 · 2025 Prediction Thread · 9mo · 21 comments
23 · Open Thread Winter 2024/2025 · 9mo · 59 comments
46 · The Deep Lore of LightHaven, with Oliver Habryka (TBC episode 228) · 9mo · 4 comments
36 · Announcing the Q1 2025 Long-Term Future Fund grant round · 9mo · 2 comments
112 · Sorry for the downtime, looks like we got DDosd · 10mo · 13 comments
Wikitag Contributions

CS 2881r · 6 days ago · (+204)
Roko's Basilisk · 2 months ago
Roko's Basilisk · 2 months ago
AI Psychology · 9 months ago · (+58/-28)