habryka
Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com. 

(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)

Comments (sorted by newest)
LessWrong is migrating hosting providers (report bugs!)
habryka · 15h

Yeah, my model is that if someone does this once, they'll waive the charges. We already had autoscaling with our previous hosting provider, and under both the previous setup and the current one, people could DDoS us if they wanted to take us down. Within a week or so we could likely rearrange things to be robust against most forms of DDoS (probably at some cost to user experience and developer experience).
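
To gesture at what "robust against most forms of DDoS, at some cost to user experience" could look like, here is a minimal sketch of naive per-IP rate limiting in NextJS middleware (the thresholds are made up and this is not our actual setup; serious DDoS protection would mostly live at the CDN/WAF layer rather than in app code):

```typescript
// middleware.ts -- a minimal sketch, not our actual setup.
// Naive per-IP rate limiting in NextJS middleware. The in-memory Map is
// per-instance and resets on redeploy, so this is illustrative only.
import { NextRequest, NextResponse } from 'next/server';

const WINDOW_MS = 10_000;  // hypothetical 10-second window
const MAX_REQUESTS = 100;  // hypothetical per-IP budget per window

const hits = new Map<string, { count: number; windowStart: number }>();

export function middleware(request: NextRequest) {
  // x-forwarded-for is set by most proxies and load balancers.
  const ip = request.headers.get('x-forwarded-for') ?? 'unknown';
  const now = Date.now();
  const entry = hits.get(ip);

  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return NextResponse.next();
  }

  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    // Shed load: everyone behind a shared IP gets throttled too,
    // which is part of the user-experience cost mentioned above.
    return new NextResponse('Too many requests', { status: 429 });
  }
  return NextResponse.next();
}
```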

If someone does this a lot, we can just turn on billing limits, and then go down instead of going bankrupt, which is roughly the same situation we were in before.

LessWrong is migrating hosting providers (report bugs!)
habryka · 20h

It's true! May history judge who was right in the end.

New User's Guide to LessWrong
habryka · 21h

Thank you! Fixed.

Visionary arrogance and a criticism of LessWrong voting
habryka · 1d

Definitely! Requests are totally fine!

Visionary arrogance and a criticism of LessWrong voting
habryka · 1d

> *** Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character comment prepended with "Downvote note:" on what to improve. ***

Sorry, to be clear, this is not a valid comment guideline on LessWrong. The current moderation system allows authors to moderate comments (assuming they have the necessary amount of karma). It does not allow authors to change how people vote. I can imagine at some point maybe doing something here, but it seems dicey, and is not part of how LessWrong currently works.

LessWrong is migrating hosting providers (report bugs!)
habryka · 1d

I might respond in more depth later, and I am sure other team members have opinions, but roughly: 

  • React, our frontend framework, has chosen a kind of weird path where, if you want to use the latest set of features in React 19, you basically have to use NextJS (more concretely, server functions and server components are two features that would be extremely hard to use without framework support, and NextJS is the only framework with that support; see the sketch below this list).
  • We've been using NextJS for all of the other web projects that we've been building (including AI 2027, the new MIRI book website, our internal Lighthaven booking infrastructure, our conference scheduling software Writehaven, and our internal company infrastructure), and it's generally been a great experience in almost every respect (it's been a less great experience for LessWrong, which isn't surprising since it's a much, much bigger and more complicated codebase).
    • Jim has also had some not-great experiences with NextJS on non-Lightcone projects.
  • AWS Elastic Beanstalk was a kind of terrible deployment/hosting service, or at least we never figured out how to use it properly. Our deploys would routinely take 20+ minutes, and then take another 20+ minutes to roll back, which meant we had multiple instances of ~1 hour of downtime that could instead have been 5-minute downtimes if deploys and rollbacks had been fast.
  • NextJS is a serverless framework. There are some developer experience benefits you get from restructuring things in a serverless way. The one I am most excited about is preview deployments: PR review is much easier if every pull request has a link to a deployed version of the app attached to it, which you can visit, click around in to find any bugs or visual issues, and leave comments on directly.
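
To make those two React 19 features concrete, here is a minimal sketch of a server function plus a server component in the NextJS App Router (two short files; this is not code from our codebase, and the paths, data, and persistence step are all hypothetical):

```tsx
// app/posts/actions.ts -- a "server function" (server action).
// The 'use server' directive marks these exports as callable from client code.
'use server';

export async function savePost(formData: FormData) {
  const title = formData.get('title');
  // Hypothetical persistence step; a real app would write to a database here.
  console.log('saving post:', title);
}
```

```tsx
// app/posts/page.tsx -- a React Server Component (the App Router default).
// It can be async and runs only on the server, shipping no JS to the
// client for this markup.
import { savePost } from './actions';

export default async function PostsPage() {
  // Hypothetical server-side data fetch.
  const posts = [{ id: '1', title: 'Hello LessWrong' }];

  return (
    <main>
      <ul>
        {posts.map((post) => (
          <li key={post.id}>{post.title}</li>
        ))}
      </ul>
      {/* The form calls the server function directly; no hand-written API route. */}
      <form action={savePost}>
        <input name="title" />
        <button type="submit">Save</button>
      </form>
    </main>
  );
}
```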

There are some more reasons, but these are the big ones from my perspective. 

Rana Dexsin's Shortform
habryka · 2d

Yeah, the UI isn't amazing. It's kind of a tricky problem to work on for a few reasons, but we should make the UI a lot more obvious.

My talk on AI risks at the National Conservatism conference last week
habryka · 2d

> Then, what should those people actually do with that knowledge?

Focus a mixture of stigma, regulation, and financial pressures on the people who are responsible for building AGI/ASI. Importantly, "responsible" is very different from "associated with".

> If AI devs are making fortunes endangering humanity, and we can't negate their salaries or equity stakes, we can at least undercut the social status and moral prestige of the jobs that they're doing.

Yep, I am in favor of such stigmas for people working on frontier development. I am not in favor of e.g. such a stigma for people who are developing self-driving cars, or are working on stopping AI themselves (and as such are "associated with building AGI/ASI"). 

I think we both agree pretty strongly that there should be a lot of negative social consequences for people responsible for building AGI/ASI. My sense is you want to extend this further, beyond "responsible" and into "associated with", and I think this is bad. Yes, we can't expect perfect causal models from the public and the forces behind social pressures, but we can help make them more sane and directed towards the things that help, as opposed to the things that are just collateral damage or actively anti-helpful. That's all I am really asking for.

MAGA speakers at NatCon were mostly against AI
habryka · 3d

I... again am happy to accept critique of my posting, but I think you are really weirdly off-base here. Feel free to ask some neutral third-party to do an evaluation of our commenting or tweeting styles and how they compare to local norms of discourse. 

In particular, who cares about using words like "fuck"? What does this have to do with anything? Saying "fuck them" is much less aggressive or bad than saying "Behind your pretty-boy mask, you're a sociopathic ghoul. Glad that Americans are learning the truth about the deep, dark, bitter pit where your soul should be."!

I have certainly said the former to friends or acquaintances many times and received it many times. If you ever hear me or anyone else say the latter (or anything like it) earnestly to you, I think something is seriously going wrong.

My talk on AI risks at the National Conservatism conference last week
habryka · 3d

> We need to morally stigmatize anyone associated with building AGI/ASI.

No, I think our top priority should be to get people to come to an accurate understanding of the risks associated with AI. I think this requires being able to distinguish between real risks and fake risks. Not everyone associated with AI deserves to be morally stigmatized, and while I agree we should be willing to accept some collateral damage, "stigmatizing anyone associated with building AGI" with an implied "by any means necessary"[1] is IMO not a reasonable strategy.

  1. ^ My guess is you do consider some things to be across the line, but it seems likely to me that the line of what you consider acceptable in the pursuit of stigmatizing people is quite different from mine (and, my guess is, from most other people here).

Sequences
  • A Moderate Update to your Artificial Priors
  • A Moderate Update to your Organic Priors
  • Concepts in formal epistemology
Wikitag Contributions
  • CS 2881r (3 days ago, +204)
  • Roko's Basilisk (2 months ago)
  • AI Psychology (9 months ago, +58/-28)
Posts
  • Banning Said Achmiz (and broader thoughts on moderation) (245 karma, 23d, 379 comments)
  • Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity (97 karma, 2mo, 43 comments)
  • Open Thread - Summer 2025 (22 karma, 3mo, 64 comments)
  • ASI existential risk: Reconsidering Alignment as a Goal (93 karma, 5mo, 14 comments)
  • LessWrong has been acquired by EA (355 karma, 5mo, 52 comments)
  • 2025 Prediction Thread (78 karma, 9mo, 21 comments)
  • Open Thread Winter 2024/2025 (23 karma, 9mo, 59 comments)
  • The Deep Lore of LightHaven, with Oliver Habryka (TBC episode 228) (46 karma, 9mo, 4 comments)
  • Announcing the Q1 2025 Long-Term Future Fund grant round (36 karma, 9mo, 2 comments)
  • Sorry for the downtime, looks like we got DDosd (112 karma, 9mo, 13 comments)