Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.
(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)
It's true! May history judge who was right in the end.
Thank you! Fixed.
Definitely! Requests are totally fine!
*** Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character comment prepended with "Downvote note:" on what to improve. ***
Sorry, to be clear, this is not a valid comment guideline on LessWrong. The current moderation system allows authors to moderate comments on their posts (assuming they have enough karma); it does not allow authors to change how people vote. I can imagine at some point maybe doing something here, but it seems dicey, and it's not part of how LessWrong currently works.
I might respond in more depth later, and I am sure other team members have opinions, but roughly:
There are some more reasons, but these are the big ones from my perspective.
Yeah, the UI isn't amazing. It's kind of a tricky problem to work on for a few reasons, but we should make it a lot more obvious.
Then, what should those people actually do with that knowledge?
Focus a mixture of stigma, regulation, and financial pressure on the people who are responsible for building AGI/ASI. Importantly, "responsible" is very different from "associated with".
If AI devs are making fortunes endangering humanity, and we can't negate their salaries or equity stakes, we can at least undercut the social status and moral prestige of the jobs that they're doing.
Yep, I am in favor of such stigmas for people working on frontier development. I am not in favor of e.g. such a stigma for people who are developing self-driving cars, or are working on stopping AI themselves (and as such are "associated with building AGI/ASI").
I think we both agree pretty strongly that there should be a lot of negative social consequences for people responsible for building AGI/ASI. My sense is you want to extend this further beyond "responsible" and into "associated with", and I think this is bad. Yes, we can't expect perfect causal models from the public and the forces behind social pressures, but we can help make them more sane and directed towards the things that help, as opposed to the things that are just collateral damage or actively anti-helpful. That's all I am really asking for.
I... again am happy to accept critique of my posting, but I think you are really weirdly off-base here. Feel free to ask some neutral third-party to do an evaluation of our commenting or tweeting styles and how they compare to local norms of discourse.
In particular, who cares about using words like "fuck"? What does this have to do with anything? Saying "fuck them" is much less aggressive or bad than saying "Behind your pretty-boy mask, you're a sociopathic ghoul. Glad that Americans are learning the truth about the deep, dark, bitter pit where your soul should be."!
I have certainly said the former to friends or acquaintances many times and received it many times. If you ever hear me or anyone else say the latter (or anything like it) earnestly to you, I think something is seriously going wrong.
We need to morally stigmatize anyone associated with building AGI/ASI.
No, I think our top priority should be to get people to come to an accurate understanding of the risks associated with AI. I think this requires being able to distinguish between real risks and fake risks. Not everyone associated with AI deserves to be morally stigmatized, and while I agree we should be willing to accept some collateral damage, "stigmatizing anyone associated with building AGI" with an implied "by any means necessary[1]" is IMO not a reasonable strategy.
My guess is you do consider some things across the line, but it seems likely to me that the lines of what you consider acceptable to do in the pursuit of stigmatizing people are quite different from mine (and, my guess is, from most other people's here).
Yeah, my model is that if someone does this once, they'll waive the charges. We already had autoscaling in our previous hosting context, and under both the current setup and the previous one, people could DDoS us if they wanted to take us down. Within a week or so we could likely switch things around to be robust against most forms of DDoS (probably at some cost to user experience and development experience).
If someone does this a lot, we can just turn on billing limits, and then go down instead of going bankrupt, which is roughly the same situation we were in before.
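(To gesture at what application-layer throttling could look like, here's a purely illustrative TypeScript sketch of a per-IP token-bucket rate limiter. This is not our actual stack or configuration; the Express setup, limits, and in-memory map are all made up for the example, and a real deployment would mostly do this at the CDN or load-balancer layer.)

```typescript
// Illustrative sketch only: per-IP token-bucket rate limiting as Express middleware.
// Limits and structure are hypothetical, not a description of LessWrong's setup.
import express, { Request, Response, NextFunction } from "express";

const CAPACITY = 20;      // max burst of requests a single client can make
const REFILL_PER_SEC = 5; // sustained requests per second allowed per client

type Bucket = { tokens: number; lastRefill: number };
const buckets = new Map<string, Bucket>();

function rateLimit(req: Request, res: Response, next: NextFunction) {
  const key = req.ip ?? "unknown";
  const now = Date.now();
  const bucket = buckets.get(key) ?? { tokens: CAPACITY, lastRefill: now };

  // Refill tokens proportional to elapsed time, capped at CAPACITY.
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(CAPACITY, bucket.tokens + elapsedSec * REFILL_PER_SEC);
  bucket.lastRefill = now;

  if (bucket.tokens < 1) {
    // Out of tokens: degrade gracefully instead of racking up compute costs.
    buckets.set(key, bucket);
    res.status(429).send("Too many requests");
    return;
  }

  bucket.tokens -= 1;
  buckets.set(key, bucket);
  next();
}

const app = express();
app.use(rateLimit);
app.get("/", (_req, res) => res.send("ok"));
app.listen(3000);
```

A real version would also evict stale buckets (the map grows unboundedly here) and tune the limits against real traffic, but the point is just that going down gracefully instead of going bankrupt is mostly a configuration choice, not a hard engineering problem.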