Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.
(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)
Self-reviews are actively encouraged! Indeed, we will ask authors to do so explicitly in the review phase.
I don't think thousands of people at our partners have authorized access to model weights.
I don't understand the relevance of this. Of course almost no one at the partners has "authorized" access to model weights. This is in the cybersecurity section of the RSP.
The question is how many people have physical or digital access to machines that process model weights, which is how I understood you to define the "sophisticated" subset of "insiders" in the RSP. To quote directly from it:
We define “sophisticated insider risk” as risk from an insider who has persistent access or can request time-limited access to systems that process model weights.
Clearly datacenter employees at Amazon, Google and Microsoft have access to systems that process model weights. They can (approximately) just walk up to them, and probably log into accounts on the same machines. Possibly you meant here that "access to systems that process model weights" was intended as equivalent to "authorized access to model weights", but those are of course very different in the case of a datacenter provider, and to me it seemed a very intentional choice to define this threshold as "access to systems that process model weights", not "access to model weights".
I won't continue the argument about who has an idiosyncratic reading, but do want to simply state that I remain unconvinced that it's me (though not confident either)
Seems good for you to state! I would be glad to take bets about what neutral third parties would consider true. I don't think it's a total slam-dunk either, but I feel roughly 75% confident that a mutually agreed-upon third party would end up thinking the interpretation I advocate for here is the correct one.
I feel like the epistemic qualifier at the top was pretty clear about the state of the belief, even if Lucie was wrong! I would not call this "attributing it to others"; nobody is going to quote this in an authoritative tone as something you said, unless the above is a really egregious summary, which currently seems unlikely to me.
Huh, I feel like it's pretty good for that purpose? If you want a list of posts that were popular but not endorsed, just take the difference between the highly upvoted posts and the review results.
The only requirement for a post to enter the review phase is that anyone thinks it still has anything going for it. As such, if a post is obviously a fad, it won't end up thoroughly reviewed, but if it's really that common-knowledge that it was a fad, that seems fine. And even then, we still occasionally have people nominate posts for review just because they think they are bad and want people to do a retrospective on them.
I am working on the final financials for it as part of the fundraising post! We ended up spending more like $3M instead of $2.6M, but a lot of it is on property tax, which we are applying for an exemption for and will get back if the application succeeds, which makes the accounting a bit confusing.
My current guess is Lighthaven will have ended up netting around -$500k if you include the property tax, and -$200k if you exclude the property tax. But these numbers are quite provisional and could easily change by 50% either way until I've done the full math.
Thank you!
Lightcone is doing another fundraiser this year[1]! I am still working on our big fundraising post, but figured I would throw up something quick in case people are thinking about their charitable giving today.
Short summary of our funding situation: We are fundraising for $2M this year. Most of that goes into LessWrong and adjacent projects. Lighthaven got pretty close to breaking even this year (though isn't fully there). We also worked on AI 2027, which sure had a lot of effects. We do kind of have to raise around this much if we don't want to shut down, since most of our expenses are fixed costs (my guess is the absolute minimum we could handle is something like $1.4M).
Donors above $2,000 can continue to get things at Lighthaven dedicated to them.
Donate here: https://www.every.org/lightcone-infrastructure
(I also just added a fundraising banner. I expect that to be temporary as I don't generally like having ad-like content on post pages, but we happen to be getting an enormous amount of incoming traffic to Claude 4.5 Opus' Soul Document, and so a banner seemed particularly valuable for a day or two.)
Last year's fundraiser: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc/the-lightcone-is-nothing-without-its-people
Welcome! Glad to have you around and hope you find good things here, and figure out how to contribute to both your own and other people's understanding of this crazy time we live in.
Yep, the stance is relatively hard. I am very confident that the alternative would be a pretty quick collapse of the platform, or it would require some very drastic changes in the voting and attention mechanisms on the site to deal with the giant wave of slop that any other stance would allow.
Apart from the present post, I am betting a large fraction of LessWrong posts are already written with AI assistance. Some may spend significant time to excise the tell-tale marks of LLM prose, which man... feels super silly? But many posts explicitly acknowledge AI assistance. For myself, I so assume everybody is of course using AI assistance during writing that I don't even consider it worth mentioning. It amuses me when commenters excitedly point out that I've used AI to assist writing, as if they've caught me in some sort of shameful crime.
Making prose flow is not the hard part of writing. I am all in favor of people using AIs to think through their ideas. But I want their attestations to be their personal attestations, not some random thing that speaks from world-models that are not their own, and whose confidence levels do not align with the speaker's. Again, AI-generated output is totally fine on the site, just don't use it for things that refer to your personal levels of confidence, unless you really succeeded at making it sound like you, and you stand behind it the way you would stand behind your own words.
Yes, odds notation is the only sane way to do Bayes. Who cares about Bayes' theorem written out in math. Just think about hypotheses and likelihoods. If you need to rederive the math notation, start from thinking in odds and rederive what you would do to get a probability out of odds.
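A minimal worked example (the numbers here are purely illustrative): suppose your prior odds on a hypothesis are 1:4, and the evidence you observe is three times as likely under that hypothesis as under the alternative. Then the whole update is just

$$1:4 \;\times\; 3:1 \;=\; 3:4, \qquad P = \frac{3}{3+4} \approx 0.43.$$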
I do sure feel confused why so many people mess up Bayes. The core of Bayesian reasoning is literally just asking the question of "what is the probability that I would see this evidence given each one of my hypotheses", or in the case of a reasonable null hypothesis and a hypothesized conjecture, the question of "would I be seeing anything different if I was wrong/if this was just noise?".
To be clear, this is also what is in all the standard Bayes guides. Eliezer's Bayes guides, both on Yudkowsky.net and on Arbital.com, are centrally about the odds notation.