Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.
(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)
Welcome! Glad to have you around and hope you find good things here, and figure out how to contribute to both your own and other people's understanding of this crazy time we live in.
Yep, the stance is relatively hard. I am very confident that the alternative would be a pretty quick collapse of the platform, or it would require some very drastic changes in the voting and attention mechanisms on the site to deal with the giant wave of slop that any other stance would allow.
Apart from the present post, I am betting a large fraction of LessWrong posts are already written with AI assistance. Some may spend significant time excising the tell-tale marks of LLM prose, which, man... feels super silly? But many posts explicitly acknowledge AI assistance. For myself, I assume everybody is of course using AI assistance during writing, so I don't even consider it worth mentioning. It amuses me when commenters excitedly point out that I've used AI to assist my writing, as if they've caught me in some sort of shameful crime.
Making prose flow is not the hard part of writing. I am all in favor of people using AIs to think through their ideas. But I want their attestations to be their personal attestations, not some random thing that speaks from world-models that are not their own, and whose confidence levels do not align with the speaker's. Again, AI-generated output is totally fine on the site, just don't use it for things that refer to your personal levels of confidence, unless you really succeeded at making it sound like you, and you stand behind it the way you would stand behind your own words.
I think posts that are just "hey, I thought X was important, here is what an LLM said about it" seem fine. Just don't pass it off as your own writing.
Very reasonable questions!
1. This is not raw or lightly edited LLM output. Eg all facts and overall structure here are based on a handwritten draft.
As I have learned from dealing with this complaint every day, when given a draft to turn into prose, the AI will add a huge amount of "facts". Phrasings, logical structure, and that kind of thing communicate quite important information (indeed, often more than the facts do, via the use of qualifiers or the exact choice of logical connectors).
2. The LLM assistance was about writing flowing, coherent prose, which (for me at least) can take a lot of time. Some may take offence at typical LLMisms, but I fail to see how this lowers the object-level quality. I could spend hours excising every sign of AI, but this defeats the purpose of using AI to enhance productivity.
In addition to the point above (the "writing flowing/coherent prose" part very much not actually being surface level), there is simply also an issue of enforcement. The default equilibrium of people pasting LLM output is that nobody is really talking to each other. I can't tell whether the LLM writing reflects what you actually wanted to say, or is just a random thing it made up. That's why I recommend putting it into a box.
3. That said, if the facts were also LLM-generated and I hand-checked them carefully, I fail to see how this would actually lower the overall quality - in fact my best guess is that LLMs are already much, much better in many-to-most domains than many-to-most people. E.g. Twitter has seen marked improvements in epistemic quality since "@grok is this true" became a thing. The future [and present] of writing and intellectual work is Artificial Intelligence. To claim otherwise seems to be a denial of the reality of the imminent and immanent arrival of a superior machine intelligence.
I agree! LLMs are indeed actually quite great at generating facts. They are also pretty decent at some aspects of writing and communication.
There is no doubt the future of writing and intellectual work is AI. My guess is that within a year or two something big will have to change in how LessWrong relates to it (just as we had to change how we relate to it within the last year). But for now, AI is not yet better than the median LessWrong commenter at the kind of writing that happens on LessWrong, and even if it were at a surface level, there are various other dynamics that make it unlikely that the right choice is for LW to be a place where humans post unmarked LLM output as their own.
4. Pragmatically, I find the present guidelines unclear. Am I allowed to post AI-assisted writing if I mark it as such? If so, I will just mark everything I write as AI content and let the reader decide if they trust my judgement.
I mean, you can put all your writing into collapsible sections, but I highly doubt you would get much traction that way. If you mark non-AI writing as AI content that's also against the moderation rules.
Simply keep the two apart, and try to add prose to explain the connection between them. Feel free to extensively make use of AI, just make sure it's clear which part is AI, and which part is not. Yes, this means you can't use AI straightforwardly to write your prose. Such is life. The costs aren't worth it for LW.
This is just false. Of the above list, the people who were doing Inkhaven in some form or another:
Almost 50% of the whole list!
Ben, Robert, and I were just participating in Inkhaven (Ben for ~3 weeks, Robert and I for a week each), and Wentworth was doing his daily posting because of Inkhaven. It's obvious there is a huge effect here (you can of course dispute that the posts are good, but trying to somehow slice it up as Inkhaven not having a huge effect seems very unlikely to have any good case behind it).
Airtable also allows this but requires people to have an Airtable account, IIRC.
Ah, cool, sorry that I misunderstood!
I don't have a better way of checking whether those quotes are real than to do my own OCR of the PDF, and I don't currently have an OCR setup handy. They seem plausibly real to me, but you know, that's kind of the issue :P
I did provide a direct chat link. I don't have any active system prompts or anything like that, to my knowledge, so that should give you all the tools to replicate. I agree the system might not always do this, though it clearly did that time (and seems to generally do this when I've used it).
I think Adria linked to the exact PDF, in case you don't have access to uploaded files. You can also just search the filename and find it yourself as a PDF.
Sure, and my policy above doesn't rule that out. The only thing I said is that there is some price for which we'll do it (my guess is de-facto there are probably some clearing prices here, as opposed to zero, but that would be a different conversation).
Lightcone is doing another fundraiser this year[1]! I am still working on our big fundraising post, but figured I would throw up something quick in case people are thinking about their charitable giving today.
Short summary of our funding situation: We are fundraising for $2M this year. Most of that goes into LessWrong and adjacent projects. Lighthaven got pretty close to breaking even this year (though isn't fully there). We also worked on AI 2027, which of course had a lot of effects. We do kind of have to raise around this much if we don't want to shut down, since most of our expenses are fixed costs (my guess is the absolute minimum we could handle is something like $1.4M).
Donors above $2,000 can continue to get things at Lighthaven dedicated to them.
Donate here: https://www.every.org/lightcone-infrastructure
(I also just added a fundraising banner. I expect that to be temporary, as I don't generally like having ad-like content on post pages, but we happen to be getting an enormous amount of incoming traffic to Claude 4.5 Opus' Soul Document, and so a banner seemed particularly valuable for a day or two.)
Last year's fundraiser: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc/the-lightcone-is-nothing-without-its-people