I operate by Crocker's rules. All LLM output is explicitly designated as such. I have made no self-hiding agreements.
Is there a service that automatically matches people for donation swaps?
Unfortunately not; it seems the legal situation here is still unresolved, and as long as that's the case, my best guess is that nobody will want to take on the legal risk of building such a platform.
At a wild guess, I'd say that if the useful artifact is literally a paragraph or less, and you've gone over it several times, then it could be "ok" as testimony according to me. Like, if the LLM drafted a few sentences, and then you read them and deeply checked "is this really the right way to say this? does this really match my idea / felt sense?", and then you asked for a bunch of rewrites / rewordings, and did this several times, then plausibly that's just good.
Yeah, insofar as I'd endorse publishing LLM text that'd be the minimum, maybe in addition to adding links.
Code feels similar: I often end up deleting a bunch of LLM-generated code because it's extraneous to my purpose. And this is much more of an issue, because I don't feel like publishing LLM-written text but don't know how to feel about LLM-written code. I guess a warning at the top, telling the reader that they're about to wade into some-level-of-unedited code, is warranted.
Why LLM it up? Just give me the prompt.
Often to-me-useful artefacts come about in a long conversation with LLMs after many bits of steering & revisions, so there's a spectrum of how me-generated to LLM-generated some text is.
I ask because I haven't been able to notice any measurable impacts from meditation on any variable in my life (possibly too much information at 1, 2, 3). I also like meditation and will go on probably 2, plausibly 3 retreats this year, but the lack of measurable impact on my life leaves me skeptical that it's actually a good use of my time.
A key missing ingredient holding back LLM economic impact is that they're just not robust enough.
I disagree with this in this particular context. We are looking at AI companies trying to automate AI R&D via AIs. Most tasks in AI R&D don't require much reliability. I don't know the distribution of outcomes in ML experiments, but I reckon a lot of them are basically failures or null results, while the distribution of the impact of such experiments has a long tail [1]. ML experiments also don't have many irreversible parts; AI R&D researchers aren't like surgeons, where mistakes have huge costs. Any ML experiment can be sandboxed, given a bounded amount of resources, and shut down when it takes up too much. You need high reliability when the cost of failure is necessarily very high, and when running ML experiments that's not the case.
Edit: Claude 4.5 Sonnet gave feedback on my text above and says that the search strategy matters if we're looking at ML engineering: if the search is breadth-first and innovations don't require going down a deep tree, then low reliability is fine, but if we need to combine ≥4 innovations in a depth-first search, then reliability matters more.
I don't think this is a crux for me, but learning that it's a thin-tailed distribution would make me at least think about this problem a bit more. Claude claims hyperparameter tuning runs have lognormal returns (shifted so that the mean is slightly below baseline). ↩︎
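To make the long-tail point a bit more concrete, here is a minimal toy sketch, assuming the footnote's shifted-lognormal returns (mean slightly below baseline) and that the lab simply keeps the best result it finds; all numbers are made up for illustration, and the question is how much the best-found result degrades when an unreliable agent only completes a fraction of its runs.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_found(n_experiments: int, completion_rate: float) -> float:
    # Shifted lognormal returns: the median run is a mild loss and the mean is
    # slightly below baseline, but the right tail is long, so a few runs are big wins.
    returns = rng.lognormal(mean=0.0, sigma=1.5, size=n_experiments) - 3.2
    # Unreliable agent: some runs never complete; failures are sandboxed and
    # resource-bounded, so they cost roughly nothing rather than being catastrophic.
    completed = returns[rng.random(n_experiments) < completion_rate]
    # The lab keeps the best configuration it finds across all completed runs.
    return completed.max() if completed.size else 0.0

for rate in (1.0, 0.8, 0.5):
    print(f"completion rate {rate:.0%}: best result found ~ {best_found(20_000, rate):.1f}")
```

If the tail is heavy, halving the number of completed runs barely moves the extreme quantiles, so the best-found result degrades only modestly; that's the sense in which low reliability is cheap here, and it's exactly the property that would go away if the distribution turned out to be thin-tailed.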
I hadn't seen DirectedEvolution's review, so it was useful evidence for me. The questions Yarrow picked for determining whether LW was early seemed fair on the LW side (what other question than "did people on LW talk about COVID-19 early on" would one even be asking?), if not on the side of governments and the mainstream media. Even though DirectedEvolution's review exists, I remember people claiming LW was early on COVID-19, so Yarrow's independent look into it is still useful. (Again, I know Yarrow is not set up to be fair to LW, but even unfair/biased information sources can be useful if one can model the degree/way in which they're biased.) I think the statement "LW was early on COVID-19" [1] (which I used to believe!) is just wrong? I haven't seen counter-evidence, yet people continue saying it.
I'd say e.g. that Metaculus came out far ahead of basically anyone else, with the first market being published on the 19th of January. (I'm looking at my predictions from the time on Metaculus and PredictionBook, and there was a lot of fast updating based on evidence. I wish more LWers had forecasting track records (like this one) instead of vaguely talking about "Bayes points" [2].)
These kinds of retrospectives are in general not done very often, and follow-up on/investigation of common claims is rare and kinda annoying to do, so I want to signal-boost empirical posts like the one I linked.
I think at minimum you have to credit the trading gains many LessWrongers made at the time, and MicroCovid.
Yeah, Yarrow is not trying to be fair. MicroCovid was cool, as was VaccinateCA. Maybe I'll take a look into how many trading gains LWers made at the time, though this suffers from obvious selection bias. (I, for example, made no gains at the time, because I didn't see the signal to start an investment account early on.)
There were other misses from the LW side, like this Zvi roundup.
Complicatedly, I think LW settled more firmly on "this is a problem" while there was a bunch of political positioning in March/early April, but my understanding is that (inter-)governmental health organizations were earlier, even if their governments didn't listen to them. My impression is that the general population wasn't very worried until basically a week before lockdowns, and the polls Yarrow uses to refute this aren't very convincing to me, because I just don't trust polls in general: it's very cheap to say "I'm worried", but expensive to do anything about it. ↩︎
<microrant>I'm baffled that this has gotten into parlance, it seems like a purely social fiction??? Another way of assigning status with almost no grounding in any kind of measurement?</microrant> ↩︎
Ah, I didn't just mean "who was early in talking about the COVID-19 pandemic", but a general history of the COVID-19 pandemic, similar to how people have written thousand-page books on the history of the Soviet Union. I agree that "who talked about the Soviet Union early on" isn't that interesting of a question.
Such a book would include who believed/said/did what when, and which actions had which effects, possibly counterfactuals? People have written books about the Soviet Union (lots of them, in fact); COVID-19 was arguably a bigger event, and I don't want it to fall into a memory-hole.
Meditating daily will change your life.
In which ways?
Crystal Healing — or the Origins of Expected Utility Maximizers (Alexander Gietelink Oldenziel/Kaarel/RP, 2023)