Out of curiosity, let's see how many points Scott Alexander would get for the last three months of 2025. Skipping the open threads and links and guest posts and highlights from comments and meetup announcements and contests and grant results...
Total: 43 points.
I didn't make a postmortem of the original Halfhaven, so maybe let's use this comment for a quick survey?
Please add a react (not vote) to this comment (by clicking the light-gray emoji in the bottom-right corner),
✔ - if you participated in the original Halfhaven and produced 30 posts during October and November 2025
✘ - if you participated in the original Halfhaven but failed to produce 30 posts
No specific suggestion, just a note that you should make sure that your very message does not become another public link to their identity. That would be ironic.
AI agents are sending unwelcome "thank you" emails
Oh, that was a funny story!
Also, there is a debate on Hacker News, and as usual, the people who don't understand what happened express the strongest opinions and get the most upvotes.
if you think any of the guys working for Elon or Trump are too "crazy" (code for: unpredictable, aggressive, and personalist; hated by a conformist, cowardly, and crumbling establishment).
Unfortunately, being unpredictable and aggressive and hated is not sufficient to produce good results.
The level of competence I associate with crazy people working for Elon or Trump is more like: "Tell them to find the woke programs that need to be purged for political reasons, and they bring you a bunch of chemical studies on trans-isomers, despite having all necessary information and state-of-the-art artificial intelligence at their disposal". Like, a high school student with a free version of ChatGPT would probably do a better job.
(I specifically mention their having the AI at their disposal to address a possible excuse: "well, they had to act quickly, and there were too many studies and not enough time".)
It was a joke. But trying to get "speech" involved in your product seems like an obvious idea if you want to get some constitutional protection. It just probably wouldn't work with literal food.
Maybe go more meta, and instead pay someone whose full-time job will be to find and interview people who want to work on AI Alignment, and do the paperwork (applying for other grants) for them.
It feels like the problem is that we only have two extreme outcomes: either Tom is not allowed to create his business, or Fred goes out of business.
While the situation looks like Fred created a lot of value, and then Tom improved it a little, so a fair outcome would be that Fred gets paid more and Tom gets paid less but nonzero... and this seems like the only outcome that definitely won't happen.
(Another part that feels unfair is that Fred took a greater risk by exploring a new option, but Tom already knew that Fred's business was profitable.)
Rats are particularly drawn to certain woo practices (jhanas and meditation, circling and authentic relating, psychedelics) while rejecting others (astrology, reiki, palm reading). What principles do you think determine which practices get adopted?
A quick approximate guess: It's about developing your "mental powers", which sounds attractive to the same people who are attracted by the idea of developing their mental power of rationality.
Meditation and circling are based on a promise that if you start thinking and/or talking differently, you will unlock some kind of mental superpowers. Psychedelics supposedly unlock mental superpowers by swallowing a pill. Wannabe rationalists are attracted to the idea of having mental superpowers.
Astrology is about studying stars; almost as boring as astronomy. Palm reading, again: the real power is out there, and you can only fatalistically study it. I am not familiar with reiki, but it seems to involve doing something with energies in your body, which sounds similar to exercise; boring.
(A similar perspective: With meditation and circling the power comes from you. With astrology and palm reading the power is in the stars and the lines. With psychedelics, the power comes from outside, but it boosts your brain. With reiki, the power comes from you, but it stays in the parts of your body outside the brain, and those are lower-status than the brain.)
Well, yes. The question is, how much exactly. I mean, what are the points even supposed to reflect?
The thing I am trying to capture is "value for the reader". As a reader of ACX, I prefer longer articles to shorter articles, assuming constant frequency. But I prefer two articles a week to one twice-as-long article.
Mathematically speaking, this means that the function "how many words for N points" should be growing, faster than linearly. But that still leaves many options. Intuitively, I chose 250n²+250n, as a quadratic (faster than linear) expression with backwards compatibility (results in 500 for n=1). If we extrapolate that further, it would be 5 points for 7.5k words, 6 points for 10.5k words, 7 points for 14k words, 8 points for 18k words, 9 points for 22.5k words, and 10 points for 27.5k words.
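For concreteness, the quadratic scheme can be sketched in a few lines of Python (my own illustration, not part of any official rules; `points_for_words` is a hypothetical helper that inverts the threshold function):

```python
# Words required to earn n points under the quadratic scheme: 250n^2 + 250n.
def words_for_points(n: int) -> int:
    return 250 * n * n + 250 * n

# Inverse: points awarded for a given word count
# (the largest n whose threshold the article reaches).
def points_for_words(words: int) -> int:
    n = 0
    while words_for_points(n + 1) <= words:
        n += 1
    return n
```

This reproduces the thresholds above: 500 words for 1 point, 10.5k for 6 points, 27.5k for 10 points.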
According to this extended scale, Scott would get 6+4+1+3+10+2+2+3+1+2+1+2+3+3+2+3+1+2= 51 points.
But that assumes that the particular quadratic function is the right one. If we chose an exponential function instead: 1 point for 500 words, 2 points for 1k words, 3 points for 2k words, 4 points for 4k words, 5 points for 8k words, and 6 points for 16k words, that would give slightly more points to articles below 4k, but fewer points above 10k, together 5+4+2+3+6+2+3+3+2+3+1+3+3+4+2+4+2+3 = 57 points.
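The doubling thresholds can be captured with a logarithm; again a sketch of my own, assuming the pattern continues past 16k words:

```python
import math

# Exponential scheme: 1 point at 500 words, each further point
# doubles the threshold (1k, 2k, 4k, 8k, 16k, ...).
def exp_points(words: int) -> int:
    if words < 500:
        return 0
    return 1 + math.floor(math.log2(words / 500))
```

So a 3k-word article gets 3 points here versus 3 points under the quadratic scheme, while a 16k-word article gets 6 instead of 7-8.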
Both functions seem to pass the smell test -- Scott should get more than 30 points, but not like an order of magnitude more (especially not after I have filtered out the links and contests and highlights). Getting about twice as much sounds about right. Still doesn't answer which function to choose; both seem okay precisely because they give similar results.
Another question is how often Halfhaven participants will produce articles longer than 5k words. I know I probably won't, which makes the scoring of articles over 5k words seem irrelevant. I am open to changing my mind if someone actually starts writing articles of such length, but at this moment it feels like if someone can write that kind of article repeatedly, they no longer need this project.
tl;dr -- the choice is arbitrary, and probably irrelevant for Halfhaven participants, so although I kinda agree with you, I am not going to change it now (but might change it in April, dunno)