Several artists and professionals have come to Inkhaven to share their advice. They keep talking about form—even if you have a raw feeling or interest, you must channel it through one of the forms of art or journalism to make it into something people can appreciate.
I'm curious to understand why they care so much, and because it seems interesting, I'd like to explore form. As a first step, I have written down some of the types of blogpost that are written on LessWrong.
Concept-handles are the most common way people attempt to move the discourse forward: they help us notice phenomena and orient to them.
The thing about a Concept-Handle-Post is that, as long as the post gives you a good hook for the idea, the rest of the post can be pretty crazy. It can treat a niche position as though it's commonplace, it can have fiction in it, it can be crazily long such that people forget 90% of it, or it can be just a few paragraphs.
Some Examples
In rationality, it can be about individual rationality ("Reason as Memetic Immune Disorder", "Schelling Fences on Slippery Slopes") or group rationality ("The Costly Coordination Mechanism of Common Knowledge", "Anti-social Punishment", "The hostile telepaths problem").
It can also help us understand the laws of reasoning ("Local Validity as a Key to Sanity and Civilization", "Strong Evidence is Common") or improve our local culture ("Your Cheerful Price", "“PR” is corrosive; “reputation” is not.", "Orienting Toward Wizard Power").
This also applies in discussions of AGI. Naturally you can do this on the object level of how AGI works ("The Solomonoff Prior is Malign", "Alignment By Default", etc). You can also help us think about how to take responsibility for the world being okay ("Focus on the places where you feel shocked everyone's dropping the ball", "Don't die with dignity; instead play to your outs"), or focus on the general ability to do world-optimization ("Being the (Pareto) Best in the World", "Coordination as a Scarce Resource").
A great story. Or an educational parable. Its length is either "long", "very long", or "far too long".
Some Examples
One-shot fiction is typically either hard sci-fi ("The Company Man", "The Redaction Machine", "The Parable of Predict-O-Matic") or didactic ("The Bayesian Tyrant", "Hero Licensing").
It is also sometimes not one-shot ("Luna Lovegood and the Chamber of Secrets").
Someone has gathered lots of evidence and ideas together, and is putting together a small worldview, a breakthrough of sorts.
Some Examples
This is often about AI risk ("The case for ensuring that powerful AIs are controlled", "The case for aligning narrowly superhuman models", "Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research"), understanding LLMs ("Simulators", "The Rise of Parasitic AI"), or rationality ("Radical Probabilism").
Sometimes it's about novel biological technology ("Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible", "The Case for Extreme Vaccine Effectiveness").
These take subjects of interest and give you the 80/20. You get the core idea of the book, some key quotes, an attempt to pass the author's intellectual Turing test, and then engagement and disagreement.
There are so many examples! Here are a few: Going Infinite, Design Principles of Biological Circuits, The Secret Of Our Success, Unlocking the Emotional Brain, Governing the Commons.
Rationalists love to explain things, and their primer is your one-stop shop for learning a new subject. These are great explainers of relatively well-established science (e.g. "A voting theory primer for rationalists", "Introduction to abstract entropy").
After thinking about a subject for long enough, you come up with an ontology, or a framework, for fitting it all together. To explain it takes a bit of work and many examples!
Framework posts are effort-posts about a subject of interest to LessWrong, such as rationality ("Varieties Of Argumentative Experience", "Basics of Rationalist Discourse") or about AI ("My computational framework for the brain", "A Three-Layer Model of LLM Psychology", "Six Dimensions of Operational Adequacy in AGI Projects").
Everyone loves to talk about themselves, and rationalists especially love a fixed point. For example, "Rationalism before the Sequences" tells some of the history, and "The Rationalists of the 1950s (and before) also called themselves “Rationalists”" puts us in historical context.
This is a distinguished category of posts that have spent a while as the most upvoted post on LessWrong. For many years it was "Thoughts on the Singularity Institute (SI)", a post arguing against donating to Eliezer's AI org, and in recent years "Where I Agree and Disagree With Eliezer" has taken the top spot.
But whether it's "Contra Yudkowsky's Ideal Bayesian", "Contra Yudkowsky on AI Doom", "Challenges to Yudkowsky's Pronoun Reform Proposal", "I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness", "re: Yudkowsky on biological materials", "Contra Yudkowsky on Doom from Foom #2", "Noting an error in Inadequate Equilibria", "My Objections to 'We’re All Gonna Die with Eliezer Yudkowsky'", or even just "My AI Model Delta Compared To Yudkowsky", the genre is going strong!
Some are straightforward predictions ("What 2026 Looks Like"), some are warnings ("What failure looks like", "How AI Takeover Might Happen in 2 Years", "It Looks Like You're Trying To Take Over The World"). Can sometimes be confused with 'fiction'.
These are posts that are not on any topic of central interest to LessWrong, but the topic or question they ask is so interesting, and the writing sufficiently engaging, that they're loved anyway ("What it's like to dissect a cadaver", "There’s no such thing as a tree (phylogenetically)", "Toni Kurz and the Insanity of Climbing Mountains", "Recommendation: reports on the search for missing hiker Bill Ewasko").
Somehow, even though these aren't especially related to our frameworks for understanding the world (rationality, AI, etc.), I think about some of them more than any of those frameworks. I couldn't stop talking about the piece on Toni Kurz and the mountain climbers for ~2 years.
...these are not all the types of LessWrong post.
At dinner, Scott Alexander remarked that people need to stop writing in lists. Just before dinner, Dynomight led a session on making posts that are lists, and I had started making just such a post. I did not have time to change course!
I was relieved to find out that Scott meant the practice of making bulleted or numbered lists in place of prose, which I also agree is poor writing.