[Edit 2024-04-06]: Thank you, Nicky, for turning this post into a song. I think it's quite good. Now I can finally upvote this post. The lyrics can be found under the "The Worst Post on LessWrong" heading of this post.

Have you ever found yourself afraid of publishing something, in a way that seems unhelpful? If so, then this post is for you.

Quoting myself:

You don't want to push away readers by writing bad posts, and you do not want them to update towards you being dumb.

Even if somebody like that publishes something occasionally, they might hamper themselves by being overly selective about what they write. After all, the best way to get better at writing is to write a lot.

I think I managed to successfully work around this problem. In the beginning, I intentionally published something very bad on LessWrong. It wasn't optimized to be bad, but it was a post about a random low-quality thought I had.

It was so bad that the post from the alt account I used was deleted (at least, I can't find the post anymore). I only found out later, though (I am not sure whether this would have worked if it had been deleted immediately). After I had put out that post, almost all of my fear of publishing went away. I published something very bad, and yet nothing terrible happened. My internal state was something like, "I am fine. Nothing terrible happens when you post something bad. I think I can do this again. But better." After all, this first truly abysmal post set the baseline very, very low.

I did something similar on the Alignment Forum. Though that article was a lot better than my first, it definitely falls short of my unrealistic expectations at the time. That article is not very good, and not very important in itself. But having written it, I know I can actually write articles like that. There is no major obstacle that prevents me from writing more and doing it better.

So I encourage you to follow a similar strategy if you find yourself afraid of publishing. Don't optimize for making the content bad. I recommend setting a one-hour timer and committing to publish whatever you have after that hour.

For good measure, the rest of this post will be random garbage (generated only by me, not AI). If you want, take this as your baseline. Beat it, and write a post that is better than this:

The Worst Post on LessWrong

Can you eat a rock? The answer is yes. You just need to be smart about it. If you try to bite the rock, you will bite out your teeth. That is no good. Instead, you should grind up the rock into a very fine powder. Then you are going to get some strong acid and put the powder in that. Once the rock has been completely disintegrated, add some base to reach pH 7. Because we already used acid on the rock, we don't need to bother with digestion anymore. Just take a giant syringe, pull the rock water into it and inject it directly into your bloodstream.

I am hopping on a mountain, falling down because I am so juicy. Nothing ranks as highly as mustard on a banana. A mirror reflects things! Yay! When I turn in circles really fast, I get dizzy.

Here I am just hitting my keyboard randomly: lxa,ak.,0g0i08idra,gipdlidzosrcudrgdRGDRDrg,dprgdi.pa.r. Pretty nice, huh?

Cardboard is brown. It is TRUE! But what is TRUE cardboard? It's red, obviously. I am sorry, I am not good with colors; I eat bananas with mustard. It makes me cry. Not because the mustard is hot, but because it is so tasty. Too few people know about this amazing trick.

Making money is easy. Yes. No? Yes! NO!? AHHHHHHHHHHHHHHH.

Tell me about roasted peanuts. Well, they are roasted, and they are peanuts; what else is there to know? Is roastedness a property of you? Maybe. I am pretty crunchy.

Ugah agah ugah aga. Yes. I invented a new language.

13, 49, 38, 28, 969, 392, 0, 2, 4, are all numbers. If I had 2 bananas I would be more happy than having 0 bananas. But having 969 bananas would make me sad. They would rot and I would need to throw them away. Unless I sell them. CAPITALISM!

What is a hand? Well, it is something, but not anything. Also, I have one. Maybe more. Fingers are like bendy tubes that can wrap around stuff.

Pep pop pep pop. Aha. Another language. This one is about talking like a robot. It has only two words, so you can talk in binary. I guess just interpret it as x86 machine code. So you can only speak programs. Well, I guess you could speak something that is not in the instruction set.
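
If you want to actually decode it, here is a minimal sketch in shell, assuming "pep" means 0 and "pop" means 1 (I never fixed a mapping, and the sample utterance is made up):

speech="pop pep pep pop pep pep pep pep"    # hypothetical eight-word utterance
bits=$(echo "$speech" | sed 's/pop/1/g; s/pep/0/g; s/ //g')
printf 'byte: 0x%02x\n' "$((2#$bits))"      # prints 0x90, which happens to be the x86 NOP instruction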

I wanted to write 1000 words total for this article, but I am only at 600 words. Writing random garbage is so much harder than I thought. Also, it requires more creativity than I thought, and I can actually generate semi-interesting stuff like the above paragraph. Which means I failed to write complete garbage. NOOOOOOOOOOOOOOOOOOOOOOOOO.

I am sitting on a thing right now. I don't know the name of the thing in English. It is called Sessel in German, though. If you buy sticky notes, buy Post-it Super Sticky Notes if you want them not to fall off your whiteboard.

How the heck does tissue work? Maybe it is the capillary effect that makes it suck up that liquid. It is pretty funny that we are just cutting down trees to make this stuff. Should it not be possible to do this artificially? Why do you need to grow a whole tree for that?

When you eat something, you are actually destroying nanomachines. That is pretty cool. Plex pointed it out to me.

a b c. Three things, YES!

  @@
@    @ @@ @  @
 @  @
@    @ @@ @
 @@@@
@@  @@ @@
  @@

What is that? Can't you see? It's communism.

OK, I now have almost 800 words. I can do this. Go. Yes. Just hitting my keyboard randomly would be cheating, I think.

Take a melon, cut it in half, eat out the flesh with a spoon, and then wear the melon half as a hat. I guarantee that this will make you look really cool, and everybody will be envious of you.

What if somebody just randomly screamed every 10 seconds? That would be kind of annoying. Maybe somebody in this world actually has this problem. But you don't need to find them. You can just record yourself screaming, and then run this shell script:

while true; do
    mpv scream.mp3 &    # play the scream in the background, so long screams can overlap
    sleep 10            # wait 10 seconds before starting the next one
done

This would give you the experience of standing next to a person with such a condition. This is especially fun if the scream audio file is a lot longer than 10 seconds.
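
(A practical note if you actually run this: Ctrl+C stops the loop, but because each mpv is backgrounded with &, a scream that is already playing will keep going until it finishes. Something like killall mpv silences any stragglers.)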

8 comments

You, sir, are a true inspiration! I cannot yet claim to have a post with as much negative karma as this one, but it is the sum of my hopes and dreams that one day I might surpass it!

And indeed, if I may be so bold, I think there is much I might be able to improve upon here! "I am hopping on a mountain, falling down because I am so juicy" is clearly much too poetic, meaningful, and remarkable to be a part of truly the worst possible LessWrong post!

Maybe a post full of Lorem Ipsum could work, or perhaps a full reposting of A Pickle For the Knowing Ones. Both seem somehow lacking.

I do not yet have an answer as to what the worst possible post looks like. How does one even define it? I suppose there are multiple measures - most negative karma being one, lowest aggregate number of votes (and thus lowest level of engagement) may be another.

But both of these, in a way, would be a success - they would have achieved what their author set out to achieve. Is that not what it means to be successful? Is success not always constituted in the fulfilling of some intended objective? I say, the worst possible LessWrong post would have to have the very opposite effect desired by its writer, and elicit the very opposite response. Therefore, assuming some level of competency on the part of the author, it would have to be in some respect self-effacing. A system full of contradictions must hold together long enough to produce a blog post.

I can only dream, one day, of being that system.

For now, perhaps I should concentrate on a smaller, more attainable goal - the worst ever LessWrong comment, perhaps?

The goal of writing the worst LessWrong post is to set the standard by which you will measure yourself in the future. You want to use it as a tool to stop handicapping your own thought processes by constantly questioning yourself: "Is this really good enough?", "Should I write about this?", "Would anyone care?" Asking these questions is not necessarily a problem; in fact, they are probably good questions to consider. But in my experience, there is a self-deprecating way of asking them that is just demotivating, and which I think is better to avoid.

The point of this post is to argue that you should lower your standards and just push out some posts when you start out writing. Nothing makes you better at writing than writing a lot. Don't worry about the quality of your posts too much in the beginning. Putting out many posts is more important. And there is some merit in them being bad because once you start to measure yourself against your past self, it will be easy to see how you improved and count that as a success.

[-] Mir · 7mo

The thing people need to realize is that when somebody writes a bad post, it doesn't harm the readers (except insofar as you falsely advertised quality). If something is badly argued readers are immune. If something is persuasively argued, but wrong, readers that fall for it now have an opportunity to discover a hole in their epistemic defenses.

Mostly, people read arguments as proxies for making judgments about the quality of the author, or about whether they agree/disagree with them. Notice that these purposes are orthogonal to actually learning something.


Unrelatedly, what would you say are your top ideas/insights/writings as judged by the usefwlness you think they'd confer to you-a-year-or-two-ago? (Asking you this as a proxy for "give me your best insights" because I feel like the latter causes people to overweight legibility or novelty-relative-to-average-community-member or something.)

The thing people need to realize is that when somebody writes a bad post, it doesn't harm the readers (except insofar as you falsely advertised quality). If something is badly argued readers are immune. If something is persuasively argued, but wrong, readers that fall for it now have an opportunity to discover a hole in their epistemic defenses.

That breaks for AGIs. I hope we don't have our cutting-edge AI systems write posts ... oops.


My best insight is that you can think, because most of the failure comes from not thinking. It is so simple that it sounds dumb, but it really seems true to me. This applies to everything really, including alignment. You can just think about how to align an AGI. I feel like I am kind of good at thinking about alignment, and I think the main thing that made me better was simply starting to think about it. Then once you have done it a bit, you will get better. A lot better than all the people who always come up with excuses for why this would not work, like "I first need to learn X math topic" or "I first need to understand Infrabayesianism and functional decision theory, and really everything anyone has ever done, before I can start to think about it myself". I was making these kinds of excuses for years before I just tried, and I made basically no progress before I just tried.

Learning things is of course very good. But if you are not trying to do research, something is very wrong, I think. Arnold Schwarzenegger once said something about how he would exercise every single day. Even when he had basically no time because he was traveling, he would still do some push-ups. I recommend doing the same, but for thinking about AI alignment. Do it every day, at least a little bit.

[-] Mir · 7mo

My best insight is that you can think, because most of the failure comes from not thinking. … I was making these kinds of excuses for years… and I made basically no progress before I just tried.

I resonate with this. Cognitive psychology and the Sequences taught me to be extremely mistrustful of my own thoughts, and while I do think much of that was a necessary first step, I also think it's very easy to get stuck in that mindset because you've disowned the only tools that can save you. Non-cognition is very tempting if your primary motivation is to not be wrong—especially when that mindset is socially incentivised.

  1. Introspection is the most reliable source of knowledge we have, but it isn't publishable/sharable/legible, so very few people use it to its full potential. (point made a week ago)
  2. The popular evo-psych idea "self-deception for the purpose of other-deception" is largely a myth. We were never privy to our intentions in the first place, so there's nothing for self-deception to explain. Our "conscious mind" is us looking at ourselves from an outside perspective, and we may only infer our true intentions via memory traces we're lucky enough to catch a glimpse of. (two weeks ago)
  3. I stress introspection so much because it's ~futile to make novel object-level progress without a deep familiarity/understanding of the tools you use for the task.

Do it every day, at least a little bit.

This particular sentence I'm sorta out of phase with feeling-wise, however. I'm patiently trying to cultivate intrinsic motivation by… a large list of complicated subjective tricks most of which are variations on "minimise total motivational force spent on civil war between parts of myself." It's like politics, or antibiotic resistance—if I overreach in an attempt to eliminate a faction I'm still too weak to permanently defeat, it's likely to backfire.

The Light is more powerfwl, though Darkness is quicker, easier, more seductive.

This particular sentence I'm sorta out of phase with feeling-wise, however. I'm patiently trying to cultivate intrinsic motivation by… a large list of complicated subjective tricks most of which are variations on "minimise total motivational force spent on civil war between parts of myself." It's like politics, or antibiotic resistance—if I overreach in an attempt to eliminate a faction I'm still too weak to permanently defeat, it's likely to backfire.

I completely agree with this approach. It is just that starting is the hardest part, and if you do something every day, at least a little bit, you will make it a lot easier to start on command. That is one of the advantages. Normally, thinking about alignment for one minute does not cause an internal war, I expect. I think having this goal is good, and there is not necessarily a conflict with what you are talking about. I think it is best if both are combined.

The popular evo-psych idea "self-deception for the purpose of other-deception" is largely a myth.

It is very real in my experience. In hindsight, I have caught myself many times. Maybe I mean something different than you. Motivated early stopping would fall into this for me, when you are sort of unaware of, or suppressing awareness of, the fact that you are doing it. Which I think is the default. I am very sure I have observed my mind suppressing further thought once it stopped early with some ridiculous justification.

I stress introspection so much because it's ~futile to make novel object-level progress without a deep familiarity/understanding of the tools you use for the task.

I think this is wrong. I was very, very terrible at introspection just two years ago. Yet I did manage to learn how to make very good games without really introspecting, until years later, about why the things I did worked. Though I agree that I could probably have done better with introspection.

[-] Mir · 7mo

I was very, very terrible at introspection just two years ago. Yet I did manage to learn how to make very good games without really introspecting, until years later, about why the things I did worked.

More specifically, I mean progress with respect to some long-term goal like AI alignment, altruism, factory farming, etc. Here, I think most ways of thinking about the problem are wildly off-target because motivations get distorted by social incentives. Whereas goals in narrow games like "win at chess" or "solve a math problem" are less prone to this, so introspection is much less important.

Well, I am talking about creating games, not playing them, if that was unclear. I think creating them is significantly harder than playing them. It took over a thousand hours of practice to get good. I think AI alignment is a lot harder, but I think the same pattern applies to some extent. For example, asking the question "What will this project look like if it goes really well?" is a good idea. Why? Well, when John asked this question of a bunch of people, he got good results. I have not thought about why asking this question gets good results. But I am pretty sure I could understand it better, and that would likely be useful, compared to not understanding. But clearly, you can get benefits even when you don't understand.

Most of the time when you are applying a technique, you will just be applying the technique. You will normally not retrieve all of the knowledge of why the technique works before using it. And it works fine. The knowledge about why the technique works is mostly useful for refining the technique, is my guess. However, applying the refined technique does not require retrieving that knowledge. In fact, you might often forget the knowledge but not the refined technique, i.e. the procedural knowledge.