*** Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character comment prepended with "Downvote note:" on what to improve. ***
Sorry, to be clear, this is not a valid comment guideline on LessWrong. The current moderation system allows authors to moderate comments (assuming they have the necessary amount of karma). It does not allow authors to change how people vote. I can imagine at some point maybe doing something here, but it seems dicey, and is not part of how LessWrong currently works.
Got it — apologies for bending the rules (I didn't consider that this might break one) in an attempt to operationalise the post.
Can I state a lighter version instead, where I encourage all standard voting behaviour, but append a request for downvote justification? I've replaced the Comment Guideline accordingly as a placeholder until I receive further clarification.
I.e.:
*** Voting Guideline: You should freely vote and react according to your views and LessWrong norms — I do not want to infringe upon this. ***
*** Comment Request: However, please allow me to make a request: if you do downvote this post and are willing to make that transparent, it would help me operationalise the recommendation in this post if you added a Reaction or a 30+ character comment, prepended with "Downvote note:", on what to improve. ***
You may be interested in a very similar discussion from several months ago: When you downvote, explain why.
I want to say something like: you are not owed attention on your post just because it is written with good logic. That's sort of harsh, but I do think that you have to earn the reader's trust. People downvote for all sorts of reasons; not all of them are because of some logical mistake you made. Sometimes it's just because the post is not relevant, or seems elementary, or isn't written well, or doesn't engage with previous work.
I can understand getting unexplained downvotes being demoralizing, but demanding people spend more of their own effort and time to engage with you is a losing proposition. You have to make it worth their time.
But, I'm feeling generous today and I'll try and write some of my thoughts anyway.
I found this post confusing to read, and had to go back and re-read the whole thing to understand what you were even saying. For example, one of the first sentences:
On this post I will intentionally try to illustrate how I would see my recommendation playing out:
And yet, I don't know what your recommendation even is yet. Take some time to first explain your recommendation and why I should care; then I'll know what you're talking about in this section.
There are similar sorts of problems all over the piece with assumptions that aren't justified, jumping around tonally between sections, and mixing up explaining the problem with your preferred solution. It's just not a well-written piece, or so I judged it.
Hopefully that helps!
Thank you for the feedback!
On the path to benevolence there's a whole lot of friction. I agree with most of what you said, and I think we can extract substantive value that builds on my post:
Demanding people spend more of their own effort and time to engage with you is a losing proposition. You have to make it worth their time.
I agree, but I feel that there is a distinct imbalance: a post can take hours of effort, and be cast aside with a 10-second vibe check and a 1-second "downvote click". I believe that the platform experience for both post authors and readers could be significantly improved by adding a second post-level signal that only takes an additional few seconds — this could be a Reaction like "Difficult to Parse" or a ~30-character tip like "Same ideas posted recently: [link]".
Given the existing author/reader time-investment imbalance, it feels fair to suggest adding this.
Take some time to first explain your recommendation and why I should care; then I'll know what you're talking about in this section.
This is a valid call-out — in fairness it was an imprecision on my part because I added the "Operationalising my recommendation" section in the first (2025/09/13) edit, and overlooked the fact that this meant it preceded my stating the recommendation. I've updated the post to state the recommendation upfront. [Meta note: This to me is the beauty of rationality and the LessWrong platform — we can co-create great logical works. I hope this doesn't look too much like "relying on the reader to proof-read" in lieu of https://www.lesswrong.com/posts/nsCwdYJEpmW5Hw5Xm/lesswrong-is-providing-feedback-and-proofreading-on-drafts ]
There are similar sorts of problems all over the piece with assumptions that aren't justified, jumping around tonally between sections, and mixing up explaining the problem with your preferred solution.
This connects to the uncertainty I relayed to @Richard_Kennaway:
I really enjoy Scott Alexander's writing and, while he is clearly a far more distinguished and capable writer than I am, I feel he is a good role model: someone who uses not just rationality but also storytelling prose to relay his point. That's effectively what I hope to accomplish — but I can only really get there if I get feedback on my writing.
I have three instances of posts in this style where I do successfully have some degree of positive feedback: [1], [2], [3]
At the same time, this is a red flag to me:
It's just not a well-written piece, or so I judged it.
In [rare] cases where I do successfully compel someone to fully read and engage with my post, I have a strong duty to ensure that they enjoy my writing and find insightful value in it.
The last thing I want to be doing is to be actually wasting someone's time.
I agree, but I feel that there is a distinct imbalance: a post can take hours of effort, and be cast aside with a 10-second vibe check and a 1-second "downvote click".
You don't get points for effort. Just for value.
One way to think of it is like you are selling some food in a market. Your potential buyers don't care whether the food took you 7 hours or 7 minutes to make; they care how good it tastes and how expensive it is. The equivalent for something like an essay is how useful/insightful/interesting your ideas are, and how difficult/annoying/time-consuming it is to read.
You can decrease the costs (shorter, easy-to-follow, humor), but eventually you can't decrease them any more and your only option is to increase the value. And well, increasing the value can be hard.
I wholeheartedly agree with you.
There is something else going on here though. As I commented on this post, which also (in my view) fell prey to the phenomenon I am describing:
It’s complex enough for me to make the associations I’ve made and distill them into a narrative that makes sense to me. I can’t one-shot a narrative that lands broadly… but until I discover something that I’m comfortable falsifies my hypothesis, I’m going to keep trying different narratives to gather more feedback: with the goal of either falsifying my hypothesis or broadly convincing others that it is in fact viable.
To follow your analogy: I'm not asking that people purchase my sandwiches. I'm just asking that people clarify if they need them heated up and sliced in half, and don't just tell everyone else in the market that my sandwiches suck.
This directly aligns with a plea I express in the current post:
- The strongest counterargument is that I should just write my post "like an automaton"[1] (I add a footnote clarifying what I mean by this), instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.
- The problem with this is that writing like that isn't fun for me and doesn't come as naturally. In essence it would be a barrier to me contributing anything at all. I view that as a shame, because I do believe that all of my logic is robustly defensible and wholly laid out within the post.
I believe that there is value in my ideas, and I'm not that far off repositioning them in a way that will land more broadly. I just need light, constructive feedback to more closely align our maps.
However, in the absence of this light, constructive feedback on LessWrong, I'm quite forcefully cast aside and constrained to other avenues. Epistemic status: I attend a weekly rationality meetup in Los Angeles, I attend AI Safety and AI Alignment Research meet-ups in Los Angeles and San Francisco, and I work directly on and with frontier AI solutions.
This is [attempted] use of hyperbole for humour — instead of "like an automaton", what I mean precisely is that a common writing style on LW is to provide a numbered/bulleted list of principles.
Sometimes, a post or comment seems so far from epistemic virtue as to be not worth spending effort describing all the problems. I mutter “not even wrong”, downvote, and move on.
I have not voted either way on the current post.
Thank you for providing your evaluative criteria.
To me you hit on a precise, valid downvote signal: "this post is effortful for me to falsify". It would help writers like me to receive that as a precise, labelled signal to optimise against.
The disconnect, I guess, is that to me all of my logic is robustly defensible and wholly laid out within the post. That's precisely why I'm so keen on someone, anyone, being able to precisely state any logical inconsistency or area that lacks clarity.
If they were to do so, then I could expand on the area where our world-views/maps [https://www.lesswrong.com/w/map-and-territory] are too distinct, in pursuit of correcting one of our maps. To me this is the essence of rationality, and it's jarring that I'm not able to get it on this platform.
Since I have to defer to an LLM for this: ChatGPT5 Pro gives me the following checklist [points 1 through 5, all other language is my own] to avoid being "not even wrong"; I've added my view on how strongly I'm doing against each item:
Update (2025/09/13): Clarified Comment Guideline notice. Added "Operationalising my recommendation" section. Added "What this will look like if my criticism is valid" section. Added Appendix with snapshot of conversation thread.
Update (2025/09/13): Replaced Comment Guideline with a Voting Guideline and Comment Request to be in accordance with LessWrong rules. Added "What this post is, and why you should care" section up front.
Update (2025/09/13 3PM PST): Minor grammatical changes for flow. Added context to the first comment exchange [Footnote 8]. Added a prediction to [Footnote 7] — I think no more updates are required. Added "Notice to new/returning readers".
*** Notice to new/returning readers: This post has undergone a few updates (as above) but I believe is now in its final form. You are arriving at a good time — the storm has dissipated and the post should be more accessible than earlier iterations. ***
*** Voting Guideline: You should freely vote and react according to your views and LessWrong norms — I do not want to infringe upon this. ***
*** Comment Request: However, please allow me to make a request: if you do downvote this post and are willing to make that transparent, it would help me operationalise the recommendation in this post if you added a Reaction or a 30+ character comment, prepended with "Downvote note:", on what to improve. ***
On this post[3] I will intentionally try to illustrate how I would see my recommendation playing out:
Automatic Rate Limiting on LessWrong suggests a stable grid dynamic:
| | Low-quality | High-quality |
| --- | --- | --- |
| Consensus posts/comments | Usually somewhat upvoted, or heavily upvoted when they're funny or particularly emotionally resonant | Usually pretty upvoted |
| Contrarian posts/comments | Usually somewhat downvoted, or heavily downvoted if they're rude | Usually heavily upvoted |
The crux of my criticism aligns with this grid: we agree that high-quality contrarian posts/comments are the most valuable — they are usually heavily upvoted. It follows that we should devise mechanisms that provide an actionable route for low-quality contrarian posts/comments to become high-quality, improving the platform as a whole.[4]
I love the LessWrong platform. I think that it attracts an incredibly intelligent, well-read audience with a diverse range of perspectives.
I feel that the technical implementation of the site is exceptional — a daily, curated news cycle, with emergent high-quality posts for the homepage; posts are easily readable, and conversational threads are natural, well-moderated, and easy to parse; the Reaction system feels nicely implemented in that it is available to opt into, but isn’t overpowering for folks who just want textual discourse.
As someone who lurks on and engages with a variety of posts on the platform, I think that my own writing is decently LessWrong-y — I lay out my thoughts step-by-step and avoid logical inconsistencies or incongruously big leaps. I’m decently well-versed in the rationality literature and cite core works that my ideas build upon.
Sometimes I spend hours putting together a post that I’m proud of, but then receive no feedback besides a couple of downvotes.
This is incredibly frustrating for two reasons:
I’m decently skilled at channelling my attention and good at tuning out noise, but I’d be lying if I were to say that I don’t find it off-putting when this happens.
I have written about a concept of “the tension between truth-seeking and societal harmony”. Authentically expressing what you feel to be true creates tension if it doesn’t match societal norms. This is a shame because I’m very pro-free-speech: I think that the world is made a better place by allowing more people to express their ideas about what is true.[5]
This is not being enabled on LessWrong: a downvoting agent can effectively silence my voice just because they disagree with me. On individual comments, “overall karma” and “agreement karma” are distinct, but for a post only a single voting metric exists.
Is this not directly opposed to LessWrong’s central mission?
LessWrong is an online forum/community that was founded with the purpose of perfecting the art of human[6] rationality.
Now we get to some self-awareness.
Jeff Bezos, a visionary free-thinker, advanced through Amazon a cultural model with 16 core Leadership Principles (LPs) — but also some principles not codified as LPs. One of these is Amazon’s doc-writing culture, and another is the idea of embracing Being Peculiar.
From the "Being Peculiar" article linked:
"What kind of person owns being peculiar?" My answer, someone who is more concerned about their own legacy, than what others think of him. While this is admirable, it is also quite peculiar by American society's standards.
This is hitting at the core of my argument — the only way you can be a visionary is to express a bunch of things that are, by definition, not the societal norm. Put another way: people with societally normative viewpoints will disagree with you.
I’m not equating myself to Jeff Bezos. What I’m saying is that if I were Jeff Bezos, perhaps my really insightful ideas would be buried by the current site setup.
This dynamic is especially true when someone expresses a lot of confidence in their non-normative ideas. This makes sense, because it pattern-matches as delusion, which is itself off-putting. You could argue that maybe we should wait for someone to acquire a lot of status, e.g. become a billionaire, and only then give them a platform for confidently expressing non-normative ideas. I’m not sure that I agree that this would be the best form of society though.
I think the site could be improved by implementing a system where negative votes on a post require the voter to cite their reason for negative voting — using either the Reaction system or a brief note of 30+ characters.
These reasons should be held to account: people should be able to see and downvote them if they are flawed.
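To make this concrete, here is a minimal sketch of the validation rule I have in mind. It is purely illustrative (the names PostVote, DownvoteReason, and validatePostVote are mine, not anything from the actual LessWrong codebase), but it captures the two properties above: a post-level downvote must carry either a Reaction or a 30+ character note, and that reason is stored with the vote so it can itself be seen and voted on.

```typescript
// Hypothetical sketch only — these names do not exist in the real LessWrong codebase.

type DownvoteReason =
  | { kind: "reaction"; reaction: string }   // e.g. "Difficult to Parse"
  | { kind: "note"; text: string };          // e.g. "Downvote note: same ideas posted recently"

interface PostVote {
  postId: string;
  voterId: string;
  direction: "up" | "down";
  reason?: DownvoteReason;                   // required when direction is "down"
}

const MIN_NOTE_LENGTH = 30;

// Returns an error message if the vote should be rejected, otherwise null.
function validatePostVote(vote: PostVote): string | null {
  if (vote.direction === "up") {
    return null;                             // upvotes behave exactly as they do today
  }
  if (!vote.reason) {
    return "Downvotes require a Reaction or a 30+ character note.";
  }
  if (vote.reason.kind === "note" && vote.reason.text.trim().length < MIN_NOTE_LENGTH) {
    return `Downvote notes must be at least ${MIN_NOTE_LENGTH} characters.`;
  }
  return null;                               // accept; the reason is stored with the vote
}                                            // and displayed so readers can vote on it too
```

The design choice that matters is in the final comment: the reason is attached to the vote as a first-class object, which is what would allow it to be held to account in the way I describe above.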
I anticipate that this post will be a straight shot to meta-irony: I have confidently made a non-normative claim, so I expect a couple of negative post votes, absent any material feedback.
In this post I have presented a well-reasoned contrarian viewpoint:
This post and my comments are significantly downvoted, nobody has engaged with these four core points, and the damage is done: my contrarian view is suppressed, my future posts and comments hold less weight, and I'm disillusioned by the capacity of folks to engage with contrarian viewpoints in good faith.[7]
Snapshot of first comment exchange, taken 2025/09/13
Post at -15 overall karma [10 votes] —
[Presumed][8] Downvoter (+4 overall karma [4 votes], +3 agreement karma [3 votes]):
You may be interested in a very similar discussion from several months ago: When you downvote, explain why.
Contrarian (-7 overall karma [3 votes], -10 agreement karma [3 votes]):
Are you implying that this post ("Visionary arrogance and a criticism of LessWrong voting") should be downvoted because it reaches the same conclusion as a 7-month-old post which lacks half of the framing ("visionary arrogance") that I use to describe voting behaviour motivations?
If so that sounds logically flawed to me, and so I both disagree and have downvoted you.
If you were not implying that and simply offering some additional context for me to refer to (the discussion in the comments is valuable), then I apologise and will revert my downvoting.
This is use of hyperbole for [attempted] comic effect.
This is also use of hyperbole for humour — instead of "like an automaton", what I mean precisely is that a common writing style on LW is to provide a numbered/bulleted list of principles.
I could alternatively use the word platform.
My "criticism" is that this is not happening currently,
But then allowing society to diligently give feedback, from simple disagreement up to structured punishment.
"We say "human" rationality, because we're most interested in how us humans can perform best given how our brains work (as opposed to the general rationality that'd apply to AIs and aliens too)."
This is the case as of 2025/09/13: "-15 karma from 10 votes on the post, -7 karma from 3 votes and -10 agreement karma from 3 votes on my first comment". In response I have made 4 relatively small updates to the post, called out in the first line, to highlight the irony: I think that this validates my criticism and strengthens the post.
Update (2025/09/13, 3pm PST): The post is currently at -17 karma from 12 votes. It has 17 comments (including my own).
I've made a few edits — not wholly transformational, but admittedly ones that make the post easier for a reader to parse — since posting. These edits have come about directly as a result of discourse in the comments.
My moderately strongly held view is that it is now in its final form and beautifully captures everything that I set out to do.
I'd be interested in anyone's viewpoint upon a full re-read, if indeed they are willing to dedicate the time — I totally get it if not (it's the weekend)!
I'm not sure that this final form will be effectively surfaced to any new readers on the site, since the karma is so low. I have two ongoing Private Conversations in which I am providing this same update, and I hope I may be able to at least solicit final-form feedback from them.
My vision: perhaps now the post is high-quality and will accrue positive karma. That is to say that I'm calling the karmic bottom at -17.
I explicitly moderated my response by saying that I apologise and will revert my criticism of my interpretation of their comment if my interpretation is incorrect.
When I received the comment, the post vote tally was at "-3", and I was presented with a Bayesian question to evaluate: P(Comment was provided in accordance with my explicit request | I made an explicit request and the post vote tally is at "-3")
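Spelled out (the expansion below is just Bayes' rule; I never assigned actual numbers, so any probabilities here would be informal gut estimates rather than computed quantities), writing E for "the comment was provided in accordance with my explicit request" and D for the observed data "I made an explicit request and the post vote tally is at -3":

$$P(E \mid D) = \frac{P(D \mid E)\,P(E)}{P(D \mid E)\,P(E) + P(D \mid \neg E)\,P(\neg E)}$$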
I admit that, to better operationalise this, I could have clarified for commenters following my Comment Guideline that they should explicitly prepend their comment with "Downvote note:" — I have made this edit to the post.
Update (2025/09/13): The Presumed Downvoter followed up to clarify that they were not implying a reason to downvote this post, but they acknowledge that me reaching that conclusion was reasonable. Per the terms that I explicitly communicated, I've flipped my votes to approve and agree with their comment. I leave the rest of my comment unchanged, as a record of how I would engage with someone who was providing a reason to downvote that I disagree with.
In practice, I wouldn't transparently say "If so that sounds logically flawed to me, and so I both disagree and have downvoted you," which is unnecessarily confrontational — I would just do it silently. I only state it transparently here as part of operationalising my vision for deriving more signal from downvotes.