Visionary arrogance and a criticism of LessWrong voting

by soycarts · 12th Sep 2025 · 7 min read · 26 comments

thenoviceoof · 2mo

You may be interested in a very similar discussion from several months ago: When you downvote, explain why.

[+] soycarts · 2mo (collapsed)
Drake Morrison · 2mo

I want to say something like: you are not owed attention on your post just because it is written with good logic. That's sort of harsh, but I do think that you have to earn the reader's trust. People downvote for all sorts of reasons; not all of them are because of some logical mistake you made. Sometimes it's just because the post is not relevant, or seems elementary, or isn't written well, or doesn't engage with previous work.

I can understand getting unexplained downvotes being demoralizing, but demanding people spend more of their own effort and time to engage with you is a losing proposition. You have to make it worth their time.

But, I'm feeling generous today and I'll try and write some of my thoughts anyway.

I found this post confusing to read, and had to go back and re-read the whole thing after reading it the first time to understand what you were even saying. For example one of the first sentences: 

On this post I will intentionally try to illustrate how I would see my recommendation playing out:
 

And yet, I don't know what your recommendation even is yet. Take some time to explain your recommendation, and why I should care first, then I know what you're talking about in this section. 

There are similar sorts of problems all over the piece with assumptions that aren't justified, jumping around tonally between sections, and mixing up explaining the problem with your preferred solution. It's just not a well-written piece, or so I judged it.

Hopefully that helps!

soycarts · 2mo

Thank you for the feedback!

On the path to benevolence there's a whole lot of friction. I agree with most of what you said, and I think we can extract substantive value that builds on my post:

Demanding people spend more of their own effort and time to engage with you is a losing proposition. You have to make it worth their time.

I agree, but I feel that there is a distinct imbalance where a post can take hours of effort, and be cast aside with a 10-second vibe check and 1 second "downvote click". I believe that the platform experience for both post authors and readers could be significantly improved by adding a second post-level signal that only takes an additional few seconds — this could be a React like "Difficult to Parse" or a ~30-character tip like "Same ideas posted recently: [link]".

Given the existing author/reader time-investment imbalance, it feels fair to suggest adding this.

Take some time to explain your recommendation, and why I should care first, then I know what you're talking about in this section. 

This is a valid call-out — in fairness it was an imprecision on my part because I added the "Operationalising my recommendation" section in the first (2025/09/13) edit, and overlooked the fact that this meant it preceded my stating the recommendation. I've updated the post to state the recommendation upfront. [Meta note: This to me is the beauty of rationality and the LessWrong platform — we can co-create great logical works. I hope this doesn't look too much like "relying on the reader to proof-read" in lieu of https://www.lesswrong.com/posts/nsCwdYJEpmW5Hw5Xm/lesswrong-is-providing-feedback-and-proofreading-on-drafts ]

There are similar sorts of problems all over the piece with assumptions that aren't justified, jumping around tonally between sections, and mixing up explaining the problem with your preferred solution.

This connects to the uncertainty I relayed to @Richard_Kennaway:

  1. Uncertainty: I'm uncertain about whether my writing style is suitable for this platform, or if I should defer to in-person interactions and maybe video content to express my ideas instead.
  2. The strongest counterargument [to my post] is that I should just write my post like an automaton,[this use of hyperbole for humour — instead of "like an automaton", precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles] instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.

I really enjoy Scott Alexander's writing and, while clearly he's a far more distinguished and capable writer than me, I feel he is a good role-model as someone who uses rationality but also storytelling prose to try to relay their point. That's effectively what I hope to accomplish — but I could only really get there if I get feedback on my writing.

I have three instances of posts in this style where I do successfully have some degree of positive feedback: [1], [2], [3]

At the same time, this is a red flag to me:

It's just not a well-written piece, or so I judged it.

In [rare] cases where I do successfully compel someone to fully read and engage with my post, I have a huge duty to the outcome that they enjoy and find insightful value in my writing.

The last thing I want to be doing is to be actually wasting someone's time.

Drake Morrison · 2mo

I agree, but I feel that there is a distinct imbalance where a post can take hours of effort, and be cast aside with a 10-second vibe check and 1 second "downvote click".

 

You don't get points for effort. Just for value. 

One way to think of it is like you are selling some food in a market. Your potential buyers don't care if the food took you 7 hours or 7 minutes to make; they care how good it tastes, and how expensive it is. The equivalent for something like an essay is how useful/insightful/interesting your ideas are, and how difficult/annoying/time-consuming it is to read.

You can decrease the costs (shorter, easy-to-follow, humor), but eventually you can't decrease them any more and your only option is to increase the value. And well, increasing the value can be hard.

soycarts · 2mo

I wholeheartedly agree with you.

There is something else going on here though. As I commented on this post, which also (in my view) fell prey to the phenomenon I am describing:

It’s complex enough for me to make the associations I’ve made and distill them into a narrative that makes sense to me. I can’t one-shot a narrative that lands broadly… but until I discover something that I’m comfortable falsifies my hypothesis, I’m going to keep trying different narratives to gather more feedback: with the goal of either falsifying my hypothesis or broadly convincing others that it is in fact viable.

To follow your analogy: I'm not asking that people purchase my sandwiches. I'm just asking that people clarify if they need them heated up and sliced in half, and don't just tell everyone else in the market that my sandwiches suck.

This directly aligns with a plea I express in the current post:

  1. The strongest counterargument is that I should just write my post like an automaton[1] (I add a footnote clarifying the hyperbole), instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.
  2. The problem with this is that writing like that isn't fun to me and doesn't come as naturally. In essence it would be a barrier to me contributing anything at all. I view that as a shame, because I do believe that all of my logic is robustly defensible and wholly laid out within the post.

I believe that there is value in my ideas, and I'm not that far off repositioning them in a way that will land more broadly. I just need light, constructive feedback to more closely align our maps.

However in absence of this light, constructive feedback on LessWrong, I'm quite forcefully cast aside and constrained to other avenues.

  1. ^

    This is [attempted] use of hyperbole for humour — instead of "like an automaton", precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles.

Richard_Kennaway · 2mo

Sometimes, a post or comment seems so far from epistemic virtue as to be not worth spending effort describing all the problems. I mutter “not even wrong”, downvote, and move on.

I have not voted either way on the current post.

soycarts · 2mo

Thank you for providing your evaluative criteria.

To me you hit on a precise, valid downvote signal: "this post is effortful for me to falsify". That would be helpful to writers like me to receive as a precise, labelled signal in order to optimise.

The disconnect I guess is that to me, all of my logic is robustly defensible and wholly laid out within the post. That's precisely why I'm so keen on someone, anyone, being able to precisely state any logical inconsistency or area that lacks clarity.

If they were to do so, then I could expand on the area where our world-views/maps [https://www.lesswrong.com/w/map-and-territory] are too distinct, in pursuit of correcting one of our maps. To me this is the essence of rationality, and it's jarring that I'm not able to get it on this platform.

Since I have to defer to an LLM for this: ChatGPT5 Pro gives me the following checklist [points 1 through 5, all other language is my own] to avoid being "not even wrong", I've added my view on how strongly I'm doing against each item:

  1. State one main claim in plain language.
    1. Strongly achieved: "we should devise mechanisms that provide an actionable route for low-quality contrarian posts/comments to become high-quality to improve the platform as a whole."
  2. Define key terms (what exactly do you mean by X?).
    1. Moderately achieved — I could have formatted this differently, instead of containing it within the prose:
    2. Contrarian — surely is a standard term, I illustrated it as "Being Peculiar" or exhibiting "visionary arrogance"
    3. "High quality content" — stated as "[content that is] usually heavily upvoted [on the platform]"
    4. "devise mechanisms" — stated both as "implementing a system where negative votes on a post require the voter to cite their reason for negative voting — using either the Reaction system or a brief note of 30+ characters." and self-referentially on this post as I describe the "comment following my guideline" → "me responding"  → "readers casting their vote" dynamic
    5. "the platform as a whole" — described LessWrong, and its central mission
  3. Show your reasoning chain: premises → inference → conclusion.
    1. Strongly achieved:
    2. Premises: Spend hours putting together a post that contains a contrarian view that I’m proud of, but then receive no feedback besides a couple of downvotes.
    3. Inference: Contrarian view is too loosely dismissed on the platform "The only way you can be a visionary is to express a bunch of things that are, by definition, not the societal norm... [but] people with societally normative viewpoints will disagree with you."
    4. Conclusion: "I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback." — this has extended to -16 karma across 11 votes, and still nobody has engaged to offer a logical inconsistency.
  4. Cite evidence and say what would change your mind.
    1. Strongly achieved:
    2. Using my great wit,[this is use of hyperbole for humour] I self-referentially operationalised the post to illustrate my point. What would change my mind is if people actually upvoted me.
  5. Quantify uncertainty (even roughly) and address the strongest counterargument.
    1. Weakly achieved — I guess I was leaving this open for audience participation.
    2. Uncertainty: I'm uncertain about whether my writing style is suitable for this platform, or if I should defer to in-person interactions and maybe video content to express my ideas instead.
    3. The strongest counterargument is that I should just write my post like an automaton,[this is also use of hyperbole for humour — instead of "like an automaton", precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles] instead of infusing the humour and illustrative devices that come naturally to me.
    4. The problem with this is that writing like that isn't fun to me and doesn't come as naturally. In essence it would be a barrier to me contributing anything at all. I view that as a shame, because I do believe my post wholly consists of robust logic and satisfies this "avoid being 'not even wrong'" checklist. If I had provided this checklist at the top of my post, would it have made my post easier to parse and thus well-received by the community? Or am I still missing something?
the gears to ascension · 20d

I agree that this is a problem. I don't think the best way to fix it is to either change culture or change the voting system. There is probably a change to the site that would help with it. The trouble is that when something has a lot of hard-to-interpret noise in it, as is usually the case with both crackpot and crackpot-flavor-but-actually-insightful expert rambling, it's hard to spend the time to figure out if the details resolve one way or the other. Also, like, experts can output crackpottery on their own field of expertise sometimes (I didn't have anyone particular in mind besides Yann; I asked Sonnet 4.5, who suggested Linus Pauling on vitamin C, Lord Kelvin on the age of the Earth, Fred Hoyle on the steady-state universe).

Like, the whole reason we have the scientific standards we do is that even if one is an expert in a field who has made previous verified breakthroughs, it's really easy to have a brilliant, wrong idea. Maybe the value here is in making it easy to tell whether other people will find your post easy to follow? probably the primary thing I'd suggest would be trying to organize the post progressive-jpeg-style: try to fit as much as possible as early as possible, so that it becomes clear quickly why your post is relevant-or-not for a given reader. also just, try to compress as much as you can.

of course, these are annoyingly high standards. your grid dynamic thing seems like a thing I've seen happen. it's probably at least some of why I feel motivated to comment on low-upvote wacky posts. it's a bit of a chore to do well, though, and probably the primary reason I do it is procrastination.

if cultural things are viable, probably a good one would be people being very willing to put reacts when they downvote, yeah.

(I didn't read your post in full because I found it to be taking longer to parse than I felt like spending, fwiw. I'm responding to a skim.)

soycarts · 20d

Thank you for sharing your expert insight!

probably the primary thing I'd suggest would be trying to organize the post progressive-jpeg-style: try to fit as much as possible as early as possible, so that it becomes clear quickly why your post is relevant-or-not for a given reader. also just, try to compress as much as you can.

This is a fair point and in some cases it's not too much additional cognitive load to structure things this way. I have noticed though that it can be "...complex enough for me to make the associations I’ve made and distill them into a narrative that makes sense to me. I can’t one-shot a narrative that lands broadly". Other times the fun and the motivation in writing is from crafting the narrative creatively. If narratives have to follow line by line then we wouldn't get things like Infinite Jest.

A low-cost idea I had that could help: folks who get their post or comment downvoted could receive a message linking back to the New User's Guide to LessWrong but mainly up-front highlighting that these contra-contrarian forces exist, and "If you've been downvoted and/or rate-limited, don't take it too hard. LessWrong has fairly particular standards. My recommendation is to read some of the advice at the end here and try again."[1]

I've spoken with multiple smart rationalist people in person who have described being discouraged from writing on LessWrong because of echo chamber effects / imbalanced curation.

  1. ^

    https://www.lesswrong.com/posts/hHyYph9CcYfdnoC5j/automatic-rate-limiting-on-lesswrong 

habryka · 2mo

*** Comment Guideline: If you downvote this post, please also add a Reaction or a 30+ character comment prepended with "Downvote note:" on what to improve. ***

Sorry, to be clear, this is not a valid comment guideline on LessWrong. The current moderation system allows authors to moderate comments (assuming they have the necessary amount of karma). It does not allow authors to change how people vote. I can imagine at some point maybe doing something here, but it seems dicey, and is not part of how LessWrong currently works.

soycarts · 2mo

Got it — apologies for bending these rules (I didn't consider that this may break a rule) as an attempt to operationalise the post.

Can I state a lighter version instead, where I encourage all standard voting behaviour, but append a request for downvote justification? I've replaced the Comment Guideline accordingly as a placeholder until I receive further clarification.

I.e:

*** Voting Guideline: You should freely vote and react according to your views and LessWrong norms — I do not want to infringe upon this. ***

*** Comment Request: However, please allow me to make a request: if you do downvote this post and are willing to make that transparent, it would help me to operationalise the recommendation I put across in this post if you add a Reaction or a 30+ character comment prepended with "Downvote note:" on what to improve. ***

habryka · 2mo

Definitely! Requests are totally fine!


Posted (2025/09/12 3PM PST)

Update (2025/09/13 3AM PST): Clarified Comment Guideline notice. Added "Operationalising my recommendation" section. Added "What this will look like if my criticism is valid" section. Added Appendix with snapshot of conversation thread.

Update (2025/09/13 10AM PST): Replaced Comment Guideline with a Voting Guideline and Comment Request to be in accordance with LessWrong rules. Added "What this post is, and why you should care" section up front.

Final Update (2025/09/13 3PM PST): Minor grammatical changes for flow. Added context to the first comment exchange [Footnote 8]. Added a prediction to [Footnote 7] — I think no more updates are required. Added "Notice to new/returning readers".


*** Notice to new/returning readers: This post has undergone a few updates (as above) but I believe is now in its final form. You are arriving at a good time — the storm has dissipated and the post should be more accessible than earlier iterations. ***

*** Voting Guideline: You should freely vote and react according to your views and LessWrong norms — I do not want to infringe upon this. ***

*** Comment Request: However, please allow me to make a request: if you do downvote this post and are willing to make that transparent, it would help me to operationalise the recommendation I put across in this post if you add a Reaction or a 30+ character comment prepended with "Downvote note:" on what to improve. ***


What this post is, and why you should care

  1. A recommendation / feature request for the LessWrong platform
    1. I state "we should devise mechanisms that provide an actionable route for low-quality contrarian posts/comments to become high-quality to improve the platform as a whole."
      1. I designed the post to self-referentially portray my recommendation in action.
  2. Key terms used
    1. Contrarian — I feel this is a standard term, I state it as someone using a "non-normative claim" and illustrate it as "Being Peculiar" or exhibiting "visionary arrogance"
    2. "High quality content" — I state as "[content that is] usually heavily upvoted [on the platform]"
    3. "Devise mechanisms" — I state as "implementing a system where negative votes on a post require the voter to cite their reason for negative voting — using either the Reaction system or a brief note of 30+ characters."
      1. Self-referential version: I describe a "comment according to my Comment Request" → "I respond"  → "readers cast their vote" dynamic.
    4. "The platform as a whole" — I describe LessWrong, and its central mission
      1. Self-referential version: This post is my platform.
  3. Reasoning chain
    1. Premises: Contrarian authors spend hours putting together a post that contains a view that they may feel is well-reasoned, but then receive no feedback besides a couple of downvotes.
      1. Self-referential version: I have determined a way to operationalise my recommendation upfront. I have suggested an opportunity to improve the LessWrong team's model of supporting contrarian views with stability.
    2. Inference: Contrarian views are too loosely dismissed on the platform "The only way you can be a visionary is to express a bunch of things that are, by definition, not the societal norm... [but] people with societally normative viewpoints will disagree with you."
      1. Self-referential version: Unfortunately, there are mechanisms that will suppress my contrarian views.
    3. Conclusion: "I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback."
      1. Self-referential version: I have shown how a person confidently expressing non-normative ideas [me] is easy to dismiss, despite this being a necessary condition of being a visionary free-thinker.
  4. Why I'm right, and what would change my mind
    1. Using my great wit,[1] I self-referentially operationalised this post to illustrate my point.
    2. What would change my mind is if people actually upvoted me.
  5. What I'm uncertain about, and a steel-man counterargument
    1. I'm uncertain about whether my writing style is suitable for this platform, or if I should defer to in-person interactions and maybe video content to express my ideas instead.
    2. The strongest counterargument is that I should just write my post like an automaton,[2] instead of infusing the [attempts at] humour and illustrative devices that come naturally to me.
    3. The problem with this is that writing like that isn't fun to me and doesn't come as naturally. In essence it would be a barrier to me contributing anything at all. I view that as a shame, because I do believe that all of my logic is robustly defensible and wholly laid out within the post.

Operationalising my recommendation

On this post[3] I will intentionally try to illustrate how I would see my recommendation playing out:

  1. A Contrarian post is made and receives a downvote, with a Downvoter comment providing justification.
  2. The Contrarian should agree or disagree with the Downvoter, and cast their vote accordingly.
  3. Other readers can cast their votes on the comments of the Contrarian and the Downvoter. Here we have rationally operationalised truth-seeking: a weakness of the post is transparently surfaced via the Downvoter's comment, the Contrarian's yielding or defence is heard, and both sides can be rated.

Prior context

Automatic Rate Limiting on LessWrong suggests a stable grid dynamic:

  • Low-quality consensus posts/comments (usually somewhat upvoted, or heavily upvoted when they're funny or particularly emotionally resonant)
  • High-quality consensus posts/comments (usually pretty upvoted)
  • Low-quality contrarian posts/comments (usually somewhat downvoted, or heavily downvoted if they're rude)
  • High-quality contrarian posts/comments (usually heavily upvoted)

The crux of my criticism aligns with this grid: we agree that high-quality contrarian posts/comments are the most valuable — they are usually heavily upvoted. It follows that we should devise mechanisms that provide an actionable route for low-quality contrarian posts/comments to become high-quality to improve the platform as a whole.[4]

The platform as a whole

I love the LessWrong platform. I think that it attracts an incredibly intelligent, well-read audience with a diverse range of perspectives.

I feel that the technical implementation of the site is exceptional — a daily, curated news-cycle, with emergent high quality posts for the homepage; posts are easily readable, and conversational threads are natural, well-moderated, and easy to parse; the Reaction system feels nicely implemented in a way where it is available to opt in to, but isn’t overpowering for folks that just want textual discourse.

As a lurker and engager of a variety of posts on the platform, I think that my own writing is decently LessWrong-y — I lay out my thoughts step-by-step and avoid logical inconsistencies or incongruously big leaps. I’m decently well-versed in the rationality literature and cite core works that my ideas build upon.

My criticism

Sometimes I spend hours putting together a post that I’m proud of, but then receive no feedback besides a couple of downvotes.

This is incredibly frustrating for two reasons:

  • Firstly — it seems that once voting on a post flips negative, the site won’t surface it anywhere near as prominently to readers.
  • Much more than that — receiving a negative vote with no qualitative feedback is solely disruptive to the author: there’s no signal for how to build on your ideas or frame them differently.

I’m decently skilled at channelling my attention well and good at tuning out noise, but I’d be lying if I were to say that I don’t find it off-putting when this happens.

I have written about a concept of “the tension between truth-seeking and societal harmony”. Authentically expressing what you feel to be true creates tension if it doesn’t match societal norms. This is a shame because I’m very pro-free-speech: I think that the world is made a better place by allowing more people to express their ideas about what is true.[5]

This is not being enabled on LessWrong: a downvoting agent can effectively silence my voice just because they disagree with me. On individual comments, “overall karma” and “agreement karma” are distinct, but for a post only a single voting metric exists.

Is this not directly opposed to LessWrong’s central mission?

LessWrong is an online forum/community that was founded with the purpose of perfecting the art of human[6] rationality.

Visionary arrogance

Now we get to some self-awareness.

Jeff Bezos, a visionary free-thinker, through Amazon advanced a cultural model with 16 core Leadership Principles (LPs) — but also some principles not codified as LPs. One of these is Amazon’s doc-writing culture, and another is the idea of embracing Being Peculiar.

From the "Being Peculiar" article linked:

"What kind of person owns being peculiar?" My answer, someone who is more concerned about their own legacy, than what others think of him. While this is admirable, it is also quite peculiar by American society's standards.

This is hitting at the core of my argument — the only way you can be a visionary is to express a bunch of things that are, by definition, not the societal norm. Put another way: people with societally normative viewpoints will disagree with you.

I’m not equating myself to Jeff Bezos. What I’m saying is that if I were Jeff Bezos, perhaps my really insightful ideas would be buried by the current site setup.

This dynamic is especially true when someone expresses a lot of confidence in their non-normative ideas. This makes sense, because it pattern-matches as delusion which itself is off-putting. You could argue that maybe we should wait for someone to acquire a lot of status, e.g become a billionaire, and only then can we give them a platform for confidently expressing non-normative ideas. I’m not sure that I agree that this would be the best form of society though.

My recommendation

I think the site could be improved by implementing a system where negative votes on a post require the voter to cite their reason for negative voting — using either the Reaction system or a brief note of 30+ characters.

These reasons should be held to account: people should be able to see and downvote them if they are flawed.
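The proposed rule is mechanical enough to sketch as a validation check. Below is a minimal illustration in Python; the names (`Vote`, `validate_downvote`) and data shape are hypothetical, not part of LessWrong's actual codebase:

```python
# Hypothetical sketch of the proposed rule: a downvote on a post must
# carry either a Reaction or a free-text note of 30+ characters.
from dataclasses import dataclass
from typing import Optional

MIN_NOTE_LENGTH = 30  # the "brief note of 30+ characters" from the post


@dataclass
class Vote:
    direction: int           # +1 for upvote, -1 for downvote
    reaction: Optional[str]  # e.g. "Difficult to Parse"
    note: Optional[str]      # free-text justification


def validate_downvote(vote: Vote) -> bool:
    """Return True if the vote is acceptable under the proposed rule."""
    if vote.direction >= 0:
        return True  # upvotes need no justification
    if vote.reaction:
        return True  # a Reaction counts as a cited reason
    return bool(vote.note) and len(vote.note) >= MIN_NOTE_LENGTH


# A bare downvote is rejected; a justified one is accepted.
assert not validate_downvote(Vote(-1, None, None))
assert validate_downvote(Vote(-1, "Difficult to Parse", None))
assert validate_downvote(Vote(-1, None, "Same ideas were posted recently: [link]"))
```

The sketch deliberately accepts either signal (Reaction or note), matching the post's "either the Reaction system or a brief note" phrasing.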

I’m anticipating this post to be a straight shot to meta-irony: I have confidently made a non-normative claim, so expect a couple of negative post votes, absent of material feedback.

What this will look like if my criticism is valid

In this post I have presented a well-reasoned contrarian viewpoint:

  1. Determined a way to operationalise my recommendation upfront
  2. Suggested an opportunity to improve the LessWrong team's model of supporting contrarian views with stability
  3. Described mechanisms that suppress contrarian views
  4. Described how a person confidently expressing non-normative ideas is easy to dismiss, despite this being a necessary condition of being a visionary free-thinker

This post and my comments are significantly downvoted, nobody has engaged with these four core points and the damage is done: my contrarian view is suppressed, my future posts and comments hold less weight, and I'm disillusioned by the capacity of folks to engage with contrarian viewpoints in good faith.[7]

 

Appendix

Snapshot of first comment exchange, taken 2025/09/13

Post at -15 overall karma [10 votes] —

[Presumed][8] Downvoter (+4 overall karma [4 votes], +3 agreement karma [3 votes]):

You may be interested in a very similar discussion from several months ago: When you downvote, explain why.

Contrarian (-7 overall karma [3 votes], -10 agreement karma [3 votes]):

Are you implying that this post ("Visionary arrogance and a criticism of LessWrong voting") should be downvoted because it reaches the same conclusion as a 7-month old post which lacks half of the framing ("visionary arrogance") that I use to describe voting behaviour motivations?

If so that sounds logically flawed to me, and so I both disagree and have downvoted you.

If you were not implying that and simply offering some additional context for me to refer to (the discussion in the comments is valuable), then I apologise and will revert my downvoting.

  1. ^

    This is use of hyperbole for [attempted] comic effect.

  2. ^

    This is also use of hyperbole for humour — instead of "like an automaton", precisely I mean that a common writing style on LW is to provide a numbered/bulleted list of principles.

  3. ^

    I could alternatively use the word platform.

  4. ^

    My "criticism" is that this is not happening currently.

  5. ^

    But then allowing society to diligently give feedback, from simple disagreement up to structured punishment.

  6. ^

    "We say "human" rationality, because we're most interested in how us humans can perform best given how our brains work (as opposed to the general rationality that'd apply to AIs and aliens too)."

  7. ^

    This is the case as of 2025/09/13: "-15 karma from 10 votes on the post, -7 karma from 3 votes and -10 agreement karma from 3 votes on my first comment". In response I have made 4 relatively small updates to the post, called out in the first line, to highlight the irony: I think that this validates my criticism and strengthens the post.

    Update (2025/09/13, 3pm PST): The post is currently at -17 karma from 12 votes. It has 17 comments (including my own).

    I've made a few edits — not wholly-transformational, but admittedly making it easier for a reader to parse — since posting. These edits have come about directly as a result of discourse in the comments.

    My moderately-strongly held view is that it is now in its final form and beautifully captures everything that I set out to do.

    I'd be interested in anyone's viewpoint upon a full re-read, if indeed they are willing to dedicate the time — I totally get it if not (it's the weekend)!

    I'm not sure that this final form will be effectively surfaced to any new readers on the site, since the karma is so low. I have two ongoing Private Conversations, with whom I am providing this same update, and I hope I may be able to at least solicit final form feedback from them.

    My vision: perhaps now the post is high-quality and will accrue positive karma. That is to say that I'm calling the karmic bottom at -17.

  8. ^

    I explicitly moderated my response by saying that I apologise and will revert my downvoting if my interpretation of their comment is incorrect.

    When I received the comment, the post vote tally was at "-3", and I was presented with a Bayesian question to evaluate: P(Comment was provided in accordance with my explicit request | I made an explicit request and the post vote tally is at "-3").

    I admit that to better operationalise this I could have clarified for commenters that were following my Comment Guideline to explicitly "prepend [their comment] with "Downvote note:" " — I have made this edit to the post.

    Update (2025/09/13): The Presumed Downvoter followed up to clarify that they were not implying a reason to downvote this post, but they acknowledge that me reaching that conclusion was reasonable. Per the terms that I explicitly communicated, I've flipped my votes to approve and agree with their comment. I leave the rest of my comment unchanged, as a record of how I would engage with someone who was providing a reason to downvote that I disagree with.

    In practice, I wouldn't transparently say "If so that sounds logically flawed to me, and so I both disagree and have downvoted you." which is unnecessarily confrontational — I would just do it silently. I only state it transparently here as part of operationalising my vision for deriving more signal from downvotes.

Mentioned in: Personal Account: To the Muck and the Mire