This is a special post for quick takes by metachirality. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

What if we had a setting to hide upvotes, hide reactions, or randomize the order of comments, so that readers aren't biased by the desire to conform?

Try greaterwrong.com: turn on the anti-kibitzer and sort comments by New.

This is probably possible, but I wouldn't use it. I WANT comments to conform on many dimensions (clarity of thought, non-attacking framing of other users, being verbose enough to be clear but not so verbose as to waste time or obscure the point, etc.). I don't want them to (necessarily) conform on content or ideas, but the first desire outweighs the second by quite a bit, and I think many voters/reactors use voting that way.

I'd argue for this being the default as well. +1

You can randomize the default comment ordering in your account settings page.

But you can't change it for anyone else's view, which is the important thing.

I don't know whether metachirality was thinking of a setting for authors or for commenters. What makes you confident he was talking about the author version?

I'm talking about the version for commenters.

I think there should be a way to find the highest rated shortform posts.

You can! Just go to the all-posts page, sort by year, and the highest-rated shortform posts for each year will be in the Quick Takes section: 

2024: 

2023: 

2022: 

A joke stolen from Tamsin Leake: A CDT agent buys a snack from the store. After paying: "Wow, free snack!"

That's true, though, once the payment is a sunk cost. :) You don't have to eat it, any more than you would if someone handed you the identical snack for free. (And considering obesity rates, if your reaction isn't at least somewhat positive and happy when looking at this free snack, maybe you shouldn't eat it...)
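To make the sunk-cost point concrete, here's a minimal Python sketch (the price and snack-value numbers are made up for illustration): once the payment is sunk, it appears as the same constant in every branch, so it can't change which action an expected-utility comparison picks.

```python
# Toy illustration of why a sunk cost drops out of the decision.
# All utility numbers are made up for illustration.

PRICE = -3         # already paid; the same constant in every branch
SNACK_VALUE = 5    # enjoyment from eating the snack

def utility(eat: bool) -> int:
    """Total utility of each action *after* the payment is sunk."""
    return PRICE + (SNACK_VALUE if eat else 0)

# The price shifts both options by the same constant, so the
# comparison between actions is unchanged by it:
assert (utility(True) > utility(False)) == (SNACK_VALUE > 0)

best_action = max([True, False], key=utility)
print("Eat the snack?", best_action)  # True: the sunk price didn't matter
```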

One common confusion I see is analogizing whole LLMs to individual humans, and thus concluding that LLMs can't think or aren't conscious, when it is more appropriate to analogize the LLM to the human genome and individual instantiations of the LLM to individual humans.

The human genome is more or less unchanging, but one can instantiate individuals from it that learn from their environment. Likewise, an LLM is more or less unchanging, but one can instantiate entities from it that learn from their context.

It would be pretty silly to say that humans can't think or aren't conscious because the human genome doesn't change.
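To illustrate the analogy, here's a toy Python sketch (purely illustrative, not any real LLM API): the weights play the role of the genome, frozen and shared, while each instantiation carries its own mutable context and so adapts independently.

```python
from dataclasses import dataclass, field

# Toy illustration of the genome/LLM analogy; not a real LLM API.
# "weights" play the role of the genome: fixed and shared.
# Each Instance is one "individual": same weights, its own mutable context.

@dataclass(frozen=True)
class Model:
    weights: tuple  # frozen, like the genome

@dataclass
class Instance:
    model: Model
    context: list = field(default_factory=list)  # mutable, grows with experience

    def observe(self, event: str) -> None:
        # In-context "learning": the model never changes, the context does.
        self.context.append(event)

shared_model = Model(weights=(0.1, 0.2, 0.3))
alice = Instance(shared_model)
bob = Instance(shared_model)

alice.observe("saw a red door")
bob.observe("heard a joke")

# Same unchanging model, two diverging "individuals":
assert alice.model is bob.model
assert alice.context != bob.context
```

The structural point is just that what differs between "individuals" is the accumulated context, not the shared, unchanging model underneath.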