I immediately recognize the pattern playing out in this post and in the comments. I've seen it so many times, in so many forms.
Some people know the "game" and the "not-game", because they learned the lesson the hard way. They nod along, because to them it's obvious.
Some people only know the "game". They think the argument is about "game" vs "game-but-with-some-quirks", and object because those quirks don't seem important.
Some people only know the "not-game". They think the argument is about "not-game" vs "not-game-but-with-some-quirks", and object because those quirks don't seem important.
And these latter two groups find each other, and the "gamers" assume that everyone is a "gamer", the "non-gamers" assume that everyone is a "non-gamer", and they mostly agree in their objections to the original argument, even though in reality they are completely talking past each other. Worse, they don't even know what the original argument is about.
Other. People. Are. Different.
Modeling them as mostly-you-but-with-a-few-quirks is going to lead you to wrong conclusions.
(Meta: writing this in separate comment to enable voting / agreement / discussion separately)
If you want to make the case for tactical nuclear deployment not happening (which I hope is the likely outcome), I want to see a model of how you see things developing differently.
I'll list a few possible timelines. I don't think any of these is particularly likely, but they are plausible, and together with many other similar courses of events they account for significant chunks of probability mass.
On Nord Stream sabotage:
That leaves us with Russia and Germany. I don't see what Germany could gain from this. I don't see what Russia could gain from this either, but then Russia has developed a habit of doing things despite having nothing to gain from them. Also, I see some reasons why Russia could think this is a good idea (implicitly threatening the West by demonstrating willingness to use grey-zone warfare against their critical infrastructure, to try to get them to back down).
So possibly Russia. (Low confidence.)
Epistemic status: proof by lack of imagination.
Thus I claim we don't know whether people see dreams.
That's a pretty bold claim just a few sentences after claiming to have aphantasia.
Some of my dreams have no visuals at all, just a vague awareness of the setting and plot points. Others are as vivid and detailed as waking experience (or even more, honestly), at least as far as vision is concerned. Dreams can fall anywhere on a spectrum between these extremes, and sometimes they can even be a mixture (e.g. a visual experience of the place and an awareness of characters in that place that don't appear visually).
Yes, people do see dreams. I'm fairly certain I can tell the difference.
Yes, I'm aware of all that, and I agree with your premises, but your argument doesn't prove what you think it does. Let's try to reductio it ad absurdum, and turn the same argument against the possibility of fast technological or scientific feedback cycles.
If you live in a technologically backwards society (think Bronze Age), you can't become more technologically advanced yourself, because you'll starve if you spend your time trying to do science. The technology of society (including agriculture, communication, tools, etc.) needs to progress as a whole. If you live in a scientifically backwards society, you can't have more accurate beliefs, because you'll be burned at the stake by all the people believing in nonsense. Therefore, science and technology can only progress as fast as the majority can adopt them.
And all of the above is true, actually, up to a certain point in history. But once the scientific understanding of society advances to the point where it understands that science is a thing and has a basic understanding of how science works, it can basically create a mesa-feedback-loop. Similarly, once you have technologies like writing and free market capitalism, suddenly it's possible to set up a tech company, sell something worthwhile and in exchange not starve.
And that's the frame for my original comment. I didn't mean to imply that a fast moral feedback loop would involve a single person going on some meditation retreat that is somehow a clever feedback loop in disguise and coming back more moral, or whatnot. I think it is possible that there is some innovation, moral or social or otherwise (e.g. a common understanding of common knowledge), that would enable the creation of fast moral and social feedback loops.
So the question, again: what are the necessary conditions for such a feedback loop? Are they present? What would it look like? How would you recognize it if it was happening right in front of you?
It seems pretty likely that moral and social progress are just inherently harder problems, given that you can't [...] have fast feedback cycles from reality (like you do when trying to make scientific, technological and industrial progress).
We can't? Have we tried? Have you tried? Is there some law of physics I'm missing? What would a real, genuine attempt to do just that even look like? Would you recognize it if it was done right in front of you?
There are multiple meanings of "progress" afoot here. Tabooing the word, my reading of your point is "moving toward any specific imagined future state of the world [that we all agree is good] is good, therefore moving forward is good".
(Another non-native having a go at it...)
When your advice both ways seems fine,
Calibrate, then make it rhyme.
more transparent to outsiders
There is the danger of it being more transparency-illuding instead. (Yeah, I just invented that term, but what did I mean by it?)
My gut feeling is that attracting more attention to a metric, no matter how good, will inevitably Goodhart it.
That is a good gut feeling to have, and Goodhart certainly does need to be invoked in the discussion. But the proposal is about using a different metric with a (perhaps) higher level of attention directed towards it, not just directing more attention to the same metric. Different metrics create different incentive landscapes to optimizers (LessWrongers, in this case), and not all incentive landscapes are equal relative to the goal of a Good LessWrong Community (whatever that means).
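To make the "different metrics create different incentive landscapes" point concrete, here is a toy sketch (all numbers and names are made up for illustration): authors split a fixed effort budget between genuine quality and surface appeal, and the visible metric over-weights appeal relative to what the community actually wants. An optimizer pointed at the metric then drains effort away from the true goal.

```python
# Toy Goodhart sketch with hypothetical numbers: the community's true
# goal is quality, but the visible metric rewards appeal twice as much.
def metric(quality, appeal):
    return quality + 2 * appeal  # proxy metric over-weights appeal

def true_value(quality, appeal):
    return quality  # what we actually care about

# Hill-climb the proxy metric over a fixed effort budget of 10 units,
# split between quality and appeal.
best = max(
    ((q, 10 - q) for q in range(11)),
    key=lambda qa: metric(*qa),
)
print(best)               # optimizer puts all effort into appeal: (0, 10)
print(true_value(*best))  # true value collapses to 0
```

Swapping in a metric with different weights changes where the optimizer ends up, which is the sense in which the proposal is not merely "more attention on the same metric".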
I am not sure what problem you are trying to solve, and whether your cure will not be worse than the disease.
This last sentence comes across as particularly low-effort, given that the post lists 10 dimensions along which it claims karma has problems, and then evaluates the proposed system relative to karma along those same dimensions.