And once you have a message you actually need to share, you'll actually be able to express it.
That doesn't look like good advice to me. Or rather, for it to make sense, one should assume a priori that the day will come when one will have "a message you actually need to share". This way of thinking proves too much - why shouldn't I read those trip reports instead, assuming that one day I will need this knowledge?
There are a lot of things one can do with one's time, and most people never come to situations like that. Learning to write better because one day I will want to write a trip report is a bad algorithm to use.
With science, the problem is not that people are bad at writing; it's that the system has a standard for how to write, and it's a bad standard. If a good writer tried to put good writing into their scientific paper, they would be told to rewrite it to be more "scientific", so learning to write wouldn't help.
Better advice would be to see what skills one may predictably need in the future, and practice those skills. But, I claim, for most people this will not include "writing".
there is a high correlation between what voters as a whole want on any single narrow issue, and what the outcome of the black box produces.
I'm way too late to the party, but I'm still reading old posts, and that is evidence that other people may be too.
This statement is not true in the country where I live. There are some narrow issues on which there is a large public majority, and the black box produces uncorrelated output - and this has been happening for a long period of time.
For the rest of this comment, I'm going to restrict "fox" to "good-scoring generalist forecaster"
Why? This is very much not how I understand it, and I find it hard to even generate hypotheses for how you came to this conclusion - which makes the rest of the comment seem irrelevant and beside the point.
Foxes are people who go to those with some big theory and tell them they are wrong - that everyone who tries to have one theory is wrong, and that they should have many theories in their toolbox and use the most appropriate one.
Basically, I understood it as Toolbox-thinking versus Law-thinking.
Restricting foxes to good-scoring generalist forecasters is even weirder. Like... why? Are bad-scoring forecasters not foxes? It looks to me like trying to rig the game.
I opened a tab with this comment as something I wanted to answer when an opportunity arose. It has been two years now, and the tab remains open. I wrote two posts about it on my blog and clarified my thinking about the issue a lot. And yet, I still believe what I believed then, although hopefully I'm now more capable of expressing it. Not sure about that - English is hard. But it's worth trying.
Here is my opinion in its simplest form: one should not be a cooperation bot.
If I have repeated opportunities to pay 1 unit to get someone else 10 units, and they have the same option, it's better for us both to make those trades. But if the other person refuses to do that - if they happily take the provided 10 units and then don't give up 1 unit to get me 10 - I should stop.
That is the Morality as "Coordination" vs. "Do-Gooding" distinction.
And this is what I remember wanting to write to you two years ago, and failing to on the first try: your call for Civic Duty looks to me like a call to cooperate even when the other side defects - a call to create a cooperate-cooperate equilibrium while ignoring that other people are not cooperating.
There is no such thing as an acceptable exchange rate; that is the wrong category. There is an option to enter an agreement to cooperate - to exchange 1 unit of mine for 10 or 100 or 1000000 of yours if the opportunity arises. And the right thing to do is tit-for-tat with forgiveness, which is definitely not being a cooperation bot. Deciding that if the exchange rate is high enough you must take it, that it's your civic duty, is just calling on people to be cooperation bots and punishing those who aren't, while ignoring the important difference between those who enter the agreement and those who don't.
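To make the contrast concrete, here is a minimal sketch in Python of the repeated exchange game described above. It is only an illustration of the idea, not anything from the original post: the payoff numbers, the strategy names, and the `forgiveness` parameter are my own assumptions.

```python
import random

COST, BENEFIT = 1, 10  # pay 1 unit so that the other side gains 10

def play(me, other, rounds=100, seed=0):
    """Repeated exchange game: each round, each player chooses
    whether to pay COST so that the other player gains BENEFIT."""
    rng = random.Random(seed)
    my_score = other_score = 0
    my_moves, their_moves = [], []
    for _ in range(rounds):
        a = me(their_moves, rng)       # my move, given their past behaviour
        b = other(my_moves, rng)
        if a:                          # I cooperate: I pay, they gain
            my_score -= COST
            other_score += BENEFIT
        if b:                          # they cooperate: they pay, I gain
            other_score -= COST
            my_score += BENEFIT
        my_moves.append(a)
        their_moves.append(b)
    return my_score, other_score

def cooperation_bot(their_moves, rng):
    return True                        # always pays, no matter what

def defect_bot(their_moves, rng):
    return False                       # happily takes, never pays back

def tit_for_tat_forgiving(their_moves, rng, forgiveness=0.1):
    if not their_moves or their_moves[-1]:
        return True                    # open with cooperation, reward cooperation
    return rng.random() < forgiveness  # occasionally give a defector another chance

print(play(cooperation_bot, defect_bot))        # the bot bleeds 1 unit per round
print(play(tit_for_tat_forgiving, defect_bot))  # stops trading after being burned
```

Against a defector, the cooperation bot loses 1 unit every single round, while tit-for-tat with forgiveness loses only on the opening move and on the rare forgiveness probes - which is exactly the difference between cooperating unconditionally and cooperating only with those who reciprocate.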
***
I won't really engage with this comment; I don't think it reacts to the concept I tried, and failed, to communicate, so I tried again.
But I can say that from my current ontology, in which I see Morality-as-Coordination and Morality-as-Do-Gooding as two very different things, the claims about local governments look confused to me. There is "charity" that is coordination, and there is charity that is do-gooding, and those are different things.
Most of what governments do is not do-gooding; it's coordination, with elements of insurance.
A small and very belated reply to comment (6): I did, actually, come to believe things and do things because of the decision theory. I sometimes had the impulse myself, but considered it childish and stupid and basically irrational, as in the past I had seen people implement "spite" in really bad ways.
Then I encountered the decision theory on that, and evidence that people react to incentives more than I thought (I'm not sure it's a crux - I endorse doing the correct thing even when the relevant people don't react to incentives).
So I changed my mind. I endorse the feeling you describe when it arises, but I'm still naturally pretty low on it. I still behave accordingly, though.
It's not exactly supporting evidence for your claim, but it feels close enough to me, so I wanted to write it here: it's possible and desirable to change one's behaviour to do the right decision-theoretic thing, especially in the direction of not negotiating with terrorists.
So I did actually change my behaviour.
One of the best forums I knew was a homeschooling place that was officially anti-vax. It was really bad at truth-seeking, but I learned so much there. It turned out there are a lot of things that don't get said in places that are not collaborative by default. My models of people and societies are much better because I was there. And I also left when I couldn't endure the idea that I was bad and wrong for pointing out obvious falsehoods.
I don't think you can seek truth without pointing at false things and saying they are false. But this way of thinking also tends to be blind to all the things that can exist only in places where there is a good-faith assumption: a lot of sharing of information, of the ability to say or write socially forbidden truths.
And I think most places are far, far away from the Pareto front.
The way to collaboratively say "I don't understand" includes demonstrating Good Faith - trying to come up with two hypotheses, trying to understand in a way that is legible to the people reading it.
You can disagree without blocking. More effort needs to be invested, and it's a coordination problem; without norms it can lead to bad places where the less busy person wins without regard to truth.
But when it works, it works better than the bad-faith-allowing places. Disagreements in good faith are harder, but they are more rational.
I think a lot about how my good-faith heuristics, which I got from a place that was atrocious at truth-seeking, are so good at promoting it. It's because they forbid a very frequent failure mode of irrationality: people who have decided that the other side is the enemy tend to interpret things not in the most accurate way, but in the most strawman-y way. That is very, very bad, and it's a dynamic that prevents thinking and understanding.
the "it will be used i status based asymmetry" claim look to me like general counterargument to any norms at all. in the same time, it pessimistic in way tat look unjustified to me. there are rules that enforced more or less neutrally, and i see no reason to assume the worst here. LessWrong is not actually the sort of places when the popular kids never get called out.
While the emoticons suggestion is interesting, it misses the point of building mental habits - the way asking myself whether I'm dreaming is practice for lucid dreaming. Having other people do it loses that benefit. It also loses the saving throw of avoiding escalation by apologizing.
I actually disagree. It may be true of the other parts of the prefix, but reading something that is tagged as an uncharitable rant would make me less inclined to get mad and react uncharitably. The point of tagging things as rants is to prevent the dynamic in which emotions mindkill people and discussions, and saying up front that something is unfair produces a different emotional reaction.
Also, the last time I encountered that sort of prefix it was part of a long post, and it was much less hard to parse because I had context. And it prevented the need to go back to understand what was intended, which is cognitively costly too.
The question is whether something is worth it. Reading long posts is more cognitively costly than watching TikTok videos, and yet here we are. Something being costly is not enough reason to avoid it.
Can anyone reading this truly deny that those warnings came true from the doom sayer's perspective?
Yes. Your arrow of causality looks backwards to me - I don't see divorce destigmatization -> more divorce. In the divorce case it's clearly more divorce -> destigmatization. I don't remember where to find the posts about how the laws that allow divorce came after the spike in divorces, and not the other way around.
There is an important point here. I only recently re-evaluated my opinion on TV and decided the doomers were right there. But it sure looks to me like you over-generalize and give very dubious examples here, without evidence.
Did TV destroy the ability to read complex texts? Are you sure? Because I'm not. I would need to see some statistics on that.
And the subtleties you just wave away are important. There is a great difference between a world where destigmatization -> more divorce and a world where more divorce -> destigmatization.
Looking at the various ways the doomers were right, partially right and partially wrong, and just straightforwardly wrong can teach interesting things. But we can't learn them if you round both being right and being wrong to being basically right!
Interesting - I came to the opposite conclusion. Or, more concretely, I found that there are rules I can follow 95% of the time, where that is actually better than following them 100% of the time, and there are rules I simply can't follow, where I slip all the way down the slippery slope. And a lot of the time I can search for a differentiator, or a more complicated solution.
Simple rules are needed for coordination, when you have about five words. But I have a much wider channel for coordinating with future-me, and I should use it to my advantage.
So my lesson was that rules of the form "never do X" or "always do Y" are almost always bad rules that lack granularity.
Instead of "never go to sleep after midnight" I can go for "don't go to sleep past midnight, except that once a month you can read a book for as long as you want, if you commit to waking up on time in the morning and going to sleep early the next day".
There are patterns where occasionally "breaking" the rule does not break the habit and gets me more utility (washing the dishes before going to sleep 95% of the time is better than 100% of the time - I get the utility in the rare cases when I'm honestly too tired, and I do not slip down this slope).
Now, rules of the form of (1), which lack granularity and lack built-in exceptions, look surprisingly stupid to me. Can you really have only this simple rule? Are you sure? Did you try?
you say " humans tend to heavily biased towards believing that their future selves will make the decisions that they want it to make. " but it look to me there are symmetrical and opposing biases -toward optimism and toward pessimism. and you just chose bias in the other direction. i prefer calibration.
and claims like (4) look to me like self-fulfilling prophesies. why do you believe that? my experience show that it's mostly wrong. sometimes it is true, and then i get one of the hard line rules. but most of the time, more granularity is possible and sderiable. and having it, having better rules, is skill that is possible to develop. as is the ability to have exceptions without dismantling the rule.