For the last few months I've taken up the habit of explicitly predicting how much karma I'll get for each of my contributions on LW. I picked up the habit of doing so for Main posts back in the Visiting Fellows program, but I've found that doing it for comments is way more informative.
It forces you to build decent models of your audience and their social psychology, the game theoretic details of each particular situation, how information cascades should be expected to work, your overall memetic environment, etc. It also forces you to be reflective and to expand on your gut feeling of "people will upvote this a lot" or "people will downvote this a little bit"; it forces you to think through more specifically why you expect that, and how your contributions should be expected to shape the minds of your audience on average.
It also makes it easier to notice confusion. When one of my comments gets downvoted to -6 where I expected -3, I know some part of my model is wrong; or, as is often the case, it gets voted back up to -3 within a few hours.
Having powerful intuitive models of social psychology is important for navigating disagreement. It helps you realize when people are agreeing or disagreeing for reasons they don't want to state explicitly, why they would find certain lines of argument more or less compelling, why they would feel justified in supporting or criticizing certain social norms, what underlying tensions they feel that cause them to respond in a certain way, etc, which is important for getting the maximum amount of evidence from your interactions. All the information in the world won't help you if you can't interpret it correctly.
Doing it well also makes you look cool. When I write from a social psychological perspective I get significantly more karma. And I can help people express things that they don't find easy to explicitly express, which is infinitely more important than karma. When you're taking into account not only people's words but the generators of people's words you get an automatic reflectivity bonus. Obviously, looking at their actual words is a prerequisite and is also an extremely important habit of sane communication.
Most importantly, gaining explicit knowledge of everyday social psychology is like explicitly understanding a huge portion of the world that you already knew. This is often a really fun experience.
There are a lot of subskills necessary to do this right, but maybe doing it wrong is also informative, if you keep trying.
Why do you make a comment if you expect it to get net negative karma? I've always felt that I strongly agree with how the community votes. If a comment has three downvotes, it's probably not worth seeing.
Do you think that Less Wrong is really that bad at voting that some things need to be said which we'll downvote anyway?
I would have thought that "will be downvoted" is fairly close to "should not post."
I find that seeing a comment with a lot of down-votes has the exact opposite effect on me. "Six down-votes! What crazy half-formed idea is Will_Newsome talking about now!?"
The quality control benefits of down-voting are mostly deterrence, which requires that people feel good about up-votes and bad when they get down-voted.
(Just teasing you, Will)
Which is pretty dangerous, on the whole. It's reinforcement learning for meshing with local preconceptions. Luckily local preconceptions 'round these parts are pretty top notch relatively speaking, but unluckily they're really suboptimal objectively speaking. This is extra dangerous for people like me who do a lot of associative learning. Please everyone, do be careful when it comes to letting people reward or punish you for thinking in certain ways.
It is, in principle, dangerous. But the vast majority of down-votes are for rhetoric rather than position. Usually, my comments which contradict local preconceptions are just low karma relative to the replies, not negative karma. Most down-votes are for rhetorical and stylistic violations rather than unpopular beliefs.
I'd say this goes for a lot of your down-votes, too. Especially if we include 'paying insufficient attention to likely inferential distances' as a rhetorical violation. Though of course at this point a number of people may be especially sensitive to your comment quality, and mediocre comments that would usually be ignored are down-voted if your name is attached.
Thinking about what standards you should hold yourself to when it comes to choosing rhetoric and style is also an important kind of thinking, though. Like, using the phrase "it seems to me as if" habitually is a cheap way to get karma, but it's also a good habit of thought. But sometimes my rhetoric is negatively reinforced when the only other option I had was in some should-world where I wasn't prodromal schizophrenic; so by letting people's downvotes affect my perception of what I should or shouldn't have been able to do, it's like I'm implicitly endorsing an inaccurate model of how justification should work, or just of how my mind is structured, or of how people should process justification when they have uncertainty about how others' minds are structured. Policies spring from models, and letting policies be punished based on inaccurate models is like saying it's okay to have inaccurate models. (Disclaimer: Policy debates always have fifteen trillion sides and fifteen gazillion ways to go meta, this is just one of them.)
(Note that it doesn't necessarily matter that beliefs and policies tend to be mutually supporting rationalizations linked only in the mind of the believer.)
Fittingly, this comment currently has six upvotes. :-)
There has been at least one occasion when I've posted something despite correctly expecting it to be downvoted. In that case it was a topic where I felt the LW community was letting politeness/pro-social-ness get in the way of rationality/actually-being-right.
I'd love to see a collection of the lowest karma comments of high karma contributors.
My script for downloading all comments of a user has been updated to allow sorting by votes. After sorting, scroll to the bottom to see the lowest karma comments.
Going through some now. Preliminary findings: looking at the most downvoted comments a user ever made is a really good way to induce attribution bias. ("You people are jerks!")
This was not the karma I predicted for this comment.
Me too, but even more interesting would be lowest karma comments of mediocre contributors. The high karma contributors are rarely downvoted when they formulate an idea, because people suppose that even if it sounds crazy it mustn't be so since it originated from an elite contributor. Therefore I suppose that lowest karma comments of high karma contributors would mostly be trivial snarky remarks downvoted for incompatibility of sense of humor. At least this is my hypothesis - let it be tested, if we can collect such comments somehow.
In response to Jack's expression of interest I used Wei_Dai's script to download all my comments and had a search through with some regex. By approximate count the greatest number of downvoted comments were jests of the type you mentioned, followed by comments of the form "I don't approve of the grandparent either, but your specific criticism Y is wrong for this logical reason". The lowest vote that I spotted was -6, for a comment along the lines of "I fundamentally disagree with your accusations of me and do not wish to continue this conversation".
The selection here is somewhat biased inasmuch as I am comfortable deleting comments if for any reason a conversation is unsatisfactory to me. There are quite possibly comments or jokes that would have gone into free-fall if I had not deleted them when they reached -3 in 10 seconds flat. The downvoted comments that remain I either still endorse, consider important for the conversation to make sense, haven't noticed, or don't care about enough to click on. (This isn't to say that deleting a comment indicates that I don't endorse it. I also have no problem with choosing my battles.)
I'm afraid Jack would be disappointed, in that few of the most downvoted comments seem to be about object-level subject matter. Or, if they are, it is object-level conversation about something that people are... passionate about. It isn't a source of the ideas I hold that people most disagree with, which would have been interesting to see!
Can you please give us more information? Or even better, link to the actual instance.
It is certainly close to 'best not to post'. There are times when posting things that you know will be negatively received is worth doing anyway. But it doesn't work well if you try it too frequently. You end up with a reputation as a crackpot at best, either in general or specific to one topic. 'Qualia' and 'quantum monadology' spring to mind as past examples. Will is risking getting his own reputation on the subject of theism.
You end up with a reputation as a poor communicator. Unpopular but non-obviously stupid ideas that a person (or paperclip maximizer, etc.) tries to articulate are not punished, like here. That came to mind since I participated in the conversation but there are many other examples on LW.
What? No you don't. You end up with a reputation as a poor communicator when you communicate poorly, which is a different thing altogether.
I am assuming that non-stupid ideas well communicated will not be negatively received.
I'm pretty sure I passed that threshold awhile ago. At least many of my comments get systematically downvoted without getting read these days. ETA: And not just ones that have to do with "theism" (more like theology).
There is not one reputation on a forum like this, where people don't engage in gossip about other users. You have as many reputations as there are users here. So even if somebody is downvoting your comments without reading them (by the way, are you sure about that?), it still doesn't mean that you can't lose more reputation.
Yes. (It would be a weird hypothesis for me to come up with with little evidence and then assert confidently.)
I wasn't claiming I can't lose more reputation.
How did you manage to test it?
Refreshed the page every 5 seconds. If all my comments get downvoted at once that is strong evidence that they weren't actually read, especially if it happens more than once.
There's also a precedent, I dunno if you saw my discussion post about karmassassination.
I am considering making a thread in which people type, for reinforcement, a sentence emphasizing how little they know about votes they receive. What do you think of "I do not know why my comment got the votes it got"? It doesn't reflect partial knowledge or educated guesses enough to be perfect, can you think of better?
The point of Bayesian thinking is that you should have an idea why things are happening. If you genuinely don't know why your comments are getting the votes they do, then ask. This is not a shy forum. You'll build up a few data points and can resume being a competent Bayesian with a pretty good idea why you're getting the votes you do.
It doesn't have to be bad at judging, to judge some things wrongly.
(50% good/bad individual judgments seems natural as a baseline for the good/bad overall judgment impression cutoff but personally-empirically I think it might be more like 75% (I guess due to selection effects of some kind). Sorta like school grades. Do others have other impressions?)
A question of moral philosophy?
Because Less Wrong's extrapolated volition would have upvoted it, and if you didn't post it anyway then Less Wrong's extrapolated volition would be justified in getting mad at you for having not even tried to help Less Wrong's extrapolated volition to obtain faster than it otherwise would have (by instantiating its decision policy earlier in time, because there's no other way for the future to change the present than by the future-minded present thinkers' conscious invocation).
Because definitions of reasonableness get made up after the fact as if teleologically, and it doesn't matter whether or not your straightforward causal reasons seemed good enough at the time, it matters whether or not you correctly predict the retrospective judgment of future powers who make things up after the fact to apportion blame or credit according to higher level principles than the ones that appeared salient to you, or the ones that seemed salient as the ones that would seem salient to them.
This is how morality has always worked, this is how we ourselves look back on history, judging the decisions of the past by our own ideals, whether those decisions were made by past civilizations or past lovers. This pattern of unreasonable judging itself seems like an institution that shouldn't be propped up, so there's no safety in self-consistency either. And if you get complacent about the foundational tensions here, or oppositely if you act rashly as a result of feeling those tensions, then that itself is asking to be seen as unjustified in retrospect.
And if no future agent manages to become omniscient and omnibenevolent then any information you managed to propagate about what morality truly is just gets swallowed by the noise. And if an omniscient and omnibenevolent agent does pop up then it might be the best you can hope for is to be a martyr or a scapegoat, and all that you value becomes a sacrifice made by the ideal future so that it can enter into time. Assuming for some crazy reason that you were able to correctly intuit in the first place what concessions the future will demand that you had already made.
You constantly make the same choices as Sophie and Abraham, it's just less obvious to you that you're making them, less salient because it's not your child's life on the line. Not obviously at this very moment anyway.
Go meta, be clever.
In other words, when everyone thinks you're wrong, do it anyway because you're sure it's right and they'll come around eventually?
This has been the foundation for pretty much all positive social disobedience, but it's wrong a lot more often.
People who disagree with mainstream opinions here, but do so for well articulated and coherent reasons are usually upvoted. If you think something will be downvoted, I think you should take very seriously the idea that it's either very wrong or you're not articulating it well enough for it to be useful.
I still don't see the point of writing obfuscated comments, though. If serving a possible future god is your cup of tea, it seems to me that making your LW comments more readable should help you in reaching that goal. If that demands sacrifice, Will, could you please make that sacrifice?
Okay, I haven't even been here that long and I'm already getting tired of this conversation.
If you really can predict your karma, you should post encrypted predictions* offsite at the same time as you make your post, or use some similar scheme so your predictions are verifiable.
Seems obviously worth the bragging rights.
* A prediction is made up of a post id, a time, and a karma score, and means that the post will have that karma score at that time.
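One simple way to make such predictions verifiable is a salted hash commitment: publish the hash now, then reveal the prediction and salt after the deadline. This is just a sketch of one possible scheme, not anything the commenter specified; the post id and date below are hypothetical placeholders.

```python
import hashlib
import json
import secrets

def commit(post_id: str, check_time: str, predicted_karma: int) -> tuple[str, str]:
    """Return (digest, payload). Post the digest publicly now; keep the
    payload private until check_time, then reveal it to prove the
    prediction was made in advance. The random salt prevents anyone
    from brute-forcing the small space of plausible karma scores."""
    payload = json.dumps(
        {"post": post_id, "time": check_time, "karma": predicted_karma,
         "salt": secrets.token_hex(16)},
        sort_keys=True)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return digest, payload

def verify(digest: str, revealed_payload: str) -> bool:
    """Check that a revealed payload matches the earlier commitment."""
    return hashlib.sha256(revealed_payload.encode("utf-8")).hexdigest() == digest

# Hypothetical usage: commit to "post lw-123 will be at 8 karma on Oct 14".
digest, payload = commit("lw-123", "2011-10-14T00:00:00Z", 8)
```

Revealing the payload later lets anyone recompute the hash and confirm the prediction; a tampered payload fails verification.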
So I guess now is a decent time to reveal that I'd predicted this post would receive 8 karma.
After how long?
A month or so. So, October 14th.
That makes me tempted to downvote it so you'll be right. Which isn't the point, so I did what I originally intended and upvoted.
(Suggestion: Vote on likelihood ratios, not posterior probabilities. (I find it unlikely that this post deserves to be downvoted on its own merits. The continued political downvoting saddens me.))
I am new to LessWrong, so can you please explain why you should be downvoted?
Will Newsome thinks he's a rationalist hipster. See this comment thread if you really want to know.
Vladimir_Nesov correctly interpreted me in that thread. Everybody else just had a lot of fun taking turns talking about how I am bad at communication (bad at trying to communicate), as if that was something I didn't already have a detailed model of.
Well, to be fair he kind of is a rationalist hipster, for certain values of 'hipster'.
(No offense, Will.)