All of swestrup's Comments + Replies

The Finale of the Ultimate Meta Mega Crossover

Oh, I understood that. Except that your explanation of what happened at the end of Permutation City made sense whereas how that story actually ended did not. Hence I prefer your explanation of the ending of Permutation City to the one provided in the book.

The Finale of the Ultimate Meta Mega Crossover

I really enjoyed the story, and I have to say that I prefer your ending to Permutation City over the one that Egan wrote.

The story is an alternate history of A Fire Upon the Deep, but it's a sequel to Permutation City - it's not an alternative to Egan's ending, but something that could have happened after Egan's ending took place as written.

The Sword of Good

I agree, which is why I tend to shy away from performing a moral analysis of Fantasy stories in the first place. That way lies a bottomless morass.

Aurini (+2, 12y): Fantasy stories, and ninety percent of science fiction nowadays...
The Sword of Good

Interesting. It's hard to reconstruct my reasoning exactly, but I think that I assumed that things I didn't know were simply things I didn't know, and based my answer on the range of possibilities -- good and bad.

Huh; I thought my browser had failed, and this post hadn't appeared. Anyway...

There's an old army saying: "Being in the army ruins action movies for you." I feel the same way about sci-fi. Aside from season 3, every episode of Torchwood (which I've recently started watching, now that I've finished The Sopranos) is driving me up the wall. I propose a corollary:

"Understanding philosophical materialism and the implications thereof ruins 99% of Science Fiction... and don't get me started on Fantasy!"

In my opinion, there are three essen... (read more)

The Sword of Good

I would say that the likelihood is overwhelming that BOTH choices will lead to bad ends. The only question is which is worse. That's why I was saying it was between two evils.

Besides, it's hard to reconcile the concept of 'Good' with a single flawed individual deciding the fate of the world, possibly for an infinite duration. The entire situation is inherently evil.

Aurini (+6, 12y): Though it wasn't explicitly said, it was heavily implied that either choice would be for a potentially infinite duration. This is a world of fantasy and prophecy, after all: I got the impression that the current social order was stable, and given that there was magic (not psychic ability but magic) it's also fair to assume that the scientific method doesn't work (not that this makes any sense, but you have to suspend that disbelief for magic to work [gnomes are still allowed to build complex machines, they're just not allowed to build useful machines]). The way I interpreted it was that he had a choice between the status quo for 1000 years, or an unknown change, guided by good intentions, for 1000 years.

Besides, the Big Bad was a Marty Stu. How could I not side with him?

(Another great work, Yudkowsky - you really should send one of these to Asimov's SciFi)
The Sword of Good

My first impression of this story was very positive, but as it asks us to ask moral questions about the situation, I find myself doing so and having serious doubts about the moral choices offered.

First of all, it appears to be a choice between two evils, not evil and good. On one hand is a repressive, king-based, classist society that is undeniably built on socially evil underpinnings. On the other hand we have an absolute, unquestionable tyranny that plans to do good. Does no one else have trouble deciding which is the lesser problem?

Secondly, we know for a ... (read more)

Kaj_Sotala (+8, 12y): I gathered that the choice being a difficult one was the whole point. It's not a genuine choice if the right choice is obvious, that much was explicitly stated. You say it "clearly" wasn't a Choice Between Good and Evil, but I don't think that's clear. One choice might still have a good outcome and the other an evil one. It's just that we don't know which one is which.
Absolute denial for atheists

I rather enjoy the taste of a Brown Cow, which is Crème de Cacao in milk. Then again, I'm sure I'd prefer a proper milkshake. Generally, if I drink an alcoholic beverage it's for the side effects.

Harnessing Your Biases

Granted, the title was probably too flip, but I think yours is a little wordy. I'm not sure I can do better at the moment other than maybe something like "Self-Publication as a Truth Filter".

Eliezer Yudkowsky (0, 12y): Go ahead and change it. It won't break any links.
kpreid (0, 12y): Clarification: My quotation from your article was not intended to be a suggestion of a title.
cousin_it (-1, 12y): I feel your post is related to Eliezer's "Say It Loud" [http://lesswrong.com/lw/u3/say_it_loud/]. That, by the way, is a great title; try to do no worse.
Fourth London Rationalist Meeting?

Reading this, I suddenly had an A-Ha! and checked my post from last month that had, mysteriously, never garnered a single comment or vote, and discovered that it was in the drafts area. I could swear that I double-checked it at the time to make sure it had been published, but in any case, I've now made sure it's published. Thanks!

Atheism = Untheism + Antitheism

To echo scientists who say that something is "Not Even Wrong" if it's untestable and/or non-scientific to the point of being incomprehensible, my position on the whole religion question is one that I tend to call Ignosticism: religions' definitions of God are so self-contradictory that I don't even know what they mean by God.

Generally, when someone asks if I believe in God, I tell them to define it. When they ask me why, I ask them if they believe in Frub. If so, why? If not, why not? Without me giving them a definition, how can they possibly give a rational answer?

Eliezer Yudkowsky (+5, 12y): Well, sure. By the time the Untheists had talked with any theist from our world for a short period of time, they would deduce that "God" could not be cashed out as a consistent model of anything but rather consisted of the conversational rule "Agree with extreme positive statements".
Religion, Mystery, and Warm, Soft Fuzzies

The above is a great list. Here are a couple more to add:

Vision can also be divided into a modelling sense (what's out there) and a targeting sense (where is something). There are known cases of someone losing one of these without the other (e.g. a totally 'blind' man being able to perfectly track a moving target with his pointing finger by 'guessing').

As well, we have something called the 'General Chemical Sense' that alerts us to damage to mucus membranes, and is the thing that is complaining when you have the sensation of burning during excretion after you've had a spicy meal.

Religion, Mystery, and Warm, Soft Fuzzies

I think this post made some very good points and I've voted it up, but I want to pick a nit with the mention of "your five senses". That's Aristotelian mythology. We have many more than five, so could you please edit this to just read "your senses"?

(Actually, since I'm posting this, I should mention I don't believe in qualia either, but that is a debate of an entirely different order).

SoullessAutomaton (+7, 12y): To sate the curiosity of anyone uninclined to look for information themselves, other senses include:

* Equilibrioception, via the inner ear, providing sensation of angular momentum and acceleration
* Proprioception, feedback on the movement and position of the body. This is why you can close your eyes and touch your fingertips together.
* Various internal signals, such as hunger
* Pain, a distinct sensation that can be caused by various conditions
* What is commonly regarded as the sense of "touch" can be separated into multiple distinct types, including heat, cold, and pressure. For a demonstration of the difference between heat and cold sensation, place small amounts of the chemicals menthol (from peppermint extract) and capsaicin (from chili peppers) in your mouth--the former triggers cold receptors, while the latter triggers heat (and pain) receptors.

As an aside, there are also five distinct sensations of flavor, not the four that were commonly accepted until recently.
The mind-killer

I think it will be very necessary to carefully frame what it would be that we might wish to accomplish as a group, and what not. I say this because I'm one of those who thinks that humanity has less than a 50% chance of surviving the next 100 years, but I have no interest in trying to avert this. I am very much in favour of humanity evolving into something a lot more rational than what it is now, and I don't really see how one can justify saying that such a race would still be 'humanity'. On the other hand, if the worry is the extinction of all rational th... (read more)

byrnema (0, 12y): I wonder how many rationalists share this view. If a significant number, it would be worthwhile to discuss this first, in hopes of mustering a broader consensus about what the group should do, or even just to be aware of the reasons for lack of agreement.
Generalizing From One Example

Okay, then I shall attempt to come up with a post that doesn't re-cover too much of what yours says. I shall have to rethink my approach somewhat to do that though.

Generalizing From One Example

I find it interesting that some folks have mental imagery and others don't, because this possibility had never occurred to me despite having varying ability with this at different times. My mental imagery is far more vivid and detailed when I'm asleep than when I'm awake, which I've often wondered about.

Generalizing From One Example

This post completely takes the wind out of the sails of a post I was planning to make on 'Self-Induced Biases', where one mistakes the environment one has chosen for oneself as being, in some sense, 'typical' and then derives lots of bad mental statistics from it. Thus, chess fanatics will tend to think that chess is much more popular than it is, since all their friends like chess, disregarding the fact that they chose those friends (at least partly) based on a commonality of interests.

A worse case is when the police start to think that everyone is a criminal because that's all they ever seem to meet.
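The chess example can be simulated. Here's a toy sketch (all numbers are hypothetical, chosen just to illustrate the mechanism) showing how choosing friends by shared interest inflates the apparent popularity of that interest:

```python
import random

random.seed(0)

POP_CHESS_RATE = 0.05  # hypothetical: 5% of the population likes chess

def make_friends(likes_chess: bool, n_candidates: int = 100_000) -> list:
    """Meet n_candidates random people, but keep someone as a friend far
    more often (assumed 10x here) when they share your interest."""
    friends = []
    for _ in range(n_candidates):
        candidate_likes = random.random() < POP_CHESS_RATE
        keep_prob = 0.5 if candidate_likes == likes_chess else 0.05
        if random.random() < keep_prob:
            friends.append(candidate_likes)
    return friends

friends = make_friends(likes_chess=True)
observed = sum(friends) / len(friends)
print(f"true rate: {POP_CHESS_RATE:.0%}, rate among my friends: {observed:.0%}")
```

With these made-up retention odds, roughly a third of the chess fanatic's friends like chess, against a true population rate of 5% -- the "typical environment" illusion in miniature.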

MrHen (+1, 12y): I would read that.
Scott Alexander (+5, 12y): No, not really. I kind of thought we needed more on that, but that this post was long enough already. And I didn't even think of the police-criminal thing. If you have more than what you said in this comment, please do post it, maybe with this post in the "related to" section.
Evangelical Rationality

But what does that have to do with the adjectives of 'near' and 'far'?

[anonymous] (0, 12y): People somewhere (does someone know the reference?) have done studies on how priming people with certain types of words influences how they react to new information. They found that whenever people were made to think of things far in the future or socially distant, they were more likely to think abstractly and idealistically. When prompted with things in the present or immediate future, or by things that are close to them in their social network, they were more likely to react with practical thinking. Pjeby's explanation is a good way of describing just why this has come to be the case.
pjeby (0, 12y): The "near" system drives our behavior in relation to things that are "near" in terms of time, space, precision, and detail. The "far" system drives our verbalizations and abstractions regarding things that are "far" on those same axes.
The ideas you're not ready to post

Lurkers and Involvement.

I've been thinking that one might want to make a post, or post a survey, that attempts to determine how much folks engage with the contents on less wrong.

I'm going to assume that there are far more lurkers than commenters, and far more commenters than posters, but I'm curious as to how many minutes, per day, folks spend on this site.

For myself, I'd estimate no more than 10 or 15 minutes but it might be much less than that. I generally only read the posts from the RSS feed, and only bother to check the comments on one in 5. Even then... (read more)

The ideas you're not ready to post

I think there's a post somewhere in the following observation, but I'm at a loss as to what lesson to take away from it, or how to present it:

Wherever I work I rapidly gain a reputation for being both a joker and highly intelligent. It seems that I typically act in such a way that when I say something stupid, my co-workers classify it as a joke, and when I say something deep, they classify it as a sign of my intelligence. As best I can figure, it's because at one company I was strongly encouraged to think 'outside the box' and one good technique I found for... (read more)

Evangelical Rationality

I have to admit, I've never understood Hanson's Near-Far distinction either. As described it just doesn't seem to mesh at all with how I think about thinking. I keep hoping someone else will post their interpretation of it from a sufficiently different viewpoint that I can at least understand it well enough to know if I agree with it or not.

pjeby (+3, 12y): There are two types of thinking: sensory experience, and abstractions about sensory experience. Each type of thinking has strengths and weaknesses. Sensory thinking lets you leverage a high degree of unconscious knowledge and processing power, applied to detailed models. Abstract thinking can jump several steps at a time, but lacks precision.

A major distinction between the two systems is that our actions are actually driven almost exclusively by the sensory system, and only indirectly influenced by the abstract system. The abstract system, in contrast, exists primarily to fulfill social goals: it's the brain's "spin doctor", whose job is to come up with plausible-sounding explanations that make you seem like an attractive ally, mate, etc. Thus, each system has different biases: the sensory system is optimized for caring about what happens to you, right now, whereas the abstract system is optimized for thinking about how things "ought" to be for the whole group in the future... in ways that just "coincidentally" turn out to be for your own good. ;-)

The two systems can work together or against each other. In a typical dysfunctional scenario, the sensory system alerts you to a prediction of danger associated with a thought (e.g. of a task you're about to complete), and the abstract system then invents a plausible reason for not following up on that thought, perhaps followed by a plausible reason to do something else. Unfortunately, once people notice this, they have a tendency to respond by having their abstract system think, "I shouldn't do that" or "I should do X instead"... which then does nothing. Or they invent reasons for how they got that way, or why other people or circumstances are against them, or whatever.

What I teach people to do is observe what the sensory machinery is doing, and retrain it to do other things. As I like to put it, "action is not an abstraction". The only time that our abstract thoughts lead to behavior changes is when they cause
My Way

A friend of mine has offered to lend me the Kushiel series on a number of occasions. I'm starting to think I should take her up on that.

Of Gender and Rationality

Well, as an additional data point on how folks find less wrong, I found it through Overcoming Bias. I found that site via a link from some extropian or transhumanist blog, although I'm not sure which.

And I found the current set of my extropian and/or transhumanist blogs by actively looking for articles on cutting-edge science, which turn out to often be referenced by transhumanist blogs.

CronoDAS (+1, 12y): I, too, found Less Wrong from Overcoming Bias; I'm pretty sure I found Overcoming Bias from some comment on author David Brin's blog, but I don't remember when.
Counterfactual Mugging

If we assume I'm rational, then I'm not going to assume anything about Omega. I'll base my decisions on the given evidence. So far, that appears to be described as being no more and no less than what Omega cares to tell us.

fractalman (0, 8y): Fine, then interchange "assume Omega is honest" with, say, "I've played a billion rounds of one-box two-box with him". ...It should be close enough.
Welcome to Less Wrong!

I never knew I had an inbox. Thanks for telling us about that, but I wonder if we might not want to redesign the home page to make some things like that a bit more obvious.

ChrisHibbert (0, 12y): Yes, this was valuable. I've been using my user page and re-displaying each of the comments to find new comments. Now I've added my inbox to my bookmark list of places to check every morning (right after the cartoons.)
PhilGoetz (0, 12y): You can also give an email address. Hopefully, LW will forward private messages to your email. I haven't tested it yet.
arundelo (+1, 12y): Yeah, I've been looking at my user page not realizing that it didn't show replies to comments. Now I see I have four replies I didn't know about.
Of Gender and Rationality

This touches on something that I've been thinking about, but am not sure how to put into words. My wife is the most rational woman that I know, and it's one of the things that I love about her. She's been reading Overcoming Bias, but I've never been completely sure if it's due to the material, or because she's a fan of Eliezer. It's probably a combination of the two. In either case, she's shown no interest in this particular group, and I'm not sure why.

I also have a friend who is the smartest person and the best thinker that I've ever met. He's a practicing r... (read more)

"Stuck In The Middle With Bruce"

That, of course, is your opinion and you're welcome to it. But I thought that I was (perhaps too verbosely to be clear) pointing out that the original article was yet another post on Less Wrong that seemed to be saying:

"Do X. It's the rational thing to do. If you don't do X, you aren't rational."

I was trying to point out that there may be many rational reasons for not doing X.

"Stuck In The Middle With Bruce"

Ah, interesting. That was not considered important enough to get into the RSS feed, so I never saw it.

"Stuck In The Middle With Bruce"

I find it 'interesting' that we've both had our posts voted down to zero. Could it be that someone objects to pointing out that the game is a money sink and therefore one might have perfectly rational reasons to avoid it?

MrHen (0, 12y): In addition to what Z M Davis said, I voted both of your posts down because I felt they added nothing useful to the discussion. Thomblake's was just information responding to yours, so I left it alone. This comment isn't meant as arrogant or aggressive, just an explanation, since it seems you've asked for one. To directly answer your question: I do not object to the comment, but I think it is less valuable than other comments. Hope that helps.
[anonymous] (0, 12y): Karma now starts at zero. (Or were both these posts once at 1?)
Z_M_Davis (+2, 12y): Posts now start at zero [http://lesswrong.com/lw/9d/zerobased_karma_coming_through/], with self-voting no longer allowed.
"Stuck In The Middle With Bruce"

I have a Magic deck, but I don't often play. That's because Magic is not only an interesting game, it's been carefully designed to continually suck more money out of your pocket.

Ever since it was first introduced (I happen to own a first generation deck) the game has been slowly increasing the power levels of the cards so that older cards are less and less valuable and one needs to buy ever more newer cards just to stay competitive.

Add to this the fact they regularly bring out new types of cards that radically shift the power balances in the game and one f... (read more)

thomblake (+4, 12y): The game was actually designed without the 'collectable' element, which emerged naturally from the design process since everybody always wanted access to more/newer cards as they played. See any of the various histories regarding Richard Garfield's original concept and playtesting. Arguably, the focus on sucking money out of your pocket came about the time the cards began to develop aftermarket values, it became widely popular, and events like sanctioned tournaments and the 'pro tour' began ('94-'96).
Why Support the Underdog?

My first thought was to assume it was part of the whole alpha-male dominance thing. Any male that wants to achieve the status of alpha-male starts out in a position of being an underdog and facing an entrenched opposition with all of the advantages of resources.

But, of course, alpha-males outperform when it comes to breeding success and so most genes are descended from males that have confronted this situation, strove against "impossible" odds, and ultimately won.

Of course, if this is the explanation, then one would expect there to be a strong difference in how males and females react to the appearance of an underdog.

Nebu (0, 12y): Me too: Montreal, Canada.
Pierre-Andre (+1, 12y): Québec, Qc, Canada.
Building Communities vs. Being Rational

Well, that's just me. I've never been afraid of leaping feet-first into a paradox and seeing where that takes me. Which reminds me, maybe there's a post in that.

Building Communities vs. Being Rational

These are both good points. Frankly I wasn't trying to rock the boat with my post, I was trying to find out if there was a group of disgruntled rationalists who hadn't liked the community posts and had kept silent. Had that been the case, this post would (I'm assuming) have helped to draw them out.

As for what I WOULD like to see, that's a tricky problem in that I am interested in Rationality topics that I know little to nothing about. The trouble is, right now I don't know what it is that I don't know.

Issues, Bugs, and Requested Features

Comments vs. Upvoting.

I've been wondering if the number of comments that a post (or comment) gets should have an effect on its karma score. I say this because there are some 1-point comments that have many replies attached to them. Clearly folks thought the comment had some value, or they wouldn't have replied to it. Maybe we need to have each comment count as a vote, with the commenter having to explicitly choose +, -, or neutral in order to post?

Vladimir_Nesov (+1, 12y): Just a grab for attention? That would be annoying for the users, a bad interface design decision.
thomblake (+1, 12y): I agree. I think it's terrible whenever I see a comment that has sparked a large discussion but has a low (or even negative!) score. Either people are feeding the trolls, or folks are not upvoting a comment that clearly did its job. EDIT: I disagree about needing to click another button in order to comment - voting is separate from commenting.
3 Levels of Rationality Verification

I'm only now replying to this, since I've only just figured out what it was that I was groping for in the above.

The important thing is not compression, but integration of new knowledge so that it affects future cognition and future behaviour. The ability to change one's methodologies and approaches based on new knowledge would seem to be key to rationality. The more subtle the influence (e.g., a new bit of math changes how you approach buying meat at the supermarket), the better the evidence for deep integration of new knowledge.

Counterfactual Mugging

You are stating that. But as far as I can tell, Omega is telling me it's a capricious omnipotent being. If there is a distinction, I'm not seeing it. Let me break it down for you:

1) Capricious -> I am completely unable to predict its actions. Yes.
2) Omnipotent -> Can do the seemingly impossible. Yes.

So, what's the difference?

bogdanb (+6, 12y): It's not capricious in the sense you give: you are capable of predicting some of its actions, because it's assumed Omega is perfectly trustworthy; you can predict with certainty what it will do if it tells you what it will do. So, if it says it'll give you $10k on some condition (say, if you one-box its challenge), you can predict that it'll give you the money if that condition arises. If it were capricious in the sense of complete inability of being predicted, it might amputate three of your toes and give you a flower garland. Note that the problem supposes you do have certainty that Omega is trustworthy; I see no way of reaching that epistemological state, but then again I see no way Omega could be omnipotent, either.

On a somewhat unrelated note, why would Omega ask you for $100 if it had simulated you wouldn't give it the money? Also, why would it do the same if it had simulated you would give it the money? What possible use would an omnipotent agent have for $100?
3 Levels of Rationality Verification

When I look at my question there, the only answer that seems appropriate is 'Introspection' as that's at least a step towards an answer.

Counterfactual Mugging

And if Omega comes up to me and says "I was going to kill you if you gave me $100. But since I've worked out that you won't, I'll leave you alone." then I'll be damn glad I wouldn't agree.

This really does seem like pointless speculation.

Of course, I live in a world where there is no being like Omega that I know of. If I knew otherwise, and knew something of their properties, I might govern myself differently.

MBlume (+7, 12y): We're not talking Pascal's Wager here; you're not guessing at the behaviour of capricious omnipotent beings. Omega has told you his properties, and is assumed to be trustworthy.
Counterfactual Mugging

I think my answer would be "I would have agreed, had you asked me when the coin chances were .5 and .5. Now that they're 1 and 0, I have no reason to agree."

Seriously, why stick with an agreement you never made? Besides, if Omega can predict me this well, he knows how the coin will come up and how I'll react. Why, then, should I try to act otherwise? Somehow, I think I just don't get it.
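The "I would have agreed before the flip" arithmetic can be written out. (The $10,000 payoff on the other branch isn't stated above; it's the figure usually assumed in statements of this problem, so treat it as an assumption.)

```python
# Expected value of precommitting to pay, computed BEFORE the flip.
p_heads = 0.5
payoff_if_heads = 10_000  # assumed prize on the other branch (not stated above)
cost_if_tails = 100

ev_precommit = p_heads * payoff_if_heads - (1 - p_heads) * cost_if_tails
ev_refuse = 0.0
print(ev_precommit)  # 4950.0

# Once the flip is known to be tails, the naive recalculation flips sign --
# which is exactly the "chances are now 1 and 0" objection:
ev_pay_now = -cost_if_tails
print(ev_pay_now)  # -100
```

The whole dispute is over which of these two calculations a rational agent should be running after the coin has landed.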

[anonymous] (+1, 12y): It doesn't matter too much, but we can assume that Omega doesn't know how the coin will come up. That would be rather futile, wouldn't it? Of course, deciding to give Omega $100 now isn't trying to change how you would react; it is just choosing your reaction.
Closet survey #1

I don't have much of a vested interest in being or remaining human. I've often shocked friends and acquaintances by saying that if there were a large number of intelligent life forms in the universe and I had my choice, I doubt I'd choose to be human.

wedrifid (+8, 9y): I'm going to be an elven wizard.
Curiouskid (+4, 9y): Are there (many) people on here who don't agree with you?
3 Levels of Rationality Verification

This has been voted into the negatives, but I'm not sure it's such a bad idea. If we can set up a system where all of the students, teachers, and any other staff are in continuous rationality competitions with each other, then this would quickly force everyone to hone their skills.

For example, maybe the teacher of a class is chosen from within a class and has to fight (metaphorically) to maintain that position. Maybe the choice of whether you are teacher, student, principal, cafeteria cook, or janitor depends on the outcomes of numerous rationality contests between members.

And note that I don't necessarily mean that cafeteria cook or janitor would be positions that go to the losers...

3 Levels of Rationality Verification

Well, there's always the idea of using fMRI scans to determine if someone is thinking in 'rational' patterns. You stick them under the machine and give them a test. You ignore the results of the test, but score the student on what parts of their brains light up.

3 Levels of Rationality Verification

You'd have to define 'cheated on'. A fair number of the most rational folks I know live in non-traditional marriage arrangements.

[anonymous] (+1, 12y): Perhaps because they realise the real probability of cheating.
MBlume (+4, 12y): This is entirely true. We're going for emotional effect, so on that test, I'd keep it to the self-identified monogamists.
3 Levels of Rationality Verification

I agree. The only solutions to this that I can see is to either not let students know when they are being tested, or to have a system of continual testing.

The key is probably to test someone without letting them know you are testing them. If I ran a martial arts dojo and wanted to make sure my students were really super badass ninjas, I would give them a convincing-looking "test" that included things you would expect to see: strength, speed, form, technique, success in actual matches, etc.

This would have very little weighting in the actual grade, however. The real test would be some sort of surprise fight or fights where the student has no idea that the fight is actually one of the tests. Per... (read more)

3 Levels of Rationality Verification

A friend of mine, the most consistently rational person I know of, once told me that his major criteria for whether a piece of information is useful is if it can allow him to forget multiple other pieces of information, because they are now derivable from his corpus of information, given this new fact.

I have a vague feeling that there should be a useful test of rationality based on this. Some sort of information modeling test whereby one is given a complex set of interrelated but random data, and a randomly-generated data-expression language. Scoring is ba... (read more)
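One crude way to score this "forgetting" criterion is a compression test: a fact is useful to the extent that adding it shrinks the total description length of what you know. A rough sketch under that assumption, using zlib as a stand-in for a real data-expression language (everything here is hypothetical illustration, not the test I'm imagining above):

```python
import zlib

def compressed_size(text: str) -> int:
    """Bytes needed to store `text` under zlib at maximum compression."""
    return len(zlib.compress(text.encode("utf-8"), 9))

# A made-up corpus of individually-memorized facts...
corpus = "\n".join(
    f"item {i} weighs {100 + (i * 7) % 13} grams" for i in range(200)
)
# ...and a new fact that makes the individual facts derivable.
new_fact = "weight of item i = 100 + (7*i mod 13)"

together = compressed_size(corpus + "\n" + new_fact)
apart = compressed_size(corpus) + compressed_size(new_fact)
print(together, "<", apart)  # the fact is nearly free given the corpus
```

This only measures shared redundancy, not genuine derivability, so it's a weak proxy for the criterion; a real test would check how much of the corpus could be reconstructed from the new fact alone.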

Roko (+2, 12y): compression != rationality, methinks
3 Levels of Rationality Verification

Well, you asked for DUMB ideas, so here's mine. It has the advantage that I'm sure no one else will suggest it. This is based on an accidental discovery (so far as I know, unpublished) that one can compare two arbitrary documents for similarity (even if they are in different word-processor formats) by running them both through a recognizer built out of a random state machine and comparing bit masks of all the states traversed. The more similar they are, the more states will be traversed in both.

So, let's assume we have a panel of highly rational individuals ... (read more)
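The recognizer idea can be sketched roughly. This reconstruction hashes short byte windows into "states" instead of using a literal random transition table (the original design is unpublished, so every detail here is a guess at the mechanism, not the discovery itself):

```python
N_STATES = 1 << 16

def visited_states(data: bytes, k: int = 4) -> set:
    """Hash each k-byte window of the document to one of N_STATES 'states'
    and collect the set of states traversed. A bounded-memory stand-in
    for the random state machine described above."""
    return {hash(data[i:i + k]) % N_STATES for i in range(len(data) - k + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard overlap of the two documents' visited-state sets --
    the 'comparing bit masks' step."""
    sa, sb = visited_states(a), visited_states(b)
    return len(sa & sb) / len(sa | sb)

doc1 = b"the quick brown fox jumps over the lazy dog"
doc2 = b"the quick brown fox leaps over the lazy dog"
doc3 = b"colorless green ideas sleep furiously today"

print(similarity(doc1, doc2))  # large: most 4-byte windows are shared
print(similarity(doc1, doc3))  # near zero: little shared beyond chance
```

The rationality test would then compare a student's written answers against the panel's answers the same way, scoring by state-mask overlap.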

[anonymous] (0, 12y): That scares me! It sounds altogether too much like the famous beauty pageant, with a bit of "guess the teacher's answer" and randomly generated poetry thrown in for good measure. I know I'd be far happier if it was shown to be a really stupid idea. I have a hunch, however, that a correlation of the kind you hypothesize would exist.

The part that scares me is that there could well be more than one style of thinking of equal merit, with one being far more common than the other. Naturally the suspicion that I'd end up in the minority and downgraded for it is troublesome. There is more than enough of that sort of bias in schools already!

Upvoted for being the right kind of idea. Incidentally, my answer to the example question is a) 7.5. The other three make absolutely no sense, while I acknowledge that there is a possibility (though it is improbable) that the way the brain functions could make a quantisation of said greenness at least have some meaning.
thomblake (+1, 12y): I've actually proposed something like this to test for personality type. The main reason it never got implemented is there isn't really a good, workable theory of persistent personality.
MichaelVassar (+5, 12y): I think that this resembles the MMPI methodology. http://en.wikipedia.org/wiki/Minnesota_Multiphasic_Personality_Inventory

NOT CRAZY ENOUGH! We need EVEN STUPIDER ideas!

(Voted up for being the best try so far, though.)

The Costs of Rationality

You decided to try achieving that "non-rational" goal, so it must be to your benefit (at least, you must believe so).

Yes, exactly. The fact that you think it's to your benefit, but it isn't, is the very essence of what I mean by a non-rational goal.

Yosarian2 (+1, 8y): That might actually be the main cost of rationality. You may have goals that will hurt you if you actually achieve them, and by not being rational, you manage to not achieve those goals, making your life better. Perhaps, in fact, people avoid rationality because they don't really want to achieve those goals; they just think they want to. There's an Amanda Palmer song where the last line is "I don't want to be the person that I want to be."

Of course, if you become rational enough, you may be able to untangle those confused goals and conflicting desires. There's a dangerous middle ground, though, where you may get just better at hurting yourself.
LessWrong anti-kibitzer (hides comment authors and vote counts)

Actually, I find I have the exact opposite problem. I almost never vote. Partly that's because I read Less Wrong through an RSS feed that doesn't even show the vote totals. I only ever vote if, like now, I've gone to the actual site in order to comment.

Even then, I find that I am comparing the quality of Less Wrong posts and comments against the entire corpus of what I read on a daily basis, some of which is great, and some of which is dreck.

So, I tend to only vote when the quality of what is written is extremely good -- enough so that I want to 'reward' it -- or extremely bad, so that I want to punish. The vast majority is in the middle and so I don't bother to vote.

Posting now enabled

I am replying to my own post here, because I've been fascinated by how the score on this post keeps changing. It was at +1 immediately after I posted it, then dropped to -2 within seconds. The next time I checked it was at +1, and I voted it down to -1. Now it's back up at +1. There may well have been intermediate ups and downs I missed. Too bad I can't see a history of the voting.

[anonymous] (+1, 12y): Fascinating. I wonder. Do people who see a '-1' on a good post feel more inclined to upvote... if you have the patience...
The Costs of Rationality

I don't really have any ideas other than the "negative net sum" worth I mentioned above, but then that just raises the question of what metric one is using to measure worth.
