Bad Concepts Repository

by moridinamael · 1 min read · 27th Jun 2013 · 204 comments


Personal Blog

We recently established a successful Useful Concepts Repository.  It got me thinking about all the useless or actively harmful concepts I had carried around, in some cases for most of my life, before seeing them for what they were.  Then it occurred to me that I probably still have some poisonous concepts lurking in my mind, and I thought creating this thread might be one way to discover what they are.

I'll start us off with one simple example:  The Bohr model of the atom as it is taught in school is a dangerous thing to keep in your head for too long.  I graduated from high school believing that it was basically a correct physical representation of atoms.  (And I went to a *good* high school.)  Some may say that the Bohr model serves a useful role as a lie-to-children to bridge understanding to the true physics, but if so, why do so many adults still think atoms look like concentric circular orbits of electrons around a nucleus?  

There's one hallmark of truly bad concepts: they actively work against correct induction.  Thinking in terms of the Bohr model actively prevents you from understanding molecular bonding and, really, everything about how an atom can serve as a functional piece of a real thing like a protein or a diamond.

Bad concepts don't have to be scientific.  Religion is held to be a pretty harmful concept around here.  There are certain political theories which might qualify, except I expect that one man's harmful political concept is another man's core value system, so as usual we should probably stay away from politics.  But I welcome input as fuzzy as common folk advice you received that turned out to be really costly.


The concept of "deserve" can be harmful. We like to think about whether we "deserve" what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism. I prefer to invert "deserve" into the future: deserve your luck by exploiting it.

Of course, "deserve" can be a useful social mechanism to increase desired actions. But only within that context.

Also "need". There's always another option, and pretending sufficiently bad options don't exist can interfere with expected value estimations.

And "should" in the moralizing sense. Don't let yourself say "I should do X". Either do it or don't. Yeah, you're conflicted. If you don't know how to resolve it on the spot, at least be honest and say "I don't know whether I want X or not X". As applied to others, don't say "he should do X!". Apparently he's not doing X, and if you're specific about why it is less frustrating and effective solutions are more visible. "He does X because it's clearly in his best interests, even despite my shaming. Oh..." - or again, if you can't figure it out, be honest about it "I have no idea why he does X"

4[anonymous]7yThat would work nicely if I were so devoid of dynamic inconsistency that “I don't feel like getting out of bed” would reliably entail “I won't regret it if I stay in bed”; but as it stands, I sometimes have to tell myself “I should get out of bed” in order to do stuff I don't feel like doing but I know I would regret not doing.
2jimmy7yThis John Holt quote [http://lesswrong.com/lw/11r/rationality_quotes_july_2009/w8j] is about exactly this.
3Larks7yThis is a fact about you, not about "should". If "should" is part of the world, you shouldn't remove it from your map just because you find other people frustrating. One common, often effective strategy is to tell people they should do the thing. The correct response to meeting a child murderer is "No, Stop! You should not do that!", not "Please explain why you are killing that child." (also physical force)
6jimmy7yIt's not about having conveniently blank maps. It's about having more precise maps. I realize that you won't be able to see this as obviously true, but I want you to at least understand what my claim is: after fleshing out the map with specific details, your emotional approach to the problem changes and you become aware of new possible actions without removing any old actions from your list of options - and without changing your preferences. Additionally, the majority of the time this happens, "shoulding" is no longer the best choice available. Sometimes, sure. I still use the word like that sometimes, but I try to stay aware that it's short hand for "you'd get more of what you want if you do"/"I and others will shame you if you don't". It's just that so often that's not enough. And this is a good example. "Correct" responses oughtta get good results; what result do you anticipate? Surely not "Oh, sorry. didn't realize... I'll stop now". It sure feels appropriate to 'should' here, but that's a quirk of your psychology that focuses you on one action to the exclusion of others. Personally, I wouldn't "should" a murderer any more than I'd "should" a paperclip maximizer. I'd use force, threats of force and maybe even calculated persuasion. Funny enough, were I to attempt to therapy a child murderer (and bold claim here - I think I could do it), I'd start with "so why do ya kill kids?"
2TheOtherDave7yMostly, the result I anticipate from "should"ing a norm-violator is that other members of my tribe in the vicinity will be marginally more likely to back me up and enforce the tribal norms I've invoked by "should"ing. That is, it's a political act that exerts social pressure. (Among the tribal members who might be affected by this is the norm-violator themselves.) Alternative formulas like "you'll get more of what you want if you don't do that!" or "I prefer you not do that!" or "I and others will shame you if you do that!" don't seem to work as well for this purpose. But of course you're correct that some norm-violators don't respond to that at all, and that some norm-violations (e.g. murder) are sufficiently problematic that we prefer the violator be physically prevented from continuing the violation.
-2DSherron7y"Should" is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences. It's a distinctly human invention, and it's meaning shifts as the user desires. Moral obligations are great for social interactions, but they don't reflect anything deeper than an extension of tribal politics. Saying "you should x" (in the moral sense of the word) is just equivalent to saying "I would prefer you to x", but with bonus social pressure. Just because it is sometimes effective to try and impose a moral obligation does not mean that it is always, or even usually, the case that doing so is the most effective method available. Thinking about the actual cause of the behavior, and responding to that, will be far, far more effective. Next time you meet a child murderer, you just go and keep on telling him he shouldn't do that. I, on the other hand, will actually do things that might prevent him from killing children. This includes physical restraint, murder, and, perhaps most importantly, asking why he kills children. If he responds "I have to sacrifice them to the magical alien unicorns or they'll kill my family" then I can explain to him that the magical alien unicorns dont't exist and solve the problem. Or I can threaten his family myself, which might for many reasons be more reliable than physical solutions. If he has empathy I can talk about how the parents must feel, or the kids themselves. If he has self-preservation instincts then I can point out the risks for getting caught. In the end, maybe he just values dead children in the same way I value children continuing to live, and my only choice is to fight him. But probably that's not the case, and if I don't ask/observe to figure out what his motivations are I'll never know how to stop him when physical force is no option.
2ArisKatsaris7yI really think this is a bad summarization of how moral injunctions act. People often feel a conflict for example between "I should X" and "I would prefer to not-X". If a parent has to choose between saving their own child, and a thousand other children, they may very well prefer to save their own child, but recognize that morality dictated they should have saved the thousand other children. My own guess about the connection between morality and preferences is that morality is an unconscious estimation of our preferences about a situation, while trying to remove the bias of our personal stakes in it. (E.g. the parent recognizes that if their own child wasn't involved, if they were just hearing about the situation without personal stakes in it, they would prefer that a thousand children be saved rather than only one.) If my guess is correct it would also explain why there's disagreement about whether morality is objective or subjective (morality is a personal preference, but it's also an attempt to remove personal biases - it's by itself an attempt to move from subjective preferences to objective preferences).
0[anonymous]7yThat's a good theory.
-3DSherron7yThis is because people are bad at making decisions, and have not gotten rid of the harmful concept of "should". The original comment on this topic was claiming that "should" is a bad concept; instead of thinking "I should x" or "I shouldn't do x", on top of considering "I want to/don't want to x", just look at want/do not want. "I should x" doesn't help you resolve "do I want to x", and the second question is the only one that counts. I think that your idea about morality is simply expressing a part of a framework of many moral systems. That is not a complete view of what morality means to people; it's simply a part of many instantiations of morality. I agree that such thinking is the cause of many moral conflicts of the nature "I should x but I want to y", stemming from the idea (perhaps subconscious) that they would tell someone else to x, instead of y, and people prefer not to defect in those situations. Selfishness is seen as a vice, perhaps for evolutionary reasons (see all the data on viable cooperation in the prisoner's dilemma, etc.) and so people feel the pressure to not cheat the system, even though they want to. This is not behavior that a rational agent should generally want! If you are able to get rid of your concept of "should", you will be free from that type of trap unless it is in your best interests to remain there. Our moral intuitions do not exist for good reasons. "Fairness" and its ilk are all primarily political tools; moral outrage is a particularly potent tool when directed at your opponent. Just because we have an intuition does not make that intuition meaningful. Go for a week while forcing yourself to taboo "morality", "should", and everything like that. When you make a decision, make a concerted effort to ignore the part of your brain saying "you should c because it's right", and only listen to your preferences (note: you can have preferences that favor other people!).
You should find that your decisions become easier and that you pre
1asr7yThese aren't the only two possibilities. Lots of important aspects of the world are socially constructed. There's no objective truth about the owner of a given plot of land, but it's not purely subjective either -- and if you don't believe me, try explaining it to the judge if you are arrested for trespassing. Social norms about morality are constructed socially, and are not simply the preferences or feelings of any particular individual. It's perfectly coherent for somebody to say "society believes X is immoral but I don't personally think it's wrong". I think it's even coherent for somebody to say "X is immoral but I intend to do it anyway."
-1DSherron7yYou're sneaking in connotations. "Morality" has a much stronger connotation than "things that other people think are bad for me to do." You can't simply define the word to mean something convenient, because the connotations won't go away. Morality is definitely not understood generally to be a social construct. Is that social construct the actual thing many people are in reality imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say that morality is just doing the right thing, and if society said something different then morality wouldn't change. Also, the land ownership analogy has no merit. Ownership exists as an explicit social construct, and I can point you to all sorts of evidence in the territory that shows who owns what. Social constructs about morality exist, but morality is not understood to be defined by those constructs. If I say "x is immoral" then I haven't actually told you anything about x. In normal usage I've told you that I think people in general shouldn't do x, but you don't know why I think that unless you know my value system; you shouldn't draw any conclusions about whether you think people should or shouldn't x, other than due to the threat of my retaliation. "Morality" in general is ill-defined, and often intuitions about it are incoherent. We make much, much better decisions by throwing away the entire concept. Saying "x is morally wrong" or "x is morally right" doesn't have any additional effect on our actions, once we've run the best preference algorithms we have over them. Every single bit of information contained in "morally right/wrong" is also contained in our other decision algorithms, often in a more accurate form. It's not even a useful shorthand; getting a concrete right/wrong value, or even a value along the scale, is not a well-defined operation, and thus the output does not have a consistent effect on our actions.
1asr7yMy original point was just that "subjective versus objective" is a false dichotomy in this context. I don't want to have a big long discussion about meta-ethics, but, descriptively, many people do talk in a conventionalist way about morality or components of morality and thinking of it as a social construction is handy in navigating the world. Turning now to the substance of whether moral or judgement words ("should", "ought", "honest", etc) are bad concepts -- At work, we routinely have conversations about "is it ethical/honest to do X", or "what's the most ethical way to deal with circumstance Y". And we do not mean "what is our private preference about outcomes or rules" -- we mean something imprecise but more like "what would our peers think of us if they knew" or "what do we think our peers ought to think of us if they knew". We aren't being very precise how much is objective, subjective, and socially constructed, but I don't see that we would gain from trying to speak with more precision than our thoughts actually have. Yes, these terms are fuzzy and self-referential. Natural language often is. Yes, using 'ethical' instead of other terms smuggles in a lot of connotation. That's the point! Vagueness with some emotional shading and implication is very useful linguistically and I think cognitively. The original topic was "harmful" concepts, I believe, and I don't think all vagueness is harmful. Often the imprecision is irrelevant to the actual communication or reasoning taking place.
-1DSherron7yThe accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn't protect you from being wrong; you can talk all day about "is it ethical to steal this cookie" but you are wasting your time. Either you're actually referring to specific concepts that have names (will other people perceive of this as ethically justified?) or you're babbling nonsense. Just use basic consequentialist reasoning and skip the whole ethics part. You gain literally nothing from discussing "is this moral", unless what you're really asking is "What are the social consequences" or "will person x think this is immoral" or whatever. It's a dangerous habit epistemically and serves no instrumental purpose.
0buybuydandavis7ySubjectivity is part of the territory.
-1DSherron7yThings encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. "Should" is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of "should" at least. Telling anyone "you shouldn't do that" when what you really mean is "I want you to stop doing that" isn't productive. If they want to do it then they don't care what they "should" or "shouldn't" do unless you can explain to them why they in fact do or don't want to do that thing. In the sense that "should do x" means "on reflection would prefer to do x" it is useful. The farther you move from that, the less useful it becomes.
3buybuydandavis7yBut that's not what they mean, or at least not all that they mean. Look, I'm a fan of Stirner and a moral subjectivist, so you don't have to explain the nonsense people have in their heads with regard to morality to me. I'm on board with Stirner, in considering the world populated with fools in a madhouse, who only seem to go about free because their asylum takes in so wide a space. But there are different kinds of preferences, and moral preferences have different implications than our preferences for shoes and ice cream. It's handy to have a label to separate those out, and "moral" is the accurate one, regardless of the other nonsense people have in their heads about morality.
-2DSherron7yI think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about "moral" situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a "moral" preference (excluding whatever membership defines the category)? Is there actually a cluster in thing space around moral preferences, which is distinctly separate from the "preferences" cluster? Do moral preferences really have different implications than preferences about shoes and ice cream? The only thing I can imagine is that when you phrase an argument to humans in terms of morality, you get different responses than to preferences ("I want Greta's house" vs "Greta is morally obligated to give me her house"). But I can imagine no other way in which the difference could manifest. I mean, a preference is a preference is a term in a utility function. Mathematically they'd better all work the same way or we're gonna be in a heap of trouble.
1buybuydandavis7yI don't think moral feelings are entirely derivative of conceptual thought. Like other mammals, we have pattern matching algorithms. Conceptual confusion isn't what makes my ice cream preferences different from my moral preferences. Is there a behavioral cluster about "moral"? Sure. How many people are hated for what ice cream they eat? For their preference in ice cream, even when they don't eat it? For their tolerance of a preference in ice cream in others? Not many that I see. So yeah, it's really different. And matter is matter, whether alive or dead, whether your shoe or your mom.
1buybuydandavis7yI can't remember where I heard the anecdote, but I remember some small boy discovering the power of "need" with "I need a cookie!".
0Fhyve7yI think any correct use of "need" is either implicitly or explicitly a phrase of the form "I need X (in order to do Y)".
4PhilGoetz7y"Deserve" is harmful because we would often rather destroy utility than allow an undeserved outcome distribution. For instance, most people would probably rather punish a criminal than reform him. I nominate "justice" as the more basic bad concept. It's a good concept for sloppy thinkers who are incapable of keeping in mind all the harm done later by injustices now, a shortcut that lets them choose actions that probably increase utility in the long run. But it is a bad concept for people who can think more rigorously. A lot of these "bad concepts" will probably be things that are useful given limited rationality. “Are the gods not just?" "Oh no, child. What would become us us if they were?” ― C.S. Lewis, Till We Have Faces
3Viliam_Bur7yI'd say "justice" is a heuristics; better than nothing, but not the best possible option. This could be connected with their beliefs about probability of successfully reforming the criminal. I guess the probability strongly depends on the type of crime and type of treatment, and even is not the same for all classes of criminals (e.g. sociopaths vs. people in relative rare situation that overwhelmed them). They may fear that with a good lawyer, "reform, don't punish" is simply a "get out of jail free" card. To improve this situation, it would help to make the statistics of reform successes widely known. But I would expect that in some situations, they are just not available. This is partially an availability heuristics on my part, and partially my model saying that many good intentions fail in real life. Also, what about unique crimes? For example, an old person murders their only child, and they do not want to have any other child, ever. Most likely, they will never do the same crime again. How specifically would you reform them? How would you measure the success of reforming them? If we are reasonably sure they never do the same thing again, even without a treatment, then... should we just shrug and let them go? The important part of the punishment is the precommitment to punish. If a crime already happened, causing e.g. pain to the criminal does not undo the past. But if the crime is yet in the future, precommiting to cause pain to the criminal influences the criminal's outcome matrix. Will precommitment to reforming have similar effects? ("Don't shoot him, or... I will explain you why shooting people is wrong, and then you will feel bad about it!")
0buybuydandavis7yActually, I think that's some of what they are keeping in mind and find motivating.
0PhilGoetz7yIf they were able to keep it in mind separately, they could include that in their calculations, instead of using justice as a kind of sufficient statistic to summarize it.
-1Eugine_Nier7yWould you also two-box on Newcomb’s problem?
1PhilGoetz7yYou can still use precommitment, but tie it to consequences rather than to Justice. Take Edward Snowden. Say that the socially-optimal outcome is to learn about the most alarming covert government programs, but not about all covert programs. So you want some Edward Snowdens to reveal some operations, but you don't want that to happen very often. The optimal behavior may be to precommit to injustice, punishing government employees who reveal secrets regardless of whether their actions were justified.
0Eugine_Nier7yInternational espionage is probably one of the worst examples to attempt to generalize concepts like justice from. It's probably better to start with simpler (and more common) examples like theft or murder and then use the concepts developed on the simpler examples to look at the more complicated one.
3Kaj_Sotala7yUpvoted, but I would note that it's interesting to see a moral value listed in a (supposedly value-neutral) "bad concepts repository". The idea that "deserve" in the sense in which you mention is a harmful and meaningless concept is a rather consequentialist notion, and seeing this so highly upvoted says something about the ethics that this community has adopted - and if I'm right in assuming that a lot of the upvoters probably thought this a purely factual confusion with no real ethical element, then it says a bit about the moral axioms [http://lesswrong.com/lw/hox/effective_altruism_through_advertising/96cq] that we tend to take for granted. Again, not saying this as a criticism, just as something that I found interesting. E.g. part of my morality used to say that I only deserved some pleasures if I had acted in the right ways or was good enough: and this had nothing to do with a consequentialist it-is-a-way-of-motivating-myself-to-act-right logic, it was simply an intrinsic value that I would to some extent have considered morally right to have even if possessing it was actively harmful. Somebody coming along and telling me that "in reality, your value is not grounded in any concrete mechanism" would have had me going "well, in that case your value of murder being bad is not grounded in any concrete mechanism either". (A comment saying that "the concept of murder can be harmful, since in reality there is no mechanism for determining what's murder" probably wouldn't have been upvoted.)
2Larks7ySo you're saying we like thinking about a moral property, but we're wrong to do so, because this property is not reliably instantiated? Desert theorists do not need to disagree - there's no law of physics that means people necessarily get what they deserve. Rather, we are supposed to be the mechanism - we must regulate our own affairs so as to ensure that people get what they deserve.
1Leonhart7yPerhaps the bad concept here is actually "karma", which I understand roughly to be the claim that there is a law of physics that means people necessarily get what they deserve.
2fubarobfusco7yI think around here we can call that the just-world fallacy [http://en.wikipedia.org/wiki/Just-world_hypothesis].
1Randy_M7yTo me, "deserve" flows from experiencing the predictable consequences of one's actions. If the cultural norm for my area is to wait in line at the bank, checkout, restaurant, etc., and I do so, I deserve to be served when I reach the front of it (barring any prior actions towards the owners like theft, or personal connections). Someone who comes in later does not deserve to be served until others in the queue have been. Or, in a less relative example, if I see dark clouds and go out dressed for warm weather when I have rain clothes at hand, I deserve to feel uncomfortable. I do not deserve to be assaulted by random strangers, when I have not personally performed any actions that would initiate conflict that violence would resolve or done anything which tends to anger other people. Of course, the certainty of getting what one deserves is not 1, and one must expect that the unexpected will happen in some context eventually.
1Kawoomba7yOn the flipside, egalitarian instincts (e.g. "justice and liberty for all", "all men are created equal") are often deemed desirable, even though many a time "deserve" stems from such concepts of what a society should supposedly be like, "what kind of society I want to live in". There is a tension between decrying "deserve" as harmful, while e.g. espousing the (in many cases) egalitarian instincts it stems from ("I should have as many tech toys as my neighbor", "I'm trying to keep up with the Joneses", etc.).
0pinyaka7yI think this is a different flavor of deserving. Stabilizer is using deserve to explain how people got into the current situation while you're using it to describe a desirable future situation. The danger is assuming that because we are capable of acting in a way that gives people what they deserve, someone must already have done so in every situation - that everyone must have earned their present circumstances through moral actions.
-1Eugine_Nier7yThe concept of "deserve" is only harmful to the extent people apply it to things they don't in fact deserve. In this respect, it's no different from the concept of "truth".
0ThrustVectoring7yIt's part of a larger pattern of mistaking your interpretations of reality for reality itself. There are no ephemeral labels floating around that are objectively true - you can't talk too much, work too hard, or be pathetic. You can only say things that other people would prefer not to hear, do work to the exclusion of other objectives, or be pitied by someone.
0wedrifid7yIf excessive work causes an overuse injury or illness then "worked too hard" would seem to be a legitimate way to describe reality. (Agree with the other two.)
0[anonymous]7yI agree with that. I also suspect many people treat deserving of rewards and deserving of punishments as separate concepts. As a result they might reject one while staying attached to the other and become even more confused.

The word "is" in all its forms. It encourages category thinking in lieu of focussing on the actual behavior or properties that make it meaningful to apply. Example: "is a clone really you?" Trying to even say that without using "is" poses a challenge. I believe it should be treated the same as goto: occasionally useful but usually a warning sign.

10[anonymous]7y

So some, like Lycophron, were led to omit 'is', others to change the mode of expression and say 'the man has been whitened' instead of 'is white', and 'walks' instead of 'is walking', for fear that if they added the word 'is' they should be making the one to be many. -Aristotle, Physics 1.2

ETA: I don't mean this as either criticism or support, I just thought it might be interesting to point out that the frustration with 'is' has a long history.

6Viliam_Bur7yE-Prime [http://en.wikipedia.org/wiki/E-Prime]. We could support speaking this way on LW by making a "spellchecker" that would underline all the forbidden words.
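A naive version of such a checker is easy to sketch. The following is purely illustrative (the function name `find_be_forms` and the hand-picked word list are my own inventions, not an existing tool): it flags explicit forms of "to be" but makes no attempt at part-of-speech analysis or contractions like "it's" and "they're", so a real E-Prime checker would need to be considerably smarter.

```python
import re

# Explicit forms of "to be" to flag. Deliberately incomplete:
# contractions such as "it's" or "they're" are not handled.
BE_FORMS = {"be", "is", "am", "are", "was", "were", "been", "being",
            "isn't", "aren't", "wasn't", "weren't"}

def find_be_forms(text):
    """Return (word, offset) pairs for each flagged form of 'to be'."""
    hits = []
    for match in re.finditer(r"[A-Za-z']+", text):
        if match.group().lower() in BE_FORMS:
            hits.append((match.group(), match.start()))
    return hits

print(find_be_forms("Is a clone really you? The clone walks and talks."))
# → [('Is', 0)]
```

A "spellchecker" plugin would then underline the reported offsets rather than print them.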
5J_Taylor7yIn that sentence, I find the words "clone", "really" and "you" to be as problematic as "is".
7[anonymous]7yYou're perfectly comfortable with the indefinite article?
3J_Taylor7yNo, but I am much more comfortable with it than I am with the other words.
2[anonymous]7yNot having a word for “is” didn't stop the Chinese from coming up with the “ white horse not horse [http://en.wikipedia.org/wiki/When_a_white_horse_is_not_a_horse]” thing, though.

Bad Concept: Obviousness

Consider this - what distinguishes obviousness from a first impression? Like some kind of meta semantic stop sign, "it's obvious!" can be used as an excuse to stop thinking about a question. It can be shouted out as an argument with an implication to the effect of "If you don't agree with me instantly, you're an idiot," which can sometimes convince people that an idea is correct without the person actually supporting their points. I sometimes wonder if obviousness is just an insidious rationalization that we cling to when what we really want is to avoid thinking or gain instant agreement.

I wonder how much damage obviousness has done?

I've found the statement "that does not seem obvious to me" to be quite useful in getting people to explain themselves without making them feel challenged. It's among my list of "magic phrases" which I'm considering compiling and posting at some point.

5John_Maxwell7yLooking forward to this.
1Elo6yMagic phrases please?
1sixes_and_sevens6yThis seems like a good premise for a post inviting people to contribute their own "magic phrases". Sadly, I've used up my Discussion Post powers by making an idle low-quality post about weird alliances last week. I now need to rest in my crypt for a week or so until people forget about it.
0gjm6yOK, I'm confused. (Probably because I'm missing a joke.) Reading the above in isolation I'd take it as indicating that you posted something that got you a big ball o' negative karma, which brought you below some threshold that meant you couldn't post to Discussion any more. Except that your "weird alliances" post is at +7, and your total karma is over 4k, and your last-30-days karma is over 200, and none of your posts or comments in the last week or so is net negative, and those are all very respectable numbers and surely don't disqualify anyone from doing anything. So, as I say, I seem to be missing a joke. Oh well.
2sixes_and_sevens6yMaking non-trivial posts carries psychological costs that I feel quite acutely. I would love to be able to plough through this (c.f. Comfort Zone Expansion) by making a lot of non-trivial posts. Unfortunately, making non-trivial posts also carries time costs that I feel quite acutely. I have quite fastidious editorial standards that make writing anything quite time-consuming (you would be alarmed at how much time I've spent writing this response), and this is compounded by engaging in long, sticky discussions. The Weird Alliances post was an attempt to write something quickly to lower standards, and as a result it was of lower quality than I would have liked. This made the psychological cost greater. I've yet to figure out how to unknot this perverse trade-off between psychological and time costs, but it means I would prefer to space out making posts.
2gjm6yAh, OK, understood. Best of luck with the unknotting. (I'd offer advice, but I have much the same problem myself.)
3Kaj_Sotala7yRelated: On Saying the Obvious [http://lesswrong.com/lw/9q5/on_saying_the_obvious/]
0Epiphany7yGood link. I like that Grognor mentions that obviousness is just a matter of perception and people's ideas about what's obvious will vary, so we shouldn't assume other people know "obvious" things. However, I think that it's really important for us to be aware that if you think something is obvious, you stop questioning, and you're then left with what is essentially a first impression - but I don't see Grognor mention that semantic stop sign like effect in the post, nor do I see anything about people using obviousness as a way to falsely support points. Do you think Grognor would be interested in updating the article to include additional negative effects of obviousness? Then again putting too many points into an article makes articles confusing and less fun to read. Maybe I should write one. Do you know if anyone has written an article yet on obviousness as a meta semantic stop sign, or obviousness as a false supportive argument? If not, I'll do it.
1gwern7yNo; he's quit LW.
0Kaj_Sotala7yNot that I could recall.
0Epiphany7yOk, I'll post about this in the open thread to gauge interest / see if anyone else knows of a pre-existing LW post on these specific obviousness problems.
2bokov7yThe worst professors I have had disproportionately shared the habit of dismissing as obvious concepts that weren't. Way to distract students from the next thing you were going to say.
2wedrifid7ySee also: Expecting Short Inferential Distances [http://lesswrong.com/lw/kg/expecting_short_inferential_distances/]
3Viliam_Bur7yAlso related: Illusion of Transparency: Why No One Understands You [http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_one_understands/] Explainers Shoot High. Aim Low! [http://lesswrong.com/lw/kh/explainers_shoot_high_aim_low/] Double Illusion of Transparency [http://lesswrong.com/lw/ki/double_illusion_of_transparency/]
0Epiphany7yThat's not quite what I meant, but that's a good article. What I meant is more along the lines of... two people are trying to figure out the same thing together, one jumps to a conclusion and the other one does not. It's that distance between the first observation and the truth I am referring to, not the distance between one person's perspective and another's. *Reads that article again.* I think this is my third time.
1Eugine_Nier7yWell, in mathematics papers it tends to mean, "I'm certain this is true, but I can't think of an argument at the moment".
0Epiphany7yHahahah! Oh, that's terrible. Now I just realized that my meaning was not entirely explicit. I edited my statement to add the part about not supporting points.
0Armok_GoB7yThat seems like just a wrong use of obvious. When I say "obvious" I usually mean I cannot explain something because my understanding is subconscious and opaque to introspection.
1Epiphany7yI'm glad you seem to be aware of this problem. Unfortunately, I don't think the rest of the world is aware of this. The dictionary [http://dictionary.reference.com/browse/obvious?s=t] currently defines obvious as meaning "easily seen" and "evident", unfortunately.

There is a cultural heuristic (especially in Eastern cultures) that we should respect older people by default. Now, this is not a useless heuristic, as the fact that older people have had more life experiences is definitely worth taking into account. But at least in my case (and I suspect in many other cases), the respect accorded was disproportionate to their actual expertise in many domains.

The heuristic can be very useful when respecting the older person is not really a matter of whether he/she is right or wrong, but more about appeasing power. It can be very useful to distinguish between the two situations.

8Viliam_Bur7yHow old is the "older" person? 30? 60? 90? In the last case, respecting a 90-year-old person is usually not about appeasing power. It seems more like retirement insurance: a social contract that while you are young, you have to respect old people, so that while you are old, you will get respect from young people. Depends on what specifically "respecting old people" means in a given culture. If you have to obey them in their irrational decisions, that's harmful. But if it just means speaking politely to them and providing them a hundred trivial advantages, I would say it is good in most situations. Specifically, I am from Eastern Europe, where there is a cultural norm of letting old people sit in mass transit. As in: you see an old person near you, there are no free places to sit, so you automatically stand up and offer the old person your seat. The same for pregnant women. (There are some seats with a sign that requires you to do this, but the cultural norm is that you do it everywhere.) -- I consider this norm good, because for some people the difference in utility between standing and sitting is greater than for average people. (And of course, if you have a broken leg or something, that's an obvious exception.) So it was rather shocking for me to hear about cultures where this norm does not exist. Unfortunately, even in my country the norm (and politeness in general) has been decreasing in recent decades.
5wedrifid7yMore relevant to the social reasons for the heuristic, they have also had more time to accrue power and allies. For most people that is what respect is about (awareness of their power to influence your outcomes conditional on how much deference you give them). Oh, yes, those were the two points I prepared in response to your first paragraph. You nailed both, exactly! Signalling social deference and actually considering an opinion to be strong Bayesian evidence need not be the same thing.
2PhilGoetz7yBut I think that in America today, we don't respect older people enough. Heck, we don't often even acknowledge their existence. Count what fraction of the people you pass on the street today are "old". Then count what fraction of people you see on TV or in the movies are old.
5buybuydandavis7yI think that our age cohorted Lord of the Flies educational system has much to do with "we" being age cohorted as well.
2Stabilizer7yIt is not surprising that there isn't a proportional number of old people in TV/movies right now. And I suspect there never was. TV/movie audiences desire to view people who possess high-status markers. Two important markers are beauty and power. In reality, younger people typically have beauty but not much power. Older people have more power and less beauty. Since TV/movies don't have the constraints of reality, we can make young people who are beautiful also powerful. We can rarely make old people beautiful, with some exceptions, which TV/movies often exploit. I don't think this has anything to do with respect.
2jklsemicolon7yThis is a contradiction.
2Stabilizer7ySorry if it was confusing but you are taking it out of context. I actually meant: the fact that we don't have a proportional number of old people in TV/movies as in real life is not because we respect old people less in real life. It is simply a reflection of the freedoms available in TV/movies.

(Thinking about this for a bit, I noticed that it was more fruitful for me to think of "concepts that are often used unskillfully" rather than "bad concepts" as such. Then you don't have to get bogged down thinking about scenarios where the concept actually is pretty useful as a stopgap or whatever.)

2drethelin7yThat's well-known as the mindslaver problem in MTG
6wedrifid7yCan you explain more how that problem relates to the mindslaver card in the MTG community? (Or provide a link? The top results on google were interesting but I think not the meme you were referring to.)

I think this is a slightly different issue. In Magic there's a concept of "strictly better" where one card is deemed to be always better than another (eg Lightning Bolt over Shock), as opposed to statistically better (eg Silver Knight is generally considered better than White Knight but the latter is clearly preferable if you're playing against black and not red). However, some people take "strictly better" too, um, strictly, and try to point out weird cases where you would prefer to have the seemingly worse card. Often these scenarios involve Mindslaver (eg if you're on 3 life and your opponent has Mindslaver you'd rather have Shock in hand than Lightning Bolt).

The lesson is to not let rare pathological cases ruin useful generalizations (at least not outside of formal mathematics).
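The distinction between "strictly better" (preferable in every game state) and "statistically better" (preferable on average) is just dominance versus expected value, which can be sketched in a toy model. The payoff numbers and scenarios below are made up purely for illustration and are not real game rules:

```python
def dominates(payoff_a, payoff_b, scenarios):
    """A dominates B if it is at least as good in every scenario
    and strictly better in at least one."""
    at_least_as_good = all(payoff_a(s) >= payoff_b(s) for s in scenarios)
    better_somewhere = any(payoff_a(s) > payoff_b(s) for s in scenarios)
    return at_least_as_good and better_somewhere

# Crude payoff: damage dealt where you want it. The second scenario is the
# pathological Mindslaver case, where the opponent casts your spell for you.
scenarios = [
    {"you_control_it": True},
    {"you_control_it": False},
]

def bolt(s):   # Lightning Bolt: 3 damage
    return 3 if s["you_control_it"] else -3

def shock(s):  # Shock: 2 damage
    return 2 if s["you_control_it"] else -2

# Over the normal scenario alone, Bolt dominates Shock...
print(dominates(bolt, shock, scenarios[:1]))  # True
# ...but adding the one pathological scenario destroys the dominance claim.
print(dominates(bolt, shock, scenarios))      # False
```

This is why "strictly better" claims are fragile: a single contrived scenario breaks formal dominance, even though the statistical comparison is unchanged.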

2Stabilizer7yBy the way, even in formal mathematics (and maybe especially in formal mathematics), while pathological cases are interesting, nobody discards perfectly useful theories just because the theory allows pathologies. For example, nobody hesitates to use measure theory in spite of the Banach-Tarski paradox; nobody hesitates to use calculus even though the Weierstrass function exists; few people hesitate in using the Peano axioms in spite of the existence of non-standard models of that arithmetic.
1Fhyve7yNitpick: I would consider the Weierstrass function a different sort of pathology than non-standard models or Banach-Tarski - a practical pathology rather than a conceptual pathology. The Weierstrass function is just a fractal. It never smooths out no matter how much you zoom in.
0Stabilizer7yI agree that the Weierstrass function is different. I felt a tinge of guilt when I included the Weierstrass function. But I included it since it's probably the most famous pathology. That being said, I don't quite understand the distinction you're making between a practical and a conceptual pathology. The distinction I would make between the Weierstrass and the other two is that the Weierstrass is something which is just counter-intuitive whereas the other two can be used as a reason to reject the entire theory. They are almost antithetical to the purpose of the theory. Is that what you were getting at?
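The "never smooths out" behavior can be seen numerically. Here is a minimal sketch using partial sums of the standard Weierstrass series W(x) = Σ aⁿ cos(bⁿπx); the parameters a = 0.5, b = 13 are one conventional choice satisfying Weierstrass's condition ab > 1 + 3π/2, and the point x0 = 0.1 is arbitrary:

```python
import math

def weierstrass(x, a=0.5, b=13, terms=20):
    """Partial sum of W(x) = sum_n a^n * cos(b^n * pi * x).

    With 0 < a < 1 and a*b > 1 + 3*pi/2 (here a*b = 6.5), the limit
    is continuous everywhere but differentiable nowhere.
    """
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

x0 = 0.1

# The series itself converges fast: the tail after 20 terms is bounded by
# sum_{n>=20} a^n = a^20 / (1 - a), about 1.9e-6.
print(abs(weierstrass(x0, terms=20) - weierstrass(x0, terms=30)))

# But difference quotients refuse to settle down as h shrinks -- the
# numerical face of "it never smooths out no matter how much you zoom in":
for h in (1e-2, 1e-3, 1e-4, 1e-5):
    q = (weierstrass(x0 + h) - weierstrass(x0)) / h
    print(f"h = {h:.0e}, difference quotient = {q:12.1f}")
```

A smooth function's quotients would converge to a single slope; here their magnitude keeps growing, which is exactly the practical pathology being discussed.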
1wedrifid7yAhh, that would do it. The enemy being the one who uses the card would tend to make inferiority desirable in rather a lot of cases.

Implicitly assuming that you mapped out/classified all possible realities. One of the symptoms is when someone writes "there are only two (or three or four...) possibilities/alternatives..." instead of "The most likely/only options I could think of are..." This does not always work even in math (e.g. the statement "a theorem can be either true or false" used to be thought of as self-evidently true), and it is even less reliable in a less rigorous setting.

In other words, there is always at least one more option than you have listed! (This statement itself is, of course, also subject to the same law of flawed classification.)

9fubarobfusco7yThere's a Discordian catma to the effect that if you think there are only two possibilities — X, and Y — then there are actually Five possibilities: X, Y, both X and Y, neither X nor Y, and something you haven't thought of.
6buybuydandavis7yJaynes had a recommendation for multiple hypothesis testing - one of the hypotheses should always be "something I haven't thought of".

"Your true self", or "your true motivations". There's a tendency sometimes to call people's subconscious beliefs and goals their "true" beliefs and goals, e.g. "He works every day in order to be rich and famous, but deep down inside, he's actually afraid of success." Sometimes this works the other way and people's conscious beliefs and goals are called their "true" beliefs and goals in contrast to their unconscious ones. I think this is never really a useful idea, and the conscious self should just be called the conscious self, the subconscious self should just be called the subconscious self, and neither one of them needs to be privileged over the other as the "real" self. Both work together to dictate behavior.

"Rights". This is probably obvious to most consequentialists, but framing political discussions in terms of rights, as in "do we have the right to have an ugly house, or do our neighbors not have the right not to look at an ugly house if they don't want to?" is usually pretty useless. Similarly, "freedom" is not really a good terminal value, because pretty much anything can be defined as freedom, e.g. "by making smoking in restaurants illegal, the American people have the freedom not to smell smoke in a restaurant if they don't want to."

1[anonymous]7yMost examples I recall of pointing out which - conscious vs unconscious - is the "true" motivation were attempts to attack someone's behavior. An accuser picks one motivation that is disagreeable or unpleasant, and uses it to cast aspersions on a positive behavior. I don't think that one self is being privileged over the other solely because of confusion as to which motivations really dictate behavior. It largely depends on which is more convenient for the accuser who designates the "true" self. Also, you may want to put your two bad concepts into different comments. That way they can be upvoted or downvoted separately.

Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems.

For example, many Christians hold this belief under the names the Kingdom, the Rapture, and/or the Second Coming (details depend on sect). It leads to excessive discounting of the future, and consequent poor choices. In *Collapse*, Jared Diamond writes about how apocalyptic Christians who control a mining company cause environmental problems in the United States.

Belief in a magic problem solving genie also causes people to fail to take effective action to improve their lives and help others, because they can just wait for the genie to do it for them.

4Desrtopa7yI think this would probably be a pretty destructive idea were it not for the fact that for most people who hold it, it seems to be such a far [http://wiki.lesswrong.com/wiki/Near/far_thinking] belief that they scarcely consider the consequences.
2Viliam_Bur7yIf I believe the world will be destroyed during the next year, the near reaction would be to quit the job, sell everything I can, and enjoy the money while I can. Luckily, most people who share this belief don't do that. But there are also long-term plans, such as getting more education, protecting the nature, planning for retirement... and those need to be done in far mode, where "but the world will be destroyed this year" can be used as an excuse. -- I wonder how often people do this. Probably more often than the previous example.
1bokov7yOr that we will create a magic genie to grant all our wishes and solve our problems?

I am not sure I am comfortable with the idea of an entirely context-less "bad concept". I have the annoying habit of answering questions of the type "Is it good/bad, useful/useless, etc." with a counter-question "For which purpose?"

Yes, I understand that rare pathological cases should not crowd out useful generalizations. However given the very strong implicit context (along with the whole framework of preconceived ideas, biases, values, etc.) that people carry around in their heads, I find it useful and sometimes necessary to... (read more)

1buybuydandavis7yGood, for what, for whom. Similarly, instead of grousing how the world isn't the way I'd like it, or a person isn't the way I'd like them, I try to ask "what's valuable here for me?", which is a more productive focus.
0John_Maxwell7y"Should" is another word like this. Generally when people say should, they either mean with respect to how best to achieve some goal, or else they're trying to make you follow their moral rules.

"Harmony" -- specifically the idea of root progressions -- in music theory. (EDIT: That's "music theory", not "music". The target of my criticism is a particular tradition of theorizing about music, not any body of actual music.)

This is perhaps the worst theory I know of to be currently accepted by a mainstream academic discipline. (Imagine if biologists were Lamarckians, despite Darwin.)

6maia7yWhat's wrong with it?
7komponisto7ySee discussion here [http://slatestarcodex.com/2013/04/11/read-history-of-philosophy-backwards/#comment-3031], which has more links.
-1maia7yEr. That's an article about the history of philosophy. Am I missing something, or was it supposed to be about music theory?
1komponisto7yThe link is to a comment.
1maia7yAh, ok. I was on my cellphone, so probably assumed that the instant-scroll-down-to-comment-section was a bug instead of a feature (or possibly it went to the wrong place, even).
4RichardKennaway7yCould you expand on that? It has never been clear to me what music theory is — what constitutes true or false claims about the structure of a piece of music, and what constitutes evidence bearing on such claims. What makes the idea of "harmony" wrong? What alternative is "right"? Schenker's theory? Westergaard's? Riemann? Partsch? (I'm just engaging in Google-scholarship here, I'd never heard of these people until moments ago.) But what would make these, or some other theory, right?
7komponisto7yYou're in good company, because it's never been clear to music theorists either, even after a couple millennia of thinking about the problem. However, I do have my own view on the matter. I consider the music-theoretical analogue of "matching the territory" to be something like data compression. That is, the goodness of a musical theory is measured by how easily it allows one to store (and thus potentially manipulate) musical data in one's mind. Ideally, what you want is some set of concepts such that, when you have them in your mind, you can hear a piece of music and, instead of thinking "Wow! I have no idea how to do that -- it must be magic!" [http://www.overcomingbias.com/2013/06/why-no-non-fiction-lyrics.html#comment-938169182], you think "Oh, how nice -- a zingoban together with a flurve and two Type-3 splidgets", and -- most importantly -- are then able to reproduce something comparable yourself.
0pianoforte6117yI'm afraid that despite reading a fair chunk of Mathemusicality I've given up on Westergaard's "An Introduction to Tonal Theory" in favor of Steven Laitz's "The Complete Musician". Steven Laitz is a Schenkerian but his book is fairly standard and uses harmony, voice leading and counterpoint. Actually I'm beginning to conclude that if you want to compose, then starting off by learning music theory of any sort is totally wrongheaded. It is like trying to learn French by memorizing vocabulary and reading books on grammar (which is disturbingly how people try to learn languages in high school). The real way that people learn French is by starting off with very simple phrases and ideas, then gradually expanding their knowledge by communicating with people who speak French. Grammar books and vocabulary books are important, but as a supplement only to the actual learning that takes place from trying to communicate. Language and music are subconscious processes. I don't know what a similar approach to music composition would look like, but I'm reasonably convinced that it would be much better than the current system. I should admit though that I am monolingual and I can't compose music - so my thoughts are based only on theory and anecdotes.
2komponisto7yIf I may ask, what was your issue with Westergaard? (As a polyglot composer, I agree that there is an analogy of language proficiency to musical composition, but would draw a different conclusion: harmonic theory is like a phrasebook, whereas Westergaardian theory is like a grammar text. The former may seem more convenient for certain ad hoc purposes, but is hopelessly inferior for actually learning to speak the language.)
2pianoforte6117yI don't have any particular issue with Westergaard, I just couldn't make it through the book. Perhaps with more effort I could, but I'm lacking motivation due to low expectancy. It was a long time ago that I attempted the book, but if I had to pinpoint why, there are a few things I stumbled over: The biggest problem was that I have poor aural skills. I cannot look at two lines and imagine what they sound like, so I have to play them on a piano. Add in more lines and I am quickly overwhelmed. A second problem was the abstractness of the first half of the book. Working through counterpoint exercises that didn't really sound like music did not hold my attention for very long. A third problem was the disconnect between the rules I was learning and my intuition. Even though I could do the exercises by following the rules, too often I felt like I was counting spaces rather than improving my understanding of how musical lines are formed. I think that your comparison is very interesting, because I would predict that a phrasebook is much more useful than a grammar text for learning a language. The Pimsleur approach, which seems to be a decent way to start learning a language, is pretty much a phrasebook in audio form with some spaced repetition thrown in for good measure. Of course the next step, where the actual learning takes place, is to start trying to communicate with native speakers, but the whole point of Pimsleur is to get you to that point as soon as possible. This is important because most people use grammatical rules implicitly rather than explicitly. Certainly grammar texts can be used to improve your proficiency in a language, but I highly doubt that anyone has actually learned a language using one. Without the critical step of communication, there is no mechanism for internalizing the grammatical rules. (Sorry for taking such a long tangent into language acquisition, I wasn't initially planning on stretching the analogy that far.)
3komponisto7yThanks for your feedback on the Westergaard text. I think many of your problems will be addressed by the material I plan to write at some indefinite point in the future. It's unfortunate that ITT is the only exposition of Westergaardian theory available (and even it is not technically "available", being out of print), because your issues seem to be with the book and not with the theory that the book aims to present. There is considerable irony in what you say about aural skills, because I consider the development of aural skills -- even at the most elementary levels -- to be a principal practical use of Westergaardian theory. Unfortunately, Westergaard seems not to have fully appreciated this aspect of his theory's power, because he requests of the reader a rather sophisticated level of aural skills (namely the ability to read and mentally hear a Mozart passage) as a prerequisite for the book -- rather unnecessarily, in my opinion. This leads to the point about counterpoint exercises, which, if designed properly, should be easier to mentally "hear" than real music -- that is, indeed, their purpose. Unfortunately, this is not emphasized enough in ITT. Thank goodness I'm here to set you straight, then. Phrasebooks are virtually useless for learning to speak a language. Indeed they are specifically designed for people who don't want to learn the language, but merely need to memorize a few phrases (hence the name), for -- as I said -- ad hoc purposes. (Asking where the bathroom is, what someone's name is, whether they speak English, that sort of thing.) Here's an anecdote to illustrate the problem with phrasebooks. When I was about 10 years old and had just started learning French, my younger sister got the impression that pel was the French word for "is". The reason? I had informed her that the French translation of "my name is" was je m'appelle -- a three syllable expression whose last syllable is indeed pronounced pel. 
What she didn't realize was that the thre
2pianoforte6117yAlright, I've read most of the relevant parts of ITT. I only skimmed the chapter on phrases and movements and I didn't read the chapter on performance. I do have one question: is the presence of the borrowing operation the only significant difference between Westergaardian and Schenkerian theory? As for my thoughts, I think that Westergaardian theory is much more powerful than harmonic theory. It is capable of accounting for the presence of every single note in a composition, unlike harmonic theory, which seems to be stuck with a four-part chorale texture plus voice leading for the melody. Moreover, Westergaardian analyses feel much more intuitive and musical to me than harmonic analyses. In other words, it's easier for me to hear the Westergaardian background than it is for me to hear the chord progression. For me the most distinctive advantage of Westergaardian analyses is that they respect the fact that notes do not have to "line up" according to a certain chord structure. Notes that are sounding at the same time may be performing different functions, whereas harmonic theory dictates that notes sounding at the same time are usually "part of a chord" which is performing some harmonic function. For example, it's not always clear to me that a tonic chord in a piece (which harmonic theory regards as being a point of stability) is really an arrival point or just a result of notes that happen to coincide at that moment. The same is true for other chords. A corollary of this seems to be that harmonic analyses work fine when the notes do consistently line up according to their function, which happens all the time in pop music and possibly in Classical music, although I'm not certain of this. Having said that, my biggest worry with Westergaardian theory is that it is almost too powerful.
Whereas Harmonic theory constrains you to producing notes that do sound in some sense tonal (for a very powerful example of this see here [http://www.oup.com/us/companion.websites/9780195336
2bogus7yNote that when analyzing tonal music with Westergaardian analysis, it is generally the case that anticipation and delay tend to occur at relatively shallow levels in the piece's structure. The deeper you go, the more notes are going to be "aligned", just like they might be expected to be in a harmonic analysis. Moreover, the constraints of consonance and dissonance in aligned lines (as given by the rules of counterpoint; see Westergaard's chapters on species counterpoint) will also come into play, when it comes to these deeper levels. So it seems that Westergaardian analysis can do everything that you expect harmonic analysis to do, and of course even more. Instead of having "harmonic functions" and "chords", you have constraints that force you to have some kind of consonance in the background.
2komponisto7yThe short answer is: definitely not. The long answer (a discussion of the relationship between Schenkerian and Westergaardian theory) is too long for this comment, but is something I plan to write about in the future. For now, be it noted simply that the two theories are quite distinct (for all that Westergaardian theory owes to Schenker as a predecessor) -- and, in particular, a criticism of Schenker can by no means necessarily be taken as a criticism of Westergaard, and vice-versa (see below). The way I like to put it is that in Westergaardian theory, the function of a note is defined by its relationship to other notes in its line (and to the local tonic, of course), and not by its relationship to the "root" of the "chord" to which it belongs (as in harmonic theory). If by "work fine" you mean that it is in fact possible to identify the "appropriate" Roman numerals to assign in such cases, sure, I'll give you that. But what is such an "analysis" telling you? Taken literally, it means that you should understand the notes in the passage in terms of the indicated progression of "roots". Which, in turn, implies that in order to hear the passage in your head, you should first, according to the analyst, imagine the succession of roots (which often, indeed typically, move by skip), and only then imagine the other notes by relating them to the roots -- with the connection of notes in such a way as to form lines being a further, third step. To me, this is self-evidently a preposterously circuitous procedure when compared with the alternative of imagining lines as the fundamental construct, within which notes move by step -- without any notion of "roots" entering at all. 
I am as profoundly unimpressed with that "demonstration" as I am with that whole book and its author -- of which, I must say, this example is entirely characteristic, in its exclusive obsession with the most superficial aspects of musical hearing and near-total amputation of the (much deeper) musical
1pianoforte6117yThanks; this operation is notably absent in Schenkerian theory (I think). I suppose I will have to live with that for now. By "work fine", I mean that the theory is falsifiable and has predictive power. If you are given half of the bars in a Mozart piece, using harmonic theory you can give a reasonable guess as to the rest. I'm not that confident about Mozart, though; certainly pop music can be predicted using harmonic theory. Could it be that your subjective experience of music is different than most people's? It certainly sounds very alien to me. While it's true that listening to the long-range structure of a sonata is pleasurable to me, there are certainly 3- to 4-bar excerpts that I happen to enjoy in isolation without context. But you think that 3 bars is not enough to distinguish non-music from music. You also claim that the stylistic differences are minor, yet I would wager that virtually 100% of people (with hearing) can point out d) as being the only tonal example. This is very strange to me; suppose Mozart were to replace all of the F's in the sonata [http://www.youtube.com/watch?v=meop0rG3tLc] in C major with F sharps. I think that the piece of music would be worse. Not objectively, or fundamentally worse. Just worse to a typical listener's ears. A pianist who was used to playing Mozart might wonder if there was a mistake in the manuscript.
2komponisto7yOn the contrary, Schenker uses it routinely. If you're talking about the expectations that a piece sets up for the listener, Westergaardian theory has much more to say about that than harmonic theory does. Or, let me rather say: an analyst equipped with Westergaardian theory is in a better position to talk about that, in much greater detail and precision, than one equipped with harmonic theory. You might try having a closer look at Chapter 8 of ITT, which you said you had only skimmed so far. (A review of Chapter 7 wouldn't hurt either.) Not in the sense that you mean, no. (Otherwise my answer might be "I should hope so!") I'm not missing anything that "most people" would hear. It's the opposite: I almost certainly hear more than an average human: more context, more possibilities, more vividness. (What kind of musician would I be were it otherwise?) I'm acutely aware of the differences between passages (a) through (d). It's just that I also see (or, rather, hear) a much larger picture -- a picture that, by the way, I would like more people to hear (rather than being discouraged from doing so and having their existing prejudices reinforced). That is not what I said. You would be closer if you said I thought 3 bars were not enough to distinguish good music from bad music. But of course it depends on how long the 3 bars are, and what they contain. My only claim here is that these particular excerpts are too short and contain too little to be judged against each other as music. 
And again, this is not because I don't hear the effect of the constraints that produced (d) as opposed to (a), but rather most probably because: (1) I'm not impressed by (d) because I understand how easy it is to produce; and (2) I hear structure in (a) that "most people" probably don't hear (and certainly aren't encouraged to hear by the likes of Tymoczko), not because they can't hear it, but mostly because they haven't heard enough music to be in the habit of noticing those phenomena; an
0pianoforte6117yAfter looking at Chapter 8, it's becoming obvious that learning Westergaardian theory to an extent that it would be actually useful to me is going to take a lot of time and analyses (and I don't know if I will get around to that any time soon). Regarding harmony, this document may be of interest to you - it's written by a Schenkerian who is familiar with Westergaard: http://www.artsci.wustl.edu/~rsnarren/texts/HarmonyText.pdf [http://www.artsci.wustl.edu/~rsnarren/texts/HarmonyText.pdf]
0pianoforte6117yOne more question. Do you also think that Westergaardian theory is superior for understanding jazz? I've encountered jazz pianists on the internet who insist that harmony and voice leading are ABSOLUTELY ESSENTIAL for doing jazz improvisation and anyone who suggests otherwise is a heretic who deserves to be burnt at the stake. Hyperbole aside, jazz classes do seem to incorporate a lot of harmony and voice leading into their material and their students do seem to make fine improvisers and composers. Oh, and for what it's worth, you've convinced me to give Westergaard another shot.
1komponisto7yYes. My claim is not repertory-specific. (Note that this is my claim I'm talking about, not Westergaard's.) More generally, I claim that the Westergaardian framework (or some future theory descended from it) is the appropriate one for understanding any music that is to be understood in terms of the traditional Western pitch space (i.e. the one represented by a standardly-tuned piano keyboard), as well as any music whose pitch space can be regarded as an extension, restriction, or modification of the latter. How many of them are familiar with Westergaardian (or even Schenkerian) theory? I've encountered this attitude among art-music performers as well. My sense is that such people are usually confusing the map and the territory (i.e. confusing music theory and music), à la Phil Goetz above. They fail to understand that the concepts of harmonic theory are not identical to the musical phenomena they purport to describe, but instead are merely one candidate theory of those phenomena. Some of them do -- probably more or less exactly the subset who have enough tacit knowledge not to need to take their theoretical instruction seriously, and the temperament not to want to. I'm delighted to hear that, of course, although I should reiterate that I don't expect ITT to be the final word on Westergaardian theory.
0pianoforte6117yThis was my hypothesis as well (which is what the jazz musician responded with hostility to). If this is true though, then why are jazz musicians so passionate about harmony and voice leading? They seem to really believe that it's a useful paradigm for understanding music. Perhaps this is just belief in belief?
0komponisto7yIt's difficult to know what other people are thinking without talking to them directly. With this level of information I would make only two points: 1) It doesn't count as "passionate about harmony and voice leading" unless they understand Westergaardian theory well enough to contrast the two. Otherwise it just amounts to "passionate about music theory of some kind". 2) It doesn't have anything to do with jazz. If they're right that harmony is the superior theory for jazz, then it's the superior theory of music in general. Given the kind of theory we're looking for (cf. Chapter 1 of ITT), different musical traditions should not have different theories. (Analogy: if you find that the laws of physics are different on different planets, you have the wrong idea about what "laws of physics" means.)
0pianoforte6117yI don't think that we disagree all that much. We both agree that there are some people who are able to learn structural rules implicitly without explicit instruction. We typically call these people "good at languages" or "good at music". Our main disagreement therefore, is how large that set of people is. I happen to think that it is very large given that everyone learns the grammatical rules of their first language this way, and a fair number of polyglots learn their second language this way as well (Unless you deny the usefulness of Pimsleur like approaches). If I understand you correctly, you think that the group of people who are able to properly learn a language/music this way is smaller, because it often results in bad habits and poor inferences about the structure of the language. I would endorse this as well - grammatical texts are useful for refining your understanding of the structure of a language. Because it is scary to learn to swim without arm floats even if there is someone else helping you (I think that phrase books are analogous to arm floats). Other than that I would agree with most of this. If you want secondary instruction in a language then you should probably use a grammar book and not a phrase book and I may return to Westergaard after I have taken some composition lessons. Also I would go one step further and say that not only is it possible to learn a language via immersion, it is necessary, and any other tools you may use to learn a language should help to support this goal.
0NancyLebovitz7yTentatively-- grammatical texts have a complex relationship with language. They can be somewhat useful but still go astray because they're for a different language, with the classic example being grammar based on Latin being used to occasionally force English out of its normal use. I suspect the same happens when formal grammar is used to claim that casual and/or spoken English is wrong.
1[anonymous]7yModern descriptive [https://en.wikipedia.org/wiki/Linguistic_description] grammars (like this one [https://en.wikipedia.org/wiki/The_Cambridge_Grammar_of_the_English_Language]) aren't anywhere near that bad.
0Douglas_Knight7yYes, accurate grammars are better than inaccurate grammars. But I think you are focusing too much on the negative effects and not noticing the positive effects. It is hard to notice people's understanding of grammar except when they make a mistake or correct someone else, both of which are generally negative effects. Americans are generally not taught English grammar, but often are taught a foreign language, including grammar. Huge numbers of them claim that studying the foreign grammar helped them understand English grammar. Of course, they know the grammar is foreign, so they don't immediately impose it on English. But they start off knowing so little grammar that the overlap with the other language is already quite valuable, as are the abstractions involved.
0Fhyve7yI have read around and I still can't really tell what Westergaardian theory is. I can see how harmony fails as a framework (it doesn't work very well for a lot of music I have tried to analyze) so I think there is a good chance that Westergaard is (more) right. However, all I have gathered is that there are these things called lines, and that there exist rules (I have not actually found a list or description of such rules) for manipulating them. I am not sure how this is different from counterpoint. I don't want to go and read a textbook to figure this out, I would rather read ~5-10 pages of exposition and big-picture
1komponisto7yThe best I can recommend is the following article: Peles, Stephen. "An Introduction to Westergaard's Tonal Theory".In Theory Only 13:1-4 [September 1997] pp. 73-94 It's a rather obscure journal, but if you have access to a particularly good university library (or interlibrary loan), you may be able to find it. Failing that, if you PM me with your email address, I can send you the text of the article (without figures, unfortunately).
1Douglas_Knight7yThe defunct journal's web site is open access. Text [http://quod.lib.umich.edu/g/genpub/0641601.0013.001?rgn=main;view=fulltext] (search for Peles). Table of contents [http://quod.lib.umich.edu/g/genpub/0641601.0013.001?rgn=main;view=toc] of page by page scans; first page [http://quod.lib.umich.edu/g/genpub/0641601.0013.001/79].
1komponisto7yWow, thanks!
-3PhilGoetz7yNo. Just no. You're trying to enshrine your aesthetic preferences as rational. Besides, chord progressions work. Most people like music that uses chord progressions better than music that doesn't. Compare album sales of Elvis vs. Arnold Schoenberg.
3komponisto7yYou've completely misunderstood my claim, as arundelo pointed out [http://lesswrong.com/lw/htw/bad_concepts_repository/98h5]. It's like accusing moridinamael of denying the atomic theory of matter (or worse, being opposed to scientific inquiry) because he/she criticized the Bohr model. I.e. you're taking for granted the very thing I'm claiming is wrong, and then somehow using my statement to deduce other unrelated beliefs that I don't in fact hold. (I'm somewhat surprised, because we had some fairly extensive discussions about all this in person a couple months ago [http://lesswrong.com/lw/h8o/meetup_washington_dc_kennedy_center_meetup_with/]. )
-1PhilGoetz7yI'm afraid my brain chose to remember the jogging path, the view of the Potomac, the bridges, and some of the joggers, but nothing about what we said. If you converted me to your view, I have lapsed back into my old ways. I have to learn everything several times. I don't see how I've misunderstood your claim. I realize you claim harmony doesn't cut reality at the joints. I think that's an aesthetic judgment. You say that Westergardian theory allows one to treat the music of Berg, Schoenberg, and Webern as belonging to the same school as earlier Western music, as if this were a point in favor of that theory. To me, it is a proof that the theory is both wrong and destructive, because my aesthetic sense says that music is crap. We agree that the test of a theory of music is whether it helps one compose good music. I've never tried to write music using either theory, but if using Westergardian theory allows one to write music like that of Berg, my aesthetic judgements, which are different than yours, say that proves it is a bad theory. Perhaps if I had been raised in a culture that used Westergardian composition techniques, I would be acclimatized to it, and would appreciate that music, and have a low opinion of harmonic theory. Even supposing that were true, which I doubt, it would only mean that this is culturally relative. Not a failure of rationality. It seems to me that to claim that harmonic theory is objectively wrong, you must also claim that the tastes of people like me, who like things written using harmonic theory and dislike things not using harmonic theory, are also objectively wrong. If you showed that Westergardian theory gave a simpler explanation of the music that I like, that would help convince me that it was a superior theory. (I don't expect you can do this in a blog post.) But even then, calling it a bad concept would be like calling Newtonian physics a bad concept because it doesn't explain motion at relativistic speeds.
4bogus7yThis is not really true, for a variety of reasons: 1. Schenker and Westergaard do not claim that their theory can explain atonal music. A claim that Schenkerian/Westergaardian analysis helps explain tonal music is much stronger than the claim about atonal music, and should be evaluated on its own merits. In particular, we know that Schenker was aware of early atonal music, and didn't like it. 2. People's "aesthetic sense" seems to be quite dependent on their musical experience. Modern atonal music was the result of a very gradual development of taking existing (e.g. tonal) music and adding more and more "atonality" (whatever that means: some would say dissonance, others would talk about modulation, or complexity). People generally learn to appreciate atonal music by retracing these developments gradually, and listening to more and more challenging pieces. Thus, while your aesthetic sense says that this music sucks, this may not prove much. 3. There is plenty of music that was clearly "not written using harmonic theory" insofar as harmonic theory (e.g. as detailed by Rameau's Treatise on Harmony) postdates it. And yet, Renaissance and Baroque period music (and even a lot of secular Medieval music) is generally appreciated, just as much as music written after harmony-based theories became established. I do agree that this would be quite relevant.
2komponisto7yI understand and sympathize. (It wasn't that I thought I converted you to my view, but that I thought I had done a better job of conveying what my complaints about harmonic theory were.) The misunderstanding is most evident when you write a phrase like: which begs the whole question. You assume that harmonic theory is an accurate description of "how those things are written", which is the very thing I deny. You seem to be confusing music theory with music, which is like mixing up the map and the territory. Not quite. At least, the emphasis is on "helps", not on "good". You should think of a work of music (including its aesthetic qualities) being held fixed when we evaluate theories; the parameter we're measuring that determines how good the theory is is how easily the theory allows us to produce the music in question. (Furthermore, it certainly can't be the case that harmonic theory's classifications track your likes and dislikes. After all, you apparently don't like Beethoven's Great Fugue [http://lesswrong.com/lw/84b/things_you_are_supposed_to_like/], and yet as far as harmonic theory is concerned it's in the same category as his other works, which you do like [http://lesswrong.com/lw/84b/things_you_are_supposed_to_like/529v].) I disagree that harmonic theory is anywhere near as good as Newtonian physics. I would instead compare it -- unfavorably -- to pre-Darwinian theories of biodiversity. I specifically believe it to be one of the worst theories of all time (whereas Newtonian physics is one of the best).
0PhilGoetz7yI don't understand music theory enough to continue the debate. I don't even understand what you mean by harmonic theory, since I assume you don't mean we should throw away 1-3-5 chords. I have noticed that Baroque music tends more often than classical or romantic music to have passages that start on one chord, and the different parts walk their different ways to another chord with no pivot chords, just walk the bass and damn the torpedoes in between. Is that related to what you're talking about?
2komponisto7yBy harmonic theory I mean the idea proposed by Jean-Philippe Rameau in 1722 of analyzing music as a succession of simultaneities ("chords"), to each of which is assigned a "root", and with the order of chords being governed by relationships among the roots. The above doesn't make any literal sense, but if what you mean by this is that Baroque music violates Rameau's rules of root progression more often than later music (which, believe it or not, is actually what I think you mean), then this is almost certainly not the case: generally speaking, music gets more complex as you go forward in history, and the more complex it is, the more likely it is to crash Rameau's theory. (Yes, I know that popular histories tell you that Classical music was simpler than Baroque. This is wrong.) The reality is that the torpedoes were always damned. Rameau and his theoretical successors mistook certain superficial patterns (which automatically arise in particularly simple musical contexts) for underlying laws. The actual underlying laws were discovered by Schenker [http://en.wikipedia.org/wiki/Heinrich_Schenker] and Westergaard.
-1PhilGoetz7yWould you deny that Baroque music deviates from common chords more often than classical music does?
2komponisto7yYes. Look at how many Baroque vs. Classical entries there are on this list of examples of augmented sixth chords [http://musictheoryexamples.com/25A6.html], for instance.
0PhilGoetz7yThat appears to be an effect of the data compiler's bias. This list of I-5-7 chords [http://musictheoryexamples.com/3tonicdom.html] from the same source has the same ratio.
0komponisto7yFrom Wikipedia [http://en.wikipedia.org/wiki/Augmented_sixth_chord]: This implies that its use increased over time, and in particular was greater in the Classical and Romantic periods than in the Baroque.
-2PhilGoetz7yThat's an argument that classical music uses more augmented sixths chords, which are not especially uncommon. Contrast that with something like the chord held at the start of Bach's Fugue in D minor -- it's got a C#, a D, and an E in it; what the hell is it? That's what I was talking about when I said "I have noticed that Baroque music tends more often than classical or romantic music to have passages that start on one chord, and the different parts walk their different ways to another chord with no pivot chords, just walk the bass and damn the torpedoes in between," which makes perfectly simple literal sense. Classical music moves from one resolved chord to another through a series of pivot chords. Baroque music sometimes just walks the bass, and maybe the top note also, by one half-step per "chord" until it arrives at the destination chord, passing through intermediate states that aren't any kind of recognized chord, certainly nothing so common as an augmented 6th. Now, if when we say Baroque you're thinking Vivaldi and I'm thinking Bach's organ music, that could account for the difference of opinion.
1gjm7yBach wrote umpteen different fugues in D minor, none of which is so obviously better or more important than the others as to deserve the title "Bach's Fugue in D minor". And it's kinda unusual for a fugue to begin with any sort of held chord, though maybe whichever one you're thinking of does. Would you care to be more specific?
2arundelo7yI bet PhilGoetz is talking about the toccata in the famous Toccata and Fugue in D minor, BWV 565 [https://en.wikipedia.org/wiki/Toccata_and_Fugue_in_D_minor,_BWV_565], which has a C# diminished 7 over a D pedal tone (about 30 seconds into this recording [https://www.youtube.com/watch?v=_FXoyr_FyFw]).
0gjm7yYeah, I thought he might be talking about that too, so I looked at the score. The chord immediately before the start of the fugue doesn't fit Phil's description.
-2PhilGoetz7yYes, I'm talking about BWV 565. I was too lazy to look up the number, and I should have said "Toccata and Fugue in Dm". He only wrote two things called "Toccata and Fugue in Dm", and this is the more famous one. And, YES, the chord does fit my description. I don't have to look it up; I play it, and I know you begin by striking a very low D, then the C# almost an octave above it, then the E just above that, and more notes beyond as well. AND I just went downstairs and checked the score, just in case you were actually right. I think you may be talking about the next chord. What I'm calling the "chord" is written as an ascending series of notes, but most players hold them all down until the last one. It's the weird one, not the "pivot" & not the resolution.
0gjm7yAt least one of us is very confused. I don't think it's me. At the end of the toccata there is a chord containing the following notes, from bottom to top: D (in the bass, on the pedals), another D (lowest note on the manuals), F, A, D. This is a perfectly ordinary chord of D minor, of course. After that there is a semiquaver rest and then the fugue subject begins (or, perhaps better, the fugue subject begins with a semiquaver rest). At that point, as is normal in a fugue, there is only one voice sounding. Oh, wait, you weren't talking about the fugue at all? You meant the chord a few bars into the toccata? Well, OK then, that chord contains the notes you said it does. (Though, I repeat, it isn't "the chord held at the start of Bach's Fugue in D minor"; it's in the toccata, not the fugue; in a discussion of music analysis such distinctions are really worth making.) But there's nothing weird about that chord! It's a standard diminished-7th chord (everything at intervals of 3 semitones from some starting point; in this instance C#, E, G, Bb). If I may quote from that bastion of the avant garde, Wikipedia: Diminished seventh, check. Rooted on the leading tone, check. Minor key, check. It's perfectly commonplace. (There are plenty of much weirder things in Bach.)
-1PhilGoetz7yThe chord is in measure 2 of the piece, and contains these notes: D, C#, E, G, Bb, C#, E. A diminished 7th in Dm should have D, F, Ab, Bb, shouldn't it? This is a diminished 7th C#, so what's the D doing there? Anyway, my impression is that diminished 7ths are much more common in organ music than in piano music. I think of them as "that organ-music chord". And if you look up diminished 7th in the same music database that komponisto linked to above, you'll see it has a much higher fraction of baroque entries than any of the other items on that list. Perhaps part of the issue is when I hear "baroque" I think Bach, and when I hear "classical" I think Mozart. I think Bach does more weird chords than Mozart does. Or consider Beethoven's Moonlight Sonata--it's chock full of different chords juxtaposed in unusual ways, but they're almost all common chords.
1arundelo7yWikipedia: pedal point [https://en.wikipedia.org/wiki/Pedal_point]
0komponisto7yI don't share this impression at all. How much piano music do you know? There's probably a lot more of it than there is of organ music. This is certainly the case in the nineteenth century, which was probably the heyday of the diminished seventh (while being the low point of the organ repertory). Eh? Among a combined total of 70-80 examples on this page [http://musictheoryexamples.com/20VII.html] and this one [http://musictheoryexamples.com/26CT.html], I count about 7-8 Baroque examples, so about 10%. I'm not going to count through all the other 24 pages for comparison, but I don't think this supports the thesis that the diminished seventh is particularly characteristic of the Baroque as opposed to the Classical or Romantic; indeed, it is the Romantic which dominates the examples, as I predicted above. (And note by the way that not one of the Baroque examples that I could find was specifically an organ piece!) What data is this based on? And for what definition of "weird"? Did you see the Mozart example I cited in my other comment [http://lesswrong.com/lw/htw/bad_concepts_repository/9ecl]? Do you have any reason to think that example is particularly uncharacteristic (in a way that your Bach example isn't)? This a piece with plenty of diminished sevenths! (And what do you mean by "juxtaposed in unusual ways"?) Phil, in all seriousness, you really ought to look at the Westergaard book. You would like it, and it would really help clarify your thinking about music. (I believe I have already directed you to an electronic copy via e-mail.)
0komponisto7y"Uncommon" doesn't mean anything without reference to a time period; the point is that they are more uncommon in the Baroque period than in the Classical. The Classical period uses a richer "vocabulary of chords" than the Baroque, if one insists on thinking in such terms (as a Westergaardian, I don't think in terms of a "vocabulary of chords", of course). First of all "Bach's Fugue in D minor" is highly ambiguous; Wikipedia [http://en.wikipedia.org/wiki/List_of_compositions_by_Johann_Sebastian_Bach] lists 10 such works by J.S. Bach alone (BWV 538, 539, 554, 565, 851, 875, 899, 903, 905, and 948). But you can find a chord containing those same three pitch-classes (along with G# and B) in the first movement of Mozart's Symphony No. 29 [http://petrucci.mus.auth.gr/imglnks/usimg/c/c5/IMSLP00061-Mozart_-_Symphony_No_29_in_A_Major__K201.pdf] (p.4, second system, 4th measure, 1st and 3rd quarter). "Pivot chord" is a technical term in harmonic theory (which, again, I don't subscribe to) meaning a chord shared by two different keys which is used in modulating between them. You don't appear to be using this term correctly here (we're not talking about key changes), and I'm not sure exactly what you do mean. "Resolved chord" is not a standard term at all, but maybe you mean "consonant chord". (?) However, both Baroque and Classical music "move from one [consonant] chord to another" (well, except when moving to dissonant chords, which also occurs in both periods...) So this sentence reads like confused gobbledygook to me. A musical example of the phenomenon which you think occurs in Baroque music but not Classical would help (but we know it isn't "a chord with C#, D, and E", as the Mozart example I gave shows). You just have to compare apples to apples. If the most complex works of J.S. Bach are what you mean by "Baroque", then the most complex works of Haydn, Mozart, and (at least early) Beethoven have to be what you mean by "Classical". 
I think what actually accounts
2arundelo7yYou couldn't be expected to tell it from the grandparent, but komponisto is saying not that tonal music is bad but that the standard set of harmony concepts does not cut reality at the joints [http://mathemusicality.wordpress.com/category/anti-harmony/], even when dealing with Elvis or Bach. See also the link [http://slatestarcodex.com/2013/04/11/read-history-of-philosophy-backwards/#comment-3031] given in komponisto's other comment [http://lesswrong.com/r/discussion/lw/htw/bad_concepts_repository/98hm]. (I haven't looked into this enough to have a strong opinion on it. I will say that the standard set of harmony concepts is an extremely important part of my mental furniture.)
1komponisto7yThe title of the post is "Bad Concepts Repository", not "Bad Musical Repertory". Shouldn't that make it a given that theories of things, rather than things themselves, are what we're critiquing here?
0arundelo7yHopefully you can take my comment [http://lesswrong.com/lw/htw/bad_concepts_repository/98nl] as an application of the principle of charity to PhilGoetz rather than a critique of your comment that he was responding to. ("Harmony is a bad concept?! But all my favorite music was written using that concept!")
1bogus7yI agree that whether "the standard set of harmony concepts" is actually superseded by Schenkerian/Westergaardian analysis is not really obvious. Westergaard has a highly non-trivial theory of what counts as "consonance" or "dissonance" in a melodic line, which is roughly equivalent to "harmony" in standard music theory. The other way that traditional "harmony" is recovered is that this kind of analysis allows for a note in the 'background'/'deep' structure to be tonicized over, effectively becoming a "temporary tonic" and admitting the construction of tonic triads ('arpeggiation'). It would not be hard to make a strong case that "harmony" is a derived phenomenon; just take a bunch of chord progressions (or pieces that are commonly analyzed in terms of chord progressions) and re-analyze them in terms of the Schenkerian/Westergaardian concepts (deep structures, arpeggiation, tonicization). Then show how this leads either to a simplified analysis, or to one that's a better description of the music.
2komponisto7yIf you don't find it obvious after studying Westergaard and comparing it to (say) Piston, then my best guess is that you're relying on tacit musical knowledge that you don't realize others lack, or which you mistakenly think is being communicated in Piston (etc.) but which actually isn't. Not so -- there is nothing in Westergaard about root progressions (Rameau's "fundamental bass"), which is the defining concept of "harmony" in the traditional (theoretical) sense. Consonance and dissonance are part of traditional contrapuntal theory, which goes back to long before Rameau. (Yes, Westergaard does draw on the tradition of contrapuntal theory, as did Schenker.) Again, if you think this is what is meant by "harmony", you are missing the point. (Yes, Rameau kinda sorta had this idea as part of his theory -- but not really. It's really a Schenkerian idea.) In harmonic theory, the "hierarchy" has only two levels of structure: a note is either part of the chord, or not part of the chord ("nonharmonic tones"). In Westergaardian theory (as in Schenkerian theory), there is no limit to the number of levels. Take the Mozart analysis that folds out from the back of the Westergaard book. The data in that analysis cannot be expressed in terms of harmonic theory. The latter is simply not rich enough. All you can do in harmonic theory is write Roman numerals under the score, which (at best) might be considered roughly equivalent to showing one level of reduction in the Westergaardian analysis (though not really, because the Roman numerals only contain pitch-class information, not pitch information like the Westergaardian version; plus harmonic theory's "chords" frequently and typically mix up different levels of Westergaardian structure).

Entitlement and Anti-entitlement, especially in the context of: 1. the whole Nice Guy thing and 2. the discourse on the millennial generation. It becomes a red herring, and in the former case leads to ambiguity between 'a specific person must do something' and 'this should be easier than it is'. Plus it seems to turn semi-utilitarians into deontologists. In the case of millennials, it tends to involve big inferential distance problems.

This one is well known, but having an identity that is too large can make you more susceptible to being mind killed.

2shminux7yHow much of an identity is just right?
4wedrifid7y"I'm a gorgeous blonde child who roams the forest alone stealing food from bears." is just right [http://en.wikipedia.org/wiki/Goldilocks_and_the_Three_Bears].
3tondwalkar7yPaul Graham suggests keeping your identity as small as sustainable. [1] That is, it's beneficial to keep your identity to just "rationalist" or just "scientist", since they contradict having a large identity. He puts it better than I do: [1] http://www.paulgraham.com/identity.html [http://www.paulgraham.com/identity.html]
0Armok_GoB7yThis goes well for beliefs included in your identity, but I've always been uncertain about whether it's supposed to also extend to things like episodic memories (separated from believing the information contained in them), relationships in neutral groups such as a family or a fandom, precommitments, or mannerisms?
0tondwalkar7yI'm not sure what you're saying here; you think of your memories as part of your identity? These memberships are all heuristics for expected interactions with people. Nothing actionable is lost if you bayes-induct for each situation separately, save the effort you're using to compute and the cognitive biases and emotional reactions you get from claiming "membership". Alternately you could still use the membership heuristic, but with a mental footnote that you're only using it because it's convenient, and there are senses in which the membership's representation of you may be misleading.
1Armok_GoB7y@episodic memories: I don't personally have any like that, but I hear many people do consider the subjective experience of pivotal events in their life as part of who they are. @relationships: I'm talking about the literal membership here, the thing that exists as a function of the entanglement between states in different brains. To clarify, I'm not talking about "your identity" here as in the information about what you consider your identity, but rather the referent of that identity. To many people, their physical bodies are part of their identity in this sense. Even distant objects, or large organizations like nations, can be in extreme cases. Just because it's a trend here to only have information that resides in your own brain as part of your identity doesn't mean it's necessary, or even especially common in its pure form in most places.
1tondwalkar7yAh, it appears we're talking about different things. I'm referring to ideological identity ("I'm a rationalist" , "I'm a libertarian", "I'm pro-choice", "I'm an activist" ), which I think is distinct from "I'm my mind" identity. In particular, you can be primed psychologically and emotionally by the former more than the latter.
1Armok_GoB7yIt seems like we both, and possibly the original Keeping Your Identity Small article, are committing the typical mind fallacy.
1hylleddin7yMy guess would be only as large as necessary to capture your terminal values, in so far as humans have terminal values.
0Will_Newsome7y"How much" I'm not sure, but a strategy that I find promising and that is rarely talked about is identity min-maxing.
6buybuydandavis7yWhen I "die", I won't cease to exist, I'll be united with my dead loved ones where we'll live forever and never be separated again.
2[anonymous]7yTo elaborate on the harm of the "live forever" belief: it makes people apathetic to the suffering of human life. We all get one life, nothing more. Some - many - people spend their entire lives in great pain from starvation, diseases, oppression, etc. An observer's belief in a perfect, eternal afterlife mitigates the horror of this waste of human life. "They may suffer now, but after death, they'll have an eternity of happiness and contentment."
1taelor7yThis argument presupposes that the "live forever" belief is false. While it is indeed false, offering that as an explanation for why the "death is good" belief is bad is unhelpful, since nearly all the people who hold the latter belief also hold the former.

The concept that forgiveness is a good thing. This is a bad concept because the word "forgive" suggests holding a grudge and then forgiving someone. It's simpler and better to just never hold grudges in the first place.

9Kaj_Sotala7yRetracted my previous comment, because it was agreeing with your claim that it's better to never hold grudges in the first place, which I quickly realized I also disagreed with. A grudge is an act of retaliation against someone who has harmed you. They hurt you, so you now retract your cooperation - or even engage in active harm against them - until they have made sufficient amends. If they hurt you by accident or it was something minor, then yes, probably better not to hold a grudge. But if they did something sufficiently bad, then it is better to hold a grudge to show them that you will not accept such behavior, and that you will only engage in further cooperation once they have made some sign of being trustworthy. Otherwise you are encouraging them to do it again, since you've shown that they can do it with impunity - and by this you are also harming others, by not punishing untrustworthy people and making it more profitable to be untrustworthy. You do not forgive DefectBot, nor do you avoid developing a grudge in the first place, you hold a grudge against it and will no longer cooperate. In this context, "forgiveness is a good thing" can be seen as a heuristic that encourages us to err on the side of punishing leniently, because too eager punishment will end up alienating people who would've otherwise been allies, because we tend to overestimate the chance of somebody having done a bad thing on purpose, because holding grudges is psychologically costly, or for some other reason.
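The iterated-game logic in the comment above (holding a grudge against DefectBot rather than forgiving it) can be sketched in a few lines of Python; the strategy names, payoff values, and round count here are illustrative assumptions, not anything from the original comment:

```python
# Sketch: a grudge-holding "grim trigger" strategy versus an unconditional
# defector (DefectBot) in an iterated prisoner's dilemma.
# Payoff values are the conventional ones, chosen for illustration.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def defect_bot(history):
    return "D"  # always defects, regardless of history

def grudger(history):
    # Cooperate until the opponent defects even once, then defect forever.
    return "D" if "D" in history["them"] else "C"

def naive_forgiver(history):
    return "C"  # never holds a grudge, no matter what

def play(strategy_a, strategy_b, rounds=10):
    hist_a = {"me": [], "them": []}
    hist_b = {"me": [], "them": []}
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a["me"].append(move_a); hist_a["them"].append(move_b)
        hist_b["me"].append(move_b); hist_b["them"].append(move_a)
    return score_a, score_b

print(play(grudger, defect_bot))         # grudger is exploited only once
print(play(naive_forgiver, defect_bot))  # the forgiver is exploited every round
```

Against DefectBot, the grudger loses only the first round before withdrawing cooperation, while the unconditional forgiver is exploited in every round - which is the comment's point about why grudges can be worth holding.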
2[anonymous]7yObligatory link to one of the highest-voted LW posts ever [http://lesswrong.com/lw/24o/eight_short_studies_on_excuses/]
6Prismattic7yEven worse: "Forgive and forget" as advice. It combines the problem with forgiveness with explicitly advising people not to update on the bad behavior of others.
2[anonymous]7yWhy blame forgiveness for the existence of grudges? The causal chain didn't go: moral philosophers invent forgiveness -> invention of resentment follows because everyone wants to give forgiveness a try.
1PrometheanFaun7yIt's also cowardly or anti-social. Forgiving is the easy thing to do: forgive and you no longer have to enact any reprisal, and you can potentially keep an ally. You also allow a malefactor to get away with their transgression, which will enable them to continue to pull the same shit on other people.
0Kaj_Sotala7ysixes and sevens's comment [http://lesswrong.com/r/discussion/lw/htw/bad_concepts_repository/98h1] applies to this one as well, I think.

Can someone put a "repository" tag on this post? Thanks!

"It isn't fair."

Ask someone to what "it" refers, and they'll generally be shocked by the notion that their words should have referents. When the shock wears off, the answer will be that "the situation" is unfair, which is a category error. The state of the universe is unfair? Is gravity unfair too? How about the fact that it rained yesterday?

Fairness is a quality of a moral being or rules enforced by moral beings. But there is rarely any particular unfair being or rule enforced by beings behind "it isn't fair".

"It isn't fair" empirically means "I don't like it and I approve of and support taking something out of someone's hide to quell my discomfort."

5RomeoStevens7yI have no problem with referring to states of the universe as unfair.
0buybuydandavis7yI'm sure the universe feels terribly guilty about its transgression when you do.
2pragmatist7yInducing guilt in the target of the judgment is not the sole (or even primary) purpose of moral judgment, nor is it a necessary feature. That the target must be capable of experiencing guilt is not a necessary feature either. Do you disagree with any of this? I am, in general, much more inclined to attribute unfairness to states of affairs than to people. Usually it's a state of affairs that people could potentially do something to alter/mitigate, though, so I wouldn't call a law of nature unfair.
0buybuydandavis7yIn case it wasn't clear, my comment on the universe feeling guilty was my way of pointing out the futility of considering the universe unfair. No.
3pragmatist7yBut human beings can change states of the universe. Is your point that they will not be motivated to do so if the judgment of unfairness is impersonal?
1wedrifid7yIt quite often means "I don't like it and will attempt to change it by the application of social pressure and other means as deemed necessary".

Within my lifetime, the world will end.

This too is a common belief of fundamentalist Christians (though by no means limited to them), and has many of the same effects as the belief that "Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems." For instance, no one will save for retirement if they think the world will end before they retire. And it's not important to worry about the state of the environment in 50 years, if the world ends in 25.

However this belief has an important distinction from the ... (read more)

-1Eugine_Nier7yExcept, it was a non-event even in those places where this didn't happen.
[-][anonymous]7y 1

There's one hallmark of truly bad concepts: they actively work against correct induction.

Sir Karl Popper (among others) made some strong arguments that induction is a bad concept.

-3timtyler7yThose arguments are now known to be nonsense [http://en.wikipedia.org/wiki/Bayesian_inference].
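For anyone unfamiliar with the linked article, here is a minimal sketch of the Bayesian updating it describes, i.e. induction made formal; the hypothesis and all probability values below are invented purely for illustration:

```python
# Minimal sketch of Bayesian updating (Bayes' theorem), the formalism the
# linked article describes. The hypothesis and the numbers are made up.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes' theorem."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# "The sun will rise tomorrow": start at 50% credence, then observe five
# sunrises, each more likely if the hypothesis is true than if it is false.
p = 0.5
for _ in range(5):
    p = update(p, likelihood_if_true=0.99, likelihood_if_false=0.5)
print(round(p, 4))  # confidence rises with each observation
```

Each observation multiplies the odds by the likelihood ratio (here 0.99/0.5), so credence climbs steadily without ever reaching certainty - the quantitative counterpart of induction that the Popperian critique targets.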

In the lies-to-children/simplification-for-the-purpose-of-heuristics department, there is a largish reference class of concepts that are basically built into the mind, whose proper replacements cannot be remotely explained in words due to the amount of math and technicality involved, and so are unknown to almost everyone, but that nonetheless can be very dangerous to take at face value. Some examples include (with an approximate name for the replacement concept in parentheses): "real" (your utility function), "truth" (provability), "free will" (optimizing agent), "is-a" (configuration spaces).

Similarity and contagion.

0AspiringRationalist7yCare to elaborate?
0Leonhart7yThis old post [http://lesswrong.com/lw/zr/the_laws_of_magic/] is a decent elaboration, which I should have linked in the first place.

"Come to terms with." Just update already. See also "seeking closure", "working through", "processing", all of which pieces of psychobabble are ways of clinging to not updating already.

I would agree with this if I didn't have a human brain that gets stuck on past events.

-6RichardKennaway7y
4Armok_GoB7yI always assumed it stood for "I have updated on that specific belief, but I have to also go through all the myriad connected ones and re-evaluate them, and then see how it all propagates back, and iterate this until the web is relaxed, and this will take a while because I have limited clock speed."
2TimS7yIn parallel to what sixes is saying, be careful about conflating "closure" and "working through." Closure: comes from an external source - can be unhealthy to pursue because you cannot force another person / entity to give whatever "it" is to you. Working through it: comes from an internal process - can be healthy if done successfully. In practice, effectively coming to terms with some loss involves shifting from seeking closure to working through the loss.