We recently established a successful Useful Concepts Repository.  It got me thinking about all the useless or actively harmful concepts I had carried around, in some cases for most of my life, before seeing them for what they were.  Then it occurred to me that I probably still have some poisonous concepts lurking in my mind, and I thought creating this thread might be one way to discover what they are.

I'll start us off with one simple example:  The Bohr model of the atom as it is taught in school is a dangerous thing to keep in your head for too long.  I graduated from high school believing that it was basically a correct physical representation of atoms.  (And I went to a *good* high school.)  Some may say that the Bohr model serves a useful role as a lie-to-children to bridge understanding to the true physics, but if so, why do so many adults still think atoms look like concentric circular orbits of electrons around a nucleus?  

There's one hallmark of truly bad concepts: they actively work against correct induction.  Thinking in terms of the Bohr model actively prevents you from understanding molecular bonding and, really, everything about how an atom can serve as a functional piece of a real thing like a protein or a diamond.

Bad concepts don't have to be scientific.  Religion is held to be a pretty harmful concept around here.  There are certain political theories which might qualify, except I expect that one man's harmful political concept is another man's core value system, so as usual we should probably stay away from politics.  But I welcome input as fuzzy as common folk advice you receive that turned out to be really costly.


The concept of "deserve" can be harmful. We like to think about whether we "deserve" what we get, or whether someone else deserves what he/she has. But in reality there is no such mechanism. I prefer to invert "deserve" into the future: deserve your luck by exploiting it.

Of course, "deserve" can be a useful social mechanism to increase desired actions. But only within that context.

Also "need". There's always another option, and pretending sufficiently bad options don't exist can interfere with expected value estimations.

And "should" in the moralizing sense. Don't let yourself say "I should do X". Either do it or don't. Yeah, you're conflicted. If you don't know how to resolve it on the spot, at least be honest and say "I don't know whether I want X or not X". As applied to others, don't say "he should do X!". Apparently he's not doing X, and if you're specific about why it is less frustrating and effective solutions are more visible. "He does X because it's clearly in his best interests, even despite my shaming. Oh..." - or again, if you can't figure it out, be honest about it "I have no idea why he does X"

4A1987dM11y
That would work nicely if I were so devoid of dynamic inconsistency that “I don't feel like getting out of bed” would reliably entail “I won't regret it if I stay in bed”; but as it stands, I sometimes have to tell myself “I should get out of bed” in order to do stuff I don't feel like doing but I know I would regret not doing.
3jimmy11y
This John Holt quote is about exactly this.
4Larks11y
This is a fact about you, not about "should". If "should" is part of the world, you shouldn't remove it from your map just because you find other people frustrating. One common, often effective strategy is to tell people they should do the thing. The correct response to meeting a child murderer is "No, Stop! You should not do that!", not "Please explain why you are killing that child." (also physical force)

This is a fact about you, not about "should". If "should" is part of the world, you shouldn't remove it from your map just because you find other people frustrating.

It's not about having conveniently blank maps. It's about having more precise maps.

I realize that you won't be able to see this as obviously true, but I want you to at least understand what my claim is: after fleshing out the map with specific details, your emotional approach to the problem changes and you become aware of new possible actions without removing any old actions from your list of options - and without changing your preferences. Additionally, the majority of the time this happens, "shoulding" is no longer the best choice available.

One common, often effective strategy is to tell people they should do the thing.

Sometimes, sure. I still use the word like that sometimes, but I try to stay aware that it's shorthand for "you'd get more of what you want if you do"/"I and others will shame you if you don't". It's just that so often that's not enough.

The correct response to meeting a child murderer is "No, Stop! You should not do that!", not "

2TheOtherDave11y
Mostly, the result I anticipate from "should"ing a norm-violator is that other members of my tribe in the vicinity will be marginally more likely to back me up and enforce the tribal norms I've invoked by "should"ing. That is, it's a political act that exerts social pressure. (Among the tribal members who might be affected by this is the norm-violator themselves.) Alternative formulas like "you'll get more of what you want if you don't do that!" or "I prefer you not do that!" or "I and others will shame you if you do that!" don't seem to work as well for this purpose. But of course you're correct that some norm-violators don't respond to that at all, and that some norm-violations (e.g. murder) are sufficiently problematic that we prefer the violator be physically prevented from continuing the violation.
-1DSherron11y
"Should" is not part of any logically possible territory, in the moral sense at least. Objective morality is meaningless, and subjective morality reduces to preferences. It's a distinctly human invention, and it's meaning shifts as the user desires. Moral obligations are great for social interactions, but they don't reflect anything deeper than an extension of tribal politics. Saying "you should x" (in the moral sense of the word) is just equivalent to saying "I would prefer you to x", but with bonus social pressure. Just because it is sometimes effective to try and impose a moral obligation does not mean that it is always, or even usually, the case that doing so is the most effective method available. Thinking about the actual cause of the behavior, and responding to that, will be far, far more effective. Next time you meet a child murderer, you just go and keep on telling him he shouldn't do that. I, on the other hand, will actually do things that might prevent him from killing children. This includes physical restraint, murder, and, perhaps most importantly, asking why he kills children. If he responds "I have to sacrifice them to the magical alien unicorns or they'll kill my family" then I can explain to him that the magical alien unicorns dont't exist and solve the problem. Or I can threaten his family myself, which might for many reasons be more reliable than physical solutions. If he has empathy I can talk about how the parents must feel, or the kids themselves. If he has self-preservation instincts then I can point out the risks for getting caught. In the end, maybe he just values dead children in the same way I value children continuing to live, and my only choice is to fight him. But probably that's not the case, and if I don't ask/observe to figure out what his motivations are I'll never know how to stop him when physical force is no option.
4ArisKatsaris11y
I really think this is a bad summarization of how moral injunctions act. People often feel a conflict, for example, between "I should X" and "I would prefer to not-X". If a parent has to choose between saving their own child and a thousand other children, they may very well prefer to save their own child, but recognize that morality dictated they should have saved the thousand other children. My own guess about the connection between morality and preferences is that morality is an unconscious estimation of our preferences about a situation, while trying to remove the bias of our personal stakes in it. (E.g. the parent recognizes that if their own child wasn't involved, if they were just hearing about the situation without personal stakes in it, they would prefer that a thousand children be saved rather than only one.) If my guess is correct it would also explain why there's disagreement about whether morality is objective or subjective (morality is a personal preference, but it's also an attempt to remove personal biases - it's in itself an attempt to move from subjective preferences to objective preferences).
0[anonymous]11y
That's a good theory.
-3DSherron11y
This is because people are bad at making decisions, and have not gotten rid of the harmful concept of "should". The original comment on this topic was claiming that "should" is a bad concept; instead of thinking "I should x" or "I shouldn't do x", on top of considering "I want to/don't want to x", just look at want/do not want. "I should x" doesn't help you resolve "do I want to x", and the second question is the only one that counts. I think that your idea about morality is simply expressing a part of a framework of many moral systems. That is not a complete view of what morality means to people; it's simply a part of many instantiations of morality. I agree that such thinking is the cause of many moral conflicts of the nature "I should x but I want to y", stemming from the idea (perhaps subconscious) that they would tell someone else to x, instead of y, and people prefer not to defect in those situations. Selfishness is seen as a vice, perhaps for evolutionary reasons (see all the data on viable cooperation in the prisoner's dilemma, etc.) and so people feel the pressure to not cheat the system, even though they want to. This is not behavior that a rational agent should generally want! If you are able to get rid of your concept of "should", you will be free from that type of trap unless it is in your best interests to remain there. Our moral intuitions do not exist for good reasons. "Fairness" and its ilk are all primarily political tools; moral outrage is a particularly potent tool when directed at your opponent. Just because we have an intuition does not make that intuition meaningful. Go for a week while forcing yourself to taboo "morality", "should", and everything like that. When you make a decision, make a concerted effort to ignore the part of your brain saying "you should x because it's right", and only listen to your preferences (note: you can have preferences that favor other people!). You should find that your decisions become easier and that you pre
2asr11y
These aren't the only two possibilities. Lots of important aspects of the world are socially constructed. There's no objective truth about the owner of a given plot of land, but it's not purely subjective either -- and if you don't believe me, try explaining it to the judge if you are arrested for trespassing. Social norms about morality are constructed socially, and are not simply the preferences or feelings of any particular individual. It's perfectly coherent for somebody to say "society believes X is immoral but I don't personally think it's wrong". I think it's even coherent for somebody to say "X is immoral but I intend to do it anyway."
-1DSherron11y
You're sneaking in connotations. "Morality" has a much stronger connotation than "things that other people think are bad for me to do." You can't simply define the word to mean something convenient, because the connotations won't go away. Morality is definitely not understood generally to be a social construct. Is that social construct the actual thing many people are in reality imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say that morality is just doing the right thing, and if society said something different then morality wouldn't change. Also, the land ownership analogy has no merit. Ownership exists as an explicit social construct, and I can point you to all sorts of evidence in the territory that shows who owns what. Social constructs about morality exist, but morality is not understood to be defined by those constructs. If I say "x is immoral" then I haven't actually told you anything about x. In normal usage I've told you that I think people in general shouldn't do x, but you don't know why I think that unless you know my value system; you shouldn't draw any conclusions about whether you think people should or shouldn't x, other than due to the threat of my retaliation. "Morality" in general is ill-defined, and often intuitions about it are incoherent. We make much, much better decisions by throwing away the entire concept. Saying "x is morally wrong" or "x is morally right" doesn't have any additional effect on our actions, once we've run the best preference algorithms we have over them. Every single bit of information contained in "morally right/wrong" is also contained in our other decision algorithms, often in a more accurate form. It's not even a useful shorthand; getting a concrete right/wrong value, or even a value along the scale, is not a well-defined operation, and thus the output does not have a consistent effect on our actions.
1asr11y
My original point was just that "subjective versus objective" is a false dichotomy in this context. I don't want to have a big long discussion about meta-ethics, but, descriptively, many people do talk in a conventionalist way about morality or components of morality and thinking of it as a social construction is handy in navigating the world. Turning now to the substance of whether moral or judgement words ("should", "ought", "honest", etc) are bad concepts -- At work, we routinely have conversations about "is it ethical/honest to do X", or "what's the most ethical way to deal with circumstance Y". And we do not mean "what is our private preference about outcomes or rules" -- we mean something imprecise but more like "what would our peers think of us if they knew" or "what do we think our peers ought to think of us if they knew". We aren't being very precise how much is objective, subjective, and socially constructed, but I don't see that we would gain from trying to speak with more precision than our thoughts actually have. Yes, these terms are fuzzy and self-referential. Natural language often is. Yes, using 'ethical' instead of other terms smuggles in a lot of connotation. That's the point! Vagueness with some emotional shading and implication is very useful linguistically and I think cognitively. The original topic was "harmful" concepts, I believe, and I don't think all vagueness is harmful. Often the imprecision is irrelevant to the actual communication or reasoning taking place.
-1DSherron11y
The accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn't protect you from being wrong; you can talk all day about "is it ethical to steal this cookie" but you are wasting your time. Either you're actually referring to specific concepts that have names (will other people perceive of this as ethically justified?) or you're babbling nonsense. Just use basic consequentialist reasoning and skip the whole ethics part. You gain literally nothing from discussing "is this moral", unless what you're really asking is "What are the social consequences" or "will person x think this is immoral" or whatever. It's a dangerous habit epistemically and serves no instrumental purpose.
0buybuydandavis11y
Subjectivity is part of the territory.
-1DSherron11y
Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. "Should" is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of "should" at least. Telling anyone "you shouldn't do that" when what you really mean is "I want you to stop doing that" isn't productive. If they want to do it then they don't care what they "should" or "shouldn't" do unless you can explain to them why they in fact do or don't want to do that thing. In the sense that "should do x" means "on reflection would prefer to do x" it is useful. The farther you move from that, the less useful it becomes.
3buybuydandavis11y
But that's not what they mean, or at least not all that they mean. Look, I'm a fan of Stirner and a moral subjectivist, so you don't have to explain the nonsense people have in their heads with regard to morality to me. I'm on board with Stirner, in considering the world populated with fools in a madhouse, who only seem to go about free because their asylum takes in so wide a space. But there are different kinds of preferences, and moral preferences have different implications than our preferences for shoes and ice cream. It's handy to have a label to separate those out, and "moral" is the accurate one, regardless of the other nonsense people have in their heads about morality.
-2DSherron11y
I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about "moral" situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a "moral" preference (excluding whatever membership defines the category)? Is there actually a cluster in thing space around moral preferences, which is distinctly separate from the "preferences" cluster? Do moral preferences really have different implications than preferences about shoes and ice cream? The only thing I can imagine is that when you phrase an argument to humans in terms of morality, you get different responses than to preferences ("I want Greta's house" vs "Greta is morally obligated to give me her house"). But I can imagine no other way in which the difference could manifest. I mean, a preference is a preference is a term in a utility function. Mathematically they'd better all work the same way or we're gonna be in a heap of trouble.
1buybuydandavis11y
I don't think moral feelings are entirely derivative of conceptual thought. Like other mammals, we have pattern matching algorithms. Conceptual confusion isn't what makes my preference for ice cream preferences different from my moral preferences. Is there a behavioral cluster about "moral"? Sure. How many people are hated for what ice cream they eat? For their preference in ice cream, even when they don't eat it? For their tolerance of a preference in ice cream in others? Not many that I see. So yeah, it's really different. And matter is matter, whether alive or dead, whether your shoe or your mom.
1buybuydandavis11y
I can't remember where I heard the anecdote, but I remember some small boy discovering the power of "need" with "I need a cookie!".
0Fhyve11y
I think any correct use of "need" is either implicitly or explicitly a phrase of the form "I need X (in order to do Y)".
6PhilGoetz11y
"Deserve" is harmful because we would often rather destroy utility than allow an undeserved outcome distribution. For instance, most people would probably rather punish a criminal than reform him. I nominate "justice" as the more basic bad concept. It's a good concept for sloppy thinkers who are incapable of keeping in mind all the harm done later by injustices now, a shortcut that lets them choose actions that probably increase utility in the long run. But it is a bad concept for people who can think more rigorously. A lot of these "bad concepts" will probably be things that are useful given limited rationality. “Are the gods not just?" "Oh no, child. What would become us us if they were?” ― C.S. Lewis, Till We Have Faces
3Viliam_Bur11y
I'd say "justice" is a heuristics; better than nothing, but not the best possible option. This could be connected with their beliefs about probability of successfully reforming the criminal. I guess the probability strongly depends on the type of crime and type of treatment, and even is not the same for all classes of criminals (e.g. sociopaths vs. people in relative rare situation that overwhelmed them). They may fear that with a good lawyer, "reform, don't punish" is simply a "get out of jail free" card. To improve this situation, it would help to make the statistics of reform successes widely known. But I would expect that in some situations, they are just not available. This is partially an availability heuristics on my part, and partially my model saying that many good intentions fail in real life. Also, what about unique crimes? For example, an old person murders their only child, and they do not want to have any other child, ever. Most likely, they will never do the same crime again. How specifically would you reform them? How would you measure the success of reforming them? If we are reasonably sure they never do the same thing again, even without a treatment, then... should we just shrug and let them go? The important part of the punishment is the precommitment to punish. If a crime already happened, causing e.g. pain to the criminal does not undo the past. But if the crime is yet in the future, precommiting to cause pain to the criminal influences the criminal's outcome matrix. Will precommitment to reforming have similar effects? ("Don't shoot him, or... I will explain you why shooting people is wrong, and then you will feel bad about it!")
0buybuydandavis11y
Actually, I think that's some of what they are keeping in mind and find motivating.
0PhilGoetz11y
If they were able to keep it in mind separately, they could include that in their calculations, instead of using justice as a kind of sufficient statistic to summarize it.
-2Eugine_Nier11y
Would you also two box on Newcomb’s problem?
2PhilGoetz11y
You can still use precommitment, but tie it to consequences rather than to Justice. Take Edward Snowden. Say that the socially-optimal outcome is to learn about the most alarming covert government programs, but not about all covert programs. So you want some Edward Snowdens to reveal some operations, but you don't want that to happen very often. The optimal behavior may be to precommit to injustice, punishing government employees who reveal secrets regardless of whether their actions were justified.
0Eugine_Nier11y
International espionage is probably one of the worst examples to attempt to generalize concepts like justice from. It's probably better to start with simpler (and more common) examples like theft or murder and then use the concepts developed on the simpler examples to look at the more complicated one.
4Kaj_Sotala11y
Upvoted, but I would note that it's interesting to see a moral value listed in a (supposedly value-neutral) "bad concepts repository". The idea that "deserve", in the sense you mention, is a harmful and meaningless concept is a rather consequentialist notion, and seeing this so highly upvoted says something about the ethics that this community has adopted - and if I'm right in assuming that a lot of the upvoters probably thought this a purely factual confusion with no real ethical element, then it says a bit about the moral axioms that we tend to take for granted. Again, not saying this as a criticism, just as something that I found interesting. E.g. part of my morality used to say that I only deserved some pleasures if I had acted in the right ways or was good enough: and this had nothing to do with a consequentialist it-is-a-way-of-motivating-myself-to-act-right logic, it was simply an intrinsic value that I would to some extent have considered morally right to have even if possessing it was actively harmful. Somebody coming along and telling me that "in reality, your value is not grounded in any concrete mechanism" would have had me going "well, in that case your value of murder being bad is not grounded in any concrete mechanism either". (A comment saying that "the concept of murder can be harmful, since in reality there is no mechanism for determining what's murder" probably wouldn't have been upvoted.)
2Larks11y
So you're saying we like thinking about a moral property, but we're wrong to do so, because this property is not reliably instantiated? Desert theorists do not need to disagree - there's no law of physics that means people necessarily get what they deserve. Rather, we are supposed to be the mechanism - we must regulate our own affairs so as to ensure that people get what they deserve.
2Leonhart11y
Perhaps the bad concept here is actually "karma", which I understand roughly to be the claim that there is a law of physics that means people necessarily get what they deserve.
4fubarobfusco11y
I think around here we can call that the just-world fallacy.
1Randy_M11y
To me, "deserve" flows from experiencing the predictable consequences of one's actions. If the cultural norm in my area is to wait in line at the bank, checkout, restaurant, etc., and I do so, I deserve to be served when I reach the front of the line (barring any prior actions towards the owners like theft, or personal connections). Someone who comes in later does not deserve to be served until others in the queue have been. Or, in a less relative example, if I see dark clouds and go out dressed for warm weather when I have rain clothes at hand, I deserve to feel uncomfortable. I do not deserve to be assaulted by random strangers, when I have not personally performed any actions that would initiate conflict that violence would resolve or done anything which tends to anger other people. Of course, the certainty of getting what one deserves is not 1, and one must expect that the unexpected will happen in some context eventually.
1Kawoomba11y
On the flipside, egalitarian instincts (e.g. "justice and liberty for all", "all men are created equal") are often deemed desirable, even though many times "deserve" stems from such concepts of what a society should supposedly be like, "what kind of society I want to live in". There is a tension between decrying "deserve" as harmful, while e.g. espousing the (in many cases) egalitarian instincts it stems from ("I should have as many tech toys as my neighbor", "I'm trying to keep up with the Joneses", etc.).
0pinyaka11y
I think this is a different flavor of deserving. Stabilizer is using deserve to explain how people got into the current situation while you're using it to describe a desirable future situation. The danger is assuming that because we are capable of acting in a way that gives people what they deserve, someone must have already done so in all situations, so everyone must have acted in such a way that they have earned their present circumstances through moral actions.
-3Eugine_Nier11y
The concept of "deserve" is only harmful to the extent people apply it to things they don't in fact deserve. In this respect, it's no different from the concept of "truth".
0ThrustVectoring11y
It's part of a larger pattern of mistaking your interpretations of reality for reality itself. There are no ephemeral labels floating around that are objectively true - you can't talk too much, work too hard, or be pathetic. You can only say things that other people would prefer not to hear, do work to the exclusion of other objectives, or be pitied by someone.
0wedrifid11y
If excessive work causes an overuse injury or illness then "worked too hard" would seem to be a legitimate way to describe reality. (Agree with the other two.)
0[anonymous]11y
I agree with that. I also suspect many people treat deserving of rewards and deserving of punishments as separate concepts. As a result they might reject one while staying attached to the other and become even more confused.

(Thinking about this for a bit, I noticed that it was more fruitful for me to think of "concepts that are often used unskillfully" rather than "bad concepts" as such. Then you don't have to get bogged down thinking about scenarios where the concept actually is pretty useful as a stopgap or whatever.)

4drethelin11y
That's well-known as the mindslaver problem in MTG

That's well-known as the mindslaver problem in MTG

Can you explain more how that problem relates to the mindslaver card in the MTG community? (Or provide a link? The top results on google were interesting but I think not the meme you were referring to.)

I think this is a slightly different issue. In Magic there's a concept of "strictly better" where one card is deemed to be always better than another (eg Lightning Bolt over Shock), as opposed to statistically better (eg Silver Knight is generally considered better than White Knight but the latter is clearly preferable if you're playing against black and not red). However, some people take "strictly better" too, um, strictly, and try to point out weird cases where you would prefer to have the seemingly worse card. Often these scenarios involve Mindslaver (eg if you're on 3 life and your opponent has Mindslaver you'd rather have Shock in hand than Lightning Bolt).

The lesson is to not let rare pathological cases ruin useful generalizations (at least not outside of formal mathematics).
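To make the distinction concrete, here is a minimal Python sketch with made-up per-matchup scores (the card comparisons and numbers are purely hypothetical, not actual evaluations):

```python
# "Strictly better": at least as good in every game state, better in at least one.
# "Statistically better": better on average, possibly worse in some rare state.

def strictly_better(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def statistically_better(a, b):
    return sum(a) / len(a) > sum(b) / len(b)

# Hypothetical value of two cards across four ordinary game states.
bolt_like = [3, 3, 3, 3]
shock_like = [2, 2, 2, 3]
print(strictly_better(bolt_like, shock_like))       # True
print(statistically_better(bolt_like, shock_like))  # True

# Add one rare pathological state (the Mindslaver scenario, where the stronger
# card gets turned against you): strict dominance breaks, while the statistical
# comparison barely moves -- which is the generalization worth keeping.
bolt_like.append(0)
shock_like.append(1)
print(strictly_better(bolt_like, shock_like))       # False
print(statistically_better(bolt_like, shock_like))  # True
```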

4Stabilizer11y
By the way, even in formal mathematics (and maybe especially in formal mathematics), while pathological cases are interesting, nobody discards perfectly useful theories just because the theory allows pathologies. For example, nobody hesitates to use measure theory in spite of the Banach-Tarski paradox; nobody hesitates to use calculus even though the Weierstrass function exists; few people hesitate to use the Peano axioms in spite of the existence of non-standard models of that arithmetic.
2Fhyve11y
Nitpick: I would consider the Weierstrass function a different sort of pathology than non-standard models or Banach-Tarski - a practical pathology rather than a conceptual pathology. The Weierstrass function is just a fractal. It never smooths out no matter how much you zoom in.
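For reference, the standard construction (a textbook fact, not something from the thread) is

$$ W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x), \qquad 0 < a < 1, \quad b \text{ a positive odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}, $$

which is continuous everywhere yet differentiable nowhere; every level of zoom reveals the same wiggles again, which is why it never smooths out.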
0Stabilizer11y
I agree that the Weierstrass function is different. I felt a tinge of guilt when I included the Weierstrass function. But I included it since it's probably the most famous pathology. That being said, I don't quite understand the distinction you're making between a practical and a conceptual pathology. The distinction I would make between the Weierstrass and the other two is that the Weierstrass is something which is just counter-intuitive whereas the other two can be used as a reason to reject the entire theory. They are almost antithetical to the purpose of the theory. Is that what you were getting at?
2wedrifid11y
Ahh, that would do it. The enemy being the one who uses the card would tend to make inferiority desirable in rather a lot of cases.

The word "is" in all its forms. It encourages category thinking in lieu of focussing on the actual behavior or properties that make it meaningful to apply. Example: "is a clone really you?" Trying to even say that without using "is" poses a challenge. I believe it should be treated the same as goto: occasionally useful but usually a warning sign.

[-][anonymous]11y130

So some, like Lycophron, were led to omit 'is', others to change the mode of expression and say 'the man has been whitened' instead of 'is white', and 'walks' instead of 'is walking', for fear that if they added the word 'is' they should be making the one to be many. -Aristotle, Physics 1.2

ETA: I don't mean this as either criticism or support, I just thought it might be interesting to point out that the frustration with 'is' has a long history.

6Viliam_Bur11y
E-Prime. We could support speaking this way on LW by making a "spellchecker" that would underline all the forbidden words.
6J_Taylor11y
In that sentence, I find the words "clone", "really" and "you" to be as problematic as "is".
8[anonymous]11y
You're perfectly comfortable with the indefinite article?
3J_Taylor11y
No, but I am much more comfortable with it than I am with the other words.
3A1987dM11y
Not having a word for “is” didn't stop the Chinese from coming up with the “white horse not horse” thing, though.

Implicitly assuming that you mapped out/classified all possible realities. One of the symptoms is when someone writes "there are only two (or three or four...) possibilities/alternatives..." instead of "The most likely/only options I could think of are..." This does not always work even in math (e.g. the statement "a theorem can be either true or false" used to be thought of as self-evidently true), and it is even less reliable in a less rigorous setting.

In other words, there is always at least one more option than you have listed! (This statement itself is, of course, also subject to the same law of flawed classification.)

There's a Discordian catma to the effect that if you think there are only two possibilities — X, and Y — then there are actually Five possibilities: X, Y, both X and Y, neither X nor Y, and something you haven't thought of.

8buybuydandavis11y
Jaynes had a recommendation for multiple hypothesis testing - one of the hypotheses should always be "something I haven't thought of".
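A minimal numerical sketch of that recommendation in Python, with made-up priors and likelihoods (the hypothesis names and numbers are purely illustrative):

```python
# Bayesian update over two explicit hypotheses plus a Jaynes-style catch-all,
# "something I haven't thought of".
priors = {"H1": 0.6, "H2": 0.35, "something_else": 0.05}

# Hypothetical likelihoods of the observed data under each hypothesis;
# the catch-all gets a deliberately vague, flat-ish likelihood.
likelihoods = {"H1": 0.01, "H2": 0.02, "something_else": 0.2}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in posteriors.items():
    print(f"{h}: {p:.2f}")
# H1: 0.26, H2: 0.30, something_else: 0.43 -- when the data fit every named
# hypothesis poorly, the catch-all absorbs probability mass, which is the
# signal to go look for hypotheses you haven't thought of yet.
```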

There is a cultural heuristic (especially in Eastern cultures) that we should respect older people by default. Now, this is not a useless heuristic, as the fact that older people have had more life experiences is definitely worth taking into account. But at least in my case (and I suspect in many other cases), the respect accorded was disproportionate to their actual expertise in many domains.

The heuristic can be very useful when respecting the older person is not really a matter of whether he/she is right or wrong, but more about appeasing power. It can be very useful to distinguish between the two situations.

How old is the "older" person? 30? 60? 90? In the last case, respecting a 90-years old person is usually not about appeasing power.

It seems more like retirement insurance. A social contract that while you are young, you have to respect old people, so that while you are old, you will get respect from young people. It depends on what specifically "respecting old people" means in a given culture. If you have to obey them in their irrational decisions, that's harmful. But if it just means speaking politely to them and providing them a hundred trivial advantages, I would say it is good in most situations.

Specifically, I am from Eastern Europe, where there is a cultural norm of letting old people sit on mass transit. As in: you see an old person near you, there are no free places to sit, so you automatically stand up and offer the old person your seat. The same for pregnant women. (There are some seats with a sign that requires you to do this, but the cultural norm is that you do it everywhere.) -- I consider this norm good, because for some people the difference in utility between standing and sitting is greater than for average people. (And of course, if you have a broken leg or something, that's an obvious exception.) So it was rather shocking for me to hear about cultures where this norm does not exist. Unfortunately, even in my country in recent decades this norm (and politeness in general) is decreasing.

7wedrifid11y
More relevant to the social reasons for the heuristic, they have also had more time to accrue power and allies. For most people that is what respect is about (awareness of their power to influence your outcomes conditional on how much deference you give them). Oh, yes, those were the two points I prepared in response to your first paragraph. You nailed both, exactly! Signalling social deference and actually considering an opinion to be strong Bayesian evidence need not be the same thing.
2PhilGoetz11y
But I think that in America today, we don't respect older people enough. Heck, we don't often even acknowledge their existence. Count what fraction of the people you pass on the street today are "old". Then count what fraction of people you see on TV or in the movies are old.
8buybuydandavis11y
I think that our age cohorted Lord of the Flies educational system has much to do with "we" being age cohorted as well.
4Stabilizer11y
It is not surprising that there isn't a proportional number of old people in TV/movies right now. And I suspect there never was. TV/movie audiences desire to view people who possess high-status markers. Two important markers are beauty and power. In reality, younger people typically have beauty but not much power. Older people have more power and less beauty. Since TV/movies don't have the constraints of reality, we can make young people who are beautiful also powerful. We can rarely make old people beautiful, with some exceptions, which TV/movies often exploit. I don't think this has anything to do with respect.
3jklsemicolon11y
This is a contradiction.
4Stabilizer11y
Sorry if it was confusing but you are taking it out of context. I actually meant: the fact that we don't have a proportional number of old people in TV/movies as in real life is not because we respect old people less in real life. It is simply a reflection of the freedoms available in TV/movies.

Bad Concept: Obviousness

Consider this - what distinguishes obviousness from a first impression? Like some kind of meta semantic stop sign, "it's obvious!" can be used as an excuse to stop thinking about a question. It can be shouted out as an argument with an implication to the effect of "If you don't agree with me instantly, you're an idiot," which can sometimes convince people that an idea is correct without the person actually supporting their points. I sometimes wonder if obviousness is just an insidious rationalization that we cling to when what we really want is to avoid thinking or gain instant agreement.

I wonder how much damage obviousness has done?

I've found the statement "that does not seem obvious to me" to be quite useful in getting people to explain themselves without making them feel challenged. It's among my list of "magic phrases" which I'm considering compiling and posting at some point.

6John_Maxwell11y
Looking forward to this.
2Elo9y
Magic phrases please?
2sixes_and_sevens9y
This seems like a good premise for a post inviting people to contribute their own "magic phrases". Sadly, I've used up my Discussion Post powers by making an idle low-quality post about weird alliances last week. I now need to rest in my crypt for a week or so until people forget about it.
0gjm9y
OK, I'm confused. (Probably because I'm missing a joke.) Reading the above in isolation I'd take it as indicating that you posted something that got you a big ball o' negative karma, which brought you below some threshold that meant you couldn't post to Discussion any more. Except that your "weird alliances" post is at +7, and your total karma is over 4k, and your last-30-days karma is over 200, and none of your posts or comments in the last week or so is net negative, and those are all very respectable numbers and surely don't disqualify anyone from doing anything. So, as I say, I seem to be missing a joke. Oh well.
3sixes_and_sevens9y
Making non-trivial posts carries psychological costs that I feel quite acutely. I would love to be able to plough through this (c.f. Comfort Zone Expansion) by making a lot of non-trivial posts. Unfortunately, making non-trivial posts also carries time costs that I feel quite acutely. I have quite fastidious editorial standards that make writing anything quite time-consuming (you would be alarmed at how much time I've spent writing this response), and this is compounded by engaging in long, sticky discussions. The Weird Alliances post was an attempt to write something quickly to lower standards, and as a result it was of lower quality than I would have liked. This made the psychological cost greater. I've yet to figure out how to unknot this perverse trade-off between psychological and time costs, but it means I would prefer to space out making posts.
4gjm9y
Ah, OK, understood. Best of luck with the unknotting. (I'd offer advice, but I have much the same problem myself.)
5Kaj_Sotala11y
Related: On Saying the Obvious
0Epiphany11y
Good link. I like that Grognor mentions that obviousness is just a matter of perception and people's ideas about what's obvious will vary, so we shouldn't assume other people know "obvious" things. However, I think that it's really important for us to be aware that if you think something is obvious, you stop questioning, and you're then left with what is essentially a first impression - but I don't see Grognor mention that semantic-stop-sign-like effect in the post, nor do I see anything about people using obviousness as a way to falsely support points. Do you think Grognor would be interested in updating the article to include additional negative effects of obviousness? Then again, putting too many points into an article makes articles confusing and less fun to read. Maybe I should write one. Do you know if anyone has written an article yet on obviousness as a meta semantic stop sign, or obviousness as a false supportive argument? If not, I'll do it.
2gwern11y
No; he's quit LW.
0Kaj_Sotala11y
Not that I could recall.
0Epiphany11y
Ok, I'll post about this in the open thread to gauge interest / see if anyone else knows of a pre-existing LW post on these specific obviousness problems.
3bokov11y
The worst professors I have had disproportionately shared the habit of dismissing as obvious concepts that weren't. Way to distract students from the next thing you were going to say.
3wedrifid11y
See also: Expecting Short Inferential Distances
5Viliam_Bur11y
Also related: Illusion of Transparency: Why No One Understands You; Explainers Shoot High. Aim Low!; Double Illusion of Transparency
0Epiphany11y
That's not quite what I meant, but that's a good article. What I meant is more along the lines of... two people are trying to figure out the same thing together, one jumps to a conclusion and the other one does not. It's that distance between the first observation and the truth I am referring to, not the distance between one person's perspective and another's. *Reads that article again.* I think this is my third time.
1Eugine_Nier11y
Well, in mathematics papers it tends to mean, "I'm certain this is true, but I can't think of an argument at the moment".
0Epiphany11y
Hahahah! Oh, that's terrible. Now I just realized that my meaning was not entirely explicit. I edited my statement to add the part about not supporting points.
0Armok_GoB11y
That seems like just a wrong use of obvious. When I say "obvious" I usually mean I cannot explain something because my understanding is subconscious and opaque to introspection.
2Epiphany11y
I'm glad you seem to be aware of this problem. Unfortunately, I don't think the rest of the world is aware of this. The dictionary currently defines obvious as meaning "easily seen" and "evident", unfortunately.

"Your true self", or "your true motivations". There's a tendency sometimes to call people's subconscious beliefs and goals their "true" beliefs and goals, e.g. "He works every day in order to be rich and famous, but deep down inside, he's actually afraid of success." Sometimes this works the other way and people's conscious beliefs and goals are called their "true" beliefs and goals in contrast to their unconscious ones. I think this is never really a useful idea, and the conscious self should just be called the conscious self, the subconscious self should just be called the subconscious self, and neither one of them needs to be privileged over the other as the "real" self. Both work together to dictate behavior.

"Rights". This is probably obvious to most consequentialists, but framing political discussions in terms of rights, as in "do we have the right to have an ugly house, or do our neighbors not have the right not to look at an ugly house if they don't want to?" is usually pretty useless. Similarly, "freedom" is not really a good terminal value, because pretty much anything can be defined as freedom, e.g. "by making smoking in restaurants illegal, the American people have the freedom not to smell smoke in a restaurant if they don't want to."

1[anonymous]11y
Most examples I recall, of pointing out which - conscious vs unconscious - is the "true" motivation, were attempts to attack someone's behavior. An accuser picks one motivation that is disagreeable or unpleasant, and uses it to cast aspersions on a positive behavior. I don't think that one self is being privileged over the other solely because of confusion as to which motivations really dictate behavior. It largely depends on which is more convenient for the accuser who designates the "true" self. Also, you may want to put your two bad concepts into different comments. That way they can be upvoted or downvoted separately.

Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems.

For example, many Christians hold this belief under the names the Kingdom, the Rapture, and/or the Second Coming (details depend on sect). It leads to excessive discounting of the future, and consequent poor choices. In *Collapse*, Jared Diamond writes about how apocalyptic Christians who control a mining company cause environmental problems in the United States.

Belief in a magic problem solving genie also causes people to fail to take effective action to improve their lives and help others, because they can just wait for the genie to do it for them.

6Desrtopa11y
I think this would probably be a pretty destructive idea were it not for the fact that for most people who hold it, it seems to be such a far belief that they scarcely consider the consequences.
3Viliam_Bur11y
If I believe the world will be destroyed during the next year, the near-mode reaction would be to quit my job, sell everything I can, and enjoy the money while I can. Luckily, most people who share this belief don't do that. But there are also long-term plans, such as getting more education, protecting nature, planning for retirement... and those need to be done in far mode, where "but the world will be destroyed this year" can be used as an excuse. -- I wonder how often people do this. Probably more often than in the previous example.
2bokov11y
Or that we will create a magic genie to grant all our wishes and solve our problems?

I am not sure I am comfortable with the idea of an entirely context-less "bad concept". I have the annoying habit of answering questions of the type "Is it good/bad, useful/useless, etc." with a counter-question "For which purpose?"

Yes, I understand that rare pathological cases should not crowd out useful generalizations. However given the very strong implicit context (along with the whole framework of preconceived ideas, biases, values, etc.) that people carry around in their heads, I find it useful and sometimes necessary to help/force people break out of their default worldview and consider other ways of looking at things. In particular, ways where good/bad evaluation changes the sign.

To get back to the original point, a concept is a mental model of reality, a piece of a map. A bad concept would be wrong and misleading in the sense that it would lead you to incorrect conclusions about the territory. So a "bad concept" is just another expression for a "bad map". And, um, there are a LOT of bad maps floating around in the meme aether...

2buybuydandavis11y
Good for what, for whom. Similarly, instead of grousing about how the world isn't the way I'd like it, or a person isn't the way I'd like them, I try to ask "what's valuable here for me?", which is a more productive focus.
0John_Maxwell11y
"Should" is another word like this. Generally when people say should, they either mean with respect to how best to achieve some goal, or else they're trying to make you follow their moral rules.

"Harmony" -- specifically the idea of root) progressions -- in music theory. (EDIT: That's "music theory", not "music". The target of my criticism is a particular tradition of theorizing about music, not any body of actual music.)

This is perhaps the worst theory I know of to be currently accepted by a mainstream academic discipline. (Imagine if biologists were Lamarckians, despite Darwin.)

What's wrong with it?

9komponisto11y
See discussion here, which has more links.
-2maia11y
Er. That's an article about the history of philosophy. Am I missing something, or was it supposed to be about music theory?
1komponisto11y
The link is to a comment.
2maia11y
Ah, ok. I was on my cellphone, so probably assumed that the instant-scroll-down-to-comment-section was a bug instead of a feature (or possibly it went to the wrong place, even).
8Richard_Kennaway11y
Could you expand on that? It has never been clear to me what music theory is — what constitutes true or false claims about the structure of a piece of music, and what constitutes evidence bearing on such claims. What makes the idea of "harmony" wrong? What alternative is "right"? Schenker's theory? Westergaard's? Riemann? Partch? (I'm just engaging in Google-scholarship here, I'd never heard of these people until moments ago.) But what would make these, or some other theory, right?

Could you expand on that? It has never been clear to me what music theory is — what constitutes true or false claims about the structure of a piece of music, and what constitutes evidence bearing on such claims.

You're in good company, because it's never been clear to music theorists either, even after a couple millennia of thinking about the problem.

However, I do have my own view on the matter. I consider the music-theoretical analogue of "matching the territory" to be something like data compression. That is, the goodness of a musical theory is measured by how easily it allows one to store (and thus potentially manipulate) musical data in one's mind.

Ideally, what you want is some set of concepts such that, when you have them in your mind, you can hear a piece of music and, instead of thinking "Wow! I have no idea how to do that -- it must be magic!", you think "Oh, how nice -- a zingoban together with a flurve and two Type-3 splidgets", and -- most importantly -- are then able to reproduce something comparable yourself.
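One crude way to make that compression analogy concrete: use a generic compressor as a stand-in for "concepts already in your head" and compare how compactly a patterned versus an unpatterned note sequence can be stored. This is my own toy illustration in Python, not the parent commenter's proposal; the note sequences are made up:

```python
import random
import zlib

def description_length(notes):
    """Compressed size in bytes: a crude proxy for how much exploitable
    structure a sequence has, i.e. how easily it could be held in mind."""
    return len(zlib.compress(" ".join(notes).encode()))

# A highly patterned passage: a short ascending figure repeated many times.
patterned = ["C", "D", "E", "F", "G"] * 20

# A passage of the same length with no repeated structure to exploit.
random.seed(0)
unpatterned = [random.choice("CDEFGAB") for _ in range(100)]

print(description_length(patterned))    # small: the repetition compresses away
print(description_length(unpatterned))  # larger: little structure to exploit
```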

0pianoforte61111y
I'm afraid that despite reading a fair chunk of Mathemusicality I've given up on Westergaard's "An Introduction to Tonal Theory" in favor of Steven Laitz's "The Complete Musician". Steven Laitz is a Schenkerian but his book is fairly standard and uses harmony, voice leading and counterpoint. Actually I'm beginning to conclude that if you want to compose, then starting off by learning music theory of any sort is totally wrongheaded. It is like trying to learn French by memorizing vocabulary and reading books on grammar (which is disturbingly how people try to learn languages in high school). The real way that people learn French is by starting off with very simple phrases and ideas, then gradually expanding their knowledge by communicating with people who speak French. Grammar books and vocabulary books are important, but only as a supplement to the actual learning that takes place from trying to communicate. Language and music are subconscious processes. I don't know what a similar approach to music composition would look like, but I'm reasonably convinced that it would be much better than the current system. I should admit, though, that I am monolingual and I can't compose music - so my thoughts are based only on theory and anecdotes.
3komponisto11y
If I may ask, what was your issue with Westergaard? (As a polyglot composer, I agree that there is an analogy of language proficiency to musical composition, but would draw a different conclusion: harmonic theory is like a phrasebook, whereas Westergaardian theory is like a grammar text. The former may seem more convenient for certain ad hoc purposes, but is hopelessly inferior for actually learning to speak the language.)
4pianoforte61111y
I don't have any particular issue with Westergaard, I just couldn't make it through the book. Perhaps with more effort I could, but I'm lacking motivation due to low expectancy. It was a long time ago that I attempted the book, but if I had to pinpoint why, there are a few things I stumbled over: The biggest problem was that I have poor aural skills. I cannot look at two lines and imagine what they sound like, so I have to play them on a piano. Add in more lines and I am quickly overwhelmed. A second problem was the abstractness of the first half of the book. Working through counterpoint exercises that didn't really sound like music did not hold my attention for very long. A third problem was the disconnect between the rules I was learning and my intuition. Even though I could do the exercises by following the rules, too often I felt like I was counting spaces rather than improving my understanding of how musical lines are formed. I think that your comparison is very interesting because I would predict that a phrasebook is much more useful than a grammar text for learning a language. The Pimsleur approach, which seems to be a decent way to start learning a language, is pretty much a phrase book in audio form with some spaced repetition thrown in for good measure. Of course the next step, where the actual learning takes place, is to start trying to communicate with native speakers, but the whole point of Pimsleur is to get you to that point as soon as possible. This is important because most people use grammatical rules implicitly rather than explicitly. Certainly grammar texts can be used to improve your proficiency in a language, but I highly doubt that anyone has actually learned a language using one. Without the critical step of communication, there is no mechanism for internalizing the grammatical rules. (Sorry for taking such a long tangent into language acquisition, I wasn't initially planning on stretching the analogy that far.)
5komponisto11y
Thanks for your feedback on the Westergaard text. I think many of your problems will be addressed by the material I plan to write at some indefinite point in the future. It's unfortunate that ITT is the only exposition of Westergaardian theory available (and even it is not technically "available", being out of print), because your issues seem to be with the book and not with the theory that the book aims to present. There is considerable irony in what you say about aural skills, because I consider the development of aural skills -- even at the most elementary levels -- to be a principal practical use of Westergaardian theory. Unfortunately, Westergaard seems not to have fully appreciated this aspect of his theory's power, because he requests of the reader a rather sophisticated level of aural skills (namely the ability to read and mentally hear a Mozart passage) as a prerequisite for the book -- rather unnecessarily, in my opinion. This leads to the point about counterpoint exercises, which, if designed properly, should be easier to mentally "hear" than real music -- that is, indeed, their purpose. Unfortunately, this is not emphasized enough in ITT. Thank goodness I'm here to set you straight, then. Phrasebooks are virtually useless for learning to speak a language. Indeed they are specifically designed for people who don't want to learn the language, but merely need to memorize a few phrases (hence the name), for -- as I said -- ad hoc purposes. (Asking where the bathroom is, what someone's name is, whether they speak English, that sort of thing.) Here's an anecdote to illustrate the problem with phrasebooks. When I was about 10 years old and had just started learning French, my younger sister got the impression that pel was the French word for "is". The reason? I had informed her that the French translation of "my name is" was je m'appelle -- a three syllable expression whose last syllable is indeed pronounced pel. What she didn't realize was that the three s
4pianoforte61110y
Alright, I've read most of the relevant parts of ITT. I only skimmed the chapter on phrases and movements and I didn't read the chapter on performance. I do have one question: is the presence of the borrowing operation the only significant difference between Westergaardian and Schenkerian theory? As for my thoughts, I think that Westergaardian theory is much more powerful than harmonic theory. It is capable of accounting for the presence of every single note in a composition, unlike harmonic theory, which seems to be stuck with a four-part chorale texture plus voice leading for the melody. Moreover, Westergaardian analyses feel much more intuitive and musical to me than harmonic analyses. In other words, it's easier for me to hear the Westergaardian background than it is for me to hear the chord progression. For me the most distinctive advantage of Westergaardian analyses is that it respects the fact that notes do not have to "line up" according to a certain chord structure. Notes that are sounding at the same time may be performing different functions, whereas harmonic theory dictates that notes sounding at the same time are usually "part of a chord" which is performing some harmonic function. For example, it's not always clear to me that a tonic chord in a piece (which harmonic theory regards as being a point of stability) is really an arrival point or a result of notes that just happen to coincide at that moment. The same is true for other chords. A corollary of this seems to be that harmonic analyses work fine when the notes do consistently line up according to their function, which happens all the time in pop music and possibly in Classical music, although I'm not certain of this. Having said that, my biggest worry with Westergaardian theory is that it is almost too powerful. Whereas harmonic theory constrains you to producing notes that do sound in some sense tonal (for a very powerful example of this see here), Westergaardian theory seems to allow you to do almos
4bogus10y
Note that when analyzing tonal music with Westergaardian analysis, anticipation and delay tend to occur at relatively shallow levels of the piece's structure. The deeper you go, the more the notes are going to be "aligned", just as they might be expected to be in a harmonic analysis. Moreover, the constraints of consonance and dissonance in aligned lines (as given by the rules of counterpoint; see Westergaard's chapters on species counterpoint) also come into play at these deeper levels. So it seems that Westergaardian analysis can do everything that you expect harmonic analysis to do, and of course even more. Instead of having "harmonic functions" and "chords", you have constraints that force you to have some kind of consonance in the background.
3komponisto10y
The short answer is: definitely not. The long answer (a discussion of the relationship between Schenkerian and Westergaardian theory) is too long for this comment, but is something I plan to write about in the future. For now, be it noted simply that the two theories are quite distinct (for all that Westergaardian theory owes to Schenker as a predecessor) -- and, in particular, a criticism of Schenker can by no means necessarily be taken as a criticism of Westergaard, or vice versa (see below).

The way I like to put it is that in Westergaardian theory, the function of a note is defined by its relationship to other notes in its line (and to the local tonic, of course), and not by its relationship to the "root" of the "chord" to which it belongs (as in harmonic theory).

If by "work fine" you mean that it is in fact possible to identify the "appropriate" Roman numerals to assign in such cases, sure, I'll give you that. But what is such an "analysis" telling you? Taken literally, it means that you should understand the notes in the passage in terms of the indicated progression of "roots". Which, in turn, implies that in order to hear the passage in your head, you should first, according to the analyst, imagine the succession of roots (which often, indeed typically, move by skip), and only then imagine the other notes by relating them to the roots -- with the connection of notes in such a way as to form lines being a further, third step. To me, this is self-evidently a preposterously circuitous procedure when compared with the alternative of imagining lines as the fundamental construct, within which notes move by step -- without any notion of "roots" entering at all.

I am as profoundly unimpressed with that "demonstration" as I am with that whole book and its author -- of which, I must say, this example is entirely characteristic, in its exclusive obsession with the most superficial aspects of musical hearing and near-total amputation of the (much deeper) musical phenomena.
2pianoforte61110y
Thanks; this operation is notably absent in Schenkerian theory (I think). I suppose I will have to live with that for now.

By "work fine", I mean that the theory is falsifiable and has predictive power. If you are given half of the bars in a Mozart piece, harmonic theory can give a reasonable guess as to the rest. I'm not that confident about Mozart, though; pop music can certainly be predicted using harmonic theory.

Could it be that your subjective experience of music is different from most people's? It certainly sounds very alien to me. While it's true that listening to the long-range structure of a sonata is pleasurable to me, there are certainly 3- to 4-bar excerpts that I happen to enjoy in isolation, without context. But you think that 3 bars is not enough to distinguish non-music from music. You also claim that the stylistic differences are minor, yet I would wager that virtually 100% of people (with hearing) can point out (d) as being the only tonal example.

This is very strange to me; suppose Mozart were to replace all of the F's in the Sonata in C major with F sharps. I think that the piece of music would be worse. Not objectively or fundamentally worse, just worse to a typical listener's ears. A pianist who was used to playing Mozart might wonder if there was a mistake in the manuscript.
4komponisto10y
On the contrary, Schenker uses it routinely.

If you're talking about the expectations that a piece sets up for the listener, Westergaardian theory has much more to say about that than harmonic theory does. Or, let me rather say: an analyst equipped with Westergaardian theory is in a better position to talk about that, in much greater detail and precision, than one equipped with harmonic theory. You might try having a closer look at Chapter 8 of ITT, which you said you had only skimmed so far. (A review of Chapter 7 wouldn't hurt either.)

Not in the sense that you mean, no. (Otherwise my answer might be "I should hope so!") I'm not missing anything that "most people" would hear. It's the opposite: I almost certainly hear more than an average human: more context, more possibilities, more vividness. (What kind of musician would I be were it otherwise?) I'm acutely aware of the differences between passages (a) through (d). It's just that I also see (or, rather, hear) a much larger picture -- a picture that, by the way, I would like more people to hear (rather than being discouraged from doing so and having their existing prejudices reinforced).

That is not what I said. You would be closer if you said I thought 3 bars were not enough to distinguish good music from bad music. But of course it depends on how long the 3 bars are, and what they contain. My only claim here is that these particular excerpts are too short and contain too little to be judged against each other as music. And again, this is not because I don't hear the effect of the constraints that produced (d) as opposed to (a), but rather most probably because: (1) I'm not impressed by (d) because I understand how easy it is to produce; and (2) I hear structure in (a) that "most people" probably don't hear (and certainly aren't encouraged to hear by the likes of Tymoczko), not because they can't hear it, but mostly because they haven't heard enough music to be in the habit of noticing those phenomena; and,
0pianoforte61110y
After looking at Chapter 8, it's becoming obvious that learning Westergaardian theory to an extent that would be actually useful to me is going to take a lot of time and analyses (and I don't know if I will get around to that any time soon). Regarding harmony, this document may be of interest to you - it's written by a Schenkerian who is familiar with Westergaard: http://www.artsci.wustl.edu/~rsnarren/texts/HarmonyText.pdf
0pianoforte61111y
One more question: do you also think that Westergaardian theory is superior for understanding jazz? I've encountered jazz pianists on the internet who insist that harmony and voice leading are ABSOLUTELY ESSENTIAL for doing jazz improvisation, and that anyone who suggests otherwise is a heretic who deserves to be burnt at the stake. Hyperbole aside, jazz classes do seem to incorporate a lot of harmony and voice leading into their material, and their students do seem to make fine improvisers and composers.

Oh, and for what it's worth, you've convinced me to give Westergaard another shot.
1komponisto11y
Yes. My claim is not repertory-specific. (Note that this is my claim I'm talking about, not Westergaard's.) More generally, I claim that the Westergaardian framework (or some future theory descended from it) is the appropriate one for understanding any music that is to be understood in terms of the traditional Western pitch space (i.e. the one represented by a standardly-tuned piano keyboard), as well as any music whose pitch space can be regarded as an extension, restriction, or modification of the latter.

How many of them are familiar with Westergaardian (or even Schenkerian) theory? I've encountered this attitude among art-music performers as well. My sense is that such people are usually confusing the map and the territory (i.e. confusing music theory and music), à la Phil Goetz above. They fail to understand that the concepts of harmonic theory are not identical to the musical phenomena they purport to describe, but instead are merely one candidate theory of those phenomena.

Some of them do -- probably more or less exactly the subset who have enough tacit knowledge not to need to take their theoretical instruction seriously, and the temperament not to want to.

I'm delighted to hear that, of course, although I should reiterate that I don't expect ITT to be the final word on Westergaardian theory.
0pianoforte61111y
This was my hypothesis as well (and it's what the jazz musician responded to with hostility). If this is true, though, then why are jazz musicians so passionate about harmony and voice leading? They seem to really believe that it's a useful paradigm for understanding music. Perhaps this is just belief in belief?
-1komponisto11y
It's difficult to know what other people are thinking without talking to them directly. With this level of information I would make only two points:

1) It doesn't count as "passionate about harmony and voice leading" unless they understand Westergaardian theory well enough to contrast the two. Otherwise it just amounts to "passionate about music theory of some kind".

2) It doesn't have anything to do with jazz. If they're right that harmony is the superior theory for jazz, then it's the superior theory of music in general. Given the kind of theory we're looking for (cf. Chapter 1 of ITT), different musical traditions should not have different theories. (Analogy: if you find that the laws of physics are different on different planets, you have the wrong idea about what "laws of physics" means.)
0pianoforte61111y
I don't think that we disagree all that much. We both agree that there are some people who are able to learn structural rules implicitly, without explicit instruction. We typically call these people "good at languages" or "good at music". Our main disagreement, therefore, is how large that set of people is. I happen to think that it is very large, given that everyone learns the grammatical rules of their first language this way, and a fair number of polyglots learn their second language this way as well (unless you deny the usefulness of Pimsleur-like approaches). If I understand you correctly, you think that the group of people who are able to properly learn a language/music this way is smaller, because it often results in bad habits and poor inferences about the structure of the language. I would endorse this as well -- grammatical texts are useful for refining your understanding of the structure of a language.

Because it is scary to learn to swim without arm floats, even if there is someone else helping you. (I think that phrasebooks are analogous to arm floats.) Other than that, I would agree with most of this. If you want secondary instruction in a language, then you should probably use a grammar book and not a phrasebook, and I may return to Westergaard after I have taken some composition lessons. Also, I would go one step further and say that not only is it possible to learn a language via immersion, it is necessary, and any other tools you may use to learn a language should help to support this goal.
0NancyLebovitz11y
Tentatively -- grammatical texts have a complex relationship with language. They can be somewhat useful but still go astray because they're written for a different language, the classic example being Latin-based grammar occasionally being used to force English out of its normal use. I suspect the same happens when formal grammar is used to claim that casual and/or spoken English is wrong.
1A1987dM11y
Modern descriptive grammars (like this one) aren't anywhere near that bad.
0Douglas_Knight10y
Yes, accurate grammars are better than inaccurate grammars. But I think you are focusing too much on the negative effects and not noticing the positive effects. It is hard to notice people's understanding of grammar except when they make a mistake or correct someone else, both of which are generally negative effects. Americans are generally not taught English grammar, but often are taught a foreign language, including grammar. Huge numbers of them claim that studying the foreign grammar helped them understand English grammar. Of course, they know the grammar is foreign, so they don't immediately impose it on English. But they start off knowing so little grammar that the overlap with the other language is already quite valuable, as are the abstractions involved.
0Fhyve11y
I have read around and I still can't really tell what Westergaardian theory is. I can see how harmony fails as a framework (it doesn't work very well for a lot of music I have tried to analyze), so I think there is a good chance that Westergaard is (more) right. However, other than the fact that there are these things called lines, and that there exist rules for manipulating them (I have not actually found a list or description of such rules), I am not sure how this is different from counterpoint. I don't want to go and read a textbook to figure this out; I would rather read ~5-10 pages of exposition and big-picture overview.
2komponisto11y
The best I can recommend is the following article:

Peles, Stephen. "An Introduction to Westergaard's Tonal Theory." In Theory Only 13:1-4 (September 1997), pp. 73-94.

It's a rather obscure journal, but if you have access to a particularly good university library (or interlibrary loan), you may be able to find it. Failing that, if you PM me with your email address, I can send you the text of the article (without figures, unfortunately).
2Douglas_Knight11y
The defunct journal's web site is open access. Text (search for Peles). Table of contents of page by page scans; first page.
1komponisto11y
Wow, thanks!
-5PhilGoetz11y

Entitlement and anti-entitlement, especially in the context of: 1. the whole Nice Guy thing, and 2. the discourse on the millennial generation. It becomes a red herring, and in the former case leads to ambiguity between 'a specific person must do something' and 'this should be easier than it is'. Plus it seems to turn semi-utilitarians into deontologists. In the case of millennials, it tends to involve big inferential-distance problems.

This one is well known, but having an identity that is too large can make you more susceptible to being mind-killed.

3shminux11y
How much of an identity is just right?
8wedrifid11y
"I'm a gorgeous blonde child who roams the forest alone stealing food from bears." is just right.
5tondwalkar11y
Paul Graham suggests keeping your identity as small as sustainable. [1] That is, it's beneficial to keep your identity to just "rationalist" or just "scientist", since they contradict having a large identity. He puts it better than I do:

[1] http://www.paulgraham.com/identity.html
0Armok_GoB11y
This goes well for beliefs included in your identity, but I've always been uncertain about whether it's supposed to also extend to things like episodic memories (separated from believing the information contained in them), relationships in neutral groups such as a family or a fandom, precommitments, or mannerisms.
0tondwalkar11y
I'm not sure what you're saying here; you think of your memories as part of your identity? These memberships are all heuristics for expected interactions with people. Nothing actionable is lost if you Bayes-induct for each situation separately, save the effort you're using to compute and the cognitive biases and emotional reactions you get from claiming "membership". Alternatively, you could still use the membership heuristic, but with a mental footnote that you're only using it because it's convenient, and that there are senses in which the membership's representation of you may be misleading.
1Armok_GoB11y
@episodic memories: I don't personally have any like that, but I hear many people do consider the subjective experience of pivotal events in their life as part of who they are.

@relationships: I'm talking about the literal membership here, the thing that exists as a function of the entanglement between states in different brains.

To clarify, I'm not talking about "your identity" here as in the information about what you consider your identity, but rather the referent of that identity. To many people, their physical bodies are part of their identity in this sense. Even distant objects, or large organizations like nations, can be, in extreme cases. Just because it's a trend here to only have information that resides in your own brain as part of your identity doesn't mean that's necessary, or even especially common in its pure form in most places.
2tondwalkar11y
Ah, it appears we're talking about different things. I'm referring to ideological identity ("I'm a rationalist", "I'm a libertarian", "I'm pro-choice", "I'm an activist"), which I think is distinct from "I'm my mind" identity. In particular, you can be primed psychologically and emotionally by the former more than the latter.
1Armok_GoB11y
It seems like we both, and possibly the original Keeping Your Identity Small article, are committing the typical mind fallacy.
2hylleddin11y
My guess would be only as large as necessary to capture your terminal values, in so far as humans have terminal values.
0Will_Newsome11y
"How much" I'm not sure, but a strategy that I find promising and that is rarely talked about is identity min-maxing.

Death is good.

6buybuydandavis11y
When I "die", I won't cease to exist, I'll be united with my dead loved ones where we'll live forever and never be separated again.
2[anonymous]11y
To elaborate on the harm of the "live forever" belief: it makes people apathetic to the suffering of human life. We all get one life, nothing more. Some - many - people spend their entire lives in great pain from starvation, disease, oppression, etc. An observer's belief in a perfect, eternal afterlife mitigates the horror of this waste of human life. "They may suffer now, but after death, they'll have an eternity of happiness and contentment."
1taelor11y
This argument presupposes that the "live forever" belief is false. While it is, offering it as an explanation for why the "death is good" belief is bad is unhelpful, as nearly all the people who hold the latter belief also hold the former.

Within my lifetime, the world will end.

This too is a common belief of fundamentalist Christians (though by no means limited to them), and has many of the same effects as the belief that "Within my lifetime, a magic genie will appear that grants all our wishes and solves all our problems." For instance, no one will save for retirement if they think the world will end before they retire. And it's not important to worry about the state of the environment in 50 years, if the world ends in 25.
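Here is a toy expected-value sketch of that effect -- the payoff numbers and the helper function below are made up purely for illustration, nothing rigorous:

```python
# Toy illustration: how a high credence that the world ends before
# retirement collapses the expected value of saving. All numbers invented.

def expected_value_of_saving(p_world_ends, cost_now=1.0, payoff_at_retirement=3.0):
    """Expected net value of putting aside one unit of consumption now.

    cost_now:             utility given up today by saving
    payoff_at_retirement: utility received if you (and the world) make it
                          to retirement
    """
    return (1 - p_world_ends) * payoff_at_retirement - cost_now

for p in (0.0, 0.5, 0.8, 0.95):
    print(f"P(world ends first) = {p:.2f} -> EV of saving = {expected_value_of_saving(p):+.2f}")

# P(world ends first) = 0.00 -> EV of saving = +2.00
# P(world ends first) = 0.50 -> EV of saving = +0.50
# P(world ends first) = 0.80 -> EV of saving = -0.40
# P(world ends first) = 0.95 -> EV of saving = -0.85
```

Past some threshold credence, saving (or caring about the environment in 50 years) stops being worth anything at all under this belief.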

However this belief has an important distinction from the ...

-1Eugine_Nier11y
Except, it was a non-event even in those places where this didn't happen.
[anonymous]11y20

There's one hallmark of truly bad concepts: they actively work against correct induction.

Sir Karl Popper (among others) made some strong arguments that induction is a bad concept.

-4timtyler11y
Those arguments are now known to be nonsense.

Can someone put a "repository" tag on this post? Thanks!

"It isn't fair."

Ask someone to what "it" refers, and they'll generally be shocked by the notion that their words should have referents. When the shock wears off, the answer will be that "the situation" is unfair, which is a category error. The state of the universe is unfair? Is gravity unfair too? How about the fact that it rained yesterday?

Fairness is a quality of a moral being or rules enforced by moral beings. But there is rarely any particular unfair being or rule enforced by beings behind "it isn't fair".

"It isn't fair" empirically means "I don't like it and I approve of and support taking something out of someone's hide to quell my discomfort."

6RomeoStevens11y
I have no problem with referring to states of the universe as unfair.
-2buybuydandavis11y
I'm sure the universe feels terribly guilty about its transgression when you do.
3pragmatist11y
Inducing guilt in the target of the judgment is not the sole (or even primary) purpose of moral judgment, nor is it a necessary feature. That the target must be capable of experiencing guilt is not a necessary feature either. Do you disagree with any of this? I am, in general, much more inclined to attribute unfairness to states of affairs than to people. Usually it's a state of affairs that people could potentially do something to alter/mitigate, though, so I wouldn't call a law of nature unfair.
0buybuydandavis11y
In case it wasn't clear, my comment on the universe feeling guilty was my way of pointing out the futility of considering the universe unfair. No.
4pragmatist11y
But human beings can change states of the universe. Is your point that they will not be motivated to do so if the judgment of unfairness is impersonal?
2wedrifid11y
It quite often means "I don't like it and will attempt to change it by the application of social pressure and other means as deemed necessary".

The concept that forgiveness is a good thing. This is a bad concept because the word "forgive" suggests holding a grudge and then forgiving someone. It's simpler and better to just never hold grudges in the first place.

Retracted my previous comment, because it was agreeing with your claim that it's better to never hold grudges in the first place, which I quickly realized I also disagreed with.

A grudge is an act of retaliation against someone who has harmed you. They hurt you, so you now retract your cooperation - or even engage in active harm against them - until they have made sufficient amends. If they hurt you by accident or it was something minor, then yes, it's probably better not to hold a grudge. But if they did something sufficiently bad, then it is better to hold a grudge, to show them that you will not accept such behavior and that you will only engage in further cooperation once they have made some sign of being trustworthy. Otherwise you are encouraging them to do it again, since you've shown that they can do it with impunity - and by this you are also harming others, by not punishing untrustworthy people and thus making it more profitable to be untrustworthy. You do not forgive DefectBot, nor do you avoid developing a grudge against it in the first place; you hold a grudge against it and no longer cooperate.

In this context, "forgiveness is a good thing" can be seen as a heuristic that encourages us to err on the side of punishing leniently, because too eager punishment will end up alienating people who would've otherwise been allies, because we tend to overestimate the chance of somebody having done a bad thing on purpose, because holding grudges is psychologically costly, or for some other reason.
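Here's a minimal iterated-prisoner's-dilemma sketch of both points -- the value of a grudge against a consistent defector, and the value of leniency when defections may be accidental. The strategy names and payoff numbers below are just the standard textbook ones, chosen for illustration:

```python
import random

# Payoffs for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def defect_bot(history):
    return "D"

def grudger(history):
    # Grim trigger: cooperate until the other player has ever defected,
    # then never cooperate again.
    return "D" if any(their == "D" for _, their in history) else "C"

def lenient(history):
    # A more forgiving rule: retaliate only after two defections in a row.
    if len(history) >= 2 and history[-1][1] == "D" and history[-2][1] == "D":
        return "D"
    return "C"

def play(strat_a, strat_b, rounds=200, noise=0.0):
    """Iterated PD. `noise` is the chance that a player accidentally
    defects despite intending to cooperate. Returns (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        if a == "C" and random.random() < noise:
            a = "D"
        if b == "C" and random.random() < noise:
            b = "D"
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

random.seed(0)
# A grudge caps how much an unrepentant defector can extract from you:
print("grudger vs DefectBot:", play(grudger, defect_bot))
# But between well-meaning players who occasionally slip up, mutual grudges
# lock in permanent defection, while lenient players recover and keep most
# of the gains from cooperation:
print("grudger vs grudger, 5% slips:", play(grudger, grudger, noise=0.05))
print("lenient vs lenient, 5% slips:", play(lenient, lenient, noise=0.05))
```

With no noise, the grudge does exactly what the comment above says: DefectBot gets one free hit and nothing more. With a little noise, the lenient pair ends up far ahead of the grudging pair, which is the case for erring on the side of forgiveness.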

4A1987dM11y
Obligatory link to one of the highest-voted LW posts ever

Even worse: "Forgive and forget" as advice. It combines the problem with forgiveness with explicitly advising people not to update on the bad behavior of others.

4[anonymous]11y
Why blame forgiveness for the existence of grudges? The causal chain didn't go: moral philosophers invent forgiveness -> invention of resentment follows because everyone wants to give forgiveness a try.
1PrometheanFaun11y
It's also cowardly or anti-social. Forgiving is the easy thing to do: forgive, and you no longer have to enact any reprisal, and you can potentially keep an ally. You also allow a malefactor to get away with their transgression, which will enable them to continue to pull the same shit on other people.
0Kaj_Sotala11y
sixes and sevens's comment applies to this one as well, I think.

Similarity and contagion.

0AspiringRationalist11y
Care to elaborate?
0Leonhart11y
This old post is a decent elaboration, which I should have linked in the first place.

In the lies-to-children/simplification-for-the-purpose-of-heuristics department, there is a largish reference class of concepts that are basically built into the mind, whose proper replacements there is no way to even remotely explain in words (due to the large amounts of math and technicality involved), and which are therefore unknown to almost everyone, but which can nonetheless be very dangerous to take at face value. Some examples include (with an approximate name for the replacement concept in parentheses): "real" (your utility function), "truth" (provability), "free will" (optimizing agent), "is-a" (configuration spaces).

"Come to terms with." Just update already. See also "seeking closure", "working through", "processing", all of which pieces of psychobabble are ways of clinging to not updating already.

I would agree with this if I didn't have a human brain that gets stuck on past events.

-8Richard_Kennaway11y
6Armok_GoB11y
I always assumed it stood for "I have updated on that specific belief, but I also have to go through all the myriad connected ones and re-evaluate them, then see how it all propagates back, and iterate this until the web is relaxed -- and this will take a while because I have limited clock speed."
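As a toy sketch of what "relaxing the web" could look like -- the beliefs, the edge structure, and the averaging rule below are all made up for illustration, not a real model of belief propagation:

```python
# Beliefs are credences in [0, 1]; an edge means "these two beliefs should
# roughly agree". After updating one belief, hold it fixed and repeatedly
# nudge every other belief toward the average of its neighbours until
# nothing moves much -- i.e. until the web has "relaxed".

credence = {"A": 0.9, "B": 0.8, "C": 0.7, "D": 0.6}
neighbours = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

credence["A"] = 0.1      # new evidence: belief A drops sharply
clamped = {"A"}          # hold the freshly updated belief fixed

for step in range(100):  # limited "clock speed"
    biggest_change = 0.0
    for belief, nbrs in neighbours.items():
        if belief in clamped:
            continue
        target = sum(credence[n] for n in nbrs) / len(nbrs)
        new = 0.5 * credence[belief] + 0.5 * target   # move halfway toward agreement
        biggest_change = max(biggest_change, abs(new - credence[belief]))
        credence[belief] = new
    if biggest_change < 1e-3:
        break            # the web is relaxed

print(f"relaxed after {step + 1} passes: {credence}")
```

The point of the sketch is just that a single update is cheap, but letting it propagate through everything connected to it takes many passes.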
4TimS11y
In parallel to what sixes is saying, be careful about conflating "closure" and "working through."

Closure: comes from an external source - can be unhealthy to pursue, because you cannot force another person / entity to give whatever "it" is to you.

Working through it: comes from an internal process - can be healthy if done successfully.

In practice, effectively coming to terms with some loss involves shifting from seeking closure to working through the loss.