The best 15 words

by apophenia · 1 min read · 3rd Oct 2013 · 386 comments


Personal Blog

People want to tell everything instead of telling the best 15 words. They want to learn everything instead of the best 15 words. In this thread, instead, post the best 15 words from a book you've read recently (or anything else). It has to stand on its own: it's not a summary; the whole value needs to be contained in those words.

 

  • It doesn't need to cover everything in the book, it's just the best 15 words.
  • It doesn't need to be a quote, it's just the best 15 words.
  • It doesn't have to be 15 words long, it's just the best "15" words.
  • It doesn't have to be precisely true, it's just the best 15 words.
  • It doesn't have to be the main 15 words, it just has to be the best 15 words.
  • It doesn't have to be the author's 15 words, it just has to be the best 15 words.
  • Edit: It shouldn't just be a neat quote--the point of the exercise is to struggle to move from a book down to 15 words.

 

I'll start in the comments below.

(Voted by the Schelling study group as the best exercise of the meeting.)


Gödel, Escher, Bach by Douglas Hofstadter (or works of Quine):

"is false when preceded by its quotation" is false when preceded by its quotation.

5Stabilizer7yI hate you.
0mfb7yHmm, the whole statement is ' "is false when preceded by its quotation" is false when preceded by its quotation.', and it is not preceded by its quotation.
1[anonymous]7y"Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation.

If you're the smartest person in the room, you're in the wrong room.

6Ishaan7ythis seems like it belongs in the boring advice repository, but i'll say it anyway: Smarter than Person X by most metrics ≠ nothing to learn from interacting with Person X I'd modify the wording of the advice to: "Strive to have at least one person close to you who exceeds you in your primary domains, (as well as the domains you wish to improve upon)"
6Morendil7yThose are not the best 15 words! Although this is the lesser of two evils. This [http://lesswrong.com/lw/irr/the_best_15_words/9ukm] comment and this [http://lesswrong.com/lw/irr/the_best_15_words/9uyg] are, it seems to me, trying too hard to be the smartest person in the room: technically correct, but only if you ride roughshod over Gricean principles. This is a common failure mode [http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/].
-1[anonymous]7y(My comment was kind-of tongue-in-cheek. I know what you actually meant.)
-1Ishaan7yIf "violating Gricean principles" = willfully misunderstanding what was meant, I wasn't. The trouble with what is meant by "you're in the wrong room" is that while it can be taken to mean "seek out intellectual superiors", it also means "avoid intellectual inferiors". I meant to contest the latter.
0[anonymous]7yHow so? “You're the smartest person in the room” means that you have no intellectual superiors in there. It doesn't mean you have no intellectual inferiors -- that'd be “you're not the dumbest person in the room”.
4[anonymous]7yBy that standard, in every room there is someone who shouldn't be there.
9shminux7yThat's why every room should have a way out.
1wadavis7yAnd right here is the breakdown on why it is ok to gun for your boss's job, because he is gunning for the next room.
8Iksorod7yBy that standard, no one should be in any room.
3Zvi7yBut I'm the only one here...
2wedrifid7y...which prompts that observation that apparently we should all be showering communally and only using toilets that are already occupied.
0Stabilizer7yWhat book is this?
0Morendil7yThe "or anything else" files.

Judea Pearl, Causality:

If two things are correlated, there is causation. Either A causes B, B causes A, they have common cause, or they have a common effect you're conditioning on.

Edit: If two variables are correlated, there is causation. Either A causes B, B causes A, they have common cause, or they have a common effect you're conditioning on.

4wedrifid7yThat's 28 words. Isn't it a bit long? (Still upvoted because the first sentence stands on its own with just 8 words.)
2Decius7yhttp://xkcd.com/882/ [http://xkcd.com/882/] Sometimes the cause is you've been looking at too many random data sets.
1Lumifer7yI am confused, that doesn't seem to be true. Consider a sine wave. It can be observed in a great number of phenomena, from the sound produced by a tuning fork to the plot of temperature in mid-latitudes throughout the year. All measurements which produce something resembling a sine wave are correlated. Remember that correlation (well, at least Pearson's correlation -- I assume that's what is meant here) is invariant to linear transformations so different scale is not a problem.
6Liron7yCorrelation isn't a property of a pair of mathematical functions or a pair of physical systems, it's a property of a pair of random variables. "A and B are correlated" means "Observing A can change your probabilistic beliefs about B". If you already know that A and B are both sine waves, then neither has any belief-updating power over the other; there's no randomness in the random variables. (I know that's not 100% precise... someone else please improve.)
2johnswentworth7yIn the vast majority of cases involving sine waves, the correlation between A and B is due to the common cause of time. Space is also a common cause of such correlations. However, if you imagine a sine wave in time and another sine wave in space, they have no correlation until you impose a correlation between space and time (e.g., by using a mapping from x to t). In that case, Armok's comment about a logical rather than physical cause might apply.
1Lumifer7yI don't understand what does that mean. In which sense can time be thought of as a cause?
-1johnswentworth7yI started writing a reply to this comment, but as I was thinking through it I realized that the situation is actually WAY more interesting than I thought and requires a whole post. I've posted it in discussion: http://lesswrong.com/r/discussion/lw/is7/the_cause_of_time/ [http://lesswrong.com/r/discussion/lw/is7/the_cause_of_time/] Sorry if it's a bit unclear right now, hopefully I'll have time to add some diagrams this weekend.
1Armok_GoB7yThis is a case of a common cause, in the form of a logical fact rather than a physical one.
0Lumifer7yI don't understand this. Which logical fact is the common cause? The fact that the measurements are correlated? Doesn't the whole thing collapse into a circle, then?
0Armok_GoB7yThe fact of the shape of a sine curve.
0RichardKennaway7yOnly if the frequencies are identical. In that case, follow the improbability [http://lesswrong.com/lw/pa/gazp_vs_glut/] and ask how they come to be identical.
0wedrifid7yThat doesn't seem to be strictly true. Of all the things that are correlated it would seem that there would be some that have none of the listed causal relationships. It is merely highly probable that one of those is the case.
6paulfchristiano7yTo the mathematicians, correlation is a statement about random variables, and not the same as empirical correlation (which is a statement about samples, and might be spurious). Of course the world isn't made of random variables, but only in the same sense that the world isn't made of causal models. They are models, and "correlation" and "causation" are features of the model which don't exist in the real world. In a causal model, correlation implies causation (somewhere).
5Lumifer7yBut then this "true correlation" is unobservable, is it not? Except for trivial cases we can never know what it is and can only rely on estimates, aka empirical correlations. Well, that makes Pearl's statement an uninteresting tautology. Correlation implies causation because we construct models this way...
-2Douglas_Knight7yEmphasizing random variables sounds pretty frequentist to me, while the source being summarized is bayesian. But, yes, models are made of random variables.
-3apophenia7yThanks, this is exactly the case. A better objection is that it's not strictly true, because things can be some complex net of the above cases and it doesn't always break down into one of the four; but that doesn't fit in "15" words, and it's less important. Edit: also, it's possible in rare cases for things to be uncorrelated but causally connected.
1selylindi7yTo address your correct criticism, how about we modify apophenia's "15" words to: • If two things are reliably correlated, there is causation. Either A causes B, B causes A, they have common cause, or they have a common effect you're conditioning on. A 15-word version is possible but awkward: • Reliable correlation implies causation: one causes the other, or there’s common cause, or common effect. Potentially a great deal of complexity is smuggled into the word "reliable". -- Edit: A friend pointed out to me that the above sentences provide unbalanced guidance for intuitions. A more evenly balanced version is: • Reliable correlation implies causation and unreliable correlation does not.
0AlanCrowe7yIt goes against the spirit of "15 words" to insist on strict truth. The merit of the quote lies in the fourth clause. That's the big surprise. The point of boiling it down to "15 words" is to pick which subtlety makes it into the shortest formulation.
1wedrifid7yI would suggest that it goes against the spirit of Judea Pearl's Causality to say things that are false or misleading. Do note that I actually support [http://lesswrong.com/lw/irr/the_best_15_words/9u8b] the example, despite the problems. I expect that the surrounding context in Pearl's work more than adequately explains the relevant details. What I would object to is any attempt to suppress discussion of the limitations of such claims---so if it was the case that the "spirit of '15 words'" discourages discussion and clarification then I would reject it as inappropriate on this site.
2apophenia7y"15 words" is a secretly a verb rather than a noun. I definitely think discussion and clarification is good, although in this particular thread I'm sad to some people engaging solely in that and missing an opportunity to try out the exercise instead.
1wedrifid7yAs the thread creator you are entitled to specify the way you want the phrase to be used and what sort of replies you want. That said, it seems that the norms that you are attempting to create and enforce for this '15 words' activity don't belong on this site. It seems to amount to provoking and enforcing all the worst of the failures of critical thought that constantly crop up in the "Rationality" Quotes threads. Given as a premise that I hold that belief you could infer that my voting policy must be to downvote: * Any thread or comment requesting the 'action' "15 words" be performed. * Any attempt to criticise, suppress or dismiss clarifications, elaborations and analysis that crop up in response to quotes. * Any comment, regardless of overall merit, for which a minor clarification is necessary but would be prohibited or discouraged. Note that this applies to the ancestral quote [http://lesswrong.com/lw/irr/the_best_15_words/9u7x] by Pearl which I had previously upvoted. In a context of enforced uncriticality any deviation from accuracy becomes a critical failure. That isn't what you saw. You saw people engaging in that in addition to engaging with the exercise. They lost no opportunity; you merely couldn't tolerate the critical engagement that is an integral part of discussion on a rationalist forum.
0nshepperd7yIt's possible to find "spurious" correlations in a limited data sample, if two things just "happen" to happen together often by chance. But I don't think that really counts. Did you have any other scenarios in mind?
4wedrifid7yWhen absolute claims are made with exhaustive lists of possibilities then things can "not count" only when excluded explicitly. When dealing with things at the level of precision and rigour that Pearl works at the difference between 'almost true' and 'true' matters. Even with the ('probably' or 'overwhelmingly likely') caveat in place the statement remains valuable. It is still worth including such a parenthetical so as to avoid confusion. No, the set of all correlations that are not causally related in one of the listed ways seems to fit the criteria "limited" and to whatever extent they can be described as 'spurious' that description would apply to all of them. Admittedly, some of them are 'limited' only by such things as the size of the universe but the larger the sample the higher the improbability. I would replace 'spurious' with 'misleading'. A correlation just is. There isn't anything 'fake' or 'invalid' about it. The only thing that could be wrong about it is using it to draw an incorrect conclusion.
-1nshepperd7yI have a feeling including a parenthetical like that would invite more confusion than it avoids. "Oh cool, I guess my magical ESP powers are just one of the unlikely cases where I can be correlated with the hidden coin flips without any causal influence." Because "correlation" is normally taken to mean a systematic effect that can be expected to be predictive of future samples, or something. In this specific case, Pearl probably means something more precise by it (like correlations between nodes in a particular causal model). I suppose you could accurately clarify the original quote by saying "systematic correlation", which would pin down the idea referred to for people who haven't read the book.
-2wedrifid7yThe unqualified version is more compatible with muddled thinking about ESP than the qualified version. Specifically, it outright excludes the possibility "No, you were just lucky" from consideration. This exception applies in that case.
0witzvo7yDoesn't count?!
-1dspeyer7yWith enough data from the two correlands, this goes away. I don't know the exact math, but I think there's a way to say the number of variables you're looking at, and the strength of a given correlation, and get a probability that it's really there.
2Lumifer7yThis goes away only in the limit as the sample size goes to infinity. For a finite sample size (and given a certain set of assumptions) you can establish a range of values within which you believe "true" correlation resides, but this range will never contract to a single point.
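The point about the interval narrowing but never collapsing can be sketched numerically. A standard frequentist approach uses the Fisher z-transform; this is an illustrative sketch of my own (the 1.96 multiplier gives an approximate 95% normal interval), not anything from the thread:

```python
# Approximate 95% confidence interval for a population correlation,
# via the Fisher z-transform. Width shrinks roughly like 1/sqrt(n)
# but stays strictly positive for every finite sample size.
import math

def corr_ci(r: float, n: int, z_crit: float = 1.96):
    """Approximate 95% CI for the true correlation, given sample r and size n."""
    z = math.atanh(r)                    # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)          # standard error on the z scale
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

for n in (30, 300, 30_000):
    lo, hi = corr_ci(0.5, n)
    print(f"n={n:6d}: ({lo:.3f}, {hi:.3f})")
```

Running this shows the interval around r = 0.5 tightening as n grows while never reaching zero width, which is exactly the "never contracts to a single point" behavior described above.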
-1JackV7yI think the problem may be what counts as correlated. If I toss two coins and both get heads, that's probably coincidence. If I toss two coins N times and get HH TT HH HH HH TT HH HH HH HH TT HH HH HH HH HH TT HH TT TT HH then there's probably a common cause of some sort. But real life is littered with things that look sort of correlated, like price of X and price of Y both (a) go up over time and (b) shoot up temporarily when the roads are closed, but are not otherwise correlated, and it's not clear when this should apply (even though I agree it's a good principle).
-1johnswentworth7yAn alternative version which avoids most of the complaints in replies below: Correlation doesn't imply causation, but it's damn strong evidence! (Please reply if you remember either the exact wording or the source of that quote).
-3Eugine_Nier7yNote, as I discuss here [http://lesswrong.com/lw/ezu/stuff_that_makes_stuff_happen/7nlo] for this to be true you need to allow mathematical truths (and the laws of physics) to serve as causes.

The First 20 Hours (Josh Kaufman):

Practice something for 20 hours, and you'll learn a lot. Don't worry about feeling stupid/clumsy.

0[anonymous]7ywould your recommend this book overall?
2bentarm7yTo be honest, no. There really isn't much more to it than is contained in the sixteen words above, or listening to one of Kaufman's TedX talks.

Causal Decision Theory / consequentialism:

"If your actions have results, you can use actions to choose your favorite result."

2niceguyanon7yI just realized that if you took the movie the secret and took out all the pseudo science BS, then condensed it to one sentence this is what you get.

Belongs in Discussion IMO

7Viliam_Bur7yCan you explain in 15 words what belongs to Main and what to Discussion? :D
2Ben_LandauTaylor7yMain: topics that are interesting to most LW readers, AND are notably worth reading Discussion: topics that are interesting to most LW readers, OR are notably worth reading for some readers Open Thread: everything else
2Desrtopa7yIt's not clear to me what category this post should fall under on that basis, but I'd suggest the heuristic that anything posted to Main which retains a positive score after a few days might as well stay there.
0Douglas_Knight7yMany posts with positive scores are ejected from main.
6apophenia7yCould you break down that intuition? Why? If you think that because it's short, I STRONGLY disagree--value added is not proportional to length. If you think that because it's an exercise, I disagree, although that's a stronger case. We happen to be doing original research in exercise form, and evidence shows exercises work better than academic articles. If you think that for some other reason, or something like the above but not quite, I'd love to hear it!
4Dorikka7yInsufficient value add by the OP. Given that, insufficient expected value add in the comments. (I think that the Textbooks List and Procedural Knowledge Gaps lists belong in Main because the collection of knowledge by commenters is valuable enough, even though the OP is not a huge value add on its own. )
0[anonymous]7yWhat a great reason! If I wanted to not just learn that lesson, but reinforce that sort of reasoning in the Less Wrong community (in a welcoming way, naturally), what would you suggest I do? And feel free to PM, as I agree about limiting discussion about where to put threads inside threads.
2Rob Bensinger7yI can understand that intuition, but I'd like to see people err more on the side of putting slightly subpar things on Main, as opposed to erring on the side of putting slightly superpar things on Discussion. Main is underused, and I think metadiscussion about where to categorize things has become a bit too common.
1[anonymous]7yWhy? Rationality Quotes threads are in Main too (though I suspect they are here more because of tradition than anything else).
0Dorikka7yYou can read my reply here [http://lesswrong.com/lw/irr/the_best_15_words/9ulp] for a rough sketch of my viewpoint. To be honest, I'm not very interested in this bit of meta and am likely tapping out.

Epistemic rationality (as far as I can tell):

"Take every mathematical structure that isn't ruled out by the evidence. Rank them by parsimony."

The Bell Curve:

Intelligence matters, you live in a high-IQ bubble, you're in politically-motivated denial about it, and your denial isn't helping anyone.

On Writing Well, by William Zinsser

Every word should do useful work. Avoid cliché. Edit extensively. Don’t worry about people liking it. There is more to write about than you think.

0wedrifid7y"Don’t worry about people liking it"? This sounds dangerous.
4aspera7yHere is some clarification from Zinsser himself (ibid.): N.B: These paragraphs are not contiguous in the original text.
1PrometheanFaun7yThat's not helpful. Say I've got an audience who wouldn't like me if they knew me as my inner circle does, who definitely wouldn't be convinced if I wrote as though I were writing for my own. What would Zinsser do? Give up? Write something else? I know that communicating effectively when you don't personally feel what you're saying tends to fail, well yes, it's hard, but that's precisely what I've got to do!
0witzvo7ySo perhaps the danger you're thinking of is the opportunity cost of spending time writing something that goes nowhere? That's sensible if you're already prone to writing lots of things and need a filter for what not to write. If you're like me, though, you don't write enough, and thoughts that you might productively pursue with the assistance of a keyboard/screen don't get pursued if you're always thinking about who'd want to read it before writing, or thinking excessively about making it "sound right" instead of just getting the ideas out in a form that is clear to yourself. So the relevant opportunity cost for someone like that is ideas that you don't give expression to or that you fail to discover, perhaps to your surprise, that some people will respond to favorably to your writing. In this sense, I think the principle is pretty useful, at least for me. If after writing it you think people won't like it, you could publish under a pseudonym, or just move on to writing the next thing.

Chip & Dan Heath, Made to Stick:

Communicate one thing.

50 shades:

Keep telling the girl that she is smart, beautiful and courageous and that you love her more than anything, and she will indulge your weirdest fantasy.

-2[anonymous]7yGreat summary. If only it actually worked...
1shminux7yIt probably does, more often than not. Two crucial items from this and similar stories I did not mention: she has to be into you to begin with and you have to be well enough off to make her feel (possibly subconsciously) financially secure with you.

I feel this quote belongs in this thread.

"For every complex problem there is an answer that is clear, simple, and wrong." -- H.L.Mencken

7Stabilizer7yBut often it is worth understanding why the clear, simple answer is wrong.
2Viliam_Bur7yBecause it is incompatible with the beliefs of my tribe. Because it is clear and simple, and therefore unfit to signal my sophistication. Or because there are some specific technical reasons why it is wrong. I guess these are the three most frequent reasons, perhaps even in the decreasing order of frequency, why clear and simple answers are wrong.
0DysgraphicProgrammer7yBecause the problem is complex and your clear, simple solution has at least 3 knock-on effects, one of which will make the original problem worse. And the other 2 will cause new complex problems in 10 years' time. The clear, simple solution to "X is too expensive" is "Declare a cheaper price for X by government fiat." By the time you have compensated for the knock-on effects, regulated to prevent cheaters, and taxed to pay for costs, the solution is no longer simple.
-1[anonymous]7yThe third one sounds a lot like “or anything else”.
0[anonymous]7yBecause it's actually an answer to a simple problem -- getting my mother out of the burning building [http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/] is a simpler problem than getting her out of it alive and well, so the clearest. simplest solution to the former is a wrong solution to the latter. (In such an example it is obvious, but in many real-world situations it's easier to lose purposes [http://lesswrong.com/lw/le/lost_purposes/].)
5apophenia7yNo, quotes don't belong in this thread, your intuition is wrong. This thread is about something closer to learning how to speak in original quotes.
-1Lumifer7yOh, I just went meta :-D

The Rebel Sell:

Counterculture movements are severely infected with status signalling spirals, making them various combinations of ineffectual, incoherent & parasitic.

The Better Angels of Our Nature:

Violence is down short- and long-term on a per capita basis. This is due to interacting effects of governments, women, trade, rationality & literature.

9[anonymous]7y

If after ten minutes you don't know who the sucker is, it's you.

(Common advice which applies mainly to zero-sum competitive situations. I heard it in the context of negotiating with competitors, but I imagine it applies to poker, political strategy, and other things too.)

Matthieu Ricard, Happiness:

It's better to be happy than to be unhappy. If you're unhappy, you can fix it. Here's how: cultivate love, compassion and mindfulness.

Richard Dawkins, The Selfish Gene

Without the [view of life from gene perspective] there is no particular reason why an organism should 'care' about its reproductive success and that of its relatives, rather than, for instance, its own longevity

0apophenia7yGreat subset to have picked! Are there ways to shorten this style-wise or throw out technical vocabulary to make it accessible? Is some part of it less important than others, so that you can throw out ideas as well?

In fact, a sense of essence is, in essence, the essence of sense, in effect.

Douglas Hofstadter, Metamagical Themas

There's a quote I like from Terry Pratchett's juvenile book "Only You Can Save Mankind" that addresses a mistake that some people with a high IQ make:

"Just because you have a mind like a hammer doesn't mean you should treat everyone else like a nail."

That's 19 words (if you count "doesn't" as 1 word, rather than 2), but perhaps a 15 word version could be:

"Don't manipulate those you can out think, just because you are able to."

or, more abstractly,

"Don't treat people as inconvenient objects, even when you can get away with it."

5TheOtherDave7yIf you just want to reduce the wordcount of the original without changing its flavor, you could go with "Other people aren't nails just because your mind is a hammer." That said, worrying too much about exact wordcount seems silly. ("The hammer is my mind.")
3timujin7yWhy?
7Douglas_Reay7yFirstly, having a centralised command economy run by the 'bright' people in charge at the centre didn't work out particularly well for the USSR. Even if you are well intentioned and manipulating them in a direction that you think is in their best interests (which, in any case, isn't the situation the dictum was talking about), you're unlikely to manage their affairs better than they would themselves. Secondly, fooling people can become a habit. And the easiest person to fool is yourself. What you do changes who you are, to some extent. Thirdly, people often realise they have been manipulated, on some level, even if they can't put words to it, or they realise too late. And it isn't a nice feeling. In utilitarian terms, despite any gain in pleasure you get, it is likely to be a net loss of utility.
2CoffeeStain7yBecause your prior for "I am manipulating this person because it satisfies my values, rather than my pride" should be very low. If it isn't, then here's 4 words for you: "Don't value your pride."
3Moss_Piglet7ySorry to keep adding to the "why?" pile but do you mind explaining this one too?
4CoffeeStain7yFor certain definitions of pride. Confidence is a focus on doing what you are good at, enjoying doing things that you are good at, and not avoiding doing things you are good at around others. Pride is showing how good you are at things "just because you are able to," as if to prove to yourself what you supposedly already know, namely that you are good at them. If you were confident, you would spend your time being good at things, not demonstrating that you are so. There might be good reasons to manipulate others. Just proving to yourself that you can is not one of them, if there are stronger outside views on your ability to be found elsewhere (like asking unbiased observers). The Luminosity Sequence [http://lesswrong.com/lw/1xq/let_there_be_light/] has a lot to say about this, and references known biases [http://en.wikipedia.org/wiki/Lake_Wobegon_effect] people have when assessing their abilities.
1timujin7yMaybe that's just my personal quirk (is it?) but my pride is a good motivator for me to become stronger. If I think I am more able in some area than I actually am, then when evidence for the contrary comes knocking, I try as much as I can to defend the 'truth' I believe in by actually training myself in that area until I match that belief. And since I can't keep my mouth shut and thus I tell and demonstrate everyone how awesome I am when I am not actually that good, there is really no way out but to make myself match what other people think of me. Maybe that's not a very good rationality habit, but I am fully mindful of the process, and if I ever need to know my actual level at the expense of that motivational factor, it is no trouble to sit down with a pencil and figure out the truth. It can hurt (because my real level is almost always way less than my expectations of it), but it is probably worth it. Manipulating people just out of pride and a sense of domination was actually the factor that developed my social skills more than anything else. I became more polite, started to watch my appearance, posture and facial expressions (because it's easier to trick those who like me), became better at detecting lies and other people's attempts to manipulate me. Also, I believe, it helped me to avoid conformity (when you see people making dumb mistakes on a regular basis just because you told them something, the belief in their sanity vanishes quickly). And I am safe from losing friends' trust, because I strive to never trick or deceive close people (in a very broad sense) and maintain something close to (but not quite) a Radical Honesty policy with those whom I value. Am I walking the wrong path?
0CoffeeStain7yEh, probably not. Heuristically, I shy away from modes of thought that involve intentional self-deception, but that's because I haven't been mindful of myself long enough to know ways I can do this systematically without breaking down. I would also caution against letting small-scale pride translate into larger domains where there is less available evidence for how good you really are. "I am successful" has a much higher chance of becoming a cached self than "I am good at math." The latter is testable with fewer bits of evidence, and the former might cause you to think you don't need to keep trying. As for other-manipulation, it seems the confidence terminology can apply to social dominance as well. I don't think desiring superior charisma necessitates an actual belief in your awesomeness compared to others, just the belief that you are awesome. The latter to me is more what it feels like to be good at being social, and has the benefit of not entrenching a distance from others or the cached belief that others are useful manipulation targets rather than useful collaborators. People vary on how they can use internal representations to produce results. It's really hard to use probabilistic distributions on outcomes as sole motivator for behavior, so we do need to cache beliefs in the language of conventional social advice sometimes. The good news is that good people who are non-rationalists are a treasure trove for this sort of insight.

From the Buddha's Kalama Sutra:

Do not go upon what has been acquired by repeated hearing,
nor upon tradition,
nor upon rumor,
nor upon what is in a scripture,
nor upon surmise,
nor upon an axiom,
nor upon specious reasoning,
nor upon a bias towards a notion that has been pondered over,
nor upon another's seeming ability,
nor upon the consideration, "The monk is our teacher."
Rather, when you yourselves know: "These things are good; these things are not blamable; these things are praised by the wise; undertaken and observed, these things lead to benefit and happiness," enter on and abide in them.'

As Eilenberg-Mac Lane first observed, "category" has been defined in order to be able to define "functor" and "functor" has been defined in order to be able to define "natural transformation".

Saunders Mac Lane, Categories for the Working Mathematician

5[anonymous]7yIs there a way to explain that to a non-mathematician?
2Cyan7yHe's saying that he made up categories and functors because what he really wanted to study was the idea of natural transformations, and the former notions are needed to define the latter. Or: categories and functors are nice, but natural transformations are the bomb.
0hylleddin7yOr even a non-category theorist?

Neat. It would be nice to describe this site in a dozen or so words and put this description on the front page.

How about...

A community blog devoted to refining the art of human rationality

:P

2 · Torello · 7y: I would be interested to see if other readers could come up with a more eye-catching description/slogan.
0 · Dallas · 7y: A community blog with the purpose of refining the practice of rational behavior? Eliminates human bias, doesn't imply that rationality is an 'art', and proclaims itself teleologically rather than ontologically.
1 · shminux · 7y: Oops, I missed the fine print :)
0 · apophenia · 7y: That would be great, but it would be more in keeping with this thread to try and condense some section of this site to a dozen or so words. (Not leaving in everything, of course.)

The Black Swan:

Don't pick up pennies in front of a steamroller. Especially if the guy encouraging you to do it is taking a cut of the pennies but not spending any time in front of the steamroller himself.

3 · simplicio · 7y: For an example from real life, check out page 9 of this document for a fund my investment advisor wanted me to invest in [https://docmgt.dynamic.ca/documentdownload/getdocument/3460]: "We got a positive number of pennies almost every day for several years!" (NB: I'm not making a global judgment about this fund, just about the inherent anti-epistemology of obsessing over day to day "volatility".)

Scott Kim, What is a Puzzle?

  1. A puzzle is fun,
  2. and it has a right answer.

http://www.scottkim.com/thinkinggames/whatisapuzzle/

3 · wedrifid · 7y: I am dubious about any definition of "puzzle" for which the claim "This puzzle is not fun" is tautologically false, regardless of either the speaker or the puzzle in question.
0 · [anonymous] · 7y: If a puzzle is not fun, it is a chore, a problem or in the worst case, high school math homework.
0 · DSimon · 7y: Good point, probably the title should be "What is a good puzzle?" then.
2 · TheOtherDave · 7y: I disagree about #2, incidentally. It's a puzzle if I'm having fun trying to solve it.
2 · DSimon · 7y: That's interesting! I've had very different experiences: When I'm trying to solve a puzzle and learn that it had no good answer (i.e. was just nonsense, not even rising to the level of trick question), it's very frustrating. It retroactively makes me unhappy about having spent all that time on it, even though I was enjoying myself at the time.
2 · TheOtherDave · 7y: I certainly agree that being made to treat nonsense as though it were sense is frustrating. And, sure, if things either have a right answer or are nonsense, then I agree with you, and with Scott Kim. Nonsense is not a puzzle. But I'm not sure that's true. I'm also not sure that replacing "a right answer" with "a good answer" as you just did preserves meaning. For example, I'm not sure there's a right answer to all puzzling questions about, say, human behavior, or ethics. There are good answers, though, and the questions themselves aren't all nonsense.

"Try to choose actions causing high total net utility gains when summed over everyone affected."

is an attempt at a 15 word summary of:

Precedent Utilitarians believe that when a person compares possible actions in a specific situation, the comparative merit of each action is most accurately approximated by estimating the net probable gain in utility for all concerned from the consequences of the action, taking into account both the precedent set by the action, and the risk or uncertainty due to imperfect information.
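The 15-word version is nearly executable as stated. A toy sketch in Python (the actions, people, and utility numbers are all invented for illustration, and the precedent and uncertainty terms from the longer statement are ignored):

```python
# Choose the action whose summed utility change, across everyone
# affected, is highest.

def best_action(actions):
    """actions: mapping of action name -> {person: utility change}."""
    return max(actions, key=lambda name: sum(actions[name].values()))

options = {
    "keep promise":  {"alice": +2, "bob": +1},   # total +3
    "break promise": {"alice": +3, "bob": -4},   # total -1
}
print(best_action(options))  # prints "keep promise"
```

A fuller precedent-utilitarian sketch would also add a term for the precedent each action sets and weight each outcome by its probability, per the paragraph above.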

Jayne's Probability Theory:

There is nothing "subjective" about Bayesian probability.

EDIT: I like badger's suggestion below better than this one.

I'd go with: Probability exists in your mind, not the world, but there still is an "objective" way to calculate it.

2 · johnswentworth · 7y: I like it. In the spirit of iterative improvement, how about this: Probabilities are subjective, but the information they represent is not. Use all available information on pain of paradox.
0 · wedrifid · 7y: I propose an iteration with "on pain of paradox" truncated.
0 · johnswentworth · 7y: I'm split on this one. I like it better without "pain of paradox," but it seemed like a third of the book was devoted to pains and paradoxes arising from ignoring information.
3 · wedrifid · 7y: Because a direct contradiction [http://lesswrong.com/lw/s6/probability_is_subjectively_objective/] of this quote is also true (and also something that Jaynes would probably agree with) it is perhaps not the best 15 words in his work. The problem is that all the meaning conveyed relies on the reader plugging in suitable meanings for 'subjective' so that it makes sense. The knowledge needed to construct an interpretation of the quote that is correct and insightful gets deducted from the information that is conveyed by the quote. I do agree that this message and this source are worth quoting. If the excerpt badger quotes [http://lesswrong.com/lw/irr/the_best_15_words/9uh4] does come from Jaynes then it certainly deserves a place. Same message, less ambiguity.
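A toy illustration of the "objective" part of the Jaynes slogan: two reasoners who share the same prior and see the same evidence are forced to the same posterior; the subjectivity lives only in what information each has. (The coin and all the numbers here are invented for illustration.)

```python
# Is this coin biased (80% heads) or fair? Start from a 50/50 prior,
# observe one head, and apply Bayes' theorem.
prior_biased = 0.5
p_heads_if_biased = 0.8
p_heads_if_fair = 0.5

# P(heads) = sum over hypotheses of P(heads | hyp) * P(hyp)
p_heads = (prior_biased * p_heads_if_biased
           + (1 - prior_biased) * p_heads_if_fair)

# P(biased | heads) = P(heads | biased) * P(biased) / P(heads)
posterior_biased = prior_biased * p_heads_if_biased / p_heads
print(posterior_biased)  # ≈ 0.615
```

Anyone who plugs in the same three numbers gets exactly 8/13; there is no remaining free parameter for taste.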

The evaluator, which determines the meaning of expressions in a program, is just another program.

-- Structure and Interpretation of Computer Programs
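A minimal sketch of the quote's point, here in Python rather than the Scheme SICP actually uses: the evaluator that assigns meaning to expressions is itself just an ordinary function. (The tuple-based expression format is invented for this sketch.)

```python
# Expressions are nested tuples like ("+", 1, ("*", "x", 3));
# numbers are self-evaluating, and strings are variable names.

def evaluate(expr, env):
    """Determine the meaning of an expression in an environment."""
    if isinstance(expr, (int, float)):      # self-evaluating literal
        return expr
    if isinstance(expr, str):               # variable lookup
        return env[expr]
    op, *args = expr                        # compound expression
    values = [evaluate(arg, env) for arg in args]
    if op == "+":
        return sum(values)
    if op == "*":
        product = 1
        for v in values:
            product *= v
        return product
    raise ValueError(f"unknown operator: {op!r}")

print(evaluate(("+", 1, ("*", "x", 3)), {"x": 2}))  # prints 7
```

Adding `lambda`, `if`, and definitions to this function is exactly the exercise SICP's metacircular-evaluator chapter walks through.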

"When you start treating people like people, they become people."

~Paul Vitale

"The person you can most easily fool is yourself. Before all else, avoid doing so."

~Richard Feynman (paraphrased)

"What is your excuse for not following the advice you claim is good for all?" ~paraphrase of link

  • Data = Signal - Noise
  • Information = Data + Encoding
  • Knowledge = Information + Context
  • Experience = Knowledge + Relevance
  • Wisdom = Experience + Meta

source

Do you wish to know more about human beings? Then postulate less.

1 · wedrifid · 7y: This seems wrong. Postulating seems to be a necessary part of exploring possibility space.
-2 · [anonymous] · 7y: That sounds like it should apply to pretty much everything (except pure maths), not just human beings. "Whenever you ass-u-me, you make an ass out of U and me."

Positivism: "Anything that can't be verified is meaningless". This can't be verified. So Positivism is meaningless / false.

Hehe...here's a controversial one.

The process and the consequences of fighting oppressive hierarchies are worse than the hierarchies themselves - my take on Mencius Moldbug.

I don't really agree. But I've tried pretty hard to wrap my head around his ideology (he's incredibly long winded) and this is what I got from it. If I had to add a second sentence, it would be this:

"Progressive culture seduces intellectual elites and redirects their power to destructive, unreflective, self-righteous reformation."

..."This reformation inevitably strengthens th... (read more)

If you want me to cut an actual quote down to 15 words it'll sound like absolute nonsense, but if a paraphrase is sufficient;

"Humans thrive under Order and suffer without it, but Chaos is both easy and attractive."

I think that hits all the major points;

  1. The Cathedral expands like an ideal gas; it has no actual leaders but a definite direction, and that direction is to move society to a more disordered state.
  2. "Oppressive hierarchies," when referring to pre-democratic systems, are not bugs but features. Organizing people along the lines of their natural abilities and putting harsh incentives in place to foster cooperation is the time-tested way to govern a society well.
  3. Life has, in the aggregate, gotten worse even despite our technological advances due to the collapse of society from a highly ordered to a highly disordered state.
  4. Unscrambling the egg may well be impossible, and even if it is possible it will require enormous activation energy (such as the final collapse of the USG).

(Fair warning: I'm not a Formalist per se, I think the Patchwork is way too silly an idea to put my name near it, but I hope that this is a good enough summary for someone with a low tolerance for Moldbuggery.)

See, the trouble I have with Moldbug is that it's written less like a thesis and more like a poem. I got through a bit of it, and it was pretty fun to read and it constantly felt like I was on the edge of some earth-shattering revelation which would destroy all my previous political notions...but in the end I came away not quite getting the point. In places where I did understand the point, I didn't understand how it was supported.

I can readily identify all the statements you've listed as belonging to Reactionary schools of thought, but the bit about Order, Chaos, and Cathedral are all so layered in metaphor that I'm not really sure what they actually mean, let alone why I should believe that they are true. The point about life getting worse seems empirical, and I haven't fully grasped why he believes this.

So far, what I've taken is that reforming pre-democratic hierarchies (order) is both an act of violence and leads to violence and turmoil (chaos). Like most violence, this is a transfer of power to the progressive powers (the universities, the liberal democracies, and the reformers - collectively, the cathedral).

3 · [anonymous] · 7y: Have you read Yvain's summary [http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/]?
-1 · Ishaan · 7y: Yes. The three things I've read are Moldbug's "Open letter to Open Minded Progressives", Yvain's summary, and the first 1/3 of "A gentle introduction to Unqualified Reservations".
0 · Moss_Piglet · 7y: You know that's a 9 (technically 12, since there's a 9a - c) post series, right?
-1 · Ishaan · 7y: Yeah, the bookmark is currently on this [http://unqualified-reservations.blogspot.com/2009/01/gentle-introduction-to-unqualified_29.html] page.
3 · Moss_Piglet · 7y: Yeah, as much as Moldbug likes to talk about his site as a "red pill" it's really a horrible place to introduce yourself to Neoreaction. He is stingy with citations, assumes a lot of prior knowledge, and seems to assume his readers are either archive-binging or regulars.

The Cathedral is a less clunky and more memorable way of saying "the bureaucracy of the international Progressive (he prefers Unitarian or Communist, but the territory is the same) movement and aligned criminal organizations." It's not exactly your standard conspiracy theory as there are no leaders, no actual plot, not even a conspiracy per se; just people reacting to a really bad set of incentives which drives politics leftwards and increases governmental entropy. It's an ideological feedback loop; a memetic parasite which gets more powerful by creating conditions hostile to its host.

Order / Chaos is really just a D&D-laden way of articulating Hierarchy v Anarchy. An Ordered society has clear lines of authority stretching downwards from the top (Moldbugian Formalism literally means making sure formal de-jure authority and informal de-facto authority line up), with an incentive structure which promotes civilized behavior through appeals to morality and self-interest. A Chaotic society has unclear and/or conflicting sources of authority, such as imperium in imperio, and the incentive structure promotes societal conflict.

In terms of evidence, he alludes to some but you really need to come in with your own knowledge for the most part. Learn more about the biology (genetics and epigenetics) of intelligence, as well as other physiological differences linked to race / sex, a little macroeconomics and some history and you'll be able to make most of his points better than he can. Just ignore the climate skepticism and Chicago School stuff, put it down to him not having a science background. His crime stats are interesting but highly contested; his stats show a roughly 35x increase in reported murder
5 · Ishaan · 7y: Drives politics leftwards: This is confusing to me because I'm sitting leftward. This insidious set of incentives is shifting society's values towards mine. I want this to happen. Am I supposed to be rubbing my hands together and cackling gleefully as Cthulhu does my bidding?

Governmental entropy: So, the way you phrased that makes me assume that this doesn't mean "overturning of social order via revolution". But what does that mean, then? How is it measured? What's a real-world correlate?

Personally, my confidence about climate change is based largely around my confidence in the scientific consensus. Moldbug seems sufficiently well read as to not allow himself to be ignorant of the scientific consensus on the matter. I would thereby not make the inference that Moldbug is scientifically illiterate, but that Moldbug mistrusts the validity of scientific consensus itself. The real question is not about the facts of climate change. The real question becomes - is he overestimating the degree to which the supposed "Cathedral" can control the scientific consensus, or are you and I underestimating it?

Thus far the result has been: It's probably a bad idea to try and tear down imperfectly good systems to make room for better ones. World-improvement-plots should follow the heuristic of minimizing destruction to existing societal infrastructure. I'd call that conclusion valuable, but hardly a paradigm shift.
2 · Moss_Piglet · 7y: Today? Absolutely. But Cthulhu doesn't stop swimming. The Whigs (both parties on both sides of the Atlantic applied), the Republicans (in the "First French Republic" sense), the Democrats (in the Jacksonian populist sense) and even most recently the Social Liberals all learned that lesson the hard way when they ended up taking their turns on the right side of the Overton Window. You are not an exception; eventually, there will be a point at which today's intellectuals become tomorrow's targets. Given the ferocity of the anti-science postmodernism of Europe and California today, I don't think it's that far off. Look up the word "biotruth" or do some research into the anti-GMO movement and you'll see your leftist buddies are on the front line right now fighting against the very science which might someday make transhumanism possible.

Entropy is an apt metaphor here actually; your metaphorical solid block of hydrogen at 0K is something like the thousand year Fnarg [http://www.moreright.net/books/Mencius%20Moldbug/Political%20Theory.pdf#page=14]. Authority is absolute, atomic and universally acknowledged. Of course, in reality absolute zero is impossible but the principle remains that a more ordered state is one of greater regularity and lower volatility. We've all heard the proverb about how a woman could carry a pot of gold from one end of the silk road to the other under the Mongol Empire's rule, and while certainly an exaggeration it's clear that terrorism and organized crime in the modern sense would have been virtually impossible there [http://en.wikipedia.org/wiki/Assassin#Downfall_and_aftermath]. On the other hand our metaphorical ideal gas might be something like the state of affairs during the Congolese or Somalian Civil Wars; or in other words, most of Post-Colonial Africa.

The governments of these nations are actually rather large, in the sense that they employ a lot of people and pay them pretty well, and when you factor in the various NGOs and for

I don't see a distinction; if you think science is that institutionally corrupt, on a scale greater than even Lysenko could have aspired to, you're not functionally different from a postmodernist and have just as little credibility when talking about institutional science

Not necessarily. Moldbug might trust scientific consensus to be correct in areas where politics won't distort it.

I obviously do trust the scientific consensus, but steelmanning, there have been times when politics or culture has interfered with science in half-science half-humanities fields like anthropology. Historically, even biology has been tainted by politics at times, when it comes to sexuality. (I'm not talking about modern evolutionary biology, but historical things such as chalking up female orgasms to "hysteria" and the historical attribution of homosexual behavior in animals as "dominance displays".)

That's a devil's advocate though. For the most part, I agree with you.

Look up the word "biotruth" or do some research into the anti-GMO movement and you'll see your leftist buddies are on the front line right now fighting against the very science which might someday make tra

... (read more)
1 · TheOtherDave · 7y: So, what is the harm?
7 · Ishaan · 7y: There isn't any "harm" - that's the entire point. It just feels wrong at a gut level. The example was specifically chosen to be something that did not upset "harm avoidance" or "egalitarianism" or "autonomy" (in the John Haidt sense). I was trying to think of a world in which I might be the conservative one. In this case, I think that the notion of a human without negative affect is hitting some sort of psychological Uncanny Valley between human and alien for me. Maybe it violates some sort of purity norm? Or perhaps it causes individuals to in some senses leave the "in-group" by becoming less similar to me? The truth is that the strangeness would probably wear off after repeated exposure. I only had to think about the idea for a small amount of time before realizing it wasn't really as bad as it seemed at first. But I can imagine that if I hadn't ever considered the idea in my youth, an older version of me would no longer be cognitively flexible enough to consider it as acceptable behavior. This is probably how conservatives feel with homosexuality. (And just the same way, if you take a young conservative who doesn't take any religious scriptures literally, and you give them repeated exposure, they tend to change their mind unless religion somehow interferes). (If everyone did it, there might be ...not harm, but dis-utility. It wouldn't be my optimal universe, though perhaps it wouldn't be worse than the present. I think that I consider diversity of experiences intrinsically valuable, so I'd feel like something intrinsic to humanity was lost if at least some toned-down brands of negative affect weren't preserved in at least some people. A more obvious problem is that it might be boring...I'm not sure whether the fact that they wouldn't find it boring makes it better or worse. I guess I'd be happy for them, but I wouldn't identify myself, or humanity, with them as much.) Yes... I think what bothers me most is that it is a subtraction. It's one fewer emot
1 · TheOtherDave · 7y: (nods) Ah, I see. Gotcha. I certainly agree that we can be squeamish about things that we don't actually judge to be wrong, whatever our ethical standards are (unless we explicitly consider squeamishness our ethical standard, of course). That said, I don't seem to value diversity of experience enough that I'm willing to preserve suffering for the novelty/diversity value. Tangentially, IME the stuff we class as "positive affect" is way less boring to experience than the stuff we class as "negative affect," as well as involving less suffering.
2 · Ishaan · 7y: I just remembered Eliezer's post about serious stories [http://lesswrong.com/lw/xi/serious_stories/]. He thinks that all stories involve conflict, fear, or sadness, and aren't interesting otherwise. I think he's got a point, about humans needing some sort of self-narrative, about having a need to live the sort of life you would like to read about. After reading Eliezer's post, I put it on my to-do list as a challenge to write a good story that involves no pain or conflict. I'm hoping to substitute conflict-related suspense with strangeness and wonder suspense. That said, it's true that I'm having trouble thinking of counterexamples among non-short stories I've read which stand only on positive emotions. I wouldn't even know how to start going about this feat outside the realm of sci-fi-fantasy. Thanks for making me think about this though, because I was just sifting through my mental archive of short stories looking for one without conflict and came up with this [http://www.galactanet.com/oneoff/theegg_mod.html], which illustrates what I meant about awe and wonder having dramatic effects which rival those of pain and conflict. Idea cross-posted at "serious stories" [http://lesswrong.com/lw/xi/serious_stories/9uql]
2 · TheOtherDave · 7y: Just to pick the obvious counterexample that comes to mind... are we considering porn to be uninteresting? To not be stories? Or do we want to claim that all porn involves conflict, fear, or sadness? Hm. What makes you think that? I ask because I don't think I need to live the sort of life I'd like to read about, and I'm curious whether we're simply different that way, or whether perhaps this is a lack of self-awareness on my part.
2 · Ishaan · 7y: More thought: Our emotions are in some sense the human equivalent of "utility functions". We don't hate the suffering of other people in some abstract way - we hate the suffering of other people because it causes us pain to think about other people suffering. We love truth because of that rush of satisfaction upon hitting upon it. Yes, we intrinsically prefer pleasure over pain, but that's only part of the story. We also prefer the causes of satisfaction to happen, beyond preferring the feeling of satisfaction itself. We hate the causes of pain beyond the extent to which we hate the actual feeling of pain itself. You can't really replace the more abstract negative affects with a warning signal, because the negative affect was the reason you hated, say, deception, in the first place. Replacing negative affect in response to deception would be akin to removing part of the preference against deception. That's why sociopaths don't care about people. They don't feel guilt. You could tell them "this is where you would ordinarily feel guilty, if we hadn't removed your negative affect associated with hurting people" but they aren't going to care about the warning signal. Maybe some past version of themselves who hadn't had negative affect removed might have cared, but they will not. Negative affect is the switch that tells the brain "don't do things that cause that". Removing negative affect would actually remove the perception of negative utility. For simple bodily pain, who cares...but you're going to start altering values if you mess with any of the more abstract stuff.

So, when we radically alter our emotions, don't we also radically alter our "utility functions"? I'd like future-me's interests to generally align with current-me's coherent extrapolated interests.
3 · TheOtherDave · 7y: It seems to me that I negatively value other people's suffering... I want there to be less of it. Given the choice between reducing their suffering and reducing the pain I feel upon contemplating their suffering, it seems to me I ought to reduce their suffering. Given the option of reducing their suffering at the cost of experiencing just as much pain when I contemplate their lack of suffering as I do now when I contemplate their suffering, it seems to me I ought to reduce their suffering. None of that seems compatible with the idea that what I actually negatively value is the pain of thinking about other people suffering. What I can't figure out is whether you're suggesting that I'm ethically confused... that it simply isn't true that I ought to do those things, and if I understood the world better it would stop seeming to me that I ought to do them... or if I'm simply not being correctly described by your "we" statements and you're unjustifiably generalizing from your own experience... or whether perhaps I've altogether misunderstood you.
1 · Ishaan · 7y: None of the above. I'm just trying to figure out why my intuition says that I do not want to block all negative affect and whether my intuition is wrong, and your objections are helping me do so. I've got no idea whether we're fundamentally different, or whether one of us is wrong - I'm just verbally playing with the space of ideas with you. The things I'm saying right now are exploratory thoughts and could easily be wrong - the hope is that value comes out of it. "We" is just a placeholder for humans. I'm making the philosophical claim that negative affect is the real-life, non-theoretical thing that corresponds to the game-theory construct of negative utility, with some small connotative differences. No, of course not. Here's what I'm suggesting: Thinking about other people's suffering causes the emotion "concern" (a negative emotion) which is in fact "negative utility". If you don't feel concern when faced with the knowledge that someone is in pain, it means that you don't experience "negative utility" in response to other people being in pain. I'm suggesting the fact that you negatively value people to be in pain is inextricably linked to the emotions you feel when people are in pain. I'm suggesting that if you remove concern (as occurs in real-world sociopathy) you won't have any intrinsic incentive to care about the pain of others anymore. (Not "you" in particular, but animals in general.) Basically, when modelling a real world object as an agent, we should consider whatever mechanism causes the neural circuits (or whatever the being is made of) that cause it to take action as indicative of "utility". In humans, the neural pattern "concern" causes us to take action when others suffer, so "concern" is negative utility in response to suffering.

(This gets confusing when agents don't act in their interests, but if we want to nitpick about things like that we shouldn't be modelling objects as agents in the first place) Here's a question: Do you think we hav
2 · TheOtherDave · 7y: I agree with this, in general. This suggests not only that concern implies negative utility, but that only concern implies negative utility and nothing else (or at least nothing relevant) does. Do you mean to suggest that? If so, I disagree utterly. If not, and you're just restricting the arena of discourse to utility-based-on-concern rather than utility-in-general, then OK... within that restricted context, I agree. That said, I'm pretty sure you meant the former, and I disagree. Maybe, but not necessarily. It depends on the specifics of the AI. Yes, that follows. I think both claims are false. I agree that in human minds, differential affect motivates action; if we eliminate all variation in affect we eliminate that motive for action, which either requires that we find another motivation for action, or (as you suggest) we eliminate all incentives for action. Are there other motivations? Are there situations under which the lack of such incentives is acceptable?
0 · Ishaan · 7y: Yes...we agree. Shit, I'm in a contradiction. Okay, I've messed up by using "affect" under multiple definitions, my mistake. Reformatting...

1) There are many mechanisms for creating beings that can be modeled as agents with utility.
2) Let us define Affect as the mechanism that defines utility in humans - aka emotion.

So now...

3) Do moral considerations apply to all affect, or all things that approximate utility? If we meet aliens, what do we judge them by? They aren't going to be made out of neurons. Our definitions of "emotion" are probably not going to apply. But they might be like us - they might cooperate among themselves and they might cooperate with us. We might feel empathy for them. A moral system which disregards the preferences of beings simply because affect is not involved in implementing their minds seems to not match my moral system. I'd want to be able to treat aliens well. I have a dream that all beings that can be approximated as agents will be judged by their actions, and not any trivial specifics of how their algorithm is implemented. I'd feel some empathy for a FAI too. Even if it doesn't have emotions, it understands them. Its utility function puts it in the class of beings I'd call "good". My social instincts seem to apply to it - I'm friendly to it the same way it is friendly to me. So, what I'm saying is that "affect" and "utility" are morally equivalent. Even though there are multiple paths to utility they all carry similar moral weight. If you remove "concern" and replace it with a signal that has the same result on actions as concern, then maybe "concern" and the signal are morally equivalent.
0 · TheOtherDave · 7y: I agree that distinct processes that result in roughly equivalent utility shifts are roughly morally equivalent.
0 · Ishaan · 7y: Do you further agree that it follows from this that there is some hard limit to which it makes sense to self-modify to avoid certain negative emotions? (We can replace the negative emotions with other processes that have the same behavioral effect, but making someone undergo said other processes would be morally equivalent to making them undergo a negative emotion, so there isn't a point in doing so.)
0 · TheOtherDave · 7y: I don't agree that it follows, no, though I do agree that there's probably some threshold above which losing the ability to experience the emotions we currently experience leaves us worse off. I also don't agree that eliminating an emotion while adding a new process that preserves certain effects of that emotion which I value is equivalent (morally or otherwise) to preserving the emotion. More generally, I don't agree with your whole enterprise of equating emotions with utility shifts. They are different things.
2 · Ishaan · 7y: Hmm...here's a better way to illustrate what I'm getting at. Do you like to read stories that have conflict? (yes) Would you enjoy those stories if they didn't elicit emotions for you? (no) Now imagine you are unable to feel those emotions that the sad story elicits. Do you still feel like reading the story? (no) If not, isn't that one less item on the satisfaction menu? (yes) (In parentheses are my answers.) You can apply this to other stuff. Most of the arts fit nicely. Arts are important to me. Or imagine that you feel down about some small matter, and your friend comes and makes you feel better. That whole dynamic just seems part of what it means to be human. Maybe life would be better without negative affect. Certainly, if I were to start never feeling negative affect tomorrow, I wouldn't be bothered (by definition). But that version of me would be so different from the current version. It would disrupt continuity quite a bit. I guess the acid test would be to go into the positive-affect-only state temporarily, and then go back to normal. If I still wanted to keep negative affect states after the experience then maybe it wouldn't really be a disruption of continuity at all. ("Disrupt continuity" here is short for: this hypothetical future being might be descended from my computations in some way, but it differs from the being that I currently am in such a way that I should now be considered partially if not wholly dead.)
2 · TheOtherDave · 7y: Sure, I expect that I'd have very different tastes in stories if my ability to experience emotion were significantly altered, and that there are stories I currently enjoy that I would stop enjoying. And, as you say, this applies to a lot of things, not just stories. I also expect that I'd start liking a lot of things I don't currently like. I mean, I suppose it's possible that I'm currently at the theoretical apex of my ability to enjoy things without disrupting continuity, such that any change in my emotional profile would either disrupt continuity or narrow the range of things I can enjoy... but it doesn't seem terribly likely. I mean, what if I passed that apex point a while back, and I would actually have a wider menu of satisfaction if I increased my ability to be sad? Heck, what if having enough to eat stripped me of a huge set of potentially satisfying experiences involving starving, or giving up my last mouthful of food so someone I love can have enough to eat? Perhaps we would have done better to live closer to the edge of starvation? I dunno. This all sounds pretty silly to me. If it's compelling to you, I conclude we're just different in that way.
0 · Ishaan · 7y: I think the reason we disagree is that you are only considering first-order preferences, which is understandable because the initial examples I provided were pretty near first-order preferences. The other comment articulates my thoughts about why higher-order preferences are necessarily affected when you alter emotions. Aren't your preferences (not first-order preferences, but deeper ones) part of your self-identity? Is a version of you which doesn't really feel empathetic pain still you in any meaningful sense? Would such a being care about actual torture? (I'm aware I'm switching tracks here. I'm still attempting to capture my intuition.)
2 TheOtherDave 7y
Like preferring that people not suffer, and the feeling of pain at contemplating suffering? See my reply there, then. "Affected" is a vague enough word that I suppose I can't deny that my preferences would be affected... but then, my preferences are affected when I stay up late, or drink coffee. It seems to me that you are equating emotions with preferences, such that altering my emotional profile is equivalent to altering my preferences. I'm not sure that's justified, as I said there. But, sure, there are preferences I strongly identify with, such that I would consider a being who didn't share those preferences to be not-me. And sure, I suppose I can imagine changes to my affect that are sufficiently severe as to effect changes to those preferences, thereby disrupting continuity. I'd prefer not to do that, all things being equal. But it seems to me you're trying to get from "there exist emotional changes so disruptive that they effectively kill the person I am" to "we shouldn't make emotional changes"... which strikes me as about as plausible as "there exist physiological changes so disruptive that they effectively kill the person I am" to "we shouldn't make physiological changes."
0 Ishaan 7y
That's actually really close to what I am saying, but with a minor alteration. I'm going from "there exist emotional changes so disruptive that they effectively kill the person I am" to "we probably shouldn't specifically make the emotional change where change = remove all negative affect. It's probably one of those changes that effectively kills most people." I'm totally down with making some emotional changes, such as "stop clinical depression", "remove hatred", etc. To follow the physiology analogy, "remove all negative affect" seems equivalent to saying "cut the right half of the brain off". That's approximately half of human emotion that we'd be removing. But maybe if we can replace "suffering" with an emotion that we don't intrinsically hate feeling which ends up producing the same "utility function" (as determined behaviorally), then it's all good? It's a lot of changes, but then again my preferences are where I place a large part of my identity, so if they are unaltered then maybe I haven't died here... Edit: Can you identify any positive preferences within yourself which do not correspond to a positive emotion (or negative)? I'm currently attempting to do so, nothing yet.
1 TheOtherDave 7y
Can you taboo "negative affect"? I was fine with it as shorthand when it was pointing vaguely to an illustrative subset of the space of emotions, but if you mean to define it as a sharp-edged boundary of what we can safely eliminate, it might be helpful to define it more clearly. Depending on what you mean by the term, I might agree with you that "remove all negative affect" is too big a change. Well, I feel the emotion of satisfaction when I'm aware of my preferences being satisfied, so a correspondence necessarily exists in those cases. In cases where I'm not aware of my preference being satisfied, I typically don't experience any differential emotion. E.g., given a choice between people not suffering and my being unaware of people suffering, I prefer the former, although I don't experience them differently (emotionally or any other way).
-3 Moss_Piglet 7y
There is none, and the idea that it's at all a Left-Right issue is baffling. I personally don't like the idea on aesthetic principles but it's not the result of some Reactionist policy statement. People being happy, prosperous and free is the goal of Reaction; why would anyone bother with a philosophy which promised sadness, poverty and slavery? The difference is entirely in the question of what sorts of conditions in the real world will lead to a good society, and that is a simple factual question.
0 Lumifer 7y
That's a very good point. We have to start being careful about terminology here. The word "liberal" (at least in the contemporary US political discourse) has two quite distinct meanings. The first (at least historically) meaning is the "classic liberal" or "traditional liberal" or even "XIX century liberal" -- a political philosophy emphasizing individual rights and liberties. Nowadays a "classic liberal" is almost a synonym for a "small-l libertarian". The second meaning is "leftist", "progressive", "opposed to conservatism". This is the usual meaning in which the word is used in the US today. Now, what will happen if you dial the "liberal values" to 11? Liberal/classic, not much -- you'll get much weirdness, some of it disgusting, to be sure, but overall it might look like, I don't know, say, Burning Man. But the liberal/progressive values are a different kettle of fish. These include things like serious dislike of inequality. Or, for example, strong preference for community over individual. So turning these things to eleven gets you moving towards the Soviet Russia territory. You should start thinking about confiscatory tax regimes, limitations on property rights, etc.
8 pragmatist 7y
I'm not sure how turning the dial to 11 works, but there seems to be a pretty glaring asymmetry in your analysis here. If turning the dial to 11 on progressivism takes you to Soviet Russia, why doesn't turning the dial to 11 on classical liberalism take you towards complete stateless anarchism, which I imagine would be considerably less congenial than Burning Man? "But," the classical liberal might say, "we believe the state does have a role to play in protecting its citizens from violence inflicted by others, and in enforcing contracts." Yeah, and progressives believe that the market has a role to play in solving the economic calculation problem. They also have commitments to civil liberties and individual autonomy that are incompatible with a Soviet-style dictatorship. If turning the dial past 10 is sufficient to erase those commitments, maybe it's also sufficient to erase the classical liberal's commitment to a night-watchman state?

This line of conversation seems to focus on the "turning the dial to 11" idea, which I take to mean "increasing the distance from the mainstream".

I think I see a couple of problems with this.

First, a political ideology is composed of not one, but several "dial settings". Correlations between them are at least partly matters of historical accident, not logical necessity. We can conceive of dialing up or down any of these somewhat independently of one another.

Why is anti-colonialism linked to opposition to private property, instead of to protecting the private property rights of oppressed people? Why is it in the interests of "big-business conservatives" today to oppose scientific education, whereas in the mid-20th century the business establishment was strongly supportive of it? Why is antisemitism today found in both the far left and far right, whereas it once was a defining characteristic of right-wing nationalist populism? Because of the formation and breakdown of specific political alliances and economic conditions over historic time — not because these views are logically linked.

Second, a political ideology often opposes what outsiders se... (read more)

1 Lumifer 7y
It takes me towards -- that is, in that general direction. It doesn't get there, though, because classical liberals were quite familiar with stateless anarchism and have rejected it. Again, turning the dial to 11 moves the progressives towards Soviet Russia without necessarily getting them there. Note my examples -- they do not mention hanging capitalists on the lampposts. Imagine a committed (maybe even a radical) progressive finding himself in a country which taxes incomes over, say, $500,000 at the 99% tax rate. Would he start to demand lower taxes on the rich? Not bloody likely, and this is a confiscatory tax regime.
3 shminux 7y
By the way, the marginal income tax rate in the US on incomes over $100k was 92% in 1953, and 70% on incomes over $108k until 1981, when Reagan first started trading taxes for deficits. Source [http://taxfoundation.org/article/us-federal-individual-income-tax-rates-history-1913-2013-nominal-and-inflation-adjusted-brackets].
2 Lumifer 7y
From your source, in 1953 the marginal tax on ordinary income over $200K was 92% for single filers and that's $1.7m in today's dollars. I do wonder how many people were in this tax bracket. For the rich, most of their income was dividends and capital gains -- not part of ordinary income.
1 shminux 7y
Sure, 92% on $1.7m/yr is not quite 99% on $500k/yr, as in your example, but it's not too far off, and it is interesting to examine how people on different sides of the political spectrum reacted to it. I don't know if any of the "progressives" (meaning leftists?) demanded lower taxes back then.
4 Lumifer 7y
By the way: a nice graph [http://visualizingeconomics.com/blog/2011/04/14/top-marginal-tax-rates-1916-2010] and an amusing fact: The Wealth Tax Act of 1935 applied the top rate to income over $5 million and had only a single taxpayer: John D. Rockefeller, Jr.
0 Moss_Piglet 7y
I think the issue here is that to you progressivism is a set of very specific ideals whereas to me it is a set of general-purpose political tactics. We could argue it around in circles forever, so why not cut to the meat of the issue; what would we expect the progressive response to be like if each of us were right? Situation A: Three nationalist groups representing their country's majority begin systemic campaigns of genocide against minority groups whom they resent for their higher social standing and perceived foreignness (in reality, both have lived there for centuries). The German NSDAP targets the Ashkenazim, the Vietnamese Viet Minh targets the Hoa, and the Hutu Akazu targets the Tutsi. What do we expect the modern sensible progressive to feel? If this is a simple question of morality, we could expect that each case would merit strong condemnation, and that the failure to prevent them would be regarded as an unforgivable tragedy. If on the other hand Progressivism is simple political expedience, we expect our answers to break along purely practical lines; the NSDAP was a rival and is thus condemned as strongly as possible, the Viet Minh are even now an ally and thus their actions are completely ignored, and the Akazu are of no consequence whatsoever and are thus thought of only within the context of expanding the power of allied NGOs. Situation B: Two men led attacks on US Federal Government buildings in an attempt to spark a race war which they believed was divinely ordained, failed, and were subsequently executed. John Brown attacked the Federal Arsenal at Harper's Ferry, while Timothy McVeigh attacked the Federal Building in Oklahoma City. In both cases innocents were killed as a result of the attacks, and in both cases their actions hurt their cause in the public mind and encouraged the expansion of paramilitary police forces designed to prevent similar future strikes.
If this is a question of humanitarian ideals, you might expect that both would be repudiated for their act

> I think the issue here is that to you progressivism is a set of very specific ideals whereas to me it is a set of general-purpose political tactics.

Judging by the examples you give, the tactic you're attributing to progressivism is basically harsh condemnation (and often forceful suppression) of purported "human rights abuse" when the perpetrators are ideological enemies, but quiet tolerance (and sometimes even approval) of the same actions when they are perpetrated by allies or by people/groups who do not fit the "bad guy" role in the standard progressive narrative. Is this pretty much what you intended to convey, or am I missing something important?

If I'm not, then I don't see why you tie this behavior to progressivism in particular. It seems like a pretty universal human failure mode when it comes to politics. Of course, the specifics of the rhetoric employed will differ, but I'm sure I can come up with examples similar to yours that apply to conservatives, or indeed to pretty much any faction influential enough to command widespread popular allegiance and non-negligible political clout. Do you think progressives are disproportionately guilty of this kind ... (read more)

> I concede that a lot of contemporary discussion of John Brown is unjustifiably reverential, and I don't consider him particularly heroic.

I consider him extremely heroic. Not ultrarational, but there were people suffering in the darkness and crying out for help, a lot of people saying "Later", and John Brown saying "Fuck this, let's just do it." If there's a historical consensus that the Civil War could have been avoided, I have not encountered it; and that being so, might as well have the Civil War sooner rather than later.

2 Vaniver 7y
Here's an argument [http://takimag.com/article/lincolns_folly_steve_sailer#axzz2h9wGF8qS]. Basically, Lincoln could have acted early to keep half of the South, and a confederacy of just seven coastal states primarily dependent on the global cotton market could have been waited out, or brought to heel quickly.
0 Eugine_Nier 7y
To bring this to contemporary examples, do you support Operation Iraqi Freedom?

If I recall my past opinions correctly, I said at the time that while such wars were the only way to free certain countries, I did not trust the competence of the current administration to prosecute it and was strongly against the way in which it was carried out in defiance of international law.

I would say in retrospect that the resulting disaster would have been 2/3 of the way to my reasonable upper bound for disastrousness, but the full degree to which e.g. the Bush Defense department was ignoring the Bush State department was surprising and would not become known until years later. I have since adjusted my political cynicism upward, and continue to argue with various community-members about whether the US government can be expected to execute elaborate correct actions based on amazingly accurate theories about AI which they got from university professors (answer: no).

0 Eugine_Nier 7y
Why doesn't the same logic apply to the Civil War?
7 Eliezer Yudkowsky 7y
For one thing, it worked. But I wasn't there at the time, not to mention not being born at the time, so it's hard to argue about what I would have said about the Civil War.
3 Eugine_Nier 7y
For certain values of "worked". Slavery was abolished; similarly, Saddam is no longer in power and Iraq is certainly much closer to democracy (at least by Arab standards). Also, in both cases the occupation (called "reconstruction" after the civil war) met with heavy resistance and was ultimately discontinued for political reasons. Ultimately Jim Crow was instituted. It is notable that for roughly a century afterwards the civil war was regarded as a tragic mistake.
-10 Moss_Piglet 7y
-4 Moss_Piglet 7y
More or less; it's all about framing the debate in terms which push popular sentiment leftward. Whoever controls the null hypothesis gets to decide what the data means, and conservatives suck at statistics. Now each of my examples is debatable; there are official Progressive answers to each dichotomy and they're all designed to make sense to well-educated, intelligent people (no-one with any sense would call the Cathedral dim). But if you look at the pattern, not just here but anywhere you look, you see double-standards which invariably favor the political Left and Demotism in general. I can't force you to see it, and I don't begrudge it if you don't, but it is there to see.

I took your explanation of "governmental entropy" to indicate a breakdown of hierarchy.

High-order gov't = clear lines of hierarchy, which you could draw in a simple diagram.

Low-order gov't = constant uncertainty about who's in charge (with the resulting insecurity leading to violence).

> We could argue it around in circles forever, so why not cut to the meat of the issue; what would we expect the progressive response to be like if each of us were right?

So this is good, but I'm still confused.

Your examples describe a government which acts in its own interests (rather than by moral ideals) and I accept that this is in fact the case for our government, that it acts not according to ideals but in self-interest.

What I don't understand is why this is particular to progressivism, and not a general property of ideologically driven power structures. Or even power structures in general, for that matter - doesn't Fnarg also act in his own interests, by strengthening his allies and weakening his enemies?

> who it's ultimately helping

Let's take India and Pakistan, and observe their positions on the Israel-Palestine scenario. Pakistan strongly sides with Palestine, probably because... (read more)

-1 Moss_Piglet 7y
That's not exactly true; there is one particular culture which benefits very greatly from every Leftist alliance; the culture of Leftist intellectuals. The Palestinians do not benefit from the "Peace Process" which keeps them in refugee camps, and neither does Israel or any of Israel's Arab neighbors or even the United States which keeps the scam going. But it does provide an enormous number of jobs for smart progressive kids working in the UN and other NGOs, juicy materials for journalists and political pundits, a great laboratory for PoliSci academics connected to the State Department to test their pet theories, and the crisis itself is an excellent propaganda tool for anyone to the left of Mussolini to use on any pet issue they might have. In other words, the Cathedral itself profits, even if (especially if) everyone else is losing money. That's not a healthy business model, in fact it's almost criminal.

but doesn't that just class "Leftist Intellectuals" as one among many groups who use power to serve their own interests, while outwardly appealing to high moral ideals?

What's different here from all the other Fnargles who seize power? Why should I take any particular notice of this particular group of Fnargles who fall under the heading "Leftist Intellectuals"? Why is this Universe worse than the Universe that would result if there were no "leftist intellectuals"?

Are "leftist intellectuals" somehow meaner and more destructive than other Fnargles? Or is it simply that this brand of Fnargle is really, really good at re-directing power to itself?

3 Moss_Piglet 7y
Yes and yes, and the reason for both is how they take power. Nearly every ruler, and virtually every ruling class, in history has built their power by skimming off of the top; tithes are one of the oldest non-arbitrary forms of taxation, and the word literally means a ten percent cut. The incentive for the ruler is always to increase their personal profit by increasing the size of the pot he skims from, which means that, as Machiavelli astutely pointed out, a benevolent ruler and an amoral one will be indistinguishable. The reason the modern situation is so bad is that the conditions where the Cathedral profits have nothing to do with how well it governs, and are in fact typically opposed. If Somalia stays a war zone for the next ten thousand years, that's quadrillions of aid dollars which otherwise wouldn't be spent. Joseph Stiglitz, one of Bill Clinton's top economic advisers, made a similar point about modern corporate mismanagement. When the shares are controlled by an individual or a small number of individuals, everyone has an interest in making sure that the company is running efficiently; when the shares are too widely distributed, speculation rules and the Board of Directors ends up calling the shots in their own interests. The result is bad service, poor profits and a bunch of wealthy Board members.
4 Ishaan 7y
Okay, so I came into this 1) considering plausible the notion that attempts at reform frequently fail. 2) I also came into this believing that there isn't any good feedback mechanism to kill counterproductive charity, so it's not a stretch to apply that to reform. 3) Also, perverse incentives can sometimes perpetuate dysfunctional things. You've helped me to connect these dots and I am considering the notion that a system of perverse incentives is fueling a large amount of counterproductive reform, at least insofar as it comes to foreign policy. I don't have the evidence to believe this is true yet, but it is a coherent notion that could well be true. With regards to domestic policy (an area where I've got at least some evidence) I'm more skeptical. But then again, I take it the Cathedral does skim off the domestic pot, so maybe the effects cannot be observed domestically. I'm also not sure I understand the whole "the past was in many ways better" notion - I can't think of many metrics by which this is true. So... 1) Is this different from other forms of corrupt or inefficient charity? What is specific to the Left? Could this not apply to any group who were after a cause which was not related to their own direct profit? 2) Can it be fixed by requiring more transparency and data collection to ensure that interventions are, in fact, effective? (To force the benefit to the Cathedral to be tied to how well its actions produce the results it claims to produce)...basically, can we try to hold Cthulhu accountable? After all, revolting against Cthulhu altogether will increase entropy, and for reasons obvious to both leftists and reactionaries that is undesirable. Transparency-inducing reform seems to be something that everyone generally gets behind. If it is true that the tool of the Cathedral's violence is reform, then reform seems to be the appropriate channel by which to modify it.
-5 Moss_Piglet 7y
-2 Moss_Piglet 7y
That's one way to look at it, but this is more about the actual responses of progressives themselves and I tried to phrase it that way (i.e., "What do we expect the modern sensible progressive to feel?"). What do you think about the Viet Minh's genocide against the Hoa? What do you even know about them? Is it anything at all like what you feel about the Holocaust? What do you feel when you think about John Brown? Do you think about him? Is it at all like your mental image of Timmy McVeigh? What's your response to the Liverpool Care Pathway? Is that even on your radar? How about the Tuskegee Syphilis Experiment, I'm sure you've got a strong feeling about that one? There is a pattern here; supposed moral concerns do not accurately predict how progressives (ordinary progressives, not politicians, remember) react to most issues. There are patterns of thought and behavior here and elsewhere which simply do not make sense except in the context of systematically eliminating non-aligned bases of power and expanding aligned ones. This is the absolute essence of the issue.

Me, personally? My domain is biology, and I am aware that my political opinions on most issues aren't to be taken any more seriously than the average undergraduate's opinions. I suppose that makes me the "average progressive", so maybe that's a good thing:

Truthfully, none of those are on my radar, and I know nothing about the Holocaust beyond what I learned in school. As far as I'm concerned it's just one among many terrible genocides, and one that presently gets more attention than the others because it was committed against a group who currently inhabits Western nations. Slavery of African Americans is similar - one among many terrible atrocities which happen to get more attention because the group they were committed against lives among us.

The American public (which includes me) ignores the Hoa because we never see the Hoa and have no clue who they are. I've never met a Hoa. There are no Hoa organizations fighting for increased awareness. If awareness existed, people would care...but it doesn't, so they don't. This is what is meant by liberals when we say "privilege" - African Americans and Jews living in the West, as a group, have more privilege than the Hoa o... (read more)

5 Moss_Piglet 7y
You know, one of the things I keep forgetting is how reasonable people tend to be over here. My flinch-instinct is still very much tuned to other corners of the internet. Basically, everything you've said is consistent and reasonable and utterly dissimilar to most of the progressive stuff I've ever seen. My sociology prof's lectures, articles I read on Jstor, friends/family back home in my yellow dog democrat hometown, the feminist / progressive christian blogs I lurk on, politicians I follow (and often vote for; my options are bad in that sense). It's obviously the same general pedigree, but a different breed. I'm not particularly sure what to make of it. You have to scroll a bit; his whole plan was based on a white-supremacist novel called The Turner Diaries [http://en.wikipedia.org/wiki/The_Turner_Diaries]. It's pretty much Battlefield Earth with Psychiatry find-and-replaced with Judaism, even down to the "nuke 'em all" ending. I've never read it myself but it's supposedly very popular in those circles.
4 JoshuaZ 7y
One of those is a war to stop something which is actually bad. The other isn't. That's not a trivial distinction.
-6 Eugine_Nier 7y
3 Ishaan 7y
Also...it just seems like the smartest people would always discard post-modernism. Values might shift away from mine, but post-modernism would imply that the epistemology would shift away from mine. Values are mutable properties, but there's only one correct epistemology. It ought to be converged upon. It's not like the swimming Cthulhu just happened to swim by the correct epistemology by chance, as part of a leftward drift. The correct epistemology is one answer in a reasonably large memetic space - we wouldn't have found it by coincidence. What's more, the ideals of reductionism and logic and the correct epistemology have been multiply, independently derived. China, India, and Greece all demonstrably converged upon them, and I'm sure many other unrecorded individuals have as well. (I take it you agree with me that there is a correct epistemology and it approximately corresponds to science, rationality, reductionism, etc, since you decry anti-science post-modernism)
-4 Moss_Piglet 7y
Postmodernism doesn't have to be right to be popular, and right now political power is a matter of popularity. Even if "the smartest people" prefer being right to being powerful, a dubious proposition if you ask me, that just means their less intelligent but more ambitious cousins will be the ones wielding the power instead. The modern feminist and anti-racist movements have started to learn that their pet pseudo-science sociology is just not credible enough to counter anthropology, biology, and psychology; they see postmodernism as a way to hit back at "the scientific establishment" which they identify as aligned with their oppressors. At the same time, anti-corporate alternative medicine and animal rights activists (who travel in the same circles) have wanted to discredit the medical industry for decades and are turning to PoMo rhetoric as well. These groups are all at the vanguard of the modern left and all of them have a lot to gain by weakening science. In the bastardized words of Tolstoy: "Good ideas are all alike; every bad idea is bad in its own way." An ordered society, like Greece, India, and China, will tend to look and think very similarly even when direct communication is limited. Their traditions are the results of centuries or millennia of received knowledge which has had to pass the test of each new generation before it was transmitted to the next. In a sense you could say their memes are K-strategists; in a stable environment with limited opportunity to transmit themselves, the high cost of a more correct idea pays for itself by out-competing rivals in the long run. In the modern world (more-or-less everything after the printing press), where our technology made data transmission and storage trivial, the new environment put out new pressures. Old ideas were built to last but slow to spread; new ideas could easily afford to be much stupider and more dangerous as long as they reproduced and mutated quickly enough. These r-strategist memes are fads

But ancient India, China, Greece were absolutely over-run by irrationality. The seeds of logic and reason were lying more or less ignored, buried in texts alongside millions of superstitions and bad epistemologies. And our currently fashionable epistemology is superior to theirs. They didn't have the notion of parsimony.

Why is logic and reason spreading faster today than in the past? Do you think that the rise of post-modernism (Actually, wait.... why are we using the word post-modernism to mean anti-science? That doesn't make sense...) will somehow eclipse the spread of rationalism?

Your model seems to have anti-science-post-modernism as a successor to rationalism. My model has anti-science as a reaction to the rapid spread of rationalism - a backlash. Whenever something spreads rapidly, there are those who are troubled. Anti-science can only define itself in opposition to science - imagine explaining it to someone who had never heard of science in the first place! Further, anti-science advocates a return to pre-scientific modes of thought. Both of these are the signals of a reactionary school of thought. Cthulhu doesn't swim that way.

> In the modern world (more-or-less everything

... (read more)
6 [anonymous] 7y
Yvain's too [http://slatestarcodex.com/2013/03/04/a-thrivesurvive-theory-of-the-political-spectrum/].
0 Multiheaded 7y
HUGE SPOILER: Technically, historical materialism and economic determinism were first... yup, core Marxist ideas.
1 Multiheaded 7y
Would anyone care to dispute the object-level claim I made, or are people just spree-downvoting? http://en.wikipedia.org/wiki/Historical_materialism http://en.wikipedia.org/wiki/Economic_determinism Wikipedia seems to be pretty unambiguous about Marx being the first notable theorist here. It's not about "neutrality", there just isn't any evidence that this claim is mistaken.
8 Randaly 7y
Neither of the above. Your comment's style was suboptimal, technological determinism is different from economic determinism, and the neo-reactionary position is neither. (This is obvious from the fact that they think that they can reverse the left-ward trend of history, but that it will take a concentrated effort.) (I did not downvote.)
4 EHeller 7y
I cannot see how it is different than a mix of historical materialism and economic determinism. Please elaborate. Near as I can tell, the point is that Yvain and others (Ishaan specifically) are arguing that the reactionary position is wrong by asserting some form of historical materialism/economic determinism. I.e., reactionaries cannot reverse the trend of history because the structures of governments are largely an adaptation to the technological world we live in. The reactionaries want to divorce the government/culture from technological progress and assert they can move independently. The argument against them seems to be that government/culture may well be a response to the technological climate, and as such as technology changes so will the culture and government.
2 Randaly 7y
Economic determinism refers specifically to the economic structure. The basic outlines of the US's economic structure have not changed since at least the 1930's, and arguably even earlier. The development of TV, the internet, or for that matter the printing press, are all changes in technology, not changes in a society's economic structure. Marx, for example, was not a technological determinist; Yvain et al. are not economic determinists. Changing an economic structure is significantly easier than destroying all technology and preventing new developments. In that case, I switch this critique to 'sub-optimal style' - i.e. it was difficult for me to tell who Multiheaded was addressing and how his point was relevant.
2 EHeller 7y
You missed roughly half of my sentence, and half of Multiheaded's. The other half was historical materialism - below is a quote from the Wikipedia article
-1Randaly7yNah, I was deliberately ignoring the other half. The fact that one part of Multiheaded's comment was correct (though, AFAICT, irrelevant to the above discussion) doesn't mean that the other part (regarding economic determinism) is too.
4wedrifid7yAssuming the claims are correct (haven't a clue personally and nearly as little interest) I don't know why you got downvoted. The style is a little way from optimal but not enough that I'd expect serious penalties to be applied. Have you been pissing people off elsewhere in this thread? Voting tends to build up momentum within threads and the reception of later comments is at least as strongly influenced by earlier comments as it is by individual merit.
-1Eugine_Nier7yYvain's argument appears to be an attempt to put a positive spin on one of the neo-reactionary definitions of leftism: Leftism is what happens when signaling feedback cycles no longer interact with reality, in the sense of the Philip K. Dick quote "Reality is that which, when you stop believing in it, doesn't go away". Edit: Yvain tries to be pro-leftist by associating it with technological progress. Except he runs into this [http://lesswrong.com/lw/fqd/rationality_quotes_december_2012/7y3u] problem, i.e., leftism is how people in technological (or merely prosperous) societies like to behave, which is not the same thing as the behaviors that lead to technological progress (or prosperity).
5[anonymous]7yWell, once you've got the bottom few tiers of Maslow's pyramid secured, shouldn't you start to think about the upper ones? And is chess evil because the pieces don't refer to anything outside the game?
-1RichardKennaway7y...then you can ignore them, because that's done?
7wedrifid7yThe word was secured. And yes, it means that most of your attention no longer needs to go to that area. That's the entire point of Maslow's hierarchy of needs. Once people have satisficed their low level needs they tend to focus more attention on higher, more abstract, goals.
3[anonymous]7yI don't mean you no longer need to eat, I mean that once you've reached a stable income that will allow you to eat as much as you need, you no longer need to worry about eating, and you can spend some of the time left over playing darts or whatever, rather than getting even more food into your fridge. Or why did you take the time to write that comment? Did it help you meet your basic survival needs somehow?
-4Eugine_Nier7yChess does a reasonable job of relating to reality in the sense I mean because the rules of the game and the person who wins are objective and (relatively) independent of any false beliefs about strategy the players might have. (If chess ever reaches the point that a player can get away with arguing that the laws of the game are arbitrary and that therefore he should be able to play some illegal move, that will be a sign that chess is becoming corrupted.)
4Desrtopa7yWhile I'm sure that there are ways in which our society could be much better geared to cultivating technological progress and/or prosperity, looking to the standards of earlier times does not seem like a particularly effective way to do so. Considerations of how to best cultivate further prosperity aside, I would say that there is a lot to recommend having people in a society behave as they like to behave, rather than ways that they don't like to behave.
-4Eugine_Nier7yWhy not? Look at societies that achieved and/or maintained prosperity and imitate them; look at prosperous societies that collapsed and avoid doing what they did.
6Desrtopa7yWhat societies maintained prosperity without either collapsing or turning into, well, us? In any case, we are by many standards the most prosperous civilization ever to exist; by what older prosperity-promoting behaviors do you think our society might be improved?
-8Eugine_Nier7y
1Moss_Piglet7yAre they? Unless you mean that as a synonym for Progressivism, I've missed that bit. Postmodernism isn't just a literary theory. [http://en.wikipedia.org/wiki/Science_wars#Postmodernism] You can't have an Emperor surrounded by legions of Mandarins if everyone is out in the bush looking for acorns; you need agriculture and specialization long before anyone starts talking about the Mandate of Heaven or tracing out dynasties. The same way you couldn't expect someone to come up with Black Bloc tactics without there already being ubiquitous video recording. But you could have crop rotation without building the Forbidden City; technology is a necessary condition, but not a sufficient one. The incentives are no less real and no less perverse if they require a technological substrate to be effective.
5Ishaan7yI'm talking about the greater literacy and mathematical proficiency, coupled with a decline in superstition and religious belief among cultures that have had the longest exposure to information technology. Oh, ok that makes more sense.
-9Moss_Piglet7y
-8Eugine_Nier7y
4MugaSofer7yHuh. The definition I was using - I guess picked up from usage? - was something like "Liberals reframed as overdog, to counter the perception of underdog-ness and apply liberal strategies to themselves. See also: propaganda, censorship." I suppose there are overlaps, but still ... I'll need to study further if I'm ever to pass the Reactionary Turing Test.
7shminux7yThis subthread deteriorated into an unchecked and fruitless political discussion. How sad.
6Ishaan7yI have a problem with political stuff when it derails other, important conversations. But my original quote is, at base, about politics. Deterioration implies that the sub-thread was about something "higher and better" than politics to begin with. I do think the ensuing discussion has improved my understanding of the person I quoted, at least. That was the spirit of the main topic, right? Edit: okay, I just read the sub-sub threads. You're right, it did deteriorate...I just didn't see the extent of it because there weren't many direct replies to me that displayed the deterioration.
5wedrifid7yAre those the best 15 words that the guy has? If so, that provides me significant information and potentially saves me time.
3Ishaan7yI don't assign high confidence to my ability to summarize it accurately because I didn't really get it - I was frequently confused about the meaning. What little I did get out of it doesn't match the praise it gets, and history/political science are not areas that I currently consider myself well informed about, so there is a high chance I'm missing something. Also, keep in mind I'm summarizing someone whose opinions are, superficially speaking, aligned with a group I generally tend to disagree with, so I might not have adjusted for my biases. Also, The Complete Works of Moldbug is frickin' long, and I've only read a minuscule fraction of the work. I just felt like trying Moldbug because it fit this exercise really well. Moldbug is the writer I've read the most recently who I think could really use some brevity. I did get one useful meme out of it which I actually agree with ... I generally felt this before reading reactionary literature, but I agreed with it more after: Progressives should try their best to work within an existing imperfect system rather than against it - because structures are expensive and complicated, and if you're gonna tear one down you'd better be prepared to build another one. In other words - rather than attacking bad things, create good things and let bad things wither away naturally.
2apophenia7yAdding a note because I said "quotes don't belong in this thread" elsewhere. However, this quote belongs in this thread, because
4Ishaan7yOh no, I'm sorry. That wasn't a direct quote, but a paraphrase of a set of long essays. I should not have formatted it like a quote. I've edited the original comment to better reflect this.