If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

163 comments.

Hi LW, first time commenting on here, but I have been a reader / lurker of the site for quite some time. Anyway, I hope to bring a question to the community that has been on my mind recently.

I have noticed an odd transformation of my social circle, in particular, of the people whom I have basically known since I was young, and are about the same age as me. I'm wondering if this is something that most people have observed in other people as they moved into adulthood and out into the world.

I would say that ever since I was a teenager I have considered myself a "rationalist". What that has meant exactly has of course been updated over the years, but I would say that my approach to knowledge hasn't fundamentally changed (I didn't suddenly become a postmodernist or anything). As soon as I understood what science and empiricism were about, I knew that my life would revolve around them in some way. And what made me very close to the people who would become my best friends throughout high school and college is that they felt pretty much the same way I did. At least I very much believed they did. My happiest moments with them, when I was about 16 to 18, involved lengthy, deep, and ...

I agree there's something to the exploration-exploitation view of people becoming more closed-minded. But don't be too quick to write it off as "people don't think carefully anymore", or simple tribalism. Some important questions really do get settled by all those late-night college debates, though often the answer is "I don't think it's possible to know this" or "It's not worth the years of effort it would take to understand at a more-than-amateur level." People are recognizing their limitations and zeroing in on the areas where they can get the highest return on investment for their thoughts. That's a difficult thing to do when you're younger, because you don't have much to compare yourself to. If you've never met a physicist more knowledgeable than your 9th-grade science teacher, you might well think you can make big contributions to the theory of relativity in the space of a few weeks' discussion with your friends. Similarly, when it comes to politics, the idea of considering every idea with an open mind can fall victim to the pressures of reality--some ideas are superficially appealing but actually harmful; some are nice in theory but are so far from what could reasonably be implemented that their return on investment is low. And because politics is so adversarial, many ideas that are promoted as novel and non-partisan are actually trying to sneak in a not-so-novel agenda through the back door.
That's an interesting thought. However, I tend to observe that most people do not take strictly agnostic positions on most things. In fact, it seems that people tend towards certainty rather than uncertainty. So I'm not sure that I'm seeing people give up on questions they think are too difficult, or on which they lack the expertise or time to really come to a conclusion. From my perspective, it seems that people really do fall into ideological camps where they believe a lot of matters have been completely settled and do not need further discussion. An interesting sort-of reverse phenomenon I've noticed is that on matters where people really do have more expertise, they tend to be a little more agnostic and open to debate. So, for example, you might notice people having an in-depth discussion of some aspect of software engineering, like a library or a framework, weighing the pros and cons of each and citing expert opinion; but on politics, which we understand even less, you really don't see this at all.
I can imagine a few possible things that could have contributed.

First, being more open-minded when young and becoming more closed-minded when older is the usual pattern, not just for humans but for many animals; kittens are more playful than adult cats. And "philosophy" is a way of playing with words and ideas, so naturally young people would play with different ideas (the smart ones with different smart-sounding ideas, the stupid ones with different simplistic ideas), and gradually settle on the One True Way of looking at things, as they stop being able to consider new ideas and choose whichever of the known ones seems to work best for them. It makes sense from the "exploration / exploitation" point of view: when you have a lot of time ahead of you, and your opinions matter relatively little because you are in a relatively safe environment, it is good to explore and get new data. When time becomes scarce and potential mistakes costly, stick with the best of what you already know. Also, there is a trade-off between exploration and productivity: the time you spend playing with new ideas is time you don't spend earning money or working on your dreams, which is okay for a teenager with low value on the job market. For an adult, a job and/or children reduce the time and mental energy they can spend on thinking about things unrelated to immediate survival.

Second, people try to fit into their environment. I am sorry if this sounds too cynical, but the change in your friends probably reflects the change in the environment they are currently trying to fit into, where "scolding people, or snarky comments" are the standards of communication. (Trying to do anything else in such an environment would probably gain you some snarky comments, and trying to reflect on the situation would get you scolded, plus some extra snarky comments. Not being sufficiently far-left or far-right would gain you low status as insufficiently "woke", or whatever the right-wing equivalent is.) Consider yours...
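The exploration/exploitation framing invoked above comes from the multi-armed bandit literature, and the intuition can be checked with a toy simulation. This is a hedged sketch, not anything from the thread: all names, parameters, and payoff numbers below are made up for illustration. An epsilon-greedy agent with a fixed exploration rate does better per step the longer its horizon, because early exploration pays off over the remaining steps:

```python
import random

def run_bandit(horizon, epsilon, true_means, trials=1000, seed=0):
    """Average total reward of an epsilon-greedy agent, averaged over trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        counts = [0] * len(true_means)   # pulls per arm
        sums = [0.0] * len(true_means)   # reward sums per arm
        reward = 0.0
        for _ in range(horizon):
            if rng.random() < epsilon:
                arm = rng.randrange(len(true_means))          # explore
            else:
                est = [s / c if c else 0.0 for s, c in zip(sums, counts)]
                arm = est.index(max(est))                     # exploit best guess
            r = rng.gauss(true_means[arm], 1.0)               # noisy payoff
            counts[arm] += 1
            sums[arm] += r
            reward += r
        total += reward
    return total / trials

means = [0.0, 0.5, 1.0]  # one arm is clearly best, but the agent must discover it
```

With a long horizon, paying for some exploration is worth it and the agent converges on the best arm; with a short horizon, most pulls are spent before the estimates are any good, matching the comment's point about time becoming scarce.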
Yes, this is an absolutely normal, common experience. People get "set in their ways" at some point in their lives, and it becomes easier to move a mountain than to have them change their mind. This is exactly why one of the very first parts of the EY sequences is How To Actually Change Your Mind. It is the foundational skill of rationalism, and something which most people, even self-described rationalists, lack. Really, truly changing your mind goes against some sort of in-built human instinct, itself the amalgamation of various described heuristics and biases with names like 'the availability heuristic' and '(dis)confirmation bias.'
The (speculative) explanation my mind immediately goes to: a combination of the you-are-the-average-of-your-5-best-friends heuristic, and the dilution of a selected social group when its members move into new environments. Universities and workplaces, with unusual exceptions, are probably not going to select as aggressively for high rationality (however you define "rational" & "rationality") as your in-school social selection did. So (I suspect) when the people in your circle started expanding their own social networks during university and then at work, the average rationality of their friends & acquaintances went down. And because (insofar as a person and their behaviour are malleable) a person's influenced by the people they hang out with, that probably made the people you know/knew less rational, or at least less likely to behave rationally.
Something in your comment changed my... not exactly opinion, more like feeling... about comparing social life at school and at job. Until now, I was thinking like this: At school you are thrown together with random kids from your neighborhood. But when you grow up, you choose your career, sometimes you even choose a different city or country, and then you are surrounded with people who made a similar choice. Therefore... not sure how to put this into words... your social environment at job is a result of more "optimization freedom" than your social environment at school. But suddenly it seems completely the other way round: Sure, the job is filtering for people somehow, but maybe it doesn't filter exactly by the criteria you care about the most. For example, you may care about people being nice and rational, but you career choice only allowed you to filter by education and social class. So, more optimization, but not necessarily in the direction you care about. And then at the job you are stuck with the colleagues you get on your project. However, at school, you had the freedom to pick a few people among dozens, and hang out with them. I guess what I am trying to say that if your criteria for people you want to associate with have a large component of education and social class, you will probably find the job better than school, socially; but if your criteria are about something else, you will probably find the job worse than school. (And university probably gives you the best of both worlds: a preselection of people, among whom you can further select.)
That is true for the people you are going to become friends with, but the difference in negative environments is much bigger. If your job has a toxic social environment, you are free to find a new one at any time. You also have many bona fide levers to adjust the environment, for example by complaining to your boss, suing the company, etc. When your high school has a toxic social environment, you have limited ability to switch out of it. Complaints about other students face extremely high bars to be taken into account, because it's mandatory for them to be there and it isn't in the administrators' best interests. If someone isn't doing something plainly illegal, it's unlikely you will get much help.
Yep. The school → university transition might be the most interesting one WRT tristanm's question, because although it theoretically offers the best opportunity to select for rationality, in practice a lot of people can't or won't exploit the opportunity. I imagine even quite nerdy students, when deciding where to apply to university, didn't spend long asking themselves, "how can I make sure I wind up at a campus with lots of rationalists?" (I sure didn't!)
I don't know about rationalists but one big advantage of going to what's called a "highly selective college" is that your peers there are mostly smart. The same principle works for schools, except that the results are not as pronounced because the schools effectively use the wealth of the parents as a proxy.
I think the impression you have of the people may have been influenced by seeing them primarily through social media. Have you talked to them in person? It might be different. The format of social media makes having nuanced discussions difficult, and emphasizes the more tribal posts. Another thing to consider is that their priorities may have changed more than their approach to life. They may be applying empiricism to how to advance in a career, or how to be a good parent. There is a limited amount of time in a day, and they may have enough time to do only a few things well. Also, sleep deprivation, common among new parents, can make thinking clearly more difficult. Once children get older, parents get a bit of their balance back.
Interestingly, out of my original friend group, I am the only one who has gotten married and had a child. If anything, I have been forced to become more rational in order to cope with the added anxieties, lack of sleep, and stress.
I can offer a possible explanation (just one model though, you'll have to verify it for yourself). Humans are by design far from rationality as we intend it: we evolved to function in a social environment of peers, and to make life-and-death decisions in the blink of an eye. The structure of our brain is such that we first make instinctive decisions, and then we justify them post hoc. The aim of rationality, where we try to first deliberate the truth from first principles and then conform our behaviour to those conclusions, is totally alien to the way human beings usually work. It is possible to change our mind by self-deliberation, but it is very difficult, with our own nature as an obstacle, and thus can be done only for a limited array of subjects and with enough resources at your disposal (such as a lot of time and a safe environment). This might be what happened to your friends: by concentrating on thriving in the social environment, they came to rely more and more on system 1 (the heuristic, quick-firing decision system), to the point of forgetting to exercise system 2 (slow and deliberate) as you did when debating years ago. The more interesting question, I would say, is this: why did you never forget?
That's a good question. I think what separates me from a lot of the people I surrounded myself with is that I have always relied far more on system 2 than on system 1. The exact reason for this I'm not sure about, except that I've always felt that my system 1 has lagged behind or been deficient in some way relative to most of my peers. I've always felt very uncomfortable in social situations, in high-stress or fast decision-making environments, or when the demands to react quickly are high. I've always been a lot more comfortable in environments that allow me to think and work on a problem as long as I need to before I feel ready to commit to something. For that reason, I've come to rely on system 2-like reasoning for a lot of tasks that would normally be done by system 1. I think many people, once they transition to an environment in which navigating complex social structures becomes necessary, learn to rely mostly on system 1. This probably happens around the early adulthood phase, through college and into early career, when networking becomes very important. For various reasons, I found I didn't need to network very hard or build up a lot of social capital to find a career and a comfortable livelihood. I realize that this probably makes me very lucky - I am basically able to hold this outside-view position that allows me, in a way, to be a little more protected from certain biases that could have been learned from trying to thrive in highly social environments.
I've had the overall impression that the older you become, the more strongly you hold your beliefs; a metaphor might be the hardening of neural networks. I am making a connection right now between that and the part of personality known as 'openness', which according to Roland R. Griffiths decreases as people become older. http://www.hopkinsmedicine.org/news/media/releases/single_dose_of_hallucinogen_may_create_lasting_personality_change Which is discussed here: http://lesswrong.com/lw/7wh/rationality_drugs/4xmw and http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/ So drop acid with your friends, or rather have an underground psychedelic therapy group with blindfolds, music and emotional support. You've got to do your research, though, on how to facilitate these kinds of experiences. This is only for educational purposes and in theory.

FYI, I just banned an account "kings11me" who didn't participate in the forum, but was sending the following private message to multiple users:

God bless you and thanks, how are you? Happy to meet you. I got your contact via this site, I seriously have interest to invest on a profitable business in your country, the money I want to invest was acquired from my church member, and then I was his financial adviser. The amount to invest is ($14.5 million US dollars) presently, but I’m the present Catholic Church leader in my parish, if you will like to assist me as a partner, you must have the fear of God? kindly indicate your interest, and all other details relating to the funds will be revealed to you as we progress on. Confidentiality contact my direct e-mail address (REDACTED@yahoo com or REDACTED@gmail com) also indicate your direct telephone number, when replying this mail, God will guide us and with good health Amen, God bless you and your family, Rev, Chris Madurai Okon.

(In other words, I am an evil villain who has just deprived MIRI of a possible $14,500,000 donation from a secret rationality benefactor masquerading as an ordinary spammer. Business as usual. Mwa-ha-ha-ha-ha!)

I think about politics far too much. It's depressing, both in terms of outcomes and in terms of how bad the average political argument is. It makes me paranoid and alienated when people I know join facebook groups that advocate political violence/murder/killing all the kulaks, although to be fair it's possible that those people have only read one or two posts and missed the violent ones. But most of all it's fundamentally pretty pointless, because I have no desire to get involved in politics, and I'm sure that with regard to any advantages in terms of helping me better understand human nature, I've already picked all the low-hanging fruit.

So anyway, I'm starting by committing to ignore all politics for a week (unless something really earth-shattering happens). I'll post again in a week to say whether I stuck to it, and if I didn't, please downvote me to oblivion.

Oh, and replies to this post are excepted from this rule.

Or they agree with some aspects of a group but not others. Surely you don't agree with every opinion voiced on LessWrong, do you? Not even all of the generally accepted orthodoxy either, I'm sure. If you claimed you did, I'm sure I could come up with some post by EY (picked for representing LW views, no other reason) that you would be insulted to think others ascribed to you. Worth thinking about. Even in cases that appear to be clear cut fear or violence mongering it may be that they joined the group to have its messages in their news feed for awareness, because they refuse to flinch from the problem. How others choose to engage in social circles should be treated like browsing data from a library -- confidential, respected, and interpreted charitably. We wouldn't want to be making thought crime a real thing by adding social repercussions to how they choose to engage in the world around them.
All good points, in the general case - I myself frequently read about things I disagree with. However... that is more of a LW thing. Most normal people don't act like this, and the person I was thinking of certainly doesn't. Politics is about waving the flag for your tribe, and trying to actually understand the other tribe's point of view is like waving the enemy flag - treason! To show that they are loyal, many people seem to be adopting the maximally uncharitable point of view, or at least they have been in the last few years. Of course, it's also possible that this is why some people are advocating violence - they wouldn't really want violence, and they certainly wouldn't personally assault someone, but they advocate violence because it shows more tribal loyalty than just advocating peaceful protest.
I was going to remind you of the fundamental attribution error, but that isn't exactly what's going on here. Is there a name for the error of assuming the simplest possible explanation given the information available is correct, when it comes to human behaviour? Popsci aside, the simplest explanation you can come up with is usually not the case, because the other person is acting as a result of a lifetime of experiences that you have had at best only a small glimpse into. It's hard to evaluate exactly why they do what they do, without sitting themselves down on the couch for a few hours. If anyone knows what this error in analysis is called, I'm genuinely curious.
'Overuse of Occam's Razor?' Anyway, I know that psychology is complex and the explanations I come up with are only my best hypotheses, not ones I would necessarily have >50% confidence in - I should have made that clear. Still, I have trouble thinking of other explanations for why intelligent, educated, friendly people claim to believe that about 50-95% of the population are evil; or that most old people deliberately vote for bad things because why should they care if they are going to die soon anyway; or that there is >50% probability that Brexit will literally lead to a neo-Nazi state in the UK within 10 years; or that the best way to defend democracy in the USA is to assassinate Trump and Pence, despite the fact that they were democratically elected and that the sympathy vote would push the USA far more to the right; or that someone would attempt to prove to me that a political party is evil by showing me a meme saying that they are evil, as if messing about with Photoshop confers truth, and then be unable to provide a single non-meme-based argument to support this assertion. I mean, these beliefs are so crazy that if only one person were expressing them, I might worry that they were showing the early warning signs of some form of clinical paranoia. But it's widespread, among people who otherwise seem functional.
Well, you got me. I thought perhaps you were seeing things like "Islam is a violent religion" and inferring too much into it. But most if not all of those examples seem inexcusable if genuinely held. Although the original point stands that the person subscribing to the group might be doing so in response to more mundane writings, not endorsing the more extreme writing, which may even have been done for shock value. I don't know. Regarding the other point, it's not quite that Occam's razor is wrong, but rather something having to do with ignorance of a complex system. "The simplest explanation is probably correct" is true when we have a sufficient number of facts in front of us to make an inference. In most things in life this is the case, but human behaviour is complex enough to make it not generally true. I can make Occam's razor predictions about the underlying reason for my wife doing something, and maybe my closest friends or siblings. But not others -- their mental states are too complex, too dependent on things I don't have information on. Anyway, sorry to distract from your original question. I just wish there were a name and some literature regarding this bias, because it seems relevant and important.
However, I would say that even when dealing with high complexity and uncertainty, the simplest explanation is still usually the most probable hypothesis, even if it has <50% probability.
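The distinction being drawn here (most probable hypothesis vs. probability above 50%) can be made concrete with a toy Bayesian calculation. The numbers below are entirely made up for illustration: a prior that favors simpler hypotheses can leave the simplest one as the single most probable option even though its posterior stays under one half.

```python
# Made-up priors and likelihoods for three competing explanations.
priors = {"simple": 0.40, "medium": 0.35, "complex": 0.25}
likelihoods = {"simple": 0.50, "medium": 0.40, "complex": 0.45}  # P(data | H)

# Bayes' rule: posterior is proportional to prior * likelihood.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

best = max(posterior, key=posterior.get)
# "simple" is the single most probable hypothesis here (~0.44),
# yet it still holds less than half of the total probability mass.
```

So "pick the simplest explanation" can be the right betting strategy even when that explanation is more likely false than true, which is the concession the comment is making.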
Well, there is one simple explanation called bullshit. A lot of people are willing to pronounce positions and statements that they will not be able to execute in reality. Hopefully, most of them.
If you want non-meme based arguments, try visiting fora that cater to people capable of engaging in non-meme based arguments.
I don't know exactly what you've seen and therefore it's possible that the following fails to address it. But on the face of it username2's diagnosis seems very plausible. Not the bit about choosing to see violence-mongering stuff to keep one's awareness and opposition keen; that's taking steelmanning too far. But: put yourself in the shoes of someone who is, as you put it, intelligent, educated and friendly, whose political opinions are generally leftish, and who is horrified by the rise of right-wing populism as exemplified by Donald Trump and Brexit and Marine Le Pen and so forth. These things alarm them and they want to surround themselves with ideas that point the other way, to reassure themselves that the world isn't entirely against them, etc. So they find a Down With Donald Trump And Brexit group on Facebook and join it. Some of the things it posts are extreme and violent; our hypothetical intelligent leftie deplores that, and would be happier affiliating with a large anti-rightist community that doesn't do that sort of thing -- but all the large anti-rightist communities have people in them who do that sort of thing, so they don't have much choice. Joining the group doesn't mean endorsing everything its members post. (It's not as if the rhetoric of the less-pleasant parts of the political right is any nicer or more sensible than that of the less-pleasant parts of the political left. Intelligent educated friendly right-leaning folk can find themselves with some regrettable -- dare I say deplorable? -- bedfellows too.)
So the first thing that someone should do is untwist her underwear. The second thing is probably to contemplate the meaning of the word "horrify". Horrified by Donald Trump? Really? I assume that someone is not horrified by Putin. Or, say, Erdogan, who is, right now, doing a classic 1937-type purge in his country. Or is she? Erm. I'm not going to believe that.
It's a purely hypothetical someone, so who knows? FWIW the people I know who are horrified by the rise of right-wing populism (which is what I actually said, and which is not exactly the same thing as being horrified by Donald Trump as such) are no fonder of Putin and Erdogan than of Trump, so far as I can tell. (FWIW I am not exactly horrified by the rise of right-wing populism, but I don't like it at all and think it likely to do much harm; I think Putin and Erdogan are both substantially worse than Trump, but they're less surprising than Trump because Russia and Turkey have much stronger track records of awful awful leaders.) OK. (It turns out that what skeptical_lurker was describing was distinctly more extreme than in my scenario, so it's not terribly relevant how well that scenario actually matches reality. I'm not much of an expert on leftist Facebook groups; is there a good supply of such groups that don't have any extremist stuff in?)
So, is this really a thing? I've heard a lot about that "rise of right-wing populism", but remain unconvinced that it actually exists. What I observe is that some left-wing ideas became less popular than the left wing expected and wanted them to be (which caused a massive hissy fit on the left). In general, "populism" is one of those irregular nouns/adjectives: my ideology is (rightfully) popular but yours is populist. Since democracy is essentially a popularity contest, I tend to treat the label "populist" as a sour-grapes insult with little content. So are we just talking about some general re-assertion of the right wing (which, despite the left's best efforts, continues to insist it's not dead yet) or do you think the "populist" moniker has some meaning?
Dunno. There are quite a lot of politicians and political movements at the moment with the following characteristics:

* They are more openly nationalist than it has generally been fashionable to be until recently. ("Right-wing".)
* They are more appealing to "ordinary people" and less to "intellectual elites" than political groups previously in the ascendant. ("Populist".)
* In particular, their signature policies are often ones "experts" would be very sniffy about but that sound good to a lot of people. ("Populist".)
* They advocate economic policies whose consequences are probably better on the whole for the wealthy than for the poor. ("Right-wing".)
* They get some of their support from being seen to be happy saying things that are "politically incorrect". ("Right-wing", "populist".)

In the US, we have Donald Trump. "Make America Great Again", border walls, keeping Muslims out of the US; hugely more support among the less-educated than among the more-educated; advocates trade barriers and fierce border controls; favours large-scale tax cuts and abolishing Obamacare; grabs 'em by the pussy.

In the UK, we have UKIP and the rest of the anti-EU movement. Keeping scary foreigners out; support, again, strongly concentrated among the less-educated; leaving the EU has been widely predicted by economic experts to be a terrible idea; UKIP at least proposed tax and benefit changes whose first-order effect would have been a large transfer from poorer people to richer people; UKIP seems to attract an awful lot of people who are upset at, e.g., anti-black prejudice no longer being widely socially acceptable.

In continental Europe we have a whole lot of politicians and movements that are widely described as right-wing populist: the Front National in France, Vlaams Belang in Belgium, Golden Dawn in Greece, etc. I don't know enough about them to evaluate those claims, but they seem plausible. And they seem to be gaining rather than losing members over the years. I do. I...
Let's not get into the weeds of specific European or US policies. But the notion of "populism" is an interesting one (by the way, there's certainly left-wing populism as well, see e.g. Chavez). You defined it as:

There are implications, or perhaps prerequisites, to this definition. A big one is the distinction between "a large fraction of the population" -- basically, the masses -- and "experts and intellectual elites". Journalists love these terms, but they look a bit too jiggly-wiggly to me.

So there is some axis on which we can arrange people, with one end being "the masses" and the other end being "the elites". By which criteria do we do that? One obvious one is money: masses are poor and elites are rich. There are problems here, though -- for example, successful Kansas farmers are not poor at all, and yet people rarely call them elites. But a negative-wealth grad student in San Francisco is a bona fide member of the elites, isn't he? Social status is not a good criterion because it's basically a synonym for the axis we're trying to define, and I don't know how one would go about assigning "objective" social status levels to people or communities. The traditional class distinction is still in play in Europe, to some degree, but is by and large absent in the US. The city/country division is a good proxy in many ways, but by itself it's a different axis. IQ, maybe? Masses are dumb and elites are smart? That's an interesting approach, but it has scary consequences for the idea of democracy and the whole égalité thing. All in all, I'm not happy with the fuzziness of the masses/elites concept, which is thrown around with wild abandon nowadays.

Moreover, your definition of populism assumes that the masses/elites distinction matters -- they like different things and, presumably, make different choices. And there is the implication that in this divergence the masses are "wrong" and the elites are "right". Again, there are interesting consequences to this idea. Taking...
Of course I agree (perhaps Bernie Sanders is a more prominent, though less extreme, example than Chavez), but I find it interesting that you've leapt so readily from "right-wing populism: is it even a thing at all?" to "of course there's left-wing populism as well as right-wing populism".

Non-technical terms are always a bit jiggly-wiggly. Possibly also wibbly-wobbly and maybe even timey-wimey on occasion. There are, as you say, lots of ways to distinguish "masses" from "elites"; many of them correlate strongly with one another, and if you look at people who appeal to "masses" much more than to "elites", then I think you tend to get roughly the same people for a wide range of ways-to-distinguish. If you want a specific proposal, here is one, but of course it's ad hoc and somewhat arbitrary. Imagine asking for each person in a country (1) how wealthy they are, (2) how smart they are, and (3) where a panel of 100 random people from that country would put them on a "masses/elite" scale, according to whatever principles they happen to prefer. Measure #1 and #2 as percentiles and #3 on a 0..100 scale. Take the average. There's your measure of eliteness. (Wealth should be measured as NPV of assets plus expected income stream. You may notice that this handily puts that SF grad student somewhat higher than their negative net wealth might lead one to think.)

Nope. I can apply that definition just as well if it turns out that the distinction doesn't matter at all. But, as it happens, I think it's clear that the masses/elites distinction, fuzzy and ambiguous as it is, does matter -- not least because there are a lot of influential politicians doing well for themselves by explicitly telling the "masses" how they're being screwed over by the "elites".

There is no such implication. (Though there are some reasons to suspect that on questions that admit of actually determinable right and wrong answers, the "elites" will be right more often than the masses.) But that is expli...
I started with right-wing populism because that was the subject mentioned in the preceding comment, but the accent here was on populism, not on right-wing. I'm not convinced that the left-wing populism is a thing in exactly the same way. I don't know about that. How about Jeremy Corbyn and UKIP? Same "masses"? That actually depends on his profession. If he's going to get a degree in underwater basket weaving, I don't think making coffee at Starbucks pays that much. Can you? If the distinction doesn't matter, there's no way to appeal to the masses and annoy the elites at the same time, since there are no distinctions that matter. If there were no such implication, the word "populist" would not have derogatory and condescending overtones. And yet it does. Didn't you say above that you can apply the definition even if the difference doesn't matter? But let's see. Any sane politician-to-be would espouse popular ideas. So what makes positions populist is not that the masses like them, but that the elites dislike them? Does this mean that "populism" is a label for views elites dislike? As an aside, I find it hilariously ironic how the left wing nowadays defends the elites and denigrates the stupid masses :-D
I am not sure it's consistent to say both "of course there's left-wing populism" and "I'm not convinced that the left-wing populism is a thing", but never mind. I think you misinterpreted me. Of course there are populists of different sorts. But who they are doesn't depend much on exactly how you define "populist". Corbyn and UKIP both appeal more to the less-educated than the more-educated, appeal more to the poor than to the rich, etc. For sure. I think the typical underwater basket-weaving student is less "elite" than the typical computer science student, precisely because the latter is likely to be well off in 10 years and the former isn't. (Well ... for that exact same reason, maybe it's the more-elite-to-start-with people who can afford to go into underwater basket weaving; I'm not sure.) But I think all you're pointing out here is that "elite"-ness isn't the exact same thing as wealth, which I already agreed with. Either you're equivocating between two notions of "mattering" or I misunderstood you the first time. With the notion you're apparently using now, I think it's obvious that the masses/elites distinction "matters", and -- one man's modus ponens is another's modus tollens -- the fact that it's possible to identify "populists" and distinguish them pretty well from other politicians is good evidence for that. No, it's both. A position disliked by both the masses and the elites is not populist. (A politician who espouses too many of those won't last long, but I expect it's possible to survive with one or two, and having them doesn't constitute populism.) Not all of it. You yourself drew attention to Jeremy Corbyn not so long ago, and in the US there's Bernie Sanders. And by and large all that's happening is the same thing that always happens whoever's in power: their opponents attack them with whatever stick is ready to hand. If it happens that the people in power are on the right, their opponents will mostly be on the left; if it happens that the pe
The concept of populism certainly exists in the map. People talk about it, they point fingers at things and say "this is {right|left}-wing populism". My issue is whether this concept found a good joint in the territory to carve along. For example, consider how you can treat a shadow as a thing in itself, or you can treat it merely as absence of light. Well, yes, you do. I suspect the underwater basket-weaving student doesn't. Especially if his basket-weaving is called something like "How Picking Your Nose Is A Transformative Genderqueer Activity That Subverts Patriarchy-Imposed Rules". But that is exactly what I contest. See the map/territory distinction above. Because liberté, égalité, fraternité is a left-wing, not a right-wing motto.
Ah. That would have been clearer to me if you'd found a different way of expressing your concern than by saying 'I've heard a lot about that "rise of right-wing populism", but remain unconvinced that it actually exists.' I mean, if you want to say -- as you should! -- that shadows are better understood as regions where there's less light because of occlusion rather than as separate objects in their own right, I don't think "Shadows don't exist" is the best way to say it. (But it probably works better than doing the same for "right-wing populism" because it's more obvious that shadows at-least-kinda-exist, and therefore your audience is more likely to think "hmm, he probably means something cleverer than what he seems to".) Quite possibly. Again, I'm not claiming (and so far as I can tell no one is claiming) that there is a single metric for "eliteness" that everyone completely agrees on, any more than there is for right-wing-ness or intelligence or good musical taste -- so I'm not sure how what you say is meant to be a counterargument to anything I've been saying. (It is meant to be a counterargument, right?) Thought experiment: We pick 100 people from (let's say) the population of adult Americans and anglophone[1] Europeans with an IQ of at least 100 as measured by some arbitrarily chosen standard test. We give them all a brief description of populism, but attach some other name to it in the hope of avoiding pre-existing associations. The description does not name specific politicians or movements. Then we show each of these people a list of the 10 most prominent politicians in several Western countries, and give them a summary of their policy positions and a sample of their speeches (with translations as appropriate). We ask each person to say for each politician how well they fit the profile of Sneetchism or whatever we say instead of "populism". [1] Anglophone so that we can give them all the same description. We could equally well pick just francophones or s
I don't know. First, the setup isn't specific enough (e.g. an awful lot depends on the formulation of your "brief description of populism"), and second, even granting the setup, I don't know. I think the outcome of this poll can go this way or that way or sideways or make a nice pirouette or something else. I am surprised you feel so certain about the outcome.

I think it does. I think the history of the two movements makes it very clear. And notice how the left has a habit of accusing the right of being in the pocket of the rich and the powerful (in other words, elites), but the right does not accuse the left of this.
I'd have thought the rest of the thread would give a pretty good idea of the sort of thing I would put in a brief description of populism. OK, so that in fact is a thing we disagree about: I would be extremely surprised to see much disagreement, and you wouldn't. I wonder why. I'd probably agree with you about the left and right of, say, 20 years ago. But political movements change. What, never? Well, hardly ever.
So you want me to guess how you would set up this hypothetical, and then guess again at what the outcome would be? Is the idea that we can argue afterwards about what the outcome of that imaginary situation could be, using words like "realistic"? X-D

Individual politicians, sure. That's just the accusation of being a traitor to the "masses" or, alternatively, of being a LINO (Left In Name Only, see RINO and DINO). Plus, of course, you just throw all the mud you have and see what sticks :-/ But I don't recall many accusations of Democrats (or Labour) as a political movement being just a front for the elites, other than from the certifiably extreme left.
Not if you think the answer is highly sensitive to the details of how you guess. I don't think it is, but evidently you do. (Which is, as I said two comments upthread, an answer to my question of which of my guesses about the outcome of this hypothetical experiment you disagreed with.) Ah, I see. Yup, I'll agree that that accusation is sometimes made by lefties about the Right as a whole (or at least big chunks of it) and very rarely if at all by righties about the Left as a whole (or big chunks of it). I'm not sure this makes for an actual compelling argument -- the context, recall, was whether it's more unreasonable for lefties than for righties to complain that their political opponents are pandering to the masses instead of listening to the wisdom of the elites. Remember (not that I expect you need reminding) that "the elites" and "the rich" are not the same thing. The people some lefties are accusing some righties of not listening to are not really the same people as those some lefties are accusing some righties of being in the pockets of. (Note, by the way, that if someone on the right accuses someone on the left -- or indeed anyone at all -- of being 'a traitor to the "masses"', then unless they're just saying "of course I have no problem with that, but you guys should" they are in fact claiming to speak for those "masses".)
By the way, you said that the left and the right changed over the past 20 years so that there is no "by default" association of the left with the masses any more. Why do you think so? It sounds like an unusual position to me.
I think the political right tries harder than it used to to appeal to those masses. The left says to the masses "Unlike those bastards on the right, we are going to look out for your economic interests". The right says to the masses "Unlike those perverts on the left, we are going to respect your values". (Of course I am caricaturing in both cases, and of course both sides say a little of both those things, and of course the Left/Right dichotomy is a simplification, yadda yadda yadda.) So, if Team Blue claims to act in the interests of the masses and Team Red claims to share the values of the masses, which one is acting more hilariously/hypocritically if it criticizes its opponents for ignoring the opinions of the elites?
Well, people like Thatcher or Reagan were popular -- notably, with the masses -- and they predate the shift that you are talking about. In the US context that would imply that during the Clinton years the Republicans decided they needed to "appeal to those masses", and the result was the success of Bush Jr. That doesn't look terribly persuasive to me -- Bush wasn't that appealing to the lower classes. Rightist rants in favour of family values and against the degeneracy of the left were also pretty standard fare for more than a couple of decades. In the UK context this means that after Tony Blair came to power the Tories decided they needed more mass appeal, and again, I don't see much evidence for this suggestion. Just like Bush, Cameron was a fairly standard conservative leader.
I do prefer Bernie Sanders over Hillary Clinton, and at the same time I would label Bernie a populist while I wouldn't label Clinton one. The opposite of being populist is being elitist.
The things I previously mentioned, such as "Or that there is >50% probability that Brexit will literally lead to a neo nazi state in the UK within 10 years?", are mostly positions expressed by friends. The group this person joined was advocating violent communist revolution and the murder of enemies of the people (as in, it was an explicitly communist group, not an anti-Trump group that had been hijacked by communists), and so cannot be seen as a reaction to Trump or Brexit. But, in the more general case, there are a lot of people, a lot of centrists, who are opposed to Trump/Brexit. So people do not need to join forces with extremists to fight them.

I agree with that, but I think that there is a difference in behaviour due to the fact that the left has been winning in all areas, with the possible exception of economics, for the last 50 years or more, but suddenly there have been some unexpected rightist victories. Firstly, this means that the left expects to be pushing back the right, and there is a general assumption that, for instance, rightists must disavow and sever all ties with white nationalists but the left can freely associate with extremists. Secondly, given that the right has suddenly managed to win some victories, might the previous constant leftward march of history change, at least in some areas? In the same way that feminism and gay rights have made constant progress for the last 50 years, might nationalism make constant progress for the next 50 years? I don't know how much of the left is considering that as a possibility, but I can understand that they might be terrified and lashing out while they still have the ability to.

So yes, the right are not more sensible or nicer in general; it's just that right now the left have a greater ability to justify violence. If that changes, then we might live in interesting times.
Does it make a difference if instead of talking about "left" and "right" we focus on specific agendas? For example, if "left" includes both "gay rights" and "killing the kulaks", then it may sound scary for a left-leaning person to hear "we had 50 years of left progress, but now we will have 50 years of right progress", but less scary if you translate it to e.g. "we have 50 years of gay rights, but kulaks are not going to be killed during at least the next 50 years". Yeah, this is too optimistic; I am just saying that perhaps focusing on the details may change the perspective.

Maybe the historically most important outcome of the "50 years of right progress" will be e.g. banning child genital mutilation, honor killings, and similar practices, which the current left is not going to touch with a ten-foot pole (because that would involve criticizing the cultural habits of other cultures, which is a taboo for the left, but the right would enjoy doing this). I guess my point is that imagining the "right" only clicking the Undo button during the following 50 years is unnecessarily narrowing their scope of possible action. (Just like the "left" also had other things to do, besides killing the kulaks.)
I think people cluster into left and right because those are the tribes. However, this can be over-simplistic, and I agree that there are many potential directions left and right progress can take -- indeed, if a few more Islamic terrorists shoot up gay bars, there could be a lot of LGBT people defecting to right-nationalism.
Some people join the tribes because they are connected with the causes they support, but I think most people are there simply because of the other people who are there. When all your friends are X, there is a strong pressure on you to become X, too. And when people who enjoy hurting you are X, you are likely to become Y, if Y seems like the only force able to oppose X. It's like having a monkey tribe split into two subgroups; of course it makes sense to join the subgroup with your friends rather than the subgroup with your enemies. And the next step is making up a story about why all the good people are on your team, and all the bad people are on the other team -- this signals that you have no significant conflicts in your team, and no significant friends in the other team, so you are a loyal member.

But then words also have consequences, so if your team's banner says e.g. that you should burn the witches, then sooner or later some witches are likely to get burned. Even if most people in the team are actually not happy about burning the witches, and joined merely because their friends are there. Sometimes people agree that those words about "burning witches" were meant metaphorically, not literally; but there is a certain fragility about that, because someone is likely to decide that literally burning a witch will send an even stronger signal of their loyalty to the tribe.

It makes me sad that the popular political positions seem to be either nationalism or cultural relativism. Is there these days even a significant pro-"Western civilization" side? I mean a side that would say that as long as you follow the rules of civilized life, your language and color of skin don't matter, but if you so much as publicly talk positively about genital mutilation or "honor" killing, no one is going to give a fuck about your cultural or religious sensitivity, you are going to be called evil.
Well, if this person is joining an explicitly and specifically violent communist group, then I guess that indicates that this particular person is sympathetic with violent communism. That's too bad, but it's also pretty unusual and I'd classify it as "this person is broken" rather than "politics is broken" unless what you're seeing is lots of otherwise sensible people joining explicitly violent explicitly communist groups. In that case, either we've got a general resurgence of violent communism (which would be alarming) or there's something unusual about your friends (which would be interesting but not necessarily alarming). I think you're right that the last several decades have been pretty good for progressive social causes, and that this seems like it might be changing, and that this might lead to more violence from leftists. My guess is that serious politically-motivated violence will remain rare enough that you don't actually need to worry about it unless for some reason you're a specific target, and ineffectual enough that you don't need to worry that it will have much impact beyond the violence itself. What's there been historically? Occasional riots (usually left) and demonstrations-turned-violent (usually left, though arguably when there's been violence it's been as much due to provocation from the police as to actual violent intent by the protestors). Occasional acts of terrorism (usually right, but occasionally kinda-left as with Kaczynski). All these things are really rare, which is why they make the news, which is why it's easy to get worried about them :-). And they very rarely have any actual influence on what anyone else does. The single most worrying political-violence-related outcome (to me) is that someone commits some act of violence and the administration uses that as a pretext for major gutting of civil liberties or something of the kind. The historical precedent I'd rather not be using explicitly is of course the Reichstag fire. [EDITED to
That's the hope, right? We are living in a civilized society, etc. etc. There is not going to be a repeat of The Troubles, is there? No empire will collapse with a big bang, no mobs will torch the neighbourhoods, no lists of undesirables will be circulated...

Historically? During the 20th century? If you were on the wrong side, you stood a good chance of being killed. Sent to a gulag or a concentration camp, maybe.
It is something I hope, and also something I guess is probably true. Of course I could turn out to be wrong. Oh, for sure. But that's an entirely different failure mode from the ones skeptical_lurker appears to be concerned with.
I think communist beliefs, violent or not, are on the rise largely due to angry young people being too young to remember the Cold War. Some friends and acquaintances from multiple disconnected friendship groups are communists, and too many of these advocate violence, although I think that they are still a tiny minority overall. I think the situation is, as you put it, "this person is broken".

I'm not at all worried about actually being the victim of politically-motivated physical violence or of riots/revolutions etc. in the near future. What worries me is general political polarisation leading to a situation where blue and red tribes hate each other and cannot interact, where politics is reduced to seeing who can shout 'racist' or 'cuck' loudest. My political beliefs have become increasingly right-wing, in a classically liberal sense as opposed to fascist, and it alienates me when friends advocate burning someone's house down because they hold beliefs which are actually similar to, perhaps even left of, mine. I'm not worried about them actually burning my house down; it's just alienating on principle, and for fear of social exclusion.

WRT historical periods of political instability, I agree that such periods are infrequent, and given that we have seen the results of both Nazism and communism, I think it unlikely that those ideologies will gain power. But OTOH we are going to see certain events that are totally unprecedented in history, largely because of technology. We are already seeing levels of migration that I think exceed anything in the past (due to better transport), which is leading to a rise in nationalism, and soon it is possible that we will see far more disruptive technologies such as human genetic engineering, large numbers of jobs being automated away, mass automated surveillance, and finally FAI. If safely navigating the problems these technologies pose requires a partially political solution, then we need sane politics. And yet political discourse has
We never had and yet we all are here.
Dat anthropic bias tho!
Good point.
The impression I have -- though of course I don't know what your friends have been saying -- is that the burn-their-houses-down brigade are much more upset about the kinda-fascist sort of right than the kinda-libertarian sort of right. Of course even if I'm right about that that doesn't necessarily reduce the sense of alienation; your aliefs needn't match your beliefs. Agree about first half; not fully convinced about second half. As you pointed out yourself, it's not that long ago that we had actual Nazis and Stalinists in power in Europe, and bad though early-21st-century politics is it doesn't seem like it's got there just yet. People have said horrible things about Donald Trump and Hillary Clinton, but ten years ago they were saying similarly horrible things about George W Bush and, er, Hillary Clinton. But yeah, I can't see our existing political institutions coping very well with immortality or super-effective genetic engineering or superintelligent AI, should those happen to come along.
Except that I don't think libertarianism is incompatible with border controls -- indeed, libertarians are generally enthusiastic about property rights, and controlling immigration is no different to locking your front door and vetting potential housemates. I'm not saying that the border controls should be based around skin colour, but the definition of 'Nazi' seems to have expanded to anyone who believes in any form of border control.

I certainly agree that globally it's not as bad as 1930-1990. Nevertheless, things seem to have got dramatically worse in the last decade -- in my personal experience it used to be that people could agree to disagree; now most political opinions seem to be in lockstep, almost like a cult. More generally, I remember people criticising Bush, but now there are very intelligent people, even the head of CFAR, saying that Trump could be the end of democracy. Either they are correct, in which case that is obviously a cause for concern, or they are wrong and a lot of very smart people, including rationalists, are utterly mindkilled.
For what it's worth, I haven't seen the word used that way. But -- the standard disclaimer -- my left-leaning Facebook friends are not your left-leaning Facebook friends, unless there's some purely coincidental overlap, and yours may be more Nazi-accusation-happy than mine. Or both, of course :-).

More seriously, I think your observations are adequately explained by the hypothesis that (1) Trump and his administration are much more unusual than Bush and his administration, (2) they are in fact distinctly more likely than Bush was (though still not very likely) to do serious damage to the US's democratic institutions, and (3) a lot of very smart people are somewhat mindkilled. I think #1 is obviously true, #2 is probably true, and #3 would be entirely unsurprising (much less surprising than all those people being utterly mindkilled).

Incidentally, I do remember some not-otherwise-obviously-crazy people speculating that Bush would simply refuse to leave office after 8 years and that somehow the Republican-controlled Congress would help make it so. So end-of-democracy hysteria isn't so very new.
You mean like FDR actually did? Except that he wasn't a Republican.
Er, FDR was elected for a third term before there were term limits for US presidents. The (stupid and not widespread) speculation was that GWB would cling to power by some means less legitimate than that.
Putin's a mindkiller.
Trump is the end of democracy-as-we-know-it, and both sides of the political spectrum agree that this is the case, albeit for very different reasons. But the United States were never founded as a democracy in the first place; they're supposed to be a federated republic, with plenty of checks-and-balances as an integral part of the overall arrangement. If our Constitution is worth more than the paper it's printed on, we'll find ourselves right back in what used to be the status quo.
You should find better friends. Terrified. What exactly are they terrified of? That their favourite political positions are not going to be held by people in power? In the West that's hardly grounds for terror.
Does it help to disaggregate "political violence", political "murder", and "killing all the kulaks"? I'm happy with some instances of political violence, and even some political murders are defensible. The assassination of Jonas Savimbi pretty much ended Angola's 26-year civil war, for example. To quote Madeleine Albright: worth it. If the people you know are thumbs-upping literally "kill all the kulaks" (and maybe they are! I'm sure I've seen that kind of stuff in YouTube comments and Stalinist tweets, so it is out there), I can understand your reaction. But if people are merely affirming that some political violence is worthy of support...well, I'd have to say that I agree!
Well, in one comment a friend was advocating violence against perhaps the most right wing 10-15% of the population.
! That clarifies things somewhat.
You see, it's one thing to advocate violence against a literal Neo-Nazi, but advocating violence against anyone who advocates reducing immigration, well, that shows a lot more liberal tribe loyalty. So much holier than thou. Additionally, this comment was made IRL, possibly within earshot of a person they were advocating violence against.
Why do you mourn when you can contemplate politics no more? What makes you think about it so much in the first place? That just seems like something you wouldn't want to ignore.

No, but I'm not under the illusion that I can currently make any significant contribution to changing politics -- it's certainly not my area of comparative advantage, but I could at least leave the country if things did start to get that bad. There would be fairly obvious warning signs that would not require a close watch on current events.


Were there ever really

"experts" who were supposed to screen newcomers for those who wouldn't follow the rules of civilized life


why that particular criterion?

I don't see a particular criterion in Viliam's comment; I see a couple of examples of things that we might want not to tolerate.

aren't you throwing out another western value, free speech, with it?

Doesn't look like it to me. Viliam says: if you talk positively about X and Y, you will be called evil. That's not at all the same as saying you're forbidden to talk positively about X a... (read more)

Yes, there should be a gap between "what is legal" and "what is socially approved", and promoting uncivilized ways of life should be in that gap. Enough free speech to allow it, enough common sense to disapprove of it socially. Some people will always enjoy walking exactly on the line; making the legal line the same as the socially approved line makes things worse. If they are separated, then if someone walks exactly on the legal line, flip a coin, and either put them in jail or not, but no one is going to complain about the jail if that happens to be the outcome. And if someone walks exactly on the decency line, flip a coin, and either stop inviting them to dinner or don't, but either way the law is not involved. It's just that when the two lines happen to be the same, you have to flip two coins at the same time, and sometimes put someone in jail for behavior that is perceived as okay.
The problem is that "socially approved" is a function of the society and there are a lot of those. "Socially approved" in Black Rock City means something very different from "socially approved" in Salt Lake City.
I guess the mass media used to synchronize the society at least approximately (what behavior is portrayed as "socially approved" in the soap operas), but that mechanism may be dead these days.
Interesting. This implies that there is/was a historically short period of time when mass media was able to sync up most everyone. Before that time societies were stratified (e.g. by classes, see feudalism, each with quite different "socially approved" standards) and after that time societies are re-fragmenting into small pieces/bubbles.
The ultimate Schelling point of human culture -- Hollywood.
A temporary Schelling point. The ultimates, being biologically hardwired, tend to stay the same: food, warmth, sex, company.

Imagine that a completely trustworthy person who knows all your beliefs has acquired information that will "radically alter your worldview." No further details of the information are given. How much would you pay for it? [pollid:1198]

Unpack "trustworthy" - does this mean the person isn't going to tell falsehoods, but may not actually understand how truth works? Or is this more like Omega - has special access to data?
The person doesn't tell lies and you trust his/her intelligence and access to information.
But otherwise, the person has non-exceptional access to and discernment of truth? So it's likely that anything truly unusual he believes is wrong. I don't think Bayes will let me update all that far from "whatever he says is filtered through an engine not optimized for truth." Anything that he thinks will "radically alter my worldview" is likely an illusion or something I already have some evidence for. This changes in cases where I think the person DOES have better-than-average access to truth. Also, the fact that he's offering to sell me information that will change my worldview makes me much less likely to believe what he says.
You are fighting the hypothetical. A person has true information that will "radically alter your worldview". Assume you believe him/her. How much would you pay for the information?
Seems more like trying to clarify the hypothetical. There's a genuine dependency here.
Yeah, I tend to do that. However, this is the first that you've asserted that it's true information, which is an important clarification. I'm willing to pay a significant amount for true information that will let me make a large update (which is how I interpret "radically alter worldview").
I read it as a person who generally has a good track record and who has built a reputation for being right when making these kinds of claims. Maybe someone who has already done this intervention a few times, who uses the principles of http://lesswrong.com/r/discussion/lw/oe0/predictionbased_medicine_pbm/ and can tell you that with 90% credence you will afterwards say that he radically changed your mind.
If all the parts of this hold true, then the person knows me well enough to know how important it would be to me and to the world to change my worldview. If they're not already telling me without payment, I can conclude that it wouldn't have much practical impact and be something like "The Earth is a Simulation but we don't know anything about how it works beyond physics or who made it, but the proof is convincing." Given that, I would probably pay a small amount out of curiosity, but not more.
Sharing the information might have a cost for the other person that leads to it not being shared without payment. There's also the element that you take information a lot more seriously when you have paid money for it.
If I know that what they are saying is true, I will already radically alter my worldview, by dividing up my probability estimate among the alternate possibilities that I think are most likely to be true.

Perhaps we're at cross purposes. I didn't call it "hypothetical violence" because I think no one on the left has ever been violent for political reasons, any more than I talked about a "hypothetical person" because I don't think persons are real or talked about "hypothetical leftist content" because I don't think there's any left-wing stuff on Facebook. I called it "hypothetical violence" because this is a purely hypothetical scenario and therefore questions about what will or would happen don't have definite answers. ("Hypothetical" does not mean "nothing matching this description has ever happened".)

Not necessarily. Sunnis might believe that siding with other Sunnis is a good idea because they expect to get treated better than the Shia. Signaling tribal loyalty might be more central than anything substantive about Islam.

To be fair to the people arguing against this, I suspect they're using a somewhat non-standard definition of "motivated by".

You can tell a story about how the old generals of the Iraqi army were out of work, wanted to regain political power, and used the banner of Islam as a tool: not because they are honest believers but because it was the best move to gain political power.

That story uses the standard definition of "motivated by" but I don't think it's a full representation of what happened.


This is a useful general prescription against irrationality: if a belief is supported by reason and evidence then you should be able to say what evidence would make you revise it. But it's worth noting that for some reasonable beliefs it is really hard to imagine remotely plausible evidence that would change your mind about them. What would Donald Trump have to do that would make you think he's a progressive internationalist who favours open borders and free trade? What would ISIS have to do to convince you that they are primarily an organization dedi... (read more)

I agree that would be the sensible response, but I'm curious for ways to engage with people who see the world radically differently. An ability to build particularly long bridges of consensus across particularly wide chasms of preconceptions could do the world a lot of good, if it is a learnable and teachable skill.
Donald Trump could increase the number of green cards that the US hands out to skilled workers. Nixon went to China, and if Trump acts in a way that actually furthers immigration and reduces total tariff burdens, I'm open to accepting him as a progressive internationalist.
Personally I'd want more evidence than that. I think Trump-as-progressive-internationalist is more or less on the borderline of things that it's not crazy to imagine finding sufficient evidence to believe. (ISIS-as-peaceful-organization is well beyond it -- I guess the most plausible way to get such evidence would be for it to turn out that we in the west have been systematically deceived about ISIS to such an extent that our ideas about it are almost entirely wrong, but I think I'd describe that situation as "turns out ISIS, as we believed in it, was a made-up organization, and there happens to be another entirely different one that shares its name" rather than "ISIS turns out to be peaceful".)

will he be willing to criticize that stuff when it actually gets violent? Remember, it can be dangerous [...]

Dunno. The obvious guess would be "not willing to do it in public with the violent people watching, willing to do it when safe from reprisals", which coincidentally is more or less exactly what I would guess if he weren't a member of our hypothetical unkind-turning-violent Facebook group.

(Given that we're talking about a hypothetical person joining a hypothetical Facebook with hypothetical leftist content, in the event that its hypothet... (read more)

“The only thing necessary for the triumph of evil is for good men to do nothing.”
Yeah, that's not actually true. It also requires evil men (or women, but for whatever reason it usually seems to be mostly men) to do something, and personally I am more inclined to blame them for it. Anyway, I'm not sure what your point is. That our hypothetical reasonable leftie who puts up with extremist talk and maybe, later, violence from not-so-reasonable lefties hasn't acted optimally for the general good? Sure, I agree. What of it?
We are not talking about blame. We are talking about cause-effect relationships. My comment was more a quip and less a point, but I'm curious about LW's thoughts about the degree to which passivity absolves you from responsibility :-/
I don't think passivity as such has much to do with it. I think responsibility is diluted when the thing you did-or-didn't-do had its effect only by many thousands of other people likewise doing-or-not-doing it. If A threatens to assassinate the President and B1...B1000 all fail to report him to the FBI despite seeing the threat, and then he does it -- well, then B1 through B1000 all bear some responsibility, but I suggest at most about 0.1% as much as A does. I'm inclined to think rather less than 0.1% as much. (But of course how much responsibility should be assigned to any given person for any given thing is a complicated question in complicated cases, and surely there's no One True Right Answer.)
I'm not thinking of things like reporting a possible assassin to the FBI, I'm thinking of things like living in a country with a, let's say, morally reprehensible leadership. Say, Stalin's Russia or Hitler's Germany. And you're a regular person, you just go to your job every day, you don't shoot anyone or personally interrogate enemies of the state. Of course, you do go to the party meetings, but then everyone does. To what degree are you complicit in the doings of your state? I do not imply that there is One True Right Answer.
Sirrah, you astonish me.
Sometimes I astonish myself.
Technically, a certain fraction of the population is born psychopathic. Sure, we can blame them, but we shouldn't act surprised by their existence. In some sense it is probably good to think about them similarly to how we think about natural disasters (if natural disasters were endowed with human-level intelligence, e.g. a lightning bolt would first explore the environment, and then hit exactly the least protected place to cause maximal damage). Apologies for being off-topic, but it seems to me that this is a frequently underestimated thing.

It's very funny that I got spam from this site soliciting me to invest money in a church, where the prerequisite is "you must have the fear of God". Please ban the user kings11me.

Already done, thanks to those who reported it.

That's simple, engage in pathological lying.

How do you distinguish lying that's pathological from lying that isn't?

A math problem


This one is a real one, but somewhat transformed and potentially solvable.

I assume you mean differences between masses, not squared masses,

No, I don't.

as a little dimensional analysis suggests.

It's the difference between squared masses divided by the energy.

Think about it this way, take a theory where the neutrino's mass is ε for arbitrary small ε and take the limit as ε approaches 0.

Then all other things being equal the length the neutrino needs to travel in order to oscillate to a different flavor approaches infinity.

(More accurately, oscillation lengths are inversely proportional to the differences between squared masses of neutrino mass eigenstates. So you can't set a lower bound on the mass of the lightest eigenstate, but you can set a lower bound on the masses of the two other eigenstates. (Each of the three neutrino flavors is a different superposition of the three neutrino mass eigenstates.))
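In the standard two-flavor approximation (a textbook formula, added here for reference), the dependence being described is:

```latex
P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 \, L}{4E}\right),
\qquad
L_{\mathrm{osc}} = \frac{4\pi E}{\Delta m^2}
```

(in natural units, with $L$ the distance traveled and $E$ the neutrino energy), so as $\Delta m^2 \to 0$ the oscillation length diverges, which is exactly the limit described above.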

Better answer: they would need to demonstrate experiencing subjective time, such as by flavor-oscillating.

Which they do.

Which is why we think they have mass.

There is a lot of wishing with what I wish for the world, so then I understand that the best option for me is to figure out the best course action over my lifetime, as that's what I have access to (indirectly via bandwidth to a keyboard-computer-internet-etc-you) but at the same time disconnecting from this belief. Because wishing isn't the best option, neither is the best course of action. Realizing that it's useful practically sometimes to attach to thinking, but not for the majority of the time. (p.s I made an excuse for my attachment to my thinking lol... (read more)

In case anyone's wondering what we lost by turning off downvotes: We lost the ability to downvote this sort of stuff into oblivion. (Still a net-positive tradeoff, I think, but certainly far from cost-free.)
You seem to imply that my comment is a cost but not to which extent. I acknowledge that I am not a writer which is able to facilitate this to you in the LW-lingo and better English. But, it also matters to a cost to what and benefit to who? I'm not writing with my brain wired from the perspective of the community of lesswrong. But, frankly, I have seen it very strongly in its users like you. It might seem like I am confronting you but then I offer you the opportunity to see it in another way. The way which is bigger than all of us and epistemic/instrumental rationality combined. I'm not sure what's the problem anyway if you can say what is. I wish you would argue against me so I can better explain my point. :) Peace and stillness my man. I appreciate y'all.
Your first sentence, for example, has a lot of parts, and uses terms in unusual ways, and there are multiple possible interpretations of several parts. The end effect is that we don't know what you're saying. I suspect that what you're saying could make sense if presented more clearly, and it would not seem deep or mysterious. This would be a feature, not a bug.
My wishing for the world is intellectual masturbation, so my practical actions in this consensus reality matter the most (instrumental rationality). But if thinking stops (epistemic rationality by persistent non-symbolic experiences) I do not care in a sense, I go insane in relation to the consensus reality but sane to the non-symbolic way of being. So the way to solve this is to have a good system to remember me of my chores, goals, and choices which we would call rationality in the consensus reality. Otherwise, I might simply no longer be efficient from what I learn of the consensus reality. My memory might even be impaired. Some think that the way for us to return to these states is by AGI and simply overcoming the limits of the human brain, but humans have done it for thousands of years, possibly with more ease. See this article, Ben Goertzel is doing the interview: http://hplusmagazine.com/2012/08/08/engineering-enlightenment-part-one/ So what I think that I want is a persistent non-symbolic state, symbols make no sense, it's a bit Orwellian. But empirical feeling, indiscriminate love and so on makes a lot of sense. Of course, everything will function as it used to be ('I'-thought have never existed in the first place), but it will still be different. But from the place I am, I need (and I think humanity) need some system in which the computer keeps a track of what my goals and so on were before the persistent non-symbolic state. This beautifully falls into a nice merging with machines, I think, let that which is unconscious, and always will be (machines), be our thinking, for we are non-symbolic I think. :)
You say "intellectual masturbation" like it's a bad thing. :)

I think this question is misplaced. We should be asking for concrete evidence of the event. For example, if we smash neutrinos into our sensors, they register as having mass by interacting with other mass-holding particles.

Until the evidence is stronger, I might suggest "that allegedly makes it easier" or "that hopefully makes it easier" or something of the kind.
I just reposted the HN title, it's not my conclusion. I think HN discovered it's mostly a profanity filter, anyway :-/

When I listened to his AMA, I noticed this line as well. It's a really clever "tool for thinking" that deserves to be noticed.

There's an interview with Dawkins somewhere where he mentions an anecdote about Wittgenstein. Wittgenstein is supposed to have said "Why did people ever believe that the sun revolves around the earth?", and his interlocutor supposedly answered: "Well, obviously it's because it looks like the sun is revolving around the earth." Then Wittgenstein whips out the counterfactual: "Well, what would it have looked like if it looked like the earth revolves around the sun?".

And the answer is obviously: exactly the same, lol!

So what was the wrong idea "geocentrism" about, then? Some tribal lore tells us that it had to do with the centrality of humanity in God's plan; or the qualitative difference between earthly and celestial things: the sun, moon, and stars belong to the heavens; the earth is below them; and hell is under the earth. But maybe it's more to do with a wrong idea of "revolving" instead. The ancients had no concept of freefall. When they imagined an object revolving around another, they may have imagined a sling-stone being swung in a sling. "If the earth were swinging around the sun, surely we would fall off!" The earth has discernible features such as oceans, trees, and people which might "fall off" under motion, but the sun doesn't, being a seemingly featureless body of light: so the evidence of ordinary terrestrial experience favors the stability of the earth and the motion of the sun. Even after heliocentric cosmology, it took more than a century to come up with the unification of celestial and terrestrial gravity: that the same rules govern the motion of the planets and moons that also govern cannonballs.

According to chronicles1, the Volkhov river in the town of Novgorod sometimes flowed back. It happened in 1063 (5 days), 1415 (not stated for how long, but 'Volkhov and many other rivers' are said to have done that), 1461 (3 days), 1468 ('The whole summer the river Volkhov upwards flowed for four days', not sure how to read this at all), and 1525 (9 days, 'not by wind, neither by storm, but by the order of its creator the God').

1compiled in Е. П. Борисенков, В. М. Пасецкий. Тысячелетняя летопись необычайных явлений природы. 1988. (A thousand-year-long chronicle of astonishing natural phenomena).

Unfortunately life is too short to criticise everyone who is wrong on the internet.

[Excuse me. Not a native English speaker]

First of all, LessWrong is the site where I always place my last hopes.

I'm having a lot of trouble trying to find the truth about IQ tests taken by different ethnic groups. It seems there are lots of studies claiming differences in IQ. On the other side, there are a lot of people saying the contrary (culturally biased tests, ...).

What do we do as rationalists? I'm really confused. I have read tons of articles from both sides yet nothing is clear to me.

Speaking for myself, my position is "I don't know".

Ignoring the specific question, there are many situations in my life where (a) I am curious about something, (b) I don't trust the existing research, and (c) it is not high enough priority for me to try doing the research myself. In such case, thinking "I don't know" seems like a reasonable reaction. What else should I think?

In absence of solid research, people often return to armchair reasoning, inventing clever arguments why in absence of evidence we should stick with "default" opinion X, and put the whole burden of proof on people who say Y. Problem is, in the next room, people use similar armchair reasoning to argue that we should stick with the "default" opinion Y, and put the whole burden of proof on people who say X. I could easily provide "a priori" arguments for either position here, which is why I consider neither of them convincing.

Here are your options:

  • decide that you feel better about believing that the different ethnic groups have the same average IQs;
  • decide that you feel better about believing that the different ethnic groups have different average IQs;
  • do the
... (read more)
One more thing to consider is that IQ is caused partially genetically, and partially non-genetically, e.g. diseases or lack of nutrition decrease IQ. So if you e.g. examine people from a sick and starving population, of course they are likely to have below-average IQ. But that doesn't say anything about what IQ their descendants will have if the food and health problem gets fixed.

Intelligence is a polygenic trait, i.e. a trait influenced by multiple genes. There is an observed regression to the mean, that is although smart parents are likely to have smart children, and dumb parents are likely to have dumb children, the children of either are usually closer to the average than their parents. In other words, if you would inhabit an island exclusively by Mensa members, the children born on this island would probably almost all have above-average intelligence; but many of them would not reach the Mensa level.

Or the opposite experiment... well, this one was actually done in real life... 40-50 years ago when communists ruled Cambodia, they killed almost all literate people in the country in the attempt to create an agrarian utopia (spoiler: didn't work as advertised), but Cambodia didn't literally become a nation of retards.

It is difficult to find exactly which genes contribute to IQ. There are more than 50 suspects, but the experiments suffer from low sample sizes, so many of them are probably false positives.

It seems that first-born children have higher IQ than their siblings. (The official story seems to be that it's because they get more parental attention and resources. To me it seems more likely that children born later simply receive higher mutational load from older parents. But maybe it's both.)

It was suspected that breastfeeding increases IQ. Then it turned out this correlation was caused indirectly by mother's IQ; i.e. smart mothers are more likely to breastfeed their children, and smart mothers are likely to have smart children, which creates a corre
"What do we do" depends largely on the action that you are thinking about. What kind of decision do you want to make that's effected by the knowledge?
Sorry, I didn't express it correctly. What I'm asking is "what should I believe?"
Christian's question is spot on. What he doesn't say is the reason he's asking. What you're describing isn't a belief, it's a somewhat vague cluster of beliefs. Different beliefs in the cluster can have different credence levels, and treating them as a unit means it's unanswerable how accurate you are. Decompose your question into specific falsifiable statements. You should believe whatever lets you most accurately predict the future conditional on your choices. So: what choices are you facing where beliefs on this topic pay rent? Or, if you prefer, what predictions are you testing with the belief?
Why should you believe any specific conclusion on this matter rather than remain in doubt?
IQ tests tell you something, usually something that clusters with intelligence. But there are many ways for that to go wrong.

Game Theory Question:

So I recently bumped into this paper on a more optimal algorithm for winning IPDs (beating out Tit for Tat). I'm not parsing the paper well, though. It appears that, given some constraints on the algorithms playing the game between players X and Y, X can unilaterally determine Y's score?

Apparently having a "theory of mind" somehow increases your ability to "extort" (i.e. unilaterally dictate) opponents?

Um, so I'm not an expert in this field, but I'm wondering if this has any bearing on decision theory? My cur... (read more)

I recall a paper written by a student of Scott Aaronson about an IPD tournament (mentioned in the article about Eigenmorality). Indeed, the winners were agents that kept a model of the opponent and responded in kind: Tit-for-Tat was far from the optimal algorithm. On the other side, an IPD is what you have in a society where different agents are trying to cooperate / compete for resources. Clearly, super-rational agents (i.e. agents that have access to each other's source code and are reflexively coherent) will act according to the same information, so no exploitation is possible, but this is an extreme case, better suited to treating problems of artificial coordination than to describing a real situation. Indeed, some psychologists (e.g. Haidt) think that language and higher cognition evolved to serve the needs of a "theory of mind" (modeling and influencing other agents).
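To make the "unilaterally dictate" claim concrete, here is a minimal sketch (not code from the paper itself) of an extortionate memory-one strategy in the iterated prisoner's dilemma. It assumes the standard payoffs (T, R, P, S) = (5, 3, 1, 0) and uses the example extortion vector with factor χ = 3 that Press and Dyson give; the long-run payoffs come from iterating the Markov chain over the four outcome states.

```python
# States from player X's perspective: CC, CD, DC, DD (X's move listed first).
PAYOFF_X = [3, 0, 5, 1]  # R, S, T, P
PAYOFF_Y = [3, 5, 0, 1]

def stationary_payoffs(p, q, iters=20000):
    """Long-run average payoffs for memory-one strategies p (X) and q (Y).

    p[s] = Pr(X cooperates | previous outcome s), for s in (CC, CD, DC, DD).
    q is indexed from Y's own perspective, so CD and DC are swapped.
    """
    swap = [0, 2, 1, 3]  # map X's state labels to Y's
    dist = [0.25] * 4    # start from the uniform distribution
    for _ in range(iters):
        new = [0.0] * 4
        for s in range(4):
            px, py = p[s], q[swap[s]]
            new[0] += dist[s] * px * py              # -> CC
            new[1] += dist[s] * px * (1 - py)        # -> CD
            new[2] += dist[s] * (1 - px) * py        # -> DC
            new[3] += dist[s] * (1 - px) * (1 - py)  # -> DD
        dist = new
    sx = sum(d * v for d, v in zip(dist, PAYOFF_X))
    sy = sum(d * v for d, v in zip(dist, PAYOFF_Y))
    return sx, sy

# Press & Dyson's example extortionate strategy with chi = 3:
extort = [11/13, 1/2, 7/26, 0]
always_cooperate = [1, 1, 1, 1]

sx, sy = stationary_payoffs(extort, always_cooperate)
print(sx, sy)  # roughly 3.73 vs 1.91
```

Whatever the opponent does, the extortioner forces the linear relation (sx − P) = χ·(sy − P) on the average scores; here, against an unconditional cooperator, (3.727 − 1) is exactly three times (1.909 − 1). That is the sense in which one player can "unilaterally dictate" the relationship between the two scores.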

Second edit: Dagon is very kind and I feel ok; for posterity, my original comment was basically a link to the last paragraph of this comment, which talked about helping depressed EAs as some sort of silly hypothetical cause area.

Edit: since someone wants to emphasize how much they would "enjoy watching [my] evaluation contortions" of EA ideas, I elect to delete what I've written here.

I'm not crying.

eep! I deeply apologize that my remarks have caused you pain. I am skeptical of EA, and especially the more ... tenuous causal and ethical calculations that are sometimes used to justify non-obvious charities. But I deeply respect and appreciate everyone who is thinking and acting with the intent to make the world better rather than worse, and my disbelief in the granularity of calculation is tiny and unimportant compared to my belief that individuals who want to make a difference can do so. Also, I cry at the drop of a hat, so if you start I'm definitely joining you out of both shame and sympathy.
Ok, thank you, this helps a lot and I feel better after reading this, and if I do start crying in a minute it'll be because you're being very nice and not because I'm sad. So, um, thanks. :)
I'd enjoy watching the evaluation contortions that an EA would have to go through to decide that their best contribution is to help a specific not-very-effective (due to mental health problems or disability) contributor rather than more direct contributions. Uncertainty is multiplied, not just added, with each step in a causal chain. If you're trying to do math on consequentialism (let alone utilitarianism, which has further problems with valuation), you're pretty much doomed for anything more complicated than mosquito nets. Edit - leaving original for the historical record. OMG this came out so much meaner than I intended. Honestly, even small improvements in depression across many sufferers seems like it could easily multiply out to huge improvements in human welfare - it's a horrible thing and causes massive amounts of pain. I meant only to question the picking of individuals based on their EA intentions and helping them specifically rather than scalable options for all.
EDIT: Replied to wrong OP. I'm pretty unsure about statistics for this. Depression seems to be about six to ten percent of the population. So, are there strong arguments that disproportionately high amounts of promising EAs have depression / disabilities? I can steelman a sort of consequentialist argument for redirecting existing efforts to help disabled people towards the most promising, high-value people, but I'm more curious if anyone has info about mental health and the EA community.
I'm pretty unsure about statistics for this. Depression seems to be about six to ten percent of the population. So, are there strong arguments that disproportionately high amounts of promising EAs have depression / disabilities? I can steelman a sort of consequentialist argument for redirecting existing efforts to help disabled people towards the most promising, high-value people, but I'm more curious if anyone has info about mental health and the EA community.
Even if it's not disproportionately high among EAs, 8% of EAs might be enough. I think it's plausible that a psychologist who specializes in helping EA people does better at helping EA people than the average psychologist. If a psychologist already understands worries about AGI destroying humanity, it's easier for the patient to talk to them about it.
I'll try to be gentler about my concern, but I really do want to caution against EA interventions that are targeted at EA members. Helping someone is a pure good, but there's both a bias problem and an optics problem with helping people because they're similar to yourself. (and note: one of the reasons I don't consider myself to be part of EA is that I prefer to help people close or similar to myself disproportionately to the amount of net human impact. I'm not saying "don't do that", just "be careful not to claim that EA justifies it").
When it comes to publicly recommending causes, it's worthwhile to focus on projects with good optics like the GiveWell-recommended charities. At the same time, it's okay if individual people decide that they believe projects with worse optics are high-impact interventions. To the extent that there are fuzzies involved in helping fellow EA people, it's worth acknowledging the fact and being conscious that they are part of the reason for your donation, but generating fuzzies isn't a reason against donating either.
Thanks, that said it better than I did. I don't mean to discourage helping friends, family, neighbors, or other groups where you're a member. Or anyone else - all charity is good. I only wanted to point out that EA loses credibility if it suspiciously turns out that the detailed calculations and evaluation of options give clear support to your friends/co-believers.
I guess it needs to be made even more obvious that one can help their friends without having (or pretending to have) an exact calculation proving that this is the optimal thing to do.
Hm, okay, I hadn't thought about it like this. I agree that this might be a niche role. But I'm still unsure about the demand. There's about 12,000 people in the FB group. If that's conservatively about 10% of all EAs, we're still only looking at about 120,000 people, and then only about 9,600 potential patients, spread across the entire globe. Then again, I admit I really don't know how demand works for psychology (is ~10,000 potential patients enough?), and those are just ballpark numbers.
A psychologist who does weekly 1-hour sessions with their patients might have 40 patients at one time if they work 40 hours a week and just spend time with patients. I think it's likely that you want the person to do more than just 1-on-1 work and also write a few blog posts about what they learn, so 30 patients at a time might be a decent count. CBT can be done via Skype, so the fact that patients are spread over the globe isn't a problem. According to the Mayo Clinic, CBT takes an average of 10-20 sessions (http://www.mayoclinic.org/tests-procedures/cognitive-behavioral-therapy/details/what-you-can-expect/rec-20188674). That means you might change patients every 3 months, so your therapist might treat 120 people in a year. It would be fine to fund a single therapist for this task as an MVP. If you had a single therapist whom the EA community holds in high regard, I would estimate that the person can find those 120 people to treat.
Cool. Thanks for the stats on how psychologists work; all this is new to me. A sort of Schelling therapist who's able to help people in the EA community does seem like a force multiplier / helpful thing to have, I guess.

So Bill Gates wants to tax robots... well, how about SOFTWARE? May fit easily into certain definitions of ROBOT. Especially if we realize it is the software what makes robot (in that line of argumentation) a "job stealing evil" (100% retroactive tax on evil profits from selling software would probably shut Billy's mouth).

Now how about AI? Going to "steal" virtually ALL JOBS... friendly or not.

And let's go one step further: who is the culprit? The devil who had an IDEA!

The one who invented the robot, its application in the production, p... (read more)

Can we please bring back downvoting?

I'd say that you are not supposed to tax people, you are supposed to tax flows of money, e.g. income, profit, sales, etc.
This is the point at which the proposal becomes obviously insane. Not coincidentally, it is also the point at which the proposal stops having anything to do with the thing Bill Gates said he was in favour of. (It is more like saying "we tax income people get from doing their jobs, so we should tax those people's parents for producing a person who did work that yielded taxable income".) As username2 says, what gets taxed is acquisition of money; when I pay income tax it isn't a tax on me but on my receipt of that income. If anything like a "robot tax" happens, here's the right way to think of it: a company is doing the same work while employing fewer people, so it makes more profit, and it pays tax on that profit so more profit means more tax. We are generally happy[1] taxing corporate profits, and we are generally happy[2] taxing companies when their profitable activities impose nasty externalities on others, and some kinds of "robot tax" could fit happily into that framework. [1] Perhaps you aren't. But most of us seem to be, since this is a thing that happens all over the world and I haven't seen much objection to it. [2] This isn't so clear; I've not seen a lot of objection to taxes of this sort, but I also think they aren't used as much as maybe they should be, so maybe they are unpopular. (For what it's worth, I am not myself in favour of a "robot tax" as such, but if we do find that robots or AI or other technological advances make some kinds of business hugely more profitable then I think it's reasonable for governments to look for ways to direct some of the benefit their way, to be used to help people whose lives become more difficult as machines get good at doing what used to be humans' jobs.)
Isn't a VAT already basically a Robot Tax?
That would explain all those sci-fi robots who only walk around destroying stuff and never build anything. They were programmed with an incentive to keep the VAT low, they took it too literally, and things got out of control.
Seems less so than a tax on corporate profits is. Am I missing something?
Maybe true, but the sort of externality that occurs when some jobs are paid less because of robots is a pecuniary externality, not a real externality - so the usual argument for taxing these activities doesn't quite apply. Now, taxation of capital is actually somewhat justified (and robots are capital, obviously), but really only as an indirect taxation of especially valuable skill endowments (such as, hypothetically, the skill of repairing robots, or superintending a robot-reliant business) - and then only at rather mild levels that are already in play with the current income tax. (If income redistribution was not a factor, you'd rather tax consumption, labor income and resource rents + real externalities).
Actually, you don't even need to tax corporate profits in this scenario. Just tax when actual people get money - company makes more profit, eventually it needs to distribute that profit to shareholders (dividends) or employees (higher wages for the non-displaced). Tax at that point, not along the way.
I dunno, it's hard enough trying to determine if and where profit was made in order to tax it. If we didn't tax profits, only distributions, then there would be no taxes to collect. Companies and individuals would all claim that any profits are being retained for future investment or hoarding and not actually distributed to owners. That is why we tax non-distributed retained earnings.
There won't be a blanket tax on all robots, but self-driving cars and trucks can be taxed directly. Taxing them enough to reduce their usage means lower carbon emissions.
If your goal is to reduce carbon emissions, then tax the gasoline.
Politically taxing gasoline is very unpopular and there's no majority for carbon taxes.
Politically, taxing gasoline is utterly commonplace and accepted. Every developed country except Mexico does it, and every U.S. state.
In the US it is not high enough to fully pay for highway infrastructure, because raising it is politically unpopular.