All of Alejandro1's Comments + Replies

The question is analogous to the Grim Reaper Paradox, described by Chalmers here:

A slightly better example of prima facie without ideal positive conceivability may be the Grim Reaper paradox (Benardete 1964; Hawthorne 2000). There are countably many grim reapers, one for every positive integer. Grim reaper 1 is disposed to kill you with a scythe at 1pm, if and only if you are still alive then (otherwise his scythe remains immobile throughout), taking 30 minutes about it. Grim reaper 2 is disposed to kill you with a scythe at 12:30 pm, if and only if you

... (read more)
You are right. This is actually the same problem: the problem of the magic of (mathematical) infinity itself.

Lately it seems that at least 50% of the Slate Star Codex open threads are filled by Trump/Clinton discussions, so I'm willing to bet that the debate will be covered there as well.

I guess one is Eugine/Azathoth/VoiceOfRa

I had suddenly the same suspicion about VoR today, in a spontaneous way; has there been previous discussion of this conjecture that I missed?

A little []. At this point I'm at least 99% confident VoR is the same person flouting the ban again. I've not had a lot of downvotes on ancient comments lately, though, so I think he's being a bit better behaved than in the past. (Though I find the downvote-for-political-disagreement strategy rude, and I don't think it's just because the practitioners I've noticed all have politics quite different from mine.)

It is true that normally, taking people at their word is charitable. But if someone says that a concept is meaningless (when discussing it in a theoretical fashion), and then proceeds to use it informally in ordinary conversation (as I conjectured that most people do with race and intelligence), then we cannot take them literally at their word. I think that something like my interpretation is the most charitable in this case.

First, I'm not so sure: if someone is actually inconsistent, then pointing out the inconsistency may be the better (more charitable?) thing to do rather than pretending the person had made the closest consistent argument. For example: there are a lot of academics who attack reason itself as fundamentally racist, imperialistic, etc. They back this up with something that looks like an argument. I think they are simply being inconsistent and contradictory, rather than meaning something deep not apparent at first glance.

More importantly, I think your conjecture is wrong. On intelligence, I believe that many of the people who think intelligence does not exist would further object to a statement like "A is smarter than B," thinking it a form of ableism. One example, just to show what I mean: []

On race, the situation is more complicated: the "official line" is that race does not exist, but racism does. That is, people who say race does not exist also believe that people classify humans in terms of perceived race, even though the concept itself has no meaning (no "realness in a genetic sense" as one of the authors I cited in this thread puts it). It is only in this sense that they would accept statements of the form "A and B are an interracial couple."

When people say things like "intelligence doesn't exist" or "race doesn't exist", charitably, they don't mean that the folk concepts of "intelligence" or "race" are utterly meaningless. I'd bet they still use the words, or synonyms for them, in informal contexts, analogously to how we use "strength" informally. (E.g. "He's very smart"; "They are an interracial couple"; "She's stronger than she looks"). What they object to is treating them as scientifically precise concepts ... (read more)

When people say "race is a social construct", for the most part, what they mean is that racial categories are divided in ways that are ambiguous and that tend to change over time. Obviously people have different physical features and genetics, but what physical features make one a member of one race or another, where you draw those lines, and which racial distinctions are "important" and which aren't, are all social constructs. To someone without any of that social context (say, an Australian aborigine living in the year 1500 who had never met anyone outside of his own ethnic group previously) it wouldn't immediately be obvious that someone from Norway and someone from Greece are both "the same race", but that someone from Greece and someone from northern Africa are "different races". There was also an interesting study that demonstrated that people's perception of what race someone else was, or even what their own race is, sometimes changes over time based on social circumstances. []
Yes, I suppose that is true when people say such things charitably. But usually when they say such things, they are not being charitable.
When people say things like "intelligence doesn't exist" or "race doesn't exist", they are often using what on SSC is referred to as "motte and bailey"--that is, their claims that they don't exist are true based on narrow definitions, but they then apply those claims when much broader definitions are not in use.
A. I think at least some people do mean that concepts of intelligence and race are, in some sense, inherently meaningless. When people say "race does not exist because it is a social construct" or that race does not exist because "amount of variation within races is much larger than the amount of variation between races," I think it is being overly charitable to read that as saying "race is not a scientifically precise concept that denotes intrinsic, context-independent characteristics." B. Along the same lines, I believe I am justified in taking people at their word. If people want to say "race is not a scientifically precise concept" then they should just say that. They should not say that race does not exist, and if they do say the latter, I think that opens them up to justifiable criticism.

I think that the trial and error model is implausible; in which "time" are these trials and iterations occurring? The global determination of the whole universe seems much simpler.

I don't think it necessarily conflicts with free will, when free will is understood in a compatibilist way (which is how EY and most LWers understand it). If we agree that one can have free will in a completely deterministic universe with ordinary past-to-future causal chains, then why can't one have it in a universe where some of the chains run future-to-past?

That's a good link, thanks. I'm warm to compatibilism. I think I've confused the conversation by using the wrong terms, though. Instead of pointing at a lack of free will I should have pointed at the complete lack of causality, which is more constraining. You can read EY on it here. []

My interpretation of this would be that space-time would be a fixed object that exists in its entirety. In the same sense that you could take a cross-sectional scan of a sneaker and play it from rear to front, there would be a logical consistency to how the slides transformed as you progressed through the shoe, but it would not make any sense to say that one part caused another. In this analogy, 4-dimensional space-time is the shoe, and the cross section is 3-dimensional space. We play it from back to front, watching a movie of the universe, but the entire universe from beginning to end already existed; we're just looking at a slide of it at a time. Everything is consistent as the cross section passes through, but there's no causality in play; it's just an object being viewed in sequential slices. Much like EY's modified game of Life with time-travel.

This actually seems pretty unsatisfying because there is a strong impression that the world is being run mostly on causality in the normal direction, with reverse causality coming in occasionally. This seems to me to work better with the iterating model.

Regarding the time for trials and iterations, I would refer to simulation as an analogy. "World time" is happening in the simulation, and this is what the characters are aware of. From within the simulated world, how much "meta time" has elapsed outside of the simulation (i.e. the time stream that the computer is in), or how many failed attempts have been dumped from RAM, is not very relevant in the sense that these facts don't have any impact on "the world" (the simulated one) and are in fact probably unknowable to its inhabitants unless

He actually said it beforehand in LW as well. Link.

In all details, certainly not; Dumbledore's CEV might well include reuniting with his family, which won't be a part of others' CEV.

In broad things like ethics and politics, it is hoped that different people's CEVs aren't too far apart (thanks to human values originating in our distant evolutionary history, which is shared by all present-day humans) but there is no proof, and many would dispute it. At least that is my understanding.

"I ask my first question," Harry said. "What really happened on the night of October 31st, 1981?" Why was that night different from all other nights... "I would like the entire story, please."

Oh, I see. I just didn't have the context to recognize that. Thanks.

I've had an experience a couple of times that feels like being stuck in a loop of circular preferences.

It goes like this. Say I have set myself the goal of doing some work before lunch. Noon arrives, and I haven't done any work--let's say I'm reading blogs instead. I start feeling hungry. I have an impulse to close the blogs and go get some lunch. Then I think I don't want to "concede defeat" and I better do at least some work before lunch, to feel better about myself. I open briefly my work, and then… close it and reopen the blogs. The cycle re... (read more)

The part that causes circularity is probably the work: it feels easy when you are not doing it (in far mode), but difficult when you are about to do it (in near mode). Your preferences are probably something like this: The Abstract Idea of Work > Lunch > Blogs > The Real Work

It's multiple agents with their own preferences fighting for the mic. One agent with a loop is not a good model here, imo.

I understood it to be implied that the message was actually set in advance to mislead Harry into believing time travel was involved.

Upon further consideration, this is the more likely solution.

I'd be curious where the factor of 2 comes from in the Newtonian approximation.

I can take a stab at explaining this. Both the Poisson equation and the Einstein equation have the general form

  • 2nd order differential operator acting on some quantity F = Constant * Matter source

In the Newtonian case, F is the gravitational potential. In the Einstein case, it is the spacetime metric. This is a quantity with a simple, natural, purely "mathematical" definition that you cannot play with and change by redefining constants; it measures the distance bet... (read more)
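The parallel structure described here can be written out explicitly (standard textbook forms, added as a sketch; they are not quoted from the comment):

```latex
\nabla^2 \Phi = 4\pi G\,\rho
\qquad \text{(Poisson equation)}
```

```latex
G_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}
\qquad \text{(Einstein field equations)}
```

In both, a second-order differential operator acting on the field ($\nabla^2$ on the potential $\Phi$; the Einstein tensor $G_{\mu\nu}$, built from second derivatives of the metric) equals a constant times a matter source ($\rho$; $T_{\mu\nu}$).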

The formula is calculating the gravitational flux on the surface of a 3-dimensional sphere, and 3-dimensional spheres have a surface area equal to 4π times the square of their radii.

Saying that this is what the formula intrinsically does, amounts to saying that field lines are more fundamental/"real" than action-at-distance forces on point particles. But in the context of purely Newtonian gravity, both formulations are in fact completely equivalent. (And if you appeal to relativity to justify considering fields more fundamental, then why not better go for simplifying Einstein's equation and including 8π in G?)

Yep :-). I don't know enough of the physics to back that up, but that's what my gut tells me. A more educated version of me might be able to say something like "the vocabulary of forces is 'shallow'; the vocabulary of fields is deeper; the vocabulary of group symmetries is deeper still." I certainly do not have the depth of understanding to make that sort of statement with any authority. If you know enough physics to correct me or clarify, please please do. If somebody who groks relativity told me that this is the right thing to do, I would believe them (ETA: mentioned on Wikipedia []). I'd be curious where the factor of 2 comes from in the Newtonian approximation.

The current definition of the gravitational constant maximizes the simplicity of Newton's law F = Gmm'/r^2. Adding a 4π to its definition would maximize the simplicity of the Poisson equation that Metus wrote. Adding instead 8π, on the other hand, would maximize the simplicity of Einstein's field equations. No matter what you do, some equation will look a bit more complicated.

Absolutely, and Planck's constant maximizes the simplicity of finding the energy of a photon from its wavelength, and π maximally simplifies finding the circumference of a circle from its diameter. But in all those cases, it feels to me like we're simplifying the wrong equation. ETA: To be explicit, it feels like there should be a 4π in Newton's law. The formula is calculating the gravitational flux on the surface of a 3-dimensional sphere, and 3-dimensional spheres have a surface area equal to 4π times the square of their radii.
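The flux argument can be made explicit (a standard Gauss's-law sketch, added here; not from the original comment). With the field magnitude $g = GM/r^2$ pointing inward, the flux through a sphere of radius $r$ is

```latex
\oint_S \mathbf{g}\cdot d\mathbf{A}
= -\frac{GM}{r^2}\cdot 4\pi r^2
= -4\pi G M
```

so in the flux formulation the natural constant is $4\pi G$, and Newton's force law would then carry the factor explicitly: $F = \frac{G'}{4\pi}\frac{m m'}{r^2}$ with $G' = 4\pi G$.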

Here the question is raised again to Gates in a Reddit AMA. He answers:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

Edit: Ninja'd by Kawoomba.

My understanding of the use of "mindkilled" is that people who can be so described are incapable of discussing the relevant issue dispassionately, acquiring an us-vs-them tribal mentality and seeing arguments just as soldiers for their side. I really don't think that this applies to the topic of abortion on LW, which can be discussed dispassionately (much more so than in other places, at least). This is quite compatible with the possibility that the LW consensus is biased and wrong, which is what you are suggesting.

Abortion is a strongly mindkilling topic for society in general, but it is not one for Less Wrong. According to Yvain's survey data, on a 5-point scale the responses on abortion average 4.38 ± 1.032, which indicates a rather strong consensus accepting it. As a contrast, the results for Social Justice are 3.15 ± 1.385. This matches my intuitive sense that discussions of social justice on LW are much more mindkilling than discussions of abortion.

If a "pro-choice" essay had been under discussion, then "LessWrong is already pro-choice, of course it's not going to be a mindkilling discussion" would have been my conclusion as well. But the thesis of the essay was strongly "pro-life", and it still got a good reception, with rebuttals mostly of the form "here's what's wrong with your assumptions and numbers" rather than "go away you woman-enslaving theocrat". It could just be that the survey questions don't distinguish between different reasons for various stances? There may be a big practical difference between "I'm strongly pro-choice because analysis of this complicated moral question heavily tips that way, so I'm open to reconsidering if my reasoning is weaker than I thought" and "I'm strongly pro-choice because there's no good more-moderate Schelling point, so any attempt to undermine my position must be fought like a camel's nose in the tent."
This could either show that the topic isn't mindkilling, or that it is very mindkilling, if the Less Wrong consensus happens to be simply mistaken.

From eyeballing the survey results, we might expect the worst ideological conflicts on LW to be those current among libertarians, liberals, and moderate-to-mainline socialists, and especially those that're interesting to nerds with those affiliations: not, for example, abortion or immigration, where one camp's almost exclusively conservative. And indeed, the most heated political arguments on LW that I remember have dealt with radical feminism, fat acceptance, the treatment of women in nerd culture, and anything vaguely associated with pick-up artistry. ... (read more)

I answered "not at all", even though I was for some years very shy, anxious and fearful about asking girls out, because I never felt anything like the specific fears both Scotts wrote about, of being labelled a creep, sexist, gross, objectifier, etc. It was just "ordinary" shyness and social awkwardness, not related at all to the tangled issues about feminism and gender relations that the current discussion centers on. I interpreted the question as being interested specifically in the intersection of shyness with these issues, otherwise I might have answered "sort of".

Yup. I'm pretty sure I was the only one who knew about or cared about feminism during the awkward middle/high school years. Most kids just aren't that ideologically involved. Maybe I just grew up in a medium-IQ bubble (certainly lower than the Scotts), but in my experience the only place feminists really manage to hurt people is via the internet and internet-fueled outrages. However, even if it's restricted to the 'net it's still important and worth addressing, seeing as that's a main hub of communication now. Besides, nerdy heterosexual males are highly at risk for any damages that may occur via internet exposure.

You are the fourth or fifth person who has reached the same suspicion, as far as I know, independently. Which of course is moderate additional Bayesian evidence for its truth (at the very least, it means you are seeing an objective pattern even if it turns out to be coincidental, instead of being paranoid or deluded).

I think that violates the spirit of the thought experiment. The point of the dust speck is that it is a fleeting, momentary discomfort with no consequences beyond itself. So if you multiply the choice by a billion, I would say that the billion dust specks should aggregate in a way that they don't pile up and "completely shred one person"--e.g., each person gets one dust speck per week. This doesn't help solve the dilemma, at least for me.

Ok, then it doesn't solve the torture vs dust specks. But it does solve many analogous problems, like 0.5 sec torture for many people vs 50 years for one person, for example. I touched on the idea here: [] But it's important to note that there is no analogue to that in population ethics. I think I'll make a brief post on that.

The "clearly" is not at all clear to me, could you explain?

Yes, I did underspecify my answer. Let's assume that a billion dust specks will completely shred one person. Then if you have a specific population (key assumption) of 3^^^3 people and face the same decision a billion times, then you have the choice between a billion tortures and 3^^^3 deaths. If you want to avoid comparing different negatives, figure out how many dust-speck impacts (and at what rate) are equivalent, pain-wise, to 50 years of torture, and apply a similar argument.

Another dilemma where the same dichotomy applies is torture vs. dust specks. One might reason that torturing one person for 50 years is better than torturing 100 people infinitesimally less painfully for 50 years minus one second, and that this is better than torturing 10,000 people very slightly less painfully for 50 years minus two seconds… and at the end of this process accept the unintuitive conclusion that torturing someone for 50 years is better than having a huge number of people suffer a tiny pain for a second (differential thinking). Or one might refuse to accept the conclusion and decide that one of these apparently unproblematic differential comparisons is in fact wrong (integral thinking).
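The differential chain can be made concrete with a toy calculation (the factor of 100 and the one-second decrements are illustrative assumptions matching the comment's example, not exact figures from the thread):

```python
# Toy sketch of the differential chain in torture-vs-dust-specks.
# At each step: 100x more people are tortured, each for one second
# less. Every individual step looks like an innocuous trade, yet the
# aggregate person-seconds of suffering grow ~100x per step.
SECONDS_50_YEARS = 50 * 365 * 24 * 3600  # ~50 years, in seconds

people, seconds = 1, SECONDS_50_YEARS
steps = []
for _ in range(3):
    people *= 100      # 100x more sufferers...
    seconds -= 1       # ...each suffering one second less
    steps.append((people, seconds, people * seconds))

for p, s, total in steps:
    print(f"{p:>9} people x {s} s = {total:.3e} person-seconds")
```

This is the sense in which "differential thinking" accumulates: each comparison in isolation seems fine, while the aggregate badness (on a naive person-seconds measure) explodes.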

Torture vs dust specks has other features - in particular, the fact that "torture" is clearly the right option under aggregation (if you expect to face the same problem 3^^^3 times).
(nods) That said, "integral thinking" is difficult to apply consistently to thought-experimental systems as completely divorced from anything like my actual life as TvDS. I find in practice that when I try, I mostly just end up ignoring the posited constraints of the thought-experimental system -- what is sometimes called "fighting the hypothetical" around here. For example, when I try to apply "integral thinking" to TvDS to reject the unintuitive conclusion, I end up applying intuitions developed from life in a world with a conceivable number of humans, where my confidence that the suffering I induce will alleviate a greater amount of suffering elsewhere is pretty low, to a thought-experimental world with an inconceivable number of humans where my confidence is extremely high.

The exposure of the general public to the concept of AI risk probably increased exponentially a few days ago, when Stephen Colbert mentioned Musk's warnings and satirized them. (Unrelatedly but also of potential interest to some LWers, Terry Tao was the guest of the evening).

Warning: segment contains Colbert's version of the basilisk.

You could have a question about the scientific consensus on whether abortion can cause breast cancer (to catch biased pro-lifers). For bias on the other side, perhaps there is some human characteristic the fetus develops earlier than the average uninformed pro-choicer would guess? There seems to be no consensus on fetus pain, but maybe some uncontroversial-yet-surprising fact about nervous system development? I couldn't find anything too surprising on a quick Wiki read, but maybe there is something.

I would expect that even as a fairly squishy pro-abortion Westerner (incredibly discomforted with the procedure but even more discomforted by the actions necessary to ban it), I'm likely to underestimate the health risks of even contragestives, and significantly underestimate the health risks of abortion procedures. Discussion in these circles also overstates the effectiveness of conventional contraception and often underestimates the number of abortions performed yearly [] . The last number is probably the easiest to support through evidence, although I'd weakly expect it to 'fool' smaller numbers of people than qualitative assessments. I'm also pretty sure that most pro-choice individuals drastically overestimate its support by women in general -- this may not be what you're looking for, but the intervals (40% real versus 20% expected for women who identify as "pro-life") are large enough that they should show up pretty clearly.

Took the survey. As usual, immense props to Yvain for the dedication and work he puts into this.

If Alice was born in January and Bob was born in December, she will be 11 months older than him when they start going to school (and their classmates will be on average 5.5 months younger than her and 5.5 months older than him), which I hear can make a difference.

I think this way of sorting classes by calendar year of birth might also be six months shifted in different hemispheres (or perhaps vary with country in more capricious ways). IIRC, in Argentina my classes had people born from one July to the following June, not from one January to the following December.

In Massachusetts, USA, the classes of people were from one September to the following August, being based on the day classes started.

Is the "Birth Month" bonus question just to sort people arbitrarily into groups to do statistics, or to find correlations between birth month and other traits? If the latter, since the causal mechanism is almost certainly seasonal weather, the question should ask directly for seasonal weather at birth to avoid South Hemisphere confounders.

Not the only possible one. If Alice was born in January and Bob was born in December, she will be 11 months older than him when they start going to school (and their classmates will be on average 5.5 months younger than her and 5.5 months older than him), which I hear can make a difference. The survey already asks for country. Sure, some people will have been born and grown up in a country in a hemisphere other than the one they “most identify with” today, but they'll probably be a small enough minority that they wouldn't screw up the statistics too much.
I have no idea what the seasonal weather was like when I was born - but I know which hemisphere it happened in.

The question about "Country" should clarify whether you are asking about nationality or residence.

Philosopher Richard Chappell gives a positive review of Superintelligence.

An interesting point made by Brandon in the comments (the following quote combines two different comments):

I think there's a pretty straightforward argument for taking this kind of discussion seriously, on general grounds independent of one's particular assessment of the possibility of AI itself. The issues discussed by Bostrom tend to be limit-case versions of issues that arise in forming institutions, especially ones that serve a wide range of purposes. Most of the things Bostrom

... (read more)
Political science, the art of manipulating humans for power and profit..?
Here is what Bostrom himself says about this analogy: Superintelligence, p. 139.
This is great, thanks! I have always said that if you are worried about FAI, you should look into what people do with unfriendly non-human agents running around today. I am glad constitutional law people have looked into this.
Here's a salient MOOC [] that's just started on political and legal philosophy, which I'm dipping in and out of for non-FAI reasons.

0) CEV doesn't exist even for a single individual, because human preferences are too unstable and contingent on random factors for the extrapolation process to give a definite answer.

The American Conservative is definitely socially conservative and, if not exactly fiscally liberal, at least much more sympathetic to economic redistribution than mainstream conservatism. But it is more composed of opinion pieces than of news reports, so I don't know if it works for what you want.

As others suggested, Vox could be a good choice for a left-leaning news source. It has decent summaries of "everything you need to know about X" (where X = many current news stories).


But "Would you pay a penny to avoid scenario X?" in no way means "Would you sacrifice a utilon to avoid scenario X?" (the latter is meaningless, since utilons are abstractions subject to arbitrary rescaling). The meaningful rephrasing of the penny question in terms of utilons is "Ceteris paribus, would you get more utilons if X happens, or if you lose a penny and X doesn't happen?" (which is just a roundabout way of asking which you prefer). And this is unobjectionable as a way of testing whether you really have a preference an... (read more)

Seconded. But then we would also need to avoid using language that sneaks disguised utilons into the conversation.

Right; assuming (falsely of course) that humans have coherent preferences satisfying the VNM axioms, what can be measured in utilons are not "amount of dollars" in the abstract, but "amount of dollars obtained in such-and-such way in such-and-such situation". But I wouldn't call this "not being meaningfully comparable". And there is nothing special about dollars here, any other object, event or experience is subject to the same.

Every time there's an argument that goes like, "Would you pay a penny to avoid scenario X?", which in real life means actually "Would you sacrifice a utilon to avoid scenario X?" and therefore requires us to presuppose that dollars can stand in for utilons, something special is being assumed about dollars.

Utilons do not exist. They are abstractions defined out of idealized, coherent preferences. To the extent that they are meaningful, though, their whole point is that anything one might have a preference over can be quantified in utilons--including dollars.

If the rotating pie, when not rotating, had the same radius as the other one, then when it rotates it has a slightly larger radius (and circumference) because of centrifugal forces. This effect completely dominates over any relativistic one.

The centrifugal force can be arbitrarily small. Say that we have only the outer rim of the pie, but as large as a galaxy. The centrifugal force at half the speed of light is just negligible. Far less than all the everyday centrifugal forces we deal with. Now say that the rim has around zero velocity at first and we are watching it from the center. Gradually, say in a million years' time, it accelerates to a relativistic speed. The forces associated are a millionth of a newton per kilogram of mass. No big deal. The problem is only this - where's the Lorentz contraction? As long as we have only one spaceship orbiting the Galaxy, we can imagine this Lorentzian shrinking. In the case of so many that they are all around, we can't.
Special Relativity + some basic mechanics leads to an apparent contradiction in the expected measurements, which is only resolved by introducing a curved space(time). So this would be a failure of self-consistency: the same theory leads to two different results for the same experiment. However, the two measurements of ostensibly the same thing are done by different observers, so there is no requirement that they should agree. Introducing curved space for the rotating disk shows how to calculate distances consistently.
I have two photos of two different pies, one of a rotating pie and one of a non-rotating one. The photos are indistinguishable; I can't tell which is which. On the other hand, both pies are in one-to-one correspondence with their photos, and one pie should be slightly deformed on the edge. Even if it is, on the photo it can't be. The photo is perfectly Euclidean. I have measured no Lorentz contraction.
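The tension in this thread can be stated compactly (a standard Ehrenfest-paradox sketch, added here; not quoted from the comments). For the non-rotating observer at the center, the rim of radius $R$ has circumference

```latex
C = 2\pi R
```

while rulers laid along the moving rim are Lorentz-contracted by $\gamma = 1/\sqrt{1 - v^2/c^2}$, so co-rotating observers lay down more rulers and measure a proper circumference

```latex
C' = \gamma \, 2\pi R \; > \; 2\pi R
```

The radius is unaffected (radial directions are transverse to the motion), so on the disk $C' \ne 2\pi R$: the spatial geometry measured by co-rotating observers is non-Euclidean, which is the resolution sketched above.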

I am really torn between wanting to downvote this as having no place in LW and going against the politics-talk-taboo, and wanting to upvote it for being a clear, fair and to the point summary of ideological differences I find fascinating.

This forum needs to find a way to talk about politics with a cool head. This post is a good example of how to do so.

I’m always fascinated by the number of people who proudly build columns, tweets, blog posts or Facebook posts around the same core statement: “I don’t understand how anyone could (oppose legal abortion/support a carbon tax/sympathize with the Palestinians over the Israelis/want to privatize Social Security/insert your pet issue here)." It’s such an interesting statement, because it has three layers of meaning.

The first layer is the literal meaning of the words: I lack the knowledge and understanding to figure this out. But the second, intended meanin

... (read more)
The art of condescension is subtle and nuanced. "I'm always fascinated by..." can be sincere or not--when it is not, it is a variation on, "It never ceases to amaze me how..." If you were across the table from me, Alejandro, I could tell by your eyes. Most FB posts, tweets, blog posts and comments on magazine and newspaper articles are as bad or worse than what is described here. Rants masquerading as comments. That's why I like this venue here at LessWrong. Commenters actually trying to get more clarity, trying to make sure they understand, trying to make it clear with sincerely constructive criticism that they believe a better argument could be stated. If only it could be spread around the web-o-sphere. Virally.
Hmmm... let's try filling something else in there. "I don't understand how anyone could support ISIS/Bosnian genocide/North Darfur." While I think a person is indeed more effective at life for being able to perform the cognitive contortions necessary to bend their way into the mindset of a murderous totalitarian (without actually believing what they're understanding), I don't consider normal people lacking for their failure to understand refined murderous evil of the particularly uncommon kind -- any more than I expect them to understand the appeal of furry fandom (which I feel a bit guilty for picking out as the canonical Ridiculously Uncommon Weird Thing).

I like this and agree that usually or at least often the people making these "I don't understand how anyone could ..." statements aren't interested in actually understanding the people they disagree with. But I also liked Ozy's comment:

I dunno. I feel like "I don't understand how anyone could believe X" is a much, much better position to take on issues than "I know exactly why my opponents disagree with me! It is because they are stupid and evil!" The former at least opens the possibility that your opponents believe things…
Or add a fourth layer: I think that I will rise in status by publicly signalling to my Facebook friends: "I lack the ability or willingness to attempt even a basic understanding of the people who disagree with me."
People do lots of silly things to signal commitment; the silliness is part of the point. This is a reason initiation rituals are often humiliating, and why members of minor religions often wear distinctive clothing or hairstyles. (I think I got this from a podcast interview with Larry Iannaccone.) I think posts like the ones to which McArdle is referring, and the beliefs underlying them, are further examples of signaling attire. "I'm so committed, I'm even blind to whatever could be motivating the other side." A related podcast is with Arnold Kling on his e-book (which I enjoyed) The Three Languages of Politics. It's about (duh) politics--specifically, American politics--but it also contains an interesting and helpful discussion on seeing things from others' point of view, and explicitly points to commitment-signaling (and its relation to beliefs) as a reason people fail to see eye to eye.

While I agree with your actual point, I note with amusement that what's worse is the people who claim they do understand: "I understand that you want to own a gun because it's a penis-substitute", "I understand that you don't want me to own a gun because you live in a fantasy world where there's no crime", "I understand that you're talking about my beauty because you think you own me", "I understand that you complain about people talking about your beauty as a way of boasting about how beautiful you are."... None of…

Now repeat the same statement, only instead of abortions and carbon taxes, substitute the words "believe in homeopathy". (Creationism also works.) People do say that--yet it doesn't mean any of the things the quote claims it means (at least not in a nontrivial sense).
Or, (4), "I keep asking, but they won't say"....

The example of the three locks brings to mind another possible failure of this principle: that it can be exploited by deliberately giving us additional choices. For example, perhaps in this example the cheap lock is perfectly adequate for our needs, but seeing the existence of an expensive lock makes us believe that the regular one is the one that has equal chance of erring in both directions. I believe I read (in LW? or in Marginal Revolution?) that restaurant menus and sales catalogs often include some outrageously priced items to induce customers to buy…

Good example. It highlights that although erring on both sides should be a necessary condition for optimality when there's a full spectrum, it certainly isn't sufficient (and so, as a fast rule of thumb, it can be exploited).

On the other hand, it is kind of awesome that people with no knowledge of Esperanto but knowledge of two or three European languages can immediately understand everything you say--as I just did.

Agreed, though my sentence is probably easier than average because I haven't used Esperanto for years now, so I'm much more likely to remember vocabulary similar to languages I know. Knowing some of a Latin language and a Germanic one, plus knowledge of basic syntax (nouns end in -o, adjectives in -a, verbs in -is/-as/-os (past/present/future), adverbs in -e, plural is -j, accusative has an extra -n) is enough for understanding a lot of simple content.
And this is 'knowledge of' in a very loose sense - I don't know any European languages except English, and I could still work it out. (I did take 'parolas' from French 'parler'.)
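Those endings are regular enough that a few lines of code can guess a word's part of speech. Here is a rough sketch--the function name and example words are my own illustrations, not taken from any Esperanto library, and it ignores irregular words like pronouns and prepositions:

```python
# Rough sketch: guess an Esperanto word's part of speech from its
# regular suffixes, per the rules listed above. Illustrative only.

def classify(word: str) -> str:
    """Return a part-of-speech guess based on Esperanto's regular endings."""
    w = word.lower()
    # Strip the accusative -n and plural -j markers first, in that order,
    # since they stack after the part-of-speech ending (e.g. "hundojn").
    if w.endswith("n"):
        w = w[:-1]
    if w.endswith("j"):
        w = w[:-1]
    if w.endswith("o"):
        return "noun"
    if w.endswith("a"):
        return "adjective"
    if w.endswith("e"):
        return "adverb"
    if w.endswith(("is", "as", "os")):  # past / present / future
        return "verb"
    return "unknown"

print(classify("hundo"))    # noun
print(classify("hundojn"))  # noun (plural accusative stripped first)
print(classify("parolas"))  # verb
print(classify("bele"))     # adverb
```

This is roughly the trick a reader who knows a Romance and a Germanic language applies implicitly: strip the grammatical endings, then pattern-match the root against vocabulary they already know.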

I doubt it is possible to find non-controversial examples of anything, and especially of things plausible enough to be believed by intelligent non-experts, outside of the hard sciences.

If this is true, the only plausible examples would be such as "an infinity cannot be larger than another infinity", "time flows uniformly regardless of the observer", "biological species have unchanging essences", and other intuitively plausible statements unquestionably contradicted by modern hard sciences.
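The first of these is exactly the intuition that Cantor's diagonal argument refutes; for concreteness, a one-paragraph sketch of why some infinities really are larger than others:

```latex
% Cantor's diagonal argument: no enumeration of the reals is complete.
% Suppose f : \mathbb{N} \to [0,1] listed every real in [0,1],
% with f(n) written in decimal. Define a new number d digit by digit:
d_n =
\begin{cases}
5 & \text{if the } n\text{th digit of } f(n) \text{ is not } 5,\\
4 & \text{if the } n\text{th digit of } f(n) \text{ is } 5.
\end{cases}
% Then d differs from f(n) at the nth digit for every n, so d is not
% on the list. No such f is onto, hence |\mathbb{N}| < |[0,1]|:
% the infinity of the reals is strictly larger than that of the integers.
```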

Turnabout Confusion is a Daria fanfic that portrays Lawndale High as being as full of Machiavellian plotters as HPMOR!Hogwarts. Each student is keenly aware of their place in the popularity food chain, and most are constantly scheming to advance in it. When Daria and Quinn exchange roles for a few days on a spontaneous bet, they unwittingly set off a chain reaction of plots and counterplots, leading to a massive Gambit Pileup that could completely overturn the whole social order of the school…

Part One: We All Fall Down.

Part Two: All the King's Horses…

This gets pretty incomprehensible pretty fast if you remember the show as having only Daria, Jane, Quinn and Nameless Background Characters in it.

One thing that doesn't quite fit is this: If you are the weaker side, how is it possible that you come and bully me, and expect me to immediately give up? This doesn't seem like typical behavior of weaker people surrounded by stronger people. (Possible explanation: This side is locally strong here, for some definition of "here", but the enemy side is stronger globally.)

Another explanation could be that the side is dominant in one form of battle (moralizing) but weak at another kind (economic power, prestige, literal battle) and wishes to play…
Certainly related. I'd perhaps categorise the core battle here as being between different forms of social power, but the same kind of breakdown into kinds of power applies. Sometimes there is bleed-over into structural power as well (for both 'sides' at various times).

More succinctly: I am rational, you are biased, they are mind-killed.

None of these quite fit the "irregular verbs" pattern that Russell and others made famous; in those, all three words should have overlapping denotations and merely differ greatly in connotations. Maybe "I use heuristics, you are biased, they are mind-killed", but there the "to use"/"to be" distinction still ruins it.

"However, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma", however, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma.

"Is a even better joke than the previous joke when preceded by its quotation" is actually much funnier when followed by something completely different.

Another type of rare event, not as important as the ones you discuss, but with a large community devoted to forecasting its odds: major upsets in sports, like the recent Germany-Brazil 7-1 blowout in the World Cup. Here is a 538 article discussing it and comparing it to other comparable upsets in other sports.

Potentially valuable, since you have a large number of professionals not only attempting to forecast odds but actually betting their beliefs.
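As for what "betting beliefs" cashes out to: bookmaker odds encode probabilities once you strip out the bookmaker's margin. A minimal sketch, using made-up decimal odds rather than the actual pre-match Germany-Brazil numbers:

```python
# Sketch: turning bookmaker decimal odds into implied probabilities.
# The odds below are illustrative, not real historical market prices.

def implied_probabilities(decimal_odds):
    """Convert decimal odds to probabilities, removing the bookmaker margin."""
    raw = [1.0 / o for o in decimal_odds]   # naive implied probabilities
    overround = sum(raw)                    # sums to > 1 because of the margin
    return [p / overround for p in raw]     # normalize so they sum to 1

# Hypothetical home-win / draw / away-win odds for a lopsided match:
probs = implied_probabilities([1.4, 4.5, 8.0])
print([round(p, 3) for p in probs])  # [0.673, 0.209, 0.118]
```

The interesting question for upsets like 7-1 is then how far the realized result sat in the tail of what prices like these implied.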

The mention of the Sokal paper reminds me that Derrida (who is frequently associated with the po-mo authors parodied by Sokal, although IIRC he was not targeted directly) was basically a troll, making fun of conventional academic philosophy in a way similar to how Will makes fun of conventional LW thought. I wonder if Will has read Derrida…?

Yes, actually, Derrida was initially targeted by Sokal. His response in Le Monde is worth reading. (If I may be uncharitable for a moment, this bears more than a little resemblance to some but not all contemporary criticism of MIRI/LW using quotes from <= 2009.) As you can read later, the Sokal group dialed back their criticism of Derrida, to the point of pretending they had never said anything in the first place. (I know this situation quite well, as you might imagine. The remark was later published in a certain book with an interesting title.)