Followup to: Excuse Me, Would You Like to Take a Survey?, Return of the Survey

Thank you to everyone who took the Less Wrong survey. I've calculated some results out on SPSS, and I've uploaded the data for anyone who wants it. I removed twelve people who wanted to remain private, removed a few people's karma upon request, and re-sorted the results so you can't figure out that the first person on the spreadsheet was the first person to post "I took it" on the comments thread and so on. Warning: you will probably not get exactly the same results as me, because a lot of people gave poor, barely comprehensible write-in answers, which I tried to round off to the nearest bin.

Download the spreadsheet (right now it's in .xls format)

I am not a statistician, although I occasionally have to use statistics for various things, and I will gladly accept corrections for anything I've done wrong. Any Bayesian purists may wish to avert their eyes, as the whole analysis is frequentist. What can I say? I get SPSS software and training free and I don't like rejecting free stuff. The write-up below is missing answers to a few questions that I couldn't figure out how to analyze properly; anyone who cares about them enough can look at the raw data and try it themselves. Results under the cut.

Out of 166 respondees:

160 (96.4%) were male, 5 (3%) were female, and one chose not to reveal their gender.

The mean age was 27.16, the median was 25, and the SD was 7.68. The youngest person was 16, and the oldest was 60. Quartiles were <22, 22-25, 25-30, and >30.

Of the 158 of us who disclosed our race, 148 were white (93.6%), 6 were Asian, 1 was Black, 2 were Hispanic, and one cast a write-in vote for Middle Eastern. Judging by the number who put "Hinduism" as their family religion, most of those Asians seem to be Indians.

Of the 165 of us who gave readable relationship information, 55 (33.3%) are single and looking, 40 (24.2%) are single but not looking, 40 (24.2%) are in a relationship, 29 (17.6%) are married, and 1 is divorced.

Only 138 gave readable political information (those of you who refused to identify with any party and instead sent me manifestos, thank you for enlightening me, but I was unfortunately unable to do statistics on them). We have 62 (45%) libertarians, 53 (38.4%) liberals, 17 (12.3%) socialists, 6 (4.3%) conservatives, and not one person willing to own up to being a commie.

Of the 164 people who gave readable religious information, 134 (81.7%) were atheists and not spiritual; 5 other atheists described themselves as "spiritual". Counting deists and pantheists, we had 11 believers in a supreme being (6.7%), of whom 2 were deist/pantheist, 2 were lukewarm theists, and 6 were committed theists. 14 of us (8.5%) were agnostic.

53 of us were raised in families of "about average religiosity" (31.9%). 24 (14.5%) were from extremely religious families, 45 (27.1%) from nonreligious families, and 9 (5.4%) from explicitly atheist families. 30 (18.1%) were from families less religious than average. The remainder wrote in some hard-to-categorize responses, like an atheist father and religious mother, or vice versa.

Of the 106 of us who listed our family's religious background, 92 (87%) were Christian. Of the Christians, 29 (31.5% of Christians) described their backgrounds as Catholic, 30 (32.6% of Christians) described it as Protestant, and the rest gave various hard-to-classify denominations or simply described themselves as "Christian". There were also 9 Jews, 3 Hindus, 1 Muslim, and one New Ager.

I didn't run the "how much of Overcoming Bias have you read" question so well, and people ended up responding things like "Oh, most of it", which are again hard to average. After interpreting things extremely liberally and unscientifically ("most" was estimated as 75%, "a bit" was estimated at 25%, et cetera) I got that the average LWer has read about half of OB, with a slight tendency to read more of Eliezer's posts than Robin's.

Average time in the OB/LW community was 13.6 ± 9.2 months. Average time spent on the site per day was 30.7 ± 30.4 minutes.

IQs (warning: self-reported numbers for a notoriously hard-to-measure statistic) ranged from 120 to 180. The mean was 145.88, the median was 141.50, and the SD was 14.02. Quartiles were <133, 133-141.5, 141.5-155, and >155.

77 people were willing to go out on a limb and guess whether their IQ would be above the median or not. The mean confidence level was 54.4, and the median confidence level was 55 - which shows a remarkable lack of self-promoting bias. The quartiles were <40, 40-55, 55-70, >70. There was a .453 correlation between this number and actual IQ. This number was significant at the <.001 level.
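For reference, the Pearson correlation coefficient that SPSS reports here can be computed directly. A minimal sketch with invented numbers (not the actual survey responses), just to show the calculation:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: confidence of being above the median vs. reported IQ.
confidence = [40, 50, 55, 60, 70, 80]
iq         = [125, 135, 140, 145, 150, 160]
print(round(pearson_r(confidence, iq), 3))
```

To get the quoted p-values, SPSS additionally converts r into a t statistic with n − 2 degrees of freedom and looks it up against the t distribution.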

Probability of Many Worlds being more or less correct (given as mean, median, SD; all probabilities in percentage format): 55.65, 65, 32.9.

Probability of aliens in the observable Universe: 70.3, 90, 35.7.

Probability of aliens in our galaxy: 40.9, 35, 38.5. Notice the huge standard deviations here; the alien questions were remarkable both for the high number of people who put answers above 99.9, and the high number of people who put answers below 0.1. My guess: people who read about The Great Filter versus those who didn't.

Probability of some revealed religion being true: 3.8, 0, 12.6.

Probability of some Creator God: 4.2, 0, 14.6.

Probability of something supernatural existing: 4.1, 0, 12.8.

Probability of an average person cryonically frozen today being successfully revived: 22.3, 10, 26.2.

Probability of anti-agathic drugs allowing the current generation to live beyond 1000: 29.2, 20, 30.8.

Probability that we live in a simulation: 16.9, 5, 23.7.

Probability of anthropogenic global warming: 69.4, 80, 27.8.

Probability that we make it to 2100 without a catastrophe killing >90% of us: 73.1, 80, 24.6.

When asked to determine a year in which the Singularity might take place, the mean guess was 9,899 AD, but this is only because one person insisted on putting 100,000 AD. The median might be a better measure in this case; it was mid-2067.
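The mean's sensitivity to a single extreme answer is easy to demonstrate with made-up numbers (these are illustrative guesses, not the actual survey data):

```python
import statistics

# Hypothetical Singularity-year guesses: most cluster near mid-century,
# but one respondent answers 100,000 AD.
guesses = [2040, 2050, 2060, 2067, 2070, 2080, 2100, 100000]

mean = statistics.mean(guesses)      # dragged far upward by the one outlier
median = statistics.median(guesses)  # barely moves

print(mean)    # 14308.375
print(median)  # 2068.5
```

One answer four orders of magnitude out pulls the mean past 14,000 AD while the median stays in the 2060s, which is why the median is the better summary here.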

Thomas Edison patented the lightbulb in 1880. I've never before been a firm believer in the wisdom of crowds, but it really came through in this case. Even though this was clearly not an easy question and many people got really far-off answers, the mean was 1879.3 and the median was 1880. The standard deviation was 36.1. Person who put "2172", you probably thought you were screwing up the results, but in fact you managed to counterbalance the other person who put "1700", allowing the mean to revert back to within one year of the correct value :P

The average person was 26.77% sure they got within 5 years of the correct answer on the lightbulb question. 30% of people did get within 5 years. I'm not sure how much to trust the result, because several people put the exact correct year down and gave it 100% confidence. Either they were really paying attention in history class, or they checked Wikipedia. There was a high correlation between high levels of confidence on the question and actually getting the question right, significant at the <.001 level.

I ran some correlations between different things, but they're nothing very interesting. I'm listing the ones that are significant at the <.05 level, but keep in mind that since I just tried correlating everything with everything else, there are a couple hundred correlations and it's absolutely plausible that many things would achieve that significance level by pure chance.

How long you've been in the community obviously correlates very closely with how much of Robin and Eliezer's posts you've read (and both correlate with each other).

People who have read more of Robin and Eliezer's posts have higher karma. People who spend more time per day on Less Wrong have higher karma (with very strong significance, at the <.001 level.)

People who have been in the community a long time and read many of EY and RH's posts are more likely to believe in Many Worlds and Cryonics, two unusual topics that were addressed particularly well on Overcoming Bias. That suggests if you're a new person who doesn't currently believe in those two ideas, and they're important to you, you might want to go back and find the OB sequences about them (here's Many Worlds, and here's some cryonics). There were no similar effects on things like belief in God or belief in aliens.

Older people were less likely to spend a lot of time on the site, less likely to believe in Many Worlds, less likely to believe in global warming, and more likely to believe in aliens.

Everything in the God/revealed religion/supernatural cluster correlated pretty well with each other. Belief in cryonics correlated pretty well with belief in anti-agathics.

Here is an anomalous finding I didn't expect: the higher a probability you assign to the truth of revealed religion, the less confident you are that your IQ is above average (even though no correlation between this religious belief and IQ was actually found). Significance is at the .025 level. I have two theories on this: first, that we've been telling religious people they're stupid for so long that it's finally starting to sink in :) Second, that most people here are not religious, and so the people who put a "high" probability for revealed religion may be assigning it 5% or 10%, not because they believe it but because they're just underconfident people who maybe overadjust for their biases a little too much. This same underconfidence leads them to underestimate the possibility that their IQ is above average.

The higher probability you assign to the existence of aliens in the universe, the more likely you are to think we'll survive until 2100 (p=.002). There is no similar correlation for aliens in the galaxy. I credit the Great Filter article for this one too - if no other species exist, it could mean something killed them off.

And, uh, the higher probability you assign to the existence of aliens in the galaxy (but not in the universe) the more likely you are (at a .05 sig) to think global warming is man-made. I have no explanation for this one. Probably one of those coincidences.

Moving on - of the 102 people who cared about the ending to 3 Worlds Collide, 68 (66.6%) preferred to see the humans blow up Huygens, while 34 (33.3%) thought we'd be better off cooperating with the aliens and eating delicious babies.

Of the 114 people who had opinions about the Singularity, 85 (74.6%) go with Eliezer's version, and 29 (25.4%) go with Robin's.

If you're playing another Less Wronger in the Prisoner's Dilemma, you should know that of the 133 who provided valid information for this question, 96 (72.2%) would cooperate and 37 (27.8%) would defect. The numbers switch when one player becomes an evil paper-clip loving robot; out of 126 willing to play the "true" Prisoner's Dilemma, only 42% cooperate and 58% defect.

Of the 124 of us willing to play the Counterfactual Mugging, 53 (42.7%) would give Omega the money, and 71 (57.3%) would laugh in his face.

Of the 146 of us who had an opinion on aid to Africa, 24 (16.4%) thought it was almost always a good thing, 42 (28.8%) thought it was almost always a bad thing, and 80 (54.8%) took a middle-of-the-road approach and said it could be good, but only in a few cases where it was done right.

Of the 128 of us who wanted to talk about our moral theories, 94 (73.4%) were consequentialists, about evenly split between garden-variety and Eliezer-variety (many complained they didn't know what Eliezer's interpretation was, or what the generic interpretation was, or that all they knew was that they were consequentialists). 15 (11.7%) said, with more or fewer disclaimers, that they were basically deontologists, and 5 (3.9%) wrote in virtue ethics and objected to their beliefs being left out (sorry!). 14 people (10.9%) didn't believe in morality.

Despite the seemingly overwhelming support for cryonics any time someone mentions it, only three of us are actually signed up! Of the 161 of us who admitted we weren't, 11 (6.8%) just never thought about it, 99 (61.5%) are still considering it, and 51 (31.7%) have decided against it.

212 comments

Awesome work.

One thing that disappointed, but didn't really surprise me, was the lack of diversity in the community:

"160 (96.4%) were male, 5 (3%) were female, and one chose not to reveal their gender.

The mean age was 27.16, the median was 25, and the SD was 7.68. The youngest person was 16, and the oldest was 60. Quartiles were <22, 22-25, 25-30, and >30.

Of the 158 of us who disclosed our race, 148 were white (93.6%), 6 were Asian, 1 was Black, 2 were Hispanic, and one cast a write-in vote for Middle Eastern. Judging by the number who put "Hinduism" as their family religion, most of those Asians seem to be Indians."

The thing that particularly worries me is our low age. Now it's to be expected as internet communities are a young person's game but I'd be more comfortable with an average age closer to 30.

Combine that with the fact that most of us seem to be in Computers or Engineering (I'd really like to know what those "Other Hard Sciences" were) I do worry about our rationality as a group. One thing I've noticed with junk science is that Engineers and to a lesser extent Computer Scientists seem to be overrepresented. I'm not sure of all the reasons for this, I suspect that par... (read more)

I suspect that part of the problem is that we regularly work with designed systems that have a master plan that can be derived from a small amount of evidence.

I've been playing a lot of Portal and Half-Life 2 lately (first-person shooters with heavy puzzle elements), and I wonder how the level design is affecting my thought process.

I'm often in a room with a prominent exit, and it is clear that that is the exit I'm supposed to take. When the way I came in is blocked, I know that there is some other way to get out. When my computer-controlled squad mates parrot "Which way do we go?" I think to myself, "What do you mean? It's obvious the level designer wants us to go this way."

I wonder if this will affect how I deal with real world puzzles where there are many paths that don't lead to defined goals, but also don't lead to a clear dead end.

You could play procedural games like NetHack or Dwarf Fortress, which have no railroading, and where some things you encounter just can't be solved. Those kinds of games aren't as popular as "mainstream" ones like Portal, but they may better reflect the real world.
I've noticed the same thing with Valve games particularly (esp. after playing through with the developer commentary): they just seem so perfectly designed to guide the player that it becomes a bit boring. I want a few moments of running around in frustration before realizing "Aha! That's what you want me to do! How non-obvious." (A bit more like the old text-based adventures, King's Quest series, etc., in other words.)
Yes! I start interpreting things I see in game as communication from the developers, rather than a universe to figure out. Which is fine in game, but I worry it's training me for magical thinking. King's Quest style games solve some of this problem, probably because there's more 'noise': pointless things you can do, more places to wander. Grand Theft Auto is even more open-ended, though I haven't played the recent incarnations much.
This Wired article may interest you.
Amusing article - I can't quite get my mind around feeling that way about Quake, but I'll cop to dreaming about Tetris when I was younger. - Jonnane
For a while, whenever I heard any number of songs on the Dance Dance Revolution Supernova playlist I had an almost insuperable urge to jump around like a friggin' idiot.
When I used to play a lot of Quake III, I had dreams where I'd have the sensation of moving around using jump-pads. I've also caught myself walking along the street and half-consciously scanning for potential cover and ambush points. My most disturbing video game carryover was a brief impulse after a long GTA session to gun my car at a pedestrian crossing a zebra crossing.
It's a cliche that kookdom is filled with brilliant scientists outside of their expertise, but it's definitely not what I observe when I look at scientific history. Lots of kook inventors, Faraday, and lots of chemical and life and social scientists who start out correct but ignored or rejected, and who gradually embrace more extreme, attention-getting, but exaggerated and false versions of their initial thesis as a result of years avoiding their peers and interacting primarily with those members of the public who will act as an echo chamber. Then there are the free-energy and anti-gravity crowds. They seem to be born that way.
I should clarify. I'm specifically thinking of Linus Pauling with his theories about Vitamin C curing cancer and a former Nobel winning physicist (can't remember who) doing a debunking of global warming based on some flaky arguments. Of course Wikipedia claims that Pauling may not have been completely out to lunch (though I don't really trust Wikipedia when it comes to junk science). And I don't really have any hard numbers, just knowledge of a couple cases and some anecdotes from scientists complaining about the tendency of Nobel winners to turn crackpot. I suppose this could underline the danger I was mentioning about working with limited evidence as I fell victim in my very own example of it!

I'm surprised to see how close I was to the mean in so many cases. I expected on several questions that I would be, if not an outlier, then outside the middle quartiles. I was wrong in most cases. Clearly the OB/LW brainwashing process has been more successful than I realised... :P

Seriously, very interesting results. I'm a bit dismayed by the 3% female figure -- I knew I was in a minority, but I didn't realise it was that tiny. I wish I could articulate some suggestions for getting hold of more female readers/commenters. I can sort of see intuitively how this place could seem like not the most attractive one to some women, but I don't have any ideas for sorting that out. Largely I guess it may just be a self-perpetuating thing. Perhaps the first step ought to be just getting some of the current female readers/commenters to make (more/some) top-level posts too. I wish I felt brave and knowledgeable and intelligent enough to attempt one touching on some aspect or other of feminism.

Go for it! Hopefully that will give you +1 bravery.
Why not a top-level post noting the lack of women on LW? It doesn't have to be anything fancy - just note the survey results. You don't even have to offer any analysis; simply noting the problem in a top-level post should be enough to draw some women out of the woodwork in the comments section. It might even inspire a few of them to write their own top-level posts.
There has already been a top level post noting the rarity of women on LW.

I'm perplexed by the person who believes that the most severe existential risk is an asteroid strike, yet gave only 70% odds of surviving the century.

Hopefully it's not because they're an astronomer...

Maybe a human-caused asteroid strike? Or maybe this person can think of many different extinction scenarios which are individually very improbable, but add up to a 30% chance?
Most astronomers seem to put the odds of an asteroid strike at below 1 in 1000. I'd be interested to hear the person's other 299 ideas for race-ending catastrophes, each worthy of its own category (!).
I agree with your point, but just because someone can't enumerate 299 possibilities, does not mean they should not reserve probability space for unknown unknowns. Put another way, in calculating these odds you must leave room for race-ending catastrophes that you didn't even imagine. I believe this point is important, that we succumb to multiple biases in this area, and that these biases have affected the decision-making of many rationalists. I am preparing a Less Wrong post on this and related topics.
Hmmm... I think "something I can't think of" should qualify as a category, myself.

The main thing I take away from this survey is that many of us still think we can assign 99%+ probabilities based on no good info (the aliens and existential risk questions stand out in particular). Maybe the LW community needs to focus more on the basics?

I voted for a high probability of life occurring at least once elsewhere in the universe due to a combination of the very large size of the universe and the generalized Copernican principle.
Maybe other people think they have good info.
But they don't in fact have good info, so there must (with very high probability) have been some sort of rationality blunder involved.
I reported a 1% chance of extra-Earth sentient aliens in our galaxy, on Yvain's survey. Is that overconfident? I was reasoning that, if the rest of the universe is real (which it may not be under some simulation hypotheses), the odds of there being extra-Earth intelligence in this particular time-window, which has already acquired sentience and has not yet become lightcone-tiling or made itself extinct, was rather small. (The galaxy is about 100,000 light-years across). Now that I write this, I'm inclined to think I didn't pay enough heed to uncertainty in whether I set up the analysis correctly or was missing a major consideration.
I meant the people who reported 99% chances, mostly. The Fermi paradox is probably good info. One thing I do worry about is if some anthropic idea (self-indication principle) may favor densely populated universes over sparsely populated universes. Something like that is true in this model, but I'm not sure it could work (probability of intelligent life is a matter of logical necessity, what other implications do you get if you apply this model of anthropic reasoning to uncertain logical necessities?) and even if it does it should only imply a serious probability for aliens in the observable universe, not the galaxy.

Why the heck is the average stated probability of a creator god greater than the average stated probability of something supernatural existing? Or did a team of scientists in a parallel universe count as a creator god?

Scott Alexander:
I think the supernatural question was phrased as something supernatural happening within the universe. So in a deist perspective, if God created the universe and then went away, that would qualify for creator god but not supernatural.
This was really interesting. And I actually have a confession to make in this regard. I remember changing my estimates after noting that I had given a lower probability to supernatural entities than I gave to the existence of God. I can't figure out why I did that.

Two things surprise me.

First, while 73.4% of responders are consequentialists and only 11.7% deontologists, at the same time 45% of responders are libertarians. While labels like that are vague, libertarianism is in most versions a highly deontological ideology that cares about processes and not results as such.

The other thing was 33.3% of "single and looking" (plus 24.2% of "single and not looking" consists of some mix of "single and not interested" and "single, tried but given up"). There are some well known seduction tec... (read more)

While labels like that are vague, libertarianism is in most versions a highly deontological ideology that cares about processes and not results as such.

The libertarian writing I've seen is primarily consequentialist: arguments, mostly by economists, that governments produce worse results than people directing their own efforts for their own reasons. So I see no contradiction in the survey responses.

If, in the places I don't read, which are surely more numerous than those I do, libertarianism is mainly promoted by arguments that government is morally wrong, then I can see why the Libertarian party has never been more than a splinter movement.

The argument I've seen is mostly something like:

  • People have rights to absolute personal liberty in economic matters (pure deontology).
  • In the fairytale case of perfectly competitive markets with no externalities, no transaction costs, no information asymmetry, no cost of entry, and so on and so on, government intervention is inefficient.
  • Somehow based on the fairytale case, following libertarian process is bound to produce the best possible results, without any need for serious empirical evidence that this link is true.
  • In cases where following the rules doesn't seem to produce best results, we're supposed to follow the rules anyway, as breaking them would most likely result in something even worse, even if we don't immediately see it.

This sort of post-hoc consequentialization of essentially deontological ethics is extremely common. I don't know that many deontologies that have the balls to avoid this trick and assert that they don't care about consequences.

Ask a typical Christian, or other theist, and just like a libertarian, they will tell you something like that:

  • sinning is wrong (deontological basis)
  • sinning results in bad consequences (unsupported assertion)
  • even when sinning
... (read more)
I don't know where you've been finding this argument but it's hardly representative of a good argument for libertarianism. I grew up in Europe (well, the UK, which is kind of Europe) with Labour voting parents and grandparents with fairly socialist views and considered myself a socialist into my early 20s. Weak arguments like these wouldn't have been enough to convert me to a generally libertarian worldview. I had a similar caricature of the views of supporters of the free market (back when I didn't even know the term libertarian) but learning more about economics and being confronted with evidence of better outcomes in freer economies, together with learning that few serious economists (or libertarians) believe in perfectly efficient markets and learning about Public Choice Theory were key in changing my political views. Key to the economic arguments for libertarianism is the idea that incentives matter and that the incentives facing actors in a free market tend to be far less perverse than those facing politicians or employees of state run monopolies. The moral arguments stem largely from a view that personal freedom is a high moral value and that the evidentiary bar should be set very high for any demonstration of harm to justify restriction of individual freedoms. That tendency seems to be correlated with certain personality types according to some research and the crossover between libertarians and progressives/liberals on social issues seems to be as much a factor of personal values as of consequentialist reasoning. And being fairly familiar with UK politics (less so with European politics in other countries) the idea that European politics pick policies based on 'what is estimated to work best' strikes me as pretty laughable.
Scott Alexander:
Thanks, Matt. You're providing some interesting points in a direction I hadn't heard much about before. Do you think most libertarians believe that regulation by a responsible, intelligent, benevolent government would improve society, but that we simply don't have a government we can trust that much? Or do you think they believe that any government intervention is likely to have adverse effects no matter how well-planned it is?
I think most libertarians would tend to agree with Hayek's presentation of the Economic Calculation Problem as a fairly fundamental obstacle to successful government planning. There are a couple of problems with government attempts to improve society: one is their practical ability to do so (given a clear goal, are they able to achieve it) and the other is how they decide what constitutes 'improvement'. The fact that they generally fail at the former tends to mask the fact that they don't really have a good way of doing the latter. Given all the relevant inputs, perfect rationality and unlimited computational capacity I concede the theoretical possibility of a central planner producing more optimal outcomes than a market. Such a planner would be so far from any government that actually exists or could exist given current technology however that I don't consider it particularly relevant whether it is theoretically possible or not. That could perhaps change if Eliezer is successful. The more immediate problem is that governments are not structured in a way that provides incentives to improve society. The reality of politics is all about special interests, rent seeking, regulatory capture and political maneuvering. The system as it actually exists is certainly not capable of making rational policy choices to improve society, though it remains possible that by some happy accident some policies may not be terribly harmful.
Matt, I'd be interested to know how your broader views on the nature of morality (i.e. that it's essentially enlightened self-interest) feed in to your support for libertarianism. More specifically, it seems as though this view would set a lower empirical bar than more altruistic views, and I guess I'm wondering to what extent you view the empirical arguments for libertarianism as sufficiently strong that you would still endorse something like it if you were a utilitarian or a prioritarian or an egalitarian instead.
My views on morality are certainly interconnected with my support for libertarianism. In the case of healthcare for example, my idea of what would constitute a good system may well differ from someone who takes a more utilitarian view of morality. For example, I think there may well be a place for some kind of government involvement in the control and treatment of infectious disease since there are externalities to consider if someone foregos treatment for cost reasons and a free at the point of delivery treatment service for infectious diseases is arguably a public good that would be undersupplied without government involvement. I don't however think that anyone has a fundamental right to healthcare and utilitarian arguments for healthcare reform that advocate a system based on a more 'equitable' allocation of healthcare resources are not going to carry much weight for me. This does mean that I will tend to judge empirical evidence according to somewhat different standards than someone who takes a different view of morality. If someone is arguing for universal healthcare based on a particular set of moral premises, I am likely to point out evidence suggesting the reforms won't work even to achieve their stated goals rather than to try and argue with their premises. It's entirely possible that the evidence would suggest that the proposed reforms would achieve their goals and I would still not support the reforms however since I might not share those goals. There's an obvious risk that I will tend to view evidence selectively because of this but once you're aware of confirmation bias and make an effort to allow for it I'm not sure how much more you can do to protect yourself. Many of the economic arguments for libertarianism stem from the fact that people don't act like pure altruists/utilitarians and instead act largely in their own self interest. I'd argue that if you start from utilitarian premises and try to devise policies to further those goals you are often
FYI, "Libertarianism" apparently means something different in the United States than it does elsewhere. This comes from a friend who is currently majoring in Political Science. He claims that "true libertarians would just laugh at American libertarians." I do not know exactly what that means or give any more information, but it sounded relevant to the discussion.
In France at least, "Libertarians" ("Libertaires") are traditionally left-wing anarchists; US-style Libertarians would be what we call "Liberals" ("Libéraux"), though it seems recently some started calling themselves "libéraux-libertaires".
I hadn't heard the term in the UK before encountering it in discussions with American libertarians online. I believe Classical Liberalism would be the closest term commonly (though not very commonly any more) used in the UK.
I'm not sure what you meant by "based on what is estimated to work best," but I would say that modern European politics is not that different from modern American politics, or from politics fifty years ago, in that politics can be described as the result of pre-existing political institutions; irrational, ignorant, and unenlightened voters; corruption; and special interest groups. Well, things could be a lot worse. We could live in Myanmar or Sudan. If European politics has gotten less ideological, is that (to a first approximation) because political institutions changed or because voters became less ideological?
As far as I can tell, European politics (as far as that's even a valid label) is different from American politics. I haven't done any proper research, so this might be just impressions, but from what I can see: many Americans would say they "are a Democrat/Republican/etc.", while Europeans would only say they "vote Labour/Conservative/etc."; Europeans are much more likely to switch votes between elections; and American political parties talk a lot about ideologies (freedom, fairness, the Constitution, the Founding Fathers, Christian nation, this or that is socialism, and so on), which is extremely unusual in Europe.

By the way, your description of what politics is like, while not invalid, seems extremely biased. As far as I can tell, politics is mostly about day-to-day dealing with the mundane problems of managing the state, and balancing the interests of different groups within it. Yes, the things you're talking about are there, but if someone described modern capitalism as consisting of exploitation of third-world workers, destruction of the environment, corruption, union busting, focus on quarterly profits over sustainability, gender discrimination, race to the bottom, oligopolies, brainwashing consumers, etc., it would also be true, but about as biased.
When speaking about politics in general, and current governments in particular, my rhetoric tends to be negative and to focus on problems. This is because I hope that talking about the problems will get people to help fix or work around them. It is my impression that the public, though perhaps not people on LW, have too much faith that a) they know what good public policy is, and b) current policy is good. You would probably respond, and would be correct to respond, that government, and the political process, can do good. This should be recognized... I am not a libertarian extremist.

I know a fair bit about American politics, and the disciplines of political science/economics/sociology. But I know little about Europe, and I should have admitted that straight up. I didn't, and still don't, fully understand how ideological politics is across the Atlantic. Don't people trumpet rights-based claims a lot? Or draw on what are considered admirable nationalistic characteristics in framing debates? Or talk about the dangers of neo-liberalism or capitalism? I'll have to think/read about that more. Thanks for the suggestion.
By focusing on problems of government and ignoring problems of modern capitalism, which arguably has far more influence (both positive and negative) on our daily lives, and over which we have a lot less control, you're highly biasing the debate. It's not just you; I would say people in general are a lot more critical of government policies than of the consequences of the current form of capitalism (which has nothing to do with the libertarian/econ-101 fairytale free market).

As for European politics (I'm basing this mostly on Poland, the UK, and Germany, as opposed to the States, but my understanding is that the situation is very similar in most European countries):

Admirable nationalistic characteristics - never; that's a purely American thing. European politicians tend to be extremely shy about national issues; there's no flag waving, etc.

Rights-based claims - not really. You can often hear that some policies are unjust toward some group, or cause some group suffering, or that some policies would be beneficial for some group, but it's very rarely about an abstract "right to X" the way American debates are framed.

Talking about dangers of neo-liberalism - this happens, usually in terms of a specific problem (like mistreatment of employees, or job losses, or environmental issues), more often in a realistic "companies only care about profit, so we need to regulate the things about them that we care about" form, and rarely as a generic "neo-liberal capitalism is bad". But why do you include it as ideological? Should neo-liberalism be a taboo subject?
Companies care about profits, which makes them care about their consumers, their suppliers, their workers, and their congressmen (for better or for worse). But regulations are obviously necessary, and I like public goods. Again, I think your argument that U.S. and European politics differ is interesting; I should look into that.
Right now, yeah, pretty much. In Europe the most you can find is politicians of country X talking about protecting "X jobs", but on a "we look after our interests, others look after theirs" basis, not out of any sense of superiority and uniqueness of the kind so prevalent in American political propaganda.
Nationalism may be less potent in Europe than the U.S., but there are other countries in the world. And my impression is that, thankfully, nationalism is less potent in the U.S. than in many of them.
You have a point; I only looked at Western democracies, where the U.S. is an outlier, but there are plenty of countries with a lot worse nationalism than the U.S. if you look outside that set.
I think that both promoting and criticizing neo-liberalism are fairly ideological projects. I wouldn't taboo either of them, but I would like to see politicians/journalists/voters more focused on discussing the costs and benefits of specific policies which I think would lead people to be more consequentialist.
My point was that problems here are rarely framed as pro-neoliberalism vs anti-neoliberalism, the focus tends to be on specifics, which I would say is more productive.
I agree on everything but the dangers of neo-liberalism. This seems to me to be ever present, also in relatively successful countries like Germany and France. Boo neo-liberalism. A bit like inequality. Ideology in the American sense is pretty much relegated to fringe movements. I live in Denmark, but follow politics in major European countries.
Paul Crowley:
FWIW, these have mostly been the arguments I've seen for libertarianism; that, and arguments which hinge on the importance of wealth going to the "deserving" over the "undeserving". If anyone can point me to any online writings on the subject which tackle the standard challenges to libertarian capitalism in a way that doesn't hinge on deontological ideas or ideas of deserving, I'd be interested to read them.
No strong opinion on whether they're correct, but from what I've seen libertarians argue from consequences rather than deontology most of the time, so I have to wonder where you've been looking. As for pointers, there's a libertarian-leaning econ encyclopedia here.
I recommend J.S. Mill's On Liberty - it's not necessarily argued entirely from consequentialist grounds, but that's basically where he's coming from. Online version
You cannot be honestly consequentialist without seeking the best empirical evidence you can get, and I find the idea that there might have been much useful evidence for the best organization of government in 2009 back in 1869 extremely unlikely, so I'm going to completely disregard this recommendation.
I'm not at all sympathetic to the libertarian point of view, but I have to say that this does not sound like your true rejection. I find thomblake's Boyle's Law analogy quite apt: if you are really interested in thermodynamics, you have to start with material at the Boyle's Law level. Likewise, if you are truly interested in understanding libertarian thought, it behooves you to start with a basic text.
If someone wants to argue for libertarianism versus the status quo on consequentialist and empirical grounds, it stands to reason they should have some idea about the status quo, which a person writing in 1869 couldn't possibly have without breaking causality. I'm not saying Mill doesn't make good deontological arguments, as those can be timeless; I'm simply not interested in deontology here.
You seem to have missed the part where thomblake claims J. S. Mill more-or-less originated consequentialism. Seriously, asking for a reference on LW, getting one, and dismissing it without even flipping through it? Lame. ETA: My bad -- you did not ask for the reference. I am lame.
Wasn't it ciphergoth who asked, not taw?
Your celebration of ignorance angers me. You asked for a recommendation and got one from probably one of the people best qualified here to answer that question. Really, it's a very short book. And it's one of the basic works on classical liberalism, one of the foundations (along with Locke's Second Treatise of Government) of all current discourse on liberalism. Mill is arguably the fellow who invented consequentialism (with a hat tip to Bentham and to J.S. Mill's father). It's like if someone referred you to Boyle's Law and you insisted someone from the 17th century couldn't possibly have anything useful to say about physics. EDIT: correction - as noted above, it was not taw who asked for a recommendation in the first place. Mea culpa.
By this logic, one could also argue in favor of Newton's theories on alchemy because he essentially invented classical mechanics. Consequentialism is a type of formalization of ideas on ethics, which are inherently arbitrary. Theories of political structure deal with empirical matters of actual results. taw asserts that someone in the 17th century would have had no empirical data relevant to modern government, an assertion that is, if not obviously correct, at least defensible to the extent that society has changed since then.
Most economists are more libertarian than most people, which means something to me.
Paul Crowley:
That's enough to interest me but obviously not nearly enough to convince.
Fair enough, I'd like to believe that my libertarian sympathies are based on a lot more than that as well. I'm sure you've read a lot of Robin Hanson, do you feel he focuses a lot on a deontological justification for libertarian ideas? I also recommend for learning to see the world through the eyes of thoughtful libertarian economists. Both of these sources are more libertarian than I am, but I find reading them very worthwhile and often convincing. In important respects, even Paul Krugman is more libertarian than most Americans. I think we'd probably do well to discuss individual policies, which can be done more precisely than overarching political philosophies.
This is probably a good point, as for all the sound and fury of this thread I would be slightly surprised if there were more than a handful of actual, significant policy disagreements between participants.
The most straightforward types of both libertarianism and utilitarianism take the form of systems that can be logically built from a base of few, powerful, elegant axiomatic principles. This type of system appeals deeply to mathematicians and engineers, hence both the large intersection and the high representation here.
Yes, RichardKennaway puts it nicely. Also note that John Stuart Mill wrote both "On Liberty" and "Utilitarianism".
Echoing RichardKennaway, IMO most of the strong arguments for libertarianism (as a set of policies) are consequentialist ones by economists. The other issue is how to classify someone if they defend some mix of consequentialism and deontology. For example, Robert Nozick argued for rights as side constraints in an otherwise utilitarian moral theory, and Roderick Long argues for deontology based on consequentialist grounds. I'll raise my hand as someone who could probably truthfully describe myself as either, but settled on consequentialist in part for social reasons.
Do any of these economists have a consistently successful track record of prediction? Remember, this is a field where opinions of serious economists on the recent stimulus package ranged from "it won't have any effect" to "it will make things worse" to "it doesn't go far enough". Economists talking about large-scale political structures should be assumed to lack credibility until proven otherwise via actual, consistent predictive results. EDIT: Requesting clarification on why this comment was voted down to -2. Robin has posted repeatedly on many experts' allergies to predictions. Have I made a mistake in my conclusions here?
lesswrong is not completely there yet, but it's steadily heading toward reddit's "downvote to disagree". It's a natural consequence of reddit-style comment up/down-voting system, don't think about it too much.
Strongly disagree about "don't think about it too much," but upvoted for pointing out this important problem. Everyone: upvote for useful discourse, not agreement!
"don't think about it too much" as in "don't think about things you cannot affect". Unless you want to go and convince Eliezer to remove downvoting and leave only upvote and report links like on Hacker News. This will leave more garbage in the comments of course, I think it's smaller problem than "downvote to disagree", but I have no strong evidence about it.
Unless or until we get separate voting for "agreement" vs. "quality", as people have mentioned a few times.
Paul Crowley:
Listen to Robin Hanson discuss this phenomenon on EconTalk. Starts with half-hour monologue by presenter, but I find the presenter quite interesting too.
Consequentialism and deontology don't really 'mix' well. Either the consequences ultimately matter, or the rules ultimately matter. So it's either 'consequentialism' that collapses into deontology, or 'deontology' that collapses into consequentialism, or some inconsistent mix, or a distinct theory altogether.
What's wrong with maximize [insert consequentialist objective function here] subject to the constraints [insert deontological prohibitions here]?
Act A will certainly generate X units of good, and has a Y% chance of violating some constraint (killing someone, say). For what values of X and Y will you perform A? It's very tough for deontology to be dynamically consistent.
This is a problem for deontology in general, not a specific problem that arises when trying to combine it with consequentialism. Whatever probability Y a deontologist would accept can simply be built into the constraint. If the constraint is satisfied, then you do A iff it maximizes X. Otherwise you don't.
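The hybrid decision rule being described (filter out actions whose chance of violating a constraint exceeds some accepted threshold, then maximize expected good over what remains) can be sketched in a few lines. All of the names, numbers, and the 10% threshold below are illustrative assumptions, not anything proposed in the thread:

```python
# A minimal sketch of "maximize the consequentialist objective subject to
# deontological constraints". Values are purely illustrative.

def permissible(action, violation_threshold=0.10):
    """Deontological side: an action is off the table if its probability
    of violating a side constraint exceeds the accepted threshold Y."""
    return action["p_violation"] <= violation_threshold

def choose(actions):
    """Consequentialist side: among permissible actions, pick the one
    with the greatest expected good X."""
    allowed = [a for a in actions if permissible(a)]
    if not allowed:
        return None  # every option is prohibited by the constraints
    return max(allowed, key=lambda a: a["expected_good"])

actions = [
    {"name": "A", "expected_good": 100, "p_violation": 0.25},  # best outcome, too risky
    {"name": "B", "expected_good": 60,  "p_violation": 0.05},  # permissible
    {"name": "C", "expected_good": 40,  "p_violation": 0.00},  # permissible but worse
]

print(choose(actions)["name"])  # A is filtered out; B beats C on expected good
```

Note how this makes the comment's point concrete: the threshold does the deontological work, and once an action clears it, the decision is purely consequentialist.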
Then there are further questions: 1. why maximize that? , and 2. why use those constraints? Note that both of these are ethical questions. The way you answer one might have implications for the answer to the other.
Can't both of these questions be asked of pure consequentialists?
Sure, but the point is that one concern will probably collapse into the other. For a pure consequentialist, question 2 is either irrelevant or answered by question 1, and for question 1 you will end up in a bit of a circle where "because it maximizes overall net utility" is the only possible answer, with maybe an "obviously" down the line.
Well, yes. But we're not talking about pure consequentialists. It's obvious that hybrid deontology-consequentialism is inconsistent with pure consequentialism; it's also beside the point. Deontological constraints are seldom sufficient to determine right action. When they're not it seems perfectly natural to try to fill the neither-prohibited-nor-obligatory middle ground with something that looks pretty much like consequentialism.
Why not? If libertarianism (more than other ideologies) reflects statistical truths of human existence, we'd expect to reach the same conclusion from different avenues of argument.
There are both deontological and consequentialist arguments for libertarianism, and I think they're both equally convincing (to their respective audiences). My perception is that libertarians who used to be liberals tend to favor the consequentialist arguments, while libertarians who used to be conservatives tend towards the deontological.
Certainly most libertarians care about processes, or at most about results very similar to the processes, but this is a biased sample. Most ideologies are about process and uninterested in evidence about consequences, but that doesn't mean that people who use the term "libertarian" are ideologues. One cost of using the term is appearing to be an ideologue. For this reason, I refuse to relinquish the term "liberal" to the modern liberals. But I think that taw is poisoning the discourse, making it worse than it already is. It's a pretty common tactic to paint anyone outside the mainstream as an ideologue.
Scott Alexander:
Oh, hey, we have data! According to crosstabs, of our fifteen deontologists, four were libertarian, four were liberal, four were socialist, two were conservative, and one didn't list political views. That means deontologists were slightly less likely to be libertarian than the average person. (Deontologists were much more likely to be conservative than the average person, but I can't draw too many conclusions from that because there was such a small sample size of deontologists and conservatives.) I admit I didn't expect that result. I think it's because the really, really loud obnoxious libertarians like Objectivists are all deontologists. But I don't think this site has a lot of those. I would be curious what would happen if we polled the reader base of [EDIT: Also, an overwhelming majority of those who said they didn't believe in morality were libertarians. Wonder what that means.].
Perhaps they are contractarians, which they think isn't about "morality" per se?
In what way is he "poisoning" the discourse? He didn't even use the term ideologue, and he explained in a later post why he thinks libertarianism is essentially deontological in nature. Accusing him of "making the discourse worse" only serves to itself worsen the discussion. Quite frankly, in my experience with people arguing for libertarianism, it tends to be precisely what he describes--a lot of bottom-line faux-consequentialist arguments about why free market principles necessarily produce better results, combined with question-begging arguments that assume individual economic freedom as the value to be maximized. As a concrete example, by almost any metric European-style socialized health care systems work empirically, objectively better. Given the high cost of trying untested systems and the general lack of predictive power demonstrated by current macroeconomics, I can't conceive of any coherent, consequentialist argument against the immediate utility of adopting such a system in the USA, yet most libertarians will argue until blue in the face that socialized health care is a terrible idea, in apparent defiance of reality. EDIT: This comment was pretty promptly voted down to -2 for reasons not apparent to me. Any reasons other than disagreement?
Deontological principles often help maximize utility indirectly, as I'm sure most utilitarians agree in contexts like war and criminal justice. Still, I agree deontology can bias people in the direction of libertarian politics. On the other hand, folk economics can bias people away from libertarian politics. Since utilitarianism values the sum of all future generations far more than it values the current generation, it seems like (if we ignore that existential risks are even more important) utilitarianism recommends whatever policies grow the economy the fastest in the long run. That might be an argument for libertarianism but it might also be an argument for governments spending lots of money subsidizing research and development.
It seems more the other way to me--die-hard libertarians tend toward deontological positions, typically by gradual reification of consequentialist instrumental values into deontological terminal values ("free markets usually produce the best results" becomes "free markets are Good", &c.). This is true, of course, and it's worth noting that I agree with a substantial majority of libertarian positions, which is part of why I find some aspects of libertarianism so irritating--it helps marginalize a political outlook that could be doing some good. I'd think more likely it'd be an argument for both--subsidized research combined with lowered barriers to entry for innovative businesses--tile the country with alternating universities and silicon valley-type startup hotbeds, essentially (see also: Paul Graham's wet dream). Anyway, I don't think it's the case that all forms of utilitarianism assign value to future generations that may or may not ever exist. Assigning value to potential entities seems fraught with peril.
Such as?
It would seem to support the biblical condemnation of onanism.
"Potential entities" here doesn't mean "currently existing non-morally-significant entities that might give rise to morally significant entities", just "entities that don't exist yet". A much clearer phrasing would be something like "Does my utility function aggregate over all entities existing in spacetime, or only those existing now?" IMO, the latter is obviously wrong, either being dynamically inconsistent if "now" is defined indexically, or, if "now" is some specific time, implying that we should bind ourselves not to care about people born after that time even once they do exist.
Combinatorial explosion, for starters. There's a very large set of potential entities that may or may not exist, and most won't. Assigning value to these entities seems likely to lead to absurdity. If nothing else, it seems to quickly lead to some manner of obligation to see as many entities created as possible.
But not assigning value to potential entities implies that we should make a lot of changes. Ignoring global warming for one. Perhaps enslaving future generations?
I think it's arguable that global warming could impact plenty of people already alive today, and I'm not sure what you mean by enslaving future generations. But yes, assigning no value at all to potential entities may also be problematic, but I'm not sure what a reasonable balance is.
Taken together, bullet points 2, 3, and 4 are a textbook strawman. To me, this speaks more to the extent of your motivation to find merit-worthy libertarian writing than to the merit of libertarian ideas. It so happens that an entire school of libertarian thought ("policy libertarianism") is dedicated to studying the specific consequences of government action. One interesting claim: "State actors are (made up of) people who are subject to the same irrational biases and collective stupidity as market actors, and often have perverse incentive structures as well." If you're interested in reading some reasonable libertarians, you might try The Cato Institute, Reason Magazine, or EconLog as starting points. Really? Respectfully, it seems much more plausible, based on the tone of your post, that you're couching an appeal for your own preferred policy in hypothetical terms than that you're actually suffering from a failure of imagination.
FWIW, I generally find Will Wilkinson and Tyler Cowen more reasonable than those listed above. (Yes, I realize Will works for the Cato Institute; I find him more reasonable than his employers.) YMMV.
Along with a few others, I mentioned them both by name in an earlier version of that post. I didn't want to get bogged down presenting all the relationships needed to establish that all these people were in fact libertarians: Instead of beating that glob of stuff into something readable, I got lazy and went for the low-hanging fruit instead, specifically the over-the-top claim that there's no such thing as a coherent, consequentialist, libertarian argument against (e.g.) European-style socialized health care.
That's certainly not what I meant by "poisoning the discourse," or I would have made my comment on it. It isn't a strawman (in the sense of purely made up). That is how most libertarians argue. I liked that post much better, but it still doesn't say why these actions by the majority of libertarians matter. Maybe they've poisoned the word already. Saying "these guys are nuts, avoid their brand name" is just pointing out a bad situation, not making it worse. There are other reasons it might matter: a consequentialist libertarian should ask himself how he reached that state, if it was from fakely consequentialist libertarian arguments. It reminds me of Robin Hanson's advice to pull the rope sideways; while that seems like good advice on how to choose policies to focus on, his advice not to choose sides seems exactly backwards. Instead, choose a party, prove your loyalty, and pull that party sideways. I am not afraid of fakely consequentialist libertarians, because I think I can tell the difference. Except that I am afraid of Cato, which argues from the conclusions and might be clueful enough to invest in rhetoric. Why would you ever look to lobbyists?
Let's not argue semantics. I had intended to express the following simile: (3-bullet-points : rigorous libertarian thinking) :: (straw-facsimile-of-human : actual-human) I'm afraid I'm having trouble understanding what you mean here. Can you clarify? I recognize it may not speak to the question you're actually asking, but my immediate reaction to this is: "Arguments employed by most libertarians are completely irrelevant. It's the arguments employed by the strongest and most sophisticated libertarians that demand our attention." I'm confused here, too. You mention falsely consequentialist libertarians and seem dismissive of them. You mention the Cato institute, and suggest they are arguing in bad faith and therefore very likely to be wrong. Your reference to "tell[ing] the difference" suggests you might entertain the idea of a consequentialist libertarian who argues in good faith. Is it possible that an earnest consequentialist libertarian could be right? What about?
Uhm, point 2 at least is a straight up fact, as real markets typically diverge to varying degrees from perfect markets. I also note you don't actually dispute point 1, which is the strong statement of a deontological ethical position, and pure deontology remains incompatible with consequentialism, hence the apparent contradiction in ethical systems. If I've been unimpressed with the arguments of rank-and-file members of a political position, why would I be motivated to look for better writing that may or may not exist? Do you go looking for merit-worthy religious apologetics? That said, do you know of any libertarian arguments that do not assume either 1) economic freedom as the primary terminal value or 2) assume the efficiency of real-world markets? Both are unwarranted assumptions that seem to underlie many libertarian arguments I've seen. Well, yes, I prefer policies that are empirically demonstrated to actually work, especially when the cost of trying a system that fails is very high. Why don't you?
Yes. Diagnosing the faults in Alvin Plantinga's reasoning is important. Am I to understand you'd prefer a frank exchange of views with Jerry Falwell? Yes. I included one such argument in the post you just replied to. I quote myself: In other words, government decision-makers (i.e. bureaucrats) have just as much trouble integrating new information, violating social norms, and admitting error as consumers or decision-makers for firms, but bureaucrats are also subject to perverse incentives, regulatory capture, etc. The implied primary terminal value here is welfare-maximization, according to some material standard that I'm assuming we could agree on, given that we're both here. No specific claim about the efficiency of markets is made. A fortiori, the argument derives some of its strength from the acknowledgment of certain deviations from rational behavior that (once again) we both presumably know about, because we're both here.
Scott Alexander:
My main complaint with this argument is that it should be empirically testable. You can implement regulatory scheme X in Area A, and no regulatory scheme in Area B, and see which produces better results. For example, ban all cancer treatments that top doctors agree are useless and dangerous in Area A, keep all treatments legal in Area B, and see which area has higher mortality among cancer patients. Many libertarians I know have absolutely no interest in doing this, and don't even like talking about the term "regulatory scheme X", because they prefer to lump all possible regulatory schemes together and judge them on the merit of the first one that comes to mind (this is also a problem with many socialists, for the opposite reason).

I don't know much about economics, but I do know a bit about public health policy, and the people in charge of that are sometimes very good about using studies to determine whether their government interventions are an overall improvement over the no-intervention case (obvious exception: the FDA, which is very good at running studies, but very bad at running the right studies and doing sane cost-benefit analysis). When these studies show positive results at relatively low cost, a truly consequentialist libertarian ought to admit government regulation has been effective in that case. Instead, they tend to dismiss it as a fluke or start talking about some case where government regulation isn't effective.

I think the great error in this whole debate is framing it as a conflict between socialists (who supposedly ought to think all government interventions are great) and libertarians (who supposedly ought to think all government interventions are terrible). In reality, some of these will work and some of these won't. I'd rather people started paying more attention to which were which than become crusaders for bigger or smaller government. I think "Government regulation is bad" (or "is good") is approximately the same kind of sentence as "Isl

I'm reluctant to jump into a long discussion of the specifics of libertarian public policy -- mind killer and all that -- but in light of the terrible account of itself libertarianism has given you and SoullessAutomaton, maybe a few nonspecific comments are in order.

There's such a thing as libertarian public policy research. It happens in think tanks. It gets done by academics (mostly economists), it incorporates peer review, and it usually doesn't hold with the kind of boorish behavior you're describing. Many of its hypotheticals are imports from the most inconvenient possible world. Specifically, it acknowledges that market failures exist and that government intervention is sometimes the most effective way to deal with them; that regulation has legitimate uses in service of the public good; and above all that pragmatism and compromise are the only virtues that can survive in the political arena.

Like most public policy it is essentially utilitarian, and its specific claims center around the idea that society is too complex for any central authority to administer efficiently. That's to say, while there are many good ends the government might achieve through intervention in th...

I appreciate your presentation of the ideas here; it's more enlightening than most material I've seen. That said, I still take issue with some points.

Peer review by academics is only meaningful if based on a foundation of empirical observation and testable predictions; I think it remains to be demonstrated that macroeconomics has any predictive power whatsoever. Otherwise you end up with something like literary criticism--sophisticated, elaborate arguments unrelated to reality in nearly every way.

This does not rule out that certain subsections of society may benefit from central administration. This is easily demonstrated by the existence of large, niche-dominating corporations, which tend to be every bit as inefficient and bureaucratic as government. Some problems are such that the benefits of centralization outweigh the costs of bureaucracy.

Agreed completely, but sophisticated answers are still useless without empirical foundations.

...which, ironically, leads to the other great flaw of libertarian politics--it proposes that government voluntarily reduce its own power dramatically and promotes increased personal responsibility. This is not a popular idea. People like having an authority, and people in government like being authorities. Large-scale libertarian government strikes me as, unfortunately, every bit as unlikely as Yvain's government of controlled experimentation.
While I am pretty skeptical of the predictive power of a lot of macroeconomics as well, it seems odd to demand empirical research but simultaneously deny that the field in question is amenable to empirical research. A lot of the economics research that is used as support for libertarian positions is based on comparative studies between countries or between jurisdictions within countries. One common thread of research is to attempt to rate countries according to some defined measure of economic freedom and then see if the rating correlates with positive outcomes (GDP per capita being a common choice). There are all sorts of ways that research can be criticized, but to completely rule out such research as admissible evidence would seem to render questions about how to organize society completely beyond the realms of scientific investigation. If studies of this kind are not a valid basis for making decisions, then how do you propose ideal enlightened governments should determine policy?

Hence the existence of things like the Free State Project and seasteading. Libertarians are quite aware of the difficulties of achieving their vision of society through conventional democratic politics. In fact, there's a recently established blog on that very topic, run in part by a Less Wrong commenter (who is also behind the seasteading idea).
I'm not saying it's not amenable to empirical research; I just don't get the impression that any extant research has been fruitful. As I said earlier, I saw serious economists discussing the USA stimulus bill predicting everything from "harmful" to "no effect" to "doesn't do enough", and as Robin has observed, the chances that any of these predictions will be remembered are close to nil. This is strong evidence that the field as a whole lacks rigor, and that anyone who does know their stuff is being drowned out by the rest.

Relying first and foremost on things that are already demonstrated to have worked well is a good start; hence my argument for adopting European-style socialized health care. Also, drop things that have been demonstrated ineffective, like the "war on drugs". Beyond that, take action only when necessary, and test new ideas in small areas first when possible. Modern governments are too large and powerful to be making large, expensive mistakes.

I've read about the seasteading project before, actually, and I think it's generally a wonderful idea.
I'm not sure what kind of economics you're thinking of when you say macroeconomics. I have very little confidence in the kind of macroeconomics that tries to relate things like interest rates and savings to money supply using simple formulas, or that tries to give accurate values to 'multipliers' for stimulus spending, or that constructs mathematical models of the economy and uses them to make predictions about future growth from a few aggregated inputs by curve fitting to historical data. I'd agree that most of that is junk. The kind of macroeconomics I think has some value is that which attempts to gather empirical support for particular policies by comparing outcomes across different countries or jurisdictions, or across different time periods. This kind of research is obviously far from ideal since it is weakly controlled and is often making hard-to-justify comparisons between different measures. Maybe macroeconomics is not the right term for that kind of research, but I'm not sure what else to call it. In terms of gathering empirical evidence for the results of particular policy decisions, that seems about the best we can do at the moment.

You do realize how far from an uncontroversial claim that is? I grew up with the NHS in the UK (and I now live in Canada, which also has a universal health care system). They are far from perfect systems. Every government for as long as I can remember in the UK has come into power partly on a promise to 'fix' the NHS. None have succeeded. I don't think I've heard anyone argue that the healthcare system in the US is fine as it is - there is fairly universal agreement that it is broken. I'm far from persuaded that the best solution is to try and adopt a European model though. There's plenty of research from libertarian think tanks that provides empirical evidence in favour of health care reforms that reduce government involvement in healthcare rather than increase it. I'm sure there are grounds for questioning some of that researc
Primarily the first kind you describe.

It is, but it is unfortunately limited to examining the results of policies already implemented; anyone justifying novel solutions based on such evidence is likely veering more into the territory of what we agree is junk, and probably making very dubious assumptions about the ability to extrapolate trends.

Of course, but for all their imperfections they are widely recognized as better than what the USA currently does and win out on almost every objective metric. I also seem to recall that the UK and Canada are often considered some of the worst other than the USA, though at least they spend less than half as much as the US does. Socialized health care, like democracy, can generally be categorized under "the worst system, except for all the others that have been tried".

On the basis of what observations? To my knowledge all other developed nations employ some form of socialized health care. As it stands now, the USA is the biggest outlier and getting the worst results; an obvious case for applying a little majoritarianism.

Because it's not a new idea. Everyone else has helpfully tried it out for us already and found that it basically works. On the other hand, aggressively de-regulating and getting government out of health care is, as far as I know, completely untried and untested.
The superiority of the Canadian and British models is not uncontroversial. This policy analysis rebuts many of the arguments, for example. It includes numerous objective metrics on which the US does better than Canada or the UK. Debate the claims if you want, but don't pretend the issue is settled.

There is a lot of variation between nations. Many nations that have some form of 'socialized' health care also have significant amounts of private health care. Many countries have introduced market-based reforms within their socialized health care systems in an effort to improve efficiency. Health tourism is increasingly common in Europe. 'It' is not a single idea or system.

Ultimately the biggest problem with healthcare reform in the US is that there is very little chance that it will adopt the best practices found in other nations. There are too many powerful special interests for the political process to select policies based on effectiveness. The more government involvement there is in healthcare, the more healthcare will be subject to problems of regulatory capture, special interest lobbying, rent seeking and bureaucracy. De-regulation and reduced government involvement create incentives for serving patient interests as a primary route to success. Increased regulation and government involvement mean that it becomes more and more profitable to focus effort on lobbying, gaming the system and political maneuvering rather than on patient care.
I don't have the time to respond properly to the linked PDF, but skimming it quickly it doesn't seem particularly persuasive. Obviously, the argument isn't "settled" because people still argue about it, but that doesn't mean both sides have arguments of equal strength. At any rate, I'm really going to have to drop the discussion at this point because I don't have the time to go digging up supporting references. If you seriously think that the US system is of comparable quality to European systems our difference of perspective is far too vast to bridge by simple off-the-cuff arguments here. Thanks for your time, though, this has been enjoyable.
The linked document contains a number of objective metrics on which the US does better - waiting times, use of high tech surgical procedures, access to high tech diagnostic equipment, breast and prostate cancer mortality ratios, specialist to patient ratios and patient satisfaction measures. I linked it as evidence to rebut the specific claim that the US is worse on 'almost every objective metric'. I'm not asking you to make a detailed rebuttal, I'm just providing evidence of objective measures on which the US does better. I don't have time for a detailed debate either. You seem to assign an extremely high probability to the belief that healthcare in the US could be significantly improved by adopting a European system though and I'm questioning whether the evidence justifies such high confidence. Ultimately the only reason this is even a political issue is because of the high levels of government involvement. With less government involvement people could spend their healthcare dollars in the way they thought best without having to persuade anyone else. That's another reason I oppose high levels of government involvement.
Paul Crowley (15y):
This is by far the most useful discussion of the subject I have ever had. I'm starting to think this rationality stuff might actually work out.
Now the question is, does advocating one regulatory scheme make other regulatory schemes more likely, through some habit-forming mechanism or other? If so, then your version of a libertarian (who thinks the average scheme is bad) should sometimes oppose even good schemes, and your version of a socialist (who thinks the average scheme is good) should sometimes support even bad schemes. In building bridges between left and right, it's always a good idea to offer analogies between money and sex, so consider this: utilitarians generally oppose governments telling people whom to mate with and whom not to mate with, even in cases where these people will predictably make decisions that make them and others unhappy, because utilitarians think the good this would do is outweighed by the value of having a bright-line taboo against the government meddling in that sphere. It's less obvious to me than it is to you that the economy as a whole, or some particular circumscribed aspect of it, isn't also such a sphere, at least in part. Since there's no good that could possibly come from us talking about this other than low-quality thinking and writing practice, I'm putting this sentence here to make myself look like an idiot in case I fail to resist the temptation to post about politics again anytime soon.
I see little value to discussion with either. Given the fundamental problems with theism (primarily a lack of empiricism), I can reasonably expect that "better" theist arguments will only be more elaborate and rigorous argumentation from the same broken axioms. Unless I were preparing for a formal, public debate with a theist, it wouldn't be worth my time.

The phrase "policy libertarianism" gets only a couple thousand Google hits, many of which are false positives. Eliminating the most common false positive (the phrase "foreign policy, libertarianism"), your own LW comment comes up on the first page of results. The remaining results seem to mostly concern matters of evolutionary vs. revolutionary change as a means of implementing libertarian goals, not arguments for goals, and bear no obvious relation to what you've mentioned. If you're referring to a major school of libertarian thought, I'm assuming there's another, more popular term for it, but I don't know how to figure out what it would be, sorry.

This point is not under dispute, but it also does not suffice to prove that governmental action is therefore less effective, especially given imperfect markets, other problematic incentives for smaller agents (e.g., problems of collective action), and empirical evidence showing that governmental programs can sometimes lead to better results than non-governmental programs (e.g., European vs. USA health care).

Some form of welfare-maximization, yes. Various quality-of-life metrics are a (rough) approximation. I'm not sure what else you're getting at here.

I should say again, it is likely that we agree on the vast majority of actual conclusions. My complaint with libertarianism as a political philosophy is that (like most other political philosophies) it has no apparent, consistent empirical basis and resorts frequently to bottom-line arguments, even though many of their conclusions can be justified rigorously. I am convinced of this in large part because most liberta
It's a fairly recently coined term. The first use I'm aware of is here. The distinction between policy and structural libertarianism has been picked up quite quickly as many have found it useful.
In that case I remain confused in that it seems to mostly refer to a framework for how to achieve libertarian goals, not for justifying that said goals will successfully confer the advertised benefits (e.g., some type of welfare maximization or general utility).
The post is on a libertarian blog and as such is aimed at an audience who already accept the case for libertarian goals being desirable. It's making a distinction between libertarians who believe that the best way to achieve their goals is to work within existing democratic systems to promote libertarian policies, and those who believe that existing democratic systems are fundamentally inhospitable to libertarian policies and that achieving libertarianism requires addressing structural factors that tend to produce unlibertarian societies. The 'policy libertarians' tend to be the ones focusing on demonstrating empirical support for improved outcomes under libertarian policies. The idea being that it may be possible to get more libertarian policies implemented by appealing to empirical evidence for their efficacy on a case-by-case basis, rather than by trying to convince people that libertarianism is the 'one true way'. That would seem to be precisely the kind of approach you prefer.
That clarifies the relevance, thank you.
He used the term "ideology."
Are you being disingenuous here, or do you really think those connotationally equivalent?
Some people use 'ideology' more broadly. Others use it exactly as they would 'ideologue'. It's pretty clear from taw's later comment that he meant it as ideologue. I responded to the short comment rather than the long comment because it merely insinuates.
That does not seem clear to me. Are you certain you aren't reading too much into it? Assume good faith, as Wikipedia would say.
Indeed. "classical liberal" is the only way I use "liberal", though I'll only use the term at all if I'm actually discussing political philosophy. Also, one's political philosophy is not necessarily isomorphic to one's ethics. The questions "Should I be a libertarian", "How should we arrange political institutions", and "How should I feel about other people telling me what to do" are all ethical questions, but their answers are far more complex than finding something that 'matches' one's ethics.

IQs (warning: self-reported numbers for notoriously hard-to-measure statistic)

Yeah, I'm extremely skeptical of the IQ data. Assuming a standard mean=100 SD=15 test (although at least one respondent says he took a test with SD=24), our reputed median is above the 0.003th percentile. I don't think any public blog is that elite.

ERRATUM: Oh, dear. I meant 99.7th percentile.

Eliezer Yudkowsky (15y):
I'm skeptical of the IQ data because of the number of IQs above 140. Most IQ tests don't measure well above IQ 140, and so even if we have that many truly exceptional people, I would not expect it to show up in their measured IQs.
But if so many lied, that would also be a surprising fact; it doesn't seem to be a better explanation.

It's only a little more surprising than somebody at an online forum for bodybuilders lying about how much they can bench press.

I take it the reason it's not equally surprising is that few bodybuilders are as monomaniacally obsessed with The Truth as we are?
Most human beings in any forum, anywhere, will be more obsessed with signaling and other concerns than The Truth -- even in a pseudo-anonymous survey -- and will be subject to most of the standard cognitive biases that bodybuilders will be, even if to a lesser degree. Being obsessed with The Truth does not mean never lying or exaggerating (or reporting just that one internet IQ test you took that was 1 std dev higher than your real-world test).
If a lot of people actually got scores outside the calibration range of whatever IQ test they took, they could have answered honestly and the resulting numbers still be as bogus as Eliezer suggests.
We had similar data on the survey I ran (which I still need to write up the results of). I don't know that the numbers past 140 are intelligence-indicative, but I suspect people really did get their reported scores on IQ tests.
Also, in the responses to my survey, people who said they were from the USA were no more or less likely than people who said they weren't to report scores over 140. Which argues against regional variation in what IQ tests mean. Although I don't know how consistent the meaning is of IQ tests within the USA; anyone have knowledge, here?
Did you ever write up your results? They would make a valuable addition to the historical data.
The person who administered my test told me it was inaccurate above 150, and then told me my result was high enough to be somewhere in the inaccurate range, so I explicitly mentioned that it was an "at least" figure.
If we were to assume a test with a standard deviation of 24, a median of 141.5 would be just below the 96th percentile. That still seems too high for the median user, but it's almost plausible -- much more so than 99.7th percentile. It's also quite likely that LW readers with abnormally high IQs (relative to LW) are (A) much more likely to have been tested and to know (and remember) the result, and (B) include the score on the survey.
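The percentile arithmetic in this subthread is easy to check under the assumption of normally distributed scores; both standard deviations used below (15 and 24) are taken from the comments above:

```python
from math import erf, sqrt

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile of an IQ score, assuming a normal distribution.

    Uses the standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2.
    """
    z = (iq - mean) / sd
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))

# The reported median of 141.5 under the usual SD-15 convention:
print(round(iq_percentile(141.5, sd=15), 1))  # ~99.7th percentile
# The same score under an SD-24 test:
print(round(iq_percentile(141.5, sd=24), 1))  # just below the 96th percentile
```

This reproduces both figures quoted in the thread: roughly the 99.7th percentile at SD 15, and just under the 96th at SD 24.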
It doesn't strike me as all that implausible, given how many other indicators of quirkiness we have as a group (e.g., the 95-97% male, the 12% with PhDs (and 23% of members over 35 with PhDs), the portion with advanced math/compsci skill, etc.).
Math is not my strong suit, but my arithmetic comes out differently on the PhD bit. Are you counting as PhDs the people who have "student" in the "degree status" field?
I was working off the 233 people who filled out my earlier survey. I haven't analyzed Yvain's data; what percentage do you get there?
I didn't write it down and I don't want to count them up again, sorry.

Person who put "2172", you probably thought you were screwing up the results, but in fact you managed to counterbalance the other person who put "1700", allowing the mean to revert to within one year of the correct value :P

Not to worry - I am a believer in the wisdom of crowds, so I knew full well that I wasn't going to be screwing up anything. That response was pure noise.

I just don't like guessing, and so I put "0%" for my confidence on that question, so that one of my answers was definitely wrong and the other was definitely right.

I believe in the wisdom of crowds, but I also think that your actions were screwing up the results. If you weren't going to take a question seriously, I wish you hadn't answered it at all. ADDED: I decided not to downvote you because I don't want to discourage being honest/forthcoming.
0% confidence should mean zero weight when computing the results, no?
Yes, but what was the point of that survey question? Among other things, it could assess (a) the distribution of the survey takers' accuracy, (b) the distribution of the survey takers' calibration, and (c) the relationship of accuracy and calibration to other personal characteristics. I don't mean to make an overly big deal about this, and I appreciate thomblake's other contributions to the LW community, but because he didn't really give us his best guess about when the lightbulb was invented, he reduced our ability to learn all these things.
That's an interesting idea, but I think Yvain just averaged the answers without regard to confidence.
This seems contradictory. Care to explain?
The "wisdom of crowds" would only apply if everyone is trying to actually get the answer right, and if the errors of incompetence are somewhat random. A large number of intentional pranksters (or one prankster who says "a googolplex") can predictably screw up the average by introducing large variance or acting in a non-random fashion.
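The zero-weight idea in this subthread can be sketched as a confidence-weighted mean (the numbers below are toy guesses, not actual survey data); an answer given at 0% confidence then drops out of the aggregate entirely:

```python
def weighted_mean(answers, confidences):
    """Mean of answers weighted by self-reported confidence (0.0 to 1.0).

    An answer given with confidence 0 contributes nothing to the result.
    """
    total_weight = sum(confidences)
    if total_weight == 0:
        raise ValueError("no answer carries any weight")
    return sum(a * c for a, c in zip(answers, confidences)) / total_weight

# Toy guesses for "what year was the lightbulb invented?":
years = [1879, 1870, 1885, 0]    # last answer is a deliberate non-guess...
conf  = [0.8, 0.5, 0.6, 0.0]     # ...submitted at 0% confidence
print(weighted_mean(years, conf))  # the 0%-confidence answer is ignored
```

An unweighted mean of the same answers would be dragged far off by the "0" entry, which is the prankster effect described above.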

There was a .453 correlation between this number and actual IQ; that is, 45% of the variance in how likely you thought you were to have a higher-than-average IQ could be explained by your actual IQ.

Correlation is r and percent of variance explained is r^2, so I think that should be 21% rather than 45%. There's also a typo where you say ".5 level" and presumably mean .05.
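The correction is just the distinction between r and r²; the arithmetic reproduces both figures quoted in this thread:

```python
r = 0.453                      # reported correlation between confidence and IQ
variance_explained = r ** 2    # coefficient of determination (r-squared)
print(round(100 * variance_explained))  # ~21%, not 45%
```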

Scott Alexander (15y):
Thanks. Edited out.
The "I'm listing the ones that are significant at the <.5 level" typo is still there.

Probability of some Creator God: 4.2, 0, 14.6. Probability of something supernatural existing: 4.1, 0, 12.8.

It looks like some of us have yet to overcome the conjunction fallacy.

Some people may believe in natural simulator/creator/gods or some other sort of natural god.

Or believe that a creator god created the universe but does not exist. Depending on your definition of "exist" this could be meaningful.

If you intend to hide something by shuffling the results, it's probably also a good idea to remove the "timestamp" column :-)

Scott Alexander (15y):
...right. Done.
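The fix described here — drop the order-revealing column, then shuffle the rows — is mechanical; a minimal sketch using Python's csv module (the column name and file paths are made up for illustration):

```python
import csv
import random

def anonymize(in_path, out_path, drop_column="Timestamp"):
    """Remove an order-revealing column and shuffle row order."""
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row.pop(drop_column, None)   # drop the column revealing submission order
    random.shuffle(rows)             # re-sort so row order carries no information
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```

Either step alone is insufficient: shuffling without dropping the timestamp (or vice versa) still lets readers reconstruct who answered when.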

Having a copy of the original survey would be nice.

Would whoever gave answers of "irrelevant" to the alien questions, among other peculiar responses, be willing to identify and explain himself?

Re: Probability of an average person cryonically frozen today being successfully revived: 22.3, 10, 26.2.

An enormous estimate, IMO - close to that given by the salesmen(!):

That's because cryonics salesmen are generally amateur rationalists who are actually trying to believe rationally and report their beliefs honestly.
I am more inclined to believe that they are a self-selected group - drawn from the section of the population with the most optimistic estimates of whether cryonics will work. Usually, "most optimistic" != "most realistic".
No contradiction between the two posts above. The second is, nonetheless, probably more useful in judging accuracy.
It's probably not mentioned enough that cryonics can be justified even if it looks like it probably won't work, as long as the probability is past some threshold.
From that document: "If all my best case figures are used, P(now) from the Warren Equation is 0.15, or a bit better than one chance in seven. This is my most optimistic scenario. The pessimistic scenario puts P at 0.0023, or less than one chance in 400."
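The spread between those best-case and pessimistic figures is what you'd expect from a multiplicative chain: overall success requires every stage to succeed, so the product falls fast even with moderately optimistic per-stage probabilities. (The stage names and numbers below are purely illustrative, not the actual Warren Equation inputs.)

```python
from math import prod

# Hypothetical per-stage success probabilities (illustrative only):
stages = {
    "preserved well enough at death": 0.7,
    "organization survives until revival is possible": 0.6,
    "revival technology is developed": 0.5,
    "you are actually revived": 0.6,
}
p_overall = prod(stages.values())
print(round(p_overall, 3))  # 0.7 * 0.6 * 0.5 * 0.6 = 0.126
```

Nudging each stage down even slightly multiplies through to a far smaller overall probability, which is how optimistic and pessimistic inputs end up two orders of magnitude apart.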

It's interesting that Yvain credited The Great Filter with the huge standard deviations seen in the existence of aliens question. I don't recall seeing any qualifier about conscious or intelligent beings. When in doubt, blame Cached Thoughts :)

Also, does observable mean within our future or past light cone?

Here is an anomalous finding I didn't expect: the higher a probability you assign to the truth of revealed religion, the less confident you are that your IQ is above average (even though no correlation between this religious belief and IQ was actually found). Significance is at the .025 level. I have two theories on this: first, that we've been telling religious people they're stupid for so long that it's finally starting to sink in :) Second, that most people here are not religious, and so the people who put a "high" probability for revealed re

…

It saddens me that so many people assigned extreme probabilities to propositions that may well be false. Makes me wonder what the entire OB/LW project has been for.

Uh... sorry you think so, why?

Lojban's a resident troll. Don't worry about him. ETA: I don't think standard troll warnings should be karma-worthy.
Thanks -- after posting that I did see some of their other comments and conclude that they weren't worth bothering with.

14 people (10.9%) didn't believe in morality.

I'd really like to know what these folks are thinking. Are they using 'morality' in the way Nietzsche did when he called himself an amoralist? Or do they really think there's nothing to the concepts of 'good/bad' and 'right/wrong'?

Supposing one wants to open a pickle jar, and one considers the acts of (a) twisting the top until it comes off, (b) smashing the jar with a hammer, and (c) cutting off one's own hand with a chainsaw, do these folks think (for instance) that (a) is no better than (c)?

I would guess that they would say that one can certainly have preferences, without there being anything worth calling "morality".

It's probably more of a statement about our jargon: most OB veterans are probably on board with the concept that "morality" should be used to generally talk about our goal systems and decision processes, and not as if it implied naive moral realism. I'd suspect that some of the 14 are relative newcomers who thought that the question was asking whether they accepted some form of moral realism or not. I'd also expect that some of them are veterans who simply disagreed that the term "morality" should be extended in the above fashion.
Someone can believe in an action being good or bad for a purpose without believing that there is any ultimate reason to choose one purpose over another. Once you've assumed very high-level goals, further discussion is about effectiveness rather than morality. Further, except for sub-goals, where goal X is required or useful for reaching goal Y, rationality doesn't have anything to say about "choosing" goals, which means you cannot rationally argue about morality with someone whose highest goal conflicts with your own.
But ethics doesn't just apply to these high-level goals. A utilitarian is committed to whatever action generates the most overall net utility - even when choosing how to (for instance) open a pickle jar. (of course, it's been rightly argued that even a true utilitarian might do best in fact to not consider the question while making the decision, due to the cost of considering the decision). If it turns out (b) results in more overall net utility than (a), then the utilitarian says (a) was the wrong thing to do. If someone nonetheless thinks one should (a) instead of (b) because one should choose the option that most effectively reaches one's goals without terrible side-effects, then that person would disagree with the utilitarian above about ethics. If you don't believe in ethics, then you have no grounds for disagreeing with the utilitarian.
See e.g. non-cognitivism and error theory.

Of the 102 people who cared about the ending to 3 Worlds Collide, 68 (66.6%) preferred to see the humans blow up Huygens, while 34 (33.3%) thought we'd be better off cooperating with the aliens and eating delicious babies.

I'm shocked. Are there any significant variations in the responses of babyeaters compared to freedom fighters to other questions?

Can I make a pro-babyeater argument?

Here is a dialogue between an imaginary six-year-old child named Dennis and myself.

Me: Hi Dennis, do you like broccoli?

Dennis: No, I hate it!

Me: But it's good for you, right?

Dennis: I don't care! It tastes awful!

Me: Would you like to like broccoli?

Dennis: No, I can't stand broccoli! That stuff is gross!

Me: What if I told you some magic words that would make it so that every piece of broccoli you ever ate would taste just like chocolate if you said them? Would you say the magic words?

Dennis: Well...

Me: You like chocolate, don't you?

Dennis: Yes, but...

Me: What?

Dennis: Your questions are too hard.

I think everyone has conflicts between their different wants. I want to do well in my classes, but I don't want to study. And yet I can't think of any conflicts between my metawants: If I could choose to like studying just as much as I like my favorite computer game, I would make that choice. The wants offered to the humans in the babyeaters story seem fairly sensible from a utilitarian perspective. They promote peace throughout the galaxy and mean lots of fun for everyone. What's not to like?

I wish someone would do a post on metawants. Personally I view them with deep suspicion.
What about metawants (a.k.a. second-order desire) do you want to see a post on?
Well, their ontological, epistemological, and ethical statuses, for three. Specifically, how it's possible to want X and simultaneously want to not want X (while remaining more or less sane/rational). Whether metawants have any special status when making utilitarian ethical calculations. That sort of thing. Even the history of thought on the subject (e.g. Buddhism, where the stated (and only?) metawant is to eliminate all first-order wants).
I'll see what I can do. There was a fair bit about second-order desire in my self-knowledge class and if people would be interested in a distillation/summary of it, I'll provide.
I get the argument, but I assign a high value to self-determination. Like Arthur Dent, I don't want my brain replaced (unless by choice), even if the new brain is programmed to be ok with being replaced. Which ending did you pick in Deus Ex 2? I felt guilty gunning down JC and his brother, but it seemed the least wrong (according to my preferences) thing to do.
A rather vacuous statement, no? Isn't it a funny* quirk of human nature that we have qualms about behaving immorally in a sufficiently realistic simulation, yet can hear cold numbers about enormous real disutility (genocides, natural disasters, etc.) and feel nothing? That's speaking for myself, incidentally, not casting aspersions on you. *(where by "funny" I mean "designed by a blind idiot god", naturally)
I don't think you're being very fair to your new brain. Do you? I haven't played Deus Ex 2, sorry.
As things that were good for us in the ancestral environment (fat and sugar) tend to taste good, and things that might be bad (suspect plants) taste yucky, Imaginary Dennis' reaction makes adaptive sense. Do you want to want to eat poison?