Open Thread: June 2010

by Morendil · 1 min read · 1st Jun 2010 · 663 comments

9

Open Threads
Personal Blog

To whom it may concern:

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)


Cleaning out my computer I found some old LW-related stuff I made for graphic editing practice. Now that we have a store and all, maybe someone here will find it useful:

You are magnificent.

(Alternate title for the LW tabloid — "The Rational Enquirer"?)

1Scott Alexander11yThat's....brilliant. I might have to do another one just for that title.
8pjeby11ySweet!
3cousin_it11yYep, it was probably the first rationalist joke ever that made me laugh.
0FourFire6yI didn't see that until right now, made me chuckle.
4fburnaby11yNearly killed me.
4Unnamed11yWe have a store? Where?
5arundelo11yRoko Mijic [http://lesswrong.com/user/Roko] has a Zazzle store [http://www.zazzle.co.uk/rmijic]. (See also. [http://lesswrong.com/lw/2a0/lesswrong_meetup_london_uk_20100606_1600/224g])
1gaffa11yTabloid 100% gold. Hanson slayed me.

Why is LessWrong not an Amazon affiliate? I recall buying at least one book due to it being mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it would number in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong, assuming a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), that would work out at $375/year.

IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 5*4 minutes (.com, .co.uk, .de, .fr), +20 for GeoIP database, +3-90 (wide range since coding often takes far longer than anticipated) to set up URL rewriting (and I'd be happy to code this) would give a 'worst case' scenario of $173 annualized returns per hour of work.

Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that Metafilter and StackOverflow do this, though sadly I could not find any information on the returns they see from this. So, is there any reason why nobody has done this, or did nobody just think of it/get around to it?
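
For what it's worth, the arithmetic above can be reconstructed explicitly (a sketch; the 15% referral commission is my assumption, since the comment never states the rate it used):

```python
# Rough expected-value estimate for LW as an Amazon affiliate.
active_users = 500          # conservative guess from the comment
buyers = active_users / 4   # assume 1/4 buy one book a year
mean_purchase = 20.00       # dollars; books here skew academic/expensive
commission = 0.15           # ASSUMED referral rate; not stated in the comment

annual_revenue = buyers * mean_purchase * commission
print(annual_revenue)       # 375.0

# Worst-case setup time from the comment: signups at 5*4 minutes,
# 20 min for a GeoIP database, up to 90 min for URL rewriting.
setup_hours = (5 * 4 + 20 + 90) / 60
print(round(annual_revenue / setup_hours))  # 173
```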

2Douglas_Knight11yFrom your link, a further link [http://blog.stackoverflow.com/2009/11/our-amazon-advertising-experiment/] doesn't make it sound great at SO - 2-4x the utter failure. But they are very positive about it because the cost of implementation was very low. Just top-level posts or no geolocating would be even cheaper. You may be amused (or something) by this search [http://www.google.com/search?q=site:lesswrong.com+amazon+affiliate]
5mattnewport11yA possibly relevant data point: I usually post any links to books I put online with my amazon affiliate link and in the last 3 months I've had around 25 clicks from links to books I believe I posted in Less Wrong comments and no conversions.

The entire world media seems to have had a mass rationality failure about the recent suicides at Foxconn. There have been 10 suicides there so far this year, at a company which employs more than 400,000 people. This is significantly lower than the base rate of suicide in China. However, everyone is up in arms about the 'rash', 'spate', 'wave'/whatever of suicides going on there.

When I first read the story I was reading a plausible explanation of what causes these suicides by a guy who's usually pretty on the ball. Partly due to the neatness of the explanation, it took me a while to realise that there was nothing to explain.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. It's even harder to achieve this when the fiction comes ready-packaged with a plausible explanation (especially one which fits neatly with your political views).

That's what I thought as well, until I read this post from "Fake Steve Jobs". Not the most reliable source, obviously, but he does seem to have a point:

But, see, arguments about national averages are a smokescreen. Sure, people kill themselves all the time. But the Foxconn people all work for the same company, in the same place, and they’re all doing it in the same way, and that way happens to be a gruesome, public way that makes a spectacle of their death. They’re not pill-takers or wrist-slitters or hangers. ... They’re jumpers. And jumpers, my friends, are a different breed. Ask any cop or shrink who deals with this stuff. Jumpers want to make a statement. Jumpers are trying to tell you something.

Now I'm not entirely sure of the details, but if it's true that all the suicides in the recent cluster consisted of jumping off the Foxconn factory roof, that does seem to be more significant than just 15 employees committing suicide in unrelated incidents. In fact, it seems like it might even be the case that there are a lot more suicides than the ones we've heard about, and the cluster of 15 are just those who've killed themselves via this particular, highly visible, me... (read more)

Suicide and methods of suicide are contagious, FWIW.

keyword = "werther effect"

7CannibalSmith11yhttp://en.wikipedia.org/wiki/Werther_effect [http://en.wikipedia.org/wiki/Werther_effect]
3wedrifid11yI was surprised when I read a statistical analysis on national death rates. Whenever there was a suicide by a particular method published in newspapers or on television, deaths of that form spiked in the following weeks. This is despite the copycat deaths often being called 'accidents' (examples included crashed cars and aeroplanes). Scary stuff (or very impressive statistics-fu).
1JoshuaZ11yYes, this is connected to the existence of suicide epidemics. The most famous example is the ongoing suicide epidemic over the last fifty years in Micronesia, where both the causes and methods of suicide have been the same (hanging). See for example this discussion [http://www.micsem.org/pubs/articles/suicide/frames/suifamilyfr.htm].
6Torben11yIf all the members of a cult committed suicide then the local rate is 100%. The most local rate that we so far know of is 15/400,000 which is 4x below baseline. If these 15 people worked at, say, the same plant of 1,000 workers you may have a point. But we don't know. At this point there is nothing to explain.
5kodos9611yFair enough - my example was poorly thought out in retrospect. But I don't think it's correct that there's nothing to explain. If it's true that all 15 committed suicide by the same method - a fairly rare method frequently used by people who are trying to make a public statement with their death - then there seems to be something needing to be explained. As Fake Steve Jobs points out later in the cited article, if 15 employees of Walmart committed suicide within the span of a few months, all of them by way of jumping off the roof of their Walmart, wouldn't you think that was odd? Don't you think that would be more significant, and more deserving of an explanation, than the same 15 Walmart employees committing suicide in a variety of locations, by a variety of different methods? I'm not committing to any particular explanation here (Douglas Knight's suggestion, for one, sounds like a plausible explanation which doesn't involve any wrongdoing on Foxconn's part), I'm just saying that I do think there's "something to explain".
2mattnewport11yThe first question that came to mind when I heard about this story was 'what's the base rate?'. I didn't investigate further but a quick mental estimate made me doubt that this represented a statistically significant increase above the base rate. It's disappointing yet unsurprising that few if any media reports even consider this point.
1Bo10201011yWasn't there a somewhat well-publicized "spate" of suicides at a large French telecom a while back? I remember the explanation being the same - the number observed was just about what you'd expect for an employer of that size. ETA: http://en.wikipedia.org/wiki/France_Telecom [http://en.wikipedia.org/wiki/France_Telecom]
3mattnewport11yEven if the suicide rate was somewhat higher than average it still doesn't necessarily tell you much. You should really be looking at the probability of that number of suicides occurring in some distinct subset of the population - given all the subsets of a population that you can identify, you will expect some to have higher suicide rates than the population as a whole. The relevant question is 'what is the probability that you would observe this number of suicides by chance in some randomly selected subset of this size?' Incidentally the rate appears to be below [http://www.ncbi.nlm.nih.gov/pubmed/10855511] that of Cambridge University students:
1gwern11yYes, this is my counter-counter-criticism as well. 'Sure, the overall China rate may be the same, but what's the suicide rate for young, employed workers employed by a technical company with bright prospects? I'll bet it's lower than the overall rate...'
2SilasBarta11yAgreed. Also, I think what got the suicides in China into the news was that the victims attributed their suicides specifically to some weird policy or rule the company adhered to. It could be that the "normal" suicides at the company are being ignored, and the ones being reported are the suicides on top of this, justifying the concern that this is abnormal.

Marginal Revolution linked to A Fine Theorem, which has summaries of papers in decision theory and other relevant econ, including the classic "agreeing to disagree" results. A paper linked there claims that the probability settled on by Aumann-agreers isn't necessarily the same one as the one they'd reach if they shared their information, which is something I'd been wondering about. In retrospect this seems obvious: if Mars and Venus only both appear in the sky when the apocalypse is near, and one agent sees Mars and the other sees Venus, then they conclude the apocalypse is near if they exchange info, but if the probabilities for Mars and Venus are symmetrical, then no matter how long they exchange probabilities they'll both conclude the other one probably saw the same planet they did. The same thing should happen in practice when two agents figure out different halves of a chain of reasoning. Do I have that right?

ETA: it seems, then, that if you're actually presented with a situation where you can communicate only by repeatedly sharing probabilities, you're better off just conveying all your info by using probabilities of 0 and 1 as Morse code or whatever.

ETA: the paper works out an example in section 4.

I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked, what is the probability that the sum of the dice is 9?

Now if one sees a 1 or 2, he knows the probability is zero. But let's suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1/6. Both players exchange this first estimate. Now curiously although they agree, it is not common knowledge that this value of 1/6 is their shared estimate. After hearing 1/6, they know that the other die is one of the four values 3-6. So actually the probability is calculated by each as 1/4, and this is now common knowledge (why?).

And of course this estimate of 1/4 is not what they would come up with if they shared their die values; they would get either 0 or 1.

Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.

Same setup as before, two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.

I will leave it as a puzzle for now in case someone wants to work it out, but it appears to me that in this case, they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of agreement where they nevertheless change their estimates - perhaps related to the phenomenon of "violent agreement" we often see.

Strange how this small change to the conditions gives such different results. But it's a good example of how agreement is inevitable.
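
The whole announcement protocol is small enough to simulate (a sketch; it assumes simultaneous announcements and that each announcement publicly eliminates any die value inconsistent with it):

```python
from fractions import Fraction

def prob(event, my_die, other_candidates):
    hits = sum(1 for o in other_candidates if event(my_die, o))
    return Fraction(hits, len(other_candidates))

def exchange(d1, d2, event):
    """Exchange estimates until they stop changing; return the history."""
    s1, s2 = set(range(1, 7)), set(range(1, 7))  # publicly possible die values
    history = []
    while True:
        p1, p2 = prob(event, d1, s2), prob(event, d2, s1)
        history.append((p1, p2))
        # Each announcement reveals which own-die values could have produced it.
        n1 = {a for a in s1 if prob(event, a, s2) == p1}
        n2 = {b for b in s2 if prob(event, b, s1) == p2}
        if (n1, n2) == (s1, s2):
            return history
        s1, s2 = n1, n2

sum_is_9 = lambda a, b: a + b == 9
sum_is_7_or_8 = lambda a, b: a + b in (7, 8)

print(exchange(4, 5, sum_is_9)[-1])       # (1/4, 1/4): agreement, not the pooled answer
print(exchange(4, 4, sum_is_7_or_8)[-1])  # (1, 1): the variation reaches certainty
```

Running it shows exactly the behaviour described above: in the 7-or-8 variant the players agree at every round yet keep revising their shared estimate until it hits 0 or 1.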

2Roko11yBut in reality, what happens when people try to aumann involves a different set of problems, such as status-signalling, especially the idea that updating toward someone else's probability is instinctively seen as giving them status.
1cousin_it11yThanks a lot for both links. I already understood common knowledge, but the paper is a very pleasing and thorough treatment of the topic.

I have debated my religion before, but ironically this looks like a bad place to make a stand because everyone's against me and there's a karma system.

Don't take the adversarial attitude: "taking a stand", "against me". This leads to a broken mode of thought. Just study the concepts that will allow you to cut through semantic stopsigns and decide for yourself. Taking advice on an efficient way to learn may help as well.

Observation: The May open thread, part 2, had very few posts in its last days, whereas this one has exploded within the first 24 hours of its opening. I know I deliberately withheld content from it, since once a thread is superseded by a new one, few people go back and look at the posts in the previous one. This predicts a slowing down of content in the open threads as the month draws to a close, and a sudden burst at the start of the next month, a distortion that is an artifact of the way we organise discussion. Does anybody else follow the same rule for their open thread postings? Is there something that should be done to solve this artificial throttling of discussion?

Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.

2Blueberry11yI would support that.
4Kaj_Sotala11yI don't post in the open threads much, but if I run into a good rationality quote I tend to wait until the next rationality quotes thread is opened unless the current one is less than a week or so old.

Amazingly, there really are domains in which socialism actually works. In the first half of the nineteenth century, the U.S. had privatized firefighting. It was horrible. After the American Civil War, firefighting was taken over by governments, and, astoundingly enough, things actually got better!

Is everyone missing the obvious subtext in the original article - that we already live in just such a world but the button is located not on the forehead but in the crotch?

Perhaps some people would give their button-pushing services away for free, to anyone who asked. Let's call those people generous, or as they would become known in this hypothetical world: crazy sluts.

4CronoDAS11yBut you can touch that button yourself...
5SilasBarta11yHow does that compare to when someone else touches your button with their button?
5CronoDAS11yI've never done that, so I don't know.
3RichardKennaway11yI see that subtext, but I also see a subtext of geeks blaming the obvious irrationality of everyone else for them not getting any, like, it's just poking a button, right?
3Blueberry11yExcept that sex, unlike the button in the story, doesn't always make people happy. Sometimes, for some people, it comes with complications that decrease net utility. (Also, it is possible to push your own button with sex.)
4mattnewport11ySure, but it's not my comparison - I'm just saying it appears to be the obvious subtext of the original article.
1Houshalter11yBut two poor, "lonely" people could just get together and push each other's buttons. That's the problem with this: any two people who can cooperate with each other can get the advantage. There was once an experiment to evolve different programs in a genetic algorithm that could play the prisoner's dilemma. I'm not sure exactly how it was organized, which would really make or break different strategies, but the result was a program which always cooperated except when the other wasn't, and it continued refusing to cooperate with the other until it believed they were "even".
1mattnewport11yAre you thinking of tit for tat [http://en.wikipedia.org/wiki/Tit_for_tat]? I'm not trying to argue for or against the comparison. Would you agree that the subtext exists in the original article or do you think I'm over-interpreting?
1bentarm11yNo, the subtext is definitely there in the original article. At least, I saw it immediately, as did most of the commenters:
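
For comparison, tit for tat itself, cooperate first and then copy the opponent's last move, is only a few lines (a sketch; the payoff matrix is the standard Axelrod tournament one):

```python
# Standard prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): loses one defection, then holds even
```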

I think my only other comment here has been "Hi." But, the webcomic SMBC has a treatment of the prisoner's dilemma today and I thought of you guys.

[-][anonymous]11y 12

This is not a site that devotes a whole lot of space to debating religion. People aren't getting mean so much as they're using shorthand. It can save time, for atheists, not to explain why they're atheists over and over. Hence the links. The sequences are a pretty good expression of why the majority around here is atheist. They're the expansion of the shorthand. If you're anything like me, reading them will probably move some of your mental furniture around; even if not, you'll talk the lingo better.

So I've started drafting the very beginnings of a business plan for a Less Wrong (book) store-ish type thingy. If anybody else is already working on something like this and is advanced enough that I should not spend my time on this mini-project, please reply to this comment or PM me. However, I would rather not be inundated with ideas as to how to operate such a store yet: I may make a Less Wrong post in the future to gather ideas. Thanks!

My theory of happiness.

In my experience, happy people tend to be more optimistic and more willing to take risks than sad people. This makes sense, because we tend to be more happy when things are generally going well for us: that is when we can afford to take risks. I speculate that the emotion of happiness has evolved for this very purpose, as a mechanism that regulates our risk aversion and makes us more willing to risk things when we have the resources to spare.

Incidentally, this would also explain why people falling in love tend to be intensely happy at first. In order to get and keep a mate, you need to be ready to take risks. Also, if happiness is correlated with resources, then being happy signals having lots of resources, increasing your prospective mate's chances of accepting you. [...]

I was previously talking with Will about the degree to which people's happiness might affect their tendency to lean towards negative or positive utilitarianism. We came to the conclusion that people who are naturally happy might favor positive utilitarianism, while naturally unhappy people might favor negative utilitarianism. If this theory of happiness is true, then that makes perfect sens

... (read more)
8Houshalter11yHow does this make sense exactly? A happy person, with more resources, would be better off not taking risks that could result in him losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results. If you told a rich person, jump off that cliff and I'll give you a million dollars, they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it as long as there was a chance they could survive. My idea of why people were happy wasn't a static value of how many resources they had, but a comparative value. A rich person thrown into poverty would be very unhappy, but the poor person might be happy.
7pjeby11yKaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff. An animal in a bad (but not-yet catastrophic) situation is better off exploiting available resources than scouting new ones, since in the EEA, any "bad" situation is likely to be temporary (winter, immediate presence of a predator, etc.) and it's better to ride out the situation. OTOH, when resources are widely available, exploring is more likely to be fruitful and worthwhile. The connection to happiness and risk-taking is more tenuous. I'd be interested in seeing the results of that experiment. But "rich" and "poor" are even more loosely correlated with the variables in question - there are unhappy "rich" people and unhappy "poor" people, after all. (In other words, this is all about internal, intuitive perceptions of resource availability, not rational assessments of actual resource availability.)
2RobinZ11yIf I were to wager a guess, the people who would accept the deal are those who feel they are in a catastrophic situation. Speaking of catastrophic situations, have you seen The Wages of Fear or any of the remakes? I've only seen Sorcerer [http://en.wikipedia.org/wiki/Sorcerer_(film)], but it was quite good. It's a rather more realistic situation than jumping off a cliff, but the structure is the same: a group of desperate people driving cases of nitroglycerin-sweating dynamite across rough terrain to get enough money that they can escape.
1Kaj_Sotala11yI was kind of thinking expected value. In principle, if you always go by expected value, in the long run you will end up maximizing your value. But this may not be the best move to make if you're low on resources, because with bad luck you'll run out of them and die even though you made the moves with the highest expected value. However, your objection does make sense and Eby's reformulation of my theory is probably the superior one, now that I think about it.
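
The explore/exploit tradeoff pjeby mentions has a standard algorithmic form; an epsilon-greedy bandit makes the analogy concrete (a sketch; tying the exploration rate to perceived resource availability is my illustration of his point, not an established model):

```python
import random

def choose(estimates, epsilon):
    """Epsilon-greedy: explore a random option with prob. epsilon, else exploit the best."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=estimates.__getitem__)

def exploration_rate(resources, scale=10.0):
    """Illustrative only: more perceived slack means more exploration."""
    return min(0.5, resources / scale * 0.5)

random.seed(0)
rich_picks = [choose([1.0, 0.5], exploration_rate(resources=8)) for _ in range(1000)]
poor_picks = [choose([1.0, 0.5], exploration_rate(resources=1)) for _ in range(1000)]
print(rich_picks.count(1) > poor_picks.count(1))  # True: the resource-rich agent explores more
```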
8Alexandros11yHi Kaj, I really liked the article. I had a relevant theory to explain the perceived difference of attitudes of north Europeans versus south Europeans. I guess you could call it a theory of unhappiness. Here goes: I take as granted that mildly depressed people tend to make more accurate depictions of reality [http://en.wikipedia.org/wiki/Depressive_realism], that north Europeans have higher incidence of depression and also much better functioning economies and democracies. Given a low resource environment, one needs to plan further, and make more rational projections of the future. If being on the depressive side makes one more introspective and thoughtful, then it would be conducive to having better long-term plans. In a sense, happiness could be greed-inducing, in a greedy algorithm sense. This more or less agrees with kaj's theory. OTOH, not-happiness would encourage long-term planning and even more co-operative behaviour. In the current environment, resources may not be scarce, but our world has become much more complex, actions having much deeper consequences than in the ancestral environment (Nassim Nicholas Taleb makes this point in Black Swan) therefore also needing better thought out courses of action. So northern Europeans have lucked out where their adaptation to climate has been useful for the current reality. If one sees corruption as a local-greedy behaviour as opposed to lawfulness as a global-cooperative behaviour, this would also explain why going closer to the equator you generally see an increase in corruption and also failures in democratic government. Taken further, it would imply that near-equator peoples are simply not well-adapted to democratic rule, which demands a certain limiting of short-term individual freedom for the longer-term common good, and a more distributed/localised form of governance would do much better. 
I think this (rambling) theory can more or less be pieced together with kaj's, adding long-term planning as a second dime
3Jayson_Virissimo11yIf any given instance of discrimination increases the degree of correspondence between your map and the territory, then there is no need for apology. Are these sorts of disclaimers really necessary here?
1RomanDavis11yRelevant to your interests: http://www.youtube.com/watch?v=A3oIiH7BLmg&feature=channel [http://www.youtube.com/watch?v=A3oIiH7BLmg&feature=channel]

Searle has some weird beliefs about consciousness. Here is his description of a "Fading Qualia" thought experiment, where your neurons are replaced, one by one, with electronics:

... as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, ‘‘We are holding up a red object in front of you; please tell us what you see.’’ You want to cry out, ‘‘I can’t see anything. I’m going totally blind.’’ But you hear your voice saying in a way that is completely out of your control, ‘‘I see a red object in front of me.’’

(J.R. Searle, The rediscovery of the mind, 1992, p. 66, quoted by Nick Bostrom here.)

This nightmarish passage made me really understand why the more imaginative people who do not subscribe to a computational theory of mind are afraid of uploading.

My main criticism of this story would be: What does Searle think is the physical manifestation of those panicked, helpless thoughts?

I don't have Searle's book, and may be missing some relevant context. Does Searle believe normal humans with unmodified brains can consciously affect their external behavior?

If yes, then there's a simple solution to this fear: do the experiment he describes, and then gradually return the test subject to his original, all-biological condition. Ask him to describe his experience. If he reports (now that he's free of non-biological computing substrate) that he actually lost his sight and then regained it, then we'll know Searle is right, and we won't upload. Nothing for Searle to fear.

But if, as I gather, Searle believes that our "consciousness" only experiences things and is never a cause of external behavior, then this is subject to the same criticism as Searle's support of zombies.

Namely: if Searle is right, then the reason he is giving us this warning isn't because he is conscious. Maybe in fact his consciousness is screaming inside his head, knowing that his thesis is false, but is unable to stop him from publishing his books. Maybe his consciousness is already blind, and has been blind from birth due to a rare developmental accident, and it doesn't know what words he types in his books at all. Why should we listen to him, if his words about conscious experience are not caused by conscious experience?

2torekp11ySearle thinks that consciousness does cause behavior. In the scary story, the normal cause of behavior is supplanted, causing the outward appearance of normality. Thus, it's not that consciousness doesn't affect things, but just that its effects can be mimicked. Nisan's criticism is devastating, and has the advantage of not requiring technological marvels to assess. I do like the elegance of your simple solution, though.
7Vladimir_M11yDavid Chalmers discusses this particular passage by Searle extensively in his paper "Absent Qualia, Fading Qualia, Dancing Qualia": http://consc.net/papers/qualia.html [http://consc.net/papers/qualia.html] He demonstrates very convincingly that Searle's view is incoherent except under the assumption of strong dualism, using an argument based on more or less the same basic idea as your objection.

http://www.kk.org/quantifiedself/2010/05/eric-boyd-and-his-haptic-compa.php

'Here is Eric Boyd's talk about the device he built called North Paw - a haptic compass anklet that continuously vibrates in the direction of North. It's a project of Sensebridge, a group of hackers that are trying to "make the invisible visible".'

The technology itself is pretty interesting; see also http://www.wired.com/wired/archive/15.04/esp.html

To the powers that be: Is there a way for the community to have some insight into the analytics of LW? That could range from periodic reports, to selective access, to open access. There may be a good reason why not, but I can't think of it. Beyond generic transparency brownie points, since we are a community interested in popularising the website, access to analytics may produce good, unforeseen insights. Also, authors would be able to see viewership of their articles, and related keyword searches, and so be better able to adapt their writing to the audience. For me, a downside of posting here instead of my own blog is the inability to access analytics. Obviously I still post here, but this is a downside that may not have to exist.

Chill with the downvotes, guys. Houshalter's new, looks to be participating well in other threads, and is just stating a belief for the first time.

Uh... thanks?

Occasionally someone will show up here and try to flame-bait us, not really arguing (or not responding to counterarguments) but just trying to provoke people with contrary opinions. (This is, after all, the Internet.) It's obvious from your other contributions that you're not doing that, but someone who'd only seen your two comments above might have wrongly assumed otherwise. I was explaining why the downvotes should be taken back, as it appears they were.

By the way, the mainstream view among Less Wrong readers is that any evidence we've seen for theism is far too weak to overcome the prior improbability of such a sneakily complex hypothesis (and that much of the evidence that we might expect from such a hypothesis is absent); but there are a few generally respected theists around here. The community norm on theism has more to do with how people conduct themselves in disputes than with the fact of disagreement— but you should be prepared for a lot of us to talk amongst ourselves as if atheism is a settled question,... (read more)

1Houshalter11yI recently found out that you can't downvote someone past zero, so that must be why they stopped :) I might just delete the post anyways. Ah well.
6orthonormal11yIt's considered poor form to delete a post or comment on LW, since it makes it impossible to tell what the replies were talking about. (Also, it doesn't restore the karma.) What's preferable, if one regrets a comment, is to edit it in a manner that keeps it clear what the original comment was, or to add a disclaimer. Here's one example [http://lesswrong.com/lw/1re/blame_theory/]— note that if cousin_it had just deleted the post, it would be more difficult to understand the comments on it. Or a fake example: should probably be edited to if the content is to be removed.
3JoshuaZ11yIt might be better to just spend some time reading the sequences. A lot of people here like myself disagree with the LW consensus views on a fair number of issues, but we have a careful enough understanding of what those consensus views are to know when to be explicit about what assumptions and what methods of reasoning we are using.

LW too focused on verbalizable rationality

This comment got me thinking about it. Of course LW, being a website, can only deal with verbalizable information (rationality). So what are we missing? Skillsets that are not verbalizable and have to be learned in other (practical) ways: interpersonal relationships being just one of many. I also think the emotional brain is part of it. There might be people here who are brilliant thinkers yet emotionally miserable because of their personal context or upbringing, and I think dealing with that would be important. I think a holistic approach is required. Eliezer had already suggested the idea of a rationality dojo. What do you think?

7Will_Newsome11yI've been talking to various people about the idea of a Rationality Foundation (working title) which might end up sponsoring or facilitating something like rationality dojos. Needless to say this idea is in its infancy.
2Morendil11yThe example of coding dojos [http://www.codingdojo.org/] for programmers might be relevant, and not just for the coincidence in metaphors.
5RomanDavis11yI'm a draftsman and it always struck me how absolutely terrible the English language is for talking about ludicrously simple visual concepts precisely. Words like parallel and perpendicular should be one syllable long. I wonder if there's a way to apply rationality/mathematical thinking beyond geometry and to the world of art.

New papers from Nick Bostrom's site.

2timtyler11y2nd one "ANTHROPIC SHADOW: OBSERVATION SELECTION EFFECTS AND HUMAN EXTINCTION RISKS" - is good reading.

This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.

Good quality government policy is an important issue to me (it's my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.

There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with: 1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.
2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks, a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the... (read more)

2xamdam11yReminded me of one of my favorite movie dialogues - from Sunshine. Context was actually physics, but the complexity of the situation and the time frame put the characters in the same situation as you with the Cabinet ministers. Capa: It's the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable. Kaneda: You have to come down on one side or the other. I need a decision. Capa: It's not a decision, it's a guess. It's like flipping a coin and asking me to decide whether it will be heads or tails. Kaneda: And? Capa: Heads... We harvested all Earth's resources to make this payload. This is humanity's last chance... our last, best chance... Searle's argument is sound. Two last chances are better than one. http://www.imdb.com/title/tt0448134/quotes?qt0386955 [http://www.imdb.com/title/tt0448134/quotes?qt0386955]
2James_K11yYes, that's a good example. There are times when a decision has to be made, and saying you don't know isn't very useful. Even if you have very little to go on, you still have to decide one way or the other.

Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvs.T, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.

Suppose I have ten people and a stick. The appropriate infinite... (read more)

DSvsT was not directly an argument for utilitarianism, it was an argument for tradeoffs and quantitative thinking and against any kind of rigid rules, sacred values, or qualitative thinking which prevents tradeoffs. For any two things, both of which have some nonzero value, there should be some point where you are willing to trade off one for the other - even if one seems wildly less important than the other (like dust specks compared to torture). Utilitarianism provides a specific answer for where that point is, but the DSvsT post didn't argue for the utilitarian answer, just that the point had to be at less than 3^^^3 dust specks. You would probably have to be convinced of utilitarianism as a theory before accepting its exact answer in this particular case.

The stick-hitting example doesn't challenge the claim about tradeoffs, since most people are willing to trade off one person getting hit multiple times with many people each getting hit once, with their choice depending on the numbers. In a stadium full of 100,000 people, for instance, it seems better for one person to get hit twice than for everyone to get hit once. Your alternative rule (maximin) doesn't allow some tradeoffs, so it leads to implausible conclusions in cases like this 100,000x1 vs. 1x2 example.
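As a quick sanity check, the stadium comparison above can be sketched in code. This is my own illustration, not the commenter's: it assumes a constant disutility of 1 per hit, and scores each outcome under both a total-disutility (utilitarian) rule and a maximin rule.

```python
# An "outcome" is a list of hit-counts, one entry per person.

def total_disutility(hits):
    # Utilitarian rule: sum of all hits (constant disutility per hit).
    return sum(hits)

def maximin_score(hits):
    # Maximin cares only about the worst-off person, so a *smaller*
    # maximum hit-count counts as a better outcome.
    return max(hits)

everyone_once = [1] * 100_000          # 100,000 people each hit once
one_person_twice = [2] + [0] * 99_999  # one person hit twice

# Utilitarian verdict: one person hit twice is vastly better (2 vs 100,000).
assert total_disutility(one_person_twice) < total_disutility(everyone_once)

# Maximin verdict: everyone-hit-once "wins" (worst-off person takes 1 hit
# instead of 2) - the implausible conclusion the comment points out.
assert maximin_score(everyone_once) < maximin_score(one_person_twice)
```

The assertions make the disagreement between the two rules explicit: maximin blocks the tradeoff no matter how lopsided the totals become.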

5[anonymous]11yI don't think maximising the minima is what you want. Suppose your choice is to hit one person 20 times, or five people 19 times each. Unless your intuition is different from mine, you'll prefer the first option.
4Nick_Tarleton11yI don't think you can justifiably expect to be able to tell your brain something this self-evidently unrealistic, and have it update its intuitions accordingly.
4Blueberry11yOh, and I'd love to hear what you mean about this.
3Blueberry11yThere's one difference, which is that the inequality of the distribution is much more apparent in your example, because one of the options distributes the pain perfectly evenly. If you value equality of distribution as worth more than one unit of pain, it makes sense to choose the equal distribution of pain. This is similar to economic discussions about policies that lead to greater wealth, but greater economic inequality.
2RomanDavis11yI think the point of Dust Specks Vs Torture was scope failure. Even allowing for some sort of "negative marginal utility" once you hit a wacky number 3^^^3, it doesn't matter. .000001 negative utility point multiplied by 3^^^3 is worse than anything, because 3^^^3 is wacky huge. For the stick example, I'd say it would have to depend on a lot of factors about human psychology and such, but I think I'd hit the one. Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people. I think your opinion basically is an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.
1NancyLebovitz11yI think you're mistaken about the marginal utility-- being hit again after you've already been injured (especially if you're hit on the same spot) is probably going to be worse than the first blow. Marginal disutility could plausibly work in the opposite direction from marginal utility. Each 10% of your money that you lose impacts your quality of life more. Each 10% of money that you gain impacts your quality of life less. There might be threshold effects for both, but I think the direction is right.
1RomanDavis11yI was thinking more along the lines of scope failure: If someone said you were going to be hit 11 times, would you really expect it to feel exactly 110% as bad as being hit ten times? But yes, from a traditional economics point of view, your post makes a hell of a lot more sense. Upvoted.
1Blueberry11yPart of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.
1snarles11yI'd analyze your question this way. Ask any one of the ten people which they would prefer: A) to get hit B) to have a 1/10th chance of getting hit 9 times. Assuming rationality and constant disutility of getting hit, every one of them would choose B.
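The expected-value comparison in that comment can be written out in two lines (again assuming the constant disutility per hit that the comment itself stipulates):

```python
# Option A: get hit once, for certain.
# Option B: 1/10 chance of getting hit 9 times.
hits_option_a = 1
hits_option_b = (1 / 10) * 9  # expected hits under the lottery = 0.9

print(hits_option_a, hits_option_b)  # prints: 1 0.9
```

Since 0.9 < 1, anyone who cares only about expected hits prefers B, which is the comment's point.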

I agree about Jaynes and the exactness of Bayesian inference. (I haven't read his Probability Theory fully, but I should definitely get to it sometime. I did get through the opening chapters, however, and it's indeed mighty convincing.) Yet, I honestly don't see how either Jaynes or your comments answer my question in full, though I see no significant disagreement with what you've written. Let me try rephrasing my question once more.

In natural sciences, when you characterize some quantity with a number, this number must make sense in some empirical way, te... (read more)

0Morendil11yThat question, interesting as it is, is above my pay grade; I'm happy enough when I get the equations to line up the right way. I'll let others tackle it if so inclined.

I have a theory: Super-smart people don't exist, it's all due to selection bias.

It's easy to think someone is extremely smart if you've only seen the sample of their most insightful thinking. But every time that happened to me, and I found that such a promising person had a blog or something like that, it universally took very little time to find something terribly brain-hurtful they've written there.

So the null hypothesis is: there's a large population of fairly-smart-but-nothing-special people, who think and publish their thought a lot. Because the best ... (read more)

[-][anonymous]11y 17

I was thinking something similar just today:

Some people think out loud. Some people don't. Smart people who think out loud are perceived as "witty" or "clever." You learn a lot from being around them; you can even imitate them a little bit. They're a lot of fun. Smart people who don't think out loud are perceived as "geniuses." You only ever see the finished product, never their thought processes. Everything they produce is handed down complete as if from God. They seem dumber than they are when they're quiet, and smarter than they are when you see their work, because you have no window into the way they think.

In my experience, there are far more people who don't think out loud in math than in less quantitative fields. This may be part of why math is perceived as so hard; there are all these smart people who are hard to learn from, because they only reveal the finished product and not the rough draft. Rough drafts make things look feasible. Regular smart people look like geniuses if they leave no rough drafts. There may really be people who don't need rough drafts in the way that we mundanes do -- I've heard of historical figures like that, and those really are savants -- but it's possible that some people's "genius" is overstated just because they're cagey about expressing half-formed ideas.

You may be right about math. Reading the Polymath research threads (like this one) made me aware that even Terry Tao thinks in small and well-understood steps that are just slightly better informed than those of the average mathematician.

3NancyLebovitz11yI Am a Strange Loop [http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/0465030793/ref=sr_1_1?ie=UTF8&s=books&qid=1275562767&sr=1-1] by Hofstadter may be of interest-- it's got a lot about how he thinks as well as his conclusions.
7snarles11yI'm not a psychologist but I thought I could improve on the vagueness of the original discussion. There are a few factors which determine "smartness" (or potential for success): 1. Speed. Having faster hardware. 2. Pattern Recognition. Being better at "chunking". 3. Memory. 4. Creativity. (="divergent" thinking.) 5. Detail-awareness. 6. Experience. Having incorporated many routines into the subconscious thanks to extensive practice. 7. Knowledge. (Quality is more important than quantity.) The first five traits might be considered part of someone's "talent." Experience and knowledge, which I'll group together as "training", must be gained through hard work. Potential for success is determined by a geometric (rather than additive) combination of talent and training: that is, roughly, potential for success = talent * training All this math, of course, is not remotely intended to be taken at face value, but it's merely the most efficient way to make my point. The "super-smart" start life with more talent than average. The rule of the bell curve holds, so they generally do not have an overwhelming cognitive advantage over the average person. But they have enough talent to justify investing much more of their resources into training. This is because a person with 15 talent will gain 15 success for every unit of time they put into training, while a unit of training is worth 17 success for a person with 17 talent. The less time you have to spend, the more time costs, so all other things being equal, the person with more talent will put more time into training. Suppose the person with 15 talent puts 100 units of time into training, and the person with 17 talent puts 110 units of time into training. Then: person with 15 talent * 100 training => 1,500 success; person with 17 talent * 110 training => 1,870 success. Which is about 25% more success for only 13% more talent. There's probably some more
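The multiplicative model in that comment (which the commenter stresses is illustrative, not literal) works out as follows:

```python
# Toy talent-x-training model: success is the product, not the sum.
def success(talent, training):
    return talent * training

a = success(talent=15, training=100)
b = success(talent=17, training=110)

print(a, b)                            # prints: 1500 1870
print(f"{b / a - 1:.0%} more success") # prints: 25% more success
print(f"{17 / 15 - 1:.0%} more talent") # prints: 13% more talent
```

The multiplication is doing the work: a modest talent edge, compounded by the extra training it makes worthwhile, yields a disproportionate gap in output.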
6NancyLebovitz11yIf you're interpreting "super-smart" to mean always right, or at least reasonable, and thus never severely wrong-headed, I think you're correct that no one like that exists, but it seems like a rather comic bookish idea of super-smartness. Also, I have no idea how good your judgment is about whether what you call brain-hurtful is actually ideas I'd think were egregiously wrong. I think there are a lot of folks smart enough to be special people-- those who come up with worthwhile insights frequently. And even if it's just a matter of generating lots of ideas and then publishing the best, recognizing the best is a worthwhile skill. It's conceivable that idea-generation and idea-recognizing are done by two people who together give the impression of one person who's smarter than either of them.
2dyokomizo11yHow would you describe the writing patterns of super-smart people? Similarly, how would meeting/talking/debating them would feel like?
4taw11yI think my comment was rather vague, and people aren't sure what I meant. This is all my impressions; as far as I can tell, the evidence for all that is rather underwhelming; I'm writing this more to explain my thought than to "prove" anything. It seems to me that people come in different levels of smartness. There are some people with all sorts of problems that make them incapable of even the human normal, but let's ignore them entirely here. Then, there are normal people who are pretty much incapable of original highly insightful thought, critical thinking, rationality etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise, and that's about it. They often make the most basic logic mistakes etc. Then there are "smart" people who are capable of original insight, and don't get too stupid too often. They're not measuring exactly the same thing, but IQ tests are capable of distinguishing between those and the normal people reasonably well. With smart people, both their top performance and their average performance are a lot better than with average people. In spite of that, all of them very often fail basic rationality in some particular domains they feel too strongly about. Now I'm conflicted about whether people who are as far above "smart" as "smart" is above normal really exist. A canonical example of such a person would be Feynman - from my limited information he seems to be just so ridiculously smart. Eliezer seems to believe Einstein is like that, but I have even less information about him. You can probably think of a few such other people. Unfortunately there's a second observation - there's no reason to believe such people existed only in the past, or would have an aversion to blogging - so if super-smart people exist, it's fairly certain that some blogs of such people exist. And if such blogs existed, I would expect to have found a few by now. And yet, every time it seemed to me that someone might just be that smart and I start
6cousin_it11yA few people who blog frequently and fit my criteria for "super-smart": Terence Tao [http://terrytao.wordpress.com/], Cosma Shalizi [http://cscs.umich.edu/~crshalizi/weblog/], John Baez [http://math.ucr.edu/home/baez/TWF.html].
3Risto_Saarelma11yI was thinking of Tao as well. Also, Oleg Kiselyov [http://okmij.org/ftp/] for programming/computer science.
0cousin_it11yYep, seconding the recommendation of Oleg. I read a lot of his writings and I'd definitely have included him on the list.
0cupholder11yInteresting picks. I hadn't thought of Cosma Shalizi as 'super-smart' before, just erudite and with a better memory for the books and papers he's read than me. Will have to think about that...
5CronoDAS11yI think you're giving the "normal person" too little credit.
4NancyLebovitz11yAgreed. If nothing else, refugee situations aren't that uncommon in human history, and the majority are able to migrate and adapt if they're physically permitted to do so.
4dyokomizo11yIt doesn't seem to me that you have an accurate description of what a super-smart person would do/say other than match your beliefs and providing insightful thought. For example, do you expect super-smart people to be proficient in most areas of knowledge or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.
1taw11yI don't know what the correct super-smartness cluster is, so I cannot make an objective, predictive definition, at least yet. There's no need to suffer from physics envy here - a lot of useful knowledge has this kind of vagueness. Nobody has managed to define "pornography" yet, and it's a far easier concept than "super-smartness". This kind of speculation might end up with something useful with some luck (or not). Even defining by example would be difficult. My canonical examples would be Feynman and Einstein - they seem far smarter than the "normally smart" people. Let's say I collected a sufficiently large sample of "people who seem super-smart", got as accurate information about them as possible, and did a proper comparison between them and a background of normally smart people (it's pretty easy to get good data on those, even by generic proxies like education - so I'm least worried about that) in a way that would be robust against even a large number of data errors. That's about the best I can think of. Unfortunately it will be of no use, as my sample will not be random super-smart people but those super-smart people who are also sufficiently famous for me to know about them and be aware of their super-smartness. This isn't what I want to measure at all. And I cannot think of any reasonable way to separate these. So the project is most likely doomed. It was interesting to think about this anyway.
3Mitchell_Porter11yWhy would they blog? They would already know that most people have nothing of interest to tell them; and if they want to tell other people something, they can do it through other channels. If such a person had a blog, it might be for a very narrow reason, and they would simply refrain from talking about matters guaranteed to produce nothing but time-consuming stupidity in response.
3JoshuaZ11yI'm not sure that the ability to have original thoughts is at all closely connected to the ability to think rationally. What makes you reach that conclusion? Have you tried looking at Terence Tao's blog? I think he fits your model, but it may be that many of his posts will be too technical for a non-mathematician. I'm not sure in general if blogging is a good medium for actually finding this sort of thing. It is easy to see if a blogger isn't very smart. it isn't clear to me that it is a medium that allows one to easily tell if someone is very smart.
2xamdam11yI doubt your disproof of super-smart people, for the very same reasons you do, perhaps with a greater weight assigned to those reasons. I am also not sure about your definition of super-smart. Is idiot savant (in math, say) super-smart? If you mean super-smart=consistently rational, I suspect nothing prevents people of normal-smart IQ from scoring (super) well there, trading off quantity of ideas for quality. There is a ceiling there as good ideas get more complex and require more processing power, but I suspect given how crazy this world is Norm Smart the Rationalist can score surprisingly highly on relative basis. As a data point you might want to look at "Monster Minds" chapter of Feynman's "Surely you're joking". Since you mentioned Feynman. The chapter is about Einstein. Finally, where is your blog? ;)
1taw11yMy blog is here [http://t-a-w.blogspot.com/].
3Vladimir_Nesov11yYou can set that in "preferences".
1cupholder11yReminds me of 'My Childhood Role Model [http://lesswrong.com/lw/ql/my_childhood_role_model/]'. As for the actual meat of your comment, I don't have much to add. 'Smart' is a slippery enough word that I'd guess one's belief in 'super-smart people' depends on how one defines 'smart.'
0DanielVarga11yThere is an important systematic bias you only tangentially mention in your analysis. Super-smart people (more generally, very successful people) don't feel they have to prove themselves all the time. (Especially if they are tenured. :) ) Many of them like to talk before they think. There are very smart people around them who quickly spot the obvious mistakes and laboriously complete the half-baked ideas. It is just more economic this way.
0Jack11yHave you never had an in-person conversation with a super-smart person? Also, hi folks, I'm back. It is surprisingly difficult to dive back into LW after leaving it for a few weeks.
0taw11yObviously no, as I don't believe in their existence.
2Jack11yMy point is that I have trouble telling the difference between a fairly-smart and super-smart person by their writing for exactly the reason you mentioned. But in-person conversations give you access to the raw material and, if I take myself to be fairly smart there are definitely super-smart people out there. For example, I imagine if you had got to talking to Richard Feynman while he was alive you would have quickly realized he was a super-smart person.
5JoshuaZ11yI'm not sure about this. I have a lot of trouble distinguishing between just smart, super-smart, and smart-and-an-expert-in-their-field. Distinguishing them seems to not occur easily simply based on quick interactions. I can distinguish people in my own field to some extent, but if it isn't my own area, it is much more difficult. Worse, there are serious cognitive biases about intelligence estimations. People are more likely to think of someone as smart if they share interests and also more likely to think of someone as smart if they agree on issues. (Actually I don't have a citation for this one and a quick Google search doesn't turn it up, does someone else maybe have a citation for this?) One could imagine that many people might if meeting a near copy of themselves conclude that the copy was a genius. That said, I'm pretty sure that there are at least a few people out there who reasonably do qualify as super-smart. But to some extent, that's based more on their myriad accomplishments than any personal interaction.
1taw11yI'd guess it's far far easier to fool someone in person with all the noise of primate social clues, so such information is worth a lot less than writing.

Per my upcoming "Explain Yourself!" article, I am skeptical about the concept of "tacit knowledge". For one thing, it puts up a sign that says, "Hey, don't bother trying to explain this in words", which leads to, "This is a black box; don't look inside", which leads to "It's okay not to know how this works".

Second, tacit knowledge often turns out to be verbalizable, questioning whether the term "tacit" is really calling out a valid cluster in thingspace[1]. For example, take the canonical exampl... (read more)

5Morendil11yAs someone who has made much of the concept of tacit knowledge in the past, I'll have to say you have a point. (I'm now considering the addendum: "made much of it because it served my interests to present some knowledge I claimed to have as being of that sort". I'm not necessarily endorsing that hypothesis, just acknowledging its plausibility.) It still feels as if, once we toss that phrase out the window, we need something to take its place: words are not universally an effective method of instruction [http://c2.com/cgi/wiki?TeachMeToSmoke], practice clearly plays a vital part in learning (why?), and the hypothesis that a learner reconstructs [http://en.wikipedia.org/wiki/Jean_Piaget] knowledge rather than being the recipient of a "transfer" in a literal sense strikes me as facially plausible given the sum of my learning experiences. Perhaps an adult can comprehend "as long as you keep moving, you won't tip over", but I have a strong intuition it wouldn't go over very well with kids, depending on age and dispositions. My parenting experience (anecdotal evidence as it may be) backs that up. You need to see what a kid is doing right or wrong to encourage the former and correct the latter, you need a hefty dose of patience as the kid's anxieties get in the way sometimes for a long while. Learning to ride a bike is a canonical example because it is taught early on, there is hedonic value in learning it early on, but it is typically taught at an age when a kid rarely (or so my hunch says) has the learning-ability to understand advice such as "as long as you keep moving, you won't tip over". There is such a thing as learning to learn (and just how verbalizable is that skill?). It's all too easy to overgeneralize from a sparse set of examples and obtain a simple, elegant, convincing, but false theory of learning. I hope your article doesn't fall into that trap. :)
2SilasBarta11yI don't disagree, but I don't see how it contradicts my position either. The evidence you give against words being effective is that, basically, they don't fully constrain what the other person is being told to do, so they can always mess up in unpredictable ways. That's true, but it just shows how you need to understand the listener's epistemic state to know which insights they lack that would allow them to bridge the gap. People do get this wrong, and end up giving "let them eat cake" advice -- advice that, if it were useful, would mean the problem had already been solved. But at the same time, a good understanding of where they are can lead to remarkably informative advice. (I've noticed Roko and HughRistik are excellent at this when it comes to human sociality, while some are stuck in "let them eat cake" land.) Well, in my case, once it clicked for me, my thought was, "Oh, so if you just keep moving, you won't tip over; it's only when you stop or slow down that you tip -- why didn't he just tell me that?" Well, if it were a sparse set I wouldn't be so confident. I have a frustratingly long history of people telling me something can't be explained or is really hard to explain, followed by me explaining it to newbies with relative ease. And of cases where someone appeals to their inarticulable personal experience for justification, when really it was an articulable hidden assumption they could have found with a little effort. Anyone is welcome to PM me for an advance draft of the article if they're interested in giving feedback.
1NancyLebovitz11yI'm in general agreement, but it leaves me wondering if you underestimate how much effort it takes to notice and express how to do things which are usually non-verbal.
3SilasBarta11yI don't understand. The part you quoted isn't about expressing how to do non-verbal things; it's about people who say, "when you get to be my age, you'll agree, [and no I can't explain what experiences you have as you approach my age that will cause you to agree because that would require a claim regarding how to interpret the experience which you have a chance of refuting]" What does that have to do with the effort need to express how to do non-verbal things?
4Tyrrell_McAllister11yI'm looking forward to your article, and I think that you're right to emphasize the vast gap between "unverbalizable" and "I don't know at the moment how to verbalize it". But, to really pass the "bicycle test", wouldn't you have to be able to explain verbally how to ride a bike so well that someone could get right on the bike and ride perfectly on the first try? That is, wouldn't you have to be able to eliminate even that "little practice on your own"? Or is there some part of being able to ride a bike that you don't count as knowledge, and which forms the ineliminable core that needs to be practiced?
2SilasBarta11yDepends on what the "bicycle test" is testing. For me, the fact that something is staked out as a canonical, grounding example of tacit knowledge, and then is shown to be largely verbalizable, blows a big hole in the concept. It shows that "hey, this part I can't explain" was groundless in several subcases. I do agree that some knowledge probably deserves to be called tacit. But given the apparent massive relativity of tacitness, and the above example, it seems that these cases are so rare, you're best off working from the assumption that nothing is tacit, than from looking for cases that you can plausibly claim are tacit. It's like any other case where one possibility should be considered last. If you do a random test on General Relativity and find it to be way off, you should first work from the assumption that you, rather than GR, made a mistake somewhere. Likewise, if your instinct is to label some of your knowledge as tacit, your first assumption should be, "there's some way I can open up this black box; what am I missing?". Yes, these beliefs could be wrong -- but you need a lot more evidence before rejecting them should even be on the radar. (And to be clear, I don't claim my thesis about tacitness to deserve the same odds as GR!)
1Morendil11yJust to be clear, I don't think it has been shown in the case of bike-riding that the knowledge can be transferred verbally. You can give someone verbal instruction that will help them improve faster at bike-riding, that isn't at issue. It's much less clear that telling someone the actual control algorithm you use when you ride a bike is sufficient to transform them from novice into proficient bike rider. You can program a robot to ride a bike [http://www.msnbc.msn.com/id/9594086/] and in that sense the knowledge is verbalizable, but looking at the source code would not necessarily be an effective method of learning how to do it.
1SilasBarta11yI think being able to verbally transmit the knowledge that solves most of the problem for them is proof that at least some of the skill can be transferred verbally. And of course it doesn't help to tell someone the detailed control algorithm to ride a bike, and I wouldn't recommend doing so as an explanation -- that's not the kind of information they need! One day, I think it will be possible to teach someone to ride a bike before they ever use one, or even carry out similar actions, though you might need a neural interface rather than spoken words to do so. The first step in such a quest is to abandon appeals to tacit knowledge, even if there are cases where it really does exist.

I have debated my religion before, but ironically this looks like a bad place to make a stand because everyone's against me and there's a karma system.

Awwwww, I'm not against you. I just think you're incorrect.

If you post on Less Wrong a lot, you'll eventually say something several posters will disagree with, and some of them will say so. Try not to interpret it as a personal attack - taking it personally makes it harder to rationally evaluate new arguments and evidence.

I wouldn't expect the karma system to be much of a problem, by the way. If I remember rightly, your karma can't go below 0, so you can continue posting comments even if it falls to zero.

Chill with the downvotes, guys. Houshalter's new, looks to be participating well in other threads, and is just stating a belief for the first time.

Houshalter, this is a tangent to the current... tangent. It might be better to discuss theism in its own Open Thread comment or within a past discussion on the topic.

On a related note, have you looked through the Mysterious Answers to Mysterious Questions sequence yet? Not to throw a short book's worth of stuff at you, but there's a lot of stuff taken for granted around here when discussing theism, the supernatural, and evidence for such.

The first two reasons only justify requiring that airlines carry liability insurance policies against the external damage that can be caused by their planes and injuries/deaths of passengers. Then, the insurer would specify what protocols airlines must follow before the insurer will offer an affordable policy. Passengers would not have to make such judgments in that case.

Remember to look for the third alternative!

I don't understand the point you're making in 3.

ETA: Actually, you know what? This has devolved into a political debate. Not cool. Can we... (read more)

What if there is evidence for God? Why do you assume there isn't?

Note that general Less Wrong consensus is that religion in almost all forms is very wrong. It is a safe operating assumption to work with on LW, in that you don't need to go through the logic every time to justify it. It probably isn't as safe a starting point as, say, the wrongness of a flat earth, or the wrongness of phlogiston, but it is pretty safe.

Incidentally, note that the evidence strongly suggests that actively taking out your aggression actually increases rather than decreases stress and aggression levels. See for example, Berkowitz's 1970 paper "Experimental investigation of hostility catharsis" in the Journal of Consulting and Clinical Psychology.

The Unreasonable Effectiveness of My Self-Exploration by Seth Roberts.

This is an overview of his self-experiments (to improve his mood and sleep, and to lose weight), with arguments that self-experimentation, especially on the brain, is remarkably effective in finding useful, implausible, low-cost improvements in quality of life, while institutional science is not.

There's a lot about status and science (it took Roberts 10 years to start getting results, and it's just too risky to careers for scientists to take on projects which last that long), and some int... (read more)

Thus: "Effective transfer of tacit knowledge generally requires extensive personal contact and trust. Another example of tacit knowledge is the ability to ride a bicycle."

How much personal contact and trust does it take to learn to ride a bicycle?

3RobinZ11yAs someone who learned cycling as a near-adult, the main insight is that you turn the wheel in the direction in which the bike is falling to push it back vertical. Once I had been told that negative-feedback mechanism, the only delay was until I got frustrated enough with going slowly to say, "heck with this 'rolling down a slight slope' game, I'm just going to turn the pedals." Whereupon I was genuinely riding the bicycle. ...for about a minute, until I got the bright idea of trying to jump the curb. Did you know that rubbing the knee off a pair of jeans will leave a streak of blue on concrete?
2Douglas_Knight11yWhat was your total time frame in learning to ride? Was there a period before you were told about turning the wheel?
1RobinZ11yI estimate the total time between donning the helmet and hitting the sidewalk was less than an hour - but it was probably a decade ago, so I don't trust my recollections.
1cousin_it11yHahaha, great catch. Though maybe they meant personal contact with a bicycle!

Well, my general approach is to think that we should continue political discussions as long as they are not indicating mind-killing.

If it's not there in your judgment then, I'll continue.

For example, I find your point about liability insurance to be very interesting, and not one I had thought about before. It is certainly worth thinking about, but even then, that's a different type of regulation, not a lack of regulation as a whole.

Yes, but it certainly makes a difference in how many choices and alternatives regulation chokes off. Even if you belie... (read more)

7ocr-fork11yI winced.
2Daniel_Burfoot11yI would like to see a top-level link post and discussion of this article (and maybe other related papers).
2cupholder11yI'm slightly tempted to, because that article is sloppy and unfocused enough that it annoys me, even though it's broadly accurate. (I mean, 'the standard statistical system for drawing conclusions is, in essence, illogical'? Really?) But I don't know what I'd have to add to it, really, other than basically whining 'it is so unfair!'

I've been reading the Quantum Mechanics sequence, and I have a question about Many-Worlds. My understanding of MWI and the rest of QM is pretty much limited to the LW sequence and a bit of Wikipedia, so I'm sure there will be no shortage of people here who have a better knowledge of it and can help me.

My question is this: why are the Born Probabilites a problem for MWI?

I'm sure it's a very difficult problem, I think I just fail to understand the implications of some step along the way. FWIW, my understanding of the Born Probabilities mainly clicks here:

I

... (read more)
[anonymous]11y 11

So... If a quantum event has a 30% chance of going LEFT and a 70% chance of going RIGHT... you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.

So why is this surprising?

The surprising (or confusing, mysterious, what have you) thing is that quantum theory doesn't talk about a 30% probability of LEFT and a 70% probability of RIGHT; what it talks about is how LEFT ends up with an "amplitude" of 0.548 and RIGHT with an "amplitude" of 0.837. We know that the observed probability ends up being the square of the absolute value of the amplitude, but we don't know why, or how this even makes sense as a law of physics.

3Spurlock11yAh. So it's not the idea that it's weighted so much as the specific act of squaring the amplitude. "Why squaring the amplitude, why not something else?". I suppose the way I had been reading, I thought that the problem came from expecting a different result given the squared amplitude probability thing, not from the thing itself. That is helpful, many thanks.
6Douglas_Knight11yThat's one issue, but as Warrigal said, the other issue is "how this even makes sense." it seems to say that the amplitude is a measure of how real [http://www.google.com/search?q="reality+fluid"+site:lesswrong.com] the configuration is.
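The rule itself is easy to state even though nobody knows why it holds; here is a minimal sketch (an editorial illustration, using the 0.548/0.837 amplitudes quoted above — nothing else in it comes from any comment):

```python
# Born rule: observed probability is the squared absolute value of the
# amplitude. abs() matters because amplitudes are complex in general.
amplitudes = {"LEFT": 0.548, "RIGHT": 0.837}

probs = {branch: abs(a) ** 2 for branch, a in amplitudes.items()}

assert abs(probs["LEFT"] - 0.30) < 0.01
assert abs(probs["RIGHT"] - 0.70) < 0.01
# Why the square of the amplitude, rather than some other function of it,
# is exactly the open question being discussed.
```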
1NancyLebovitz11yDelightful, and has a nice breakdown of the sort of questions to ask yourself (what exactly is the problem, how much precision is actually needed, what is the condition of the tools, etc.) if you want to get things done efficiently.

After more-or-less successfully avoiding it for most of LW's history, we've plunged headlong into mind-killer territory. I'm a little bit worried, and I'm intrigued to find out what long-time LWers, especially those who've been hesitant about venturing that direction, expect to see as a result over the next month or two.

It doesn't look encouraging. The discussions just don't converge, they meander all over the place and leave no crystalline residue of correct answers. (Achievement unlocked: Mixed Metaphor)

5simplicio11yIt is problematic but necessary, in my opinion. Politics IS the mind-killer, but politics DOES matter. Avoiding the topic would seem to be an admission that this rationality thing is really just a pretty toy. But it would be nice to lay down some ground-rules.
2mattnewport11yI don't think anyone has mentioned a political party or a specific current policy debate yet. That's when things really go downhill.
4khafra11yI think a current policy debate has potential for better results, since it would offer the potential for betting, and avoid some of the self-identification and loyalty that's hard to avoid when applying a model as simple as a political philosophy to something as complex as human culture.
1fburnaby11ySince we've had some discussion about additions/modifications to the site, and LW -- as I understand it -- was a originally a sort of spin-off from OB, maybe addition of a karma-based prediction market of some sort would be suitable (and very interesting).
1JoshuaZ11yMaybe make bets of karma? That might be very interesting. It would have less bite than monetary stakes, but highly risk averse individuals might be more willing to join the system.
2fburnaby11yI think having such a low-stakes game to play would be beneficial not only to highly risk-averse individuals, but to anyone. It would provide a useful training ground (maybe even a competitive ladder in a rationality dojo) for anyone who wants to also play with higher stakes elsewhere. Edit: I'm currently a mediocre programmer (and intend to become good via some practice). And while I don't participate often in the community (yet), this could be fun and educational enough that I would be willing to contribute a fairly substantial amount of labour to it. If anyone with marginally more know-how is willing to implement such an idea, let me know and I'll join up.
1Matt_Duing11yMy feelings on this are mixed. I've found LW to be a refreshing refuge from such quarrels. On the other hand, without careful thought political debates reliably descend into madness quickly, and it is not as if politics is unimportant. Perhaps taking the mental techniques discussed here to other forums could improve the generally atrocious level of reasoning usually found in online political discussions, though I expect the effect would be small.

And I can't explain how people live without cars. It boggles me. Sure we have big roads, but seriously, who wants to walk for 20 miles every day?

The point made in the discussion of traditional cities I linked is that living without a car can be a nightmare in places that were designed around cars but that many cities that were not designed around cars are very livable without them. I've lived in Vancouver for 7 years without a car quite happily and it's not even particularly pedestrian friendly compared to many European cities (though it is by North American standards). I only walk about 3-4 miles a day.

Are there any rationalist psychologists?

Also, more specifically but less generally relevant to LW; as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?

1Kevin11yAs a start, http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy [http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy] is a branch of psychotherapy with some respect around here because of the evidence that it sometimes works, compared to the other fields of psychotherapy with no evidence.
1RomanDavis11yDo they really have such a poor track record? I know some scientists have very little respect for the "soft" sciences, but sociologists can at least make generalizations from studies done on large scales. Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective? Yes, this is essentially a post stating my incredulity. Would you mind quelling it?
2pjeby11yIt's not that they're 0% effective, it's that they're not much more effective than placebo therapy (i.e. being put on a waiting list for therapy), or keeping a journal. CBT is somewhat more effective, but I've also heard that it's not as effective for high-ruminators... i.e., people who already obsess about their thinking.
3AlanCrowe11yScientific medicine is difficult and expensive. I worry [http://www.cawtech.freeserve.co.uk/treatfection.2.html] that the apparent success of CBT may be because methodological compromises needed to make the research practical happen to flatter CBT more than they flatter other approaches. I might be worrying about the wrong thing. Do we know anything about the usefulness of Prozac in treating depression? Since we turn a blind eye to the unblinding of all our studies by the sexual side-effects of Prozac, and also refuse to consider the direct impact of those side-effects it could be argued that we don't actually have any scientific knowledge of the effectiveness of the drug.
0Douglas_Knight11yThe claim I've seen associated with Robyn Dawes is that therapy is useful (which I read as "more useful than being on a waiting list"), but that untrained therapists are just as good as those trained under most methods. (ETA: and, contrary to Kevin, they have been tested and found wanting)
1Kevin11yIt's not that other forms of psychotherapy are scientifically shown to be 0% effective; it's just that evidence-based psychotherapy is a surprisingly recent field. Psychotherapy can still work even if some fields of it have not had rigorous studies showing their effectiveness... but you might as well go with a therapist that has training in a field of psychotherapy that has some scientific method behind it. http://www.mentalhelp.net/poc/view_doc.php?type=doc&id=13023&cn=5 [http://www.mentalhelp.net/poc/view_doc.php?type=doc&id=13023&cn=5]
1torekp11yI can't help you with the Denver area in particular, but the general answer is a definite yes. In an interesting juxtaposition, American Psychologist magazine [http://www.apa.org/pubs/journals/amp/index.aspx] had a recent issue prominently featuring discussion of how to get past the misuse of statistics [http://lesswrong.com/lw/2ax/open_thread_june_2010/23ff] discussed in this very LW open thread. And it's not the first time the magazine addressed the point.
1NancyLebovitz11yDoes cognitive rationalist therapy count as both rationalist and psychology for purposes of this question? I think Learning Methods [http://learningmethods.com/] is a more sophisticated rationalist approach than CBT (it does a more meticulous job of identifying underlying thoughts), and might be worth checking into.
2pjeby11yInteresting. I found the site to be not very helpful, until I hit this page [http://www.learningmethods.com/goodforwhom.htm], which strongly suggests that at least one thing people are learning from this training is the practical application of the Mind Projection Fallacy: The quote is from an article written by an LM student, and some insights from the learning process that helped her overcome her stage fright. IOW, at least one aspect of LM sounds a bit like "rationality dojo" to me (in the sense that here's an ordinary person with no special interest in rationalism, giving a beautiful (and more detailed than I quoted here) explanation of the Mind Projection Fallacy, based on her practical applications of it in everyday life . (Bias disclaimer: I might be positively inclined to what I'm reading because some of it resembles or is readily translatable to aspects of my own models. Another article that I'm in the middle of reading, for example, talks about the importance of addressing the origins of nonconsciously-triggered mental and physical reactions, vs. consciously overriding symptoms -- another approach I personally favor.)

Simply responding with a Randian quote doesn't show that government doesn't work. Moreover, there are some things where government has worked well. At the most basic level, one needs governments to protect property rights, without which markets can't function. Similarly, various forms of pooled goods are useful (you are welcome to try to have roads run by private industry and see how well that works) But even beyond that, government policies are helpful for dealing with negative externalities. In particular, some forms of harm are by nature spread out and ... (read more)

6RomanDavis11yEven from a libertarian point of view, pollution is something that causes harm, like murder or theft. The government's job is to enforce laws that mitigate sources of harm and, when possible, correct harms against individuals. A person or corporation who puts out some amount of pollution should be forced to pay for any cleanup or harm that they cause. If you drive a car, you emitted some fraction of the pollution that caused temperatures to go up, caused smog-induced illness, and some other miscellaneous harms that cost some amount of money. If that amount of money was 40 billion dollars, and you contributed 1 billionth towards the harm, you should pay 40 dollars. This should be even less controversial than imprisoning murderers.
1SilasBarta11ySadly it isn't [http://silasx.blogspot.com/2008/11/well-i-guess-i-dont-count-as.html]. I consider(ed) myself libertarian, and then found that most self-identified ones reject that reasoning [http://silasx.blogspot.com/2008/08/so-why-are-libertarians-such-socialists.html] entirely. Pity. I was also unpleasantly surprised to find that there was a group of people griping about programs that would make it easier to identify cars that weren't liability-insured or pollution-tested, and this was called a "libertarian" position. ETA: And libertarian-leaning academics don't seem to "get" [http://lesswrong.com/lw/10j/typical_mind_and_politics/ung] why paying polluters to go away isn't a solution, and don't even understand what problem is supposed to be solved, even when hypothetically placed in such a situation! (See the exchange between me and Hanson in the link.) ETA2: I edited an EDF graphic to make this [http://1.bp.blogspot.com/_SL1MFVbilH8/ShYYtf_3oNI/AAAAAAAAACg/8mo3eiFs9Nc/s1600-h/carbonparodyfinal2.PNG] cute picture about the pollution issue and Coasean reasoning. ETA3: Full blog post with original graphic [http://silasx.blogspot.com/2009/05/fun-with-graphics-and-environment.html]
4RomanDavis11yIt's not so much that it doesn't solve the problem as that things just don't work that way. For starters, current energy distribution methods are local monopolies, so they are strongly regulated on price because the competition mechanism doesn't work as it should. The idea that customers might "choose" cleaner energy doesn't always work. Second, some logging companies tried that. They had an outside company come in, do an inspection, and certify the ecological viability of their practices. There were a fair number of people who actually were willing to pay a little more. The problem is, another set of companies came by, inspected and approved themselves (with a different label that they invented), and customers weren't able to tell the difference. That's a problem.
2CronoDAS11yAlso, to a great extent, electricity is fungible. Suppose you have both windmills and coal-fired plants connected to the same electrical grid, and they both generate equal amounts of power. Now suppose I tell the electric company that I only want to buy power from the windmills, so instead of getting half wind power and half coal power, I get 100% wind power (on paper). However, the electric company doesn't actually have to change the way it produces electricity in order to do this. All they have to do is slightly increase the percentage of coal power that they deliver to everyone else (on paper). So all that changes is numbers on paper, and there's exactly as much coal power being generated as before.
1mattnewport11yYour noise pollution example is a potentially problematic one for libertarians but the obvious answer that occurs to me is the one I would expect many thoughtful libertarians to make. You are assuming a libertarian world with largely unchanged amounts of public space which is a problematic combination. The space outside your window has no reason to be public space. You would see a lot more 'gated community' type arrangements in a more libertarian society. People with low noise tolerance could choose to live in communities where the 'public' space was owned by a municipal service provider with strict rules about noise pollution. Anyone not adhering to these rules could be ejected from the property. Many common problems with imagined libertarian societies dissolve when you allow for much greater private ownership of currently public land than currently exists.

The blog of Scott Adams (author of Dilbert) is generally quite awesome from a rationalist perspective, but one recent post really stood out for me: Happiness Button.

Suppose humans were born with magical buttons on their foreheads. When someone else pushes your button, it makes you very happy. But like tickling, it only works when someone else presses it. Imagine it's easy to use. You just reach over, press it once, and the other person becomes wildly happy for a few minutes.

What would happen in such a world?

...

We already have these buttons on LessWrong... ;)

3cousin_it11yKarma does make me feel important, but when it comes to happiness karma can't hold a candle to loud music, alcohol and girls (preferably in combination). I wish more people recognized these for the eternal universal values they are. If only someone invented a button to send me some loud music, alcohol and girls, that would be the ultimate startup ever.
5Vladimir_Nesov11yClassical game theorists establish a scientific consensus that the only rational course of action is not to push the buttons. Anyone who does is regarded with contempt or pity and gets lowered in the social stratum, before finally managing to rationalize the idea out of conscious attention, with the help of the instinct to conformity. A few free-riders smugly teach the remaining naive pushers a bitter lesson, only to stop receiving the benefit. Everyone gets back to business as usual, crazy people spinning the wheels of a mad world.
7Wei_Dai11yAre you saying that classical game theorists would model the button-pushing game as one-shot PD? Why would they fail to notice the repetitive nature of the game?
2khafra11yI'd be far more willing to believe in game theorists calling for defection on the iterated PD than in mathematicians steering mainstream culture. However, with the positive-sum nature of this game, I'd expect theorists to go with Schelling instead of Nash; and then be completely disregarded by the general public who categorize it under "physical ways of causing pleasure" and put sexual taboos on it.
1Vladimir_Nesov11yThe theory says to defect in the iterated dilemma as well (under some assumptions).
3cousin_it11yHere's what the theory actually says: if you know the number of iterations exactly, it's a Nash equilibrium for both to defect on all iterations. But if you know the chance that this iteration will be the last, and this chance isn't too high (e.g. below 1/3, can't be bothered to give an exact value right now), it's a Nash equilibrium for both to cooperate as long as the opponent has cooperated on previous iterations.
4Alicorn11yA social custom would be established that buttons are only to be pressed by knocking foreheads together. Offering to press a button in a fashion that doesn't ensure mutuality is seen as a pathetic display of low status.

Pushing someone's happiness button is like doing them a favor, or giving them a gift. Do we have social customs that demand favors and gifts always be exchanged simultaneously? Well, there are some customs like that, but in general no, because we have memory and can keep mental score.

3cousin_it11yHah. Status is relative, remember? Your setup just ensures that "dodging" at the last moment, getting your button pressed without pressing theirs, is seen as a glorious display of high status.

William Saletan at Slate is writing a series of articles on the history and uses of memory falsification, dealing mainly with Elizabeth Loftus and the ethics of her work. Quote from the latest article:

Loftus didn't flinch at this step. "A therapist isn't supposed to lie to clients," she conceded. "But there's nothing to stop a parent from trying something like [memory modification] with an overweight child or teen." Parents already lied to kids about Santa Claus and the tooth fairy, she observed. To her, it was a no-brainer: "A

... (read more)

This might be old news to everyone "in", or just plain obvious, but a couple days ago I got Vladimir Nesov to admit he doesn't actually know what he would do if faced with his Counterfactual Mugging scenario in real life. The reason: if today (before having seen any supernatural creatures) we intend to reward Omegas, we will lose for certain in the No-mega scenario, and vice versa. But we don't know whether Omegas outnumber No-megas in our universe, so the question "do you intend to reward Omega if/when it appears" is a bead jar guess.

3Vladimir_Nesov11yThe caveat is of course that Counterfactual Mugging or Newcomb Problem are not to be analyzed as situations you encounter in real life: the artificial elements that get introduced are specified explicitly, not by an update from surprising observation. For example, the condition that Omega is trustworthy can't be credibly expected to be observed. The thought experiments explicitly describe the environment you play your part in, and your knowledge about it, the state of things that is much harder to achieve through a sequence of real-life observations, by updating your current knowledge.
3Nisan11yWhatever our prior for encountering No-mega, it should be counterbalanced by our prior for encountering Yes-mega (who rewards you if you are counterfactually-muggable).
2Jonathan_Graehl11ySurely the last thing on anyone's mind, having been persuaded they're in the presence of Omega in real life, is whether or not to give $100 :) I like the No-mega idea (it's similar to a refutation of Pascal's wager by invoking contrary gods), but I wouldn't raise my expectation for the number of No-mega encounters I'll have by very much upon encountering a solitary Omega. Generalizing No-mega to include all sorts of variants that reward stupid or perverse behavior (are there more possible God-likes that reward things strange and alien to us?), I'm not in the least bit concerned. I suppose it's just a good argument not to make plans for your life on the basis of imagined God-like beings. There should be as many gods who, when pleased with your action, intervene in your life in a way you would not consider pleasant, and are pleased at things you'd consider arbitrary, as those who have similar values they'd like us to express, and/or actually reward us copacetically.
2cousin_it11yYou don't have to. Both Omega and No-mega decide based on what your intentions were before seeing any supernatural creatures. If right now you say "I would give money to Omega if I met one" - factoring in all belief adjustments you would make upon seeing it - then you should say the reverse about No-mega, and vice versa. ETA: Listen, I just had a funny idea. Now that we have this nifty weapon of "exploding counterfactuals", why not apply it to Newcomb's Problem too? It's an improbable enough scenario that we can make up a similarly improbable No-mega that would reward you for counterfactual two-boxing. Damn, this technique is too powerful!

Ok. It looks like someone just did a drive-by and downvoted every single entry in this subthread by 1 (I noticed because I saw my karma drop by 13 points within about a five-minute span of my last click on a LW page, and then glancing through saw that a lot of entries in this thread (including many that are not mine) had lower karma than when I last looked at the thread this morning, with many comments at 0 now at -1). Can the person who did this please explain their logic?

2RobinZ11yRequest for explanation seconded - I have had four comments (one [http://lesswrong.com/lw/2ax/open_thread_june_2010/23fx], two, three, four [http://lesswrong.com/lw/2ax/open_thread_june_2010/23h7]) downvoted in the same timespan, with several surrounding comments visibly downvoted.

I would have thought everyone here would have seen this by now, but I hadn't until today so it may be new to someone else as well:

Charlie Munger on the 24 Standard Causes of Human Misjudgment

http://freebsd.zaks.com/news/msg-1151459306-41182-0/

D: GAHHH!!! D: Hundreds of links to pages that contain hundreds of more links. D:

Hm, had you not noticed the sequences yet? The "sequences" button is next to the "about" button. There's quite a few more of them. :)

Because they would dump the waste off the left side of the boat, and get drinking water from the right.

This was a general problem more connected to cleanliness as a whole in 19th century America. Read a history of old New York, and realize that it took multiple plagues before they even started discussing not having livestock roaming the city.

I've been on those canal boats before, they are very, very slow.

Of course they were slow. They were an efficient method of moving a lot of cargo. Each boat moved slowly, but the total cargo moved was a lot mor... (read more)

Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.

Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.

We don't find out until... (read more)

1NancyLebovitz11yDo you mean young people with unrepayable college debt, or young people with unrepayable debt for degrees which were totally unlikely to be of any use?
1Seth_Goldin11yArnold Kling has some thoughts about the plight of the unskilled college grad. 1 [http://econlog.econlib.org/archives/2010/03/the_plight_of_t_1.html] 2 [http://econlog.econlib.org/archives/2010/04/plight_of_the_u.html]
2SilasBarta11yThanks for the links, I had missed those. I agree with his broad points, but on many issues, I notice he often perceives a world that I don't seem to live in. For example, he says that people who can simply communicate in clear English and think clearly are in such short supply that he'd hire someone or take them on as a grad student simply for meeting that, while I haven't noticed the demand for my labor (as someone well above and beyond that) being like what that kind of shortage would imply. Second, he seems to have this belief that the consumer credit scoring system can do no wrong. Back when I was unable to get a mortgage at prime rates due to lacking credit history despite being an ideal candidate [1], he claimed that the refusals were completely justified because I must have been irresponsible with credit (despite not having borrowed...), and he has no reason to believe my self-serving story ... even after I offered to send him my credit report and the refusals! [1] I had no other debts, no dependents, no bad incidents on my credit report, stable work history from the largest private employer in the area, and the mortgage would be for less than 2x my income and have less than 1/6 of my gross in monthly payments. Yeah, real subprime borrower there...

One reason why the behavior of corporations and other large organizations often seems so irrational from an ordinary person's perspective is that they operate in a legal minefield. Dodging the constant threats of lawsuits and regulatory penalties while still managing to do productive work and turn a profit can require policies that would make no sense at all without these artificially imposed constraints. This frequently comes off as sheer irrationality to common people, who tend to imagine that big businesses operate under a far more laissez-faire regime than they actually do.

Moreover, there is the problem of diseconomies of scale. Ordinary common-sense decision criteria -- such as e.g. looking at your life history as you describe it and concluding that, given these facts, you're likely to be a responsible borrower -- often don't scale beyond individuals and small groups. In a very large organization, decision criteria must instead be bureaucratic and formalized in a way that can be, with reasonable cost, brought under tight control to avoid widespread misbehavior. For this reason, scalable bureaucratic decision-making rules must be clear, simple, and based on strictly defined ca... (read more)

1NancyLebovitz11yAs nearly as I can figure it, people who rely on credit ratings mostly want to avoid loss, but aren't very concerned about missing chances to make good loans.

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Someone who avoids carrying debt (and thus avoids paying interest) is not a good revenue source, any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payments with a maximal interest/principal ratio.

This is another one of those Hanson-esque "X is not about X-ing" things.
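The "expected profit" framing can be made concrete with a toy model (every number below is hypothetical, chosen only to illustrate the incentive, not taken from any real lender):

```python
def expected_profit(p_default, avg_balance, apr, loss_on_default):
    """Rough expected annual profit from one cardholder: interest
    collected if they don't default, minus the expected default loss."""
    interest = avg_balance * apr
    return (1 - p_default) * interest - p_default * loss_on_default

# A reliable "transactor" who pays the full balance every month
# generates no interest revenue at all:
transactor = expected_profit(p_default=0.001, avg_balance=0,
                             apr=0.20, loss_on_default=0)

# A "revolver" carrying a $5000 balance at 20% APR is far more
# profitable, even at 20x the default risk:
revolver = expected_profit(p_default=0.02, avg_balance=5000,
                           apr=0.20, loss_on_default=5000)
```

Under this toy model the revolver is worth about $880/year and the transactor about $0, which is one way a spotless no-debt history can score worse than a history of profitable borrowing.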

3NancyLebovitz11yI think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that. There may also be a weirdness factor if relatively few people have no debt history. (1) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed [http://www.amazon.com/Seeing-Like-State-Condition-Institution/dp/0300078153] is partly about how a lot of what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.
4JGWeissman11ySimplifying my behavior enough to keep track of me and control me is tyranny.
3SilasBarta11yExcept that there are records (history of paying bills, rent), it's just that the lenders won't look at them. Maybe financial gurus should think about that before they say "stay away from credit cards entirely". It should be "You MUST get a credit card, but pay the balance." (This is another case of addictive stuff that can't addict me [http://silasx.blogspot.com/2008/07/question-why-cant-addictivetm-stuff.html].) (Please, don't bother with advice, the problem has since been solved; credit unions are run by non-idiots, it seems, and don't make the above lender errors.) ETA: Sorry for the snarky tone; your points are valid, I just disagree about their applicability to this specific situation.
8Vladimir_M11ySilasBarta: Well, is it really possible that lenders are so stupid that they're missing profit opportunities because such straightforward ideas don't occur to them? I would say that lacking insider information on the way they do business, the rational conclusion would be that, for whatever reasons, either they are not permitted to use these criteria, or these criteria would not be so good after all if applied on a large scale. (See my above comment [http://lesswrong.com/lw/2ax/open_thread_june_2010/23q8] for an elaboration on this topic.) Or maybe the reason is that credit unions are operating under different legal constraints and, being smaller, they can afford to use less tightly formalized decision-making rules?
4SilasBarta11yNo, they do require that information to get the subprime loan; it's just that they classified me as subprime based purely on the lack of credit history, irrespective of that non-loan history. Providing that information, though required, doesn't get you back into prime territory. Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot. (Of course, I do differ from the general subprime population in that if I see that I can only get bad terms on a mortgage, I don't accept them.)
4Vladimir_M11ySilasBarta: This merely means that their formal criteria for sorting out loan applicants into officially recognized categories disallow the use of this information -- which would be fully consistent with my propositions from the above comments. Mortgage lending, especially subprime lending, has been a highly politicized issue in the U.S. for many years, and this business presents an especially dense and dangerous legal minefield. Multifarious politicians, bureaucrats, courts, and prominent activists have a stake in that game, and they have all been using whatever means are at their disposal to influence the major lenders, whether by carrots or by sticks. All this has undoubtedly influenced the rules under which loans are handed out in practice, making the bureaucratic rules and procedures of large lenders seem even more nonsensical from the common person's perspective than they would otherwise be. (I won't get into too many specifics in order to avoid raising controversial political topics, but I think my point should be clear at least in the abstract, even if we disagree about the concrete details.) Why do you assume that the bailouts are indicative of idiocy? You seem to be assuming that -- roughly speaking -- the major financiers have been engaged in more or less regular market-economy business and done a bad job due to stupidity and incompetence. That, however, is a highly inaccurate model of how the modern financial industry operates and its relationship with various branches of the government -- inaccurate to the point of uselessness.
1SilasBarta11yI actually agree with most of those points, and I've made many such criticisms myself. So perhaps larger banks are forced into a position where they rely too much on credit scores at one stage. Still, credit unions won, despite having much less political pull, while significantly larger banks toppled. Much as I disagree with the policies you've described, some of the banks' errors (like assumptions about repayment rates) were bad, no matter what government policy is. If lending had really been regulated to the point of (expected) unprofitability, they could have gotten out of the business entirely, perhaps spinning off mortgage divisions as credit unions to take advantage of those laws. Instead, they used their political power to "dance with the devil", never adjusting for the resulting risks, either political or in real estate. There's stupidity in that somewhere.
6mattnewport11yIn some cases this was an example of the principal–agent problem [http://en.wikipedia.org/wiki/Principal_agent_problem] - the interests of bank employees were not necessarily aligned with the interests of the shareholders. Bank executives can 'win' even when their bank topples.
1Douglas_Knight11yThese are not such different answers. Working on a large scale tends to require hiring (potentially) stupid people and giving them little flexibility.
1Vladimir_M11yYes, that's certainly true. In fact, what you say is very similar to one of the points I made in my first comment [http://lesswrong.com/lw/2ax/open_thread_june_2010/23q8] in this thread (see its second paragraph).
3NancyLebovitz11yFair point. This does replicate the Conservation of Thought theme. I think a good bit about business can be explained as not bothering because one's competitors haven't bothered either. I've seen financial gurus recommend getting a credit card and paying the balance. And thanks for the ETA.
4mattnewport11yRamit Sethi [http://www.iwillteachyoutoberich.com/blog/full-chapter-from-my-book-optimize-your-credit-cards/] for example. I had the impression that this was actually pretty much the standard advice from personal finance experts. Most of them are not worth listening to anyway though.
1SilasBarta11yThis might be what they say in their books, where they give a detailed financial plan, though I doubt even that. What they advise is usually directed at the average mouthbreather who gets deep into credit card debt. They don't need to advise such people to build a credit history by getting a credit card solely for that purpose -- that ship has already sailed! All I ever hear from them is "Stay away from credit cards entirely! Those are a trap!" I had never once heard a caveat about, "oh, but make sure to get one anyway so you don't find yourself at 24 without a credit history, just pay the balance." No, for most of what they say to make sense, you have to start from the assumption that the listener typically doesn't pay the full balance, and is somehow enlightened by moving to such a policy. Notice how the citation you give is from a chapter-length treatment by a less-known finance guru (than Ramsey, Orman, Howard, etc.), and it's about "optimizing credit cards," a kind of complex, niche strategy. Not standard, general advice from a household name.
1Blueberry11yThat would be an insanely stupid thing for anyone to say. Credit cards are very useful if used properly. I agree with mattnewport that the standard advice given in financial books is to charge a small amount every month to build up a credit rating. Also, charge large purchases at the best interest rate you can find when you'll use the purchases over time and you have a budget that will allow you to pay them off.
1SilasBarta11yWell, then I don't know what to tell you. I'd listened to financial advice shows on and off and had read Clark Howard's book before applying for the mortgage back then, and never once did I hear or read that you should get a credit card merely to establish a credit history (and this is not why they issue them). I suspect it's because their advice begins from the assumption that you're in credit card debt, and you need to get out of that first, "you bozo". And your comment about the usefulness of credit cards for borrowing is a bit ivory-tower. In actual experience, based on all the expose reports and news stories I've seen, it's pretty much impossible to do that kind of planning, since credit card companies reserve the right to make arbitrary changes to the terms -- and use that right. I remember one case where a bank issued a card that had a "guaranteed" 1.9% rate for ~6 months with a ~$5000 limit -- but if you actually used anything approaching that limit, they would invoke the credit risk clauses of the agreement, deem you a high risk because of all the debt you're carrying, and jack up your rate to over 20%. So, a 1.9% loan that they can immediately change to 20% if they feel like it -- in what sense was it a 1.9% loan? For that reason, I don't even consider using a credit card for installment purchases.
0Blueberry11yWow, they can jack up the rate like that? I would definitely consider that fraud and abuse. That's not common, however, and Congress recently passed legislation to prevent that sort of abuse. Currently, I don't have the option of not using a credit card; I would starve to death without it.
4SilasBarta11yI thought so too, but then was overwhelmed with stories like that. Most credit cards agreements are written with a clause that says, "we can do whatever we want, and the most you can do to reject the new terms is pay off the entire debt in 15 days". This is one of the few instances where courts will honor a contract that gives one party such open-ended power over the other. If you haven't been burned this way, it's just a matter of time. And if you google the topic, I'm sure you'll find enough to satisfy your evidence threshold. Would you starve to death with it? If you can service the debts, let me loan you the money; at this point, most investors would sell out their mother to get a fraction of the interest rate on their savings that most credit cards charge. (Not that I would, but I'd turn down the offer without my trademark rudeness...)
0CronoDAS11y::followed link:: Did you ever experience nicotine withdrawal symptoms? In people who aren't long-time smokers, they can take up to a week to appear.
2Vladimir_M11yFor what that's worth, when I quit smoking, I didn't feel any withdrawal symptoms except being a bit nervous and irritable for a single day (and I'm not even sure if quitting was the cause, since it coincided with some stressful issues at work that could well have caused it regardless). That was after a few years of smoking something like two packs a week on average (and much more than that during holidays and other periods when I went out a lot). From my experience, as well as what I observed from several people I know very well, most of what is nowadays widely believed about addiction is a myth.
0SilasBarta11yNo, never did. My best guess is that I didn't smoke heavily enough to get a real addiction, though I smoked enough to get the psychoactive effects.
3Kevin11yYes, I would think it would take around 5-10 cigarettes a day (or more) for at least a week to develop an addiction. While cigarettes (and heroin, and caffeine) are very physically addictive, it still takes sustained, moderately high use to develop a physical addiction. Most cigarette smokers describe their addictions in terms of "x packs per day".
1SilasBarta11yOkay, then I guess my case isn't informative ... I'd use the pack/year metric instead of the pack/day.
0CronoDAS11yI wish I could direct you to this Scientific American article [http://www.scientificamerican.com/article.cfm?id=hooked-from-the-first-cigarette] so I could ask how it compares to your experiences, but it's behind a paywall.
0SilasBarta11yFrom what I can see before the paywall, it looks like I definitely didn't meet the threshold under the best science, but I could probably cross it from 5 cigarettes per day. I'd only try that out if I were rewarded for doing it (but not for stopping as that would defeat the purpose of such an experience).
9CronoDAS11yI read the article on paper before it was hidden in a paywall, so I can summarize some of the findings: 1) Rat brains are irrevocably changed by a single dose of nicotine. 2) Brains of rats that have never been exposed to nicotine ("non-smokers"), those that are currently given nicotine on a regular basis ("current smokers"), and those that used to be given nicotine on a regular basis but have been deprived of it for a long time ("former smokers") are all distinguishable from each other. 3) The author notes that the primary effect of nicotine on addicted human smokers appears to be suppressing craving for itself. 4) The author hypothesizes that the brain has a craving-generating system and a separate craving-suppression system. (These systems apply to appetites in general, such as the desire to eat food.) He further goes on to speculate that the primary action of nicotine is to suppress craving. This has the effect of throwing the two systems out of equilibrium, so the brain's craving-generation system "works harder" to counter the effects of nicotine. When the effects of nicotine wear off (which can take much longer than the time it takes for the nicotine to leave the body), the equilibrium is once again thrown out of balance, resulting in cravings. (The effects of smoking on weight are mentioned as support for this hypothesis.)
2Douglas_Knight11yExpected profit explains much behavior of credit card companies, but I don't think it helps at all with the behavior of the credit score system or mortgage lenders (Silas's example!). Nancy's answer looks much better to me (except her use of the word "also").

I suppose it might end up being treated like sex. Having one's button publicly visible is "indecent" - buttons are only pushed in private.

The analogy to sex is rough. From a historical and evolutionary perspective, sex is treated the way it is because it leads to gene replication and parenthood, not because it leads to pleasure. The lack of side effects from the buttons makes them more comparable to rubbing someone's back, smiling, or saying something nice to someone.

3AlephNeil11yOK - well that's one possibility. But in discussing either of these analogies, aren't we just showing (a) that the pleasure-button scenario is underdetermined, because there are many different kinds of pleasure and (b) that it's redundant, because people can actually give each other pats on the back, or hand-jobs or whatever.

I dunno, this strikes me as a somewhat sex-negative attitude. Responding seriously to your question about the better things we could be doing, it strikes me that we spend most of our time doing worthless things. We seldom really know whether we are happy, what it means to be happy, or how what we are doing might connect to somebody's future happiness.

If the buttons actually made people happy from time to time, it could be quite useful as a 'reality check.' People suspecting that X led to happiness could test and falsify their claim by seeing whet... (read more)

3AlephNeil11yIsn't that a bit like snorting some coke (or perhaps just masturbating) after a happy experience (say, proving a particularly interesting theorem) to test whether it was really 'happy'? There are many different kinds of 'happiness', and what makes an experience a happy or an unhappy one is not at all simple to pin down. A kind of happiness that one can obtain at will, as often as desired, and which is unrelated to any "objective improvement" in oneself or the things one cares about, isn't really happiness at all. Pretend it's new year's eve and you're planning some goals for next year - some things that, if you achieve them, you will look back with pride and a sense of accomplishment. Is 'looking at lots of porn' on your list (even assuming that it's free and no-one was harmed in producing it)? I don't mean to imply anything about sex, because sex has a whole lot of things associated with it that make it extremely complicated. But the 'pleasure button' scenario gives us a clean slate to work from, and to me it seems an obvious reductio ad absurdum of the idea that pleasure = utility.
2Blueberry11yYou seem to be confusing happiness with accomplishment: Sure it is. It may not be accomplishment, or meaningfulness, but it is happiness, by definition. I think the confusion comes because you seem to value many other things more than happiness, such as pride and accomplishment. Happiness is just a feeling; it's not defined as something that you need to value most, or gain the most utility from.

(Wherein I seek advice on what may be a fairly important decision.)

Within the next week, I'll most likely be offered a summer job where the primary project will be porting a space weather modeling group's simulation code to the GPU platform. (This would enable them to start doing predictive modeling of solar storms, which are increasingly having a big economic impact via disruptions to power grids and communications systems.) If I don't take the job, the group's efforts to take advantage of GPU computing will likely be delayed by another year or two. Th... (read more)

7orthonormal11yThe amount you could slow down Moore's Law by any strategy is minuscule compared to the amount you can contribute to FAI progress if you choose. It's like feeling guilty over not recycling a paper cup, when you're planning to become a lobbyist for an environmentalist group later.
7NaN11yUninformed opinion: space weather modelling doesn't seem like a huge market, especially when you compare it to the truly massive gaming market. I doubt the increase in demand would be significant, and if what you're worried about is rate of growth, it seems like delaying it a couple of years would be wholly insignificant.
5Kaj_Sotala11yI would say that there seem to be a lot of companies that are in one way or another trying to advance Moore's law. For as long as it doesn't seem like the one you're working on has a truly revolutionary advantage as compared to the other companies, just taking the money but donating a large portion of it to existential risk reduction is probably an okay move. (Full disclosure: I'm an SIAI Visiting Fellow so they're paying my upkeep right now.)
4Roko11yPersonally trying to slow Moore's Law down is the kind of foolishness that Eliezer seems to inspire in young people...
1university_student11yDo you mean that he actively seeks to encourage young people to try and slow Moore's Law, or that this is an unintentional consequence of his writings on AI risk topics?
2JoshuaZ11yI'm pretty sure that Roko means the second. If this idea got mentioned to Eliezer I'm pretty sure he'd point out the minimal impact that any single human can have on this, even before one gets to whether or not it is a good idea.

Should we buy insurance at all?

There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making about insurance saying that all insurance has negative expected utility: we pay too high a price for too little a risk; otherwise insurance companies would go bankrupt. If this is the case, should we get rid of all our insurance? If not, why not?

There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making about insurance saying that all insurance has negative expected utility: we pay too high a price for too little a risk; otherwise insurance companies would go bankrupt.

No -- Insurance has negative expected monetary return, which is not the same as expected utility. If your utility function obeys the law of diminishing marginal utility, then it also obeys the law of increasing marginal disutility. So, for example, losing 10x will be more than ten times as bad as losing x. (Just as gaining 10x is less than ten times as good as gaining x.)

Therefore, on your utility curve, a guaranteed loss of x can be better than a 1/1000 chance of losing 1000x.

ETA: If it helps, look at a logarithmic curve and treat it as your utility as a function of some quantity. Such a curve obeys diminishing marginal utility. At any given point, your utility increases less than proportionally going up, but more than proportionally going down.

(Incidentally, I actually wrote an embarrassing article arguing in favor of the thesis roland presents, and you can probably still find it on the internet.... (read more)
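The point about diminishing marginal utility can be checked numerically with a logarithmic utility function (all figures hypothetical):

```python
import math

wealth = 100_000
loss = 90_000          # ruinous loss
p = 1 / 1000           # probability of the loss
premium = 150          # premium > p * loss = 90, so the insurer profits

# Expected log utility without insurance:
eu_uninsured = (1 - p) * math.log(wealth) + p * math.log(wealth - loss)

# With insurance you lock in a small, certain loss instead:
eu_insured = math.log(wealth - premium)

# Buying the policy has negative expected monetary value for you,
# yet higher expected utility -- both of these are True:
worse_in_dollars = premium > p * loss
better_in_utility = eu_insured > eu_uninsured
```

So a guaranteed loss of $150 beats a 1/1000 chance of losing $90,000 on a log utility curve, even though the insurer takes an expected $60 off you.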

1mkehrt11yI voted this up, but I want to comment to point out that this is a really important point. Don't be tricked into not getting insurance just because it has a negative expected monetary value.
3mattnewport11yI voted Silas up as well because it's an important point but it shouldn't be taken as a general reason to buy as much insurance as possible (I doubt Silas intended it that way either). Jonathan_Graehl's point that you should self-insure if you can afford to and only take insurance for risks you cannot afford to self-insure is probably the right balance. Personally I don't directly pay for any insurance. I live in Canada (universal health coverage) and have extended health insurance through work (much to my dismay I cannot decline it in favor of cash) which means I have far more health insurance than I would purchase with my own money. Given my aversion to paperwork I don't even fully use what I have. I do not own a house or a car which are the other two areas arguably worth insuring. I don't have dependents so have no need for life or disability coverage. All other forms of insurance fall into the 'self-insure' category for me given my relatively low risk aversion.
8RobinZ11yRisk is more expensive when you have a smaller bankroll. Many slot machines actually offer positive expected value payouts - they make their return on people plowing their winnings back in until they go broke.
6Douglas_Knight11yCitation please? A cursory search suggests that machines go through +EV phases, just like blackjack, but that individual machines are -EV. It's not just that they expect people to plow the money back in, but that pros have to wait for fish to plow money in to get to the +EV situation. The difference with blackjack is that you can (in theory) adjust your bet to take advantage of the different phases of blackjack. Your first sentence seems to match Roland's comment about the Kelly criterion (you lose betting against snake eyes if you bet your whole bankroll every time), but that doesn't make sense with fixed-bet slots. There, if it made sense to make the first bet, it makes sense to continuing betting after a jackpot.
3Dagon11yThis comes up frequently in gambling and statistics circles. "Citation please" is the correct response - casinos do NOT expect to make a profit by offering losing (for them) bets and letting "gambler's ruin" pay them off. It just doesn't work that way. The fact that a +moneyEV bet can be -utilityEV for a gambler does NOT imply that a -moneyEV bet can be +utilityEV for the casino. It's -utility for both participants. The only reason casinos offer such bets ever is for promotional reasons, and they hope to make the money back on different wagers the gambler will make while there. The Kelly calculations work just fine for all these bets - for cyclic bets, it ends up you should bet 0 when -EV. When +EV, bet some fraction of your bankroll that maximizes mean-log-outcome for each wager.
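The gambler's-ruin claim can be sanity-checked with the standard random-walk formula for repeated even-money unit bets against an unlimited house (a textbook result, not a model of any real slot machine):

```python
def ruin_probability(p, bankroll):
    """Probability of eventual ruin for a gambler making unit even-money
    bets with win probability p, starting from an integer bankroll and
    playing forever against an unlimited house."""
    if p <= 0.5:
        return 1.0                     # fair or -EV game: ruin is certain
    return ((1 - p) / p) ** bankroll   # +EV game: ruin likely, not certain

# With a 51% edge and a 10-unit bankroll, ruin probability is ~0.67 --
# substantial, but the gambler keeps a ~33% chance of never going broke,
# and the casino still loses money in expectation on every spin.
```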
1CronoDAS11ySome casinos advertise that they have slots with "up to" a 101% rate of return. Good luck finding the one machine in the casino that actually has a positive EV, though!
1RobinZ11yOn the scale from "saw it in The Da Vinci Code" to "saw it in Nature", I'd have to say all I have is an anecdote from a respectable blogger [http://www.websnark.com/archives/2008/04/moments_in_time.html]: I'll give you that "many" is almost certainly flat wrong, on reflection, but such machines are (were?) probably out there.
8SilasBarta11yThat movie was full of falsehoods. For example, people named Silas are actually no more or less likely than the general population to be tall homicidal albino monks -- but you wouldn't guess that from seeing the movie, now, would you?
2RobinZ11yThat's why it represents the bottom end of my "source-reliability" scale.
5bentarm11yThe only relevant part of the quote seems to be: I'm pretty sure it's not that unlikely to come up ahead 'three or four' times when playing slot machines (if it weren't so late I'd actually do the sums). It seems much more plausible that the blog author was just lucky than that the machines were actually set to regularly pay out positive amounts.
5roland11yAhh, Kelly criterion, correct?
1RobinZ11y... *looks up Kelly criterion [http://en.wikipedia.org/wiki/Kelly_criterion]* That's definitely a related result. (So related, in fact, that thinking about the +EV slots the other day got me wondering what the optimal fraction of your wealth was to bid on an arbitrary bet - which, of course, is just the Kelly criterion.)
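For reference, the Kelly criterion for a simple bet paying b:1 with win probability p reduces to f* = (bp - q)/b, where q = 1 - p; a minimal sketch:

```python
def kelly_fraction(p, b):
    """Optimal fraction of bankroll to wager on a bet that pays b:1
    and wins with probability p. A non-positive result means don't bet."""
    q = 1 - p
    return (b * p - q) / b

# Even-money bet with a 55% win probability: stake about 10% of bankroll.
# Any -EV bet yields a non-positive fraction, recovering Dagon's point
# above that the Kelly-optimal bet on a -EV wager is 0.
```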
4gwern11yI'd like to pose a related question. Why is insurance structured as up-front payments and unlimited coverage, and not as conditional loans? For example, one could imagine car insurance as an options contract (or perhaps a futures contract) where if your car is totaled, you get a loan sufficient for replacement. One then pays off the loan with interest. The person buying this form of insurance makes fewer payments upfront, reducing their opportunity costs and also the risk of letting insurance lapse due to random fluctuations. The entity selling this form of insurance reduces the risk of moral hazard (i.e. someone taking out insurance, torching their car, and then letting insurance lapse the next month). Except by assuming strange consumer preferences or irrationality, I don't see any obvious reason why this form of insurance isn't superior to the usual kind.
5Vladimir_M11yWell, look at a more extreme example. Imagine an accident in which you not just total a car, but you're also on the hook for a large bill in medical costs, and there's no way you can afford to pay this bill even if it's transmuted into a loan with very favorable terms. With ordinary insurance, you're off the hook even in this situation -- except possibly for the increased future insurance costs now that the accident is on your record, which you'll still likely be able to afford. The goal of insurance is to transfer money from a large mass of people to a minority that happens to be struck by an improbable catastrophic event (with the insurer taking a share as the transaction-facilitating middleman, of course). Thus a small possibility of a catastrophic cost is transmuted into the certainty of a bearable cost. This wouldn't be possible if instead of getting you off the hook, the insurer burdened you with an immense debt in case of disaster. (A corollary of this observation is that the notion of "health insurance" is one of the worst misnomers to ever enter public circulation.)
2gwern11yAlright, so this might not work for medical disasters late in life, things that directly affect future earning power. (Some of those could be handled by savings made possible by not having to make insurance payments.) But that's just one small area of insurance. You've got housing, cars, unemployment, and this is just what comes to mind for consumers, never mind all the corporate or business need for insurance. Are all of those entities buying insurance really not in a position to repay a loan after a catastrophe's occurrence? Even nigh-immortal institutions?
3Vladimir_M11yI wouldn't say that the scenarios I described are "just one small area of insurance." Most things for which people buy insurance fit under that pattern -- for a small to moderate price, you buy the right to claim a large sum that saves you, or at least alleviates your position, if an improbable ruinous event occurs. (Or, in the specific case of life insurance, that sum is supposed to alleviate the position of others you care about who would suffer if you die unexpectedly.) However, it should also be noted that the role of insurance companies is not limited to risk pooling. Since in case of disaster the burden falls on them, they also specialize in specific forms of damage control (e.g. by aggressive lawyering, and generally by having non-trivial knowledge on how to make the best out of specific bad situations). Therefore, the expected benefit from insurance might actually be higher than the cost even regardless of risk aversion. Of course, insurers could play the same role within your proposed emergency loan scheme. It could also be that certain forms of insurance are mandated by regulations even when it comes to institutions large enough that they'd be better off pooling their own risk, or that you're not allowed to do certain types of transactions except under the official guise of "insurance." I'd be surprised if the modern infinitely complex mazes of business regulation don't give rise to at least some such situations. Moreover, there is also the confusion caused by the fact that governments like to give the name of "insurance" to various programs that have little or nothing to do with actuarial risk, and in fact represent more or less pure transfer schemes. (I'm not trying to open a discussion about the merits of such schemes; I'm merely noting that they, as a matter of fact, aren't based on risk pooling that is the basis of insurance in the true sense of the term.)
2gwern11yIntrinsically, the average person must pay in more than they get out. Otherwise the insurance company would go bankrupt. No reason a loan style insurance company couldn't do the exact same thing. 'Rent-seeking' and 'regulatory capture' are certainly good answers to the question why doesn't this exist.
2Nick_Tarleton11yFor one thing, insurance makes expenses more predictable; though the desire for predictability (in order to budget, or the like) does probably indicate irrationality and/or bounded rationality.
1Jonathan_Graehl11yObviously if you know your utility function and the true distribution of possible risks, it's easy to decide whether to take a particular insurance deal. The standard advice is that if you can afford to self-insure, you should, for the reason you cite (that insurance companies make a profit, on average). That's a heuristic that holds up fine except when you know (for reasons you will keep secret from insurers) your own risk is higher than they could expect; then, depending on how competitive insurers are, even if you're not too risk-averse, you might find a good deal, even to the extent that you turn an expected (discounted) profit, and so should buy it even if you have zero risk aversion. Apparently in California, auto insurers are required to publish the algorithm by which they assign premiums (and are possibly prohibited from using certain types of information). Conversely, you may choose to have no insurance (or an extremely high deductible) in cases where you believe your personal risk is far below what the insurer appears to believe, even when you're actually averse to that risk. Of course, it's not sufficient to know how wrong the insurer's estimate of your risk is; they insist on a pretty wide vig - not just to survive both uncertainties in their estimation of risk and the market returns on the float, but also to compensate for the observed amount of successful adverse selection [http://en.wikipedia.org/wiki/Adverse_selection] that results from people applying the above heuristic. I suppose it may also be possible that the insurer won't pay. I don't know exactly what guarantees we have in the U.S.
1Douglas_Knight11yActually, I think that for voluntary insurance, the observed adverse selection is negative, but I can't find the cite. People simply don't do cost-benefit calculations. People who buy insurance are those who are terribly risk-averse or see it as part of their role. Such people tend to be more careful than the general population. In a competitive market, the price of insurance would be bid down to reflect this, but it isn't.

You can't predict when you'll have to start paying.

Guided by Parasites: Toxoplasma Modified Humans

a ~20 minute (absolutely worth every minute) interview with Dr. Robert Sapolsky, a leading researcher in the study of Toxoplasma & its effects on humans. This is a must-see. Also, towards the end there is discussion of the effect of stress on telomere shortening. Fascinating stuff.

2NancyLebovitz11yThanks for the link. If people's desires are influenced by parasites, what does that do to CEV?
6Blueberry11yIf your desires are influenced by parasites, then the parasites are part of what makes you you. You may as well ask "If people's desires are influenced by their past experience, what does that do to CEV?" or "If people's desires are influenced by their brain chemistry, what does that do to CEV?"
9Alexandros11ySo what if Dr. Evil releases a parasite that rewires humanity's brains in a predetermined manner? Should CEV take that into account or should it aim to become Coherent Extrapolated Disinfected Volition?
5cupholder11yWhat if Dr. Evil publishes a book or makes a movie that rewires humanity's brains in a predetermined manner?
2Alexandros11yYep, I made a reference to cultural influence here [http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/23vs]. That's why I suspect CEV should be applied uniformly to the identity-space of all possible humans rather than the subset of humans that happen to exist when it gets applied. In that case defining humanity becomes very, very important. Of course, perhaps the current formulation of CEV covers the entire identity-space equally and treats the living population as a sample, and I have misunderstood. But if that is the case, Wei Dai's last article is also bunk, and I trust him to have better understanding of all things FAI than myself.
3cupholder11yHeh - my first instinct is to bite the bullet and apply CEV to existing humans only. I couldn't give a strong argument for that, though; I just can't immediately think of a reason to exclude non-culturally influenced humans while including culturally influenced humans.
2NancyLebovitz11yIt's hard to tell what counts as an influence and what doesn't. It would be interesting to see what would happen if the effects of parasites could be identified and reversed. The results wouldn't necessarily all be good, though.
0Alexandros11yI am not sure I follow your last sentence. Can you elaborate?
2cupholder11yI'll give it a try. A human's mind and preferences might be influenced by cultural things like books and TV, and they might be influenced by non-cultural things like parasites. (And of course a lot of people will be influenced by both.) I can't think of a reason to include the former in CEV and exclude the latter that feels non-arbitrary to me, so I don't feel as if parasitically modified brains warrant different treatment, such as altering CEV to cover the space of all possible humans. My gut evaluates the prospect of parasite-driven brains as just another kind of human brain. (I'm presuming as well that CEV as currently formulated is just meant to cover existing humans, not all possible humans.) That makes me content to apply CEV to existing humans only - I don't feel I have to try to account for brain changes due to culture or parasites or what have you by expanding it to incorporate all of brain space.
3Blueberry11yYou may as well ask: "What if Dr. Evil kills every other living organism? Should CEV take that into account or should it aim to become Coherent Extrapolated Resurrected Volition?" Of course, if someone modifies or kills all the other humans, that will change the result of CEV. Garbage in, garbage out.

Did you miss the part where Lloyds imploded, and the unlimited liability destroyed scores of lives (and caused multiple suicides)?

Issue Status: Closed.

Reason: As Designed.

I downvoted several of Houshalter's comments for containing multiple spelling and punctuation errors, though I'd upvote a well-written defense of theism.

None, and nobody. I got a bicycle and tried to ride it until I could ride it. It took about three weeks from never having sat on a bicycle to confidently mixing with heavy traffic. (At the age of 22, btw. I never had a bicycle as a child.)

The first line that JoshB quoted from Wikipedia is fine -- there is this class of knowledge -- but I don't agree with the second at all. Some things you can learn just by having a go untutored. Where an instructor is needed, e.g. in martial arts, the only trust required is enough confidence in the competence of the teacher to do as he says before you know why.

I have debated my religion before, but ironically this looks like a bad place to make a stand because everyone's against me and there's a karma system.

You're probably getting most downvotes because, as orthonormal said, you're going off on a tangent to the current tangent, and with a somewhat adversarial stance.

Were telegraphs a bad idea? Horse-drawn plows? Why does the fact that a technology was superseded mean that it's a terrible idea?

It does matter if one has guns (or SWAT teams) and the other relies on non-violent persuasion.

True, and they wouldn't deserve it, but the truth is, there are a lot of really awesome effective drugs that either take forever to get approved, or don't get approved at all. This kills people, too.

And there are a lot of diseases, like bronchitis, that are easy for a person to diagnose in himself, knowing that he needs an antibiotic, but it costs a hundred dollars to see a doctor who will tell him what he already knows so he can get the medicine, and if that's the difference between him paying the rent or not... then, hypothetically, he dies because it goes untreated.

It's more a problem of political viability than anything else.

4Blueberry11yAnd then they misdiagnose it, and antibiotic resistance increases, and then the antibiotic doesn't work when they need it. Or they diagnose it but miss a warning sign for another disease that a doctor would have noticed and tested for. No thanks, I'd much rather have people who have gone to medical school for years make that decision.
2thomblake11yAnd I'd much rather the decision to trust doctors be made by the people to be affected, rather than politicians (who have not done any school / training in particular).
3cupholder11ySome day I hope someone without an axe to grind does an in-depth study estimating how badly people would be harmed with drug regulation v. without drug regulation. I've seen the 'yeah but regulation causes harms' versus 'yeah but non-regulation causes harms' argument before, but I can't remember seeing anyone try to rigorously and comprehensively quantify the respective pros and cons of both courses of action and compare them.
1Douglas_Knight11yHave you looked at the academic studies on the topic? Are these the "axe-grinding" "arguments" that you dismiss? Simple comparisons of the US vs Europe during times when one was systematically more conservative seem to me to be a pretty reasonable methodology, but maybe you don't consider it "rigorous" or "comprehensive." Maybe I'm overdoing the scare quotes, but those words were not helpful for me to identify what you have looked at, whether our disagreement is due to your ignorance or my lower standards.
1cupholder11yI have not, and my comment was not intended to slam whatever genuinely unbiased academic studies of the topic there are. My comment's referring to the times I've been a bystander for arguments about the utility of pharmaceutical drug regulation, both in real life and online; a pattern I noticed is the arguers failing to cite hard, quantitative evidence or make an argument based on the numbers. At best they might cite particular claims from think tanks or other writers/groups with a political agenda that would plausibly bias the analysis. So when I say I've seen the argument before, I'm not thinking of the abstract debate over whether what the FDA does is a net good or not, or particular pieces of academic work; I'm thinking of concrete occasions where people have started arguing about it in my presence, and the failure of the people I've witnessed arguing about it to present detailed evidence. I haven't tried to research the topic in detail, so I don't know precisely what ground the academic studies cover. At any rate, I didn't mean to claim knowledge of the field and to imply that there aren't any. I genuinely do just mean that I haven't seen them, because laymen (including the parent posters in this subthread, at least so far) don't mention them when they argue about the issue. As I wrote before, [http://lesswrong.com/lw/2ax/open_thread_june_2010/23ru] I added the 'axe to grind' warning not as a preemptive slam on academics, but because I suspect there have already been some overtly partisan analyses of the subject, and I want to discourage people from suggesting them to me. In this context, what I mean by 'rigorously and comprehensively' is that the analysis should satisfy basic standards for causal inference - all important confounding variables should be accounted for, and so on. 
For example, it would not be 'rigorous' to just collect a list of countries and compare the lifespan of those with an FDA-like administration with those that don't, because there
1Douglas_Knight11yThe disagreement was just that you seemed to say (by the phrasing "some day") that there had not been any good work on the subject. The only such paper I remember reading is Gieringer [http://www.fdareview.org/references.shtml#gieringer85]. That link is to a whole bibliography, compiled by people with a definite slant, so I can't guarantee that there aren't contradictory papers with equally good methodology. I'm reminded of Bruce Bueno de Mesquita [http://lesswrong.com/lw/20w/open_thread_april_2010/1v2c?context=3], who gives the impression of having fabricated the papers assessing him, but they're real.
1cupholder11yFair enough. Thanks for the Gieringer 1985 cite; it's 25 pages long so I haven't read it yet, but skimming through it I see a couple of quantitative tables, which is a good sign, and that it was published in the Cato Journal, which is not such a good sign. But it's something!
0Douglas_Knight11yI said my standards were lower. My point was that your original comment could be taken for having read this and dismissed it.
0cupholder11yI had noticed that you said that. I was originally not going to draw attention to the paper's source, but it occurred to me that someone might then have asked me whether I was aware of the paper's source, referring to my earlier claim that I wanted to discourage people from offering me overtly partisan analyses. So I decided to pre-empt that possible confusion/accusation by acknowledging the paper's origin from a libertarian-leaning journal.
1RomanDavis11yYeah, I was thinking of bringing up examples myself, but because of the various axes involved, bringing one up might not be terribly effective. Another person (I think it was cousin_it) brought up the idea that it should come down to a bet. If we bet ten dollars, and one of us kept arguing after the evidence was in and the bet was lost, all it would come down to is, "If you're so smart, why aren't you rich?" EDIT: Also, someone went and downvoted the crap out of me. Who'd I make mad and why?
2cupholder11yYup. I thought of the 'without an axe to grind' proviso because I expect some politically-aligned think tanks out there have already published pamphlets or reports arguing one side or the other, but I wouldn't be inclined to take their claims very seriously. Whoever did it, it's not just you [http://lesswrong.com/lw/2ax/open_thread_june_2010/23ra].
1mattnewport11yMe too. Around 30 points in around 10 minutes. I'm flattered.
2thomblake11yMy guess for all this is that someone found the whole conversation off-topic and mind-killing. Which seems to justify downvotes.
1RobinZ11yDid either of you perhaps post in any of the threads replying to billswift [http://lesswrong.com/lw/2ax/open_thread_june_2010/23ra?context=1#23ra]?
1mattnewport11yYes. I think someone downvoted extra comments elsewhere for effect based on the magnitude and speed of the karma hit.
2JoshuaZ11yYes, it looks like almost all the comments related to the government policy issue got downvoted. This is annoying in that I, at least, thought that it was a calm, rational discussion which was showing that political discussion isn't necessarily mind-killing. I'm particularly perplexed by the downvoting of comments which consisted either of interesting non-standard ideas or of evidence for their claims.
1mattnewport11yIt must be a relatively high karma user given the fact that downvotes are limited by total karma. Perhaps they'd care to explain themselves.
3JoshuaZ11yThe downvote limit is 4 times your karma yes? So if the total downvote for the thread was around 60 points, the individual would only need to be around 15 karma.
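Assuming the 4x multiplier JoshuaZ cites is right, the back-of-envelope bound on who could have cast the votes can be sketched as:

```python
import math

def min_karma_needed(total_downvotes, multiplier=4):
    """Smallest karma whose downvote budget (multiplier * karma)
    covers the observed number of downvotes."""
    return math.ceil(total_downvotes / multiplier)

print(min_karma_needed(60))  # prints 15
```

So a mass-downvoter here needn't be a particularly high-karma user at all.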
4thomblake11yYes. It was originally equal to your karma but some of us had already spent that many downvotes and the point of the policy wasn't to stop established users from being able to downvote.
0[anonymous]11yI hope someone without an axe to grind does this; if there are axes involved, it's much more likely to turn out supporting whatever the person thought before, i.e. not strongly correlated with how people are hurt or helped by regulation.

I think you're stuck in the mindset of 'if it wasn't for our government provided roads where would we drive our cars?'. Such a world would probably have fewer private cars and be arranged in such a way that many ordinary people could get by perfectly well without a car, as is the case in many European and Japanese cities.

This article might help you understand some of the hidden assumptions many Americans operate under. Note: this guy has some rather wacky ideas but his articles on 'traditional cities' are pretty interesting.

1Mass_Driver11yI strongly agree with you that the US federal government has spent too much on road subsidies over the years and should decrease its current spending. That said, not everywhere is Juneau, Alaska; not all sites connected to government roads are a "Suburban Hell," and not all inhabitants of the suburbs would prefer to live in a "Traditional City." Roads are useful for accommodating a highly mobile, atomistic society that exploits new resources and adopts new local trade routes every 20 years or so. Cars and parking lots are useful for separating people who have recently immigrated from all different places and who really don't like each other and don't want to have much to do with each other. Interstate highways were built for evacuation and civil defense as well as for actual transport. Finally, regardless of whether you prefer roads or trains, some level of government subsidy and/or coordination is probably needed to get the most efficient transportation system possible. In any case, this thread started out as a discussion of Traditional vs. Bayesian rationality, did it not? Improving government policy was merely the example chosen to illustrate a point. It seems unsportsmanlike to shoot that point down on the grounds that virtually all government does more harm than good. Even if such a claim were true, one might still want to know how to generate government policies that do relatively less harm, given a set of political constraints that temporarily prevent enacting a strong version of (anarcho)libertarianism.

I'm not certain this comment will be coherent, but I would like to compose it before I lose my train of thought. (I'm in an atypical mental state, so I easily could forget the pieces when feeling more normal.) The writing below sounds rather choppy and emphatic, but I'm actually feeling neutral and unconvinced. I wonder if anyone would be able to 'catch this train' and steer it somewhere else perhaps..?

It's an argument for dualism. Here is some background:


I've always been a monist: believing that everything should be coherent from within this reality. Th... (read more)

2ata11yI don't see where dualism comes in. Specifically what kind of dualism are you talking about? -------------------------------------------------------------------------------- A problem being unsolvable within some system does not imply that there is some outer system where it can be solved. Take the Halting Problem, for example: there are programs such that we cannot prove whether or not they will ever halt, and this itself is provable. Yet there is a right answer in any given instance — a program will halt or it won't — but we can never know in some cases. That you say "I cannot understand what the answer to the problem could possibly be" suggests that it is a wrong question. Ask "Why do I think the universe exists?" instead of "Why does the universe exist?". I have my tentatively preferred answer to that, but maybe you will come up with something interesting.
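The asymmetry ata describes (halting is confirmable case by case by just running the program; non-halting in general is not) has a famous everyday instance in the Collatz iteration, where termination for every starting value is an open problem. A minimal sketch, with an arbitrarily chosen step budget:

```python
def collatz_steps(n, max_steps=10_000):
    """Run the Collatz iteration from n; return the number of steps
    taken to reach 1, or None if the budget runs out first. A None
    result proves nothing: the iteration may halt later, or never."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps if n == 1 else None

print(collatz_steps(27))  # prints 111
```

For any particular input that does halt, a run like this settles the question; no finite budget can ever settle the other direction.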
2Blueberry11yWhat is it?
0byrnema11yAgreed, I was imprecise before. It is not generally 'a problem' if something is unknown. In the case of the halting problem, it's OK if the algorithm doesn't know when it is going to halt. (This doesn't make it incomplete.) However, it is a problem if X doesn't know how X was created (this makes X incomplete.) The difference is that an algorithm can be implemented -- and fully aware of how it is implemented, and know every line of its own code -- without knowing where it is going to halt. Where it's going to halt isn't squirreled away in some other domain to be read at the right moment, the rules for halting are known by the algorithm, it just doesn't know when those rules will be satisfied. In contrast, X could not have created itself without any source code to do so. The analogous situation would be an algorithm that has halted but doesn't know why it halted. If it cannot know through self-inspection why it halted, then it is incomplete: it must deduce that something outside itself caused it to halt.
0byrnema11yI agree that when a question doesn't have any possibility of an answer, it's probably a wrong question. But in this case, I don't see how it could be a wrong question. It seems like a perfectly reasonable question that we've gotten habituated to not having an answer to. It's evidence -- if we were looking for evidence -- that X is incomplete and we are in a simulation. We take a lot of store in the convenient fact that our reality is causal. So why can't we ask what caused reality? No, I don't come up with anything. I feel like anything that a person could possibly come up with would be philosophy (a non-scientific answer outside X). But please do share your answer (even if it is philosophy, as I expect). (By dualism, I mean that there are aspects of reality we interact with beyond science, so that physical materialism or scientism, etc., would be incomplete epistemologies.)
0ata11yHere's [http://lesswrong.com/lw/2di/poll_what_value_extra_copies/26ta] where I stated it most recently, and I wrote an earlier post [http://lesswrong.com/lw/1zt/the_mathematical_universe_the_map_that_is_the/] getting at the same sort of thing (where I see you posted a few comments), but at this point I've decided to abstain from actually advocating it until I have a better handle on some of the currently-unanswered questions raised by it. At the same time, I do feel like this line of reasoning (the conclusion I like to sum up as "Existence is what mathematical possibility feels like from the inside") is a step in the right direction. I do realize now that it is not as complete a solution as I originally thought — it makes me feel less confused about existence, but newly confused about other things — but I do still have the sense that the ultimately correct explanation of existence will not specially privilege this reality over others, and that our mental algorithms regarding "existence" are leading us astray. That seems to be the only state of affairs that does not compel us to believe in an infinite regress of causality, which doesn't really seem to explain anything, if it even makes logical sense. In any case, although I definitely have to concede that this problem is not solved, I am not convinced that it is not solvable. Metaphysical cosmology has been one of the most difficult areas of philosophy to turn into science or math, but it may yet fall. Alright, that's what threw me off. I think "dualism" is usually used to refer specifically to theories that postulate ontologically-basic mental substances or properties separate from normal physical interactions; not that "there are aspects of reality we interact with beyond science", but that our consciousness or minds are made of something beyond science. Your reasoning does not imply the latter, correct?
0byrnema11yOh, that [http://lesswrong.com/lw/1zt/the_mathematical_universe_the_map_that_is_the/] was you. I think the Ultimate Ensemble idea is really appealing as an explanation of what existence is. (The way possibility feels from the inside, as you wrote.)
0[anonymous]11yMy answer to those questions should be the same. The process of answering either question should bring the two into line even if they were previously cached somewhat differently.
0Blueberry11yBy "problem of existence" you mean why we exist and how we came to exist? Why do you think that can't be answered within our world? And what do you think a world would look like if you could solve the problem in it?
0byrnema11yYes. Why and how anything exists, and what existence is. The reason that I think this problem can't be answered within our world is that the lack of an answer doesn't seem to be a matter of lack of information. It's a unique question in that although it seems to be a reasonable question, there's no possibility of an answer to this question, not even a false one. It's a reasonable question because X is a causal reality, so it is reasonable to ask what caused X. There's no possibility of an answer to the question because causality is an arrow that always requires a point of departure. If you say the universe was created by a spark, and the rest followed by mathematics and logical necessity, still, what created that spark? Religions have creation stories, but they explain the creation of X by the creation of X outside X. So creation stories don't resolve the conundrum of creation, they just move creation to someplace outside experience, where we cannot expect to understand anything. This may represent a universal insight that the existence of X cannot be explained within X. This is analogous to being in flatland and wondering about edges. I suppose the main mysterious thing about the larger universe Y would be acausality. Here within X, it seems to be a rule, if not a logical principle, that everything is determined by something else. If something were to happen spontaneously, how did it decide to? What is the rule or pattern for its spontaneous appearance? These are all reasonable questions within X. Somehow Y gets around them.
0Blueberry11yWhat do you think of the following answer? There is some evidence that backward time travel may be possible under some circumstances in a way that is compatible with general relativity. So suppose, many years in the future, a team of physicists and engineers creates a wormhole in the universe and sends something back to the time of the Big Bang, causing it and creating our universe. That way, it's all self-contained.
0byrnema11ySelf-contained is good, though it doesn't resolve the existence problem. (What is the appropriate cliché there ... you can't pull yourself out of quicksand by pulling on your boots?) Backward time travel itself opens up a number of wonderful possibilities, including universe self-reflection and the possibility of a post-hoc framework of objective value.
0wedrifid11yIt also makes encryption more difficult!

In Harry Potter and the Methods of Rationality, Quirrell talks about a list of the thirty-seven things he would never do as a Dark Lord.

Eliezer, do you have a full list of 37 things you would never do as a Dark Lord and what's on it?

  1. I will not go around provoking strong, vicious enemies.
  2. Don't Brag
  3. ?
3RichardKennaway11yAll of the replies to this should be in the thread for discussing HP&tMoR [http://lesswrong.com/lw/2ab/harry_potter_and_the_methods_of_rationality/].
1JoshuaZ11yThis is a reference to the Evil Overlord List. That's why Harry starts snickering. Indeed, it almost is implied that Voldemort wrote the actual evil overlord list. For the most common version of the actual Evil Overlord List see Peter's Evil Overlord List [http://www.eviloverlord.com/lists/overlord.html]. Having such a list for Voldemort seems to be at least partially just rule of funny.
4MBlume11yDid the evil overlord list exist publicly in 1991? I was actually a bit confused by Harry's laughter here. Eliezer seems to be working pretty hard to keep things actually in 1991 (truth and beauty, the journal of irreproducible results, etc.)
1JoshuaZ11yThat's a good point. I'm pretty sure the Evil Overlord List didn't exist that far back, at least not publicly. It seems like for references to other fictional or nerd-culture elements he's willing to monkey around with time. Thus for example, there was a Professor Summers for Defense Against the Dark Arts which wouldn't fit with the standard chronology for Buffy at all.
4NancyLebovitz11yChecking wikipedia [http://en.wikipedia.org/wiki/Evil_overlord], it looks possible but not likely that Harry could have seen the list in 1991.
1Blueberry11yWell, he and his father are described as being huge science fiction fans, so it's not that unlikely that they heard about the list at conventions, or had someone show them an early version of the list printed from email discussions, even if they didn't have Internet access back then.
0NancyLebovitz11yI'm pretty sure they did have internet access back then. It was more available through universities than it was to the general public.
1Blueberry11yI meant even if Harry's parents didn't have access back then, someone could still have printed out the list and showed it to them.
1RomanDavis11yThat doesn't sound very rational. The simplest answer seems to be, "Eliezer thought it would be funny" and he would have included the Evil Overlord List in the fanfic even if the Evil Overlord he was talking about was Caligula.
0Blueberry11yOf course it was included because Eliezer thought it would be funny. But I don't see what's so irrational about Harry reading the printed copy of the list.
0RomanDavis11yYes, but that's not the same as saying Eliezer actually went and looked up the earliest conceivable date to give Harry a reasonable chance of reading the list, or that he could pass the joke up even if he did.
0JoshuaZ11yWell, would Harry have started laughing if he had just seen just a list before? I'm not sure, but the impression I got was that Harry was laughing because someone had made list identical in form to a well-known geek list. If he had just happened to have seen such a list before, would it be as funny? Moreover, would that be what the reader would have expected to understand from the text?
1RobinZ11yMaybe 'Quirrell' posted his version to FidoNet.
0JoshuaZ11yWould not then Harry have noticed that Quirrel's list overlapped with the one he had seen?
1RobinZ11yHarry did correctly guess Item #2...
0JoshuaZ11yGood point. That makes it much more plausible. Although given Harry's personality I'd then expect him to test by trying to guess the third and fourth.
0Oscar_Cunningham11yGood call, although the fic doesn't explicitly mention the evil overlord list.
2RomanDavis11yThe reason I think it might actually be plot relevant is that most people can't resist making a list that is much longer than 37 rules long. Plus most of the rules are just lampshades for tropes that show up again and again in fiction with evil overlords. They rarely are such basic, practical advice as "stop bragging so much."

Ah. I'm pretty sure it isn't a real list because of the number 37. 37 is one of the most common numbers for people to pick when they want to pick a small "random" number. Humans in general are very bad at random number generation. More specifically, they are more likely to pick an odd number, and given a specific range of the form 1 to n, they are most likely to pick a number that is around 3n/4. The really clear examples are from 1 to 4 (around 40% pick 3), 1 to 10 (I don't remember the exact number but I think it is around 30% that pick 7), and then 1 to 50, where a very large percentage will pick 37. The upshot is if you ever see an incomplete list claiming to have 37 items, you should assign a high probability that the rest of the list doesn't exist.
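Percentages like the ones claimed above are easy to check against poll data once you have it. A small tally function, run here on made-up picks (illustrative only, not real poll results):

```python
from collections import Counter

def pick_frequencies(picks):
    """Fraction of respondents choosing each number."""
    counts = Counter(picks)
    total = len(picks)
    return {n: counts[n] / total for n in sorted(counts)}

# Hypothetical '1 to 10' poll in which 7 is heavily over-represented.
picks = [7, 3, 7, 9, 7, 4, 7, 1, 8, 7, 3, 7, 5, 7, 2, 7, 7, 6, 7, 3]
freqs = pick_frequencies(picks)
print(freqs[7])  # prints 0.5, far above the uniform 0.1
```

Running this over the forum polls linked a few comments down would be a quick way to test the odd-number and 3n/4 predictions.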

Ouch. I am burned.

3JoshuaZ11yWell, that's ok. Because I just wrote a review of Chapter 23 criticizing Harry's rush to conclude that magic is a single-allele Mendelian trait and then read your chapter notes where you say the same thing. That should make us even.
2Oscar_Cunningham11yIt just occurred to me that the odd/even bias applies only because we work in base ten. Humans working in a prime base (like base 11) would be much less biased. (in this respect)
0JoshuaZ11yWell, that seems plausible, although what is going on there is being divisible by 2, not being prime. If your general hypothesis is correct, then if we used a base 9 system numbers divisible by 3 might seem off. However, I'm not aware of any bias against numbers divisible by 5. And there's some evidence that suggests that parity is ingrained human thinking (children can much more easily grasp the notion of whether a number is even or odd, and can do basic arithmetic with even/oddness much faster than with higher moduli).
3Oscar_Cunningham11yI searched for "human random number" in Google and three of the results were polls on internet fora. Polls A & C were numbers in the range 1 to 10, poll B was in the range 1 to 20. C had the best participation. (By coincidence, I had participated in poll B.) I screwed up my experimental design by not thinking of a test before I looked at the results, so if anyone else wants to judge these they should think up a measure of whether certain numbers are preferred before they follow the links. A [http://bmgf.bulbagarden.net/showthread.php?t=45287]B [http://forums.xkcd.com/viewtopic.php?f=18&t=38705&start=0]C [http://forums.gtsplus.net/index.php?showtopic=20133&mode=threaded] (You have a double post btw)
1RobinZ11yJoshuaZ's statement implies a peak near 15 for B and outright states 30% of responses to A and C near 7. I would guess that 13 and 17 would be higher than 15 for B and that 7 will still be prominent, and that odd numbers (and, specifically, primes) will be disproportionately represented. I will not edit this comment after posting.
1Blueberry11yWhy primes?
3RobinZ11yMy instinct is that numbers with obvious factors (even numbers and multiples of five especially) will appear less random - and in the range from 1 to 20, that's all the composites.
0RomanDavis11yI have a feeling they are ammunition in Chekhov's Gun, and therefore any attempts to get more data will lead to spoilers.

What does 'consciousness' mean?

I'm having an email conversation with a friend about Nick Bostrom's simulation argument and we're now trying to figure out what the word "consciousness" means in the first place.

People here use the C-word a lot, so it must mean something important. Unfortunately I'm not convinced it means the same thing for all of us. What does the theory that "X is conscious" predict? If we encounter an alien, what would knowing that it was "conscious" or "not conscious" tell us? How about if we encou... (read more)

0RichardKennaway11yWhat I mean by "consciousness" is my sensation of my own presence. Googling for definitions of "conscious" and "consciousness" gives mostly similar forms of words, so that concept would appear to be what is generally understood by the words. Do philosophers have some other specific generally understood convention of exactly what they mean by these words?
0DanArmak11yWhat exactly do you mean by 'sensation'? Does it have to do with "subjective experience" and "qualia", or just the bare fact that you're modeling yourself as part of the world, like RomanDavis [http://lesswrong.com/lw/2ax/open_thread_june_2010/243l] and Blueberry [http://lesswrong.com/lw/2ax/open_thread_june_2010/244m]'s definitions?
3RichardKennaway11yBy "sensation" I mean the subjective experience. If you ask me what I mean by "subjective" and "experience", well, you could follow such a train of questions indefinitely and eventually I would have no answer. But what would that prove? You're not asking for a theory of how consciousness works, but a description of the thing that such a theory would be a theory of. Ask someone five centuries ago what they mean by "water" and all they'll be able to say is something like "the wet stuff that flows in rivers and falls from the sky". And you can ask them what they mean by "rivers" and "sky", but to what end? All you're likely to get if you press the matter is some bad science about the four elements. "Consciousness" is in a similar state. I have an experience I label with that word, but I can't tell you how that experience happens.
1DanArmak11yThat's great - I use the word in the same way. As far as I can tell, some other people don't - see the comments by RomanDavis and Blueberry that I linked to. This confusion over the meaning of the word is what I wanted to highlight. The way that some others use the word (to mean "an agent that models itself" or "an agent that perceives itself"), either they have successfully dissolved the question of what subjective experience is, or I don't understand them correctly, or indeed different people use the word to mean different things. And the reason I started out talking about that is that I've seen this cause confusion both on LW and elsewhere.
0RomanDavis11yThere are a lot of hypotheses floating around. Mine is: We have awareness. That is, we observe things in the territory with our senses, and include them in our map of the territory. The phenomenon we observe as consciousness is just our ability to include ourselves (our own minds, and some of its inner sensations) in the territory. Some people think there are things you can only know if you experience them yourself. In theory, you could run a decent simulation of what it's like to be a bat, but you would still have memories of being human, and therefore awareness of bat territory wouldn't be enough. My solution: implant memories, including bat memories of not having human memories, into yourself. In theory, this should work.
1DanArmak11yI hope you don't mean you're hypothesizing what the word "consciousness" means; rather, your hypotheses are alternate predictions about physical unknowns or about the future. Which is it? I'm asking what the definition, the meaning, of the word consciousness is. Hypothesizing what a word means feels like the wrong way to do things. Well, unless we're hypothesizing what other people mean when they say "consciousness". But if we're using the word here at LW we shouldn't need to hypothesize, we can all just tell one another what we mean... Under that definition, any agent that models the world and includes its own behavior in the model (and any good general model will do that) - is called conscious. (I would call that self-modeling or self-aware.) So any moderately intelligent, effective agent - like my hypothetical aliens and androids - would be called conscious. That's a fine definition, but if everyone thought that, there would be no place for arguments about whether it's possible for zombies (let alone p-zombies) to exist. It doesn't seem to me that people see consciousness as meaning merely self-modeling.
1RomanDavis11yI think consensus here is that the idea of P Zombies is silly.
0DanArmak11yCertainly. But is the idea of ordinary zombies also silly? That's what your definition implies. ETA: not that I'm against that conclusion. It would make things so much simpler :-) I just have the experience that many people mean something else by "consciousness", something that would allow for zombies.
0RomanDavis11yWhat's the difference?
0DanArmak11yIf you define "consciousness" in a way that allows for unconscious but intelligent, even human-equivalent agents, then those are called zombies. Aliens or AIs might well turn out to be zombies. Peter Watt's vampires from Blindsight are zombies. ETA: a p-zombie is physically identical to a conscious human, but is still unconscious. (And we agree that makes no sense). A zombie is physically different from a conscious human, and as a result is unconscious - but is capable of all the behavior that humans are capable of. (My original comment was wrong (thanks Blueberry!) and said: The difference between a zombie and a p-zombie is that p-zombies claim to be conscious, while zombies neither claim nor believe to be conscious.)
3Blueberry11yThis is very different from my understanding of the definition of those terms, which is that p-zombies are physically identical to a conscious human, and a zombie is an unconscious human-equivalent with a physical, neurological difference. I don't see any reason why an unconscious human-equivalent couldn't erroneously claim to be conscious, any more than an unconscious computer could print out the sentence "I am conscious."
1DanArmak11yYou're right. It's what I meant, but I see that my explanation came out wrong. I'll fix it. That's true. But the fact of the matter would be that such a zombie would be objectively wrong in its claim to be conscious. My question is: what is being conscious defined to mean? If it's a property that is objectively present or not present and that you can be wrong about in this way, then it must be something more than a "pure subjective" experience or quale.
0torekp11yIf a subjective experience is the same event, differently described, as a neural process, you can be wrong about whether you are having it. You can also be wrong about whether you and another being share the same or similar quale, especially if you infer such similarity solely from behavioral evidence. Even aside from physical-side-of-the-same-coin considerations, a person can be mistaken about subjective experience. A tries the new soup at the restaurant and says "it tastes just like chicken". B says, "No, it tastes like turkey." A accepts the correction (and not just that it tastes like turkey to B). The plausibility of this scenario shows that we can be mistaken about qualia. Now, admittedly, that's a long way from being mistaken about whether one has qualia at all - but to rule that possibility in or out, we have to make some verbal choices clarifying what "qualia" will mean. Roughly speaking, I see at least two alternatives for understanding "qualia". One would be to trot out a laundry list of human subjective feels: color sensations, pain, pleasure, tastes, etc., and then say "this kind of thing". That leaves the possibility of zombies wide open, since intelligent behavior is no guarantee of a particular familiar mental mechanism causing that behavior. (Compare: I see a car driving down the road, doing all the things an internal combustion engine-powered vehicle can do. That's no guarantee that internal combustion occurs within it.) A second approach would be to define "qualia" by its role in the cognitive economy. Very roughly speaking, qualia are properties highly accessible to "executive function", which properties go beyond (are individuated more finely than by) their roles in representing, for the cognizer, the objective world. On this understanding of "qualia" zombies might be impossible - I'm not sure.
0Blueberry11yWell, the claim would be objectively incorrect; I'm not sure it's meaningful to say that the zombie would be wrong. As others have commented, it's having the capacity to model oneself and one's perceptions of the world. If p-zombies are impossible, which they are, there are no "pure subjective" experiences: any entity's subjective experience corresponds to some objective feature of its brain or programming.
4DanArmak11yThat's not the definition that seems to be used in many of the discussions about consciousness. For instance, the term "Hard Problem of Consciousness" isn't talking about self-modeling. Let's take the discussion about p-zombies as an example. P-zombies are physically identical to normal humans, so they (that is, their brains) clearly model themselves and their own perceptions of the world. Then the claim that they are unconscious is in direct contradiction to the definition of consciousness. If proving that p-zombies are logically impossible was as simple as pointing this out, the whole debate wouldn't exist. Beyond that example, I've gone through all LW posts that have "conscious" in their title: * The Conscious Sorites Paradox [http://lesswrong.com/lw/pv/the_conscious_sorites_paradox/], part of Eliezer's series on quantum physics. He says: And then he says: I read that as using 'consciousness' to mean experience in the sense of subjective qualia. * Framing Consciousness [http://lesswrong.com/lw/f8/framing_consciousness/]. cousin_it has retracted the post, but apparently not for reasons relevant to us here. It talks about "conscious/subjective experiences", and asks whether consciousness can be implemented on a Turing machine. Again, it's clear that a system that recursively models itself can be implemented on a TM, so that can't be what's being discussed. * MWI, weird quantum experiments and future-directed continuity of conscious experience [http://lesswrong.com/lw/189/mwi_weird_quantum_experiments_and_futuredirected/] . Clearly uses "consciousness" to mean "subjective experience". * Consciousness [http://lesswrong.com/lw/1ly/consciousness/]. Ditto. * Outline of a lower bound for consciousness [http://lesswrong.com/lw/1mf/outline_of_a_lower_bound_for_consciousness/]. I don't understand this post at first sight - would have to read it more throughly... The reason "subjective exper
0RomanDavis11yLet's say you're having a subjective experience. Say, being stung by a wasp. How do you know? Right. You have to be aware of yourself, and your skin, and have pain receptors, and blah blah blah. But if you couldn't feel the pain, let's say because you were numb, you would still feel conscious. And if you were infected with a virus that made a wasp sting feel sugary and purple, rather than itchy and painful, you would also still be conscious. It's only when you don't have a model of yourself that consciousness becomes impossible.
0DanArmak11yThat doesn't mean they're the same thing. Unless you define them to mean the same thing. But as I described above, not everyone does that. There is no "Hard Problem of Modeling Yourself".
0Jack11yWhere the heck is this terminology coming from? As I learned it the 'philosophical' in "philosophical zombie" is just there to distinguish it from Romero-imagined brain-eating undead.
1Blueberry11yYes, but we need some other term for "unconscious human-like entity". I read one paper that used the terms "p-zombie" and "b-zombie", where the p stood for "physical" as well as "philosophical" and the b stood for "behavioral".
0Jack11yI'd rather call the first an n-zombie (meaning neurologically identical to a human). And, yeah, lets use b-zombie instead of zombie as all of these are varieties of philosophical zombie. (But yes they're just words. Thanks for clarifying.)
0Vladimir_Nesov11yP-zombies can write philosophical papers on p-zombies.
0RomanDavis11yOh, P Zombies are just the reductio ad absurdum version? Yeah, I don't believe in Zombies.
0JoshuaZ11yP-zombies aren't just a reductio ad absurdum, although most of LW does consider them to be. David Chalmers, who is a very respected philosopher, takes the idea quite seriously, as do a surprisingly large number of other philosophers.
0RomanDavis11yPlease explain to me how it is not. You can't just say, "This smart guy takes this very seriously." Aristotle took a lot of things very seriously that turned out to be nonsense.
2RichardChappell11y'Zombie Review [http://www.philosophyetc.net/2008/04/zombie-review.html]' provides some background here...
1JoshuaZ11yMy point is that it isn't regarded in general as a reductio. Indeed, it actually was originally constructed as an argument against physicalism. I see it as a reductio also, or even more to the point, as an indication of how far into a corner dualism has been pushed by science. The really scary thing is that some philosophers seem to think that P-zombies are a slam-dunk argument for dualism.
1Jack11yWho?
0JoshuaZ11yNagel and Chalmers both seem to think it is a strong argument. Kirk used to think it was but since then has gone about Pi radians on that. My impression is that Block also sees it as a strong argument but I haven't actually read anything by Block. That's the impression I get from seeing Block mentioned in passing.
2RichardChappell11yThinking it's a strong argument is, of course, still a long way from thinking it's a "slam dunk" (nobody that I'm aware of thinks that).
1JoshuaZ11yYeah, that wording may be too strong, although the impression I get certainly is that Kirk was convinced it was a slam dunk for quite some time. Kirk's book "Zombies and Consciousness" (which I've only read parts of) seems to describe him as having once considered it to be pretty close to a slam dunk. But yeah, my wording was probably too strong.
0RomanDavis11yOkay, I agree. It's just really easy to take the explicit, "this guy takes it seriously" and make the implicit connection, "and this is totally not a silly idea at all."
[-][anonymous]11y 2

del

0AdeleneDawner11yhttp://lesswrong.com/lw/2a5/on_enjoying_disagreeable_company/22ga [http://lesswrong.com/lw/2a5/on_enjoying_disagreeable_company/22ga] That's a small sample, but we actually seem to score below average on Conscientiousness. Of the 7 responses to that request, the Conscientiousness scores were 1, 1, 8, 13, 41, 41, and 58.
1[anonymous]11yAdd another C5. Does not surprise me, given all the akrasia talk around here.
0mattnewport11yI tend to score very high on openness to experience and average to low on extraversion but only average to low on conscientiousness.

What's the deal with female nymphomaniacs? Their existence seems a priori unlikely.

3RomanDavis11yThen your priors are wrong. Adjust accordingly.
7Liron11y"What's the deal with" means "What model would have generated a higher prior probability for". Noticing your confusion isn't the entire solution.
8Mitchell_Porter11yIf the existing model is sexual dimorphism, with high sexual desire a male trait, you could simply suppose that it's a "leaky" dimorphism, in which the sex-linked traits nonetheless show up in the other sex with some frequency. In humans this should especially be possible with male traits which depend not on the Y chromosome, but rather on having one X chromosome rather than two. That means that there is only one copy, rather than two, of the relevant gene, which means trait variance can be greater - in a woman, an unusual allele on one X chromosome may be diluted by a normal allele on the other X, whereas a man with an unusual X allele has no such counterbalance. But it would still be easy enough for a woman to end up with an unusual allele on both her Xs. Also, regardless of the specific genetic mechanism, human dimorphism is just not very extreme or absolute (compared to many other species), and forms intermediate between stereotypical male and female extremes are quite common.
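The variance point can be illustrated with a toy additive model (purely illustrative assumptions, not from the comment: allele effects drawn from a standard normal, a one-X trait equal to a single draw, a two-X trait equal to the average of two independent draws):

```python
import random
import statistics

random.seed(0)
N = 100_000

# One X chromosome: the trait is set by a single allele draw.
one_x = [random.gauss(0, 1) for _ in range(N)]

# Two X chromosomes: an unusual allele is diluted by the other copy,
# modeled here as averaging two independent draws.
two_x = [(random.gauss(0, 1) + random.gauss(0, 1)) / 2 for _ in range(N)]

print(statistics.variance(one_x))  # close to 1.0
print(statistics.variance(two_x))  # close to 0.5 -- half the variance
```

Averaging two independent draws halves the variance, so extreme trait values are rarer with two copies, which matches the intuition above about one-copy traits showing greater variance.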
1RomanDavis11yI thought it was pretty clear. Sexual dimorphism doesn't operate the way you think it does. Women with high sex drives aren't rare at all. I have heard that, for most men and most women, the time of highest sex drive happens at very different times (much younger for men than women). This might account for the entire difference, especially if you're getting most of your information from the culture at large. As TVTropes will tell you, Most Writers Are Male.
3Vladimir_M11yWhy?
3gwern11yAnd they are accordingly rare, are they not?
3Blueberry11yNo, women with a high sex drive are not rare.
0Liron11yMaybe. I don't know.
1RichardKennaway11yThis question reads to me like it's out of the middle of some discussion I didn't hear the beginning of. Why were "nymphomaniacs" on your mind in the first place? What do you mean by the word? I don't think I've heard it in many years, and I associate it with the sexual superstitions of a former age.
1LucasSloan11yWhat does the word "nymphomaniacs" mean? How do you judge someone to be sufficiently obsessed with sex to be a nymphomaniac? I think a lot of your confusion might be coming from your tendency to label people with this word with such negative connotations. Does the question "what is with women who want to have sex [five times a week*] and will undertake to get it?" resolve any of your confusion? You should expect that those women who have more sex to be more salient wrt people talking about them, so they would seem more prominent, even if only 2% of the population. *not sure about this number, just picked one that seemed alright.
5Alicorn11yFive times a week wouldn't be remotely enough to diagnose. It has to be problematic and clinically significant.
2LucasSloan11yI think that's kinda my point. I was attempting to point out that he's probably confusing the term "nymphomaniac" with its negative connotations, with "likes to have [vaguely defined 'a lot'] of sex."
3Blueberry11y"Nymphomaniac" hasn't been a clinical diagnosis for a long time. In my experience, the word is now most commonly used colloquially to mean "a woman who likes to have a lot of sex". Whether this has negative connotations depends on your attitude to sex, I suppose.
2JoshuaZ11yPicking a number for this seems like a really bad idea. For most modern clinical definitions of disorders what matters is whether it interferes with normal daily behavior. Even that is questionable since what constitutes interference is very hard to tell. Societies have had very different notions of what is acceptable sexuality for both males and females. Until fairly recently, homosexuality was considered a mental disorder in the US. And in the Victorian era, women were routinely diagnosed as nymphomaniacs for showing pretty minimal signs of sexuality.
0[anonymous]11yThis is one of the more bizarre things I've read recently.

Information processing isn't the whole story of what we care about. For example, the amount of energy available to societies and the per capita energy availability both matter. (In fairness, Kurzweil has discussed both of these albeit not as extensively as information issues).

Another obvious metric to look at is average lifespan. This is one where one doesn't get an exponential curve. Now, if you assert that most humans will live to at least 50 and so look at life span - 50 in major countries over the last hundred years, then the data starts to look sli... (read more)

1xamdam11yGood points. Still I feel that basing the crux of the argument on information processing is valid, unless the other concerns you mention interfere with it at some point. Is that what you're saying? Good observation about infant mortality; there should be an opposite metric of "% of centenarians", which would be a better measure in this context.
2JoshuaZ11y%Centenarians might not be a good metric given that one will get an increasing fraction of those as birth rates decline. For the US, going by the data here [http://paa2005.princeton.edu/download.aspx?submissionId=50718] and here [http://www.u-s-history.com/pages/h980.html], I get a total of 1.4 × 10^-4 for the fraction of the US pop that is over 100 in 1990, and a result of 1.7 × 10^-4 in 2000. But I'm not sure how accurate this data is. For example, in the first of the two links they throw out the 1970 census data as giving a clearly too high number. One needs a lot more data points to see if this curve looks exponential (obviously two isn't enough), but the linked paper claims that for the foreseeable future the fraction of the pop that will be over 100 will increase by 2/3rds each decade. If that is accurate, then that means we are seeing an exponential increase. Another metric to use might be the age of the oldest person by year of birth worldwide. That data [http://en.wikipedia.org/wiki/Oldest_people_by_year_of_birth] shows a clear increasing trend, but the trend is very weak. Also, one would expect such an increase simply by increasing the general population (Edit: and better record keeping since the list includes only those with good verification), so without a fair bit of statistical crunching, it isn't clear that this data shows anything.
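Taking the linked paper's figure at face value, the claimed 2/3-per-decade increase compounds like this (a back-of-envelope sketch using the rough 2000 estimate from the comment above, not authoritative data):

```python
# Back-of-envelope projection: fraction of the US population over 100,
# starting from the rough 2000 estimate of 1.7e-4 and growing by 2/3
# each decade, per the linked paper's claim.
fraction = 1.7e-4
for year in range(2010, 2051, 10):
    fraction *= 5 / 3  # a 2/3 increase per decade
    print(f"{year}: {fraction:.2e}")
```

Five decades of such growth multiplies the starting fraction by (5/3)^5 ≈ 12.9; growth by a constant factor per decade is exactly what "exponential" means here.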
1JoshuaZ11yWell, they do interfere, for example, lifespan issues help tell us if we're actually taking advantage of the exponential growth in information processing, or for that matter if even if we are taking advantage that it actually matters. If for example information processing ability increases exponentially but the marginal difficulty in improving other things (like say lifespan) increases at a faster rate, then even with an upsurge in information processing one isn't necessarily going to see much in the way of direct improvements. Information processing is also clearly limited in use based on energy availability. If I went back to say 1950 and gave someone access to a set of black boxes that mimic modern computers, the overall rate of increase in tech won't be that high, because the information processing ability while sometimes the rate limiting step, often is not (for example, generation of new ideas and speed at which prototypes can be constructed and tested both matter). And this is even more apparent if I go further back in time. The timespan from 1900 to 1920 won't look very different with those boxes added, to a large extent because people don't know how to take advantage of their ability. So there are a lot of constraints other than just information processing and transmission capability. Edit: Information processing might potentially work as one measure among a handful but by itself it is very crude.

I was making a distinction between extreme bad judgment (as shown in the article) and moderately bad judgment and/or bad luck.

Your emphasis upthread seemed to be on how foolish that woman and her family were.

I tend to think that the right of exit is the ultimate and fundamental check on such abuses of power. This is why I favour decentralization / federalization / devolution as improvements to the status quo of increasing centralization of political power. I think that on more or less every level of government we would benefit from decentralization of power. City-wide bylaws on noise pollution are too coarse-grained for example. An entertainment district or an area popular with students should have different standards than a residential area with many working ... (read more)

So they're a terrible idea because of bad sanitation and child labor? In that case, the entire history of economic ideas is bad up until 1920-ish. They unquestionably achieved their goal of providing better transportation. Am I to infer that you believe that government run highways are wrong because there is trash strewn on the sides of the road?

Since the italics are yours, I'm going to focus on that term and ask what you mean by necessitate?

I mean that recognizing the existence of a perceived problem does not need to lead automatically to considering ways that government can 'fix' it. Drug prohibition is a classic example here. Many people see that there are problems associated with drug use and jump straight to the conclusion that therefore there is a need for government to regulate drug use. Not every problem requires a government solution. The mindset that all perceived problems with the w... (read more)

That seems clearly within the scope of legitimate concerns of government, given that air travel is already heavily regulated

This argument doesn't work. Just because you already have heavy regulation, doesn't justify having more regulation. Also, many libertarians would say that the solution should be to simply remove much of the heavy regulation of air travel.

Not necessarily. If you've ever been to Disney World, it's not like that. And hell, government roads in the states and Japan often dissolve into a complex and inefficient series of toll roads, at least in some areas.

I'm much more worried about uncompetitive practices, like powerful local monopolies and rent seeking behavior.

I'm not at all sure what any of this has to do with anything. I agree with the quoted section that having the government step in to regulate how much carryon luggage people can have is an example of people making bad assumptions about government. Indeed, this one is particularly stupid because it is economically equivalent to charging a higher price and then offering a discount for people who don't bring carryon luggage. And psych studies show that if anything people react more positively to things framed as a discount.

But I don't see what this has to do w... (read more)

When it comes to government policy I tend to grade on a curve. I actually agree with you that the quality of government policy is generally quite poor. But it's not equally poor everywhere, and improving government's function (which will in some cases meaning having it do less) can do a lot of good for a lot of people.

I should also point out that choosing to take no action is still a policy decision. To give you an example, a few years ago some crazy woman pulled a knife on a plane, leading to a bit of an incident. There was a review of airline secu... (read more)

3mattnewport11yI'd question the need to have government involved in the decision at all. Why not let the airlines decide their own security policies?
1realitygrill11yI guess I would say I don't know. Have you read Taleb's The Black Swan? He has a counterfactual story that is extremely similar (though it uses 9/11); basically there aren't any (even negative) incentives for politicians to push such policies through until after some huge disaster happens.
0James_K11yI haven't read Taleb, but I have heard a few interviews of him where he got the opportunity to outline his ideas. I think politicians in general have a tendency to overreact to adverse events, and often by doing things that involve signals of reassurance (such as security theatre) rather than steps to fix the problem. I'm open to the possibility that they don't do enough to prevent problems, but as a rule governments are very risk averse entities, usually preoccupied with things that might go wrong.

Your link provides very little evidence for your claim.

What did you take my claim to be? The example in the link is intended to illustrate the fact that the problem of politics is not one of figuring out better policy. It is an example of a policy that is universally agreed to be bad and yet has persisted for over 60 years, despite a brief period in which it was temporarily stamped out. The magnitude of the subsidy in this case may be small but there are many thousands of such bad policies, some of much greater individual magnitude, and they add up. The... (read more)

2Mass_Driver11yIndeed you can! Be aware, though, that memes about government corruption and the people who peddle them may have just as much power to fool you as the 'official' authorities. Hollywood, for example, has a much larger propaganda budget than the US Congress. When's the last time a Hollywood movie showcased virtuous politicians? Also, beware of insulated arguments. If you assume that (a) politicians are amazingly good at disguising their motives, and (b) that politicians do in fact routinely disguise their motives, your assertions are empirically unfalsifiable. If you disagree, consider this: what could a politician do to convince you that he was honestly motivated by something like altruism?
3mattnewport11yAn Inconvenient Truth? Seriously though, I don't think Hollywood is particularly tough on politicians. It's a major enabler for the cult of the presidency [http://reason.com/archives/2008/05/12/the-cult-of-the-presidency] with heroic presidents saving the world from aliens, [http://www.imdb.com/title/tt0116629/] asteroids [http://www.imdb.com/title/tt0120647/] and terrorists [http://www.imdb.com/title/tt0118571/]. Evil corporations and businessmen get a far worse rap. The mainstream media is much too soft on politicians in the US in my opinion as well. Where's the US Paxman [http://www.youtube.com/watch?v=Uwlsd8RAoqI]? I think some politicians actually believe that they are acting for the 'greater good'. Sometimes when they lobby for special interests they really convince themselves they are doing a good thing. It is sometimes easier to convince others when you believe your own spiel - this is well known in sales. They surely often think they are saving others from themselves by restricting their liberties and trampling on their rights. Ultimately what they really believe is somewhat irrelevant. I judge them by how they respond to incentives, whose interests they actually promote and what results they achieve. I don't think being motivated by altruism is desirable and I don't think pure altruism exists to any significant degree.
3Mass_Driver11yGood examples! I agree with you that Hollywood is soft on Presidents, and that the mainstream media is soft on just about everyone, with the possible exception of people who might be robbing a convenience store and/or selling marijuana in your neighborhood, details at eleven. That still leaves legislators, bureaucrats, administrators, police chiefs, mayors, governors, and military officers as Rent-A-Villains (tm) for Hollywood action flicks and dramas. From my end, it still looks like you're starting with the belief that government is wrong, and deducing that politicians must be doing harm. Your arguments are sophisticated enough that I'm assuming you've read most of the sequences already, but you might want to review The Bottom Line [http://lesswrong.com/lw/js/the_bottom_line/]. I'm not sure to what extent either of us has an open mind about our fundamental political assumptions. I'm also unsure as to whether the LW community has any interest in reading a sustained duel about abstract versions of anarcholibertarianism and representative democracy. Worse, I at least sympathize with some of your arguments; my main complaint is that you phrase them too strongly, too generally, and with too much certainty. For all those reasons, I'm not going to post on this particular thread in public for a few weeks. I will read and ponder one more public post on this thread by you, if any -- I try to let opponents get in the last word whenever I move the previous question. All that said, if you'd like to talk politics for a while, you're more than welcome to private message me. You seem like a thoughtful person.
4mattnewport11yI described myself as a socialist 10 years ago when I was at university. My parents are lifelong Labour [http://en.wikipedia.org/wiki/Labour_Party_(UK)] voters. I have changed my political views over time, which gives me some confidence that I am open minded in my fundamental political assumptions. Caveats are that my big 5 personality factors are correlated with libertarian politics (suggesting I may be biologically hardwired to think that way) and from some perspectives I could be seen as following the cliched [http://en.wikiquote.org/wiki/Winston_Churchill#Misattributed] route of moving to the right in my political views as I get older. This is partly a stylistic thing - I feel that padding comments with disclaimers tends to detract from readability and distracts from the main point. I try to avoid saying things like in my opinion (should be obvious given I'm writing it) or variations on the theme of the balance of evidence leads me to conclude (where else would conclusions derive from?) or making comments merely to remind readers that 0 and 1 are not probabilities (here of all places I hope that this goes without saying). I used to make heavy use of such caveats but I think they tend to increase verbiage without adding much information. If it helps, imagine that I've added all these disclaimers to anything I say as a footnote. I tend to subscribe to the idea that the best hope for improving politics is to change incentives, not minds [http://athousandnations.com/2010/05/12/change-incentives-not-minds/] but periodically I get drawn into political debates despite myself. I'll try to leave the topic for a while.

One of the hidden assumptions I was thinking of is the assumption that government built roads have been a net benefit for America. The highway system has been a large implicit subsidy for all kinds of business models and lifestyle choices that are not obviously optimal. America's dependence on oil and outsize energy demands are in large part a function of the incentives created by huge government expenditure on highways. Suburban sprawl, McMansions, retail parks and long commutes are all unintended consequences of the implicit subsidies inherent in large s... (read more)

In what way is this a useful response to James_K? What do you believe James_K is doing that he shouldn't be doing (or vice-versa), such that your comment is likely to lead him toward better action?

In A Technical Explanation of Technical Explanation, Eliezer writes,

You should only assign a calibrated confidence of 98% if you're confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We'll keep track of how often you're right, over time, and if it turns out that when you say "90% sure" you're right about 7 times out of 10, then we'll say you're poorly calibrated.

...

What we mean by "probability"

... (read more)
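Eliezer's calibration criterion in the quote above is easy to check numerically. A minimal sketch in Python (the data here are hypothetical, just mirroring the quote's 7-out-of-10 example):

```python
# Group predictions by stated confidence and compare each stated
# confidence with the observed frequency of being right.
from collections import defaultdict

def calibration(predictions):
    """predictions: iterable of (stated_confidence, was_correct) pairs."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[confidence].append(correct)
    return {c: sum(results) / len(results) for c, results in buckets.items()}

# Saying "90% sure" but being right only 7 times out of 10:
data = [(0.9, True)] * 7 + [(0.9, False)] * 3
print(calibration(data))  # {0.9: 0.7} -- poorly calibrated
```

Well-calibrated predictions would produce a dictionary whose values match its keys.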
7Morendil11yAs I understand it, frequentism requires large numbers of events for its interpretation of probability, whereas the bayesian interpretation allows the convergence of relative frequencies with probabilities but claims that probability is a meaningful concept when applied to unique events, as a "degree of plausibility".
6Vladimir_M11yDo you (or anyone else reading this) know of any attempts to give a precise non-frequentist interpretation of the exact numerical values of Bayesian probabilities? What I mean is someone trying to give a precise meaning to the claim that the "degree of plausibility" of a hypothesis (or prediction or whatever) is, say, 0.98, which wouldn't boil down to the frequentist observation that relative to some reference class, it would be right 98/100 of the time, as in the above quoted example. Or to put it in a way that might perhaps be clearer, suppose we're dealing with the claim that the "degree of plausibility" of a hypothesis is 0.2. Not 0.19, or 0.21, or even 0.1999 or 0.2001, but exactly that specific value. Now, I have no intuition whatsoever for what it might mean that the "degree of plausibility" I assign to some proposition is equal to one of these numbers and not any of the other mentioned ones -- except if I can conceive of an experiment or observation (or at least a thought-experiment) that would yield that particular exact number via a frequentist ratio. I'm not trying to open the whole Bayesian vs. frequentist can of worms at this moment; I'd just like to find out if I've missed any significant references that discuss this particular question.
2Wei_Dai11yHave you seen my What Are Probabilities, Anyway? [http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/] post?
1Vladimir_M11yYes, I remember reading that post a while ago when I was still just lurking here. But I forgot about it in the meantime, so thanks for bringing it to my attention again. It's something I'll definitely need to think about more.
3[anonymous]11yMorendil's explanation is, as far as I can tell, correct. What's much more interesting is that giving examples in terms of frequencies seems to be required to engage our normal intuitions about probability. There's at least some research that indicates that when questions of estimation and probability are given in terms of frequencies (i.e. asking 'how many problems do you think you got correct?' instead of 'what is your confidence for this answer?'), many biases disappear completely.

Did you also learn to swim by jumping into water and trying not to drown?

There was actually at some point a theory that "babies are born knowing how to swim", and on one occasion at around age three, at a holiday resort the family was staying at, I was thrown into a swimming pool by a caretaker who subscribed to this theory.

It seems that after that episode nobody could get me to feel comfortable enough in water to get any good at swimming (in spite of summer vacations by the seaside for ten years straight, under the care of my grandad who taught me how to ride a bike). I only learned the basics of swimming, mostly by myself with verbal instruction from a few others, around age 30.

I look at conscious thought like a person trying to simultaneously ride multiple animals. Each animal can manage itself; if left to its own devices it'll keep on walking in some direction, perhaps even a good one. The rider can devote different levels of attention to any given animal, but his level of control bottoms out at some point: he can't control the muscles of the animals, only the trajectory (and not always this).

One animal might be vision: it'll go on recognizing and paying attention to things unspurred, but the rider can rein the animal in and m... (read more)

Good points, but keep in mind snowboarding instructors aren't optimizing the same thing that a rationalist (in their capacity as a rationalist) is optimizing. If you just want to make money, quickly, and churn out good snowboarders, then use the best tools available to you -- you have no reason to convert the instruction into words where you don't have to.

But if you're approaching this as a rationalist, who wants to open the black box and understand why certain things work, then it is a tremendously useful exercise to try to verbalize it, and identify the... (read more)

Yes, you can do this precisely with measure theory, but some will argue that that is nice math but not a philosophically satisfying approach.

I'm not sure I understand what exactly you have in mind. I am aware of the role of measure theory in the standard modern formalization of probability theory, and how it provides for a neat treatment of continuous probability distributions. However, what I'm interested in is not the math, but the meaning of the numbers in the real world.

Bayesians often make claims like, say, "I assign the probability of 0.2 to... (read more)

1Oscar_Cunningham11yBayesians would say that the probability is (some function of) the expected value of one bet. Frequentists would say that it is (some function of) the actual value of many bets (as the number of bets goes to infinity). The whole point of looking at many bets is to make the average value close to the expected value (so that frequentists don't have to think about what "expected" actually means). You never have to say "the expected gain ... over a large number of bets." That would be redundant. What does "expected" actually mean? It's just the probability you should bet at to avoid the possibility of being Dutch-booked on any single bet. ETA: When you are being Dutch-booked, you don't get to look at all the offered bets at once and say "hold on a minute, you're trying to trick me". You get given each of the bets one at a time, and you have to bet Bayesianly for each one if you want to avoid any possibility of sure losses.
5Vladimir_M11yI might be mistaken, but I think this still doesn't answer my question. I understand -- or at least I think I do -- how the Dutch book argument can be used to establish the axioms of probability and the entire mathematical theory that follows from them (including the Bayes theorem). The way I understand it, this argument says that once I've assigned some probability to an event, I must assign all the other probabilities in a way consistent with the probability axioms. For example, if I assign P(A) = 0.3 and P(B) = 0.4, I would be opening myself to a Dutch book if I assigned, say, P(~A) != 0.7 or P(A and B) > 0.3. So far, so good. However, I still don't see what, if anything, the Dutch book argument tells us about the ultimate meaning of the probability numbers. If I claim that the probability of Elbonia declaring war on Ruritania before next Christmas is 0.3, then to avoid being Dutch-booked, I must maintain that the probability of that event not happening is 0.7, and all the other stuff necessitated by the probability axioms. However, if someone comes to me and claims that the probability is not 0.3, but 0.4 instead, in what way could he argue, under any imaginable circumstances and either before or after the fact, that his figure is correct and mine not? What fact observable in physical reality could he point out and say that it's consistent with one number, but not the other? I understand that if we both stick to our different probabilities and make bets based on them, we can get Dutch-booked collectively (someone sells him a bet that pays off $100 if the war breaks out for $39, and to me a bet that pays off $100 in the reverse case for $69 -- and wins $8 whatever happens). But this merely tells us that something irrational is going on if we insist (and act) on different probability estimates. 
It doesn't tell us, as far as I can see, how one number could be correct, and all others incorrect -- unless we start talking about a large reference class of events an
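The collective Dutch book described in the parent comment checks out arithmetically; a quick sketch (the $39 and $69 prices are the comment's own example):

```python
# One agent holds P(war) = 0.4, the other P(war) = 0.3, i.e. P(no war) = 0.7.
# A bookie sells the first a $100-if-war ticket for $39 (cheap at P = 0.4)
# and the second a $100-if-no-war ticket for $69 (cheap at P = 0.7).
income = 39 + 69   # the bookie collects $108 up front
payout = 100       # exactly one of the two tickets pays out, whatever happens
print(income - payout)  # 8 -- the bookie is guaranteed an $8 profit
```

Each buyer considers his own bet favorable, yet together they hand the bookie a sure profit.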

The question of whether an agent's interests are aligned with the principal's is largely orthogonal to the question of whether the agent achieves a positive return. The agent's expected return is more relevant.

It's possible that a sufficiently good instructor could communicate just as effectively through purely verbal instruction but I'm not sure such an instructor exists.

I would suspect this has more to do with the skill of the student in translating verbal descriptions into motions. You can perfectly understand a series of motions to be executed under various conditions, without having the motor skill to assess the conditions and execute them perfectly in real-time.

There are a lot of problems with Myers-Briggs. For example, the test doesn't account for people saying things because they are considered socially good traits. Claims that Myers-Briggs is accurate seem often to be connected to the Forer effect. A paper which discusses these issues is Boyle's "Myers-Briggs Type Indicator (MBTI): Some psychometric limitations", 1995 Australian Psychologist 30, 71–74.

OK, so I suppose it doesn't take much personal contact and trust to acquire a skill of the bike-riding type. In particular if you're an autonomous enough learner, in particular if the skill is relatively basic.

The original assertion, though, was about personal contact and trust being required to transfer a skill of the bike-riding type, and perhaps one reason to make this assertion is that the usual method involves a parent dispensing encouragement and various other forms of help, vis-a-vis a child. (I learnt it from my grandfather, and have a lot of posit... (read more)

1RichardKennaway11yWe must have had very different experiences of many things. Tell me more about learning being risky. I have been learning Japanese drumming since the beginning of last year (in a class), and stochastic calculus in the last few months (from books), and "risky" is not a word it would occur to me to apply to either process. The only risk I can see in learning to ride a bicycle is the risk of crashing.
1Morendil11yOne major risk involved in learning is to your self-esteem: feeling ridiculous when you make a mistake, feeling frustrated when you can't get an exercise right for hours of trying, and so on. As you note, in physical aptitudes there is a non-trivial risk of injury. There is the risk, too, of wasting a lot of time on something you'll turn out not to be good at. Perhaps these things seem "safe" to you, but that's what makes you a learner, in contrast with large numbers of people who can't be bothered to learn anything new once they're out of school and in a job. They'd rather risk their skills becoming obsolete and ending up unemployable than risk learning: that's how scary learning is to most people.
2RichardKennaway11yI would say that the problem then is with the individual, not with learning. Those feelings rest on false beliefs that no-one is born with. Those who acquire them learn them from unfortunate experiences. Others chance to have more fortunate experiences and learn different attitudes. And some manage in adulthood to expose their false beliefs to the light of day, clearly perceive their falsity, and stop believing them. Thus it is said, "The things that we learn prevent us from learning." [http://lesswrong.com/lw/11r/rationality_quotes_july_2009/wh2]
2JoshuaZ11yI doubt people are consciously making this decision, but rather they aren't calculating the potential rewards as opposed to potential risks well. A risk that is in the far future is often taken less seriously than a small risk now.

First I'd like to point out a good interview with Ray Kurzweil, which I found more enjoyable than a lot of his monotonous talks. http://www.motherboard.tv/2009/7/14/singularity-of-ray-kurzweil

As a follow-up, I am curious anyone attempted to mathematically model Ray's biggest and most disputed claim, which is the acceleration rate of technology. Most dispute the claim by pointing out that the data points are somewhat arbitrary and invoke data dredging. It would be interesting if the claim was based on a more of a model basis rather than basically a regressi... (read more)

1JoshuaZ11yNote that Kurzweil's responded to the data dredging complaint by taking major lists compiled by other people, combining them and showing that they fit a roughly exponential graph. (I don't have a citation for this unfortunately). Edit: I'm not aware of anyone making a model of the sort you envision, but it seems to suffer the same problem that Kurzweil has in general, which is a potential overemphasis on information processing ability.

I couldn't post an article due to lack of karma, so I had to post here :P

I notice this site is pretty much filled with proponents of MWI, so I thought it'd be interesting to see if there is anyone on here who is actually against MWI, and if so, why?

After reading through some posts it seems the famous Probability, Preferred Basis and Relativity problems are still unsolved.

Are there any more?

1JamesPfeiffer11yWelcome! Here is a comment by Mitchell Porter. http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/1csi [http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/1csi]
1torekp11ySeconding Mitchell Porter's friendly attitude toward the Transactional Interpretation, I recommend this paper [http://arxiv.org/abs/1001.2867] by Ruth Kastner and John Cramer.

I think the essays most directly related to the rectitude of religion are "Religion's Claim to be Non-Disprovable", which CronoDAS linked, and "Atheism = Untheism + Antitheism". That said, the real introduction to the sort of thinking that led most of us to reject religions is illuminated to an extent in the Mysterious Answers to Mysterious Questions and Reductionism sequences.

The system of canals built in the early 19th century in the United States allowed the settlement of the old west and the development of industry in the north east (by allowing grain from western farms to reach the east). Why do you consider them a terrible idea? They were one of the centerpieces of the American System, which was largely successful.

It's easier to move out? You are not born under a landlord. You do not swear fealty to the flag of the landlord. Nobody thinks the landlord should be able to draft you for civil service. The landlord cannot put you in jail for failing to pay rent. There's a long, long list of other differences where the landlord as government analogy breaks down. I'm surprised anyone still brings it up.

EDIT: Ha. You changed it. In reality, not necessarily that much, although it's nice to have extra governmental agency that you can choose to pay or not, and that is accountable to the government in a transparent way. Asking the government to regulate itself is almost as dumb as asking a logging company to regulate itself.

I'd buy "main road incorporating rope suspension bridges" over "millionaire hiring people to throw themselves off cliffs", but I see what you mean.

It seems the pharma industry discovered the effect of PDE5 inhibitors on erectile dysfunction pretty much by accident. The stuff was initially developed to treat heart disease, initial tests showed it didn't work, but male test subjects reported a useful side effect. Reminds me of the story of post-it notes: the guy who developed them actually wanted to create the ultimate glue, but sadly the result of his best efforts didn't stick very well, so he just went ahead and commercialized what he had.

If big pharma is listening, I'd like to post a request for exercise pills.

And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances? A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'.

Yes, this is true. We will need to assume that the button can analyze the context to determine how to provide happiness for the particular brain it's attached to.

My point is that happiness is not necessarily associated with accomplishment or objective improvement in oneself (though it can be). In such a situation, some people might not value this kind of detached happiness, but that doesn't mean it's not happiness.

Depends on how you define happiness. If you define it as "how much dopamine is in my system", "joy", or "these are the neat brainwaves my brain is giving off", then yes, you can achieve happiness by pressing a button.

Oh, really? How can I get a cheap, legal, repeatable dopamine rush to my brain?

2RomanDavis11yEdited my post to reflect your point. Although, I'm a young male and can achieve orgasm multiple times in under ten minutes with the aid of some lube and free porn. You probably didn't want to know that.

Strange. I thought it made a good point, so I just upvoted it.

Mass_Driver appears to be one of the people who can be fooled all of the time since he judges politicians by what they say and how they present themselves rather than by what their actions say about their incentives and motivations. I did not intend to be ambiguous.

2RobinZ11yThank you - I had suspected that might be your meaning, but I prefer not to pronounce negative judgments on people without clear cause, and I have read plenty of comments which appeared equally damning but were of an innocent nature upon elaboration. Carry on.
2mattnewport11yI appreciate the irony of your veiled criticism. Upvoted.
2RobinZ11yI appreciate your unusually deft grasp of the English language. Upvoted. (I also appreciate the paucity of my education in the sociology of representative government, and must therefore bow out of the discussion. Please discount my opinion appropriately.)

I agree with most of what you said. That's one of the reasons I gave the historical example of SO2. The claim being made by the person I was responding to was not a remark about net gain but the claim that regarding "Good quality government policy" that "There is no more evidence for that than there is for God" and then backing it up with an argument from irrelevant authority. So giving examples to show that's not the case accomplishes the basic goal.

If Omega* makes no reference to the original Omega, I don't understand why they have "opposite behavior with respect to my status as being counterfactually-muggable" (by the original Omega), which was your reason for inventing "duality" in the first place. I apologize, but at this point it's unclear to me that you actually have a proof of anything. Maybe we can take this discussion to email?

It's nice to know I've had an influence :)

As it happens, I'm pretty sceptical as to how much we can know as well. There's nothing like doing policy to gain an understanding of how messy it can be. The social sciences have a less than wonderful record in developing knowledge (look at the record of development economics, as one example), and economic forecasting is still not much better than voodoo, but it's not like there's another group out there with all the answers. We don't have all of the answers, or even most of them, but we're better than nothing, which is the only alternative.

5matt11yNothing is often a pretty good alternative. Government action always comes at a cost, even if only the deadweight loss of taxation [http://en.wikipedia.org/wiki/Excess_burden_of_taxation] (keyphrase "public choice" for reasons you might expect the cost to be higher than that). I'm not trying to turn this into a political debate, but you should consider doing nothing not necessarily a bad thing, and what you do not necessarily better.
2James_K11yWhen I said "better than nothing" I was referring to advice, not the actual actions taken. My background is in economics so I'm quite familiar with both dead-weight loss of taxation and public choice theory, though these days I lean more toward Bryan Caplan's rational irrationality theory of government failure. I agree that nothing is often a good thing for governments to do, and in many cases that is the advice that Cabinet receives.
1mattnewport11yPoliticians' logic: “Something must be done. This is something. Therefore we must do it.”

What do you mean practical ways? I understand the difficulty of transferring kinesthetic or social understanding, but how can we overcome that in nonverbalized fashion?

Some things have to be shown, you have to sometimes take part in an activity to "get" it, learn by trial and error, get feedback pointing out mistakes that you are unaware of, etc...

2CannibalSmith11yFor example?
2RomanDavis11yDo you think you could describe this image to an arbitrarily talented artist and end up with an image that even looked like it was based on it? http://smithandgosling.files.wordpress.com/2009/05/the-reader.jpg [http://smithandgosling.files.wordpress.com/2009/05/the-reader.jpg] It's not so much, "Such insolence, our ideas are so awesome they can not be broken down by mere reductionism" as "Wow, words are really bad at describing things that are very different from what most of the people speaking the language do." I think you could make an elaborate set of equations on a cartesian graph and come up with a drawing that looked like it and say fill up RGB values #zzzzzz at coordinates x,y or whatever, but that seems like a copout since that doesn't tell you anything about how Fragonard did it.
2bogdanb11yThis reminds me of an exercise we did in school. (I don’t remember either when or for what subject.) Everyone was to make a relatively simple image, composed of lines, circles, triangles and the such. Then, without showing one’s image to the others, each of us was to describe the image, and the others were to draw according to the description. The “target” was to obtain reproductions as close as possible to the original image. It was a very interesting exercise for all involved: it’s surprisingly hard to describe precisely, even given the quite simple drawings, in such a way that everyone interprets the description the way you intended it. I vaguely remember I did quite well compared with my classmates in the describing part, and still had several “transcriptions” that didn’t look anywhere close to what I was saying. I think the lesson was about the importance of clear specifications, but then again it might have been just something like English (a foreign language for me) vocabulary training. -------------------------------------------------------------------------------- An example: Draw a square, with horizontal & vertical sides. Copy the square twice, once above and once to the right, so that the two new squares share their bottom and, respectively, left sides with the original square. Inside the rightmost square, touching its bottom-right corner, draw another square of half the original’s size. (Thus, the small square shares its bottom-right corner with its host, and its top-left corner is on the center of its host.) Inside the topmost square, draw another half-size square, so that it shares both diagonals with its host square.
Above the same topmost square, draw an isosceles right-angled triangle; its sides around the right angle are the same length as the large squares’; its hypotenuse is horizontal, just touching the top side of the topmost square; its right angle points upwards, and is horizontally aligned with the center of the or
0Larks11yMy mum had to do this task for her work, save with building blocks, and for the learning-impaired. Instructions like 'place the block flat on the ground, like a bar of soap' were useful. One nit-pick: when you say squares half the size, do you mean with half the side length, or one quarter of the size?
1Risto_Saarelma11yYou could probably get pretty good results without messing with complex equations, by first describing the full picture, then describing what's in the four quadrants made by drawing vertical and horizontal lines that split the image exactly in half, then describing quadrants of these quadrants, split in a similar way, and so on. The artist could use their skills to draw the details without an insanely complex encoding scheme, and the grid discipline would help fix the large-scale geometry of the image. Edit: A 3x3 grid might work better in practice; it's more natural to work with a center region than to put the split point right in the middle of the image, which most probably contains something interesting. On the other hand, maybe the lines breaking up the recognizable shapes in the picture (already described in casual terms for the above-level description) would help bring out their geometrical properties better. Edit 2: Michael Baxandall's book Patterns of Intention has some great stuff on using language to describe images.
1RomanDavis11yDrawing a photograph with the aid of a grid is a common technique for making copying easier, although it's also sometimes used as a teaching tool for early artists. I'm not in love with this explanation (Loomis does much better) but it should give you the essential idea: http://drawsketch.about.com/od/drawinglessonsandtips/ss/griddrawing.htm [http://drawsketch.about.com/od/drawinglessonsandtips/ss/griddrawing.htm] As a teaching tool for people who can't draw, I haven't seen it be effective, but it's awesome if you've got a deadline and don't want to spend all your time checking and rechecking your proportions. I doubt it would be effective, since it's so easy for novice artists to screw up when they have the image right in front of them. There's a more effective method which uses a ruler or compass and is often used to copy Bargue drawings: use precise measurements around a line at the meridian and essentially connect the dots. For the curious: http://conceptart.org/forums/showthread.php?t=121170 [http://conceptart.org/forums/showthread.php?t=121170] This might work long distance: "Okay, draw the next dot 9/32nds of an inch away at 12 degrees down to the right." This still seems like a bit of a cop-out, though. Yes, there are ways to assemble copies of images using a grid, but it doesn't help us figure out how such freehand images were made in the first place. We're not even taking a crack at the little black box.
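The measure-and-connect-the-dots method described above amounts to repeated polar-to-Cartesian conversion; a small sketch (the 9/32-inch and 12-degree figures are just the comment's example values):

```python
import math

def next_dot(x, y, distance, angle_deg):
    """Step from (x, y) by `distance`, at `angle_deg` below the horizontal."""
    dx = distance * math.cos(math.radians(angle_deg))
    dy = -distance * math.sin(math.radians(angle_deg))  # "down" = negative y
    return x + dx, y + dy

# "Draw the next dot 9/32nds of an inch away at 12 degrees down to the right."
x, y = next_dot(0.0, 0.0, 9 / 32, 12)
print(round(x, 4), round(y, 4))  # 0.2751 -0.0585
```

Chaining such steps and connecting the resulting dots reproduces the contour, which is essentially what the Bargue copying method does by hand.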

The reason we shift probability weight away from the deceptive Omega* is that, in the original problem, we are told that we believe Omega to be non-deceptive. The reasoning goes like this: If it looks like Omega and talks like Omega, then it might be Omega or Omega*. But if it were Omega*, then it would be deceiving us, so it's most probably Omega.

In the original problem, we have no reason to believe that No-mega and friends are non-deceptive.

(But if we did, then yes, the dual of a non-deceptive agent would be deceptive, and so have lower prior probability... (read more)

Any recommendations for how much redundancy is needed to make ideas more likely to be comprehensible?

There's a general rule in writing that if you don't know how many items to put in a list, you use three. So if you're giving examples and you don't know how many to use, use three. Don't know if that helps, but it's the main heuristic I know that's actually concrete.

8SoullessAutomaton11yI'm not sure I follow. Could you give a couple more examples of when to use this heuristic?
6[anonymous]11yThe only guideline I'm familiar with is "Tell me three times - tell me what you're going to explain, then explain it, then tell me what you just explained." This seems to work on multiple scales - from complete books to shorter essays (though I'm not sure if it works on the level of individual paragraphs).
4[anonymous]11yIt really depends upon the topic and upon how much inferential difference there is between your ideas and the reader's understanding of the topic. Eliezer's earlier posts are easily understandable to someone with no prior experience in statistics, cognitive science, etc. because he uses a number of examples and metaphors to clearly illustrate his point. In fact, it might be helpful to use his posts as a metric to help answer your question. In general, though, it's probably best to repeat yourself by summarizing your point at both the beginning and end of your essay/post/whatever and by using several examples to illustrate whatever you are talking about, especially if writing for non-experts.

In fact, I can consider all crazy mind-reading reward/punishment agents at once: For every such hypothetical agent, there is its hypothetical dual, with the opposite behavior with respect to my status as being counterfactually-muggable (the one rewarding what the other punishes, and vice versa). Every such agent is the dual of its own dual; in the universal prior, being approached by an agent is about as likely as being approached by its dual; and I don't think I have any evidence that one agent will be more likely to appear than its dual. Thus, my total e... (read more)

4cousin_it11yWhy? Can't your definition of dual be applied to Omega? I admit I don't completely understand the argument.
3Nisan11yOkay, I'll be more explicit: I am considering the class of agents who behave one way if they predict you're muggable and behave another way if they predict you're unmuggable. The dual of an agent behaves exactly the same as the original agent, except the behaviors are reversed. In symbols: * An agent A has two behaviors. * If it predicts you'd give Omega $5, it will exhibit behavior X; otherwise, it will exhibit behavior Y. * The dual agent A* exhibits behavior Y if it predicts you'd give Omega $5, and X otherwise. * A and A* are equally likely in my prior. What about Omega? * Omega has two behaviors. * If it predicts you'd give Omega $5, it will flip a coin and give you $100 on heads; otherwise, nothing. In either case, it will tell you the rules of the game. What would Omega* be? * If Omega* predicts you'd give Omega $5, it will do nothing. Otherwise, it will flip a coin and give you $100 on heads. In either case, it will assure you that it is Omega, not Omega*. So the dual of Omega is something that looks like Omega but is in fact deceptive. By hypothesis, Omega is trustworthy, so my prior probability of encountering Omega* is negligible compared to that of meeting Omega. (So yeah, there is a dual of Omega, but it's much less probable than Omega.) Then, when I calculate expected utility, each agent A is balanced by its dual A*, but Omega is not balanced by Omega*.
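Nisan's cancellation claim in the last sentence of the comment above can be made concrete with a toy calculation. A hedged sketch (the payoff numbers and the prior weight are made up purely for illustration):

```python
# Each agent A yields payoff u_x if you're muggable and u_y otherwise;
# its dual A* swaps the two. If A and A* carry equal prior weight, their
# combined contribution is u_x + u_y either way, so the pairs cancel out
# of the muggable-vs-unmuggable comparison.
def expected_utility(muggable, agent_pairs, weight):
    """agent_pairs: list of (u_x, u_y); each agent and its dual get `weight`."""
    total = 0
    for u_x, u_y in agent_pairs:
        total += weight * (u_x if muggable else u_y)  # agent A
        total += weight * (u_y if muggable else u_x)  # dual agent A*
    return total

pairs = [(10, -3), (-7, 2)]
print(expected_utility(True, pairs, 1) == expected_utility(False, pairs, 1))  # True
```

Only an agent without an equally-weighted dual, like Omega here, can break the tie and make muggability matter.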

You never have to decide in advance, to precommit. Precommitment is useful as a signal to those that can't follow your full thought process, and so you replace it with a simple rule from some point on ("you've already decided"). For Omegas and No-megas, you don't have to precommit, because they can follow any thought process.

0 cousin_it 11y: I thought about it some more and I think you're either confused somewhere, or misrepresenting your own opinions. To clear things up, let's convert the whole problem statement into observational evidence. Scenario 1: Omega appears and gives you convincing proof that Upsilon doesn't exist (and that Omega is trustworthy, etc.), then presents you with CM. Scenario 2: Upsilon appears and gives you convincing proof that Omega doesn't exist, then presents you with anti-CM, taking into account your counterfactual action if you'd seen scenario 1. You wrote: "If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment." Now, I'm not sure what this sentence was supposed to mean, but it seems to imply that you would give up $100 in scenario 1 if faced with it in real life, because receiving the observations would make it "work just as well as the thought experiment". This means you lose in scenario 2. No?
0 Vladimir_Nesov 11y: Omega would need to convince you that Upsilon not just doesn't exist, but couldn't exist, and that's inconsistent with scenario 2. Otherwise, you haven't moved your beliefs to represent the thought experiment. Upsilon must be actually impossible (less probable) in order for it to be possible for Omega to correctly convince you (without deception). Being updateless, your decision algorithm is only interested in observations so far as they resolve logical uncertainty and say which situations you actually control (again, a sort of logical uncertainty), but observations can't refute the logically possible, so they can't make Upsilon impossible if it wasn't already impossible.
0 cousin_it 11y: No, it's not inconsistent. Counterfactual worlds don't have to be identical to the real world. You might as well say that Omega couldn't have simulated you in the counterfactual world where the coin came up heads, because that world is inconsistent with the real world. Do you believe that?
0 Vladimir_Nesov 11y: By "Upsilon couldn't exist", I mean that Upsilon doesn't live in any of the possible worlds (or only in insignificantly few of them), not that it couldn't appear in the possible world where you are speaking with Omega. The convention is that the possible worlds don't logically contradict each other, so two different outcomes of coin tosses exist in two slightly different worlds, both of which you care about (this situation is not logically inconsistent). If Upsilon lives on such a different possible world, and not on the world with Omega, it doesn't make Upsilon impossible, and so you care what it does. In order to replicate Counterfactual Mugging, you need the possible worlds with Upsilons to be irrelevant, and it doesn't matter that Upsilons are not in the same world as the Omega you are talking to. (How to correctly perform counterfactual reasoning on conditions that are logically inconsistent (such as the possible actions you could make that are not your actual action), or rather how to mathematically understand that reasoning, is the septillion dollar question.)
2 cousin_it 11y: Ah, I see. You're saying Omega must prove to you that your prior made Upsilon less likely than Omega all along. (By the way, this is an interesting way to look at modal logic, I wonder if it's published anywhere.) This is a very tall order for Omega, but it does make the two scenarios logically inconsistent. Unless they involve "deception" - e.g. Omega tweaking the mind of counterfactual-you to believe a false proof. I wonder if the problem still makes sense if this is allowed.
0 [anonymous] 11y:

It's not predictable when you'll have to start making payments.

I don't know how much this will support your position, but: mid-1980s, Texas, USA, by my father.

And as I said above, it did take a while to learn, but afterward, my reaction was, "Wait -- all I have to do is keep in motion and I won't fall over. Why didn't he just say that all along?" That began my long history of encountering people who overestimate the difficulty of, or fail to simplify the process of, teaching or justifying something.

ETA: Also, I haven't ridden a bike in over 15 years, so that might be a good test of whether my "just keep in motion" heuristic allows me to preserve the knowledge.

6 mattnewport 11y: The fact that 'like riding a bike' is a saying used to describe skills that you never forget suggests that it wouldn't be a very good test.
0 SilasBarta 11y: Yeah, I wasn't so sure it would be a good test. Still, I'm not sure how well the "you don't forget how to ride a bike" hypothesis is tested, nor how much of its unforgettability is due to the simplicity of the key insights.
1 NancyLebovitz 11y: Most people don't store the insights of bike riding verbally -- the insights are stored kinesthetically. It seems to be much easier to forget math.
0 SilasBarta 11y: I don't disagree, but there's typically a barrier, increasing with time since last use, that must be overcome to re-access that kinesthetic knowledge. And I think verbal heuristics like the one I gave can greatly shorten the time you need to complete this process.

Early 90s, US. I also had training wheels for a while first, which didn't actually teach me anything. I didn't learn until they were removed. And I also had someone running along for reassurance.