To whom it may concern:

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)

663 comments

Cleaning out my computer I found some old LW-related stuff I made for graphic editing practice. Now that we have a store and all, maybe someone here will find it useful:


You are magnificent.

(Alternate title for the LW tabloid — "The Rational Enquirer"?)

Scott Alexander
That's... brilliant. I might have to do another one just for that title.
Yep, it was probably the first rationalist joke ever that made me laugh.
I didn't see that until right now, made me chuckle.
Nearly killed me.
We have a store? Where?
Roko Mijic has a Zazzle store. (See also.)
Tabloid 100% gold. Hanson slayed me.
Oh dear oh dear oh dear oh dear...
Lol. Although, what does astrology have to do with anything LessWrong-ish?
That's a reference to Three Worlds Collide.

Why is LessWrong not an Amazon affiliate? I recall buying at least one book due to it being mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it numbers in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong, and assuming a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), that would work out to $375/year.

IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 5*4 minutes for signups (.com, .de, .fr, …), +20 for the GeoIP database, +3-90 to set up URL rewriting (a wide range, since coding often takes far longer than anticipated; I'd be happy to code this) would give a 'worst case' scenario of $173 annualized returns per hour of work.

Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that Metafilter and StackOverflow do this, though sadly I could not find any information on the returns they see from this. So, is there any reason why nobody has done this, or did nobody just think of it/get around to it?
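For what it's worth, the arithmetic above can be laid out explicitly. A rough sketch; all the inputs are my guesses from the comment, and the 15% commission rate is inferred (it's the rate that makes $375/year consistent with the other numbers; Amazon's actual referral rates vary by category):

```python
# Back-of-envelope check of the figures above. All inputs are guesses;
# the 15% commission rate is inferred, not Amazon's quoted rate.
active_users = 500
buyers = active_users // 4            # 1 in 4 buys a book via an LW link
mean_purchase = 20                    # dollars per purchase
commission = 0.15                     # assumed affiliate referral rate

annual_revenue = buyers * mean_purchase * commission
print(annual_revenue)                 # 375.0

# Worst-case setup time: 5 signups x 4 min, 20 min GeoIP, 90 min coding.
worst_case_hours = (5 * 4 + 20 + 90) / 60
print(round(annual_revenue / worst_case_hours))   # ~173 $/year per hour
```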

From your link, a further link doesn't make it sound great at SO - 2-4x the utter failure. But they are very positive about it because the cost of implementation was very low. Just top-level posts or no geolocating would be even cheaper. You may be amused (or something) by this search
A possibly relevant data point: I usually post any links to books I put online with my amazon affiliate link and in the last 3 months I've had around 25 clicks from links to books I believe I posted in Less Wrong comments and no conversions.

The entire world media seems to have had a mass rationality failure about the recent suicides at Foxconn. There have been 10 suicides there so far this year, at a company which employs more than 400,000 people. This is significantly lower than the base rate of suicide in China. However, everyone is up in arms about the 'rash', 'spate', 'wave'/whatever of suicides going on there.

When I first read the story I was reading a plausible explanation of what causes these suicides by a guy who's usually pretty on the ball. Partly due to the neatness of the explanation, it took me a while to realise that there was nothing to explain.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. It's even harder to achieve this when the fiction comes ready-packaged with a plausible explanation (especially one which fits neatly with your political views).
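To make the base-rate point concrete, here's a minimal sketch. The ~20 per 100,000 per year figure for China's suicide rate is my assumption for illustration; published estimates at the time varied, but all were well above what Foxconn saw:

```python
# Expected suicides per year in a 400,000-person workforce, if it matched
# the national base rate (assumed here to be ~20 per 100,000 per year).
employees = 400_000
base_rate_per_100k = 20               # assumed national rate

expected_per_year = employees / 100_000 * base_rate_per_100k
print(expected_per_year)              # 80.0
# Ten suicides in roughly five months is far *below* this expectation,
# which is the point the media coverage missed.
```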

That's what I thought as well, until I read this post from "Fake Steve Jobs". Not the most reliable source, obviously, but he does seem to have a point:

But, see, arguments about national averages are a smokescreen. Sure, people kill themselves all the time. But the Foxconn people all work for the same company, in the same place, and they’re all doing it in the same way, and that way happens to be a gruesome, public way that makes a spectacle of their death. They’re not pill-takers or wrist-slitters or hangers. ... They’re jumpers. And jumpers, my friends, are a different breed. Ask any cop or shrink who deals with this stuff. Jumpers want to make a statement. Jumpers are trying to tell you something.

Now I'm not entirely sure of the details, but if it's true that all the suicides in the recent cluster consisted of jumping off the Foxconn factory roof, that does seem to be more significant than just 15 employees committing suicide in unrelated incidents. In fact, it seems like it might even be the case that there are a lot more suicides than the ones we've heard about, and the cluster of 15 are just those who've killed themselves via this particular, highly visible method...

Suicide and methods of suicide are contagious, FWIW.

keyword = "werther effect"

I was surprised when I read a statistical analysis on national death rates. Whenever there was a suicide by a particular method published in newspapers or on television, deaths of that form spiked in the following weeks. This is despite the copycat deaths often being called 'accidents' (examples included crashed cars and aeroplanes). Scary stuff (or very impressive statistics-fu).
Yes, this is connected to the existence of suicide epidemics. The most famous example is the ongoing suicide epidemic over the last fifty years in Micronesia, where both the causes and methods of suicide have been the same (hanging). See for example this discussion.
If all the members of a cult committed suicide then the local rate is 100%. The most local rate that we so far know of is 15/400,000 which is 4x below baseline. If these 15 people worked at, say, the same plant of 1,000 workers you may have a point. But we don't know. At this point there is nothing to explain.
Fair enough - my example was poorly thought out in retrospect. But I don't think it's correct that there's nothing to explain. If it's true that all 15 committed suicide by the same method - a fairly rare method frequently used by people who are trying to make a public statement with their death - then there seems to be something needing to be explained. As Fake Steve Jobs points out later in the cited article, if 15 employees of Walmart committed suicide within the span of a few months, all of them by way of jumping off the roof of their Walmart, wouldn't you think that was odd? Don't you think that would be more significant, and more deserving of an explanation, than the same 15 Walmart employees committing suicide in a variety of locations, by a variety of different methods? I'm not committing to any particular explanation here (Douglas Knight's suggestion, for one, sounds like a plausible explanation which doesn't involve any wrongdoing on Foxconn's part), I'm just saying that I do think there's "something to explain".
Just curious: why the downvote? Was this just a case of downvote = disagree? If so, what do you disagree with specifically?
Strange. I thought it made a good point, so I just upvoted it.
The first question that came to mind when I heard about this story was 'what's the base rate?'. I didn't investigate further but a quick mental estimate made me doubt that this represented a statistically significant increase above the base rate. It's disappointing yet unsurprising that few if any media reports even consider this point.
Wasn't there a somewhat well-publicized "spate" of suicides at a large French telecom a while back? I remember the explanation being the same - the number observed was just about what you'd expect for an employer of that size.
Even if the suicide rate was somewhat higher than average, it still doesn't necessarily tell you much. You should really be looking at the probability of that number of suicides occurring in some distinct subset of the population - given all the subsets of a population that you can identify, you would expect some to have higher suicide rates than the population as a whole. The relevant question is 'what is the probability that you would observe this number of suicides by chance in some randomly selected subset of this size?' Incidentally, the rate appears to be below that of Cambridge University students.
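The subset point can be illustrated with a quick simulation. The base rate and the number of comparable subsets are both made-up inputs, chosen only to show the spread you get by chance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose 1,000 identifiable subsets of 400,000 people each, with suicides
# arriving at an assumed base rate of ~20 per 100,000 per year, observed
# over half a year. Each subset's count is approximately Poisson.
expected = 400_000 / 100_000 * 20 / 2     # = 40 per subset per half-year
counts = rng.poisson(expected, size=1000)

# By chance alone, some subsets look dramatically "elevated" or "low".
print(counts.min(), counts.max())
```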
Yes, this is my counter-counter-criticism as well. 'Sure, the overall China rate may be the same, but what's the suicide rate for young, employed workers employed by a technical company with bright prospects? I'll bet it's lower than the overall rate...'
Agreed. Also, I think what got the suicides in China in the news was that the victim attributed the suicide specifically to some weird policy or rule the company adhered to. It could be that the "normal" suicides at the company are being ignored, and the ones being reported are the suicides on top of this, justifying that concern that this is abnormal.
This was why I went looking for stats on suicides amongst university students. I remembered some talk when I was at Cambridge of a high suicide rate, which you might see as somewhat similarly counter-intuitive to a high suicide rate for 'young, employed workers employed by a technical company with bright prospects'. Actually, there are a number of reasons to expect a somewhat elevated suicide rate in a relatively high pressure environment where large numbers of young people have left home for the first time and are living in close proximity to large numbers of strangers their own age. Stories about high suicide rates at elite universities tend to take a very different tack to stories about Chinese workers however.
Ya, I can see how something like this could happen. By the way, a few statistics don't exactly prove anything. Were there 10 deaths last year? The year before? Do other factories have similar problems? Etc. Too many variables.
Incidentally, note that the evidence strongly suggests that actively taking out your aggression actually increases rather than decreases stress and aggression levels. See for example, Berkowitz's 1970 paper "Experimental investigation of hostility catharsis" in the Journal of Consulting and Clinical Psychology.

Marginal Revolution linked to A Fine Theorem, which has summaries of papers in decision theory and other relevant econ, including the classic "agreeing to disagree" results. A paper linked there claims that the probability settled on by Aumann-agreers isn't necessarily the same one as the one they'd reach if they shared their information, which is something I'd been wondering about. In retrospect this seems obvious: if Mars and Venus only both appear in the sky when the apocalypse is near, and one agent sees Mars and the other sees Venus, then they conclude the apocalypse is near if they exchange info, but if the probabilities for Mars and Venus are symmetrical, then no matter how long they exchange probabilities they'll both conclude the other one probably saw the same planet they did. The same thing should happen in practice when two agents figure out different halves of a chain of reasoning. Do I have that right?

ETA: it seems, then, that if you're actually presented with a situation where you can communicate only by repeatedly sharing probabilities, you're better off just conveying all your info by using probabilities of 0 and 1 as Morse code or whatever.

ETA: the paper works out an example in section 4.

I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked, what is the probability that the sum of the dice is 9?

Now if one sees a 1 or 2, he knows the probability is zero. But let's suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1/6. Both players exchange this first estimate. Now curiously although they agree, it is not common knowledge that this value of 1/6 is their shared estimate. After hearing 1/6, they know that the other die is one of the four values 3-6. So actually the probability is calculated by each as 1/4, and this is now common knowledge (why?).

And of course this estimate of 1/4 is not what they would come up with if they shared their die values; they would get either 0 or 1.
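The update steps above can be checked by brute-force enumeration. A small sketch with exact fractions (the particular rolls 3 and 4 are an arbitrary example from the 3-6 case):

```python
from fractions import Fraction

def estimate(my_die, possible_other):
    """P(sum == 9) given my own die and the set of values the
    other die could still take."""
    hits = sum(1 for o in possible_other if my_die + o == 9)
    return Fraction(hits, len(possible_other))

ALL = set(range(1, 7))
a, b = 3, 4                      # example rolls, both in 3..6

# Round 1: each player conditions only on their own die.
first_a = estimate(a, ALL)       # 1/6
first_b = estimate(b, ALL)       # 1/6

# Hearing "1/6" reveals the other's die is in 3..6 (a 1 or 2 gives 0).
poss_b = {o for o in ALL if estimate(o, ALL) == first_b}   # {3, 4, 5, 6}
poss_a = {o for o in ALL if estimate(o, ALL) == first_a}

# Round 2: every die value in 3..6 now yields the same answer, 1/4,
# so the estimate is common knowledge -- no announcement changes it.
assert estimate(a, poss_b) == Fraction(1, 4)
assert all(estimate(v, poss_a) == Fraction(1, 4) for v in poss_b)
```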

Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.

Same setup as before, two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.

I will leave it as a puzzle for now in case someone wants to work it out, but it appears to me that in this case, they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of agreement where they nevertheless change their estimates - perhaps related to the phenomenon of "violent agreement" we often see.

Strange how this small change to the conditions gives such different results. But it's a good example of how agreement is inevitable.
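For anyone who would rather not work the puzzle by hand, here's a small simulation of the announcement protocol (simultaneous exchange, exact fractions; the example roll is arbitrary). After each round, everyone discards the worlds in which either player would have announced something different:

```python
from fractions import Fraction

# All 36 equally likely worlds (a, b) = the two private die rolls.
WORLDS = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def hit(w):
    return w[0] + w[1] in (7, 8)

def announce(i, value, public):
    """Player i's posterior for the event, given their die and the
    publicly known set of still-possible worlds."""
    cell = [w for w in public if w[i] == value]
    return Fraction(sum(hit(w) for w in cell), len(cell))

def run(true_world, rounds=8):
    public = set(WORLDS)
    history = []
    for _ in range(rounds):
        ann = (announce(0, true_world[0], public),
               announce(1, true_world[1], public))
        history.append(ann)
        # Both announcements become public simultaneously; keep only
        # worlds consistent with what was actually said.
        public = {w for w in public
                  if announce(0, w[0], public) == ann[0]
                  and announce(1, w[1], public) == ann[1]}
    return history

# With rolls (2, 5) -- sum 7 -- the estimates pass through disagreement
# and "violent agreement" before settling on the truth:
for ann in run((2, 5)):
    print(ann[0], ann[1])
# 1/3 1/3, then 2/5 2/5, then 1/4 1/2, then 1/3 1, then 1 1 (and stays).
```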

But in reality, what happens when people try to Aumann-agree involves a different set of problems, such as status-signalling - especially the way updating toward someone else's probability is instinctively seen as giving them status.
Thanks a lot for both links. I already understood common knowledge, but the paper is a very pleasing and thorough treatment of the topic.

Observation: The May open thread, part 2, had very few posts in its last days, whereas this one has exploded within the first 24 hours of its opening. I know I deliberately withheld content from it: once a thread is superseded by a new one, few people go back and look at posts in the previous one. This predicts a slowing of content in the open threads as the month draws to a close, and a sudden burst at the start of the next month - a distortion that is an artifact of the way we organise discussion. Does anybody else follow the same rule for their open-thread postings? Is there something that could be done to solve this artificial throttling of discussion?

Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.

I would support that.
From observations even of previous "Part 2"s, it would seem that there is enough content to support that frequency of open thread.
I don't post in the open threads much, but if I run into a good rationality quote I tend to wait until the next rationality quotes thread is opened unless the current one is less than a week or so old.

I think my only other comment here has been "Hi." But, the webcomic SMBC has a treatment of the prisoner's dilemma today and I thought of you guys.


'Here is Eric Boyd's talk about the device he built called North Paw - a haptic compass anklet that continuously vibrates in the direction of North. It's a project of Sensebridge, a group of hackers that are trying to "make the invisible visible".'

The technology itself is pretty interesting; see also

So I've started drafting the very beginnings of a business plan for a Less Wrong (book) store-ish type thingy. If anybody else is already working on something like this and is advanced enough that I should not spend my time on this mini-project, please reply to this comment or PM me. However, I would rather not be inundated with ideas as to how to operate such a store yet: I may make a Less Wrong post in the future to gather ideas. Thanks!

My theory of happiness.

In my experience, happy people tend to be more optimistic and more willing to take risks than sad people. This makes sense, because we tend to be more happy when things are generally going well for us: that is when we can afford to take risks. I speculate that the emotion of happiness has evolved for this very purpose, as a mechanism that regulates our risk aversion and makes us more willing to risk things when we have the resources to spare.

Incidentally, this would also explain why people falling in love tend to be intensely happy at first. In order to get and keep a mate, you need to be ready to take risks. Also, if happiness is correlated with resources, then being happy signals having lots of resources, increasing your prospective mate's chances of accepting you. [...]

I was previously talking with Will about the degree to which people's happiness might affect their tendency to lean towards negative or positive utilitarianism. We came to the conclusion that people who are naturally happy might favor positive utilitarianism, while naturally unhappy people might favor negative utilitarianism. If this theory of happiness is true, then that makes perfect sense.

How does this make sense exactly? A happy person, with more resources, would be better off not taking risks that could result in losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results. If you told a rich person, "jump off that cliff and I'll give you a million dollars," they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it as long as there was a chance they could survive. My idea of why people were happy wasn't a static value of how many resources they had, but a comparative value. A rich person thrown into poverty would be very unhappy, but the poor person might be happy.
Kaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff. An animal in a bad (but not-yet catastrophic) situation is better off exploiting available resources than scouting new ones, since in the EEA, any "bad" situation is likely to be temporary (winter, immediate presence of a predator, etc.) and it's better to ride out the situation. OTOH, when resources are widely available, exploring is more likely to be fruitful and worthwhile. The connection to happiness and risk-taking is more tenuous. I'd be interested in seeing the results of that experiment. But "rich" and "poor" are even more loosely correlated with the variables in question - there are unhappy "rich" people and unhappy "poor" people, after all. (In other words, this is all about internal, intuitive perceptions of resource availability, not rational assessments of actual resource availability.)
If I were to wager a guess, the people who would accept the deal are those who feel they are in a catastrophic situation. Speaking of catastrophic situations, have you seen The Wages of Fear or any of the remakes? I've only seen Sorcerer, but it was quite good. It's a rather more realistic situation than jumping off a cliff, but the structure is the same: a group of desperate people driving cases of nitroglycerin-sweating dynamite across rough terrain to get enough money that they can escape.
Or maybe not...
I'd buy "main road incorporating rope suspension bridges" over "millionaire hiring people to throw themselves off cliffs", but I see what you mean.
I believe you're right, now that I think about that.
I was kind of thinking expected value. In principle, if you always go by expected value, in the long run you will end up maximizing your value. But this may not be the best move to make if you're low on resources, because with bad luck you'll run out of them and die even though you made the moves with the highest expected value. However, your objection does make sense and Eby's reformulation of my theory is probably the superior one, now that I think about it.
Hi Kaj, I really liked the article. I had a relevant theory to explain the perceived difference in attitudes of north Europeans versus south Europeans. I guess you could call it a theory of unhappiness. Here goes:

I take as granted that mildly depressed people tend to make more accurate depictions of reality, and that north Europeans have a higher incidence of depression and also much better functioning economies and democracies. Given a low-resource environment, one needs to plan further ahead and make more rational projections of the future. If being on the depressive side makes one more introspective and thoughtful, then it would be conducive to having better long-term plans. In a sense, happiness could be greed-inducing, in a greedy-algorithm sense. This more or less agrees with Kaj's theory. OTOH, not-happiness would encourage long-term planning and even more co-operative behaviour.

In the current environment, resources may not be scarce, but our world has become much more complex, actions having much deeper consequences than in the ancestral environment (Nassim Nicholas Taleb makes this point in The Black Swan), therefore also needing better thought-out courses of action. So northern Europeans have lucked out, in that their adaptation to climate has been useful for the current reality.

If one sees corruption as local-greedy behaviour, as opposed to lawfulness as global-cooperative behaviour, this would also explain why, going closer to the equator, you generally see an increase in corruption and also failures in democratic government. Taken further, it would imply that near-equator peoples are simply not well-adapted to democratic rule, which demands a certain limiting of short-term individual freedom for the longer-term common good, and that a more distributed/localised form of governance would do much better.

I think this (rambling) theory can more or less be pieced together with Kaj's, adding long-term planning as a second dimension. Disclaimer: Before anyone accuses me of discrimination…
If any given instance of discrimination increases the degree of correspondence between your map and the territory, then there is no need for apology. Are these sorts of disclaimers really necessary here?
Relevant to your interests:
Greatly appreciated. Present-oriented vs. future-oriented is a good way to put it, and I suspect there is some more research I could find if I dig further behind that speech.
And a very condensed note I wrote to myself (in brainstormish mode, without regard for feasibility or testability):

Searle has some weird beliefs about consciousness. Here is his description of a "Fading Qualia" thought experiment, where your neurons are replaced, one by one, with electronics:

... as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, ‘‘We are holding up a red object in front of you; please tell us what you see.’’ You want to cry out, ‘‘I can’t see anything. I’m going totally blind.’’ But you hear your voice saying in a way that is completely out of your control, ‘‘I see a red object in front of me.’’

(J.R. Searle, The rediscovery of the mind, 1992, p. 66, quoted by Nick Bostrom here.)

This nightmarish passage made me really understand why the more imaginative people who do not subscribe to a computational theory of mind are afraid of uploading.

My main criticism of this story would be: What does Searle think is the physical manifestation of those panicked, helpless thoughts?

I don't have Searle's book, and may be missing some relevant context. Does Searle believe normal humans with unmodified brains can consciously affect their external behavior?

If yes, then there's a simple solution to this fear: do the experiment he describes, and then gradually return the test subject to his original, all-biological condition. Ask him to describe his experience. If he reports (now that he's free of non-biological computing substrate) that he actually lost his sight and then regained it, then we'll know Searle is right, and we won't upload. Nothing for Searle to fear.

But if, as I gather, Searle believes that our "consciousness" only experiences things and is never a cause of external behavior, then this is subject to the same criticism as Searle's support of zombies.

Namely: if Searle is right, then the reason he is giving us this warning isn't because he is conscious. Maybe in fact his consciousness is screaming inside his head, knowing that his thesis is false, but is unable to stop him from publishing his books. Maybe his consciousness is already blind, and has been blind from birth due to a rare developmental accident, and it doesn't know what words he types in his books at all. Why should we listen to him, if his words about conscious experience are not caused by conscious experience?

Searle thinks that consciousness does cause behavior. In the scary story, the normal cause of behavior is supplanted, causing the outward appearance of normality. Thus, it's not that consciousness doesn't affect things, but just that its effects can be mimicked. Nisan's criticism is devastating, and has the advantage of not requiring technological marvels to assess. I do like the elegance of your simple solution, though.
David Chalmers discusses this particular passage by Searle extensively in his paper "Absent Qualia, Fading Qualia, Dancing Qualia". He demonstrates very convincingly that Searle's view is incoherent except under the assumption of strong dualism, using an argument based on more or less the same basic idea as your objection.

To the powers that be: Is there a way for the community to have some insight into the analytics of LW? That could range from periodic reports, to selective access, to open access. There may be a good reason why not, but I can't think of it. Beyond generic transparency brownie points, since we are a community interested in popularising the website, access to analytics may produce good, unforeseen insights. Also, authors would be able to see the viewership of their articles, and related keyword searches, and so be better able to adapt their writing to the audience. For me, a downside of posting here instead of on my own blog is the inability to access analytics. Obviously I still post here, but this is a downside that may not have to exist.


LW too focused on verbalizable rationality

This comment got me thinking about it. Of course, LW being a website, it can only deal with verbalizable information (rationality). So what are we missing? Skillsets that are not verbalizable and have to be learned in other, practical ways: interpersonal relationships being just one of many. I also think the emotional brain is part of it. There might be people here who are brilliant thinkers yet emotionally miserable because of their personal context or upbringing, and I think dealing with that would be important. I think a holistic approach is required. Eliezer had already suggested the idea of a rationality dojo. What do you think?

I've been talking to various people about the idea of a Rationality Foundation (working title) which might end up sponsoring or facilitating something like rationality dojos. Needless to say this idea is in its infancy.
The example of coding dojos for programmers might be relevant, and not just for the coincidence in metaphors.
I'm a draftsman, and it always struck me how absolutely terrible the English language is for talking about ludicrously simple visual concepts precisely. Words like parallel and perpendicular should be one syllable long. I wonder if there's a way to apply rationality/mathematical thinking beyond geometry and into the world of art.
According to wiki: "Tacit knowledge (as opposed to formal or explicit knowledge) is knowledge that is difficult to transfer to another person by means of writing it down or verbalizing it" Thus: "Effective transfer of tacit knowledge generally requires extensive personal contact and trust. Another example of tacit knowledge is the ability to ride a bicycle." Supports the dojo idea...perhaps in SecondLife once the graphics are better?
How much personal contact and trust does it take to learn to ride a bicycle?
As someone who learned cycling as a near-adult, the main insight is that you turn the wheel in the direction in which the bike is falling to push it back vertical. Once I had been told that negative-feedback mechanism, the only delay was until I got frustrated enough with going slowly to say, "heck with this 'rolling down a slight slope' game, I'm just going to turn the pedals." Whereupon I was genuinely riding the bicycle. ...for about a minute, until I got the bright idea of trying to jump the curb. Did you know that rubbing the knee off a pair of jeans will leave a streak of blue on concrete?
What was your total time frame in learning to ride? Was there a period before you were told about turning the wheel?
I estimate the total time between donning the helmet and hitting the sidewalk was less than an hour - but it was probably a decade ago, so I don't trust my recollections.
Hahaha, great catch. Though maybe they meant personal contact with a bicycle!
Uh, lots? Who did you learn it from?
Per my upcoming "Explain Yourself!" article, I am skeptical about the concept of "tacit knowledge". For one thing, it puts up a sign that says, "Hey, don't bother trying to explain this in words", which leads to, "This is a black box; don't look inside", which leads to, "It's okay not to know how this works".

Second, tacit knowledge often turns out to be verbalizable, which calls into question whether the term "tacit" is really carving out a valid cluster in thingspace[1]. For example, take the canonical example of learning to ride a bike. It's true that you can learn it hands-on, using the inscrutable, patient training of the master. But you can also learn it by being told the primary counterintuitive insights ("as long as you keep moving, you won't tip over"), and then a little practice on your own. In that case, the verbal knowledge has substituted one-for-one with (much of) the tacit learning you would have gained on your own from practice. So how much of it was "really" tacit all along? How much of it are you just calling tacit because the master never reflected on what they were doing?

So for me, the appeal to "difficulty of verbalizing it" certainly has some truth to it, but I find it mainly functions to excuse oneself from critical introspection, and from opening important black boxes. I advise people to avoid using this concept if remotely possible; it tends to say more about you than about the inherent inscrutability of the knowledge.

[1] To someone who sucks at programming, the ability to revise a recipe to produce more servings is "tacit knowledge".
As someone who has made much of the concept of tacit knowledge in the past, I'll have to say you have a point. (I'm now considering the addendum: "made much of it because it served my interests to present some knowledge I claimed to have as being of that sort". I'm not necessarily endorsing that hypothesis, just acknowledging its plausibility.) It still feels as if, once we toss that phrase out the window, we need something to take its place: words are not universally an effective method of instruction, practice clearly plays a vital part in learning (why?), and the hypothesis that a learner reconstructs knowledge rather than being the recipient of a "transfer" in a literal sense strikes me as facially plausible given the sum of my learning experiences. Perhaps an adult can comprehend "as long as you keep moving, you won't tip over", but I have a strong intuition it wouldn't go over very well with kids, depending on age and dispositions. My parenting experience (anecdotal evidence as it may be) backs that up. You need to see what a kid is doing right or wrong to encourage the former and correct the latter, you need a hefty dose of patience as the kid's anxieties get in the way sometimes for a long while. Learning to ride a bike is a canonical example because it is taught early on, there is hedonic value in learning it early on, but it is typically taught at an age when a kid rarely (or so my hunch says) has the learning-ability to understand advice such as "as long as you keep moving, you won't tip over". There is such a thing as learning to learn (and just how verbalizable is that skill?). It's all too easy to overgeneralize from a sparse set of examples and obtain a simple, elegant, convincing, but false theory of learning. I hope your article doesn't fall into that trap. :)
I don't disagree, but I don't see how it contradicts my position either. The evidence you give against words being effective is that, basically, they don't fully constrain what the other person is being told to do, so they can always mess up in unpredictable ways. That's true, but it just shows that you need to understand the listener's epistemic state to know which insights they lack that would allow them to bridge the gap.

People do get this wrong, and end up giving "let them eat cake" advice -- advice that would only be useful to someone who had already solved the problem. But at the same time, a good understanding of where they are can lead to remarkably informative advice. (I've noticed Roko and HughRistik are excellent at this when it comes to human sociality, while some are stuck in "let them eat cake" land.)

Well, in my case, once it clicked for me, my thought was, "Oh, so if you just keep moving, you won't tip over; it's only when you stop or slow down that you tip -- why didn't he just tell me that?"

Well, if it were a sparse set I wouldn't be so confident. I have a frustratingly long history of people telling me something can't be explained or is really hard to explain, followed by me explaining it to newbies with relative ease. And of cases where someone appeals to their inarticulable personal experience for justification, when really it was an articulable hidden assumption they could have found with a little effort.

Anyone is welcome to PM me for an advance draft of the article if they're interested in giving feedback.
I'm in general agreement, but it leaves me wondering if you underestimate how much effort it takes to notice and express how to do things which are usually non-verbal.
I don't understand. The part you quoted isn't about expressing how to do non-verbal things; it's about people who say, "when you get to be my age, you'll agree, [and no, I can't explain what experiences you have as you approach my age that will cause you to agree, because that would require a claim about how to interpret the experience, which you would have a chance of refuting]". What does that have to do with the effort needed to express how to do non-verbal things?
Excuse me-- I wasn't reading carefully enough to notice that you'd shifted from claims that it was too hard to explain non-verbal skills to claims that it was too hard to explain the lessons of experience.
Okay. Well, then, assuming your remark was a reply to a different part of my comment, my answer is that yes, it may be hard, but for most people, I'm not convinced they even tried.
Am I interpreting you correctly that you are not denying that some skills can only be learned by practicing the skill (rather than by reading about or observing the skill) but are saying that verbal or written instruction is just as effective as an aid to practice as demonstration if done well? I'm still a bit skeptical about this claim. When I was learning to snowboard for example it was clear that some instructors were better able to verbalize certain key information (keep your weight on your front foot, turn your body first and let the board follow rather than trying to turn the board, etc.) but I don't think the verbal instructions would have been nearly as effective if they were not accompanied by physical demonstrations. It's possible that a sufficiently good instructor could communicate just as effectively through purely verbal instruction but I'm not sure such an instructor exists. The fact that this is a rare skill also seems relevant even if it is possible - there are many more instructors who can be effective if they are allowed to combine verbal instruction with physical demonstrations.
Good points, but keep in mind snowboarding instructors aren't optimizing the same thing that a rationalist (in their capacity as a rationalist) is optimizing. If you just want to make money, quickly, and churn out good snowboarders, then use the best tools available to you -- you have no reason to convert the instruction into words where you don't have to.

But if you're approaching this as a rationalist, who wants to open the black box and understand why certain things work, then it is a tremendously useful exercise to try to verbalize it, and identify the most important things people need to know -- knowledge that can allow them to leapfrog a few steps in learning, even and especially if they can't reach the Holy Grail of full transmission of the understanding.

And I'd say (despite the first paragraph in this comment) that it's a good thing to do anyway. I suspect that people's inability to explain things stems in large part from a lack of trying -- specifically, a lack of trying to understand what mental processes are going on inside them that allow a skill to work like it does. They fail to imagine what it is like not to have this skill, and assume certain things are easy or obvious which really aren't.

To more directly answer your question: yes, I think verbal instruction, if it understands the epistemic state of the student, can replace a lot of what normally takes practice to learn. There are things you can say that get someone into just the right mindset to bypass a huge number of errors that are normally learned hands-on.

My main point, though, is that people severely overestimate the extent of their knowledge which can't be articulated, because the incentives for such a self-assessment are very high. Most people would do well to avoid appeals to tacit knowledge, and instead introspect on their knowledge so as to gain a deeper understanding of how it works, labeling knowledge as "tacit" only as a last resort.
I would suspect this has more to do with the skill of the student in translating verbal descriptions into motions. You can perfectly understand a series of motions to be executed under various conditions, without having the motor skill to assess the conditions and execute them perfectly in real-time.
I'm looking forward to your article, and I think that you're right to emphasize the vast gap between "unverbalizable" and "I don't know at the moment how to verbalize it". But, to really pass the "bicycle test", wouldn't you have to be able to explain verbally how to ride a bike so well that someone could get right on the bike and ride perfectly on the first try? That is, wouldn't you have to be able to eliminate even that "little practice on your own"? Or is there some part of being able to ride a bike that you don't count as knowledge, and which forms the ineliminable core that needs to be practiced?
Depends on what the "bicycle test" is testing. For me, the fact that something is staked out as a canonical, grounding example of tacit knowledge, and then is shown to be largely verbalizable, blows a big hole in the concept. It shows that "hey, this part I can't explain" was groundless in several subcases.

I do agree that some knowledge probably deserves to be called tacit. But given the apparent massive relativity of tacitness, and the above example, it seems that these cases are so rare that you're best off working from the assumption that nothing is tacit, rather than looking for cases that you can plausibly claim are tacit.

It's like any other case where one possibility should be considered last. If you do a random test on General Relativity and find it to be way off, you should first work from the assumption that you, rather than GR, made a mistake somewhere. Likewise, if your instinct is to label some of your knowledge as tacit, your first assumption should be, "there's some way I can open up this black box; what am I missing?". Yes, these beliefs could be wrong -- but you need a lot more evidence before rejecting them should even be on the radar. (And to be clear, I don't claim my thesis about tacitness deserves the same odds as GR!)
Just to be clear, I don't think it has been shown in the case of bike-riding that the knowledge can be transferred verbally. You can give someone verbal instruction that will help them improve faster at bike-riding, that isn't at issue. It's much less clear that telling someone the actual control algorithm you use when you ride a bike is sufficient to transform them from novice into proficient bike rider. You can program a robot to ride a bike and in that sense the knowledge is verbalizable, but looking at the source code would not necessarily be an effective method of learning how to do it.
I think being able to verbally transmit the knowledge that solves most of the problem for them is proof that at least some of the skill can be transferred verbally. And of course it doesn't help to tell someone the detailed control algorithm to ride a bike, and I wouldn't recommend doing so as an explanation -- that's not the kind of information they need! One day, I think it will be possible to teach someone to ride a bike before they ever use one, or even carry out similar actions, though you might need a neural interface rather than spoken words to do so. The first step in such a quest is to abandon appeals to tacit knowledge, even if there are cases where it really does exist.
None, and nobody. I got a bicycle and tried to ride it until I could ride it. It took about three weeks from never having sat on a bicycle to confidently mixing with heavy traffic. (At the age of 22, btw. I never had a bicycle as a child.) The first line that JoshB quoted from Wikipedia is fine -- there is this class of knowledge -- but I don't agree with the second at all. Some things you can learn just by having a go untutored. Where an instructor is needed, e.g. in martial arts, the only trust required is enough confidence in the competence of the teacher to do as he says before you know why.
How typical is that bike-learning history in your estimation?
I guess that more people learn to ride a bike in childhood than as adults, but I believe that the usual method at any age is to get on it and ride it. There really isn't much you can do to teach someone how to do it.
OK, so I suppose it doesn't take much personal contact and trust to acquire a skill of the bike-riding type, especially if you're an autonomous enough learner and the skill is relatively basic.

The original assertion, though, was about personal contact and trust being required to transfer a skill of the bike-riding type, and perhaps one reason to make this assertion is that the usual method involves a parent dispensing encouragement and various other forms of help to a child. (I learnt it from my grandfather, and have a lot of positive affect to accompany the memories.)

Providing an environment in which learning, an intrinsically risky activity, becomes safe and pleasurable -- I know from experience that this takes rapport and trust; it doesn't just happen. Such an environment is perhaps not a prerequisite to acquiring a non-verbalized skill, but it does help a lot; as such it makes it possible for people who would otherwise give up on learning before they made it to the first plateau.
We must have had very different experiences of many things. Tell me more about learning being risky. I have been learning Japanese drumming since the beginning of last year (in a class), and stochastic calculus in the last few months (from books), and "risky" is not a word it would occur to me to apply to either process. The only risk I can see in learning to ride a bicycle is the risk of crashing.
One major risk involved in learning is to your self-esteem: feeling ridiculous when you make a mistake, feeling frustrated when you can't get an exercise right for hours of trying, and so on. As you note, in physical aptitudes there is a non-trivial risk of injury. There is the risk, too, of wasting a lot of time on something you'll turn out not to be good at. Perhaps these things seem "safe" to you, but that's what makes you a learner, in contrast with large numbers of people who can't be bothered to learn anything new once they're out of school and in a job. They'd rather risk their skills becoming obsolete and ending up unemployable than risk learning: that's how scary learning is to most people.
I would say that the problem then is with the individual, not with learning. Those feelings rest on false beliefs that no-one is born with. Those who acquire them learn them from unfortunate experiences. Others chance to have more fortunate experiences and learn different attitudes. And some manage in adulthood to expose their false beliefs to the light of day, clearly perceive their falsity, and stop believing them. Thus it is said, "The things that we learn prevent us from learning."
I doubt people are consciously making this decision, but rather they aren't calculating the potential rewards as opposed to potential risks well. A risk that is in the far future is often taken less seriously than a small risk now.
People who buy insurance are demonstrating the ability to trade off small risks now against bigger risks in the future, but often the same people invest less in keeping their professional skills current than they do in insurance.

Personal experience tells me that I had (and still have) a bunch of Ugh fields related to learning, which suggests that there are actual negative consequences of engaging in the activity (per the theory of Ugh fields). My hunch is that the perceived risks of learning account in significant part for why people don't invest in learning, compared to the low perceived reward of learning. I could well be wrong. How could we go about testing this hypothesis?
I'm not sure. It may require a more precise statement to make it testable.
Are you serious? I could never have learned to ride a bike without my parents spending hours and hours trying to teach me. Did you also learn to swim by jumping into water and trying not to drown? I'd be very surprised if most people learned to ride a bike without instruction, but I may be unusual.
There was actually at some point a theory that "babies are born knowing how to swim", and on one occasion at around age three, at a holiday resort the family was staying at, I was thrown into a swimming pool by a caretaker who subscribed to this theory. It seems that after that episode nobody could get me to feel comfortable enough in water to get any good at swimming (in spite of summer vacations by the seaside for ten years straight, under the care of my grandad who taught me how to ride a bike). I only learned the basics of swimming, mostly by myself with verbal instruction from a few others, around age 30.
I'm so sorry. That is truly horrific abuse.
Maybe there's a cultural difference, but I don't know what country you're in (or were in). I've never heard of anyone learning to ride a bike except by riding it. But clearly we need some evidence. I don't care for the bodge of using karma to conduct a poll, so I'll just ask anyone reading this who can ride a bicycle to post a reply to this comment saying how they learned, and in what country. "Taught" should mean active instruction, something more than just someone being around to provide comfort for scrapes and to keep children out of traffic until they're ready.

Results so far:

RichardKennaway: self-taught as adult, late 70's, UK
Morendil: taught in childhood by grandfather, UK?
Blueberry: taught in childhood by parents, where?

So that's two to one against my current view, but those replies may be biased: other self-taught people will not have had as strong a reason to post agreement.
I don't know how much this will support your position, but: mid 1980s, Texas, USA, by my father. And as I said above, it did take a while to learn, but afterward, my reaction was, "Wait -- all I have to do is keep in motion and I won't fall over. Why didn't he just say that all along?" That began my long history of encountering people who overestimate the difficulty of, or fail to simplify, the process of teaching or justifying something.

ETA: Also, I haven't ridden a bike in over 15 years, so that might be a good test of whether my "just keep in motion" heuristic allows me to preserve the knowledge.
The fact that 'like riding a bike' is a saying used to describe skills that you never forget suggests that it wouldn't be a very good test.
Yeah, I wasn't so sure it would be a good test. Still, I'm not sure how well the "you don't forget how to ride a bike" hypothesis has been tested, nor how much of its unforgettability is due to the simplicity of the key insights.
Most people don't store the insights of bike riding verbally-- the insights are stored kinesthetically. It seems to be much easier to forget math.
I don't disagree, but there's typically a barrier, increasing with time since last use, that must be overcome to re-access that kinesthetic knowledge. And I think verbal heuristics like the one I gave can greatly shorten the time you need to complete this process.
early 90s, US. I also had training wheels for a while first, which didn't actually teach me anything. I didn't learn until they were removed. And I also had someone running along for reassurance.
Canada, mid 1960s. Brother tried to teach me but I mostly ignored him. Used bike with training wheels, which I raised higher and higher and removed completely after a couple of weeks.
United States, early 60s (I think it's worth mentioning when because cultures change), just given a bike with training wheels, and I figured it out myself.
France, but close enough. ;) There's some variation in method of instruction. My grandpa had fitted my bike with a long handle in the back and used that to help me balance after taking the training wheels off. With one of my kids I tried the method of gradually lifting the training wheels to make the balance more precarious over time. One of the other two just "got it", as I remember, in one or two sessions. Otherwise it was the standard riding down a slight slope and advising them "keep your feet on the pedals", and running alongside for reassurance.
The truth is, that's how most skilled artists learned to draw. In the past there was a more formalized teaching role, often starting at age eight, but today you can go through school, and even get through art school, having been given so little knowledge that, if you know how to draw a human from imagination, you can confidently say you are an autodidact. It's not because art (particularly representational figure drawing, from imagination or not) is inherently unteachable, but a lot of people tend to think so. This is not the only skill like this, although I think it's one that's perhaps the least understood and where misinformation is the most tolerated.
I think it would be great to systematically explore and develop useful skillsets, perhaps in a modular fashion. We do have sequences. I would join a rationality dojo immediately. What do you mean practical ways? I understand the difficulty of transferring kinesthetic or social understanding, but how can we overcome that in nonverbalized fashion?
Some things have to be shown, you have to sometimes take part in an activity to "get" it, learn by trial and error, get feedback pointing out mistakes that you are unaware of, etc...
For example?
Do you think you could describe this image to an arbitrarily talented artist and end up with an image that even looked like it was based on it? It's not so much "Such insolence, our ideas are so awesome they cannot be broken down by mere reductionism" as "Wow, words are really bad at describing things that are very different from what most of the people speaking the language do."

I think you could make an elaborate set of equations on a Cartesian graph and come up with a drawing that looked like it, and say "fill in RGB value #zzzzzz at coordinates x,y" or whatever, but that seems like a cop-out, since that doesn't tell you anything about how Fragonard did it.
This reminds me of an exercise we did in school. (I don’t remember either when or for what subject.) Everyone was to make a relatively simple image, composed of lines, circles, triangles and the such. Then, without showing one’s image to the others, each of us was to describe the image, and the others to draw according to the description. The “target” was to obtain reproductions as close as possible to the original image.

It was a very interesting exercise for all involved: it’s surprisingly hard to describe precisely, even given the quite simple drawings, in such a way that everyone interprets the description the way you intended it. I vaguely remember I did quite well compared with my classmates in the describing part, and still had several “transcriptions” that didn’t look anywhere close to what I was saying. I think the lesson was about the importance of clear specifications, but then again it might have been just something like English (a foreign language for me) vocabulary training.

----------------------------------------

An example: Draw a square, with horizontal & vertical sides. Copy the square twice, once above and once to the right, so that the two new squares share their bottom and, respectively, left sides with the original square. Inside the rightmost square, touching its bottom-right corner, draw another square of half the original’s size. (Thus, the small square shares its bottom-right corner with its host, and its top-left corner is on the center of its host.) Inside the topmost square, draw another half-size square, so that it shares both diagonals with its host square. Above the same topmost square, draw an isosceles right-angled triangle; its sides around the right angle are the same length as the large squares’; its hypotenuse is horizontal, just touching the top side of the topmost square; its right angle points upwards, and is horizontally aligned with the center of the original square. (Thus, the original square
My mum had to do this task for her work, save with building blocks, and for the learning-impaired. Instructions like 'place the block flat on the ground, like a bar of soap' were useful. One nit-pick: when you say squares of half the size, do you mean with half the side length, or with one quarter of the area?
Color and line weight have not been specified, I note. Nor position relative to the canvas.
You could probably get pretty good results without messing with complex equations, by first describing the full picture, then describing what's in the four quadrants made by drawing vertical and horizontal lines that split the image exactly in half, then describing quadrants of these quadrants, split the same way, and so on. The artist could use their skills to draw the details without an insanely complex encoding scheme, and the grid discipline would help fix the large-scale geometry of the image.

Edit: A 3x3 grid might work better in practice; it's more natural to work with a center region than to put the split point right in the middle of the image, which most probably contains something interesting. On the other hand, maybe the lines breaking up the recognizable shapes in the picture (already described in casual terms for the above-level description) would help bring out their geometrical properties better.

Edit 2: Michael Baxandall's book Patterns of Intention has some great stuff on using language to describe images.
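The recursive scheme I have in mind can be sketched in a few lines of code (everything here is illustrative: the region representation, function names, and depth limit are my own, and no real image is involved -- each "slot" just marks where a verbal description would go):

```python
# Sketch of the recursive quadrant-description idea: describe the whole
# image first, then each quadrant, then each sub-quadrant, and so on.
# A region is an (x, y, width, height) tuple over an abstract pixel grid.

def quadrants(x, y, w, h):
    """Split a region into four roughly half-size sub-regions."""
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, w - hw, hh),
            (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]

def describe(region, depth=0, max_depth=2, path=""):
    """Yield one description slot per region, coarsest first."""
    x, y, w, h = region
    yield f"region {path or 'whole'}: x={x} y={y} {w}x{h}"
    if depth < max_depth:
        for i, sub in enumerate(quadrants(x, y, w, h)):
            yield from describe(sub, depth + 1, max_depth, path + str(i))

# One slot for the whole 8x8 image, 4 quadrants, 16 sub-quadrants: 21 slots.
slots = list(describe((0, 0, 8, 8)))
print(len(slots))  # -> 21
```

The point is just that the number of slots grows fast (fourfold per level), so a couple of levels of grid discipline already pins down the large-scale geometry while leaving the fine detail to the artist.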
Drawing a photograph with the aid of a grid is a common technique for making copying easier, although it's also sometimes used as a teaching tool for early artists. I'm not in love with this explanation (Loomis does much better) but this should give you the essential idea:

As a teaching tool for people who can't draw, I haven't seen it be effective, but it's awesome if you've got a deadline and don't want to spend all your time checking and rechecking your proportions. I doubt it would be effective for teaching, since it's so easy for novice artists to screw up even when they have the image right in front of them.

There's a more effective method which uses a ruler or compass and is often used to copy Bargue drawings: take precise measurements around a line at the meridian and essentially connect the dots. For the curious:

This might work long distance: "Okay, draw the next dot 9/32nds of an inch away at 12 degrees down to the right."

This still seems like a bit of a cop-out, though. Yes, there are ways to assemble copies of images using a grid, but it doesn't help us figure out how such freehand images were made in the first place. We're not even taking a crack at the little black box.
Drawing on the Right Side of the Brain seems to be the classic for teaching people how to draw. It's a bunch of methods for seeing the details of what you're seeing (copying a drawing held upside down, drawing shadows rather than objects) so that you draw what you see rather than a mental simplified hieroglyphic of what you see.

New papers from Nick Bostrom's site.

Speaking of the Simulation argument, I just stumbled across (but haven't read):

This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.

Good quality government policy is an important issue to me (it's my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.

There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with: 1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.
2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks, a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the...

Reminded me of one of my favorite movie dialogues -- from Sunshine. The context was actually physics, but the complexity of the situation and the time frame put the characters in the same situation as you with the Cabinet ministers.

Capa: It's the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable.

Kaneda: You have to come down on one side or the other. I need a decision.

Capa: It's not a decision, it's a guess. It's like flipping a coin and asking me to decide whether it will be heads or tails.

Kaneda: And?

Capa: Heads... We harvested all Earth's resources to make this payload. This is humanity's last chance... our last, best chance... Searle's argument is sound. Two last chances are better than one.
Yes, that's a good example. There are times when a decision has to be made, and saying you don't know isn't very useful. Even if you have very little to go on, you still have to decide one way or the other.
I am not at all like you. I don't have much interest in policy at all, and I do tend to refuse to hold a position, being very mindful of how easy it is to be completely off course (Probably from reading too much history of science. It's "the graveyard of dead ideas", after all.). I'm likely to tell the Cabinet Ministers to get off my back or they'll have absolutely useless recommendations. However, I think you have hit upon the point that makes Bayesianism attractive to me: it's rationality you can use to act in real-time, under uncertainty, in normal life. Traditional Rationality is slow.
I see your point; the trouble is that a recommendation that comes too late is often absolutely useless. A lot of policy is time-dependent: if you don't act within a certain time frame then you might as well do nothing. While sometimes doing nothing is the right thing to do, a late recommendation is often no better than no recommendation.
Yeah, I forgot to add that you've budged me slightly from my staunch positivist attitude for social science. Thanks. Reading up on complex adaptive systems has made me just that much more skeptical about our ability to predict policy's effects, and perhaps biased me.
It's nice to know I've had an influence :) As it happens, I'm pretty sceptical as to how much we can know as well. There's nothing like doing policy to gain an understanding of how messy it can be. The social sciences have a less than wonderful record in developing knowledge (look at the record of development economics, as one example), and economic forecasting is still not much better than voodoo, but it's not like there's another group out there with all the answers. We don't have all of the answers, or even most of them, but we're better than nothing, which is the only alternative.
Nothing is often a pretty good alternative. Government action always comes at a cost, even if only the deadweight loss of taxation (keyphrase "public choice" for reasons you might expect the cost to be higher than that). I'm not trying to turn this into a political debate, but you should consider doing nothing not necessarily a bad thing, and what you do not necessarily better.
When I said "better than nothing" I was referring to advice, not the actual actions taken. My background is in economics so I'm quite familiar with both dead-weight loss of taxation and public choice theory, though these days I lean more toward Bryan Caplan's rational irrationality theory of government failure. I agree that nothing is often a good thing for governments to do, and in many cases that is the advice that Cabinet receives.
Politicians' logic: “Something must be done. This is something. Therefore we must do it.”

Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvsT, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.

Suppose I have ten people and a stick. The appropriate infinite...

DSvsT was not directly an argument for utilitarianism, it was an argument for tradeoffs and quantitative thinking and against any kind of rigid rules, sacred values, or qualitative thinking which prevents tradeoffs. For any two things, both of which have some nonzero value, there should be some point where you are willing to trade off one for the other - even if one seems wildly less important than the other (like dust specks compared to torture). Utilitarianism provides a specific answer for where that point is, but the DSvsT post didn't argue for the utilitarian answer, just that the point had to be at less than 3^^^3 dust specks. You would probably have to be convinced of utilitarianism as a theory before accepting its exact answer in this particular case.

The stick-hitting example doesn't challenge the claim about tradeoffs, since most people are willing to trade off one person getting hit multiple times with many people each getting hit once, with their choice depending on the numbers. In a stadium full of 100,000 people, for instance, it seems better for one person to get hit twice than for everyone to get hit once. Your alternative rule (maximin) doesn't allow some tradeoffs, so it leads to implausible conclusions in cases like this 100,000x1 vs. 1x2 example.

I don't think maximising the minima is what you want. Suppose your choice is to hit one person 20 times, or five people 19 times each. Unless your intuition is different from mine, you'll prefer the first option.
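To make the disagreement between the two rules concrete, here's a toy calculation (the hit-count lists and the one-unit-per-hit disutility are made-up assumptions for illustration, not anything from the thread):

```python
# Toy comparison of two decision rules on "stick-hit" outcomes.
# An outcome is a list of hit counts, one entry per person; each hit
# is assumed to cost one unit of disutility (an illustrative choice).

def total_disutility(hits):
    """Utilitarian-style rule: add up everyone's hits."""
    return sum(hits)

def worst_off(hits):
    """Maximin-style rule: judge an outcome by its worst-off person."""
    return max(hits)

one_person_20 = [20, 0, 0, 0, 0]       # hit one person twenty times
five_people_19 = [19, 19, 19, 19, 19]  # hit five people nineteen times each

# The utilitarian rule prefers hitting the one person (20 < 95)...
assert total_disutility(one_person_20) < total_disutility(five_people_19)
# ...while maximin prefers the five-people outcome (19 < 20),
# against the intuition above.
assert worst_off(five_people_19) < worst_off(one_person_20)
```

So maximin picks hitting five people nineteen times each over hitting one person twenty times, which is exactly the implausible verdict at issue.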
I don't think you can justifiably expect to be able to tell your brain something this self-evidently unrealistic, and have it update its intuitions accordingly.
Oh, and I'd love to hear what you mean about this.
There's one difference, which is that the inequality of the distribution is much more apparent in your example, because one of the options distributes the pain perfectly evenly. If you value equality of distribution as worth more than one unit of pain, it makes sense to choose the equal distribution of pain. This is similar to economic discussions about policies that lead to greater wealth, but greater economic inequality.
I think the point of Dust Specks vs. Torture was scope failure. Even allowing for some sort of "negative marginal utility", once you hit a wacky number like 3^^^3 it doesn't matter: 0.000001 negative utility points multiplied by 3^^^3 is worse than anything, because 3^^^3 is wacky huge. For the stick example, I'd say it would have to depend on a lot of factors about human psychology and such, but I think I'd hit the one. Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people. I think your opinion basically is an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.
I think you're mistaken about the marginal utility-- being hit again after you've already been injured (especially if you're hit on the same spot) is probably going to be worse than the first blow. Marginal disutility could plausibly work in the opposite direction from marginal utility. Each 10% of your money that you lose impacts your quality of life more. Each 10% of money that you gain impacts your quality of life less. There might be threshold effects for both, but I think the direction is right.
I was thinking more along the lines of scope failure: if someone said you were going to be hit 11 times, would you really expect it to feel exactly 110% as bad as being hit ten times? But yes, from a traditional economics point of view, your post makes a hell of a lot more sense. Upvoted.
Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.
It's always hard to think about this sort of thing. I read that in the original problem, but then I ended up thinking about actual hitting people with sticks when deciding what was best. Is there anything in the archives like The True Prisoner's Dilemma but for giving an intuitive version of problems with adding utility?
Then it depends. If you're a utilitarian, it is still better to hit the one guy nine times than to hit ten people once each. If you allow some ideas about the utility of equality, then things get more complicated. That's why I think most people reject the simple math that 9 < 10.
I'd analyze your question this way. Ask any one of the ten people which they would prefer: A) to get hit B) to have a 1/10th chance of getting hit 9 times. Assuming rationality and constant disutility of getting hit, every one of them would choose B.
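A quick expected-value check of that argument, under the comment's own assumptions (constant disutility per hit, numbers taken straight from the comment):

```python
# Each person's choice, assuming a constant disutility of 1 per hit.
expected_a = 1.0 * 1        # option A: get hit once, with certainty
expected_b = (1 / 10) * 9   # option B: a 1/10 chance of taking 9 hits

print(expected_a, expected_b)  # option B costs fewer expected hits (0.9 < 1)
```

So every person at the table prefers B in expectation, even though one unlucky person ends up much worse off.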

I have a theory: super-smart people don't exist; it's all due to selection bias.

It's easy to think someone is extremely smart if you've only seen the sample of their most insightful thinking. But every time that happened to me, and I found that such a promising person had a blog or something like that, it universally took very little time to find something terribly brain-hurtful they've written there.

So the null hypothesis is: there's a large population of fairly-smart-but-nothing-special people, who think and publish their thought a lot. Because the best ... (read more)


I was thinking something similar just today:

Some people think out loud. Some people don't. Smart people who think out loud are perceived as "witty" or "clever." You learn a lot from being around them; you can even imitate them a little bit. They're a lot of fun. Smart people who don't think out loud are perceived as "geniuses." You only ever see the finished product, never their thought processes. Everything they produce is handed down complete as if from God. They seem dumber than they are when they're quiet, and smarter than they are when you see their work, because you have no window into the way they think.

In my experience, there are far more people who don't think out loud in math than in less quantitative fields. This may be part of why math is perceived as so hard; there are all these smart people who are hard to learn from, because they only reveal the finished product and not the rough draft. Rough drafts make things look feasible. Regular smart people look like geniuses if they leave no rough drafts. There may really be people who don't need rough drafts in the way that we mundanes do -- I've heard of historical figures like that, and those really are savants -- but it's possible that some people's "genius" is overstated just because they're cagey about expressing half-formed ideas.

You may be right about math. Reading the Polymath research threads (like this one) made me aware that even Terry Tao thinks in small and well-understood steps that are just slightly better informed than those of the average mathematician.

I Am a Strange Loop by Hofstadter may be of interest-- it's got a lot about how he thinks as well as his conclusions.
I'm not a psychologist, but I thought I could improve on the vagueness of the original discussion. There are a few factors which determine "smartness" (or potential for success):

1. Speed. Having faster hardware.
2. Pattern recognition. Being better at "chunking".
3. Memory.
4. Creativity (= "divergent" thinking).
5. Detail-awareness.
6. Experience. Having incorporated many routines into the subconscious thanks to extensive practice.
7. Knowledge. (Quality is more important than quantity.)

The first five traits might be considered part of someone's "talent." Experience and knowledge, which I'll group together as "training", must be gained through hard work. Potential for success is determined by a geometric (rather than additive) combination of talent and training; that is, roughly:

potential for success = talent * training

All this math, of course, is not remotely intended to be taken at face value; it's merely the most efficient way to make my point. The "super-smart" start life with more talent than average. The rule of the bell curve holds, so they generally do not have an overwhelming cognitive advantage over the average person. But they have enough talent to justify investing much more of their resources into training. This is because a person with 15 talent will gain 15 success for every unit of time they put into training, while the same unit of training is worth 17 success to a person with 17 talent. The less time you have to spend, the more that time costs, so all other things being equal, the person with more talent will put more time into training. Suppose the person with 15 talent puts 100 units of time into training, and the person with 17 talent puts 110 units of time into training. Then:

* 15 talent * 100 training => 15000 success
* 17 talent * 110 training => 18700 success

Which is 25% more success for only 13% more talent. There's probably some more formal work done along these lines, but I'm not an economist either.
If you're interpreting "super-smart" to mean always right, or at least reasonable, and thus never severely wrong-headed, I think you're correct that no one like that exists, but it seems like a rather comic bookish idea of super-smartness. Also, I have no idea how good your judgment is about whether what you call brain-hurtful is actually ideas I'd think were egregiously wrong. I think there are a lot of folks smart enough to be special people-- those who come up with worthwhile insights frequently. And even if it's just a matter of generating lots of ideas and then publishing the best, recognizing the best is a worthwhile skill. It's conceivable that idea-generation and idea-recognizing are done by two people who together give the impression of one person who's smarter than either of them.
How would you describe the writing patterns of super-smart people? Similarly, what would meeting, talking with, or debating them feel like?
I think my comment was rather vague, and people aren't sure what I meant. These are all my impressions; as far as I can tell the evidence for all this is rather underwhelming. I'm writing this more to explain my thinking than to "prove" anything.

It seems to me that people come in different levels of smartness. There are some people with all sorts of problems that make them incapable of even human normal, but let's ignore them entirely here. Then there are normal people, who are pretty much incapable of original highly insightful thought, critical thinking, rationality, etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise, but that's about it. They often make the most basic logic mistakes.

Then there are "smart" people, who are capable of original insight and don't get too stupid too often. They're not exactly measuring the same thing, but IQ tests are capable of distinguishing between these and normal people reasonably well. With smart people, both their top performance and their average performance is a lot better than with average people. In spite of that, all of them very often fail basic rationality in some particular domains they feel too strongly about.

Now I'm conflicted about whether people who are as far above "smart" as "smart" is above normal really exist. A canonical example of such a person would be Feynman - from my limited information he seems just so ridiculously smart. Eliezer seems to believe Einstein is like that, but I have even less information about him. You can probably think of a few other such people.

Unfortunately there's a second observation - there's no reason to believe such people existed only in the past, or would have an aversion to blogging - so if super-smart people exist, it's fairly certain that some blogs by such people exist. And if such blogs existed, I would expect to have found a few by now. And yet, every time it seemed to me that someone might just be that smart and I start
A few people who blog frequently and fit my criteria for "super-smart": Terence Tao, Cosma Shalizi, John Baez.
I was thinking of Tao as well. Also, Oleg Kiselyov for programming/computer science.
Yep, seconding the recommendation of Oleg. I read a lot of his writings and I'd definitely have included him on the list.
Interesting picks. I hadn't thought of Cosma Shalizi as 'super-smart' before, just erudite and with a better memory for the books and papers he's read than me. Will have to think about that...
I think you're giving the "normal person" too little credit.
Agreed. If nothing else, refugee situations aren't that uncommon in human history, and the majority are able to migrate and adapt if they're physically permitted to do so.
It doesn't seem to me that you have an accurate description of what a super-smart person would do/say other than match your beliefs and providing insightful thought. For example, do you expect super-smart people to be proficient in most areas of knowledge or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.
I don't know what the correct super-smartness cluster is, so I cannot make an objective, predictive definition, at least not yet. There's no need to suffer from physics envy here - a lot of useful knowledge has this kind of vagueness. Nobody has managed to define "pornography" yet, and it's a far easier concept than "super-smartness". This kind of speculation might end up producing something useful with some luck (or not).

Even defining by example would be difficult. My canonical examples would be Feynman and Einstein - they seem far smarter than "normally smart" people. Let's say I collected a sufficiently large sample of "people who seem super-smart", got as accurate information about them as possible, and did a proper comparison between them and a background of normally smart people (it's pretty easy to get good data on those, even by generic proxies like education - so that's the least of my worries) in a way that would be robust against even a large number of data errors.

That's about the best I can think of. Unfortunately it will be of no use, as my sample will be not random super-smart people but those super-smart people who are also sufficiently famous for me to know about them and be aware of their super-smartness. This isn't what I want to measure at all. And I cannot think of any reasonable way to separate these, so the project is most likely doomed. It was interesting to think about this anyway.
Why would they blog? They would already know that most people have nothing of interest to tell them; and if they want to tell other people something, they can do it through other channels. If such a person had a blog, it might be for a very narrow reason, and they would simply refrain from talking about matters guaranteed to produce nothing but time-consuming stupidity in response.
I'm not sure that the ability to have original thoughts is at all closely connected to the ability to think rationally. What makes you reach that conclusion? Have you tried looking at Terence Tao's blog? I think he fits your model, but it may be that many of his posts will be too technical for a non-mathematician. I'm not sure in general if blogging is a good medium for actually finding this sort of thing. It is easy to see if a blogger isn't very smart; it isn't clear to me that the medium allows one to easily tell if someone is very smart.
I doubt your disproof of super-smart people, for the very same reasons you do, perhaps with a greater weight assigned to those reasons. I am also not sure about your definition of super-smart. Is an idiot savant (in math, say) super-smart? If you mean super-smart = consistently rational, I suspect nothing prevents people of normal-smart IQ from scoring (super) well there, trading off quantity of ideas for quality. There is a ceiling there, as good ideas get more complex and require more processing power, but given how crazy this world is, I suspect Norm Smart the Rationalist can score surprisingly highly on a relative basis. As a data point you might want to look at the "Monster Minds" chapter of Feynman's "Surely You're Joking", since you mentioned Feynman. The chapter is about Einstein. Finally, where is your blog? ;)
My blog is here.
You can set that in "preferences".
Reminds me of 'My Childhood Role Model'. As for the actual meat of your comment, I don't have much to add. 'Smart' is a slippery enough word that I'd guess one's belief in 'super-smart people' depends on how one defines 'smart.'
There is an important systematic bias you only tangentially mention in your analysis. Super-smart people (more generally, very successful people) don't feel they have to prove themselves all the time. (Especially if they are tenured. :) ) Many of them like to talk before they think. There are very smart people around them who quickly spot the obvious mistakes and laboriously complete the half-baked ideas. It is just more economic this way.
Have you never had an in-person conversation with a super-smart person? Also, hi folks, I'm back. It is surprisingly difficult to dive back into LW after leaving it for a few weeks.
Obviously no, as I don't believe in their existence.
My point is that I have trouble telling the difference between a fairly-smart and super-smart person by their writing for exactly the reason you mentioned. But in-person conversations give you access to the raw material and, if I take myself to be fairly smart there are definitely super-smart people out there. For example, I imagine if you had got to talking to Richard Feynman while he was alive you would have quickly realized he was a super-smart person.
I'm not sure about this. I have a lot of trouble distinguishing between just smart, super-smart, and smart-and-an-expert-in-their-field. Distinguishing them doesn't seem to happen easily based on quick interactions. I can distinguish people in my own field to some extent, but if it isn't my own area, it is much more difficult. Worse, there are serious cognitive biases about intelligence estimations. People are more likely to think of someone as smart if they share interests, and also more likely to think of someone as smart if they agree on issues. (Actually I don't have a citation for this one and a quick Google search doesn't turn it up; does someone else maybe have a citation?) One could imagine that many people, on meeting a near copy of themselves, would conclude that the copy was a genius. That said, I'm pretty sure that there are at least a few people out there who reasonably do qualify as super-smart. But to some extent, that's based more on their myriad accomplishments than on any personal interaction.
I'd guess it's far, far easier to fool someone in person, with all the noise of primate social cues, so such information is worth a lot less than writing.

The Unreasonable Effectiveness of My Self-Exploration by Seth Roberts.

This is an overview of his self-experiments (to improve his mood and sleep, and to lose weight), with arguments that self-experimentation, especially on the brain, is remarkably effective in finding useful, implausible, low-cost improvements in quality of life, while institutional science is not.

There's a lot about status and science (it took Roberts 10 years to start getting results, and it's just too risky for scientists' careers to take on projects which last that long), and some int... (read more)

I winced.
I would like to see a top-level link post and discussion of this article (and maybe other related papers).
I'm slightly tempted to, because that article is sloppy and unfocused enough that it annoys me, even though it's broadly accurate. (I mean, 'the standard statistical system for drawing conclusions is, in essence, illogical'? Really?) But I don't know what I'd have to add to it, really, other than basically whining 'it is so unfair!'
Yeah, that would be great, but I can't do it; I don't have the technical background, so I hereby delegate the task to someone else willing to write it up.

I've been reading the Quantum Mechanics sequence, and I have a question about Many-Worlds. My understanding of MWI and the rest of QM is pretty much limited to the LW sequence and a bit of Wikipedia, so I'm sure there will be no shortage of people here who have a better knowledge of it and can help me.

My question is this: why are the Born Probabilities a problem for MWI?

I'm sure it's a very difficult problem, I think I just fail to understand the implications of some step along the way. FWIW, my understanding of the Born Probabilities mainly clicks here:


... (read more)

So... If a quantum event has a 30% chance of going LEFT and a 70% chance of going RIGHT... you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.

So why is this surprising?

The surprising (or confusing, mysterious, what have you) thing is that quantum theory doesn't talk about a 30% probability of LEFT and a 70% probability of RIGHT; what it talks about is how LEFT ends up with an "amplitude" of 0.548 and RIGHT with an "amplitude" of 0.837. We know that the observed probability ends up being the square of the absolute value of the amplitude, but we don't know why, or how this even makes sense as a law of physics.
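To make the arithmetic concrete: the quoted amplitudes are just the square roots of the stated probabilities, so the following only restates the Born rule numerically, not an explanation of it.

```python
import math

# Amplitudes whose squared magnitudes give the quoted probabilities.
amp_left = math.sqrt(0.30)   # ≈ 0.548
amp_right = math.sqrt(0.70)  # ≈ 0.837

# The Born rule: observed probability = |amplitude|^2.
p_left = abs(amp_left) ** 2
p_right = abs(amp_right) ** 2

print(round(amp_left, 3), round(amp_right, 3))  # 0.548 0.837
print(p_left, p_right)      # 0.3 and 0.7, up to floating-point rounding
print(p_left + p_right)     # squared magnitudes of a normalized state sum to 1
```

In general amplitudes are complex numbers, which is why the rule takes the absolute value before squaring; the mystery the comment points at is why nature uses exactly this map from amplitudes to frequencies.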

Ah. So it's not the idea that it's weighted so much as the specific act of squaring the amplitude: "Why square the amplitude, why not something else?" I suppose, given the way I had been reading, I thought that the problem came from expecting a different result given the squared-amplitude probability rule, not from the rule itself. That is helpful, many thanks.
That's one issue, but as Warrigal said, the other issue is "how this even makes sense": it seems to say that the amplitude is a measure of how real the configuration is.
Yes, precisely.
Delightful, and has a nice breakdown of the sort of questions to ask yourself (what exactly is the problem, how much precision is actually needed, what is the condition of the tools, etc.) if you want to get things done efficiently.

I would have thought everyone here would have seen this by now, but I hadn't until today so it may be new to someone else as well:

Charlie Munger on the 24 Standard Causes of Human Misjudgment

After more-or-less successfully avoiding it for most of LW's history, we've plunged headlong into mind-killer territory. I'm a little bit worried, and I'm intrigued to find out what long-time LWers, especially those who've been hesitant about venturing that direction, expect to see as a result over the next month or two.

It doesn't look encouraging. The discussions just don't converge, they meander all over the place and leave no crystalline residue of correct answers. (Achievement unlocked: Mixed Metaphor)

It is problematic but necessary, in my opinion. Politics IS the mind-killer, but politics DOES matter. Avoiding the topic would seem to be an admission that this rationality thing is really just a pretty toy. But it would be nice to lay down some ground-rules.
I don't think anyone has mentioned a political party or a specific current policy debate yet. That's when things really go downhill.
I think a current policy debate has potential for better results, since it would offer the potential for betting, and avoid some of the self-identification and loyalty that's hard to avoid when applying a model as simple as a political philosophy to something as complex as human culture.
Since we've had some discussion about additions/modifications to the site, and LW -- as I understand it -- was a originally a sort of spin-off from OB, maybe addition of a karma-based prediction market of some sort would be suitable (and very interesting).
Maybe make bets with karma? That might be very interesting. It would have less bite than monetary stakes, but highly risk-averse individuals might be more willing to join the system.
I think having such a low-stakes game to play would be beneficial not only to highly risk-averse individuals, but to anyone. It would provide a useful training ground (maybe even a competitive ladder in a rationality dojo) for anyone who wants to also play with higher stakes elsewhere. Edit: I'm currently a mediocre programmer (and intend to become good via some practice). And while I don't participate often in the community (yet), this could be fun and educational enough that I would be willing to contribute a fairly substantial amount of labour to it. If anyone with marginally more know-how is willing to implement such an idea, let me know and I'll join up.
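If anyone does implement it, a natural mechanism for a karma-denominated market is Hanson's logarithmic market scoring rule (LMSR). A minimal sketch; the liquidity parameter b, the two-outcome setup, and the karma framing are all my own assumptions, not anything proposed above:

```python
import math

def cost(quantities, b=100.0):
    # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)).
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def price(quantities, i, b=100.0):
    # Instantaneous price of outcome i; doubles as the market probability.
    weights = [math.exp(q / b) for q in quantities]
    return weights[i] / sum(weights)

def trade_cost(old_q, new_q, b=100.0):
    # Karma a bettor pays (negative = receives) to move the market state.
    return cost(new_q, b) - cost(old_q, b)

# A two-outcome market starts flat: both outcomes priced at 0.5.
q = [0.0, 0.0]
print(price(q, 0))                         # 0.5
paid = trade_cost(q, [50.0, 0.0])          # buy 50 shares of outcome 0
print(round(paid, 1), round(price([50.0, 0.0], 0), 2))
```

A nice property for a low-stakes karma game: the market maker's worst-case subsidy is bounded by b * ln(number of outcomes), so the site knows in advance the maximum karma it can ever pay out per market.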
My feelings on this are mixed. I've found LW to be a refreshing refuge from such quarrels. On the other hand, without careful thought political debates reliably descend into madness quickly, and it is not as if politics is unimportant. Perhaps taking the mental techniques discussed here to other forums could improve the generally atrocious level of reasoning usually found in online political discussions, though I expect the effect would be small.

Are there any rationalist psychologists?

Also, more specifically but less generally relevant to LW; as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?

As a start, cognitive behavioral therapy (CBT) is a branch of psychotherapy with some respect around here, because of the evidence that it sometimes works, compared to the other fields of psychotherapy with no such evidence.
Do they really have such a poor track record? I know some scientists have very little respect for the "soft" sciences, but sociologists can at least make generalizations from studies done on large scales. Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective? Yes, this is essentially a post stating my incredulity. Would you mind quelling it?
It's not that they're 0% effective, it's that they're not much more effective than placebo therapy (i.e. being put on a waiting list for therapy), or keeping a journal. CBT is somewhat more effective, but I've also heard that it's not as effective for high-ruminators... i.e., people who already obsess about their thinking.
Scientific medicine is difficult and expensive. I worry that the apparent success of CBT may be because methodological compromises needed to make the research practical happen to flatter CBT more than they flatter other approaches. I might be worrying about the wrong thing. Do we know anything about the usefulness of Prozac in treating depression? Since we turn a blind eye to the unblinding of all our studies by the sexual side-effects of Prozac, and also refuse to consider the direct impact of those side-effects it could be argued that we don't actually have any scientific knowledge of the effectiveness of the drug.
The claim I've seen associated with Robyn Dawes is that therapy is useful (which I read as "more useful than being on a waiting list"), but that untrained therapists are just as good as those trained under most methods. (ETA: and, contrary to Kevin, they have been tested and found wanting)
It's not that other forms of psychotherapy are scientifically shown to be 0% effective; it's just that evidence-based psychotherapy is a surprisingly recent field. Psychotherapy can still work even if some fields of it have not had rigorous studies showing their effectiveness... but you might as well go with a therapist that has training in a field of psychotherapy that has some scientific method behind it.
I can't help you with the Denver area in particular, but the general answer is a definite yes. In an interesting juxtaposition, American Psychologist magazine had a recent issue prominently featuring discussion of how to get past the misuse of statistics discussed in this very LW open thread. And it's not the first time the magazine addressed the point.
Does cognitive rationalist therapy count as both rationalist and psychology for purposes of this question? I think Learning Methods is a more sophisticated rationalist approach than CBT (it does a more meticulous job of identifying underlying thoughts), and might be worth checking into.
Interesting. I found the site to be not very helpful, until I hit this page, which strongly suggests that at least one thing people are learning from this training is the practical application of the Mind Projection Fallacy. The quote is from an article written by an LM student, describing some insights from the learning process that helped her overcome her stage fright.

IOW, at least one aspect of LM sounds a bit like a "rationality dojo" to me, in the sense that here's an ordinary person with no special interest in rationalism, giving a beautiful (and more detailed than I quoted here) explanation of the Mind Projection Fallacy, based on her practical applications of it in everyday life.

(Bias disclaimer: I might be positively inclined to what I'm reading because some of it resembles, or is readily translatable to, aspects of my own models. Another article that I'm in the middle of reading, for example, talks about the importance of addressing the origins of nonconsciously-triggered mental and physical reactions, vs. consciously overriding symptoms -- another approach I personally favor.)

The blog of Scott Adams (author of Dilbert) is generally quite awesome from a rationalist perspective, but one recent post really stood out for me: Happiness Button.

Suppose humans were born with magical buttons on their foreheads. When someone else pushes your button, it makes you very happy. But like tickling, it only works when someone else presses it. Imagine it's easy to use. You just reach over, press it once, and the other person becomes wildly happy for a few minutes.

What would happen in such a world?


We already have these buttons on LessWrong... ;)

Karma does make me feel important, but when it comes to happiness karma can't hold a candle to loud music, alcohol and girls (preferably in combination). I wish more people recognized these for the eternal universal values they are. If only someone invented a button to send me some loud music, alcohol and girls, that would be the ultimate startup ever.
Classical game theorists establish a scientific consensus that the only rational course of action is not to push the buttons. Anyone who does is regarded with contempt or pity and gets lowered in the social stratum, before finally managing to rationalize the idea out of conscious attention, with the help of the instinct to conformity. A few free-riders smugly teach the remaining naive pushers a bitter lesson, only to stop receiving the benefit. Everyone gets back to business as usual, crazy people spinning the wheels of a mad world.
Wei Dai:
Are you saying that classical game theorists would model the button-pushing game as one-shot PD? Why would they fail to notice the repetitive nature of the game?
I'd be far more willing to believe in game theorists calling for defection on the iterated PD than in mathematicians steering mainstream culture. However, with the positive-sum nature of this game, I'd expect theorists to go with Schelling instead of Nash; and then be completely disregarded by the general public who categorize it under "physical ways of causing pleasure" and put sexual taboos on it.
The theory says to defect in the iterated dilemma as well (under some assumptions).
Here's what the theory actually says: if you know the number of iterations exactly, it's a Nash equilibrium for both to defect on all iterations. But if you know the chance that this iteration will be the last, and this chance isn't too high (e.g. below 1/3, can't be bothered to give an exact value right now), it's a Nash equilibrium for both to cooperate as long as the opponent has cooperated on previous iterations.
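For the curious, a standard grim-trigger calculation pins down one such value. This uses illustrative payoffs of my own choosing (which is why the threshold below differs from the 1/3 mentioned above): with temptation T, mutual-cooperation reward R, and mutual-defection punishment P, cooperation is an equilibrium when the continuation probability d satisfies d >= (T-R)/(T-P).

```python
def min_continuation_prob(T, R, P):
    # Grim trigger: cooperating forever is worth R/(1-d); defecting once
    # yields T now and P every round after, i.e. T + d*P/(1-d).
    # Cooperation holds when R/(1-d) >= T + d*P/(1-d),
    # which rearranges to d >= (T-R)/(T-P).
    return (T - R) / (T - P)

# Illustrative payoffs: temptation 5, reward 3, punishment 1.
d_min = min_continuation_prob(5, 3, 1)
print(d_min)  # 0.5: cooperation is sustainable as long as the chance
              # that any given round is the last is at most 1 - 0.5 = 0.5
```

The exact cutoff the comment mentions thus depends entirely on the payoff matrix; what is robust is that some cutoff exists whenever T > R > P.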
This comment was very entertaining... but... I actually do think people in such a world ought not to press buttons. But not very strongly... only about the same "oughtnotness" as people ought not to waste time looking at porn. The argument is the same: Aren't there better things we could be doing? Ideally, in button-world, people will devise a way to remove their buttons. But if that couldn't be done, and we're seriously asking "what would happen?" I suppose it might end up being treated like sex. Having one's button publicly visible is "indecent" - buttons are only pushed in private. Etc. etc.
The analogy to sex is rough. From a historical and evolutionary perspective, sex is treated the way it is because it leads to gene replication and parenthood, not because it leads to pleasure. The lack of side effects from the buttons makes them more comparable to rubbing someone's back, smiling, or saying something nice to someone.
OK - well that's one possibility. But in discussing either of these analogies, aren't we just showing (a) that the pleasure-button scenario is underdetermined, because there are many different kinds of pleasure and (b) that it's redundant, because people can actually give each other pats on the back, or hand-jobs or whatever.
I dunno, this strikes me as a somewhat sex-negative attitude. Responding seriously to your question about the better things we could be doing, it strikes me that we spend most of our time doing worthless things. We seldom really know whether we are happy, what it means to be happy, or how what we are doing might connect to somebody's future happiness. If the buttons actually made people happy from time to time, they could be quite useful as a 'reality check': people suspecting that X led to happiness could test and falsify their claim by seeing whether X produced the same mental/emotional state that the button did. Obviously we shouldn't spend all our time pressing buttons, having sex, or looking at porn. But I sometimes wonder whether we wouldn't be better off if most people, especially in the developed world where labor seems to be over-supplied and the opportunity cost of not working is low, spent a couple hours a day doing things like that.
Isn't that a bit like snorting some coke (or perhaps just masturbating) after a happy experience (say, proving a particularly interesting theorem) to test whether it was really 'happy'? There are many different kinds of 'happiness', and what makes an experience a happy or an unhappy one is not at all simple to pin down. A kind of happiness that one can obtain at will, as often as desired, and which is unrelated to any "objective improvement" in oneself or the things one cares about, isn't really happiness at all. Pretend it's New Year's Eve and you're planning some goals for next year - some things that, if you achieve them, you will look back on with pride and a sense of accomplishment. Is 'looking at lots of porn' on your list (even assuming that it's free and no-one was harmed in producing it)? I don't mean to imply anything about sex, because sex has a whole lot of things associated with it that make it extremely complicated. But the 'pleasure button' scenario gives us a clean slate to work from, and to me it seems an obvious reductio ad absurdum of the idea that pleasure = utility.
You seem to be confusing happiness with accomplishment. Sure it is happiness: it may not be accomplishment, or meaningfulness, but it is happiness, by definition. I think the confusion comes because you seem to value many other things more than happiness, such as pride and accomplishment. Happiness is just a feeling; it's not defined as something that you need to value most, or gain the most utility from.
How do you distinguish a degenerate case of 'happiness' from 'satiation of a need'. Is the smoker or heroin addict made 'happy' by their fix? Does a glass of water make you 'happy' if you're dying from thirst, or does it just satiate the thirst? And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances. A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'. The idea that there's a "raw happiness feeling" detachable from the information content that goes with it is intuitively appealing but fatally flawed.
Yes, this is true. We will need to assume that the button can analyze the context to determine how to provide happiness for the particular brain it's attached to. My point is that happiness is not necessarily associated with accomplishment or objective improvement in oneself (though it can be). In such a situation, some people might not value this kind of detached happiness, but that doesn't mean it's not happiness.
Depends on how you define happiness. If you define it as "how much dopamine is in my system", "joy", or "these are the neat brainwaves my brain is giving off", then yes, you could achieve happiness by pressing a button (in theory). A lot of people seem to assume happiness = utility measured in utilons, which is a whole different thing altogether. Sort of like seeing someone writhe in ecstasy after jamming a needle in their arm and saying, "I'm so happy I'm not a heroin addict."
Oh, really? How can I get a cheap, legal, repeatable dopamine rush to my brain?
Edited my post to reflect your point. Although, I'm a young male and can achieve orgasm multiple times in under ten minutes with the aid of some lube and free porn. You probably didn't want to know that.
That's amazing. A drug that could eliminate refractory period like that would sell better than Viagra.
It seems the pharma industry discovered the effect of PDE5 inhibitors on erectile dysfunction pretty much by accident. The stuff was initially developed to treat heart disease, initial tests showed it didn't work, but male test subjects reported a useful side effect. Reminds me of the story of post-it notes: the guy who developed them actually wanted to create the ultimate glue, but sadly the result of his best efforts didn't stick very well, so he just went ahead and commercialized what he had. If big pharma is listening, I'd like to post a request for exercise pills.
Actually, orgasms are usually much less intense and don't result in ejaculation if I achieve them in under a certain amount of time. I find the best are in the 20-30 minute period.
Yes, I've noticed that assumption, and I think even Jeremy Bentham talked about pleasure in utility terms. I don't think it's accurate for everyone, for instance, someone who values accomplishment more than happiness will assign higher utility to choices that lead to unhappy accomplishment than to unproductive leisure.
...and then they're happier working. By definition. Welcome to semantics.
That's a strange definition of "happier". They're happier with a choice just because they prefer that choice? Even if they appear frustrated and tired and grumpy all the time? Even if they tell you they're not happy and they prefer this unhappiness to not accomplishing anything? (In real life, I suspect happy people actually accomplish more, but consider a hypothetical where you have to choose between unhappy accomplishment and unproductive leisure.)
Eliezer did this whole thing in the Fun Theory sequence. Yes, not doing anything would be very boring, and being filled with cool drugs sounds like a horror story to my current utility curve. Let's hope the future isn't some form of ironic hell.
AlephNeil, I was taking Scott Adams' assertion that the button produces "happiness" at face value. I was being rather literal, I'm afraid. I think you're right to worry that no actual mechanism we can imagine in the near future would act like Scott's button. I stand by my point, though, that if we really did have a literal happiness button, it would probably be a good thing. As perhaps a somewhat more neutral example, I like to splash around in a swimming pool. It's fun. I hope to do that a lot over the next year or so. If I successfully play in the pool a lot during time that otherwise might have been spent reading marginally interesting articles, staring into space, harassing roommates, or working overtime on projects I don't care about, I will consider it a minor accomplishment. More to the point, if regular bouts of aquatic playtime keep me well-adjusted and accurately tuned-in to what it means to be happy, then I will rationally expect to accomplish all kinds of other things that make me and others happy. I will consider this to be a moderate accomplishment. There is a difference between pleasure and utility, but I don't think it's ridiculous at all to have a pleasure term in one's utility function. A more pleasant life, all else being equal, is a better one. There may be diminishing returns involved, but, well, that's why we shouldn't literally spend all day pressing the button.
That depends on how people react. It's at least plausible that people need some amount of pleasure in order to be able to focus on their other goals.
How does that work? I suppose it makes sense a little considering that the world has to go on and can't stop because everyone's on the ground being "happy", but it wouldn't mean that people wouldn't do it, or even that it wouldn't be the "rational" thing to do.

Is everyone missing the obvious subtext in the original article - that we already live in just such a world but the button is located not on the forehead but in the crotch?

Perhaps some people would give their button-pushing services away for free, to anyone who asked. Let's call those people generous, or as they would become known in this hypothetical world: crazy sluts.

But you can touch that button yourself...
How does that compare to when someone else touches your button with their button?
I've never done that, so I don't know.
I see that subtext, but I also see a subtext of geeks blaming the obvious irrationality of everyone else for them not getting any, like, it's just poking a button, right?
Except that sex, unlike the button in the story, doesn't always make people happy. Sometimes, for some people, it comes with complications that decrease net utility. (Also, it is possible to push your own button with sex.)
Sure, but it's not my comparison - I'm just saying it appears to be the obvious subtext of the original article.
But two poor, "lonely" people could just get together and push each other's buttons. That's the problem with this: any two people who can cooperate with each other can get the advantage. There was once an experiment to evolve different programs in a genetic algorithm that could play the prisoner's dilemma. I'm not sure exactly how it was organized, which would really make or break different strategies, but the result was a program which always cooperated except when the other wasn't, and it continued refusing to cooperate with the other until it believed they were "even".
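For what it's worth, that "get even" strategy can be sketched in a few lines. This is my own reconstruction under guessed rules (simultaneous moves, one retaliatory defection per defection suffered), not the actual program from the experiment:

```python
# Hypothetical sketch of the "get even" strategy described above.
COOPERATE, DEFECT = "C", "D"

class GetEven:
    def __init__(self):
        self.debt = 0  # defections we still owe in retaliation

    def move(self):
        return DEFECT if self.debt > 0 else COOPERATE

    def observe(self, my_move, their_move):
        if their_move == DEFECT:
            self.debt += 1   # they defected: we're no longer even
        if my_move == DEFECT:
            self.debt -= 1   # we retaliated once: closer to even

def play(a, b, rounds):
    history = []
    for _ in range(rounds):
        ma, mb = a.move(), b.move()
        a.observe(ma, mb)
        b.observe(mb, ma)
        history.append((ma, mb))
    return history

# Two copies cooperate forever - mutual button-pushing, as it were:
history = play(GetEven(), GetEven(), 10)
```

Against an unconditional defector it cooperates once, then defects for as long as the score stays uneven.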
Are you thinking of tit for tat? I'm not trying to argue for or against the comparison. Would you agree that the subtext exists in the original article or do you think I'm over-interpreting?
No, the subtext is definitely there in the original article. At least, I saw it immediately, as did most of the commenters:
I think the best analogy would be drugs, but those have bad things associated with them that the button example doesn't. They take up money, they cause health problems, etc.
That would not model the True Prisoner's Dilemma.
What's that got to do with the price of eggs?
A social custom would be established that buttons are only to be pressed by knocking foreheads together. Offering to press a button in a fashion that doesn't ensure mutuality is seen as a pathetic display of low status.

Pushing someone's happiness button is like doing them a favor, or giving them a gift. Do we have social customs that demand favors and gifts always be exchanged simultaneously? Well, there are some customs like that, but in general no, because we have memory and can keep mental score.

Hah. Status is relative, remember? Your setup just ensures that "dodging" at the last moment, getting your button pressed without pressing theirs, is seen as a glorious display of high status.

William Saletan at Slate is writing a series of articles on the history and uses of memory falsification, dealing mainly with Elizabeth Loftus and the ethics of her work. Quote from the latest article:

Loftus didn't flinch at this step. "A therapist isn't supposed to lie to clients," she conceded. "But there's nothing to stop a parent from trying something like [memory modification] with an overweight child or teen." Parents already lied to kids about Santa Claus and the tooth fairy, she observed. To her, it was a no-brainer: "A

...
Interesting. I have read several of Loftus's books, but the last one was The Myth of Repressed Memory: False Memories and Allegations of Sexual Abuse over ten years ago. I think I'll go see what she has written since. Thanks for reminding me of her work.

This might be old news to everyone "in", or just plain obvious, but a couple days ago I got Vladimir Nesov to admit he doesn't actually know what he would do if faced with his Counterfactual Mugging scenario in real life. The reason: if today (before having seen any supernatural creatures) we intend to reward Omegas, we will lose for certain in the No-mega scenario, and vice versa. But we don't know whether Omegas outnumber No-megas in our universe, so the question "do you intend to reward Omega if/when it appears" is a bead jar guess.
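To make the dependence on priors concrete, here is a toy expected-value calculation. The $100 cost and $10,000 reward are the usual payoffs from the thought experiment; the priors and No-mega's penalty are numbers I made up for illustration:

```python
# Toy numbers for the Omega vs. No-mega tradeoff (priors are made up).
def ev_of_being_muggable(p_omega, p_nomega, reward=10_000, cost=100,
                         nomega_penalty=10_000):
    # Against Omega: fair coin, heads pays `reward` (because you are
    # the muggable sort), tails costs you `cost`.
    ev_vs_omega = 0.5 * reward - 0.5 * cost
    # Against No-mega: you lose `nomega_penalty` (e.g. a withheld
    # reward) merely for being counterfactually muggable.
    return p_omega * ev_vs_omega - p_nomega * nomega_penalty

# The sign of the answer flips with the relative priors:
ev_of_being_muggable(p_omega=0.01, p_nomega=0.001)   # > 0: precommit to pay
ev_of_being_muggable(p_omega=0.001, p_nomega=0.01)   # < 0: don't
```

Which is the point: the right precommitment hinges on an Omega-vs-No-mega prior that nobody can actually elicit with any accuracy.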

The caveat is of course that Counterfactual Mugging or Newcomb Problem are not to be analyzed as situations you encounter in real life: the artificial elements that get introduced are specified explicitly, not by an update from surprising observation. For example, the condition that Omega is trustworthy can't be credibly expected to be observed. The thought experiments explicitly describe the environment you play your part in, and your knowledge about it, the state of things that is much harder to achieve through a sequence of real-life observations, by updating your current knowledge.
I dunno, Newcomb's Problem is often presented as a situation you'd encounter in real life. You're supposed to believe Omega because it played the same game with many other people and didn't make mistakes. In any case I want a decision theory that works on real life scenarios. For example, CDT doesn't get confused by such explosions of counterfactuals, it works perfectly fine "locally". ETA: My argument shows that modifying yourself to never "regret your rationality" (as Eliezer puts it) is impossible, and modifying yourself to "regret your rationality" less rather than more requires elicitation of your prior with humanly impossible accuracy (as you put it). I think this is a big deal, and now we need way more convincing problems that would motivate research into new decision theories.
If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment. But the absence of relevant No-megas is part of the setting, so it too should be a conclusion one draws from those observations.
Yes, but you must make the precommitment to love Omegas and hate No-megas (or vice versa) before you receive those observations, because that precommitment of yours is exactly what they're judging. (I think you see that point already, and we're probably arguing about some minor misunderstanding of mine.)
You never have to decide in advance, to precommit. Precommitment is useful as a signal to those that can't follow your full thought process, and so you replace it with a simple rule from some point on ("you've already decided"). For Omegas and No-megas, you don't have to precommit, because they can follow any thought process.
I thought about it some more and I think you're either confused somewhere, or misrepresenting your own opinions. To clear things up let's convert the whole problem statement into observational evidence. Scenario 1: Omega appears and gives you convincing proof that Upsilon doesn't exist (and that Omega is trustworthy, etc.), then presents you with CM. Scenario 2: Upsilon appears and gives you convincing proof that Omega doesn't exist, then presents you with anti-CM, taking into account your counterfactual action if you'd seen scenario 1. You wrote: "If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment." Now, I'm not sure what this sentence was supposed to mean, but it seems to imply that you would give up $100 in scenario 1 if faced with it in real life, because receiving the observations would make it "work just as well as the thought experiment". This means you lose in scenario 2. No?
Omega would need to convince you that Upsilon not just doesn't exist, but couldn't exist, and that's inconsistent with scenario 2. Otherwise, you haven't moved your beliefs to represent the thought experiment. Upsilon must be actually impossible (less probable) in order for it to be possible for Omega to correctly convince you (without deception). Being updateless, your decision algorithm is only interested in observations so far as they resolve logical uncertainty and say which situations you actually control (again, a sort of logical uncertainty), but observations can't refute the logically possible, so they can't make Upsilon impossible if it wasn't already impossible.
No it's not inconsistent. Counterfactual worlds don't have to be identical to the real world. You might as well say that Omega couldn't have simulated you in the counterfactual world where the coin came up heads, because that world is inconsistent with the real world. Do you believe that?
By "Upsilon couldn't exist", I mean that Upsilon doesn't live in any of the possible worlds (or only in insignificantly few of them), not that it couldn't appear in the possible world where you are speaking with Omega. The convention is that the possible worlds don't logically contradict each other, so two different outcomes of coin tosses exist in two slightly different worlds, both of which you care about (this situation is not logically inconsistent). If Upsilon lives on such a different possible world, and not on the world with Omega, it doesn't make Upsilon impossible, and so you care what it does. In order to replicate Counterfactual Mugging, you need the possible worlds with Upsilons to be irrelevant, and it doesn't matter that Upsilons are not in the same world as the Omega you are talking to. (How to correctly perform counterfactual reasoning on conditions that are logically inconsistent (such as the possible actions you could make that are not your actual action), or rather how to mathematically understand that reasoning is the septillion dollar question.)
Ah, I see. You're saying Omega must prove to you that your prior made Upsilon less likely than Omega all along. (By the way, this is an interesting way to look at modal logic, I wonder if it's published anywhere.) This is a very tall order for Omega, but it does make the two scenarios logically inconsistent. Unless they involve "deception" - e.g. Omega tweaking the mind of counterfactual-you to believe a false proof. I wonder if the problem still makes sense if this is allowed.
Sorry, can't parse that, you'd need to unpack more.
Whatever our prior for encountering No-mega, it should be counterbalanced by our prior for encountering Yes-mega (who rewards you if you are counterfactually-muggable).
You haven't considered the full extent of the damage. What is your prior over all crazy mind-reading agents that can reward or punish you for arbitrary counterfactual scenarios? How can you be so sure that it will balance in favor of Omega in the end?
In fact, I can consider all crazy mind-reading reward/punishment agents at once: For every such hypothetical agent, there is its hypothetical dual, with the opposite behavior with respect to my status as being counterfactually-muggable (the one rewarding what the other punishes, and vice versa). Every such agent is the dual of its own dual; in the universal prior, being approached by an agent is about as likely as being approached by its dual; and I don't think I have any evidence that one agent will be more likely to appear than its dual. Thus, my total expected payoff from these agents is 0. Omega itself does not belong to this class of agent; it has no dual. (ETA: It has a dual, but the dual is a deceptive Omega, which is much less probable than Omega. See below.) So Omega is the only one I should worry about. I should add that I feel a little uneasy because I can't prove that these infinitesimal priors don't dominate everything when the symmetry is broken, especially when the stakes are high.
Why? Can't your definition of dual be applied to Omega? I admit I don't completely understand the argument.
Okay, I'll be more explicit: I am considering the class of agents who behave one way if they predict you're muggable and behave another way if they predict you're unmuggable. The dual of an agent behaves exactly the same as the original agent, except the behaviors are reversed. In symbols:

* An agent A has two behaviors.
* If it predicts you'd give Omega $5, it will exhibit behavior X; otherwise, it will exhibit behavior Y.
* The dual agent A* exhibits behavior Y if it predicts you'd give Omega $5, and X otherwise.
* A and A* are equally likely in my prior.

What about Omega?

* Omega has two behaviors.
* If it predicts you'd give Omega $5, it will flip a coin and give you $100 on heads; otherwise, nothing. In either case, it will tell you the rules of the game.

What would Omega* be?

* If Omega* predicts you'd give Omega $5, it will do nothing. Otherwise, it will flip a coin and give you $100 on heads. In either case, it will assure you that it is Omega, not Omega*.

So the dual of Omega is something that looks like Omega but is in fact deceptive. By hypothesis, Omega is trustworthy, so my prior probability of encountering Omega* is negligible compared to meeting Omega. (So yeah, there is a dual of Omega, but it's much less probable than Omega.) Then, when I calculate expected utility, each agent A is balanced by its dual A*, but Omega is not balanced by Omega*.
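The cancellation claim can be checked on a toy model. This is my own formalization, not from the comment: each agent is reduced to a pair of payoffs (what it gives you if you're muggable, what it gives you if you're not), and its dual swaps the two.

```python
# Toy check that a dual-closed class of agents is indifferent
# to your muggability under a dual-symmetric prior.
from itertools import product

payoffs = [-100, 0, 100]
agents = list(product(payoffs, repeat=2))  # class closed under duals

def dual(agent):
    if_muggable, if_not = agent
    return (if_not, if_muggable)

# dual is an involution, and the class contains every agent's dual:
assert all(dual(dual(a)) == a for a in agents)
assert all(dual(a) in agents for a in agents)

# under a prior weighting each agent and its dual equally, the class
# contributes the same expected payoff whether or not you're muggable:
ev_if_muggable = sum(a[0] for a in agents) / len(agents)
ev_if_not = sum(a[1] for a in agents) / len(agents)
```

An asymmetry only appears when some agent (like the trustworthy Omega) is assigned more prior weight than its dual.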
If we assume you can tell "deceptive" agents from "non-deceptive" ones and shift probability weight accordingly, then not every agent is balanced by its dual, because some "deceptive" agents probably have "non-deceptive" duals and vice versa. No? (Apologies if I'm misunderstanding - this stuff is slowly getting too complex for me to grasp.)
The reason we shift probability weight away from the deceptive Omega is that, in the original problem, we are told that we believe Omega to be non-deceptive. The reasoning goes like this: If it looks like Omega and talks like Omega, then it might be Omega or Omega*. But if it were Omega*, then it would be deceiving us, so it's most probably Omega. In the original problem, we have no reason to believe that No-mega and friends are non-deceptive. (But if we did, then yes, the dual of a non-deceptive agent would be deceptive, and so have lower prior probability. This would be a different problem, but it would still have a symmetry: We would have to define a different notion of dual, where the dual of an agent has the reversed behavior and also reverses its claims about its own behavior. What would Omega* be in that case? It would not claim to be Omega. It would truthfully tell you that if it predicted you would not give it $5 on tails, then it would flip a coin and give you $100 on heads; and otherwise it would not give you anything. This has no bearing on your decision in the Omega problem.) Edit: Formatting.
By your definitions, Omega* would condition its decision on you being counterfactually muggable by the original Omega, not on you giving money to Omega* itself. Or am I losing the plot again? This notion of "duality" seems to be getting more and more complex.
"Duality" has become more complex because we're now talking about a more complex problem — a version of Counterfactual Mugging where you believe that all superintelligent agents are trustworthy. The old version of duality suffices for the ordinary Counterfactual Mugging problem. My thesis is that there's always a symmetry in the space of black swans like No-mega. In the case currently under consideration, I'm assuming Omega's spiel goes something like "I just flipped a coin. If it had been heads, I would have predicted what you would do if I had approached you and given my spiel...." Notice the use of first-person pronouns. Omega* would have almost the same spiel verbatim, also using first-person pronouns, and make no reference to Omega. And, being non-deceptive, it would behave the way it says it does. So it wouldn't condition on your being muggable by Omega. You could object to this by claiming that Omega actually says "I am Omega. If Omega had come up to you and said....", in which case I can come up with a third notion of duality.
If Omega* makes no reference to the original Omega, I don't understand why they have "opposite behavior with respect to my status as being counterfactually-muggable" (by the original Omega), which was your reason for inventing "duality" in the first place. I apologize, but at this point it's unclear to me that you actually have a proof of anything. Maybe we can take this discussion to email?
Surely the last thing on anyone's mind, having been persuaded they're in the presence of Omega in real life, is whether or not to give $100 :) I like the No-mega idea (it's similar to a refutation of Pascal's wager by invoking contrary gods), but I wouldn't raise my expectation for the number of No-mega encounters I'll have by very much upon encountering a solitary Omega. Generalizing No-mega to include all sorts of variants that reward stupid or perverse behavior (are there more possible God-likes that reward things strange and alien to us?), I'm not in the least bit concerned. I suppose it's just a good argument not to make plans for your life on the basis of imagined God-like beings. There should be as many gods who, when pleased with your action, intervene in your life in a way you would not consider pleasant, and are pleased at things you'd consider arbitrary, as those who have similar values they'd like us to express, and/or actually reward us copacetically.
You don't have to. Both Omega and No-mega decide based on what your intentions were before seeing any supernatural creatures. If right now you say "I would give money to Omega if I met one" - factoring in all belief adjustments you would make upon seeing it - then you should say the reverse about No-mega, and vice versa. ETA: Listen, I just had a funny idea. Now that we have this nifty weapon of "exploding counterfactuals", why not apply it to Newcomb's Problem too? It's an improbable enough scenario that we can make up a similarly improbable No-mega that would reward you for counterfactual two-boxing. Damn, this technique is too powerful!
By not believing No-mega is probable just because I saw an Omega, I mean that I plan on considering such situations as they arise on the basis that only the types of godlike beings I've seen to date (so far, none) exist. I'm inclined to say that I'll decide in the way that makes me happiest, provided I believe that the godlike being is honest and really can know my precommitment. I realize this leaves me vulnerable to the first godlike huckster offering me a decent exclusive deal; I guess this implies that I think I'm much more likely to encounter 1 godlike being than many.

Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.

Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.

We don't find out until...
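Deferring payment while interest accrues compounds fast. A rough sketch, using an illustrative 6.8% rate since the article doesn't give her actual loan terms:

```python
# Back-of-envelope deferred-balance growth (rate is illustrative).
def deferred_balance(principal, annual_rate, years):
    # Interest capitalizes yearly while payments are deferred.
    return principal * (1 + annual_rate) ** years

deferred_balance(100_000, 0.068, 0)  # today: 100000.0
deferred_balance(100_000, 0.068, 5)  # after five deferred years
```

Five years of deferral at that rate turns $100K into roughly $139K, without a single dollar of it buying any additional earning power.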

Do you mean young people with unrepayable college debt, or young people with unrepayable debt for degrees which were totally unlikely to be of any use?
What's the substantive difference? In both cases, the young person has taken out a debt intended to amplify earnings by more than the debt costs, but that isn't going to happen. What does it matter whether the degree was of "any use" or not? What matters is whether it was enough use to cover the debt, not simply whether there exists some gain in earnings due to the debt (which there probably is, though only via signaling, not direct enhancement of human capital).
I was making a distinction between extreme bad judgment (as shown in the article) and moderately bad judgment and/or bad luck. Your emphasis upthread seemed to be on how foolish that woman and her family were.
Arnold Kling has some thoughts about the plight of the unskilled college grad. 1 2
Thanks for the links, I had missed those. I agree with his broad points, but on many issues, I notice he often perceives a world that I don't seem to live in. For example, he says that people who can simply communicate in clear English and think clearly are in such short supply that he'd hire someone or take them on as a grad student simply for meeting that, while I haven't noticed the demand for my labor (as someone well above and beyond that) being like what that kind of shortage would imply. Second, he seems to have this belief that the consumer credit scoring system can do no wrong. Back when I was unable to get a mortgage at prime rates due to lacking credit history despite being an ideal candidate [1], he claimed that the refusals were completely justified because I must have been irresponsible with credit (despite not having borrowed...), and he has no reason to believe my self-serving story ... even after I offered to send him my credit report and the refusals! [1] I had no other debts, no dependents, no bad incidents on my credit report, stable work history from the largest private employer in the area, and the mortgage would be for less than 2x my income and have less than 1/6 of my gross in monthly payments. Yeah, real subprime borrower there...

One reason why the behavior of corporations and other large organizations often seems so irrational from an ordinary person's perspective is that they operate in a legal minefield. Dodging the constant threats of lawsuits and regulatory penalties while still managing to do productive work and turn a profit can require policies that would make no sense at all without these artificially imposed constraints. This frequently comes off as sheer irrationality to common people, who tend to imagine that big businesses operate under a far more laissez-faire regime than they actually do.

Moreover, there is the problem of diseconomies of scale. Ordinary common-sense decision criteria -- such as e.g. looking at your life history as you describe it and concluding that, given these facts, you're likely to be a responsible borrower -- often don't scale beyond individuals and small groups. In a very large organization, decision criteria must instead be bureaucratic and formalized in a way that can be, with reasonable cost, brought under tight control to avoid widespread misbehavior. For this reason, scalable bureaucratic decision-making rules must be clear, simple, and based on strictly defined ca...

As nearly as I can figure it, people who rely on credit ratings mostly want to avoid loss, but aren't very concerned about missing chances to make good loans.

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Someone who avoids carrying debt (and hence paying interest) is not a good revenue source any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payments with a maximal interest/principal ratio.

This is another one of those Hanson-esque "X is not about X-ing" things.
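The "expected profit" framing above can be sketched with a toy model (all numbers are made up for illustration; this is nothing like a real underwriting model):

```python
# Crude per-customer expected profit: interest income from carried
# balances, minus expected loss of principal on default.
def expected_profit(p_default, balance_carried, apr, principal):
    interest_income = (1 - p_default) * balance_carried * apr
    default_loss = p_default * principal
    return interest_income - default_loss

expected_profit(p_default=0.0, balance_carried=0, apr=0.2, principal=1000)    # pays in full: 0
expected_profit(p_default=0.05, balance_carried=800, apr=0.2, principal=1000) # revolver: > 0
expected_profit(p_default=0.6, balance_carried=800, apr=0.2, principal=1000)  # likely defaulter: < 0
```

The pay-in-full case coming out at zero is the point: a customer who never carries a balance generates no interest income, however creditworthy, which is why "ability to repay" alone doesn't make you attractive.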

Expected profit explains much behavior of credit card companies, but I don't think it helps at all with the behavior of the credit score system or mortgage lenders (Silas's example!). Nancy's answer looks much better to me (except her use of the word "also").
I think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that. There may also be a weirdness factor if relatively few people have no debt history. (1) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is partly about how a lot of what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.
Simplifying my behavior enough to keep track of me and control me is tyranny.
Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them. Maybe financial gurus should think about that before they say "stay away from credit cards entirely". It should be "You MUST get a credit card, but pay the balance." (This is another case of addictive stuff that can't addict me.) (Please, don't bother with advice, the problem has since been solved; credit unions are run by non-idiots, it seems, and don't make the above lender errors.) ETA: Sorry for the snarky tone; your points are valid, I just disagree about their applicability to this specific situation.
SilasBarta: Well, is it really possible that lenders are so stupid that they're missing profit opportunities because such straightforward ideas don't occur to them? I would say that lacking insider information on the way they do business, the rational conclusion would be that, for whatever reasons, either they are not permitted to use these criteria, or these criteria would not be so good after all if applied on a large scale. (See my above comment for an elaboration on this topic.) Or maybe the reason is that credit unions are operating under different legal constraints and, being smaller, they can afford to use less tightly formalized decision-making rules?
No, they do require that information to get the subprime loan; it's just that they classified me as subprime based purely on the lack of credit history, irrespective of that non-loan history. Providing that information, though required, doesn't get you back into prime territory. Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot. (Of course, I do differ from the general subprime population in that if I see that I can only get bad terms on a mortgage, I don't accept them.)
SilasBarta: This merely means that their formal criteria for sorting out loan applicants into officially recognized categories disallow the use of this information -- which would be fully consistent with my propositions from the above comments. Mortgage lending, especially subprime lending, has been a highly politicized issue in the U.S. for many years, and this business presents an especially dense and dangerous legal minefield. Multifarious politicians, bureaucrats, courts, and prominent activists have a stake in that game, and they have all been using whatever means are at their disposal to influence the major lenders, whether by carrots or by sticks. All this has undoubtedly influenced the rules under which loans are handed out in practice, making the bureaucratic rules and procedures of large lenders seem even more nonsensical from the common person's perspective than they would otherwise be. (I won't get into too many specifics in order to avoid raising controversial political topics, but I think my point should be clear at least in the abstract, even if we disagree about the concrete details.) Why do you assume that the bailouts are indicative of idiocy? You seem to be assuming that -- roughly speaking -- the major financiers have been engaged in more or less regular market-economy business and done a bad job due to stupidity and incompetence. That, however, is a highly inaccurate model of how the modern financial industry operates and its relationship with various branches of the government -- inaccurate to the point of uselessness.
I actually agree with most of those points, and I've made many such criticisms myself. So perhaps larger banks are forced into a position where they rely too much on credit scores at one stage. Still, credit unions won, despite having much less political pull, while significantly larger banks toppled. Much as I disagree with the policies you've described, some of the banks' errors (like assumptions about repayment rates) were bad, no matter what government policy is. If lending had really been regulated to the point of (expected) unprofitability, they could have gotten out of the business entirely, perhaps spinning off mortgage divisions as credit unions to take advantage of those laws. Instead, they used their political power to "dance with the devil", never adjusting for the resulting risks, either political or in real estate. There's stupidity in that somewhere.
In some cases this was an example of the principal–agent problem - the interests of bank employees were not necessarily aligned with the interests of the shareholders. Bank executives can 'win' even when their bank topples.
The principal-agent problem should always be on the list of candidates, but it can occasionally be eliminated as an explanation. I was listening to the This American Life episode "Return to the Giant Pool of Money", and more than one of the agents in the chain had large amounts of their resources wiped out.
The question of whether an agent's interests are aligned with the principal's is largely orthogonal to the question of whether the agent achieves a positive return. The agent's expected return is more relevant.
There were many agents involved in the recent financial unpleasantness whose harm was enabled by the principal-agent problem. My intended examples did not suffer that problem. I could have made that clearer.
These are not such different answers. Working on a large scale tends to require hiring (potentially) stupid people and giving them little flexibility.
Yes, that's certainly true. In fact, what you say is very similar to one of the points I made in my first comment in this thread (see its second paragraph).
Fair point. This does replicate the Conservation of Thought theme. I think a good bit about business can be explained as not bothering because one's competitors haven't bothered either. I've seen financial gurus recommend getting a credit card and paying the balance. And thanks for the ETA.
Ramit Sethi for example. I had the impression that this was actually pretty much the standard advice from personal finance experts. Most of them are not worth listening to anyway though.
This might be what they say in their books, where they give a detailed financial plan, though I doubt even that. What they advise is usually directed at the average mouthbreather who gets deep into credit card debt. They don't need to advise such people to build a credit history by getting a credit card solely for that purpose -- that ship has already sailed! All I ever hear from them is "Stay away from credit cards entirely! Those are a trap!" I had never once heard a caveat about, "oh, but make sure to get one anyway so you don't find yourself at 24 without a credit history, just pay the balance." No, for most of what they say to make sense, you have to start from the assumption that the listener typically doesn't pay the full balance, and is somehow enlightened by moving to such a policy. Notice how the citation you give is from a chapter-length treatment by a less-known finance guru (than Ramsey, Orman, Howard, etc.), and it's about "optimizing credit cards" -- a kind of complex, niche strategy. Not standard, general advice from a household name.
That would be an insanely stupid thing for anyone to say. Credit cards are very useful if used properly. I agree with mattnewport that the standard advice given in financial books is to charge a small amount every month to build up a credit rating. Also, charge large purchases at the best interest rate you can find when you'll use the purchases over time and you have a budget that will allow you to pay them off.
Well, then I don't know what to tell you. I'd listened to financial advice shows on and off and had read Clark Howard's book before applying for the mortgage back then, and never once did I hear or read that you should get a credit card merely to establish a credit history (and this is not why they issue them). I suspect it's because their advice begins from the assumption that you're in credit card debt, and you need to get out of that first, "you bozo". And your comment about the usefulness of credit cards for borrowing is a bit ivory-tower. In actual experience, based on all the expose reports and news stories I've seen, it's pretty much impossible to do that kind of planning, since credit card companies reserve the right to make arbitrary changes to the terms -- and use that right. I remember one case where a bank issued a card that had a "guaranteed" 1.9% rate for ~6 months with a ~$5000 limit -- but if you actually used anything approaching that limit, they would invoke the credit risk clauses of the agreement, deem you a high risk because of all the debt you're carrying, and jack up your rate to over 20%. So, a 1.9% loan that they can immediately change to 20% if they feel like it -- in what sense was it a 1.9% loan? For that reason, I don't even consider using a credit card for installment purchases.
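To make that rate jump concrete, here is a minimal sketch of the annual interest cost on the $5000 limit described above, before and after the "credit risk" clause is invoked. (Illustrative only: it assumes simple annual interest on the full balance and ignores compounding, fees, and minimum payments.)

```python
# Illustrative sketch: simple annual interest on a $5000 balance,
# ignoring compounding, fees, and minimum payments.
balance = 5000.00

promo_rate = 0.019   # the "guaranteed" 1.9% APR
penalty_rate = 0.20  # roughly the rate after the risk clause is invoked

promo_interest = balance * promo_rate      # about $95 per year
penalty_interest = balance * penalty_rate  # about $1000 per year

print(f"At 1.9%: ${promo_interest:.2f}/yr")
print(f"At 20%:  ${penalty_interest:.2f}/yr")
```

So the lender's unilateral change turns a roughly $95/year loan into a roughly $1000/year one, a better than tenfold increase, which is the sense in which it was never really a 1.9% loan.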
Wow, they can jack up the rate like that? I would definitely consider that fraud and abuse. That's not common, however, and Congress recently passed legislation to prevent that sort of abuse. Currently, I don't have the option of not using a credit card; I would starve to death without it.
I thought so too, but then was overwhelmed with stories like that. Most credit cards agreements are written with a clause that says, "we can do whatever we want, and the most you can do to reject the new terms is pay off the entire debt in 15 days". This is one of the few instances where courts will honor a contract that gives one party such open-ended power over the other. If you haven't been burned this way, it's just a matter of time. And if you google the topic, I'm sure you'll find enough to satisfy your evidence threshold. Would you starve to death with it? If you can service the debts, let me loan you the money; at this point, most investors would sell out their mother to get a fraction of the interest rate on their savings that most credit cards charge. (Not that I would, but I'd turn down the offer without my trademark rudeness...)
::followed link:: Did you ever experience nicotine withdrawal symptoms? In people who aren't long-time smokers, they can take up to a week to appear.
For what that's worth, when I quit smoking, I didn't feel any withdrawal symptoms except being a bit nervous and irritable for a single day (and I'm not even sure if quitting was the cause, since it coincided with some stressful issues at work that could well have caused it regardless). That was after a few years of smoking something like two packs a week on average (and much more than that during holidays and other periods when I went out a lot). From my experience, as well as what I observed from several people I know very well, most of what is nowadays widely believed about addiction is a myth.
No, never did. My best guess is that I didn't smoke heavily enough to get a real addiction, though I smoked enough to get the psychoactive effects.
Yes, I would think it would take around 5-10 cigarettes a day (or more) for at least a week to develop an addiction. While cigarettes (and heroin, and caffeine) are very physically addictive, it still takes sustained, moderately high use to develop a physical addiction. Most cigarette smokers describe their addictions in terms of "x packs per day".
Okay, then I guess my case isn't informative ... I'd use the pack/year metric instead of the pack/day.
I wish I could direct you to this Scientific American article so I could ask how it compares to your experiences, but it's behind a paywall.
From what I can see before the paywall, it looks like I definitely didn't meet the threshold under the best science, but I could probably cross it from 5 cigarettes per day. I'd only try that out if I were rewarded for doing it (but not for stopping, as that would defeat the purpose of such an experiment).