Take a second to go upvote You Are A Brain if you haven't already...

Back?  OK.

Liron's post reminded me of something that I meant to say a while ago.  In the course of giving literally hundreds of job interviews to extremely high-powered technical undergraduates over the last five years, one thing has become painfully clear to me:  even very smart and accomplished and mathy people know nothing about rationality.

For instance, reasoning by expected utility, which you probably consider too basic to mention, is something they absolutely fall flat on.  Ask them why they choose as they do in simple gambles involving risk, and they stutter and mutter and fail.  Even the Econ majors.  Even--perhaps especially--the Putnam winners.
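
(For concreteness, here is a minimal sketch of the kind of reasoning I mean -- comparing a sure payoff against a simple gamble by expected utility. The payoffs and utility functions below are invented purely for illustration, not taken from any actual interview question.)

    # Illustration only: compare a sure payoff with a simple gamble by expected utility.
    def expected_utility(gamble, utility):
        """gamble is a list of (probability, payoff) pairs."""
        return sum(p * utility(x) for p, x in gamble)

    sure_thing = [(1.0, 40)]              # $40 for certain
    risky_bet = [(0.5, 100), (0.5, 0)]    # 50/50 shot at $100 or nothing

    linear = lambda x: x                  # risk-neutral utility
    concave = lambda x: x ** 0.5          # a risk-averse (concave) utility

    # Risk-neutral: the gamble's expected utility (50.0) beats the sure $40.
    print(expected_utility(risky_bet, linear), expected_utility(sure_thing, linear))

    # Risk-averse: the sure thing (~6.32) beats the gamble (5.0).
    print(expected_utility(risky_bet, concave), expected_utility(sure_thing, concave))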

Of those who have learned about heuristics and biases, a nontrivial minority have gotten confused to the point that they offer Kahneman and Tversky's research as justifying their exhibition of a bias!

So foundational explanatory work like Liron's is really pivotal.  As I've touched on before, I think there's a huge amount to be done in organizing this material and making it approachable for people that don't have the basics.  Who's going to write the Intuitive Explanation of Utility Theory?

Meanwhile, I need to brush up on my Python and find a way to upvote Liron more than once.  If only...

Update: Tweaked language per suggestion, added Kahneman and Tversky link.

47 comments

Of those who have learned about heuristics and biases, a nontrivial minority have gotten confused to the point that they offer Kahnemann and Tversky's research as justification for the self-arbitrages they've set up!

Suggested alternative wording:

"Of those who have learned about heuristics and biases, a nontrivial minority are so confused as to point to the biases research as justifying their exhibition of a bias!"

It's interesting that this correction has a higher score than the post itself.

People don't seem to vote posts up or down with the same enthusiasm as they vote on comments. Why? I do not know.

jscn:

I would guess that it's because comments are shorter and tend to express a single idea. Posts tend to have a series of ideas, which means a voter is less likely to think all of them are good/worthy of an upvote.

I strongly agree. As an anecdotal data point, I understood the suggested alternative but not the original wording. And it's an important point to miss, because I haven't heard of Kahneman and Tversky.

Also, if mentioning specific researchers were central to the point, I would recommend linking to a resource about them, or better yet, creating entries for them on the Less Wrong Wiki and linking to those.

Done, thanks for the feedback!

I made the mistake I'm talking about---assuming certain things were well-known.

Seconded! Those names didn't ring a bell for me either, though I'm familiar with the results from Prospect Theory (I probably read about them on OB), and that's probably what talisman was referring to.

Definitely worth reading up. K & T are the intellectual fathers of the entire modern heuristics and biases program. There was some earlier work (e.g. Allais) but from what I hazily recall that work was fairly muddled conceptually.

Completely agree that people just use methods such as tabu search, A*, etc. without understanding them at all; the same happens with machine learning techniques, or even statistical ones. Mostly they get by using the recommended algorithm or metaheuristic for the domain they are working in.

I strongly recommend Python for doing this; it is the best language to begin programming with. I have written several programs on my own, and I can collaborate on the project.

I strongly recommend Python for doing this; it is the best language to begin programming with. I have written several programs on my own, and I can collaborate on the project.

Not disagreeing with you here, but you seem to have missed the implication; the reason Python was mentioned is because LessWrong is written in it.

Thanks for clarifying; I did not know that. I guess I have to read the introduction first.

Take a second to go upvote You Are A Brain if you haven't already...

This is extremely off-topic, but please do not tell me what to upvote. I actually downvoted that post because the slideshow was completely useless to me and I thought its quality was poor. This isn't to slam Liron; his post just didn't do it for me.

But just because you really, really liked it doesn't mean you get to tell me what to like.

I actually think Liron's slideshow needs a lot of work, but it seems very much like the kind of thing LWers should be trying to do out in the world.

the slideshow was completely useless to me

Yes, of course it was. It was created for teenagers who are utterly unfamiliar with this way of thinking.

its quality was poor

OK. Can you improve it or do better?

I actually think Liron's slideshow needs a lot of work, but it seems very much like the kind of thing LWers should be trying to do out in the world.

I would agree.

OK. Can you improve it or do better?

Possibly, but I have little reason to do so since this sort of thing is not particularly applicable to my life.

Of note, I am not trying to be a jerk or make this a big deal. My comment really has little to do with Liron's post. It has everything to do with you telling me to upvote something. I just, politely, want you to not do that again. I had typed up more details on why I downvoted but they are irrelevant for what I wanted to say to you.

[anonymous]:

I agree with the parent. This article was okay, but can you fawn over Liron somewhere else -- perhaps in the comments on his article?

Or better yet, somewhere other than LW.

Talisman: what line of work are you in where you interview enough Putnam winners to have a reasonable sample size? Seriously, write to me at my SIAI email about this and I'll try to work it into our recruitment strategy if there's any practical way to do so.

And likewise, can I apply for whatever position you're interviewing these people for? (I mean talisman, not SIAI. I think SIAI requires such unreasonable skills of me as "tact" and "not voicing why you think other people are idiots".) I'm sure I'd be in the top 1%.

No, we definitely don't require that, but we are a LOT more selective than, say, Goldman or D.E. Shaw in other ways, and we do require that you be able to function as part of a team.

Wow! That's tough! I don't know if I could ever be more qualified than someone who nearly shut down the entire financial system! ;-)

Well, it generally does take geniuses to achieve something monumentally stupid. Same principle as (unintentionally) creating an Unfriendly AI: current researchers are not competent enough to pose any reasonable risk of it, but a future genius might just fail hard enough.

For instance, reasoning by expected utility, which you probably consider too basic to mention

Actually, I consider it too complicated for my first book! That's going to focus on getting across even more basic concepts like 'the point of reasoning about your beliefs is to function as a mapping engine that produces correlations between a map and the territory' and 'strong evidence is the sort of evidence we couldn't possibly find if the hypothesis were false'.

Funny. I feel like on OB and LW utility theory is generally taken as the air we breathe.

It is - but that's OB and LW.

'strong evidence is the sort of evidence we couldn't possibly find if the hypothesis were false'.

-blink-

If you mean this, please elaborate. If not, please change the wording before you confuse the living daylights out of some poor newcomer.

Edit: I'm not nitpicking him for infinite certainty. I acknowledge it's reasonable, informally, to tell me a ticket I'm thinking of buying couldn't possibly win the lottery. That's not what I mean. I mean that even finding some overwhelmingly strong evidence doesn't necessarily mean the hypothesis is overwhelmingly likely to be true. If the comment's misleading, then given its subject it seems worth pointing out!

Example: Say you're randomly chosen to take a test with a false positive rate of 1% for a cancer that occurs in 0.1% of the population, and it returns positive. That's strong evidence for the hypothesis that you have that cancer, but the hypothesis is probably false.
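
(To spell the arithmetic out, here's a minimal sketch. Since the comment doesn't give a false negative rate, it assumes the test always detects the cancer when it is present.)

    # Numbers from the example above; perfect sensitivity is an assumption,
    # since the comment doesn't state a false negative rate.
    prior = 0.001                  # 0.1% of the population has the cancer
    p_pos_given_cancer = 1.0       # assumed: the test always catches it
    p_pos_given_healthy = 0.01     # 1% false positive rate

    p_positive = prior * p_pos_given_cancer + (1 - prior) * p_pos_given_healthy
    posterior = prior * p_pos_given_cancer / p_positive
    print(posterior)  # ~0.091 -- even after a positive test, the hypothesis is probably false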

Strongly seconded. Generally, it seems to me that Eliezer frequently seriously confuses people by mixing literal statements with hyperbole like this or "shut up and do the impossible". I definitely see the merit of the greater emotional impact, but I hope there's some way to get it without putting off the unusually literal-minded (which I expect most people who will get anything out of OB or The Book are).

Yeah, that is kind of tricky. Let me try to explain what Eliezer_Yudkowsky meant in terms of my preferred form of the Bayes Theorem:

O(H|E) = O(H) * P(E|H) / P(E|~H)

where O indicates odds instead of probability and | indicates "given".

In other words, "any time you observe evidence, amplify the odds you assign to your beliefs by the probability of observing the evidence if the belief were true, divided by the probability of observing it if the belief were false."

Also, keep in mind that Eliezer_Yudkowsky has written about how you should treat very low probability events as being "impossible", even though you have to assign a non-zero probability to everything.

Nevertheless, his statement still isn't literally true. The strength of the evidence depends on the ratio P(E|H)/P(E|~H), while the quoted statement only refers to the denominator. So there can be situations where you have 100:1 odds of seeing E if the hypothesis were true, but 1:1000 odds (about a 0.1% chance) of seeing E if it were false.

Such evidence is very strong -- it forces you to amplify the odds you assign to H by a factor of roughly a thousand -- but it's far from evidence you "couldn't possibly find", which to me means something like 1:10^10 odds.

Still, Eliezer_Yudkowsky is right that, generally, strong evidence will have a very small denominator.

EDIT: added link
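
(A minimal sketch of that odds-form update in code, using probabilities that roughly match the 100:1 and 1:1000 odds above; the function name is just for illustration.)

    def update_odds(prior_odds, p_e_given_h, p_e_given_not_h):
        """O(H|E) = O(H) * P(E|H) / P(E|~H): multiply prior odds by the likelihood ratio."""
        return prior_odds * p_e_given_h / p_e_given_not_h

    # Evidence seen ~99% of the time if H is true, ~0.1% of the time if H is false:
    # a likelihood ratio of roughly a thousand.
    print(update_odds(prior_odds=1.0, p_e_given_h=0.99, p_e_given_not_h=0.001))  # ~990
    # Strong -- but nowhere near the "couldn't possibly find it" regime (ratios like 10**10).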

In comments like this, we should link to the existing pages of the wiki, or create stubs for new ones.

Bayes' theorem on LessWrong wiki.

Strong evidence is evidence that, given certain premises, has no chance of arising.

Of course, Eliezer has also claimed that nothing can have no chance of arising (probability zero), so it's easy to see how one might be confused about his position.

Traditionally, evidence that has less than a particular probability of arising given the truth of a hypothesis (usually 5%) is considered to be strong, but that's really an arbitrary decision.

Cyan:

Traditionally, evidence that has less than a particular probability of arising given the truth of a hypothesis (usually 5%) is considered to be strong, but that's really an arbitrary decision.

Correction: traditionally evidence against an hypothesis is considered strong if the chance of that evidence or any more extreme evidence arising given the truth of the hypothesis is less than an arbitrary value. (If this tradition doesn't make sense to you, you are not alone.)

I'm really surprised to hear you say that - I would have thought it was pretty fundamental. Don't you at least have to introduce "shut up and multiply"?

First, you have to explain why relying on external math, rather than on a hunch, is a good idea. Second, you need to present a case for why shutting up and multiplying in this particular way is a good idea.

First, you have to explain why relying on external math, rather than on a hunch, is a good idea.

That applies to Bayesian reasoning too, doesn't it?

Second, you need to present a case for why shutting up and multiplying in this particular way is a good idea.

That's in some ways easier - basically this comes down to standard arguments in decision theory, I think...

This applies to anything, including excavators and looking up weather on the Internet. You have to trust your tools, which is especially hard where your intuition cries "Don't trust! It's dangerous! It's useless! It's wrong!". The technical level, where you go into the details of how your tools work, is not fundamentally different in this respect.

Here I'm focusing not on defining what is actually useful, or right, or true, but on looking into the process of how people can adopt useful tools or methods. A decision of some specific human ape-brain is a necessary part, even if the tool in question is some ultimate abstract ideal nonsense. I'm brewing a mini-sequence on this (2 or 3 posts).

I think that if there is such a thing as x-rationality, its heart is that mathematical models of rationality based on probability and decision theory are the correct measure against which we compare our own efforts.

At which point you run into a problem of formalization and choice of parameters, which is the same process of ape-brain-based decision-making. A statement that in some sense, decision theory/probability theory math is the correct way of looking at things, is somewhat useful, but doesn't give the ultimate measure (and lacks so much important detail). Since x-rationality is about human decision-making, a large part of it is extracting correct decisions out of your native architecture, even if these decisions are applied to formalization of problems in math.

That's in some ways easier - basically this comes down to standard arguments in decision theory, I think...

Since real gambles are always also part of the state of the world that one's utility function is defined over, you also need the moral principle that there shouldn't be (dis)utility attached to their structure. Decision theory strictly has nothing to say to the person who considers it evil to gamble with lives (operationalized as not taking the choice with the lowest variance in possible outcomes, or whatever), although it's easy to make it sound like it does. The moral principle here seems intuitive to me, but I have no idea if it is in general. (Something to Protect is the only post I can think of dealing with this.)

I don't really know the formal definition or theory of expected utility, but it is something which seems to underpin almost everything that is said here on LW or on OB.

Can anyone please point me to a good reference or write a wiki entry?

Are the wikipedia references recommended?

The Wikipedia reference is a bit patchy. This Introduction to Choice under Risk and Uncertainty is pretty good if you have a bit more time and can handle the technical parts.

Thanks conchis.

Thanks! I hadn't heard that definition of utilitarianism before.

As I recall, I made this up to suit my own ends :-(

Wikipedia quibbles with me significantly - stressing the idea that utilitarianism is a form of consequentialism:

"Utilitarianism is the idea that the moral worth of an action is determined solely by its contribution to overall perceivable utility: that is, its contribution to happiness or pleasure as summed among an ill-defined group of people. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its outcome."

I don't really want "utilitarianism" to refer to a form of consequentialism - thus my crude attempt at hijacking the term :-|

I hadn't even considered the possibility that your definition might lead to a 'utilitarianism' that is not consequentialist. In some circles, the two terms are used interchangeably. Sounds akin to 'rule utilitarianism', but more interesting - the right action is one that maximizes expected utility, regardless of its actual consequences. Does that sound like a good enough characterization?

I would still be prepared to call an agent "utilitarian" if it operated via maximising expected utility - even if its expectations turned out to be completely wrong, and its actions were far from those that would have actually maximised utility.

Humans are often a bit like this. They "expect" that hoarding calories is a good idea - and so that is what they do. Actually this often turns out to be not so smart. However, this flaw doesn't make humans less utilitarian in my book - rather they have some bad priors - and they are wired-in ones that are tricky to update.

[anonymous]:

Meanwhile, I need to brush up on my Python and find a way to upvote Liron more than once.

That doesn't require Python... it requires rudimentary general problem-solving ability and a certain disrespect for the spirit of the law; if automation were desired, it could be implemented in one of many languages.

pjeby:

Kahneman and Tversky's research

Holy crap that's useful. System 1 and System 2 correspond almost exactly to my Savant/Speculator distinction, and other bits of the paper support monoidealism and my recent work teaching myself and others to act more confidently and creatively (not to mention improving learning) through explicit deferment of System 2 thinking during task performance. And that's just what I got from a light and partial skimming of the paper. It's going to take a chunk of time out of my day to absorb it all, but it's gonna be worth it.