Related: Truly a Part of You, What Data Generated That Thought
Some Case Studies
The other day my friend was learning to solder and asked an experienced hacker for advice. The hacker told him that because heat rises, you should apply the soldering iron underneath the work to maximize heat transfer. Seems reasonable, logically inescapable, even. When I heard this, I thought through why heat rises and when, and saw that it was not so. I don't remember the conversation, but the punchline is that hot things become less dense, and less dense things float in a surrounding fluid; if you're not immersed in a fluid, heat doesn't rise. In the case of soldering, the primary mode of heat transfer is conduction through the liquid metal, so to maximize heat transfer, get the tip wet before you stick it in, and don't worry about position.
This is a case of surface reasoning failing because the heuristic (heat rises) was not truly a part of my friend or the random hacker. I want to focus on the actual 5-second skill of going back to First Principles that catches those failures.
Here's another; watch for the 5 second cues and responses: A few years ago, I was building a robot submarine for a school project. We were in the initial concept design phase, wondering what it should look like. My friend Peter said, "It should be wide, because stability is important". I noticed the heuristic "low and wide is stable" and thought to myself, "Where does that come from? When is it valid?". In the case of catamarans or sports cars, wide is stable because it increases the lever arm between restoring force (gravity) and support point (wheel or hull), and low makes the tipping point harder to reach. Under water, there is no tipping point, and things are better modeled as hanging from their center of volume. In other words, underwater, the stability criterion is vertical separation instead of horizontal separation. (More precisely, you can model the submarine as a damped pendulum, and notice that you want to tune the parameters for approximately critical damping.) We went back to First Principles and figured out what actually mattered, then went on to build an awesome robot.
Let's review what happened. We noticed a heuristic or bit of qualitative knowledge (wide is stable), and asked "Why? When? How much?", which led us to the quantitative answer, which told us much more precisely exactly what matters (critical damping) and what does not matter (width, maximizing restoring force, etc).
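The submarine example can be put in numbers. A minimal sketch (every parameter value here is invented for illustration): model pitch as a damped pendulum, I·θ'' + c·θ' + k·θ = 0, where the restoring coefficient k is buoyant force times the vertical separation between center of volume and center of mass. Note that width appears nowhere; the damping ratio is what you tune toward 1 for critical damping:

```python
import math

def damping_ratio(inertia, damping, restoring):
    """Damping ratio zeta for I*theta'' + c*theta' + k*theta = 0.
    zeta = 1 is critical damping: fastest return to level, no overshoot."""
    return damping / (2 * math.sqrt(restoring * inertia))

# Hypothetical submarine numbers (all made up for illustration):
buoyancy = 150.0    # N, buoyant force
separation = 0.05   # m, vertical distance from center of volume to center of mass
k = buoyancy * separation   # N*m/rad, restoring torque coefficient
I = 0.8             # kg*m^2, pitch moment of inertia
c = 2.0             # N*m*s/rad, hydrodynamic damping

zeta = damping_ratio(I, c, k)
print(f"zeta = {zeta:.2f}")  # < 1 underdamped (rocks back and forth), > 1 sluggish
```

With these made-up numbers the sub is underdamped; you'd add damping or reduce the separation until zeta is near 1.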
A more Rationality-related example: I recently thought about Courage, and the fact that most people are too afraid of risk (beyond just utility concavity), and as a heuristic we should be failing more. Around the same time, I'd been hounding Michael Vassar (at minicamp) for advice. One piece that stuck with me was "use decision theory". Ok, Courage is about decisions; let's go.
"You should be failing more", they say. You notice the heuristic, and immediately ask yourself "Why? How much more? Prove it from first principles!" "Ok", your forked copy says. "We want to take all actions with positive expected utility. By the law of large numbers, in (non-black-swan) games we play a lot of, observed utility should approximate expected utility, which means you should be observing just as much fail as win on the edge of what you're willing to do. Courage is being well calibrated on risk; if your craziest plans are systematically succeeding, you are not well calibrated and you need to take more risks." That's approximately quantitative, and you can pull out the equations to verify if you like.
Notice all the subtle qualifications that you may not have guessed from the initial advice: non-Pascalian games where the LLN applies, you can observe utility, your craziest plans, just as much fail as win (not just as many, not more). (Example application: one of the best matches for those conditions is social interaction.) Those of you who actually busted out the equations and saw the math of it, notice how much more you understand than I am able to communicate with just words.
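The calibration claim above can be checked with a quick simulation (a toy model of my own, not the math from the post): offer an agent random gambles, have it take every gamble with positive expected utility, and look at the realized utility of the marginal ones, the gambles barely worth taking. If it nets out near zero, fails and wins on the edge roughly cancel, as the argument predicts:

```python
import random

random.seed(0)

def edge_utility(n=200_000, edge=0.05):
    """Mean realized utility of marginal gambles (EV barely positive).
    Each gamble: stake 1 to win w with probability p; EV = p*w - (1 - p).
    A calibrated agent takes every gamble with EV > 0, so on the edge
    realized utility should hover near zero: as much fail as win."""
    total, count = 0.0, 0
    for _ in range(n):
        p = random.random()            # win probability of the offered gamble
        w = random.uniform(0.5, 2.0)   # payout if it wins
        ev = p * w - (1 - p)
        if 0 < ev < edge:              # marginal gambles only
            count += 1
            total += w if random.random() < p else -1
    return total / count

print(f"mean utility on the edge: {edge_utility():+.3f}")  # close to zero
```

If instead the agent only took gambles with EV well above zero (systematic success), the same statistic would come out clearly positive, which is the signature of leaving utility on the table.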
Ok, now I've named three, so we can play the generalization game without angering the gods.
On the Five-Second Level
Trigger: Notice an attempt to use some bit of knowledge or a heuristic. Something qualitative, something with unclear domain, something that affects what you are doing, something where you can't see the truth.
Action: Ask yourself: What problem does it try to solve (what's its interface, type signature, domain, etc)? What's the specific mechanism of its truth when it is true? In what situations does that hold? Is this one of those? If not, can we derive what the correct result would be in this case? Basically "prove it". Sometimes it will take 2 seconds, sometimes a day or two; if it looks like you can't immediately see it, come up with whatever quick approximation you can and update towards "I don't know what's going on here". Come back later for practice.
It doesn't have to be a formal proof that would convince even the most skeptical mathematician or outsmart even the most powerful demon, but be sure to see the truth.
Without this skill of going back to First Principles, I think you would not fully get the point of truly a part of you. Why is being able to regenerate your knowledge useful? What are the hidden qualifications on that? How does it work? (See what I'm doing here?) Once you see many examples of the kind of expanded and formidably precise knowledge you get from having performed a derivation, and the vague and confusing state of having only a theorem, you will notice the difference. What the difference is, in terms of a derivation From First Principles, is left as an exercise for the reader (i.e. I don't know). Even without that, though, having seen the difference is a huge step up.
From having seen the difference between derived and taught knowledge, I notice that one of the caveats of making knowledge Truly a Part of You is that just being able to get it From First Principles is not enough; actually having done the proof tells you a lot more than simply what the correct theorem is. Do not take my word for it; go do some proofs; see the difference.
So far I've just described something that has been unusually valuable for me. Can it be taught? Will others gain as much? I don't know; I got this one more or less by intellectual lottery. It can probably be tested, though:
Testing the "Prove It" Habit
In school, we had this awesome teacher for thermodynamics and fluid dynamics. He was usually voted best in faculty. His teaching and testing style fit perfectly with my "learn first principles and derive on the fly" approach that I've just outlined above, so I did very well in his classes.
In the lectures and homework, we'd learn all the equations, where they came from (with derivations), how they are used, etc. He'd get us to practice and be good at straightforward application of them. Some of the questions required a bit of creativity.
On the exams, the questions were substantially easier, but they all required creativity and really understanding the first principles. "Curve Balls", we called them. Otherwise smart people found his tests very hard; I got all my marks from them. It's fair to say I did well because I had a very efficient and practiced From First Principles groove in my mind. (This was fair, because actually studying for the test was a reasonable substitute.)
So basically, I think a good discriminator would be to throw people difficult problems that can be solved with standard procedure and surface heuristics, and then some easier problems that require creative application of first principles, or don't quite work with standard heuristics (but seem to).
If your subjects have consistent scores between the two types, they are doing it From First Principles. If they get the standard problems right, but not the curve balls, they aren't.
Straight: Bayesian cancer test. Curve: Here's the base rate and positive rate; how good is the test (likelihood ratio)?
Straight: Sunk cost on some bad investment. Curve: Something where switching costs, opportunity for experience make staying the correct thing.
Straight: Monty Hall. Curve: Ignorant Monty Hall.
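The Monty Hall pair is easy to check by simulation (a sketch under the usual setup, with "Ignorant Monty" meaning Monty doesn't know where the car is and opens a random door): standard Monty makes switching win 2/3 of the time, while with Ignorant Monty, conditioned on him happening to reveal a goat, switching only wins 1/2:

```python
import random

random.seed(1)

def trial(ignorant):
    """One game. Returns (switch_wins, valid); valid is False when
    Ignorant Monty accidentally reveals the car, voiding the round."""
    car = random.randrange(3)
    pick = random.randrange(3)
    if ignorant:
        # Monty opens a random unpicked door; he may expose the car
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == car:
            return False, False
    else:
        # standard Monty knowingly opens an unpicked goat door
        opened = random.choice([d for d in range(3) if d != pick and d != car])
    switched = next(d for d in range(3) if d not in (pick, opened))
    return switched == car, True

def switch_win_rate(ignorant, n=200_000):
    wins = valid = 0
    for _ in range(n):
        won, ok = trial(ignorant)
        if ok:
            valid += 1
            wins += won
    return wins / valid

print(f"standard Monty: {switch_win_rate(False):.3f}")  # ~ 2/3
print(f"ignorant Monty: {switch_win_rate(True):.3f}")   # ~ 1/2
```

The surface heuristic "always switch" survives only when Monty's choice carries information; the simulation makes that domain boundary concrete.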
Again, maybe this can't be taught, but here's some practice ideas just in case it can. I got substantial value from figuring these out From First Principles. Some may be correct, others incorrect, or correct in a limited range. The point is to use them to point you to a problem to solve; once you know the actual problem, ignore the heuristic and just go for truth:
Science says good theories make bold predictions.
Deriving From First Principles is a good habit.
Boats go where you point them, so just sail with the bow pointed to the island.
People who do bad things should feel guilty.
I don't have to feel responsible for people getting tortured in Syria.
If it's broken, fix it.
(post more in comments)
This instinct may be related to The Fear of Common Knowledge. We seem afraid to attempt things that can provide common knowledge of what our ability levels are. In other words, if I only try things that are safe, or are so challenging that I can't reasonably expect to succeed anyway, then nobody including myself has to acknowledge just how dumb, or untalented, or ignorant I am. But if I try and fail at something that's only somewhat challenging, then things become awkward.
This should be on the front page.
I thought about that, but then I don't know how to make that judgement, so I just thought if it's good enough someone will move it where it belongs.
Nick Szabo's Objective Versus Intersubjective Truth seems relevant here:
I tend to think that Nick is overstating his case somewhat in this essay, but it seems hard to deny that there must be many truths that are not feasibly rederivable from first principles, and highly evolved traditions related to interpersonal behavior are a likely place to find them. Additionally, I think the kind of "re-derivations from first principles" that we can actually do often just amount to handwaving ("Courage" in the OP is a good example of this) and offers rather little evidence that the rule or heuristic we're trying to derive is actually correct. Overall I caution against being overconfident about deriving things from first principles.
I've seen that essay linked a few times and finally took the time to read it carefully. Some thoughts, for what they're worth:
What exactly is a code? (Apparently they can be genetic or memetic, information theory and Hayek both have something to say about them, and social traditions are instances of them.) How do you derive, refute or justify a code?
There are apparently evolved memetic codes that solve interpersonal problems - how do we know that memetic evolution selects for good solutions to interpersonal problems, and that it doesn't select even more strongly for something useless or harmful, like memorability or easy transmission to children or appeal to the kinds of people in the best position to spread their ideas to others or making one feel good? Why isn't memetic evolution as much of an amoral Azathoth as biological evolution? The results of memetic evolution are just the memes that were best at surviving and reproducing themselves. These generally have no reason to be objectively true. I'm not convinced that there's any reason they should be intersubjectively true (socially beneficial) either. Also, selection among entire social systems seems to require group selection.
And granted that the traditions that are the results of the process of memetic/cultural evolution contain valuable truths, are those truths in the actual content of the traditions, or are they just in what we can infer from the fact that these were the particular traditions that resulted from the process?
It seems clear to me that memes are socially beneficial in the sense that we're much better off with the memes that we actually have (including traditional moralities, laws, etc.) than no memes, or a set of random memes. And also that it would be quite hard to find a set of memes that would do as well, if we were to start over from scratch. I'm not quite sure how to explain this, or answer your other questions, but perhaps Nick has given these issues more thought. He recently reposted the essay to his blog, so commenting there might be a good way to draw his attention.
Yes. Hence the "don't spend more than a few seconds trying" implication of it being a 5 second skill.
Handwaving or not, habitually looking at actual specific mechanisms and actual math has been hugely informative to me about what actually matters: is this thing actually true, what are the limits, etc.
It's not like I just handwaved up something that looked like the classical concept of courage and then said "oh look, now we can be reckless". No. I gave a specific example of what decision theory says is best in a particular case. We got actual narrow advice with explicit domain bounds, which overrides whatever we thought before. I omitted some details, and reported it in English, so it seems a bit fuzzy, but I did do the math and warn the reader to do the math for themselves to fill in the blanks. If you see a specific flaw in what I laid out, I'd like to hear about it.
I couldn't figure out how to translate your English into math, or see how to do the math myself. For the reasons stated in Nick's essay, I'm skeptical that it is feasible to fully "do the math" in problems like these. I suspect you may have done the math incorrectly, or applied simplifying assumptions that are not safe to make. My other top-level comment pointed out one important consideration that your math probably ignored.
I do think it's useful to look at actual specific mechanisms and actual math, but I worry it's easy to forget that the mechanism we're looking at is just one among many that exist in reality and the math inevitably involves many simplifying assumptions or could just be wrong, and become more confident in our conclusions than we should. Based on your post ("you can pull out the equations to verify if you like" instead of "here's my math, please help me check it for mistakes and bad assumptions") I think this worry is justified.
Sorry. I had some math in there for the solder and submarine example, and I've got the math somewhere for the courage thing, but I decided that the math didn't add much value. Should I leave the math in where it exists next time? Or put it back in now even?
If I get around to it, I'll post some equations in the comments.
I can see how courage might be a bad example. The revealing skill level thing is potentially important. I probably missed some stuff too. Maybe I should break that into another post, because deriving that sort of thing from the equations is an interesting thing to do that could use a lot more scrutiny.
Good point. Simplifying assumptions could sink us, as could overconfidence. I reckon a good way to figure it out is to test it and see how often the quick scribbly math fails us. My particular approach has been quite useful and generally accurate, but since I can't yet see from first principles which bits of my habit are the important ones, all I can do is report my success, describe my procedure, and urge people to try it themselves, so that they'll figure out the important bits too. (hence the "don't take my word for it go look at the difference", and the meta example at the end)
Anyways, I know of no procedure better than actually trying to comprehend the reason for things, when it exists. Not looking at the reasons seems like a bad idea (seems may be an understatement. I've seen lots of people fail or push in the wrong direction when a bit of From First Principles would have saved them).
Yes, please post your math, either in the comments here or in another post, depending on how involved it is.
How would you test whether your math for "courage" failed you? (Presumably, if it's wrong, then you'd fail to maximize expected utility, but how could you tell that?)
Do you have any examples of this in the sphere of interpersonal behavior?
Will do later. Too busy to dig it up now.
Look at other people, and your past self I guess? Are you doing better than you would have? Does it look like it's got to do with risk strategy? Not rigorous or anything, but you can get evidence. The courage thing is built on expected utility being measurable, so it shouldn't be too hard. Won't be easy either, though.
Not off the top of my head. I don't have a good solid set of equations or even rules for interpersonal stuff, so I wouldn't expect to recognize it. Also the bottleneck in interpersonal stuff is usually something other than using models blindly.
(I am obsessed with the zipline concept).
Yes! I want a pilot program of building a 100ft tall mast for each block in a school district, with a zipline leading down to the school. Imagine the saved time for everyone who no longer has to wait for school buses on their morning commute!
I investigated a similar idea for a conworld once and ended up rejecting it.
AFAICT ziplines with modern technology really aren't good at covering long distances. I didn't study the math, but just eyeballing existing long-distance ziplines it seems you need approximately 1 meter height for 10-20 meters traveled. The average distance to school in the US seems to be between 1 mile and 5 miles depending on who you ask. Let's take the lower option. To go 1 mile you'd need a 250-500 foot mast. But that's just on average; some people will live two miles away and need 500-1000 foot masts - up to as high as the Stratosphere Tower in Vegas.
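The eyeballed figures above reduce to a two-line calculation (same assumed glide slope of roughly 10-20 units traveled per unit of drop, with 15 as a midpoint; nothing here is engineered):

```python
def mast_height_ft(distance_miles, glide_ratio=15):
    """Launch height needed for a straight zipline covering the distance,
    assuming ~15 ft traveled per ft of drop (eyeballed, not engineered)."""
    return distance_miles * 5280 / glide_ratio

for miles in (1, 2, 5):
    print(f"{miles} mi -> {mast_height_ft(miles):.0f} ft mast")
# roughly 350 ft for 1 mile, 700 ft for 2, 1760 ft for 5
```

Varying the glide ratio over the 10-20 range reproduces the 250-500 ft bracket for the one-mile case.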
Not only do you have to pay for a Stratosphere Tower on every block (there are 72000 blocks in Manhattan!), not only do you have to tolerate a forest of huge towers that will probably lower land value, but you've also got to get kids up a 500 foot tower every morning, which means realistically that you're paying for some really good elevators. And we're still only saving kids a two mile bike ride or five minutes waiting for a bus!
You could limit it to the blocks closest to the school to decrease max tower height, but that would also limit the benefit.
Who says the whole distance ought to be covered by a single line?
Also related (but not as good as this one, which was far more specific and thus useful): What data generated that thought?.
Hadn't seen that. Added a link to it.
Insufficient information. For example, if the base rate is 80% and the positive rate is 80%, it could be anything from a perfect test to a random result independent of them having cancer. You need another equation, like knowing the false positive rate, or knowing that the false positive and false negative rates are the same.
Which isn't to say that this would be a bad test question, of course - just that the student has to realize that they're expected to explain why they have insufficient information to answer.
Well, you can at least give the likelihood ratio of a positive test, which was what I was getting at. You're right tho, to give all the test parameters, you'd need the negative rate as well.
Maybe conservation of evidence can help us fill in the blanks somewhere? This seems like a fun thing to think about.
How? You need to know how likely it is to be positive given that you have cancer. If it's a perfect test, the likelihood ratio is infinity to one. If it's a random test, it's one to one. Since it could be either of those, or anything in between, you can't figure that out.
Am I misinterpreting what you mean by positive rate, likelihood ratio, or both?
Prior odds are 1:99 against, posterior odds are 1:9.9, therefore the LR of a positive test is 10. I may have miscommunicated: I meant the posterior when I said "positive rate" ("cancer rate given positive" was my interpretation). I can see how it is better parsed as "rate that you test positive", which is something else. Sorry for the confusion.
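The odds arithmetic in this thread is quick to verify (a sketch using the numbers above: a 1% base rate and a posterior of 1:9.9 given a positive test):

```python
def likelihood_ratio(prior_p, posterior_p):
    """Bayes in odds form: posterior_odds = LR * prior_odds,
    so LR = posterior_odds / prior_odds."""
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = posterior_p / (1 - posterior_p)
    return posterior_odds / prior_odds

prior = 0.01          # 1% base rate -> prior odds 1:99
posterior = 1 / 10.9  # posterior odds 1:9.9
print(f"LR of a positive test: {likelihood_ratio(prior, posterior):.1f}")  # 10.0
```

This recovers the factor-of-10 update without needing the false negative rate, which is exactly why the LR is the one parameter the "curve ball" version can pin down.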
You're overloading the term "from first principles" and I'm not sure it's helpful. Normally, the term implies going away from the empirical level of abstraction, but it sounds like you're going towards the empirical level.
Possibly. I mean deriving from highly trusted knowledge (like Newton's laws or probability theory) instead of leaning on derived knowledge for which you have not done the derivation.
You're right: it means going away from empirical/statistical knowledge toward modelling the specific underlying process. I don't know if I'd call that more abstract or less abstract or what.
I think I'm using it how we used it in engineering school. If it becomes confusing, we can come up with a better phrase.
If it's already engineering jargon, I say ignore my complaint. Engineering jargon trumps philosophy jargon.
Is the non-standard spelling at places ("thot" "thru") deliberate?
Yes. My dad spells it that way and occasionally remarked that we need to move to more sensible spellings of some words (mostly gh words). Stuck with me.
I'm probably inconsistent about it tho.
If it bothers you, I can change it. It just seems better.
EDIT: changed it.
Please do change it.
It doesn't bother me, but (just so you know) it did not occur to me prior to reading this comment that it might be deliberate -- I thought that you just didn't know the correct spelling for the relevant words.
Please don't argue with a spell-checker.
I am not sure that signaling one's own illiteracy is a very effective form of advocacy for orthographic reform. I agree that English orthography is a disgrace.
Those spellings seem fine to me. "thru" might even have more currency than "through" by now.
I can live with “thru”, but “thot” assumes the cot-caught merger (“thaut” would be better) and “tho” looks awful to me (I'm not sure why). Anyway, I think most people find it easier to read irregular but familiar spellings than regular but unfamiliar ones.
Not in published books, not in the COCA nor in any other place from which I can get quantitative data I can think of.
Agreed: don't generalize this out of context.
It was originally said of certain things where failure could be offset by success, and for some reason we had heuristics that prioritized safety over average success. Such as some kinds of social interaction.
But in many, perhaps most, endeavors, it's rational to value safety over pure long-term expected utility. I have so far always succeeded at surviving despite having had cancer and driving a car every day. But that does not mean I should take more risks and try an experimental therapy for my diabetes.
Yes. The usual quote is, "If you never miss a plane, you're spending too much time at the airport." (Attributed to George Stigler.)
I had been attributing it to Umesh Vazirani all this time. Thanks!
If you never find yourself wrongly attributing quotes, you're spending too much time checking sources? Gwern is excluded from this heuristic.
Attribute everything to "Internet saying"
I thought that was one of the points being made in the post: ask why the heuristic is true to see if it applies in this specific case.
Yes, and I was expanding on when and why it does not apply.
I like this. Consider moving it to main.