Copying over my comment from the SSC review, which otherwise may get lost in the fog of comments there.


Super fun review!

The part of the review this responds to:

I found this part to be the biggest disappointment of this book. I don’t think it grappled with the claim that the Outside View (and even Meta-Outside View) are often useful. It offered vague tips for how to decide when to use them, but I never felt any kind of enlightenment, or like there had been any work done to resolve the real issue here. It was basically a hit job on Outside Viewing.

Conversely, I found the book gave short but excellent advice on how to resolve the interminable conflict between the inside and outside views – the only way you can: empiricism. Take each case on its own, make bets, and see how you come out. Did you bet that this education startup would fail because you believed the education market was adequate? And did you lose? Then you should update away from trusting the outside view here. Et cetera. This was the whole point of Chapter 4, which gives examples of Eliezer getting closer to the truth with empiricism (including examples where he updated towards using the expert-trusting outside view, because he’d been wrong).

You quote “Eliezer’s four-pronged strategy”, but I feel like his actual proposed methodology was in Chapter 4:

Step one is to realize that here is a place to build an explicit domain theory—to want to understand the meta-principles of free energy, the principles of Moloch’s toolbox and the converse principles that imply real efficiency, and build up a model of how they apply to various parts of the world.
Step two is to adjust your mind’s exploitability detectors until they’re not always answering, “You couldn’t possibly exploit this domain, foolish mortal,” or, “Why trust those hedge-fund managers to price stocks correctly when they have such poor incentives?”
And then you can move on to step three: the fine-tuning against reality.

This is how you figure out if you’re Jesus – test your models, and build up a track record of predictions.
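For concreteness, here is a minimal sketch of what building that track record might look like in practice; the example claims, the probabilities, and the choice of Brier score as the scoring rule are my own illustration, not anything from the book:

```python
# Toy track record: (claim, my probability, what actually happened).
# All entries are made-up examples.
predictions = [
    ("this education startup fails", 0.8, True),
    ("this hedge fund beats the market", 0.3, False),
    ("the expert consensus on X holds up", 0.7, True),
]

def brier_score(records):
    """Mean squared error of stated probabilities vs. outcomes.
    Lower is better; always saying 0.5 scores 0.25."""
    return sum((p - float(outcome)) ** 2 for _, p, outcome in records) / len(records)

print(f"My Brier score: {brier_score(predictions):.3f}")
# Score an "always defer to the expert consensus" policy on the same claims and compare;
# update toward whichever policy actually does better, domain by domain.
```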

You might respond “But telling me to bet more isn’t an answer to the philosophical question about which view to use”, in which case I repeat: there is no way to know a priori whether to trust experts using the outside view, because you don’t know how good the experts are, and you need to build up domain-specific skill at predicting this.

You might respond “But this book didn’t give me any specific tools for figuring out when to trust the experts over me” in which case I continue to be baffled and point you to the first book – Moloch’s toolbox.

Finally, you might respond “Thank you Eliezer, I’d already heard that a bet is a tax on bullsh*t, I didn’t require a whole new book to learn this”, to which I respond that, firstly, I prefer the emphasis that “bets are a way to pay to find out where you’re wrong (and make money otherwise)”, and secondly, that the point of this book is that people are far too quick to assume the adequacy of experts, so please make more bets in this particular domain. Which I think is a very good direction to push.


This is probably also my response to Hanson's review, which didn't see how the two-books-in-a-book connected up. The first book (on Inadequate Equilibria) is what it looks like to build domain knowledge about when to use the outside view / when to trust experts / when to trust your own meta-rationality. It is the object level to the second book's meta level.

The most interesting section to me of Hanson's review was

Furthermore, Yudkowsky thinks that he can infer his own high meta-rationality from his details:
I learned about processes for producing good judgments, like Bayes’s Rule, and this let me observe when other people violated Bayes’s Rule, and try to keep to it myself. Or I read about sunk cost effects, and developed techniques for avoiding sunk costs so I can abandon bad beliefs faster. After having made observations about people’s real-world performance and invested a lot of time and effort into getting better, I expect some degree of outperformance relative to people who haven’t made similar investments. … [Clues to individual meta-rationality include] using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning.
Alas, Yudkowsky doesn’t offer empirical evidence that these possible clues of meta-rationality are in fact actually clues in practice, that some correctly apply these clues much more reliably than others, nor that the magnitude of these effects are large enough to justify the size of disagreements that Yudkowsky suggests as reasonable. Remember, to justifiably disagree on which experts are right in some dispute, you’ll have to be more meta-rational than are those disputing experts, not just than the general population. So to me, these all remain open questions on disagreement.

I do take things like the practice of looking for inadequate equilibria as an example of domain-specific knowledge in meta-rationality, and furthermore the practice of using Bayes' rule, betting, debiasing etc. as Bayesian evidence of the author's strong meta-rationality. However, it would be great to think of some clear empirical evidence of this working better, as opposed to merely Bayesian evidence of it working better, and I might spend some time thinking of data for the former.

Zvi:

I think this, more than anything else in his review, is what set off my instinct that I had to write "https://thezvi.wordpress.com/2017/11/27/you-have-the-right-to-think/" (which is sitting at -1 here, which I'm sad about, but it got great discussion on my blog itself; I think that combines into valuable feedback on many levels that I'm still processing, and I'm thankful for people's candor).

The first part is an absurd Isolated Demand for Rigor, in violation of any reasonable rules of good writing and of common sense. Experts never seem to need to prove any of that stuff, but suddenly Eliezer's book is supposed to stop and provide expert-approved outside-view proof for the idea that being better at thinking and avoiding mistakes might make one better at thinking and avoiding mistakes. Magnitude is a legitimate question, but come on. You're not allowed to use evidence of your meta-rationality that isn't approved by the licensing court, or something? And even if the evidence is blatant and outside-view-approved, you need to present all of it yourself explicitly?

The second half then says something that keeps being claimed and simply isn't true: that you have to be 'more meta-rational', or otherwise superior in some way to others who hold beliefs, in order to have an opinion (to think) on something; otherwise, your object-level evidence needs to be thrown out. (In his comments he said multiple times no, don't throw out your object-level evidence, that's obviously wrong, but what else would it mean to not be able to judge?) This is insane. You don't need that at all. You just need to be good enough that your observations and analysis aren't zeroed out and fully priced in by the experts, which is a much lower bar. You could easily have different data, apply more compute, or do any number of other things, and even if you don't, your likelihood ratio isn't going to be exactly 1.
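To make the likelihood-ratio point concrete, here is a toy odds-form Bayes update (the numbers are invented for illustration, not from the book or the review):

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# Start from the expert-consensus probability and see how far a modest
# likelihood ratio (evidence the experts haven't priced in) moves you.
def update(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

expert_prior = 0.20        # probability the expert consensus assigns
likelihood_ratio = 3.0     # my observations are 3x likelier if the claim is true
print(round(update(expert_prior, likelihood_ratio), 2))  # 0.43
```

The point is just that you don't need to be 'more meta-rational' than the experts for your likelihood ratio to differ from 1; any evidence they haven't fully priced in moves you off their number.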

Whole thing is super frustrating.

#TheOppositeOfDeepAdviceIsAlsoDeepAdvice.

Robin Hanson: "You are never entitled to your own beliefs", i.e. there are rules for reasoning about evidence, and if you state probabilities that are inconsistent with the evidence you've seen, you're lying.

Zvi: "You are entitled to your own beliefs" i.e. there are many many MANY social pressures pushing for you to cast aside the evidence you've noticed for dissenting ideas, in favour of socially modest beliefs. Resist these pressures, for they are taking away your very right to think! (#LetsAssumeRightsExist)

And thus a community deep in the first phrase reacted poorly to the second. I admit that, until I read the comment section of your post, I had not been able to form the charitable and correct reading of it at all.

(And yeah, the comments there are awesome.)

If/when I point to empirical evidence that practising the use of Bayes' theorem does in fact help your meta-rationality, my model of Pat Modesto says "Oh, so you claim that you have 'empirical' evidence and this means you know 'better' than others. Many people thought they too had 'special' evidence that allowed them to have 'different' beliefs." Pssh.

In general I agree with your post, and while Pat's is an argument I could imagine someone making to me, I don't let it overwrite my models. If I think that person X has good meta-rationality, and you suggest my evidence is bad according to one particular outside view, I will not throw away my models, but keep them while I examine the argument. If the argument is compelling I'll update, but the same heuristic that keeps me from making bucket errors will also stop me from immediately saying anything like "Yes, you're probably right that I don't really have evidence of X's meta-rationality being strong".

I would be interested in some good generic techniques for

  1. Working out how expert the experts actually are. There seem to be whole fields of experts who are not actually any good at what they ostensibly do. Easy example: Freudian psychoanalysts seemed to have no actual skill in helping people get better (beyond what an intelligent layman with half a day's training in counselling techniques could offer).
  2. Understanding the limits of the experts and where they systematically get it wrong. Example: the very strong bias in the medical system toward treating patients with drugs that are currently under patent.
  3. Working out how my own limitations relate to the above, when trying to work out what to do. As an example, it is notorious that doctors overstate the benefits and understate the risks of treatment (even after the treatment is complete and the downsides are, or should be, obvious). So I try to apply a discount factor and double-check the cost/benefit before agreeing to a treatment (a toy sketch of that adjustment is below).
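A toy sketch of the kind of adjustment item 3 describes (all figures and adjustment factors are invented for illustration):

```python
# Toy version of item 3: shade the doctor's stated numbers before comparing.
stated_benefit = 0.30   # claimed chance the treatment helps
stated_risk = 0.05      # claimed chance of a serious side effect
benefit_discount = 0.5  # how much I shade claimed benefits down
risk_inflation = 2.0    # how much I shade claimed risks up

adjusted_benefit = stated_benefit * benefit_discount
adjusted_risk = min(stated_risk * risk_inflation, 1.0)

# A real decision would also weight each outcome by its severity, not just its
# probability; this only checks whether the claim survives the haircut.
print("still looks worthwhile" if adjusted_benefit > adjusted_risk else "ask more questions")
```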

I felt the book left me dangling in this regard. There is a lot of insight, but it's not as actionable as I would have liked.