habryka

Running Lightcone Infrastructure, which runs LessWrong. You can reach me at habryka@lesswrong.com. I have signed no contracts or agreements whose existence I cannot mention.

Sequences

A Moderate Update to your Artificial Priors
A Moderate Update to your Organic Priors
Concepts in formal epistemology

Comments

Thanks for doing this! This list seems roughly accurate. 

Note: I crossposted this for Eliezer, after asking him for permission, because I thought it was a good essay. It was originally written for Twitter, so it is not centrally aimed at a LW audience, but I still think it's a good essay to have on the site.

Yeah, it's a mod-internal alternative to the AI algorithm for the recommendations tab (it uses Google Vertex instead).

I mean, I think it would be totally reasonable for someone doing decision theory or epistemology work to come up with new "dutch book arguments" supporting whatever axioms or assumptions they would come up with.

I think I am somewhat compelled by the point that there is a history here of calling money pump arguments that happen to relate to probabilism "dutch books", but I don't think there is really any clear definition that supports this. I agree that there exists the dutch book theorem, and that that one importantly relates to probabilism, but I've just had dozens of conversations with philosophers, academics, and decision-theorists, where in the context of both decision-theory and epistemology questions, people brought up dutch books and money pumps interchangeably.

I've pretty consistently (by many different people) seen "Dutch Book arguments" used interchangeably with money pumps. My understanding (which is also the SEP's) is that "what is a money pump vs. a dutch book argument" is not particularly well-defined and the structure of the money pump arguments is basically the same as the structure of the dutch book arguments. 

This is evident from just the basic definitions: 

"A Dutch book is a set of bets that ensures a guaranteed loss, i.e. the gambler will lose money no matter what happens." 

Which is of course exactly what a money pump is (where you are the person offering the gambles and therefore make guaranteed money).
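The guaranteed-loss structure in that definition can be made concrete with a toy sketch (the events, credences, and pricing convention here are hypothetical illustrations, not from the comment): an agent whose credences sum to more than 1 will pay more for a pair of $1-payoff bets than the bets can ever return.

```python
# Toy Dutch book sketch. Assumed setup: the agent will buy a bet paying
# $1 on an event at a price equal to its credence in that event. These
# incoherent credences sum to 1.2, violating the probability axioms.
credence = {"rain": 0.6, "no_rain": 0.6}

def settle(world):
    """Net result for the agent after buying both bets, given which
    (exactly one) event actually occurs."""
    cost = sum(credence.values())  # pays 1.2 up front for both bets
    payout = 1.0                   # exactly one bet pays out $1
    return round(payout - cost, 10)

for world in ("rain", "no_rain"):
    print(world, settle(world))  # a sure loss of $0.20 either way
```

Whichever event occurs, the agent is down $0.20: a set of bets that "ensures a guaranteed loss", exactly as in the quoted definition.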

The money pump Wikipedia article also links to the Dutch book article, and the book/paper I linked describes dutch books as a kind of money pump argument. I have never heard anyone make a principled distinction between a money pump argument and a dutch book argument (and I don't see how you could get one without the other).

Indeed, the Oxford Reference says explicitly: 

money pump

A pattern of intransitive or cyclic preferences causing a decision maker to be willing to pay repeated amounts of money to have these preferences satisfied without gaining any benefit. [...] Also called a Dutch book [...]

(Edit: It's plausible that for weird historical reasons the exact same argument, when applied to probabilism would be called a "dutch book" and when applied to anything else would be called a "money pump", but I at least haven't seen anyone defend that distinction, and it doesn't seem to follow from any of the definitions)

Well, thinking harder about this, I do think some of your critiques here are wrong. For example, it is the case that the VNM axioms frequently get justified by invoking dutch books (the most obvious case is the argument for transitivity, where the standard response is "well, if you have circular preferences I can charge you a dollar to have you end up where you started").
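The transitivity argument above can be sketched as a short simulation (item names, the $1 fee, and the trade sequence are all made up for illustration): an agent with cyclic preferences accepts a chain of paid "upgrades" and ends up holding its original item, minus the fees.

```python
# Toy money pump against cyclic preferences A > B > C > A.
# (x, y) in `prefers` means the agent strictly prefers x to y.
prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}

FEE = 1.0  # hypothetical dollar charged per accepted trade

def run_pump(start, offers):
    """Offer items in sequence; the agent trades (and pays FEE)
    whenever it strictly prefers the offered item to what it holds."""
    holding, wealth = start, 0.0
    for offered in offers:
        if prefers.get((offered, holding)):
            holding = offered
            wealth -= FEE
    return holding, wealth

# Walk the agent once around the cycle: A -> C -> B -> A.
print(run_pump("A", ["C", "B", "A"]))  # back to "A", down $3
```

The agent finishes exactly where it started but $3 poorer, which is the dutch-book-style structure the transitivity argument relies on.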

Of course, justifying axioms is messy, and there isn't any particularly objective way of choosing axioms here, but in as much as informal argumentation happens, it tends to use a dutch-book-like structure. I've had many conversations with people with formal academic experience in economics here, and this is definitely a normal way for dutch books to go.

For a concrete example of this, see this recent book/paper: https://www.iffs.se/media/23568/money-pump-arguments.pdf 

Huh, this is a good quote.

Or to let me know that some of the issues I mention were already on Wikipedia beforehand. I’d be happy to try to edit those.

None of these changes are new as far as I can tell (I checked the first three), so I think your basic critique falls through. You can check the edit history yourself by just clicking on the "View History" button and then pressing the "cur" button next to the revision entry you want to see the diff for. 

Like, indeed, the issues you point out are issues, but it is not the case that people reading this have made the articles worse. The articles were already bad, and "acting with considerable care" in a way that implies inaction would mean leaving inaccuracies uncorrected. 

I think people should edit these pages, and I expect them to get better if people give it a real try. I also think you could give it a try and likely make things better.

Edit: Actually, I think my deeper objection is that most of the critiques here (made by Sammy) are just wrong. For example, of course Dutch books/money pumps frequently get invoked to justify VNM axioms. See for example this.

I have spent like 40% of the last 1.5 years trying to reform EA. I think I had a small positive effect, but it's also been extremely tiring and painful and I consider my duty with regard to this done. Buy-in for reform in leadership is very low, and people seem primarily interested in short-term power seeking and ass-covering.

The memo I mentioned in another comment has a bunch of analysis. I'll send it to you tomorrow when I am at my laptop.

For some more fundamental analysis I also have this post, though it's only a small part of the picture: https://www.lesswrong.com/posts/HCAyiuZe9wz8tG6EF/my-tentative-best-guess-on-how-eas-and-rationalists

The leadership of these is mostly shared. There are many good parts of EA, and reform would be better than shutting down, but reform seems unlikely at this point.

My world model mostly predicts that effects on technological development and the long-term future dominate, so in as much as the non-AI-related parts of EA are good or bad, I think what matters is their effect on that. Mostly the effect seems small, and quibbling over the sign doesn't super seem worth it.

I do think there is often an annoying motte-and-bailey going on where people try to critique EA for its negative effects on the important things, and those critiques get redirected to "but you can't possibly be against bednets", and in as much as the bednet people are willingly participating in that (as seems likely the case for e.g. Open Phil's reputation), that seems bad.
