dreeves

Cofounder of Beeminder

Comments

Bayes' theorem, plausible deniability, and smiley faces

Oh, Quirrell is referring to what game theorists call Cheap Talk. If the thing I'm trying to convince you of is strictly in my own brain -- like whether I intend to cooperate or defect in an upcoming Prisoner's Dilemma -- then any promises I make are, well, cheap talk. This is related to costly signals and strategic commitment, etc etc.

Anyway, I think that's the missing piece there. "Nothing you can do to convince me [about your own intentions] [using only words]".

Bayes' theorem, plausible deniability, and smiley faces

This is indeed a fun way to illustrate Bayesian thinking! But I have a monkey wrench! There exist people who view smileys as almost explicitly connoting passive-aggression or sarcasm, as if the whole reason to add a smiley is to soften something mean. I'm not sure there are enough such people to worry about, but I think that perception of smileys is out there.

Applied Picoeconomics

Correction to the Ainslie link: http://picoeconomics.org/breakdown.htm

Applied Picoeconomics

Hi from the future [1]! Beeminder has a version of this built in: the one-week akrasia horizon. You can change anything about a Beeminder goal, including ending it, at any time, but the change doesn't take effect for a week. As Katja Grace once said on Overcoming Bias: "[you] can’t change it out of laziness unless you are particularly forward thinking about your laziness (in which case you probably won’t sign up for this)."
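
For concreteness, here's a toy mock-up of the akrasia horizon (my own sketch, not Beeminder's actual code): requested changes to a goal get queued and only take effect once they're a week old.

    # Toy sketch of the one-week "akrasia horizon" (hypothetical, not Beeminder's code):
    # requested changes are queued and only applied once they're a week old.
    from datetime import datetime, timedelta

    AKRASIA_HORIZON = timedelta(days=7)

    class Goal:
        def __init__(self, rate):
            self.rate = rate      # e.g. units of progress pledged per week
            self.pending = []     # (requested_at, new_rate) changes not yet in effect

        def request_change(self, new_rate, now):
            self.pending.append((now, new_rate))   # takes effect a week from `now`

        def apply_due_changes(self, now):
            for requested_at, new_rate in list(self.pending):
                if now - requested_at >= AKRASIA_HORIZON:
                    self.rate = new_rate
                    self.pending.remove((requested_at, new_rate))

    g = Goal(rate=5)
    t0 = datetime(2021, 4, 1)
    g.request_change(2, now=t0)                      # feeling lazy today...
    g.apply_due_changes(now=t0 + timedelta(days=3))
    print(g.rate)                                    # still 5: change isn't a week old yet
    g.apply_due_changes(now=t0 + timedelta(days=7))
    print(g.rate)                                    # 2: the horizon has passed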


[1] I'm mildly terrified that it's against the norms to reply to something this old. I've been thinking hard about your (Scott's) recent ACX post, "Towards A Bayesian Theory Of Willpower," and am digging up all your previous thoughts on the topic, so here I am.

Your Cheerful Price

Good thought experiment! I replied in the form of another Yudkowsky vignette. :)

Summary: "Infinity" is a perfectly coherent Cheerful Price for, say, something sufficiently repugnant to you or something very unethical. (But also you must have a finite Cheerful Price for anything, no matter how bad, if the badness happens with sufficiently small probability.)

Your Cheerful Price

That reminds me of this delightful and hilarious (edit: and true!) thing Eliezer said once:

Let me try to clear up the notion that economically rational agents must be cold, heartless creatures who put a money price on everything.

There doesn't have to be a financial price you'd accept to kill every sentient being on Earth except you. There doesn't even have to be a price you'd accept to kill your spouse. It's allowed to be the case that there are limits to the total utility you know how to generate by spending currency, and for anything more valuable to you than that, you won't exchange it for a trillion dollars.

Now, it *does* have to be the case for a von Neumann-Morgenstern rational agent that if a sum of money has any value to you at all, you will exchange anything else you have -- or any possible event you can bring about -- *at some probability* for that sum of money. So it *is* true that as a rational agent, there is some *probability* of killing your spouse, yourself, or the entire human species that you will cheerfully exchange for $50.

I hope that clears up exactly what sort of heartless creatures economically rational agents are.
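
To make the arithmetic behind that last paragraph concrete, here's a toy sketch (my own made-up utility numbers, not anything from Eliezer): as long as the catastrophe's utility is finite, some sufficiently small probability of it is outweighed by the $50.

    # Toy illustration with hypothetical numbers: a vNM agent with a finite
    # (if astronomically negative) utility for the catastrophe accepts the
    # gamble once the probability is small enough.
    u_money = 1.0            # utility of gaining $50 (made up)
    u_catastrophe = -1e15    # utility of the catastrophe (made up, but finite)

    def accepts(p):
        """Is the expected utility of $50 plus probability p of catastrophe positive?"""
        return p * u_catastrophe + u_money > 0

    print(accepts(1e-9))   # False: 1e-9 * -1e15 = -1e6, which swamps the $50
    print(accepts(1e-16))  # True: 1e-16 * -1e15 = -0.1, so the $50 wins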

Your Cheerful Price

Interesting! It hadn't occurred to me that this could be read as any kind of repudiation of "shut up and multiply". My previous comment on this post takes a stab at reconciling Cheerful Prices with my own extreme shut-up-and-multiply way of thinking.

Your Cheerful Price

Oh my goodness I love this. I'm actually so philosophically on board that I'm confused about treating Cheerful Prices as single real numbers. In my homo-economicus worldview, there exists a single price at which I'm exactly indifferent and then my cheerfulness goes up smoothly/continuously from there. It feels very arbitrary to pick something on that continuum and call it "the" cheerful price I have.

(My answer is to turn the nerdery up to 11 and compute a Shapley value, etc etc, but let me save that for another time or place. Jacob Falkovich and I have been talking about jointly blogging about this. We'll definitely want to tie it in to the concept of Cheerful Prices if we do!)

Translated into this delightful new language of Cheerful Prices, the rough version of my approach is like so:

I as the buyer name my highest possible Cheerful Price (where I just barely find it worth it) and you as the seller name your lowest possible Cheerful Price (below which it's just not worth it to you) and we settle on the mean of those two.

But maybe the point of Cheerful Prices is to simplify that. Let one person on one side of the trade make a guess about the consumer surplus and name something in that range. I.e., by naming my Cheerful Price I'm saying that at that price I'd be getting a big enough chunk of the consumer surplus that I don't need to know the size of your chunk. If you, as my counterparty, feel the same then we're golden.
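
A toy sketch of the split-the-difference version above, with hypothetical numbers (buyer_max and seller_min are just my labels for the two indifference points):

    # Split-the-difference sketch (hypothetical numbers).
    buyer_max = 120.0   # highest price at which the buyer still finds the deal worth it
    seller_min = 80.0   # lowest price at which the seller still finds the deal worth it

    if buyer_max >= seller_min:
        price = (buyer_max + seller_min) / 2          # settle on the mean: 100.0
        surplus_each = (buyer_max - seller_min) / 2   # each side gets 20.0 of surplus
        print(f"trade at {price}, each side gains {surplus_each}")
    else:
        print("no price makes both of us happy; no deal")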

The Power to Solve Climate Change

Really good points. It's funny, I have a draft of a similar point about personal behavior change that I tried to make as provocative-sounding as possible:

http://doc.dreev.es/carbonfoot (Trying To Limit Your Personal Carbon Footprint Hurts The Environment)

But note the PS where I suggest a counterargument: making personal sacrifices for climate change may shape your identity, drive you to greater activism, and make your activism and climate evangelism more persuasive (to those who don't appreciate the economics and game theory of it).

Being the (Pareto) Best in the World

Nice! I've heard a similar idea called a "talent stack" or "skill stack" but explaining it in terms of staking out a chunk of the Pareto frontier is much better.

Coincidentally, I just wrote a post explaining the idea of Pareto dominance -- http://blog.beeminder.com/pareto -- in case that's useful to anyone.
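
And for anyone who'd rather see the definition as code than prose, here's a minimal sketch of the Pareto-dominance check with a toy example of my own (not from the post):

    # Pareto dominance: a dominates b if a is at least as good on every dimension
    # and strictly better on at least one (higher = better here).
    def pareto_dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    # Toy skill profiles: (skill_1, skill_2). Anything not dominated by some other
    # profile sits on the Pareto frontier of this little set.
    profiles = [(9, 2), (3, 9), (7, 6), (5, 5)]
    frontier = [p for p in profiles if not any(pareto_dominates(q, p) for q in profiles)]
    print(frontier)  # [(9, 2), (3, 9), (7, 6)]; (5, 5) is dominated by (7, 6)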
