Eric Neyman

I'm a 3rd year PhD student at Columbia. My academic interests lie in mechanism design and algorithms related to the acquisition of knowledge. I write a blog on stuff I'm interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/

Sequences

Pseudorandomness Contest

Wiki Contributions

Comments

There were 14 -- but they did so well that it's unlikely to have been by chance: the p-value is 0.0002 (i.e., the probability that the IQ >150 participants would have scored such a high average percentile, if their true skill levels were distributed like the overall population's, is only 0.02%).
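(The comment doesn't spell out the exact test, but a back-of-the-envelope version might look like the sketch below. The participant numbers here are made up, chosen only so the result lands near the quoted p-value.)

```python
import math

# Illustrative numbers only: 14 high-IQ participants whose average
# percentile (on a 0-to-1 scale) in the contest came out around 0.77.
n = 14
observed_mean_percentile = 0.77

# Null hypothesis: each participant's percentile is Uniform(0, 1),
# so the group's mean percentile has mean 1/2 and variance (1/12)/n.
null_sd = math.sqrt(1 / 12 / n)
z = (observed_mean_percentile - 0.5) / null_sd

# One-sided p-value via the normal approximation to the sample mean.
p = 0.5 * math.erfc(z / math.sqrt(2))
print(f"z = {z:.2f}, p = {p:.4f}")  # p comes out on the order of 2e-4
```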

Hi! I just wanted to mention that I really appreciate this sequence. I've been having lots of related thoughts, and it's great to see a solid theoretical grounding for them. I find the notion that bargaining can happen across lots of different domains -- different people or subagents, different states of the world, maybe different epistemic states -- particularly useful. And this particular post presents the only argument for rejecting a VNM axiom I've ever found compelling. I think there's a decent chance that this sequence will become really foundational to my thinking.

Note that this is just the arithmetic mean of the probability distributions. Which is indeed what you want if you believe that P is right with probability 50% and Q is right with probability 50%, and I agree that this is what Scott does.

At the same time, I wonder -- is there some sort of frame on the problem that makes logarithmic pooling sensible? Perhaps (inspired by the earlier post on Nash bargaining) something like a "bargain" between the two hypotheses, where a hypothesis' "utility" for an outcome is the probability that the hypothesis assigns to it.

The aggregation method you suggest is called logarithmic pooling. Another way to phrase it is: take the geometric mean of the odds given by the probability distributions (equivalently, the arithmetic mean of the log-odds). There's a natural way to associate every proper scoring rule (for eliciting probability distributions) with an aggregation method, and logarithmic pooling is the aggregation method associated with the log scoring rule (which Scott wrote about in an earlier post). (Here's a paper I wrote about this connection: https://arxiv.org/pdf/2102.07081.pdf)
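For the binary-event case, the two pooling rules are easy to state in code. A minimal sketch (the function name is mine, not from the paper):

```python
import math

def log_pool(ps, weights=None):
    """Logarithmic pooling for a binary event: average the log-odds,
    i.e. take the (weighted) geometric mean of the odds."""
    if weights is None:
        weights = [1 / len(ps)] * len(ps)
    pooled_logit = sum(w * math.log(p / (1 - p)) for w, p in zip(weights, ps))
    return 1 / (1 + math.exp(-pooled_logit))

# Two forecasters at 50% and 90%: odds of 1 and 9, geometric mean 3.
print(log_pool([0.5, 0.9]))  # ~0.75
print((0.5 + 0.9) / 2)       # 0.7, the arithmetic pooling, for comparison
```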

I'm also excited to see where this sequence goes!

Thanks for the post! Quick question about your last equation: if each h is a distribution over a coarser partition of W (rather than W), then how are we drawing w from h for the inner geometric expectation?

How much should you shift things by? The geometric argmax will depend on the additive constant.
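To make the worry concrete with toy (made-up) payoffs: the geometric-mean ranking of two gambles is not invariant to adding a constant.

```python
from math import prod

def geo_mean(xs):
    """Geometric mean of a list of positive payoffs."""
    return prod(xs) ** (1 / len(xs))

# Two hypothetical gambles, each with two equally likely outcomes.
A, B = [1, 9], [4, 4]
print(geo_mean(A), geo_mean(B))  # 3.0 vs 4.0: B is preferred

# Shift every payoff up by a constant and the ranking flips.
c = 91
print(geo_mean([x + c for x in A]),
      geo_mean([x + c for x in B]))  # ~95.92 vs 95.0: now A is preferred
```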

Thanks for the post -- I've been having thoughts in this general direction and found this post helpful. I'm somewhat drawn to geometric rationality because it gives more intuitive answers in thought experiments involving low probabilities of extreme outcomes, such as Pascal's mugging. I also agree with your claim that "humans are evolved to be naturally inclined towards geometric rationality over arithmetic rationality."

On the other hand, it seems like geometric rationality only makes sense in the context of natural features that cannot take on negative values. Most of the things I might want to maximize (e.g. utility) can be negative. Do you have thoughts on the extent to which we can salvage geometric rationality from this problem?

I wonder if the effect is stronger for people who don't have younger siblings. Maybe for people with younger siblings, part of the effect kicks in when a younger sibling arrives (but they're generally too young to notice it at the time), so the effect of becoming a parent is smaller.

"My probability is 30%, and I'm 50% sure that the butterfly probability is between 20% and 40%" carries useful information, for example. It tells people how confident I am in my probability.

I often talk about the "true probability" of something (e.g. AGI by 2040). When asked what I mean, I generally say something like "the probability I would have if I had perfect knowledge and unlimited computation" -- but that isn't quite right, because if I had truly perfect knowledge and unlimited computation I would be able to resolve the probability to either 0 or 1. Perfect knowledge and computation within reason, I guess? But that's kind of hand-wavey. What I've actually been meaning is the butterfly probability, and I'm glad this concept/post now exists for me to reference!

More generally I'd say it's useful to make intuitive concepts more precise, even if it's hard to actually use the definition, in the same way that I'm glad logical induction has been formalized despite being intractable. Also I'd say that this is an interesting concept, regardless of whether it's useful :)
