I work at the Alignment Research Center (ARC). I write a blog on stuff I'm interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/
What are some examples of people making a prediction of the form "Although X happening seems like obviously a bad thing, in fact the good second-order effects would outweigh the bad first-order effects, so X is good actually", and then turning out to be correct?
(Loosely inspired by this quick take, although I definitely don't mean to imply that the author is making such a prediction in this case.)
Pacts against coordinating meanness.
I just re-read Scott Alexander's Be Nice, At Least Until You Can Coordinate Meanness, in which he argues that a necessary (but not sufficient) condition for restricting people's freedom should be first reaching societal consensus that restricting freedom in that way is desirable (e.g. by passing a law via the appropriate mechanisms).
In a sufficiently polarized society, there could be two similarly-sized camps that each want to restrict each other's freedom. Imagine a country that's equally divided between Christians and Muslims, each of which wants to ban the other religion. Or you could imagine a country that's equally divided between vegetarians and meat-eaters, where the meat-eaters want to ban cell-cultivated meat while the vegetarians want to ban real meat (thus restricting the other group's freedom).
In such a situation, if each group values its own freedom more than the ability to impose its values on the other side (as is almost always the case), it would make sense for the two groups to commit not to violate each other's freedom even if one of them gains sufficient power to do so.
I imagine that people in this community have thought about this. Are there any good essays on this topic?
Yup, that's right!
Sure! Let's say that we make a trade: I buy a share of "Jesus will return in 2025" from you for 3 cents. Here's what that means in practice: I pay 3 cents and you pay 97 cents, Polymarket holds the combined dollar until the market resolves, and whoever ends up on the right side gets the full dollar.
Now, let's say that we've made this trade. Fast forward to November, and you're interested in betting on the New York mayoral election. Maybe you'd like to buy shares of "Zohran Mamdani will win the mayoral election" because it's trading for 70 cents, but you think he's 85% likely to win, or something. You really wish you had those 97 cents that you gave to Polymarket to hold until the end of the year, because you can make a much more profitable (in expectation) bet now!
So you return to the Jesus market to sell your "no" share. You paid 97 cents for it, but really, you're willing to sell it for 95 cents now. You'll eat that 2-cent loss, because at least then you'll get to place that really good bet on the New York market, where you think you're profiting a lot more in expectation. Meanwhile, I'm happy to be on the other end of that trade: I bought "Jesus will return" for 3 cents, and now I get to sell out of my position for 5 cents (buying your "no" share for 95 cents is the same as selling my "yes" share for 5 cents), earning me a guaranteed 2 cents.
(There are some details I'm eliding: basically, a "yes" share and a "no" share "cancel out" to 100 cents, so if you hold both 1 yes share and 1 no share, Polymarket internally just credits you 100 cents; it's as if you get a dollar back and don't hold any shares at all. I didn't want to get into that because it's a slightly confusing detail.)
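The arithmetic in this example can be checked in a few lines. This is just a toy sketch of the trades described above (the prices are the ones from this example, not anything Polymarket-specific):

```python
YES_PRICE = 3                 # cents I pay for the "yes" share
NO_PRICE = 100 - YES_PRICE    # 97 cents you pay for the matching "no" share
NO_RESALE = 95                # cents you later accept for your "no" share

# Your position: bought "no" at 97 cents, sold it at 95 cents.
your_pnl = NO_RESALE - NO_PRICE         # the 2-cent loss you eat

# My position: bought "yes" at 3 cents, then bought your "no" at 95 cents.
# Holding one "yes" and one "no" cancels out to a guaranteed 100 cents,
# which is the same as selling my "yes" share for 5 cents.
my_pnl = 100 - YES_PRICE - NO_RESALE    # my guaranteed 2-cent profit

print(your_pnl, my_pnl)
```

The key invariant is that one "yes" share plus one "no" share always redeems for exactly 100 cents, so our two profit-and-loss numbers are mirror images of each other.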
Does that make sense?
Ah oops, I now see that one of Drake's follow-up comments was basically about this!
One suggestion that I made to Drake, which I'll state here in case anyone else is interested:
Define a utility function: for example, utility = -(dollars paid out) - c*(variance of your estimator). Then, see if you can figure out how to sample people to maximize your utility.
I think this sort of analysis may end up being more clear-eyed in terms of what you actually want and how good different sampling methods are at achieving that.
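As a toy version of that analysis (everything here is a hypothetical setup, not Drake's actual problem: a flat price per sample, and a sample-mean estimator whose per-sample variance is sigma2, so the estimator's variance is sigma2/n), the optimal number of samples falls out of one derivative:

```python
import math

def utility(n, price, sigma2, c):
    """utility = -(dollars paid out) - c * (variance of your estimator),
    assuming you pay `price` per sample and your estimator is a sample
    mean with variance sigma2 / n."""
    return -(n * price) - c * (sigma2 / n)

def optimal_n(price, sigma2, c):
    """Setting d(utility)/dn = -price + c * sigma2 / n**2 = 0
    gives n* = sqrt(c * sigma2 / price)."""
    return math.sqrt(c * sigma2 / price)

n_star = optimal_n(price=1.0, sigma2=1.0, c=100.0)
```

Plugging in neighboring values of n confirms n* is the maximizer; the same framing extends to comparing different sampling methods, not just sample sizes, which is the point of the suggestion.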
This is a really cool mechanism! I'm surprised I haven't seen it before -- maybe it's original :)
After thinking about it more, I have a complaint about it, though. The complaint is that it doesn't feel natural to value the act of reaching out to someone at $X. It's natural to value an actual sample at $X, and you don't get a sample every time you reach out to someone, only when they respond.
Like, imagine two worlds. In world A, everyone's fair price is below X, so they're guaranteed to respond. You decide you want 1000 samples, so you pay $1000X. In world B, everyone has a 10% chance of responding under your mechanism. To get a survey with the same level of precision (i.e. variance), you still need to get 1000 responses, not just reach out to 1000 people; in expectation, that means contacting 10,000 people.
My suspicion is that if you're paying per (effective) sample, you probably can't mechanism-design your way out of paying more for people who value their time more. I haven't tried to prove that, though.
I strongly agree. I can't vouch for all of the orgs Ryan listed, but Encode, ARI, and AIPN all seem good to me (in expectation), and Encode seems particularly good and competent.
I have something like mixed feelings about the LW homepage being themed around "If Anyone Builds It, Everyone Dies":
Oh thanks, that's a good point, and maybe explains why I don't really find the examples given so far to be compelling. I'd like examples of the first type, i.e. where the bad effect causes the good effect.