Upvoted; I think this was worth making, and more people should do more things like this.
Notes:
I've now had multiple people tell me that I shouldn't have released anything game-shaped during what is apparently Silksong week. Accordingly, I'm changing the deadline to Sep 22nd; apologies for any inconvenience, and you're welcome for any convenience.
(I assume N/A means "not in the city"?)
Correct.
Also, strong-upvoted for asking a good question. For a community that spends so much time thinking and talking about bets and prediction markets, we really haven't engaged with practicalities like "but what about people who [have | worry about] life-ruining gambling addictions?"
As I see it, most of the epistemic benefit of betting comes from:
A) Having any kind of check on nonsense whatsoever.
B) The fact that it forces people with different opinions to talk and agree about something (even if it's just the form of their disagreement).
C) The way involving money requires people to operationalize exactly what they mean (incidentally revealing when they don't mean anything at all).
None of this requires betting large amounts of money; afaict, most bets in the rat community have been for small & symbolic amounts relative to the bettors' annual incomes. So an easy way to 80/20 this would be to set yourself a modest monthly gambling budget (one which doesn't roll over from one month to the next, and doesn't get winnings added back in), use it only for political/technological/economic/literary/etc. questions (no slot machines, horse races, etc.), and immediately stop gambling if you ever exceed it.
Then a valid response to your friend becomes "sorry, that's over my gambling budget, but I would bet 50 reais at 2:1 odds in your favor, and you get to brag about it if it turns out I'm wrong". (...and if you wouldn't have made that bet either, you'd have learned something important without even having to risk the money.)
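If it helps to see the bookkeeping spelled out, here's a minimal sketch of the budget rule in Python. The class name, method names, and the 100-per-month limit in the example are my own illustrative choices, not a recommendation:

```python
from datetime import date
from typing import Optional


class BettingBudget:
    """Toy ledger for a small monthly betting budget.

    Assumed rules (mirroring the comment above; numbers illustrative):
    the budget resets each calendar month, unspent money doesn't roll
    over, and winnings never get credited back in.
    """

    def __init__(self, monthly_limit: float = 50.0):
        self.monthly_limit = monthly_limit
        self._spent = 0.0
        self._month = None

    def _roll_month(self, today: date) -> None:
        month = (today.year, today.month)
        if month != self._month:  # new month: fresh budget, no rollover
            self._month = month
            self._spent = 0.0

    def can_afford(self, stake: float, today: Optional[date] = None) -> bool:
        self._roll_month(today or date.today())
        return self._spent + stake <= self.monthly_limit

    def record_bet(self, stake: float, today: Optional[date] = None) -> None:
        if not self.can_afford(stake, today):
            raise ValueError("Sorry, that's over my gambling budget.")
        self._spent += stake  # winnings deliberately never added back


# e.g. the 50-reais bet above, against an assumed 100/month budget:
budget = BettingBudget(monthly_limit=100.0)
if budget.can_afford(50):
    budget.record_bet(50)
```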
You get scored on the number of mages you correctly accuse, and a valid accusation requires you to specify at least one kind of illegal healing they've done. (So if you've already got Exampellius the Explanatory for healing Chucklepox, you don't get anything extra for identifying that he's also healed several cases of Rumblepox.)
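For anyone who'd rather read the scoring rule as code, here's a rough sketch. The data layout and function name are my own illustration rather than the scenario's actual grading script, and I'm assuming an extra incorrect claim doesn't invalidate an otherwise-correct accusation:

```python
def score_accusations(accusations: dict[str, set[str]],
                      true_healings: dict[str, set[str]]) -> int:
    """Count correctly-accused mages.

    `accusations` maps each accused mage to the kinds of illegal healing
    you claim they did; `true_healings` maps each guilty mage to everything
    they actually healed. A mage scores one point if at least one claimed
    healing checks out; naming further healings for the same mage adds nothing.
    """
    score = 0
    for mage, claimed in accusations.items():
        if claimed & true_healings.get(mage, set()):
            score += 1
    return score


# Accusing Exampellius of both Chucklepox and Rumblepox still scores 1.
print(score_accusations(
    {"Exampellius the Explanatory": {"Chucklepox", "Rumblepox"}},
    {"Exampellius the Explanatory": {"Chucklepox", "Rumblepox"}},
))
```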
Yes; edited to clarify; ty.
I hereby voice strong approval of the meta-level approaches on display (being willing to do unpopular and awkward things to curate our walled garden, noticing that this particular decision is worth justifying in detail, spending several thousand words explaining everything out in the open, taking individual responsibility for making the call, and actively encouraging (!) anyone who leaves LW in protest or frustration to do so loudly), coupled with weak disapproval of the object-level action (all the complicating and extenuating factors still don't make me comfortable with "we banned this person from the rationality forum for being annoyingly critical").
I like the gradual increase in typos and distortions as the viewer's eyes go down the picture.
I mean, it helps? I wouldn't say it's required.
It's less to do with Bayes as in actually-doing-the-calculation and more to do with Bayes as in recognizing-there's-an-ideal-to-approximate. Letting evidence shift your position at all is the main thing. (If you do an explicit Bayesian calculation about something real, you'll have done about as many explicit Bayesian calculations as the median LW user has this year.)
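For anyone wondering what an "explicit Bayesian calculation" even looks like in practice, here's one in full; the claim and every number are made up for illustration:

```python
# One complete explicit Bayesian update; the claim and all numbers are made up.
prior = 0.30               # P(H): "this forecast of mine is right", before new evidence
p_e_given_h = 0.80         # P(evidence | H)
p_e_given_not_h = 0.20     # P(evidence | not H)

posterior = (prior * p_e_given_h) / (
    prior * p_e_given_h + (1 - prior) * p_e_given_not_h
)
print(round(posterior, 3))  # 0.632: the evidence shifts the estimate, which is the point
```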
I mean, we were. Then we found out that most published scientific findings, including the psych literature (which includes the bias literature), don't replicate. And AI, the topic most of us were trying to de-bias ourselves to think about, went kinda crazy over the last five years. So now we talk about AI more than biases. (If you can find something worthwhile to say about biases, please do!)
If you pick two dozen or so posts at random, I'd expect you'll get more Philosophical ones than STEMmy ones. (AI posts don't count for either column imo; also, they usually don't hard-require technical background other than "LLMs are a thing now" and "inhuman intellects being smarter than humans is kinda scary".)
Extremely. Welcome aboard!