(I assume N/A means "not in the city"?)
Correct.
Also, strong-upvoted for asking a good question. For a community that spends so much time thinking and talking about bets and prediction markets, we really haven't engaged with practicalities like "but what about people who [have | worry about] life-ruining gambling addictions?"
As I see it, most of the epistemic benefit of betting comes from:
A) Having any kind of check on nonsense whatsoever.
B) The fact that it forces people with different opinions to talk and agree about something (even if it's just the form of their disagreement).
C) The way involving money requires people to operationalize exactly what they mean (incidentally revealing when they don't mean anything at all).
None of this requires betting large amounts of money; afaict, most bets in the rat community have been small & symbolic amounts relative to the bettors' annual income. So an easy way to 80/20 this would be to set yourself a modest monthly gambling budget (which doesn't roll over from one month to the next, and doesn't get winnings added back in), only use it for political/technological/economic/literary/etc questions (no slot machines & horse races, etc), and immediately stop gambling if you ever exceed it.
Then a valid response to your friend becomes "sorry, that's over my gambling budget, but I would bet 50 reais at 2:1 odds in your favor, and you get to brag about it if it turns out I'm wrong". (...and if you wouldn't have made that bet either, you'd have learned something important without even having to risk the money.)
You get scored based on the number of mages you correctly accuse; and a valid accusation requires you to specify at least one kind of illegal healing they've done. (So if you've already got Exampellius the Explanatory for healing Chucklepox, you don't get anything extra for identifying that he's also healed several cases of Rumblepox.)
Yes; edited to clarify; ty.
I hereby voice strong approval of the meta-level approaches on display (being willing to do unpopular and awkward things to curate our walled garden, noticing that this particular decision is worth justifying in detail, spending several thousand words explaining everything out in the open, taking individual responsibility for making the call, and actively encouraging (!) anyone who leaves LW in protest or frustration to do so loudly), coupled with weak disapproval of the object-level action (all the complicating and extenuating factors still don't make me comfortable with "we banned this person from the rationality forum for being annoyingly critical").
I like the gradual increase in typos and distortions as the viewer's eyes go down the picture.
With this concept in mind, the entire rationalist project seems like the grayspace to a whitespace that hasn't been created yet.
This is a very neatly executed and polished resource. I'm a little leery of the premise - the real world doesn't announce "this is a Sunk Cost Fallacy problem" before putting you in a Sunk Cost Fallacy situation, and the "learn to identify biases" approach has been done before by a bunch of other people (CFAR and https://yourbias.is/ are the ones which immediately jump to mind) - but given you're doing what you're doing, I think you've done it about as well as it could plausibly be done (especially w.r.t. actually getting people to relive the canonical experiments). Strong-upvoted.
I've now had multiple people tell me that I shouldn't have released anything game-shaped during what is apparently Silksong week. Accordingly, I'm changing the deadline to Sep 22nd; apologies for any inconvenience, and you're welcome for any convenience.