Beemium (the subscription tier that allows pledgeless goals) currently costs $40/mo; it was raised to that from $32/mo in January 2021, having been raised from the original $25/mo to $32/mo in 2014.
The essay What Motivated Rescuers During the Holocaust is on LessWrong under the title Research: Rescuers during the Holocaust - it was renamed because all of the essay titles in Curiosity are questions, which I only just now noticed and which is cute. I found it via the URL lesswrong.com/2018/rescue, which is listed in the back of the book.
The bystander effect is an explanation of the whole story:
Why would you wait until t=1? It seems like at any time t the expected payoff will be (1−t², 0, …), which is strictly decreasing in t.
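Assuming the intended payoff is \((1 - t^2, 0, \ldots)\) (my reading of the formula, whose exponent may have been garbled in extraction), the monotonicity claim is a one-line check:

```latex
\frac{d}{dt}\bigl(1 - t^2\bigr) = -2t < 0 \quad \text{for } t > 0,
```

so the first coordinate of the payoff shrinks at every positive time, and waiting is never rewarded.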
One big advantage of getting a hemispherectomy for life extension is that, if you don't tell the Metaculus community before you do it, you can predict much higher than the community median of 16%. I would have 71 Metaculus points to gain from this, for example, much greater than the 21 I would expect to get if the community median were otherwise accurate.
This looks like the hyperreal numbers, with your 10 equal to their ω.
The real number 0.20 isn't a probability; it's just the same odds written in a different way to make them possible to multiply (specifically, you want an odds product * such that A:B * C:D = AC:BD). You are right about how you would convert the odds into a probability at the end.
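That odds product is easy to check in a few lines. Here's a quick sketch (my own illustration, not code from the original exchange) using exact fractions, with 1:5 as a stand-in for the odds that the comment's 0.20 presumably abbreviates:

```python
from fractions import Fraction

def odds_product(a, b, c, d):
    """Componentwise odds product: A:B * C:D = AC:BD."""
    return (a * c, b * d)

def odds_to_probability(a, b):
    """Convert final odds A:B into the probability A / (A + B)."""
    return Fraction(a, a + b)

# Odds of 1:5 can be written as the real number 1/5 = 0.20 for easy
# multiplication, but that number is not itself a probability
# (the probability for 1:5 odds would be 1/6).
num, den = odds_product(1, 5, 2, 1)   # 1:5 * 2:1 = 2:5
print(num, den)                        # 2 5
print(odds_to_probability(num, den))   # 2/7
```

Multiplying the ratio forms directly (0.20 × 2.0 = 0.4 = 2:5) gives the same answer, which is exactly why the ratio notation is convenient even though it isn't a probability.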
Just before she is able to open the envelope, a freak magical-electrical accident sends a shower of sparks down, setting it alight. Or some other thing necessitated by Time to ensure that the loop is consistent. Similar kinds of problems to what would happen if Harry were more committed to not copying "DO NOT MESS WITH TIME".
I have used this post quite a few times as a citation when I want to motivate the use of expected utility theory as an ideal for making decisions, because it explains how it's not just an elegant decision-making procedure from nowhere, but a mathematical inevitability given the requirement to not leave money on the table or accept guaranteed losses. I find the concept of coherence theorems a better foundation than the normal way this is explained, by pointing at the von Neumann-Morgenstern axioms and saying "they look true".
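To make the "guaranteed losses" point concrete, here is a minimal money-pump sketch (my own toy example, not code from the post): an agent with cyclic preferences will pay a small fee for each trade it strictly prefers, and a counterparty can cycle it back to its starting holding at a pure loss.

```python
def prefers(x, y):
    """Cyclic (incoherent) preferences: A beats B, B beats C, C beats A."""
    cycle = {("A", "B"), ("B", "C"), ("C", "A")}
    return (x, y) in cycle

def run_money_pump(start, trades, fee=1):
    """Repeatedly offer the agent whatever it prefers to its current
    holding; it pays `fee` per trade and ends up going in circles."""
    offer = {"A": "C", "B": "A", "C": "B"}  # the preferred swap for each holding
    holding, wealth = start, 0
    for _ in range(trades):
        new = offer[holding]
        assert prefers(new, holding)  # each trade looks like a strict upgrade
        holding = new
        wealth -= fee
    return holding, wealth

# Three trades later the agent holds A again, but is 3 units poorer.
print(run_money_pump("A", 3))   # ('A', -3)
```

A coherent (transitive) preference ordering admits no such cycle, which is the money-on-the-table argument in miniature.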
The number of observers in a universe is solely a function of the physics of that universe, so the claim that a theory that implies 2Y observers is a third as likely as a theory that implies Y observers (even before the anthropic update) is just a claim that the two theories don't have an equal posterior probability of being true.
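The arithmetic behind that comparison can be laid out explicitly. The sketch below is my own illustration, assuming an SIA-style observer-weighted update as the anthropic rule; it just multiplies prior odds by the observer-count ratio:

```python
from fractions import Fraction

def posterior_odds(prior_odds, observers_a, observers_b):
    """Odds of theory A vs theory B after weighting each theory by the
    number of observers it implies (an SIA-style anthropic update)."""
    return prior_odds * Fraction(observers_a, observers_b)

# Equal priors: the theory implying 2Y observers gets a 2:1 boost.
print(posterior_odds(Fraction(1, 1), 2, 1))   # 2

# A "one third as likely" prior for the 2Y-theory does not cancel
# that boost: 1/3 * 2 = 2/3, not even odds.
print(posterior_odds(Fraction(1, 3), 2, 1))   # 2/3
```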
This post uses the example of GPT-2 to highlight something that's very important generally: that if you're not concentrating, you can't distinguish GPT-2-generated text known to be gibberish from non-gibberish.
And hence it teaches the important lesson - one that might be hard to learn on your own if you're not concentrating - that you can't really get away with not concentrating.