The Real Rules Have No Exceptions
The rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
The approach I describe above merely consists of making this fact explicit.

This would be true were it not for your meta-rule. But the criteria for deciding whether something is a legitimate exception may be hazy and intuitive, and not amenable to being stated in a simple form. That doesn't mean the criteria are bad, though.

For example, I wouldn't dream of formulating a rule about cookies that covered the case "you can eat them if they're the best in the state", but I also wouldn't say that someone who is trying to avoid eating cookies can never eat the best-in-state cookies. It's a judgement call. If you expect your judgement to be impaired enough that following rigid, explicitly stated rules will be better than making judgement calls, then OK, but it is far from obvious that this is true for most people.

Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

The OP didn't give any argument for SPECKS>TORTURE; they said it was "not the point of the post". I agree my argument is phrased loosely, and that it's reasonable to say that a speck isn't a form of torture. So replace "torture" with "pain or annoyance of some kind". It's not the case that people will prefer arbitrary non-torture pain (e.g. getting in a car crash every day for 50 years) to a small amount of torture (e.g. 10 seconds), so the argument still holds.

Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

Once you introduce any meaningful uncertainty into a non-Archimedean utility framework, it collapses into an Archimedean one. This is because even a very small difference in the probabilities of some highly positive or negative outcome outweighs a certainty of a lesser outcome that is not Archimedean-comparable. And if the probabilities are exactly aligned, it is more worth your time to do more research so that they will be less aligned, than to act on the basis of a hierarchically less important outcome.

For example, if we cared infinitely more about not dying in a car crash than about reaching our destination, we would never drive, because there is a small but positive probability of crashing (and the same goes for any degree of horribleness you want to add to the crash, up to and including torture -- it seems reasonable to suppose that leaving your house at all very slightly increases your probability of being tortured for 50 years).
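The collapse argument can be made concrete with a toy model (my own illustration, not from the original comment): represent a non-Archimedean utility as a pair (high_tier, low_tier) compared lexicographically, so that any difference in the high tier outweighs every difference in the low tier. The names and numbers below are all hypothetical.

```python
from fractions import Fraction

def expected(lottery):
    """Expected utility of [(prob, (hi, lo)), ...], computed tier by tier."""
    hi = sum(p * u[0] for p, u in lottery)
    lo = sum(p * u[1] for p, u in lottery)
    return (hi, lo)  # Python tuples compare lexicographically

CRASH = (-1, 0)       # high-tier catastrophe (hypothetical units)
ARRIVE = (0, 1)       # low-tier benefit
STAY_HOME = (0, 0)

p = Fraction(1, 10**9)  # tiny but positive crash probability
drive = [(p, CRASH), (1 - p, ARRIVE)]
stay = [(Fraction(1), STAY_HOME)]

# Driving has a negative high-tier expectation for every p > 0, so the
# lexicographic comparison says to stay home no matter how small p is.
print(expected(drive) < expected(stay))  # True
```

The point is that the high-tier component of the expectation dominates whenever the probabilities differ at all, so the low-tier goods (reaching your destination) can never matter in practice.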

For the record, EY's position (and mine) is that torture is obviously preferable. It's true that there will be a boundary of uncertainty regardless of which answer you give, but the two types of boundaries differ radically in how plausible they are:

  • if SPECKS is preferable to TORTURE, then for some N and some level of torture X, you must prefer 10N people to be tortured at level X over N people tortured at a slightly higher level X'. This is unreasonable, since X' is only slightly higher than X, while you are forcing 10 times as many people to suffer the torture.
  • On the other hand, if TORTURE is preferable to SPECKS, then there must exist some number of specks N such that N-1 specks is preferable to torture, but torture is preferable to N+1 specks. But this is not very counterintuitive, since the fact that torture costs around N specks means that N-1 specks is not much better than torture, and torture is not much better than N+1 specks. So knowing exactly where the boundary is isn't necessary to get approximately correct answers.
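The chain of trades behind the first bullet can be sketched numerically (the severity scale and step size here are my own hypothetical choices, not from the comment): repeatedly trade N people at severity X for 10N people at a slightly lower severity, and see where transitivity leads.

```python
# Start with one person suffering at the top of a hypothetical 100-point
# severity scale; each trade multiplies the sufferers by 10 and lowers the
# severity by one notch.
people, severity = 1, 100
steps = []
while severity > 1:
    people *= 10     # ten times as many sufferers...
    severity -= 1    # ...each suffering slightly less
    steps.append((people, severity))

# After 99 "reasonable-looking" trades we have 10**99 people at speck-level
# severity -- so accepting every individual trade ranks an astronomical
# number of specks above one instance of severe torture.
print(steps[-1])  # (10**99, 1)
```

Anyone who endorses SPECKS must therefore reject at least one step in this chain, which is exactly the implausible preference the first bullet describes.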
Explaining "The Crackpot Bet"

To repeat what was said on the CFAR mailing list: This "bet" isn't really a bet, since there is no upside for the other party; they are worse off than when they started in every possible scenario.

What are your plans for the evening of the apocalypse?

I don't think that chapter is trying to be realistic (it paints a pretty optimistic picture).

Counterfactuals, thick and thin

Sure, in that case there is a 0% counterfactual chance of heads, your words aren't going to flip the coin.

Counterfactuals, thick and thin

The question "how would the coin have landed if I had guessed tails?" seems to me like a reasonably well-defined physical question about how accurately you can flip a coin without having the result be affected by random noise such as someone saying "heads" or "tails" (as well as quantum fluctuations). It's not clear to me what the answer to this question is, though I would guess that the coin's counterfactual probability of landing heads is somewhere strictly between 0% and 50%.
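One way to make this guess concrete is a toy physical model (entirely my own construction, not from the comment): the coin's outcome is the parity of how many half-turns it completes, which depends on the flip's velocity, and saying "heads" or "tails" adds a tiny perturbation to that velocity. The counterfactual probability of the other outcome is then roughly the chance that the perturbation flips the parity.

```python
import random

def flip(base_velocity, perturbation):
    """0 = heads, 1 = tails: parity of completed half-turns (toy model)."""
    return int(base_velocity + perturbation) % 2

def counterfactual_flip_rate(noise_scale, trials=100_000, seed=0):
    """Fraction of flips whose outcome changes under a small perturbation."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        v = rng.uniform(0, 1000)  # a sloppy, high-variance human flip
        if flip(v, 0) != flip(v, rng.uniform(0, noise_scale)):
            changed += 1
    return changed / trials

# A weak perturbation rarely crosses a parity boundary, so the counterfactual
# probability of the opposite outcome is small but nonzero; a perturbation
# comparable to a half-turn pushes it toward 50%.
print(counterfactual_flip_rate(0.1))
print(counterfactual_flip_rate(1.0))
```

Under this (admittedly crude) model, the counterfactual probability of heads does indeed land strictly between 0% and 50%, and where exactly depends on how sensitive the flip is to small perturbations.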

The Feedback Problem
Reviewer is obliged to find all errors.

Not true. A reviewer's main job is to give a high-level assessment of the quality of a paper. If the assessment is negative, they usually do not look for all the specific errors in the paper. A detailed list of errors is more common when the reviewer recommends that the journal accept the paper (since then the author(s) can edit the paper and publish it in the journal), but even then many reviewers do not do this (which is why it is common to find peer-reviewed papers with errors in them).

At least, this is the case in math.

Decisions are not about changing the world, they are about learning what world you live in

You don't harbor any hopes that after reading your post, someone will decide to cooperate in the twin PD on the basis of it? Or at least, if they were already going to, that they would conceptually connect their decision to cooperate with the things you say in the post?

Decisions are not about changing the world, they are about learning what world you live in

I am not sure how else to interpret the part of shminux's post quoted by dxu. How do you interpret it?
