Dacyn

Comments

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

-"Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they’re confused about, than for simply ignoring the problems."

I feel like "talking confusedly" here means "talking in a way that no one else can understand". If no one else can understand, they cannot give feedback on your ideas. That said, it is not clear that penalizing confused talk is a solution to this problem.

Meta-discussion from "Circling as Cousin to Rationality"

This is a great answer. I will have to incorporate concepts like "interlocutor" and "author" into my worldview.

If I may ask a somewhat metaphorical question: what determines who the interlocutor and author are in a context which is not so clear-cut as an online interaction? For example, if I ask a question during a talk, does that make the presenter the author and me the interlocutor? Is DSL the author and JB the interlocutor, or maybe the other way around? I might even go so far as to claim that in a context like this one, I am the author and my conversation partner is the interlocutor!

Meta-discussion from "Circling as Cousin to Rationality"

So, I like this comment (and strong-upvoted it) because you are placing your concept of "obligation" out in the open for scrutiny. I have a question, though. Here someone responded to a request for information after I said I would be "surprised" by the information they now claim to have provided. Would you say that I have an obligation to react to their response, i.e. either admit that I lost an argument, or make the effort to see whether I agree with their interpretation of the information? Right now I am not motivated to do the latter.

If this doesn't fall under your definition of "obligation", what would you say are the key differences between this scenario and the scenarios where you think people do have an obligation?

The Real Rules Have No Exceptions
"the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule. The approach I describe above merely consists of making this fact explicit."

This would be true were it not for your meta-rule. But the criteria for deciding whether something is a legitimate exception may be hazy and intuitive, and not amenable to being stated in any simple form. That doesn't mean the criteria are bad, though.

For example, I wouldn't dream of formulating a rule about cookies that covered the case "you can eat them if they're the best in the state", but I also wouldn't say that someone who is trying to avoid eating cookies therefore can't eat the best-in-state cookies. It's a judgement call. If you expect your judgement to be impaired enough that following rigid explicitly stated rules will be better than making judgement calls, then OK, but it is far from obvious that this is true for most people.

Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

The OP didn't give any argument for SPECKS > TORTURE; they said it was "not the point of the post". I agree my argument is phrased loosely, and that it's reasonable to say that a speck isn't a form of torture. So replace "torture" with "pain or annoyance of some kind". It's not the case that people will prefer arbitrary non-torture pain (e.g. getting in a car crash every day for 50 years) to a small amount of torture (e.g. 10 seconds), so the argument still holds.

Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

Once you introduce any meaningful uncertainty into a non-Archimedean utility framework, it collapses into an Archimedean one. This is because even a very small difference in the probabilities of some lexically more important outcome outweighs any certain difference in outcomes that are hierarchically less important (i.e. not Archimedean-comparable to it). And if the probabilities are exactly equal, it is a better use of your time to do more research so that they become unequal than to act on the basis of a hierarchically less important outcome.

For example, if we cared infinitely more about not dying in a car crash than about reaching our destination, we would never drive, because there is a small but positive probability of crashing (and the same goes for any degree of horribleness you want to add to the crash, up to and including torture -- it seems reasonable to suppose that leaving your house at all very slightly increases your probability of being tortured for 50 years).
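To make the collapse explicit, here is a minimal formalization (my own notation, not anything from the OP). Suppose utilities are lexicographically ordered pairs $(u_1, u_2)$, with $u_1$ the infinitely-more-important coordinate, and compare two actions:

$$A: \text{certainty of } (0, 1), \qquad B: \text{probability } \epsilon \text{ of } (1, 0), \text{ otherwise } (0, 0).$$

Taking expectations coordinatewise gives $\mathbb{E}[u(A)] = (0, 1)$ and $\mathbb{E}[u(B)] = (\epsilon, 0)$, so lexicographic comparison yields $B \succ A$ for every $\epsilon > 0$: an arbitrarily small probability edge on the higher tier beats certainty on the lower tier. The lower tier can therefore only matter when the higher-tier probabilities are exactly equal, which is the knife-edge case discussed above.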

For the record, EY's position (and mine) is that torture is obviously preferable. It's true that there will be a boundary of uncertainty regardless of which answer you give, but the two types of boundaries differ radically in how plausible they are:

  • If SPECKS is preferable to TORTURE, then for some N and some level of torture X, you must prefer that 10N people be tortured at level X rather than N people at a slightly higher level X'. This is unreasonable, since X is only slightly lower than X', while you are forcing ten times as many people to suffer (a concrete chain is sketched below the list).
  • On the other hand, if TORTURE is preferable to SPECKS, then there must exist some number of specks N such that N-1 specks is preferable to torture, but torture is preferable to N+1 specks. But this is not very counterintuitive: since torture is worth around N specks, N-1 specks is not much better than torture, and torture is not much better than N+1 specks. So you don't need to know exactly where the boundary is in order to get approximately correct answers.
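To spell out the chain behind the first bullet, with made-up numbers purely for concreteness: let $S_k$ be the scenario "$10^k$ people suffer pain of intensity $X_k$", where $X_0$ is 50 years of torture, each $X_{k+1}$ is very slightly milder than $X_k$, and $X_m$ is a dust speck for some large $m$. If SPECKS $\succ$ TORTURE then $S_m \succ S_0$, so by transitivity there must be some $k$ with $S_{k+1} \succ S_k$: a preference for ten times as many people suffering an only-slightly-milder harm. That is exactly the step the first bullet calls unreasonable.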

Explaining "The Crackpot Bet"

To repeat what was said on the CFAR mailing list here: This "bet" isn't really a bet, since there is no upside for the other party; they are worse off than when they started in every possible scenario.

What are your plans for the evening of the apocalypse?

I don't think that chapter is trying to be realistic (it paints a pretty optimistic picture).

Counterfactuals, thick and thin

Sure, in that case there is a 0% counterfactual chance of heads; your words aren't going to flip the coin.

Counterfactuals, thick and thin

The question "how would the coin have landed if I had guessed tails?" seems to me like a reasonably well-defined physical question about how accurately you can flip a coin without having the result be affected by random noise such as someone saying "heads" or "tails" (as well as quantum fluctuations). It's not clear to me what the answer to this question is, though I would guess that the coin's counterfactual probability of landing heads is somewhere strictly between 0% and 50%.
