Reply to Nate Soares on Dolphins

Human intelligence counts as "gained characteristics not shared by the others".

I think you're reading this as not counting as much divergence, but "a list of divergences with few items" doesn't mean "not a lot of divergence". Human intelligence has an effect on the environment and capabilities of humans that is equal to or greater than the effect of the differences between birds and reptiles.

Reply to Nate Soares on Dolphins

Your comment seems to assume that Scott thinks there would be nothing very wrong with a definition of “fish” that includes whales, and that he thinks this only because it's something he has to believe in order to remain consistent while classifying transgender people the way they feel they should be classified.

Believing things for multiple reasons is a thing (despite the LW idea of a true rejection, as if people only have one reason for everything). Moreover, people aren't perfectly rational machines, and motivated reasoning is a thing. I certainly think that needing to believe it for the sake of transgender people is a large component of why he believes it, and that he probably wouldn't otherwise believe it, even if it's not the only reason why.

Am I anti-social if I get vaccinated now?

Vaccines that are already delivered to your country are not going to get shipped elsewhere.

This seems to fail once you apply acausal reasoning.

Reply to Nate Soares on Dolphins

What changed? Surely if “cognitively useful categories should carve reality at the joints, and dolphins being fish doesn’t do that” was good philosophy in 2008, it should still be good philosophy in 2021?

Scott Alexander's essay uses the example of fish versus whales to argue that transgender people should be classified by whatever sex they claim to be rather than classified by biological sex. This essay came out after 2008 and before 2021. And Scott Alexander is about as influential here as Yudkowsky.

In other words, what changed is that asserting that it makes sense to classify dolphins as fish is now something you need to assert for political purposes.

Edit: I missed the reference to gender issues. But I think it may explain why Yudkowsky and rationalists in general have changed their mind, regardless of why anyone in particular here has.

Assessing Interest in Group Trip to Secure Panamanian Residency [Imminent Rules Change]

If you propose a course of action which a normal person would find profoundly weird, I suggest that Chesterton's fence applies, and you should figure out why a normal person would object to it. Then articulate why it is usually beneficial to avoid such things, before you decide that this one time, the normal person is wrong and you really should go after the thing that he avoids.

And the answer is not going to be "because he's a normal person and so he keeps missing twenty dollar bills in the street".

Often, enemies really are innately evil.

By this reasoning almost nothing normally described as a terminal value is a terminal value. "He robs banks because he wants money because getting money makes him feel good".

Often, enemies really are innately evil.

I think the context is that many people say that there is no such thing as evil and advocate for some actions and against other actions based on that. Just pointing out that they are recommending harmful things is valuable.

Ruling out certain classes of responses is useful even if there is still more than one possibility remaining and it's still hard to pick the right one.

If someone told you not to use homeopathy to cure disease, would you respond that they haven't explained how you should cure disease?

Which animals can suffer?

Presumably people think that at some point an AI is able to suffer. So why wouldn't a neural network be able to suffer?

Covid 5/20: The Great Unmasking

Admitting one is wrong and correcting errors needs to be rewarded and encouraged rather than punished and piled onto.

What is happening now is the opposite of admitting one is wrong. It's not as if the CDC said "sorry, you could have gone without a mask for the last month, we'll try not to make that mistake again".

The Reebok effect

This implies that advertisers would be better off occasionally violating such assumptions (for instance, saying "of the top five" when they were in the top four), often enough to weaken the inferences viewers can make, to the advertisers' collective benefit.

Of course, the coordination problem in doing this is hard, but there are several ways around it (and not all of them just involve advertisers directly colluding with each other).
