Philosophy PhD student. Interested in ethics, metaethics, AI, EA, disagreement/erisology
I really liked this. I thought the little graphics were a nice touch. And the idea is one of those that seems almost obvious in retrospect, but wasn't obvious at all before reading the post. Looking back I can see hints of it in thoughts I've had before, but that's not the same as having had the idea. And the handle ("point of easy progress") is memorable, and probably makes the concept more actionable (it's much easier to plan a project if you can have thoughts like "can I structure this in such a way that there is a point of easy progress, and that I will hit it within a short enough amount of time that it's motivating?").
I've started using the phrase "existential catastrophe" in my thinking about this; "x-catastrophe" doesn't really have much of a ring to it though, so maybe we need something else that abbreviates better?
So one thing I'm worried about is having a hard time navigating once we're a few episodes in. Perhaps you could link in the main post to the comment for each episode?
Could this be solved just by posting your work and then immediately sharing the link with people you specifically want feedback from? That way there's no expectation that they would have already seen it. (Granted, this is slightly different from a gdoc in that you can share a gdoc with one person, get their feedback, then share with another person, while what I suggested requires asking everyone you want feedback from all at once.)
I disagree; I think Kithpendragon did successfully refute the argument without providing examples. Their argument is quite simple, as I understand it: words can cause thoughts, thoughts can cause urges to perform actions which are harmful to oneself, and such urges can cause actions which are harmful to oneself. There's no claim that any of these things is particularly likely, just that they're possible, and if they're all possible, then it's possible for words to cause harm (again, perhaps not at all likely, for all Kithpendragon has said, but possible). It borders on a technicality, and elsethread I disputed its practical importance, but for all that it is successful at what it's trying to do.
I agree that the idea that concrete examples are a "likely hazard" seems a bit excessive, but I can see the reasoning here even if I don't agree with it: if you think that words have the potential to cause substantial harm, then it makes sense to think that if you put out a long list of words/statements chosen for their potential to be harmful, the likelihood that at least one person will be substantially harmed by at least one entry on the list is, if not high, then still high enough to warrant caution. Viliam has managed to get around this, because the reasoning only applies if you're directly mentioning the harmful words/statements, whereas he has described some examples indirectly.
A sneeze can determine much more than hurricane/no hurricane. It can determine the identities of everyone who exists, say, a few hundred years into the future and onwards.
If you're not already familiar, this argument gets made all the time in debates about "consequentialist cluelessness". This gets discussed, among other places, in this interview with Hilary Greaves: https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/. It's also related to the paralysis argument I mentioned in my other comment.
Upvoted for giving "defused examples" so to speak (examples that are described rather than directly used). I think this is a good strategy for avoiding the infohazard.
I was thinking a bit more about why Christian might have posted his comment, and why the post (cards on the table) got my hackles up the way it did, and I think it might have to do with the lengths you go to in order to avoid using any examples. Even though you aren't trying to argue for the thesis that we should be more careful, the way the post was written makes it seem as though you believe we should be much more careful about this sort of thing than we usually are. (Perhaps you don't think this; perhaps you think that the level of caution you exercised in this post is normal, given that giving examples would be basically optimizing for producing a list of "words that cause harm." But I think it's easy to interpret this strategy as implicitly claiming that people should be much more careful than they are, and to miss the fact that you aren't explicitly trying to give a full defense of that thesis in this post.)
Sorry for the long edit to my comment, I was editing while you posted your comment. Anyway, if your goal wasn't to go all the way to "people need to be more careful with their words" in this post, then fair enough.
I originally had a longer comment, but I'm afraid of getting embroiled in this, so here's a short-ish comment instead. Also, I recognize that there's more interpretive labor I could do here, but I figure it's better to say something non-optimal than to say nothing.
I'm guessing you don't mean "harm should be avoided whenever possible" literally. Here's why: if we take it literally, then it seems to imply that you should never say anything, since anything you say has some possibility of leading to a causal chain that produces harm. And I'm guessing you don't want to say that. (Related is the discussion of the "paralysis argument" in this interview: https://80000hours.org/podcast/episodes/will-macaskill-paralysis-and-hinge-of-history/#the-paralysis-argument-01542)
I think this is part of what's behind Christian's comment. If we don't want to be completely mute, then we are going to take some non-zero risk of harming someone sometime to some degree. So then the argument becomes about how much risk we should take. And if we're already at roughly the optimal level of risk, then it's not right to say that interlocutors should be more careful (to be clear, I am not claiming that we are at the optimal level of risk). So arguing that there's always some risk isn't enough to argue that interlocutors should be more careful -- you also have to argue that the current norms don't already prescribe the optimal level of risk, that they permit us to take more risk than we should. There is no way to avoid the tradeoff here; the question is where the tradeoff should be made.
[EDIT: So while Stuart Anderson does indeed simply repeat the argument you (successfully) refute in the post, Christian, if I'm reading him right, is making a different argument, and saying that your original argument doesn't get us all the way from "words can cause harm" to "interlocutors should be more careful with their words."
You want to argue that interlocutors should be more careful with their words [EDIT: kithpendragon clarifies below that they aren't aiming to do that, at least in this post]. You see some people (e.g. Stuart Anderson, and the people you allude to at the beginning), making the following sort of argument:
You successfully refute (1) in the post. But this doesn't get us to "people do need to be careful with their words" since the following sort of argument is also available:
A. Words don't have a high enough probability of causing enough harm to enough people that people need to be any more careful with them than they're already being.
B. Therefore, people don't need to be careful with their words (at least, not any more than they already are). [EDIT: list formatting]]