Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.
(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)
(To be clear, my take on all of this is that it is often appropriate to be rude and offensive, and often inappropriate. What has made these discussions so frustrating is that Said continues to insist that no rudeness or offensiveness is present in any of his writing, which makes it impossible to have a conversation about whether the rudeness or offensiveness is appropriate in the relevant context.
Like, yeah, LessWrong has a culture, a lot of which is determined by what things people are rude and offensive towards. One of my jobs as a moderator is to steer where that goes. If someone keeps being rude and offensive towards things I really want to cultivate on the site, I will tell them to stop, or at least to provide arguments for why this thing, which I do not think is worth scorn, deserves scorn.
But if that person then insists that no rudeness or offensiveness was present in any of their writing, despite an overwhelming fraction of readers reading it as such, then they are either a writer so bad at communication as to not belong on the site, or trying to avoid accountability for the content of their messages, both of which leave little option but to take moderation action that limits their contributions to the site)
we know to be associated with consciousness in humans
To be clear, my opinion is that we have no idea which areas of the brain are "associated with consciousness", and the whole area of research that claims otherwise is bunk.
It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience should have their welfare strongly discounted using neuron count.
You can't have "strong behavioral evidence of consciousness". At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.
Like, modern video game characters (without any use of AI) would also check a huge number of these "behavioral evidence" checkboxes, and really very obviously aren't conscious or moral patients of non-negligible weight.
You also have subdivision issues. Like, by this logic you end up thinking that a swarm of fish is less morally relevant than the individual fish that compose it.
Behavioral evidence is just very weak, and the specific checkbox approach that RP took also doesn't seem to me like it makes much sense even if you want to go down the behavioral route (in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff there is going on in the brain of whatever you are looking at, like whether it has complicated social models and long-term goals and other things).
I mean, I agree that if you want to entertain this as one remote possibility, sure, go ahead, I am not saying morality could not turn out to be weird. But clearly you can construct arguments of similar quality for at least hundreds if not thousands or tens of thousands of distinct conclusions.
If you currently want to argue that this is true, and a reasonable assumption on which to make your purchase decisions, I would contend that yes, you are also very very confused about how ethics works.
Like, you can have a mutual state of knowledge about the uncertainty and the correct way to process that uncertainty. There are many plausible arguments for why random.org will spit out a specific number if you ask it for a random number, but it is also obvious that you are supposed to have uncertainty about what number it outputs. If someone shows up and claims to be confident that random.org will spit out a specific number next, they are obviously wrong, even if there was actually a non-trivial chance that the number they were confident in would be picked.
The top-level post calculates an estimate in expectation. If you calculate something in expectation you are integrating over your uncertainty. If you estimate that a random publicly traded company is worth 10x its ticker price, you might not be definitely wrong, but it is clear that you need to have a good argument, and if you do not have one, then you are obviously wrong.
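(To spell out what I mean by "integrating over your uncertainty", here is a minimal sketch with placeholder symbols: an in-expectation estimate is a probability-weighted sum over the hypotheses you are uncertain between,

$$\mathbb{E}[\text{welfare}] = \sum_i p_i \, w_i$$

where each $p_i$ is the credence you assign to hypothesis $i$ and $w_i$ is the welfare under that hypothesis. Writing down the number commits you to some distribution over those hypotheses; you don't get to skip the step where you justify that distribution.)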
In policy contexts (not just political policy, but like, making group decisions of almost any kind), it tends to matter a lot what the author intended, since that's often a natural Schelling point for resolving ambiguities, one that later gets referred back to.
See for example courts trying to interpret what previous courts intended with a judgement, or what a law was intended to do. Same for company decisions to move ahead with a project. In almost any context with stakes, the intention of the author continues to matter (though how much varies from context to context; my sense is it's almost always a good amount).
There are lots of critiques spread across lots of forum comments, but no single report I could link to. But you can see the relevant methodology section of the RP report yourself here:
https://rethinkpriorities.org/research-area/the-welfare-range-table/
You can see they approximately solely rely on behavioral proxies. I remember there was some section somewhere in the sequence arguing explicitly for this methodology, using some "we want to make a minimum-assumption analysis and anything that looks at brain internals and neuron counts would introduce more assumptions" kind of reasoning, which I have always considered very weak.
I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it's still not great, and, to be clear, I also consider hedonic utilitarianism in general not a great foundation for ethics of any kind).
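(For a rough sense of scale, and assuming a simple linear weighting by neuron count, which is itself a strong assumption: commonly cited figures are on the order of $10^6$ neurons for a honey bee and about $8.6 \times 10^{10}$ for a human, giving

$$\frac{10^6}{8.6 \times 10^{10}} \approx 1.2 \times 10^{-5}$$

which is roughly four orders of magnitude below the ~0.15 multiplier that comes up below.)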
Moderation privileges require passing various karma thresholds (for frontpage posts it's 2000 karma, I think; for personal posts it's 50).
One can totally arrive at a conclusion similar to "bee suffering is 15% as important as human suffering" via epistemic routes different to the one you outline.
I am not familiar with any! I've only seen these estimates arrived at via this IMO crazy chain of logic. It's plausible there are others, though I haven't seen them. I also really have no candidates that don't route at least through assumption one (hedonic utilitarianism), which I already think is very weak.
Like, I am not saying there is no way I could be convinced of this number. I do think that, as a consistency check, arriving at numbers not as crazy as this one is quite important in my theory of ethics for grounding whether any of this moral reasoning checks out, so I would start off highly skeptical, but I would of course entertain arguments, and once in a while an argument might take me to a place as prima facie implausible as this one (though I think that has so far never happened in my life for something that seems this prima facie implausible, but I've gotten reasonably close).
Again, I think there are arguments that might elevate the assumptions of this post into "remotely plausible" territory, but there are, I am pretty sure, no arguments presently available to humanity that elevate the assumptions of this post into "reasonable to take as a given in a blogpost without extensive caveats".
I think if someone came to me and was like "yes, I get that this sounds crazy, but I think here is an argument for why 7 bees might be more important than a human" then I would of course hear them out. I don't think considering this as a hypothesis is crazy.
If someone comes to me and says "Look, I did not arrive at this conclusion via the usual crazy RP welfare-range multiplication insanity, but I have come to the confident conclusion that 7 bees are more important than a human, and I will now start using that as the basis for important real-world decisions I am making" then... I would hear them out, and also honestly make sure I keep my distance from them, and update that they are probably not particularly good at reasoning, and, if they take it really seriously, maybe a bit unhinged.
So the prior analysis weighs heavily in my mind. I don't think we have much good foundational grounding for morality that allows one to arrive at confident conclusions of this kind, ones that run so counter to basically all other moral intuitions and heuristics we have, and so if anyone does arrive at them, I think that alone is quite a bit of evidence that something fishy is going on.
I just don’t think that saying things like “extremely unlikely” or implying someone hasn’t “thought about [x] reasonably at all” is either productive or particularly accurate when we’re talking about something for which we have very little well-grounded knowledge.
I agree that some amount of extreme uncertainty is appropriate, but this doesn't mean that no conclusions are therefore insane. If someone was doing estimates that take into account extreme uncertainty, I would be much less upset! Instead the post says things like this:
If we assume very very very conservatively that a day of honey bee life is as unpleasant as a day spent attending a boring lecture, and then multiply by .15 to take into account the fact bees are probably less sentient than people
That is not a position of extreme uncertainty! And I really don't think there exist any arguments I just haven't encountered that would collapse this uncertainty in a reasonable way for the OP here.
I think a reasonable position on ethical values is extreme uncertainty. This post is not holding that position. It seems to think that it's a conservative estimate that a day of honey bee life is 15% as bad as a bad human day.
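(For reference, the arithmetic I take this to rely on, assuming the 0.15 multiplier is applied linearly: $0.15 \times 7 \approx 1$, so roughly seven bee-days of suffering would outweigh one comparably bad human day, which is presumably where the "7 bees vs. one human" comparison comes from.)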
Yep, though of course there are priors. The thing I am saying is that there are at least some things (and not just an extremely small set of things) that it is OK to be rude towards, not that the average quality/value-produced of rude and non-rude content is the same.
For enforcement-efficiency reasons, cultural Schelling-point reasons, and various other reasons, it might still make sense to place something like a burden of proof on the person who claims that rudeness and offensiveness is appropriate in a given case, so enforcement against rudeness without justification might still make sense, and my guess is it does indeed make sense.
Also, for you in particular, I have seen the things that you tend to be rude and offensive towards, at least historically, and haven't been very happy about that, and so the prior is more skewed against that. My guess is I would tell you in particular that you have a bad track record of aiming it well, and so would request additional justification on the marginal case from your side (similar to how we generally treat repeat criminal offenders differently from first-time offenders, and often declare whole sets of actions that are otherwise completely legal to be off-limits for them, in prevention of future harm).