The point of deontological rules is to override consequentialist conclusions in practice, to set up rules of the game that consequentialism would then need to play within. A rule has a scope of applicability, the situations where it's triggered, across many epistemic states, crucially including those that say the rule makes a wrong recommendation. Different people following the same rule are coordinated by it, so a rule's scope of applicability should also extend across different values, not just different beliefs.
In this framing, there can be disagreements about the scope of applicability of a rule among people who are all following the rule with some scope. And there can be different coalitions of people following different flavors of a rule. Beliefs and values that move one sufficiently far from endorsing a rule overall should then be thought of as first pushing the person out of some coalition of rule-followers; only then can the rule's scope of applicability significantly change in that person's behavior, enabling consequentialist decision-making to start playing a different game.
(Rules make sense as a matter of pure consequentialism in some sense, but only when it's sufficiently aware of the issues with running on corrupted hardware and of coordination opportunities, and doesn't get too causal or updateful to notice many potential coalitions.)
I'm not sure I entirely understand your point, but I think it's worth noting that, properly understood, this doesn't have to mean that people follow "different rules", just that the one rule references the actor's beliefs or intent. For example, if I have a rule that people who come over to my place shouldn't intentionally destroy my things, I can acknowledge that someone who intentionally takes a glass and smashes it violates the rule, while someone who accidentally drops a glass does not. Many commonly advocated rules are also some version of "don't intentionally say false things". Two people can say the exact same statement, one knowing its falsity, the other believing it to be true. The first violates the rule, while the other does not.
Beliefs and values that move one sufficiently far from endorsing a rule overall should then be thought of as first pushing the person out of some coalition of rule-followers
Can you explain this more concretely? What would be an example of a belief or value that you have in mind here? Do you think this applies to either of the AI safety examples referenced in my post?
Scott Alexander has a recent piece about "deontological bars" in the context of AI safety, in which he describes the state of the discourse around two alleged bars: one against supporting AI companies and one against certain kinds of activism.
My initial reaction upon reading this was that I don't see how there can be a general bar against either of these things. To the extent that there is some kind of bar, I feel like there need to be additional details that are being assumed. The case that there is a deontological bar on supporting AI companies arises in a context where you believe said companies have a reasonable chance of causing extreme harm, like human extinction. If the AI company in question were some random company using AI to help cute puppies rather than a frontier AI company, I don't think many people would claim there is a deontological bar on supporting it. Similarly, the bar on activism presumably assumes that the activism in question is, or is likely to become, dishonest or extreme in some way. In both cases there is an underlying belief about the nature of the action that is required for the existence of the deontological bar.
In my view, in order for an action of this type to be deontologically barred, the person taking the action (supporting the AI company, engaging in the activism) must have the beliefs that make the action barred. Unlike consequentialism, deontology often cares about the intent of the actor when they take an action, and I think that would apply to the types of deontological constraints that Alexander is discussing above. I can see how working for a company that is likely to cause great harm could be deontologically barred, but I think the person who is actually working with or supporting the company must believe that it is likely to cause that harm. Similarly, I can see how dishonest or extreme activism could be barred, but the activist must intend the dishonest or extreme acts. Note that this is different from saying that the activist believes that their actions are dishonest or extreme. In both cases, the actor need not believe their actions are barred for them to actually be barred, but they need to have beliefs such that there is the necessary intent. The AI lab supporter might need to believe that the lab they support has a relatively high chance of causing great harm; it isn't enough for it to be true that the lab has a high chance of causing great harm. By the same token, the activist might need to believe that their activism has a relatively high chance of leading to violence; it isn't enough for it to be true that their activism has a high chance of leading to violence. If this kind of intent is not required, it seems to me like these alleged deontological bars are simply consequentialism in disguise.
For consequentialism it mostly matters what is true (given that the statements here are statements about the probabilities of various consequences), but deontology cares about intent. As a result, when we evaluate deontological bars, I think these bars need to make reference to the beliefs of the person taking the action in question. This distinction is particularly important if you want to accuse someone of violating a bar. It can be tempting to use your own beliefs about the situation, but in my view that is a mistake. If your real problem with someone is that they have incorrect beliefs about the nature or consequences of their actions, it's probably better to just say that explicitly rather than accusing them of violating a deontological bar that only exists for those who have your beliefs.