I think I would argue that harm/care isn't obviously deontological. Many of the others are indeed about the performance of the action, but I think arguably harm/care is actually about the harm. There isn't an extra term for "and this was done by X".
That might just be me foisting my consequentialist intuitions on people, though.
"What if there's an arms race / race to the bottom in persuasiveness, and you have to pick up all the symmetrical weapons others use and then use asymmetrical weapons on top of those?"
Doesn't this question apply to other cases of symmetric/asymmetric weapons just as much?
I think the argument is that you want to try to avoid the arms race by getting everyone to agree to stick to symmetrical weapons because they believe it'll benefit them (because they're right). This may not work if they don't actually believe they're right and are just using persuasion as a tool, but I think it's something we could establish as a community norm in restricted circles at least.
The point that the Law needs to be simple and local so that humans can cope with it is also true of other domains. And this throws up an important constraint for people designing systems that humans are supposed to interact with: you must make it possible to reason simply and locally about them.
This comes up in programming (to a man with a hammer, everything looks like a nail): good programming practice emphasises splitting programs up into small components that can be reasoned about in isolation. Modularity, compositionality, abstraction, etc., aside from their other benefits, make it possible to reason about code locally.
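As a toy illustration (the functions and names here are mine, not from any particular codebase): each piece below can be understood and checked without reading the others, which is exactly the local reasoning I mean.

```python
import statistics

# Each function does one thing and can be reasoned about in isolation:
# to convince yourself `summarise` is correct, you never need to read `parse_row`.

def parse_row(line: str) -> float:
    """Parse a single CSV-ish row like 'label,1.5' into its number."""
    return float(line.split(",")[1])

def summarise(values: list[float]) -> float:
    """Reduce a list of numbers to their mean."""
    return statistics.mean(values)

def report(lines: list[str]) -> float:
    """Compose the two local pieces into the full pipeline."""
    return summarise([parse_row(line) for line in lines])

print(report(["a,1.0", "b,2.0", "c,3.0"]))  # → 2.0
```

The point isn't that the pipeline couldn't be written as one big loop - it's that in this form each claim about the program is local to a few lines.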
Of course, some people inexplicably believe that programs are mostly supposed to be consumed by computers, which have very different simplicity requirements and don't care much about locality. This can lead to programs that are very difficult for humans to consume.
Similarly, if you are writing a mathematical proof, it is good practice to try and split it up into small lemmas, transform the domain with definitions to make it simpler, and prove sub-components in isolation.
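A trivial sketch of what I mean, in Lean 4 syntax (my own toy example): the lemma is proved once, in isolation, and the main theorem only needs to trust its statement, not re-derive it.

```lean
-- A small lemma, provable in isolation.
theorem swap_sum (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- The main result composes the lemma without reopening its proof.
theorem main (a b c : Nat) : (a + b) + c = c + (b + a) := by
  rw [swap_sum a b]
  exact Nat.add_comm (b + a) c
```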
Interestingly, these days you can also write mathematical proofs to be consumed by a computer. And these often suffer some of the same problems that computer programs do - because what is simple for the computer does not necessarily correspond to what is simple for the human.
(Tendentious speculation: perhaps it is not a coincidence that mathematicians tend to gravitate towards functional programming.)
I am reminded of Guided by the Beauty of our Weapons. Specifically, it seems like we want to encourage forms of rhetoric that are disproportionately persuasive when deployed by someone who is in fact right.
Something like "make the structure of your argument clear" is probably good (since it will make bad arguments look bad), "use vivid examples" is unclear (can draw people's attention to the crux of your argument, or distract from it), "tone and posture" are probably bad (because the effect is symmetrical).
So a good test is "would this have an equal effect on the persuasiveness of my speech if I were making an invalid point?". If the answer is no, then do it; otherwise maybe not.
Yes, this is very annoying.
I found Kevin Simler's observation that an apology is a status-lowering move to be very helpful. In particular, it gives you a good way to tell if you made an apology properly: do you feel lower status?
I think that even if you take the advice in this post, you can still make non-apologies if you don't manage to lower your own status. Bits of the script that are therefore important:
This also means that weird dramatic stuff can be good if it actually makes you lower your status. If falling to your knees and embracing the person's legs will be perceived as lowering your status rather than as funny, then maybe that will help.
This is a great point. I think this can also lead to cognitive dissonance: if you can predict that doing X will give you a small chance of doing Y, then in some sense it's already in your choice set and you've got the regret. But if you can stick your fingers in your ears enough and pretend that X isn't possible, then that saves you from the regret.
Possible values of X: moving, starting a company, ending a relationship. Scary big decisions in general.
Something that confused me for a bit: people use regret minimization to handle exploration-exploitation problems, so shouldn't they have noticed a bias against exploration? I think the answer here is that the "exploration" people usually think about involves taking an already-known option to gain more information about it, not actually expanding the choice set. I don't know of any framework that includes actions that actually change the choice set.
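A minimal epsilon-greedy bandit sketch (my own toy code, not any standard library's API) makes the distinction concrete: "exploration" here only ever means pulling an arm that is already in the fixed list `arms` - nothing in the framework can add a new arm.

```python
import random

random.seed(0)

arms = [0.3, 0.5, 0.8]   # true payout probabilities; the choice set is fixed up front
counts = [0, 0, 0]
totals = [0.0, 0.0, 0.0]
epsilon = 0.1

for _ in range(10_000):
    if random.random() < epsilon:
        i = random.randrange(len(arms))  # "explore": revisit a known arm at random
    else:
        means = [t / c if c else 0.0 for t, c in zip(totals, counts)]
        i = max(range(len(arms)), key=lambda j: means[j])  # "exploit": best arm so far
    reward = 1.0 if random.random() < arms[i] else 0.0
    counts[i] += 1
    totals[i] += reward

best = max(range(len(arms)), key=lambda j: totals[j] / counts[j])
print(best)  # the learner should settle on arm 2 (true p = 0.8)
```

Note that `arms` never grows: the regret framework evaluates you against the best arm in that list, so "I could have invented a fourth option" is invisible to it.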
I've read it shallowly, and I think it's generally good. I think I'll have some more comments after I've thought about it a bit more. I'm surprised either by the lack of previous quantitative models, or the lack of reference to them (which is unsurprising if they don't exist!). Is there really nothing prior to this?
I would dearly, dearly love to be able to use the fairly-standard Markdown footnote extension.
I think your example won't work, but it depends on the implementation of FHE. If there's a nonce involved (which there really should be), then you'll get different encrypted data for the output of the two programs you run, even though the underlying data is the same.
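A toy sketch of why the nonce matters (illustrative stream-cipher-style code of my own, not real FHE and not secure): encryption is randomized, so the same plaintext encrypts to different ciphertexts on each call, and B can't compare encrypted outputs byte-for-byte.

```python
import hashlib
import secrets

KEY = b"shared-secret"  # stand-in key for illustration only

def encrypt(plaintext: bytes) -> bytes:
    """Randomized encryption: a fresh nonce makes every ciphertext distinct."""
    nonce = secrets.token_bytes(16)
    keystream = hashlib.sha256(KEY + nonce).digest()
    cipher = bytes(p ^ k for p, k in zip(plaintext, keystream))
    return nonce + cipher

def decrypt(blob: bytes) -> bytes:
    nonce, cipher = blob[:16], blob[16:]
    keystream = hashlib.sha256(KEY + nonce).digest()
    return bytes(c ^ k for c, k in zip(cipher, keystream))

c1 = encrypt(b"same result")
c2 = encrypt(b"same result")
print(c1 != c2)                    # True: the ciphertexts differ because of the nonce
print(decrypt(c1) == decrypt(c2))  # True: the underlying data is the same
```

Real FHE schemes achieve the same property via randomness in the encryption itself, but the upshot is identical: equal plaintexts don't give equal ciphertexts.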
But you don't actually need to do that. The protocol lets B exfiltrate one bit of data, whatever bit they like. A doesn't get to validate the program that B runs, they can only validate the output. So any program that produces 0 or 1 will satisfy A and they'll even decrypt the output for you.
That does indeed mean that B can find out if A is blackmailable, or something, so exposing your source code is still risky. What would be really cool would be a way to let A also be sure what program has been run on their source by B, but I couldn't think of a way to do this such that both A and B are sure that the program was the one that actually got run.