Previously:

My parents taught me the norm of keeping my promises.

My vague societal culture taught me a norm of automatically treating certain types of information as private.

My vague rationalist culture taught me norms that include:

Eliezer's post about meta-honesty was one of the most influential posts I've read in the past few years, and among the posts that inspired the Coordination Frontier. I was impressed that Eliezer looked at ethical edge cases, and wasn't content to make a reasonable judgment call and declare himself done.

He went on to think through the ramifications of various policies, devise a potential new norm/protocol, and examine reasons that protocol might or might not work. He noted considerations like [paraphrased] "It matters that the norm be simple enough that people can reliably understand and use it." Or, quoted directly: "This norm is too subtle for Twitter. It might be too subtle for us, too."

From this post, I derived a (not-quite-spelled-out) norm of "when you try to navigate the edge cases of your norms, try thinking through the underlying principles. But don't try to be too clever, and consider the ways your edge-case-handling may fail to scale."

With this in mind, I want to walk through one of the ethical dilemmas I reflected on while writing Norm Innovation and Theory of Mind. This is more of an object-level post, primarily a followup to my Privacy Practices sequence. But it seemed like a useful illustrative example for the Coordination Frontier concept.

Privacy norms can be wielded as an obfuscating weapon

Sometimes, privacy is wielded as a tool to enable manipulation. 

I’ve run into a couple of people who exploited my good faith / willingness to keep things confidential, as part of an overall manipulative pattern. Unfortunately, I don't feel comfortable going too far into the details here (please don't speculate in the comments), which makes this a bit harder to sanity-check.

"Manipulation" is a tricky to define. It's a blurry line between "communicating normally" and "communicating in a way that systematically distorts another people's thinking and controls their behavior against their wishes". I'd like it say "it's hard to define but you know it when you see it", but often it's hard to see it because manipulation systematically tries not to be seen. 

I've met some people who seemed deliberately manipulative, and some who might have been well-intentioned, but in the end it didn’t matter. They interacted with me (and others) in a way that felt increasingly uncomfortable, and which seemed to be harming people. They skirted lines, and wove narratives that made it feel awkward for me to criticize them or think clearly.

One of the key strategies they employed was to make it feel awkward to get help from other people to think or sanity check things. And one tactic in that strategy was pushing for confidentiality – sometimes explicitly, sometimes implicitly. 

Explicit promises I regret

One person (call them Dave) asked for explicit promises of confidentiality on a number of occasions, sometimes after the fact. We once had a long conversation about their worldview and worries they had, which ended with me saying "so, wait, is this meant to be confidential?". They responded, somewhat alarmed-seeming, with "oh, definitely. I would never have told you all this if I thought you might share it."

At the time I found that somewhat annoying, but agreed to keep it confidential and didn't think much of it. (Nowadays I try to notice when a conversation is veering into sensitive topics, and to have a quick meta-conversation about confidentiality preferences in advance.)

The most frustrating thing came under ideal privacy conditions: Dave asked me to make a specific promise of confidentiality before telling me something. I agreed. Then they told me some stories that included somebody harming themselves as a result of interacting with Dave.

Later on, a number of people turned out to be having bad interactions with Dave. Many of them had had similar conversations with Dave. Some of those conversations had included promises of confidentiality. Others had not. It gradually became clear that Dave was not being honest.

What became really frustrating was that a) it was actually important to figure out whether Dave was harmful, which was much harder to do without sharing notes, and b) more infuriatingly, most of the information had been given to some people without conditions of confidentiality, but it was still hard to talk about openly without betraying promises.

I think it’s important to take promises seriously. But in this case I think many of the promises had been a mistake. Part of this is because I think people should generally make fewer privacy promises in the first place.

At the time, I decided to reveal some bits of the information when it seemed really important, acknowledging to myself that this made me, in some ways, a less trustworthy person. This seemed worth it, because if I hadn’t revealed the information, I’d be revealing myself to be untrustworthy in other ways – I’d be the sort of person who was vulnerable to manipulative attacks. Integrity isn’t just about being honest; it’s about being functional and robust. Sometimes it involves hard tradeoffs in no-win scenarios.

It feels important to me that I internalize that hit to my integrity. That’s part of why I’m writing this blogpost – it’s sometimes necessary to violate your internal moral code (including the part about keeping promises). But when I do, I want people to know that I take it seriously. And I want people to have an accurate model of me.

But in this case, at least, the solution going forwards is pretty simple: I now try to avoid making such promises in the first place.

Instead, I include a clause saying “in rare circumstances, if I come to believe that this was a part of a manipulative pattern that is harming people, I may carefully share some of the information with other people.” Most of the time this is fine, because most private information is obviously not the sort of thing that’s likely to be interpreted as part of a manipulative pattern (assuming you trust me to have remotely sane judgment).

There are some cases where you have information that you’d like to share with me, that I actually want to hear, which is the sort of thing that could easily be construed as manipulative and/or harmful, and which requires more trust than you currently place in my judgment. (Dave would have recognized this to be true of the conversation they were asking for confidentiality about.)

I am not sure what to do in those cases. 

I think I would never commit to 100% reliable confidentiality. But, if the conversation seemed important, I’d first have a lengthy conversation about meta-honesty and meta-privacy. I might ask the person (call her Alice) for ~2 confidants that we both trust (from different parts of the social graph), whom I could go to for help evaluating whether Alice is manipulating me.

Implicit confidentiality and incriminating evidence

Another person (call her Carla) never extracted a promise of confidentiality from me. But she took advantage of a vague background norm. Normally, if someone comes to me talking about something troubling them, I try to keep it private by default. If someone is hurting and expresses vulnerability, I want them to feel safe talking through a problem with me (whether they're just venting, or trying to devise a solution).

Sometimes, this includes them talking about times they screwed up, or ways they harmed people. And in most cases (that I have experienced) it still seemed correct to keep that confidential by default – the harm was relatively minor. Meanwhile, there was value in helping someone with an "Am I the asshole?" kind of question.

But some of my talks with Carla veered into “man, this is actually a red flag that should have prompted me to a higher level of attention”, where I should have considered not just how to help Carla, but how to investigate whether Carla was harming others and what to do about it. 

In one notable case, the conversation broached a subject that might have been explicitly damning of Carla. I asked a clarifying question about it. She said something evasive and avoided answering the question. I let it slide.

If I had paid more attention, I think I could have updated on Carla not being trustworthy much sooner. (In fact, it was another few years before I made the update, and Carla is no longer welcome in my day-to-day life.) I now believe Carla had a conscious strategy of revealing different parts of herself to different people, making people feel awkward about violating her trust, and using that to get away with harmful behavior in relatively plain sight.

I’m a bit unsure how harshly to judge myself. Noticing manipulation, evasion, and adversarial action is legitimately hard. My guess is that at the time, it was a little beyond my skillset to have noticed and taken appropriate action. It’s not useful to judge yourself harshly for things you couldn’t really have done better. 

But it wasn’t unreasonably beyond my skillset-at-the-time. And in any case, by this point, it is within my skillset. I hold myself now to the standard of paying attention if someone is skirting the line of confessing something harmful. 

What do you do if someone does confess something harmful, though? 

It’s still generally good to have a norm of “people can come to each other expressing vulnerabilities.” It’s bad if Alice has to worry “but, if I express a vulnerability that Bob decides is actually bad, Bob will reveal it to other people and I will get hurt.” 

Most of the time, I think it is quite good for Alice to feel safe coming to me, even if I think she’s being a bit of a jerk to someone. It’s only in rare cases that I think it makes sense to get a sanity-check from someone else.

I don’t think I ever owed Carla the security of a promise. But, it still matters whether people can generally expect to feel safe sharing vulnerable information with me.

Reneging on Confidentiality Pro-Socially

I don’t have a great solution. But, here is my current algorithm for how to handle this class of situation:

First, put a lot of upfront effort into talking publicly about my privacy policies, so that they’re already in the water and, ideally, Alice already knows about them.

Second, notice as soon as Alice starts sharing something vulnerable, and say “hey, is this something you want me to keep confidential? If so, I’d like to chat a little about how I do confidentiality.” (What happens next depends on the exact situation. But at the very least I convey that I’m not promising confidentiality yet. And if that means Alice isn’t comfortable sharing, she should stop and we should talk about it more at the meta level.)

Third, if I think Alice is manipulating me and I’m not sure what to do, get help from a confidant who promises a high degree of confidentiality. Share as little information with them as possible, so that they can help me form my judgment about the situation. Try to get as much clarity as I can, while violating as few implicit or explicit expectations of privacy as possible.

Fourth, ????. Maybe I decide the red flag is actually just a yellow flag, and Alice is fine, and I continue to help Alice with her problem. If I believe Alice’s behavior is manipulative and harmful, but that she’s mostly acting in good faith, maybe I talk directly to her about it.

And, in (I hope rare?) cases, ask a couple other confidants, and if everyone seems to agree that Alice is acting adversarially, maybe start treating her as an adversary.
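
For concreteness, here is the shape of that procedure as a minimal Python sketch. This is purely illustrative: the function, its parameters, and the idea that these judgments reduce to a few booleans are all my own simplifications, not a claim that social judgment actually works like code.

```python
from enum import Enum, auto

class Action(Enum):
    KEEP_HELPING = auto()        # the "red flag was just a yellow flag" outcome
    TALK_DIRECTLY = auto()       # harmful, but likely good faith: raise it with her
    TREAT_AS_ADVERSARY = auto()  # the rare, everyone-agrees case

def handle_disclosure(looks_manipulative: bool,
                      confidants_agree: bool,
                      good_faith: bool) -> Action:
    """Steps 3-4 of the procedure above, as a toy decision function.

    Steps 1-2 (publishing privacy policies, and flagging "I'm not
    promising confidentiality yet") happen before this is ever called.
    """
    # Step 3: a red flag alone isn't enough -- check it against a
    # trusted confidant's judgment, sharing as little as possible.
    if looks_manipulative and not confidants_agree:
        looks_manipulative = False  # downgrade to a yellow flag

    # Step 4: act on the sanity-checked judgment.
    if not looks_manipulative:
        return Action.KEEP_HELPING
    if good_faith:
        return Action.TALK_DIRECTLY
    return Action.TREAT_AS_ADVERSARY
```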

The Catch-All Escape Clause

Once upon a time, I didn’t have a special clause in my privacy policy for “manipulative patterns”. I made promises that didn’t caveat that possibility. I had the moral-unluck to have to deal with some situations without having thought them through in advance, and took a hit to my integrity because of that.

It seems quite plausible this will not be the last time I discover a vulnerability in my privacy practices, or my general commitment-making practices. 

So, it currently seems like I should include more general escape clauses. If you are trusting me with something important, you are necessarily trusting my future judgment. I can promise that, if I need to renege on a promise, I will do so as pro-socially as I can (i.e. try to internalize as much of the cost as I can, and try to adjust my overall policies to avoid having to renege again in the future).

Communities of Robust Agents

These are my current practices. I’m not confident they are best practices. I think this is a domain where it is particularly important that a social network has shared assumptions (or at least, common knowledge of divergent assumptions). 

It matters whether a community is a safe space to vulnerably reveal challenges you’re facing. 

It matters whether a community can sanely discuss literal infohazards. 

It also matters whether a community can notice when someone is using the guise of vulnerability or infohazards to obfuscate a pattern of harm, or power grab. 

I aspire to be a robust agent, and I hope that my community can be a robust community. The social circles I run in are trying to do complicated things, for which there is no common wisdom. 

Those circles require norms that are intelligently designed, not culturally evolved. I want to have norms that are stable upon reflection, that are possible to talk about publicly, that people can inspect and agree “yes, these norms are the best tradeoffs we could make given the circumstances.”


 

Comments

To keep a secret properly, you have to act as if you didn't know it. At the same time, if you see things related to the secret, you make conclusions; but then you also have to act as if you hadn't made those conclusions. If the secret is entangled with many things in your life, you need to keep two separate realities.

Secrets that are not entangled with anything else are relatively easy to keep. You need to remember to never mention X, and that's it. I guess it is easy to make a mistake and assume that the secret will be of this type, and that it will be easy to keep it... and it turns out the other way round, and suddenly you need to keep track of two separate realities, and it is difficult.

Even worse if you e.g. know that something bad is going to happen, but you shouldn't try to prevent it, because in the "as if" reality, you do not have the information. Now you pay an additional cost that you didn't expect before.

Keeping a secret may require you to lie. Someone asks you "do you have any evidence of X?", and you only keep the secret if you lie and say "no". Again, it is possible that you didn't realize this before; you expected that you would be required to remain silent on something, not to actively tell a lie.

Another problem is that it is difficult to keep two different realities in mind. Are you sure you can correctly simulate "what would be my conclusion from observing facts A, B, C, if I didn't know about X?" Like, maybe seeing just A, B, C would be enough for you to figure out X independently. Or maybe not.

Or maybe you would merely suspect X, like maybe with 30% probability. And maybe it would prompt you to look for further evidence about X. So, to properly simulate the "you, who haven't been told X", should you now pretend to do an investigation that is equally likely to make you update towards X or away from X? Are you sure you can actually do it? Is this even a meaningful thing to do? Whom are you trying to impress, exactly?

So, another bad consequence of being told a secret X is that it prevents you from finding out X independently and being free to talk about it openly. It even prevents you from being properly curious, because now anything you believe can be motivated. Most likely, if you conclude that you would have figured out X independently anyway, and are therefore no longer obliged to keep it, the person who told you the secret will disagree.

A tangent/complication I'd love more clarity on: how to balance the fact that confidentiality can be used as cover, the way you describe, against the fact that real good can come out of therapy/priest-type confidentiality, including when keeping the secret means allowing short-term harm. Sometimes people really are working through their shit, and talking with you is part of their process. Sometimes they're not doing anything bad but are very afraid they are, and won't talk about it without a strong guarantee of confidentiality (this category commonly includes abuse victims who've been taught it's their fault). It can be really hard to distinguish those times from people who are manipulating you; there's not even a bright line between them.

I've had some relationships that felt sort of priest/therapist-like... which then did indeed turn out to be manipulative-at-me. I.e. Alice "confesses" something to me about how they are manipulating or hurting others. But, over time, I realized they were also manipulating/hurting me (in smaller ways), and that because I was a shared part of the social fabric, I was somewhat complicit in their harming others.

Two things that seem noteworthy about priests/therapists are:

  • they are (I hope?) trained in skills for dealing with manipulative people
  • they are somewhat separated in the social fabric. (Therapists go out of their way to not be friends with you or know your social circle, AFAICT. Priests, I think, are part of the social fabric but sort of specialized.)

Re: Skills/training: I've gotten some experience dealing with manipulative people now, which (maybe?) helps me hold my own against them (although mostly it means that I have less tolerance for dealing with them or making space for them in my life). It'd have been nice if I'd gotten those skills without having to go through the corresponding experiences. Much of the time I expect people don't have that luxury, but helping people gain the skill seems like part of the ideal.

they are somewhat separated in the social fabric

I agree that reduces the risk considerably. I think there are cases where the benefits of some amount of privacy-even-when-you're-maybe-harming-people between friends are worth the risks (but the exact amounts of benefit, risk, and secrecy are up for debate in any given case, and by default I won't agree to keep secrets that leave me impotent as people are harmed).

Things I've found helpful with the trade-off:

  • Having confidentiality only extend to people in the same social network, so the secret-keeping friend can get a sanity check without imperilling the one sharing the secret. 
  • Confidentiality only extends to things they tell you under confessional. It doesn't inhibit you from sharing things you noticed on your own or repeating things other people said, even if they point to the same conclusion as the secret. 

I'll note that while the latter is sane, it leads to potential issues with Parallel Construction, which I would expect a bad actor to almost certainly engage in.

It's plausible that literal priests and therapists have useful knowledge about this that you could find out by googling and talking to acquaintances, respectively.

I don't know how it works for therapists, but I know a bit about the priest situation.

Catholics take the relationship pretty seriously. Priests are supposed to give up their own life rather than violate the confidentiality of confession.

Civil law largely takes its lead from the Catholic church by carving out an exception so Catholic priests aren't forced to constantly refuse to testify and find themselves in contempt of court.

However, the exception is not complete. In various localities there are carve-outs for mandatory reporting about things like child abuse. Catholic priests are supposed to keep the secret anyway and go to jail for violating the law if they refuse to testify.

Other religions are less strict. For example, in my experience with Zen, dokusan ("going alone to the teacher", i.e. a private conversation between teacher and student about practice) is legally protected the same way confession is. Within our tradition there's a more standard assumption of privacy with a somewhat reasonable expectation that things might be shared among other teachers via your teacher asking for advice or reporting dangerous things to authorities, but it's not absolute like in Catholicism.

However, it's very easy to break the confidentiality rules of confession-like situations. In particular, my understanding is that if a person ever talks about what is discussed in there at all to anyone else, they forfeit the right to confidentiality altogether under the law, and the priest/etc. can be compelled to testify (civilly, anyway; Catholic priests are still not supposed to, as I understand it, and can be excommunicated if they do).

In the end the choice the Catholic church makes is an absolute one based on being able to grant a religious sacrament. Others make somewhat less absolute guarantees of confidentiality that nonetheless are enough to enable someone to speak openly in ways that they wouldn't without such protection, but not in such an absolute way that reasonable harm cannot be avoided, as in the Catholic situation.

When I imagine a situation with a priest, I imagine that there's exactly one priest and common knowledge of who it is. Which seems like it changes things a bit.

  • "Whoa, why are you telling me this? Why not go to the priest?"
  • No ability to share small amounts of detail with one priest, different small amounts with another priest. Or to test the waters with one and see how they're likely to react to the more damaging stuff.
  • What about the victims here? (Acknowledging that that word might not be entirely appropriate.) I feel like some of what protects people is the thought "I don't like what this person did to me but it's not worth ruining their life over". But you can tell a priest about that kind of thing, and if everyone tells the same priest... I dunno, but I kind of feel like the priest does something?

Thinking more on this topic, there's a bullet to bite. The only reason secrets and privacy are asked for or honored is that people are adversarial on some topics at some times. If everyone were fully aligned, then the best policy would be full disclosure and public truth-sharing on all matters.

Once you've accepted that, keeping secrets seems to be more a matter of choosing sides than a question of absolute imperative.  Plus some amount of judgement of value of information and likelihood of misuse among the different participants.  Everyone is everyone else's frenemy in these cases (intentionally or not).  If you don't particularly side with anyone, you probably want to act to reduce overall damage from the (mis)use of the information, whether that's wrong decisions from lacking the data, or unpleasant/irrational/harmful uses of the data.  

Also, this implies that trying to formalize and publish your policies is unlikely to be effective in avoiding opprobrium.  Whether innocent or manipulative in intent, anyone harmed by publication of something told you in confidence is going to feel betrayed.  On the other side, anyone hurt by your failure to tell them a relevant truth will feel you've conspired against them.  These feelings on both sides are valid - in the conflict frame, you picked a side, and are responsible for the consequences.

I do like the view expressed here. There is an Occam's Razor aspect that I think helps in thinking about the situation. That said, I do think there might still be some value to formalizing and publishing policies here.

While you may well find yourself on the opposite side from someone who has confided something to you, that will often be just one aspect of the larger personal relationship. We often implicitly or explicitly assume that a friend is a friend universally, and not just some "fair weather" type friend. Friends accept someone with all their flaws and strengths.

That is really a bit of a naive view though.

Perhaps being more open with people about your own policy will make them consider the specifics of what's being shared more critically, rather than just assuming the friend is on the same page in this case as they have been in 100 other cases.

It's clear that once you have the information, you either share it or keep it confidential, and so find yourself on one side or the other. In other words, the person sharing imposes that problem on you – once told, you must be on one side or the other. In some cases that might be a hard decision to make. By making a policy position known, perhaps you will limit the number of times you are placed in a situation you would really like to have avoided (ignorance can be bliss ;-)).

So perhaps publishing one's policies is something of an optimal approach, both helping reduce the stress of choosing a side and reducing the odds of finding oneself in the position of having to make that choice at all.

There are also cases where the people are aligned but still keep secrets! The simple case is a surprise party, where everyone wants the target person not to know, including (though they aren't likely to think of this out of the blue) the target themself. So even perfect alignment isn't quite enough to get rid of secrets.

Another such case is if sharing something would embarrass somebody. They might be embarrassed in spite of others not acting adversarial towards them. 

I think that example is either incorrect or part of a larger example class which may be weaker.

It might be wrong if the embarrassment reflects fear of social harm. This would still be adversarial/harm-causing, so it is part of Dagon's framework.

If that's not correct, I think the agent is simply acting irrationally and this is a larger class. An irrational agent can be mentally harmed by anything at all, so this class is much larger, but also a bit weaker to talk about (it probably best fits on a sliding scale as well).

Hmm, suppose an adult had urinary problems and wet their bed regularly. Which category would you say that fits into? Or somebody whose parents had named them something they didn't like, who changed their name and didn't want others to know their original given name, due to aesthetic preferences and the social implications of character traits related to that name?

There would be some social harm in sharing either of these, but would it necessarily be adversarial? Even if others were aligned with the person with the secret, they couldn't help but look at them a bit differently knowing the secret.

You asking that question made me realize that I had mentally redefined "adversarial" underneath us!

I feel "adversarial" is not really a good pointer to the concept I was using, which is what causes this confusion. I was reading it like it meant "referring to potential harm by person A onto person B", without any connotation of adversarial. I think that whether or not you accept this incredibly nonstandard definition is the deciding factor on this disagreement.

That said, you were right! Thanks for calling me on that weird move, I genuinely would not have seen what I'd done without that last clarification.

Yes, I agree that this nonstandard definition is a crux for this disagreement. Good analysis.

I like to think I'm trustworthy, and people (in person, at least) seem willing to confide in me and trust me to keep secrets.  However, I'm not a contractualist (or other form of deontologist)  at heart, and there probably exist circumstances where I deem it better for the world to break a promise (or oath, or other very serious contract) than to keep it.   

No contract or promise can possibly be as complex as the universe.  I just don't know all the circumstances that will require me to test or use my knowledge.  My promise of confidentiality always has exceptions, even though I very rarely state them.  So do yours - unless you're specifically trained, you won't resist torture, for a boring example.

I don't think explicit manipulation or deception is my primary concern - it happens, but I usually think it hurts them more than me.  I worry a lot more about non-adversarial incorrect beliefs or models - secrets without any boundaries and without specifics of what the consequences of sharing would be (taking the infohazard elements seriously, in order to make good risk decisions) tend to be difficult to approach rationally.

Another dimension to get more good and less bad: you can ask for, and Alice can tell you, her reasons for wanting secrecy. Combined with explicitly saying that you're not taking an absolutist posture, this can sometimes give you more leeway to do reasonable stuff.

Well, the reason usually is "I fear it will make me look bad in the eyes of others".  What next?

In this case, sharing it with people who don't know her and will likely never encounter her will do minimal harm, so you might suggest that as an exception to the secret keeping.

General point: I think keeping secrets is a lot more like lying than people generally consider it to be, and I thought the general consensus was that lying is bad. I would appreciate someone laying out in detail how they think the two are different.

So there are versions of keeping secrets that are literally lying (because the easiest way to keep the secret is to lie), and then there are versions of secrecy that involve fairly active optimization to prevent someone from figuring something out (which I'd agree is kinda like lying).

But, I generally don't think people are obligated to fully share all their information about everything with each other all the time. Most secrecy consists of just not bringing up stuff. There's lots of stuff I don't bring up with any given person. 

I'd agree that some ways of not bringing stuff up can systematically distort people's perceptions. But that doesn't tell me what to do differently. I think "don't lie" is a concrete Schelling point to coordinate around because it's simple to execute. "Proactively provide all possible relevant information" isn't an action I even really know how to do. Meanwhile, I think there are a bunch of good reasons to keep secrets at least sometimes.

I don't know if that answered your question. I'd be interested in you spelling out more why secrecy seems like lying, and why that seems bad. ("secrecy is kinda like lying" and "lying is bad" isn't sufficient to equal "secrecy is bad in that particular way.")

One way promises of secrecy can corrode dialogue is that they mean that if a certain topic comes up, you will either have to actively lie, actively manage things to not lie, or have obvious omissions that betray part of the secret. This puts a tax on discussion that can lead to it happening less often. This can be deliberately used by bad actors to inhibit discussion, but the dynamic is present even when the secret is noble.

E.g. friend A tells me they're being abused, but to keep it quiet. Friend B notices some concerning but not obviously abusive behavior from friend A's partner, and brings it to me as a mutual friend. I can either refuse to participate (which hints at the secret, or maybe friend B just thinks I'm a terrible person), or share what I know (breaks the promise), or try to have the conversation as if I don't know the thing (which involves saying literal lies, although I think not polluting the epistemic environment). The last one is a skill you can develop and I think it's useful, but it is dual use.

That's assuming A really is being abused.  An abuser could use this same technique to cast doubt on any concerns someone raises that they're abusive, while not leaving me free to discuss the sources of my doubt.

So the main issue, it seems, is that respecting the confidentiality of some shared information may actually require a lot of effort, if it comes into conflict with the drive to behave morally. And manipulation in such cases is then imposing this burden on a person without their informed consent, right?

That's one version of the problem. 

One related problem is that consent is tricky – Dave explicitly asked for consent, but if I had been older/wiser I'd have been more hesitant to say "yes", or given them the "I reserve the right to carefully share the information if I feel like Dave is manipulating me" spiel. I just hadn't realized some of the consequences of consenting. (You could say that "getting informed consent" is difficult, but that still feels slightly off.)

I don't get a clear idea of what you mean by "manipulation" from your post, and I would be uncomfortable using this word as self-evident. "Making someone do something without their informed consent" seems like a reasonable attempt at a definition to me.

A concrete example: I had a friend of a friend in an abusive relationship. They eventually got out of it, but later resumed contact with their abuser, who explicitly asked them for secrecy about the contact out of concern for their privacy. This prevented the victim's friends from intervening, to the victim's detriment.

[Obviously this is a pretty personal story. The victim has blessed the general case of sharing information like this, and I don't care about the abuser]

It's definitely a huge red flag if someone is pressuring you out of sharing something with your closest friends.

Your friend's case seems to be very clear cut, and the root issue is not some vague manipulation or secrecy, but the actual abuse that, I assume, continued through their interaction.

Describing "the thing I mean by manipulation" correctly is unfortunately really tricky, specifically because manipulators use a bunch of tricks to make it hard to pin down. (This can be done intentionally or unintentionally)

I think Aella's recent post on Frame Control attempts to give one explanation, but I have some disagreements with the exact ontology there. (I.e. I think "frame control" makes the most sense as a somewhat general term for controlling frames in any fashion, whether harmful or not, whether a big deal or not. Frame control is one of the primary tools that go into manipulation, but not the only one.)

I think there is a third category worth subtly distinguishing from frame control and manipulation, which is "a particular kind of psychological damage, which is often related to frame control / manipulation".

I do acknowledge that there's a bunch more writing/clarification to do around manipulation-and-related-topics before I feel like I really understand it, or before I'd advocate other people use the concept if they didn't already have a strong sense that it was useful. But I know enough people who've experienced some flavor of manipulation that I think we can roughly point at the cluster while talking about related concepts.

Describing "the thing I mean by manipulation" correctly is unfortunately really tricky

Significant understatement.  Everyone engages in some amount of manipulative behavior, and exactly where to draw the line is personal and situation-dependent.  And manipulators (intentional or not) tend to be good at actively finding the line and pressuring you to categorize their behaviors in the way they want.

While Aella's post is very vivid in describing the horror of abuse, I don't necessarily see it in your post. You don't seem to be in a vulnerable/dependent position with respect to Carla and Dave; they don't humiliate you, don't make you doubt your own experience, don't seem to dismiss your feelings, and so on.

That's why if you said to me "I reserve the right to do X, if I find that you are manipulating me", I wouldn't be sure what you mean. (Even on the objective, God's-eye level – let's set aside, for a second, the question of how we make sure that it has indeed happened.)

I wonder if the qualifier (if you are X) is even needed. Whether the dilemma is created by someone manipulating things or just by conflicting values (e.g., confidentiality/one's word versus a discovered wrong correctable by disclosure), who wants to be on the horns?

Why not simply take the stance that I will always reserve judgment on what confidences I will protect and when you telling me something means you are deferring to my judgement, not binding me to your position?

I feel like legal ethics has actually come up with a reasonable policy here. Check out ABA Model Rule 1.6, governing Attorney-Client confidentiality: https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/

 

Obviously one would need to analogize to the situation of a friendship, but, for example, see Rule 1.6(b)(2): 

A [confidant] may reveal information relating to the [confidential conversation] to the extent the [confidant] reasonably believes necessary:

(2) to prevent the [confiding party] from committing a crime or fraud that is reasonably certain to result in substantial injury to the financial interests or property of another and in furtherance of which the [confiding party] has used or is using the [confidant]'s services

 

The line they draw there is that if the client/confider is, by confiding the information, involving the attorney/confidant in the misconduct or using the attorney/confidant to help perpetrate the misconduct, then the attorney/confidant is not obligated to keep the information confidential.

He noted considerations like [paraphrased] "It matters that the norm be simple enough that people can reliably understand and use it." Or, quoted directly: "This norm is too subtle for Twitter. It might be too subtle for us, too."

Not that it remotely matters, and it's probably just an editing mistake, but I was bothered by the juxtaposition of "quoted directly" with a paraphrased quote (the actual one was "This concept is certainly too subtle for Twitter. Maybe it's too subtle for us too.").

different people might have very different ideas about what constitutes a good person's normal honesty, without realizing that they have very different ideas.

On Can you keep this confidential? How do you know? I suggested that people have different "privacy settings". I think that is the easiest part of meta-honesty. Knowing typical privacy expectations would also benefit the community. It requires less work to explain your expectations and there could be a post to refer to. 

So the question is: What are your privacy settings?

I found this post to be very interesting, but I guess I would have liked a more detailed scenario explaining how privacy norms can be manipulative. Then again, maybe this counts as an infohazard?

It's not an infohazard, just, well, private. A few issues were:

a) I'd have to break my explicit and implicit commitments to Dave and Carla more significantly in order to go into more details. I chose to reveal a small amount of information here to illustrate the point, but I think revealing more information would be a more significant betrayal.

b) it'd be easy for this post to fall into a "vagueblogging about a person" genre 

c) in other examples I have in mind, the person doing the manipulation seemed less clearly in bad faith (i.e. the manipulation seemed a bit accidental, or I'm less sure about it), so going into details feels less justified / more of a betrayal

I think it's possible to construct a good hypothetical fictional example that's rooted in truth but it's surprisingly hard.

It's not okay to ostracize people for behavior which actually-innocently violates weak social heuristics ("red flags"), and it's not okay to knowingly share information which makes people vulnerable to unjust ostracization. This post strongly violates my better-than-average heuristics for actually-abusive (not social-reality-abusive) behavior. The world is not a fair place; your post offers nothing in the way of tolerance for the actually-innocent.

In what way does this post do those bad things you mentioned? There is no mention of breaking innocent secrets, or secrets that would cause unjust ostracization, only patterns of actually harmful behavior.

If this post was made in confidence to you, would you tell others of it anyway?

Ummm... what?