Nice to hear!
I think it makes sense that the orgs haven't commented, as it would possibly run afoul of antitrust laws.
See for example when some fashion clothing companies talked about trying to slow down fashion cycles to produce less waste / carbon emissions, which led to antitrust regulators raiding their headquarters.
I agree about the cooperation thing. One addendum I'd add to my post is that shared reality seems like a common precursor to doing/thinking together.
If I want to achieve something or figure something out, I can often do better if I have a few more people working/thinking with me, and often the first step is to 'get everyone on the same page'. I think lots of times this first step is just trying to shove everyone into shared reality. Partially because that's a common pattern of behavior, and partially because if it did work, it would be super effective.
But because of the bad news that people actually have different experiences, cracks often form in the foundation of this coordinated effort. But I think if the team has common knowledge about the nature of shared reality and the non-terrible/coercive/violent way of achieving it (sharing understanding), this can lead to better cooperation (happier team members, less reality-masking, better map-sharing).
I'm also not sure what you mean about the trust problem, maybe you mean the polls which claim that trust in government and other stuff has been on the decline?
Yeah, let's do in-person sometime. I also tried drafting long responses and they were terrible.
Sure! I love talking about this concept-cluster.
I have a hunch that in practice the use of the term 'shared reality' doesn't actually ruin one's ability to refer to territory-reality. In the instances when I've used the term in conversation I haven't noticed this (and I like to refer to the territory a lot). But maybe with more widespread usage and misinterpretation it could start to be a problem?
I think to get a better sense of your concern it might be useful to dive into specific conversations/dynamics where this might go wrong.
...
I can imagine a world where I want to be able to point out that someone is making the psychological mistake of confusing their desire to connect with their map-making. And I want the term I use to do that work, so I can just say "you want to share your subjective experience with me, but I'm disagreeing with you about reality, not subjective experience."
Does that kind of resonate with your concern?
Hmm, I want a term that refers to all those many dimensions together, since for any given 'shared reality' experience it might be like 30% concepts, 30% visual & auditory, 30% emotion/values, etc.
I'm down to factor them out and refer to shared emotions/facts/etc, but I still want something that gestures at the larger thing. Shared experience I think could do the trick, but feels a bit too subjective because it often involves interpretations of the world that feel like 'true facts' to the observer.
Wherein I write more, because I'm excited about all this:
The first time I heard the term 'shared reality' was in this podcast with Bruce Ecker, the guy who co-wrote Unlocking the Emotional Brain. He was giving an example of how a desire for 'shared reality' can make it hard to come to terms with e.g. emotional trauma.
by believing the parent's negative messages to you (either verbal or behavioral), you're staying in shared reality: and that's a big aspect of attachment. ... especially shared reality about yourself: 'they think I'm a piece of crap, and I do too. So I feel seen and known by them even if the content is negative'.
In this case, the parent thinks the kid is a 'piece of crap', which I expect doesn't feel like an emotion to the parent, it feels like a fact about the world. If they were more intellectually mature they might notice that this was an evaluation - but it's actually super hard to disentangle evaluations and facts.
I guess I think it's maybe impossible to disentangle them in many cases? Like... I think 'facts' typically aren't a discrete thing that we can successfully point at; they're typically tied up with intentions/values/feelings/frames/functions. I think Dreyfus made this critique of early attempts at AI, and I think he ended up being right (or at least that's my charitable interpretation of his point) - that it's only within an optimization process / working for something that knowledge (knowing what to do given XYZ) gets created.
Maybe this is an is/ought thing. I certainly think there's an external world/territory and it's important to distinguish between that and our interpretations of it. And we can check our interpretations against the world to see how 'factual' they are. And there are models of that world like physics that aren't tied up in some specific intention. But I think the 'ought' frame slips into things as soon as we take any action, because we're inherently prioritizing our attention/efforts/etc. So even a sharing of 'facts' involves plenty of ought/values in the frame (like the value of truth-seeking).
Sure! The main reason I use the term is because it already exists in the literature. That said, I seem to be coming at the concept from a slightly different angle than the 'shared reality' academics. I'm certainly not attached to the term, I'd love to hear more attempts to point at this thing.
I think the 'reality' is referring to the subjective reality, not the world beyond ourselves. When I experience the world, it's a big mashup of concepts, maps, visuals, words, emotions, wants, etc.
Any given one of those dimensions can be more or less 'shared', so some people could get their yummies from sharing concepts unrelated to their emotions. In your example, I think if my parents had something closer to my beliefs, I'd have more of the nice shared reality feeling (but would probably quickly get used to it and want more).
Some side notes, because apparently I can't help myself:
OK, I've added a disclaimer to the main text. I agree it's important. It seems worth having this kind of disclaimer all over the place, including most relationship books. Heck, it seems like Marshall Rosenberg in Nonviolent Communication is only successfully communicating like 40% of the critical tech he's using.
Thank you for flagging this! Should be fixed now.