Thank you, that answers my question - and it makes me more excited about the project! I'm glad to hear it's been unfolding nicely thus far, and I'm feeling pride/respect for you taking on the more formal mantle.
(I didn't see the early version)
Cool to see some more energy here - I have benefitted greatly from being around the CFAR nexus.
My kibitz: wait, who is driving? Who is leading? Who is holding the torch? Who is parenting this baby org?
You did a good job of pointing to people involved, but I'd like a better sense of how the people involved are themselves relating to the new org. My guess is that you (Anna) are intentionally holding a decent amount of the aCFAR torch by authoring this post, but also intentionally not wearing fancy titles or making a very formal org structure to allow for some of the magic of surrendered leadership (and to avoid some of the pitfalls of more structured leadership).
Still, I tend to expect more fruit borne from orgs with 1-3 'torch-bearers' or 'parents' or 'leaders'. I'd be more excited about aCFAR if there were some explicit parents, or, if there are none, if that were made more explicit. At the very least, getting more clarity would help me figure out how I want to relate to the org.
(disclaimer: I skimmed your posts but didn't read them closely, so perhaps you already touched on this. Hopefully this is helpful to surface in this context - feel free to ignore this comment if it's better discussed in another context; I can figure this out on my own time.)
Thank you for flagging this! Should be fixed now.
Nice to hear!
I think it makes sense that the orgs haven't commented, as it would possibly run afoul of antitrust laws.
See for example when some fashion clothing companies talked about trying to slow down fashion cycles to produce less waste / carbon emissions, which led to antitrust regulators raiding their headquarters.
I agree about the cooperation thing. One addendum I'd add to my post is that shared reality seems like a common precursor to doing/thinking together.
If I want to achieve something or figure something out, I can often do better if I have a few more people working/thinking with me, and often the first step is to 'get everyone on the same page'. I think lots of times this first step is just trying to shove everyone into shared reality - partially because that's a common pattern of behavior, and partially because if it did work, it would be super effective.
But because of the bad news that people actually have different experiences, cracks often form in the foundation of this coordinated effort. I think if the team has common knowledge about the nature of shared reality and the non-terrible/coercive/violent way of achieving it (sharing understanding), this can lead to better cooperation (happier team members, less reality-masking, better map-sharing).
I'm also not sure what you mean about the trust problem - maybe you mean the polls claiming that trust in government and other institutions has been on the decline?
Yeah, let's do in-person sometime; I also tried drafting long responses and they were terrible.
Sure! I love talking about this concept-cluster.
I have a hunch that in practice the use of the term 'shared reality' doesn't actually ruin one's ability to refer to territory-reality. In the instances when I've used the term in conversation I haven't noticed this (and I like to refer to the territory a lot). But maybe with more widespread usage and misinterpretation it could start to be a problem?
I think to get a better sense of your concern it might be useful to dive into specific conversations/dynamics where this might go wrong.
...
I can imagine a world where I want to be able to point out that someone is making the psychological mistake of confusing their desire to connect with their map-making. And I want the term I use to do that work, so I can just say "you want to share your subjective experience with me, but I'm disagreeing with you about reality, not subjective experience."
Does that kind of resonate with your concern?
Hmm, I want a term that refers to all those many dimensions together, since for any given 'shared reality' experience it might be like 30% concepts, 30% visual & auditory, 30% emotion/values, etc.
I'm down to factor them out and refer to shared emotions/facts/etc, but I still want something that gestures at the larger thing. 'Shared experience' could do the trick, I think, but it feels a bit too subjective, because the larger thing often involves interpretations of the world that feel like 'true facts' to the observer.
Wherein I write more, because I'm excited about all this:
The first time I heard the term 'shared reality' was in this podcast with Bruce Ecker, the guy who co-wrote Unlocking the Emotional Brain. He was giving an example of how a desire for 'shared reality' can make it hard to come to terms with e.g. emotional trauma.
by believing the parent's negative messages to you (either verbal or behavioral), you're staying in shared reality: and that's a big aspect of attachment. ... especially shared reality about yourself: 'they think I'm a piece of crap, and I do too. So I feel seen and known by them even if the content is negative'.
In this case, the parent thinks the kid is a 'piece of crap', which I expect doesn't feel like an emotion to the parent; it feels like a fact about the world. If they were more intellectually mature they might notice that this is an evaluation - but it's actually super hard to disentangle evaluations and facts.
I guess I think it's maybe impossible to disentangle them in many cases? Like... I think typically 'facts' are not a discrete thing that we can successfully point at; they are typically tied up with intentions/values/feelings/frames/functions. I think Dreyfus made this critique of early attempts at AI, and I think he ended up being right (or at least my charitable interpretation of his point did) - that it's only within an optimization process / working toward something that knowledge (knowing what to do given XYZ) gets created.
Maybe this is an is/ought thing. I certainly think there's an external world/territory and it's important to distinguish between that and our interpretations of it. And we can check our interpretations against the world to see how 'factual' they are. And there are models of that world like physics that aren't tied up in some specific intention. But I think the 'ought' frame slips into things as soon as we take any action, because we're inherently prioritizing our attention/efforts/etc. So even a sharing of 'facts' involves plenty of ought/values in the frame (like the value of truth-seeking).
Here's my understanding / summary, with the hope that you correct me where I'm confused: