IDK, fields don't have to have names; there's just lots of work on these topics. You could start here https://en.wikipedia.org/wiki/Evolutionary_anthropology and Google / Google Scholar around.
See also https://www.youtube.com/watch?v=tz-L2Ll85rM&list=PL1B24EADC01219B23&index=556 (I'm linking to the whole playlist, via a random old video, because the old ones are the ones I remember being good; IDK about the new ones).
My hope is that this becomes more feasible if we can provide accurate patterns for how the scissors-generating process is trying to trick Susan (/Robert). And that if Susan is trying to figure out how she and Robert were tricked, by modeling the tricking process, that can somehow help undo the trick, without her needing to empathize at any point with "what if candidate X is great."
This is clarifying...
Does it actually have much to do with Robert? Maybe it would be more helpful to talk with Tusan and Vusan, who are also A-blind, B-seeing supporters of candidate Y. They're the ones who would punish non-punishers of X-support / A-talk. (Which is what Susan would become, if she talked to an A-seer without pushing back, let alone if she could see into her A-blindspot.) You could talk to Robert about how he's embedded in threats of punishment for non-punishment of Y-support / B-talk, but that seems more confusing? IDK.
I think I agree, but
I don't care about doing this bet. We can just have a conversation, though; feel free to DM me.
(e.g. 1 billion dollars and a few very smart geniuses going into trying to make communication with orcas work well)
That would give more like a 90% chance of superbabies born in <10 years.
etc.
There's a whole research field on this, FYI.
I'm not gonna read the Reddit post because
I don't know whether orcas are supersmart. A couple remarks:
I appreciate you being relatively clear about this, but yeah, I think it's probably better to spend more time learning facts and thinking stuff through, compared to writing breathless LW posts. More like a couple weeks rather than a couple days. But that's just my stupid opinion. The thing is, there's probably gonna be like ten other posts in the reference class of this post, and they just... don't leave much of a dent in things? There's a lot that needs serious thinking-through; let's get to work on that! But IDK, maybe someone will be inspired by this post to think through orca stuff more thoroughly.
IIUC, I agree that your vision is desirable. (And, IDK, it's sort of plausible that you can basically achieve it with a good toolbox that could be developed straightforwardly-ish.)
But there might be a gnarly, fundamental-ish "levers problem" here:
(A levers problem is analogous to a buckets problem, but with actions instead of beliefs. You have an available action VW which does both V and W, but you don't have V and W available as separate actions. V seems good to do and W seems bad to do, so you're conflicted, aahh.)
I would guess that what we call empathy isn't exactly well-described as "a mental motion whereby one tracks and/or mirrors the emotions and belief-perspective of another". The primordial thing--the thing that comes first evolutionarily and developmentally, and that is simpler--is more like "a mental motion whereby one adopts whatever aspects of another's mind are available for adoption". Think of all the mysterious bonding that happens when people hang out, and copying mannerisms, and getting a shoulder-person, and gaining loyalty. This is also far from exactly right. Obviously you don't just copy everything; it matters what you pay attention to and care about, and there's probably more prior structure, e.g. an emphasis on copying aspects that are important for coordinating / syncing up values. IDK the real shape of primordial empathy.
But my point is just: Maybe, if you deeply empathize with someone, then by default, you'll also adopt value-laden mental stances from them. If you're in a conflict with someone, adopting value-laden mental stances from them feels and/or is dangerous.
To say it another way, you want to entertain propositions from another person. But your brain doesn't neatly separate propositions from values and plans. So entertaining a proposition is also sort of questioning your plans, which bleeds into changing your values. Empathy good enough to show you blindspots involves entertaining propositions that you care about and that you disagree with.
Or anyway, this was my experience of things, back when I tried stuff like this.
Intelligence also has costs, and has components that have to be invented, which explains why not all species are already human-level smart. One of the questions here is which selection pressures were so exceptionally strong in the case of humans that humans fell off the cliff.