In one of the sub-comments, I thought about the tests that identified mental-imagery, and started thinking of how you might test for several variants of "lack of sense of self" or some related attributes.
Related Tests for inner-sense or model of self
No-qualia seems challenging to test. But "no model of self" (one form of "lack-of-self-awareness") seems halfway-there, or at least in the correct spirit of the question? And I think that could be tested reliably; just get a group of people to predict their own behavior, and watch that subset of the group who reliably fail catastrophically at this.
For lack of consistency and other-awareness... There's a Nazi (ETA: Eichmann) who seems likely to be a troublingly vivid example of "no consistent worldview or other-awareness": all his words and beliefs were inconsistent platitudes, and he seemed genuinely surprised when Jewish judges didn't feel sympathy for the difficulty of his attempts to get promoted by doing a "good job" optimizing trains for death. Unfortunately, I couldn't track down his name when I first wrote this (hence the ETA). If someone knows of an article about his strange psychology, I'd love to be pointed at it again.
Lack-of-attachment-to-internal-identity seems to be another semi-related thing. I feel like there are some things where I care about "identity-alignment" a great deal, and other matters that others clearly care about where I just lack any feeling of identity euphoria/dysphoria around the matter regardless of what I do. I suspect there are some people who lack either sensation altogether. Probably some fraction of those people come across as identity chameleons; people who switch out identities according to external incentives, because they have no internal reason not to.
(Personally, meditation updated me considerably towards a reduced attachment to internal-identities, but there are still some I'm attached to and care about maintaining.)
Alexithymia is a phenomenon where you lack awareness of your own emotions, sometimes even as you are acting them out. This seems easy to test in a manner similar to red-green color-blindness: have the person try to appraise what sort of emotion they're feeling, then read their circumstances or watch their behavior for a read of which emotion it actually is, and see who usually seems to misjudge it (or believes they're not feeling any emotion at all).
Another p-zombie variant
There's a different p-zombie subtype I've been thinking about a great deal myself.
If you set up a system where there's an observer, an actor, and an environment, then there are 2 kinds of consciousness: consciousness in the actor (the part that steers behavior), and consciousness in the observer (the part that merely watches).
I suspect the latter can feel "conscious" even if the observer never influences the actor in any way.
Humans are usually a bit of both, but someone who only has "consciousness" in the latter (observer) capacity feels a bit like a... "consciousness hitchhiking on a q-zombie" to me.
(Related: The Elephant and The Rider)
Mental imagery: Drawing seems to reliably distinguish between coherent-mental-visualizers and those who aren't. Tasks like "count the stripes on the tiger" make sense to a vivid/detailed visualizer, but not to someone who is just holding on to the concept "striped big cat."
I suspect drawing-attempts would also reliably identify people like "The Man who Mistook His Wife for a Hat", who viewed people as a "disorganized bag of facial-features" and had to rely on a single distinctive trait to identify even people he knew well (ex: Albert Einstein & his eccentric hairstyle), and who described things that weren't there when trying to interpret a low-feature image like a picture of the dunes of the Sahara.
Biology-nerd LWer here (or ex-biology-nerd? I do programming as a job now, but still talk and think about bio as a fairly-high-investment hobby). BS in entomology. Disclaimer that I haven't done grad school or much research; I have just thought about doing it and talked with people who have.
I suspect one thing that might appeal to these sorts of people, which we have a chance of being able to provide, is an interesting applied-researcher-targeted semi-plain-language (or highly-visual, or flow-chart/checklist, or otherwise accessibly presented) explanation of certain aspects of statistics that are particularly likely to be relevant to these fields.
ETA: A few things I can think of as places to find these people are "research" and "conferences." There are google terms they're going to use a lot (due to research), and also a lot of them are going to be interested in publishing and conferences as a way to familiarize themselves with new research in their fields and further their careers.
Leaning towards the research funnel... here are some things I understand now that I did not understand when I graduated, many of which I got from talking/reading in this community, and which I think a "counterfactual researcher me" would have benefited from a lucid explanation of:
Things I think we've done that seem appealing from a researcher perspective include...
(...damn, is Scott really carrying the team here, or is this a perception filter and I just really like his blog?)
Small sample sizes, but I think in the biology reference class, I've seen more people bounce off of Eliezer's writing style than the programming reference class does (fairly typical "reads-as-arrogant" stuff; I didn't personally bounce off it, so I'm transmitting this secondhand). I don't think there's anything to be done about this; just sharing the impression. Personally, I've felt moments of annoyance with random LWers who really don't have an intuitive feel for the nuances of evolution, but Eliezer is actually one of the people who seems to have a really solid grasp on this particular topic.
(I've tended to like Eliezer's stuff on statistics, and I respected him pretty early on because he's one of the (minority of) people on here who have a really solid grasp of what evolution is/isn't, and what it does/doesn't do. Respect for his understanding of a field-of-study I did understand rubbed off as respecting him in fields of study he understood better than I did (ex: ML) by default, at least until my knowledge caught up enough that I could reason about it on my own.)
((FWIW; I suspect people in finance might feel similarly about "Inadequate Equilibria," and I suspect they wouldn't be as turned off by the writing style. They are likely to be desirable recruits for other reasons: finance at its best is fast-turnaround and ruthlessly empirical, it's often programming or programming-adjacent, EA is essentially "charity for quantitatively-minded people who think about black swans," plus there's something of a cultural fit there.))
Networking and career-development-wise... quite frankly, I think we have some, but not a ton, to offer biologists directly. Maybe some EA grants for academics and future academics who are good at self-advocacy and open to moving. I've met maybe a dozen rationalists I could talk heavy bio with, over half of whom are primarily in some other field at this point. Whereas we have a ton to offer programmers, and at earlier stages of their careers.
(I say this partially from personal experience, although it's slightly out-of-date: I started my stay in the Berkeley rationalist community ~4 years ago with a biology-type degree. I had a strong interest in biorisk, and virology in particular. I still switched into programming. There weren't many resources pointed towards early-career people in bio at the time (this may have changed; a group of bio-minded people including myself got a grant to host a group giving presentations on this topic, and were recently able to get a grant to host a conference), and any that existed were pointed at getting people to go to grad school. Given that I had a distaste for academia and no intention of going to grad school, I eventually realized the level of resources or support that I could access around this at the time was effectively zero, so I did the rational thing and switched to something that pays well and plugged into a massive network of community support. And yes, I'm a tad bitter about this. But that's partially because I just had miscalibrated expectations, which I'm trying to help someone else avoid.)
One of my favorite little tidbits from working on this post: realizing that idea inoculation and the Streisand effect are opposite sides of the same heuristic.
Bubbles in Thingspace
It occurred to me recently that, by analogy with ML, definitions might occasionally be more like "boundaries and scoring-algorithms in thingspace" than clusters per se (messier! no central example! no guaranteed contiguity!). Given the need to coordinate around definitions, most of them are going to have a simple and somewhat-meaningful center... but for some words, I suspect there are dislocated "bubbles" and oddly-shaped "smears" that use the same word for a completely different concept.
Homophones are one of the clearest examples: totally disconnected bubbles of meaning sharing a single word.
Another example is when a word covers all cases except those where a different word applies better; in that case, you can expect to see a "bite" taken out of its space, or even a multidimensional empty bubble, or a doughnut-like gap in the definition. If the hole is centered ("the strongest cases go by a different term" actually seems like a very common phenomenon), it even makes the idea of a "central" definition rather meaningless, unless you're willing to fuse or switch terms.
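To make the geometry concrete, here's a minimal sketch of the two shapes described above, with "definitions" modeled as scoring functions over a one-dimensional feature axis. All words, scores, and thresholds here are invented purely for illustration:

```python
# Toy model: a "definition" as a scoring function over thingspace,
# rather than a single contiguous cluster. Feature axis and numbers
# are made up for illustration only.

def bank_score(x: float) -> float:
    """Homophone case: two disconnected bubbles for "bank".

    One bubble around "river edge" senses (x ~ 1.0), another around
    "financial institution" senses (x ~ 5.0), with a dead zone between
    them and no central example shared by both.
    """
    return max(1.0 - abs(x - 1.0), 1.0 - abs(x - 5.0), 0.0)

def warm_score(x: float) -> float:
    """Doughnut case: "warm" with a bite taken out of its center.

    A broad hump centered at x = 3.0, minus the region that the
    stronger word "hot" claims, so the word scores zero at its own
    geometric center.
    """
    base = max(1.0 - abs(x - 3.0) / 3.0, 0.0)  # broad hump for "warm"
    hot = max(1.0 - abs(x - 3.0), 0.0)         # narrow peak "hot" carves out
    return max(base - hot, 0.0)

# "bank": two separate peaks, nothing in the middle.
assert bank_score(1.0) == 1.0 and bank_score(5.0) == 1.0
assert bank_score(3.0) == 0.0

# "warm": empty at its own center, positive on the ring around it.
assert warm_score(3.0) == 0.0
assert warm_score(1.5) > 0.0 and warm_score(4.5) > 0.0
```

Under this framing, the "central definition" of `warm_score` is exactly where the word never applies, which is why fusing or switching terms is the only way to recover a meaningful center.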
Relatedly: I would bet someone money that Greg Egan does something insight-meditation-adjacent.
I started reading his work after someone noted my commentary on "the unsharableness of personal qualia" bore a considerable resemblance to Closer. And since then, whenever I read his stuff, I keep seeing him giving intelligent commentary and elaboration on things I had perceived and associated with deep meditation or LSD (the effects are sometimes similar for me). He's obviously a big physics fan, but I suspect insight meditation is another one of his big "creativity" generators. (Before someone inevitably asks: No, I don't say that about everything.)
To me, Egan's viewpoint reads as very atheist, but also very Buddhist. If you shear off all the woo and distill the remainder, Buddhism is very into seeing through "illusions" (even reassuring ones), and he seems to have a particular interest in this.
I can make up a plausible story that developing an obsession with how we coordinate-and-manifest the illusion of continuity from disparate brain-parts... could be a pretty natural side-effect of sometimes watching the mental sub-processes that generate the illusion of "a single, conscious, continuous self" fall apart from one another? (Meditation can do that, and it's very unsettling the first time you see it.)
So, here's the specific thing I can think of that seems like it might be helpful...
I try to be cautious about using meditation-based wire-heading or emotional-dulling, but at minimum, there's a state one step down from enlightenment (equanimity) that perceives suffering as merely "dissonance" in vibrations. The judging/negative connotation gets dropped, and internal perception of emotional affect is pretty flat. (Note of caution: the emotions probably aren't gone; it's more like you perceive them differently. I'm not 100% sure how it works, myself. While it might sound similar, it's not quite the same as dissociation; the movement is more like you lean into your experience rather than out of it. Also, I read in a paper that its painkiller properties are apparently not based on opioids? Weird, right? So neurologically, I don't really know how it works, although I might develop theories if I researched it a bit harder.)
Enlightenment/fruition proper doesn't even form memories, although I've never been able to sustain that state for longer than a few seconds. But when it drops, it usually drops back into equanimity... so I guess between the two, it'd be a serious improvement on "eternal conscious suffering"?
Unfortunately, to get into Enlightenment territory, there's a series of intermediate steps that tend to set off existential crises of widely-varying severity. Any book or teacher that doesn't take this and the wireheading potential seriously is probably less good than one that does. That said, I still recommend it, especially for people who seem to keep having existential crises anyway. But it's a perception-alteration workbench; its sub-skills can sometimes be used to detrimental ends, if people aren't careful about what they install.
Here's one plus-side that you don't need the additional context to understand: I kinda suspect that at least most people would eventually find the right combination of insights and existential-crises to bumble into enlightenment by themselves, if they had an eternity of consecutive experiences to work with. Especially given that there seem to be multiple simple practices that get around to it eventually (although it might take a couple of lifetimes for some people).
As someone who has had stream-entry, and the change-in-perception called Enlightenment... I endorse your read of it as being potentially useful in this case?
I'm going to give more details in a sub-comment, to give people who are already rolling their eyes a chance to skip over this.