It's probably not presuming too much to guess that many around here have personal experience with the autism spectrum, if not in relation to themselves, then with close family. I say this because the kinds of subjects discussed around here are exactly the type that would appeal to those of an autistic persuasion, e.g. technical systems, logic, and (arguably) utilitarian philosophy. Many here probably have backgrounds in STEM, and those fields tend to have a significant over-representation of people on the spectrum.

An issue that often comes up in software design (a field with high ASD representation) is programmers not being able to properly model the wants and needs of non-technical end-users. I bring this up because I see AI alignment as a scaled-up version of this problem. The kind of people who have a strong interest in AI/machine learning will likely have a greatly disproportionate impact on the future of human civilization. This might be a problem, as not only is this subset of humans highly atypical in cognitive style, but the very mental architecture which underlies their interest in technical systems restricts their ability to model the minds of typical humans!

The hardest humans for ASD types to model would be those with minds that are the diametric opposite of their own. Call this condition anti-autism. It would consist of...well I'm not exactly sure. It's hard for me to imagine the mental lives of these people. I've heard the phrase "people vs things" thrown around, implying that ASD types are drawn to inanimate objects, and humans who are on the opposite side of this condition would be drawn to people. I'm not so sure. I think that plenty of people with ASD have an obsessive interest in categorizing humans and other living things.

While there's been a great amount of study of autism, there's a curious lack of interest in what a condition with its exact opposite traits looks like, or even in whether it exists. Simon Baron-Cohen, one of the most famous autism researchers and creator of the systemizer-empathizer scale [0], argued that it doesn't come up because it's not actually a problem. Basically, a human who is system-blind but extremely skilled at reading other humans (hyper-empathy, in his terms) can get by perfectly well [1]:

Scientists have never got up close to these individuals. It is a bit like positing the existence of a new animal on theoretical grounds, and then setting out to discover if it is really found in nature.

[W]hat would such people look like? Their empathizing ability would be average or significantly better than that of other people in the general population, but their systemizing would be impaired. So these would be people who have difficulty understanding math or physics or machines or chemistry, as systems. But they could be extremely accurate at tuning in to others' feelings and thoughts.

Would such a profile carry any necessary disability? Hyperempathizing could be a great asset, and poor systemizing may not be too crippling.

Fortunately, in our society there is considerable tolerance for such individuals. For example, if you were a child who was systemblind, your teachers might simply allow you to drop mathematics and science at the earliest possible stage, and encourage you to pursue your stronger subjects.

If you were a systemblind adult and your car didn't work, you could just call the mechanic (who is likely to be at least a Type S). If your computer needs putting together, and you can't work out which lead goes into which socket, there are phone numbers that you can ring for technical support. And in evolutionary terms, there were likely equivalent people that a systemblind person could turn to for help when that person's home was destroyed in strong winds, or when their spear broke.

Baron-Cohen dismisses paranoia as a candidate for anti-autism, because such people don't accurately infer the mental states of other humans, but rather create a fictional account of them:

If someone is over-attributing intentions, or has become preoccupied by their own emotions, then by definition they are not exhibiting hyperempathy. Hyperempathy is the ability to ascertain the mental states of others to an unusually accurate and sensitive degree, and it can only occur if one is appropriately tuned in to the other person's feelings. A paranoid person, or someone who is easily inflamed into aggression by suspecting that others are hostile, has a problem. But their problem is not hyperempathy.

So again, it's not simple to guess what people with "the opposite of autism" are like, as they're generally not available for clinical study.

I think investigating this would be of interest to people working in AI alignment and whose ultimate goal is improving the condition of humanity in general. Understanding the needs and wants of the subset of humans most unlike themselves would likely help in modeling the desires of the typical person.

As an aside, Baron-Cohen's systemizers vs empathizers framework reminded me a lot of Asimov's Foundation books [2]: the First Foundation, with its technicians and natural scientists, and the Second Foundation, with its psychologists, who ultimately needed each other to survive.


[1] Baron-Cohen, Simon, The Essential Difference (2003)




Sort of related, everything studies wrote this essay in 2017 and now "wamb" is a term that my friends and I use all the time.

Thanks, I hadn't seen that before, and now I have a new concept to play with :-) 

I can also think of "MOPs" as a possible term, but that is more of a cultural class approach than a developmental skill approach

I have a proposed answer here

Actually non-autistic people are quite extreme in many ways when you look at it closely. 

Here is my spoof DSM-VI entry as an illustration:

From DSM-VI: Hyper-Social (Allistic)  Spectrum Disorder
HSSD is a syndrome in which there is an over-focus on social phenomena at the expense of other aspects of the world. Contrast with Autistic Spectrum Disorder, which is in many ways the opposite.

Diagnosis: Any 5 of the following are present:

Inability to express self clearly; use of ambiguous and vague language; discomfort with clear language
Obsessive interest in knowing personal details of acquaintances or strangers e.g. celebrities, or even fictional characters
Unfounded belief in being able to read other people's minds, in particular to know if someone is lying or not.
Difficulty in thinking in a systematic logical way, e.g. to do math or program computers
Tendency to try to bend and stretch rules for no obvious reason. Discomfort with accurately following instructions and processes.
Forms beliefs based on the opinions of others rather than on facts and evidence
Tendency to affiliate with groups and to align all opinions to the group
Frequently lies for social convenience
Preoccupied with social status and “looking the part”
Focus on status symbols, and symbols of virtue and group affiliation
Focus on appearances more than underlying reality
Intolerance of diversity of opinion
Intolerance towards people who do not have HSSD
Spends large amounts of time on shallow “social” activities with little actual content. May lead to destructive activities such as substance abuse e.g. alcohol, and over-eating.
Lack of interest in mastering difficult, especially technical, subjects in depth
Tendency to stare into people's eyes, and to believe that this gives great insight into the other person's mind. Usually unaware that this can create discomfort in the other person.
Tendency to think that staring into people's eyes demonstrates trustworthiness


The "diametric model" of Crespi et al. says psychosis / schizophrenia / schizotypy is the opposite of autism.

That looks like the same empathizing vs systematizing axis (and the same goes for bipolar being psychosis-adjacent). What about Cluster B and ADHD? Autism seems like Cluster C, whilst bipolar-adjacent empathic types seem like Cluster A. If this works, is there a tri-factor model of psychopathology?

I think identifying the blind spots of the typical AI engineer/architect is an interesting and potentially important goal, though I'm not sure I follow the reasoning behind identifying the opposite as the path to "modeling the desires of the typical person".

I think investigating this would be of interest to people working in AI alignment and whose ultimate goal is improving the condition of humanity in general. Understanding the needs and wants of the subset of humans most unlike themselves would likely help in modeling the desires of the typical person.

Isn't that better and more easily accomplished by identifying the median person i.e. in what way is the typical AI engineer different from the general population, and adjusting for that?

Alternatively, one could find what is complementary to autism rather than the opposite of autism; assuming those are not necessarily the same. People who may be attracted to and good at roles/professions like people management, team sports, therapists etc.

My assumption is that, for people with ASD, modelling human minds that are as far from their own as possible is playing the game on hard-mode. Manage that, and modelling average humans becomes relatively simple.   

Interesting. Though I think extremes represent fewer degrees of freedom; where certain traits/characteristics dominate, and heuristics can better model behaviour. The "typical" person has all the different traits pushing/pulling, and so fewer variables you can ignore. i.e. the typical person might be more representative of hard-mode.

Sympathy vs Systematics as a 2x2 grid would be good, as cooperation vs competition, or voting and coalitional theory vs economics and zero-sum games. Might want to take a look at intelligence, "verbal tilt" and "dark core of personality" as they seem related in this context.

I would not be surprised if the negative stereotype of a "blonde" were people socially recognising the undesirability of anti-autism. There is also a trope about approaching homework by bullying or bribing people into doing it for you, and approaching tests by trying to copy the right answers, which makes one resistant to extracting knowledge from education. And while it is great to defer to experts in areas where you are not an expert, it would be grand if there were something you are yourself an expert on. If an employee only delegates and is never capable of doing work themselves, that could be a setup with all mortar and no brick.

Treating autism as a degree of empathy is a little simplistic and might be an error-mode in these kinds of analyses.

What I have found is that autists build social competence by practising and thinking out social situations. Their models are explicit. Non-autists usually don't have great theoretical insight into their social competence; it's as if they can't modify or customise it because it is a black box that doesn't hiccup. So I have come to think that implicit vs explicit modelling is a much more apt framing of the difference.

With neural networks we have a problem of interpretability: we can have a competent network without any good idea of how it does its thing. It is intriguing to me to think that some people have models with something like 10 million factors and some with only 1000. Social interaction might be a field where approaches that weakly update on a lot of different signals can naturally do well. The shortcomings of autistic people often look like being too formulaic: having a heuristic that doesn't have a lot of caveats or adaptability, rules that the person is explicitly thinking through via auditory memory. The challenges typical for non-autistics are murkiness, having to say a thing multiple times before it sinks in (say, having to say "no" three times, or reading the manual and still calling the helpline to integrate information that actually is in the manual), being confused by mixed signals, and having trouble giving priority to some level (if you say "no" calmly it might just induce confusion, the verbal level alone being too weak a signal to process).
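The claim that weakly updating on many weak signals can do well has a simple statistical analogue. Here is a toy sketch (the cue accuracy, cue counts, and independence assumption are all invented for illustration, not taken from any study): each individual cue is only slightly better than chance, but a majority vote over many of them becomes very reliable, while a vote over a handful stays mediocre.

```python
import random

random.seed(1)

def weak_cue(truth, p=0.55):
    # A single weak social cue: reads the true state correctly
    # only 55% of the time.
    return truth if random.random() < p else 1 - truth

def predict(truth, n_cues):
    # Aggregate many weak cues by simple majority vote.
    votes = sum(weak_cue(truth) for _ in range(n_cues))
    return 1 if votes > n_cues / 2 else 0

truth = 1
trials = 200

# Accuracy of a "few strong-ish rules" reader vs a
# "many weak signals" reader.
acc_few = sum(predict(truth, 5) == truth for _ in range(trials)) / trials
acc_many = sum(predict(truth, 1001) == truth for _ in range(trials)) / trials

print(f"5 cues:    {acc_few:.2f}")
print(f"1001 cues: {acc_many:.2f}")
```

With independent 55%-accurate cues, the majority of 1001 is right nearly always, while the majority of 5 is right only around 60% of the time; of course, real social signals are correlated, so this is only the optimistic limiting case.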

I would also like to point out that living with people who share your cognitive makeup means you don't have to think about psychology, but can just get interpersonal interoperability by symmetry or identicalness. If you are a neurominority, then you will use significant resources trying to figure out a working way to interface with other people. So I consider autists to be veterans of trying to interface, even if by necessity.

I feel like if one wanted to know what would happen if one took the "autism" dial and turned it all the way to far-negative values, one should look for traits that are negatively genetically correlated with autism. I don't know what those traits would be, though; I had trouble finding any good studies of that.

However, I don't know whether this is necessarily the group of people that autistic people would most struggle to understand, or where their understanding of other people would be most improved if they learned to understand that group. After all, the group might have its own oddities that aren't related to autism.

Let's take a look at cognitive vs non-cognitive traits first and foremost. Autism has cognitive and risky behavior bias, Bipolar has non-cognitive and longevity bias, but both are positively correlated to SES, Openness and Emotional Stability. Within this bias problem, we can see that Non-cognitive tilt is correlated to Conscientiousness, Extraversion, Agreeableness. 

This for some reason feels similar to Emil Kirkegaard's Verbal Tilt theory, and how autism is non-verbal tilt and intelligence, bipolar is verbal tilt and sentimentality. It is common knowledge that most dark traits are correlated to low IQ and/or SES, however its relation to tilts and highly educated counter-conjectures is not known. 

Williams Syndrome seems to me to just be the opposite of paranoia, rather than autism, where the individual creates a fictional account of another human's mental state that's positive rather than negative. 

That's to say, their ability to infer the mental states of other humans is worse than that of the typical human. 

