Related to: Could auto-generated troll scores reduce Twitter and Facebook harassments?, Do we underuse the genetic heuristic? and Book review of The Reputation Society (part I, part II).


Today, algorithms can accurately identify personality traits and levels of competence from computer-observable data. For instance, FiveLabs and YouAreWhatYouLike can reliably identify your personality traits from what you've written and liked on Facebook. Similarly, algorithms can now fairly accurately identify how empathetic counselors and therapists are, and can identify online trolls. Automatic grading of essays is getting increasingly sophisticated. Recruiters increasingly rely on algorithms, which are, for instance, better than human recruiters at predicting job retention among low-skilled workers.
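To make the kind of inference involved concrete, here is a minimal sketch of text-based trait prediction from word-use features. The training texts, the extraversion labels and the scikit-learn pipeline are illustrative assumptions, not the actual method used by any of the services mentioned above.

```python
# A minimal sketch of trait prediction from text, assuming a small corpus of
# posts labelled for a single Big Five trait (high vs. low extraversion).
# The examples below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Had an amazing night out with friends, can't wait for the next party!",
    "Spent the weekend reading alone, which was exactly what I needed.",
    "Let's get everyone together this Friday, the more the merrier!",
    "I'd rather stay home than deal with a crowded event.",
]
train_labels = ["high", "low", "high", "low"]

# Superficial word-use features (tf-idf over unigrams and bigrams)
# fed into a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

print(model.predict(["Big group dinner tonight, everyone is invited!"]))
```

Real systems are trained on far larger labelled corpora, but the basic pattern, mapping readily observable features of word use to trait labels, is the same.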

These sorts of algorithms will no doubt become more accurate, and cheaper to train, in the future. With improved speech recognition, it will presumably become possible to assess both IQ and personality traits by letting your device overhear longer conversations. This could be extremely useful to, e.g., intelligence services or recruiters.

Because such algorithms could identify competent and benevolent people, they could provide a means to better social decisions. An alternative route to better decisions is to identify, e.g., factual claims as true or false, or arguments as valid or invalid. Numerous companies are working on such problems, with some measure of success, but this seems quite hard, especially when it comes to more complex and theoretical facts or arguments. It seems unlikely to me that we will have algorithms able to point out subtle fallacies anytime soon. By comparison, it would be much easier for algorithms to assess people's IQ or personality traits by looking at superficial features of word use and other readily observable behaviour. As we have seen, algorithms are already able to do that to some extent, and significant improvements in the near future seem possible.

Thus, rather than improving our social decisions by letting algorithms adjudicate the object-level claims and arguments, we could instead use them to give reliable ad hominem arguments against the participants in the debate. That is, rather than letting our algorithms show that a certain politician's claims are false and his arguments invalid, we let them point out that he is less than brilliant and has sociopathic tendencies. The latter seems to me significantly easier (even though it will by no means be easy: it might take a long time before we have such algorithms).

Now for these algorithms to lead to better social decisions, it is of course not enough that they are accurate: they must also be perceived as accurate by the relevant decision-makers. In recruiting and the intelligence services, it seems likely that they increasingly will be, even though there will of course be some resistance. The resistance will probably be greater among voters, many of whom might prefer their own judgements of politicians to deferring to an algorithm. However, if the algorithms were sufficiently accurate, it seems unlikely that they wouldn't have profound effects on election results. Whoever the algorithms favoured would scream the results from the rooftops, and it seems likely that this would affect undecided voters.

Besides better political decisions, these algorithms could also lead to more competent rule in other areas of society. This might affect, e.g., GDP and the rate of progress.

What would the impact on existential risk be? It seems likely to me that if these algorithms led to the rule of the competent and the benevolent, that would mean more efforts to reduce existential risks, more co-operation in the world and better governance generally, and that all of these factors would reduce existential risk. However, there might also be countervailing considerations. These technologies could have a large impact on society and lead to chains of events that are very hard to predict. My initial hunch, however, is that they would mostly play a positive role with regard to x-risk.

Could these technologies be held back for reasons of privacy? It seems that secretly using them to assess someone during everyday conversation could potentially be outlawed. It seems to me far less likely that it would be prohibited to use them to assess, e.g., a politician's intelligence, trustworthiness and benevolence. However, these things, too, are hard to predict.

6 comments

Goodhart's law. People will start optimizing to look good to the algorithms and you'll lose something along the way. Still not necessarily worse than what we have now though.

I'm skeptical of the ability of algorithms to detect subtle things like that. Maybe actual sociopaths could be detected.

But I'd like to point out that fact-checking algorithms aren't that hard, and might already exist. Watson was used to fact-check political debates a few years ago.

It's also pretty easy to keyword-match any argument and pull up a highly upvoted Reddit comment in reply. I made an IRC chatbot like this that was very fun. I believe in the near future you will see a lot of stuff like that.
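For illustration, a rough sketch of such a keyword-matching reply bot might look as follows; the keyword sets and canned replies are hypothetical placeholders, and a real bot would draw its replies from a store of highly upvoted comments.

```python
import re
from typing import Optional

# Hypothetical keyword sets mapped to canned replies; a real bot would
# instead look up stored, highly upvoted comments on each topic.
REPLIES = {
    frozenset({"minimum", "wage"}): "Here's a well-sourced comment thread on minimum-wage research: ...",
    frozenset({"climate"}): "This highly upvoted summary of the climate evidence may help: ...",
    frozenset({"vaccines"}): "A frequently cited comment on vaccine safety: ...",
}

def pick_reply(message: str) -> Optional[str]:
    """Return the canned reply whose keyword set overlaps most with the message."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    best_keys, best_overlap = None, 0
    for keys in REPLIES:
        overlap = len(keys & words)
        if overlap > best_overlap:
            best_keys, best_overlap = keys, overlap
    return REPLIES[best_keys] if best_keys is not None else None

print(pick_reply("What do you think about raising the minimum wage?"))
# -> "Here's a well-sourced comment thread on minimum-wage research: ..."
```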

Thus, rather than improving our social decisions by letting algorithms adjudicate the object-level claims and arguments, we could instead use them to give reliable ad hominem arguments against the participants in the debate. That is, rather than letting our algorithms show that a certain politician's claims are false and his arguments invalid, we let them point out that he is less than brilliant and has sociopathic tendencies.

On an objective (sic) level this seems counterintuitive and counterproductive: why punish someone for being non-conformant when their ideas might still be objectively good? I agree that in practice politics already runs more on perceived personal qualities like agreeableness. Probably the Fundamental Attribution Bias at work. And maybe making this explicit may not be the worst thing.

Besides better political decisions, these algorithms could also lead to more competent rule in other areas of society. This might affect, e.g., GDP and the rate of progress.

But I am doubtful whether strengthening FAB will have this effect.

Because such algorithms could identify competent and benevolent people, they could provide a means to better social decisions.

They could. But let me rewrite that sentence: because such algorithms could identify particular kinds of people, they would give much power to those who can apply these algorithms to other people and read the results. This would lead to concentration of power and more manipulation -- and that doesn't look like the path to "better social decisions".

The trouble with judging ideas by their proponents is that there could be confounders. For instance, if intelligent people are more often in white-collar jobs than blue-collar, intelligent people might tend to favor laws benefiting white-collar workers even when they're not objectively correct. Even selecting for benevolence might not be enough--maybe benevolent people tend to go into the government, and people who are benevolent by human standards are still highly ingroup-biased. Then you'd see more benevolent people tending to support more funds and power going to the government, whether or not that's a good idea.

This sort of technology will progress unimpeded.