Last week I committed to posting five examples where I think a group of scientists has gone astray. Goodhart's law says you get what you measure. The streetlight effect makes you look where it's easy to look. When combined, all you get from an entire field of research is a bunch of things which are easy to measure.

I have tried to make these as current as possible, with a special focus on finding avenues of research that the general scientific establishment has yet to abandon. Not all of them fit that description, though, due to limits on my time and effort (and an unexpected release of D&D.Sci). They are also mostly weaker examples than my original one, which is to be expected.

I would also be remiss not to mention the comment on my last post by JenniferRM, which is an excellent example of good reasoning about this sort of issue.

Ageing and Ageing Markers

Lots of research into slowing ageing is actually taking place! This is good news from the perspective of not dying. Unfortunately, only a small minority of it is well-targeted. Lots of research at the moment is focused on interventions which can extend the lifespan of mice, flies, worms, and yeast by a few tens of percentage points. This has given us insights like the role of mTOR, fasting, and the epigenetic clock. However, any intervention which extends life by only a few tens of percent at most is not targeting the "ultimate" causes of ageing. Lots of research seems to chase small increases in longevity rather than building an understanding of ageing itself.

Electrocatalysis of Graphene Compounds

This is a closed-book one. Certain chemical reactions involve the transfer of electrons. Graphene can catalyse these reactions, and many graphene derivatives catalyse them even faster. For electronic reasons, both putting some extra electrons into graphene and taking some out make it a better catalyst. Many, many papers were published based on this theory, until finally someone made graphene doped with bird guano. This was an excellent catalyst, and it put a stop to the endless search for ever-better graphene dopants.

Racial Bias Testing

Racism, like most of the large issues facing society, is very complex. One of the big ideas of (relatively) recent times is that individuals who are not explicitly racist can still be biased. The Implicit Association Test is the one where you classify words simultaneously into good/bad and (for example) French/English. If you're faster at grouping croissants with murder and fish 'n' chips with charity than the other way round, then you might be a Francophobe. It is brilliant, elegant, simple, and also very poorly validated.
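To make the mechanism concrete, here is a minimal sketch of how an IAT result is typically turned into a number: a simplified D-score, the mean reaction-time difference between the two pairing conditions divided by the pooled standard deviation. All reaction times below are made up for illustration, and this is a toy of the scoring idea, not the full published algorithm.

```python
# Toy sketch of IAT-style scoring: positive D means the "incongruent"
# pairing was answered more slowly, i.e. a measured association.
from statistics import mean, stdev

def d_score(congruent_rts, incongruent_rts):
    """Simplified D-score: mean RT difference over the pooled SD
    of all trials. Inputs are lists of reaction times in ms."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Hypothetical reaction times in milliseconds (invented for this sketch)
fast_pairing = [650, 700, 620, 680, 640]  # e.g. fish 'n' chips + charity
slow_pairing = [820, 900, 780, 860, 840]  # e.g. fish 'n' chips + murder

print(round(d_score(fast_pairing, slow_pairing), 2))  # → 1.77
```

The point of dividing by the pooled standard deviation is to make scores comparable across people who are simply faster or slower overall; the validation problems mentioned above are about whether this number predicts anything, not about the arithmetic.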

Bill Clinton's Nanotech

Bill Clinton spent billions on nanotech in 2000. Sadly (and understandably) his administration were not experts in nanotech. This made it almost impossible for them to judge which directions the research at the time needed to go in. Molecular-scale manufacturing and programmable molecules are still a long way away. Most things which are accepted as "nanotech" are co-opted biological molecules doing a slightly different thing from what they do in nature. Sometimes this really is revolutionary (nanopore gene sequencing comes to mind) but a lot of the time it isn't. Optimizing for things which a bureaucratic institute will think of as nanotech destroyed the possibility of actual nanotech.

Decision Theory

Here's the most controversial one (on LW at least), and the one I'm least confident is an actual example of this. I worry that a lot of AI researchers spend a lot of time thinking about decision theory, and that this whole process is being driven by finding decision theories which solve more and more esoteric problems. Understanding the nature of decision making is important, but I don't feel like our lack of understanding sits in the gaps between UDT and TDT.


1 comment

Just to comment on the last example: I totally agree with your assessment of this.

In particular anything that involves Löb's theorem or considerations about how an agent should reason when considering an identical copy of themselves is almost certainly impractical mathematical cloud-castle building. I don't have anything against that type of activity as a pursuit in itself and engage in it quite a lot, but don't have any illusions that it will solve any real problems in my lifetime.

Any actual AI will have extremely bounded rationality by those standards. Quite a few of the decision processes discussed in those articles are literally uncomputable, let alone able to be implemented in any hardware that can exist in the known universe. However, considering the much more relevant but thornier problems of resource-constrained decision making is not nearly so elegant and fun.