Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Whether the efficient market hypothesis applies to AI research is an important variable for timelines. The idea is: if AGI (or TAI, or whatever) were close, the big corporations would be spending a lot more money trying to get to it first. Half of their budget, for example. Or at least half of their research budget! Since they aren't, either they are all incompetent at recognizing that AGI is close, or AGI isn't close. Since they probably aren't all incompetent, AGI probably isn't close.
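
(To make the shape of the inference explicit, here is a toy Bayes calculation; the numbers are made up purely for illustration. Suppose that if AGI were close, there would be only a 10% chance that every big corporation would still fail to ramp up spending, whereas if it weren't close, low spending is what we'd expect anyway. Starting from a 30% prior on "close":

$$P(\text{close} \mid \text{not spending}) = \frac{0.1 \times 0.3}{0.1 \times 0.3 + 1.0 \times 0.7} \approx 0.04$$

The whole argument turns on that 10%, which is why historical examples of industry-wide blind spots matter.)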

I'd love to see some good historical examples of entire industries exhibiting the sort of incompetence at issue here. If none can be found, that's good evidence for this EMH-based argument.

--Submissions don't have to be about AI research; any industry failing to invest in some other up-and-coming technology highly relevant to their bottom line should work.

--Submissions don't necessarily need to be about private corporations. They could be about militaries around the world, for example.

(As an aside, I'd like to hear discussion of whether the supposed incompetence is actually rational behavior--even if AI might be close, perhaps it's not rational for big corporations to throw lots of money at mere maybes. Or maybe they think that if AGI is close they wouldn't be able to profit from racing towards it, perhaps because they'd be nationalized, or perhaps because the tech would be too easy to steal, reverse engineer, or discover independently. Kudos to Asya Bergal for this idea.)


9 Answers

Daniel Kokotajlo

Dec 15, 2020


I was prompted to write this question by reading this excellent blog post about AlphaFold. I'll quote it at length because it serves as a candidate answer to my question:

What is worse than academic groups getting scooped by DeepMind? The fact that the collective powers of Novartis, Pfizer, etc, with their hundreds of thousands (~million?) of employees, let an industrial lab that is a complete outsider to the field, with virtually no prior molecular sciences experience, come in and thoroughly beat them on a problem that is, quite frankly, of far greater importance to pharmaceuticals than it is to Alphabet. It is an indictment of the laughable “basic research” groups of these companies, which pay lip service to fundamental science but focus myopically on target-driven research that they managed to so badly embarrass themselves in this episode.
If you think I’m being overly dramatic, consider this counterfactual scenario. Take a problem proximal to tech companies’ bottom line, e.g. image recognition or speech, and imagine that no tech company was investing research money into the problem. (IBM alone has been working on speech for decades.) Then imagine that a pharmaceutical company suddenly enters ImageNet and blows the competition out of the water, leaving the academics scratching their heads at what just happened and the tech companies almost unaware it even happened. Does this seem like a realistic scenario? Of course not. It would be absurd. That’s because tech companies have broad research agendas spanning the basic to the applied, while pharmas maintain anemic research groups on their seemingly ever continuing mission to downsize internal research labs while building up sales armies numbering in the tens of thousands of employees.
If you think that image recognition is closer to tech’s bottom line than protein structure is to pharma’s, consider the fact that some pharmaceuticals have internal crystallographic databases that rival or exceed the PDB in size for some protein families.

This was about AlphaFold, by the way, not AlphaFold2. (!!!)

I was once chatting with someone doing business development at Sanofi. According to him, it took them a year and a half to change a button on their website because their internal processes are filled with bureaucracy.

From that perspective, there's a justified belief that big pharma has no capacity to develop new technology of this kind in-house. They could hire a bunch of AI PhDs, but they would likely drown them in bureaucracy, so they wouldn't get the kind of results that AlphaFold got. It's much easier to let someone else do the work and then license it.

kjz
Exactly. Machine learning is not pharma's comparative advantage.

ryan_b

Dec 15, 2020


I submit this is usually a result of rational behavior. The reason is straightforward: the job of business executives is nominally to increase the share price, but what this actually entails is exploiting their capital as efficiently as possible. This matches what we would expect rational people to do on a couple of levels:

  • This is what formal training during an MBA consists of; there are formal processes for conducting the analysis; and capital efficiency is included directly in the financial analysis done by Wall Street investors. It would be very weird for successful corporate titans to say "screw the process and screw Wall Street" all the time.
  • Even in the basic case of a person trying to do their best, "what do I have, and what can I do with it?" is about as fundamental an approach as possible.

These two examples carry in them an implicit assumption, which I want to point to as a good predictor of the phenomenon: the new investment will decrease the value of investments they have already made. In other words, it will cannibalize value.

This is the logic behind Blockbuster/Netflix: if Blockbuster had bought Netflix, all the gains Netflix made at the expense of Blockbuster stores would have looked like Blockbuster shooting itself in the foot. Let us consider the counterfactual case of Blockbuster buying Netflix for a song: their stores continue to get hammered, yielding definite losses, while the titanic success of Netflix is uncertain and in the future (deeply uncertain; would they have made the further transition from mail-order to digital? Could they have managed the state-of-the-art IT infrastructure to make it work if they had? Would they have had the foresight to invest in original content?). Would the investors have spared the rod after setting their capital on fire for such uncertain gains?

You can also consider another interesting case: Kodak and the digital camera. Now, as it transpires, Kodak didn't miss the boat so much as miss the shots it took, but I posit a causal mechanism at work: Kodak's primary investments were in chemicals and paper, so its leadership was not positioned to implement decisions well, even when it made strategically good ones.

So I say it is rational because they are doing what a method that a lot of smart people have worked very hard to refine (usually successfully) tells them to do. And I say the predictor for when this failure will happen is that the new investment makes what they are already doing less valuable, so they are ill-positioned to execute, and even if they do execute, they will be punished.

NunoSempere

Dec 15, 2020


I have some data on this off the top of my head from having read the history of 50 mostly random technologies (database.csv in the post):

  • People not believing that heavier-than-air flight was a thing, and Zeppelins eventually becoming obsolete
  • Various camera film producing firms, notably Kodak, failing to realize that digital was going to be a thing
  • (Nazi Germany not realizing that the nuclear bomb was going to be a thing)
  • London not investing in better sanitation until the Great Stink; this applies to almost every major city.
  • People not investing in condoms for various reasons
  • People not coming up with the bicycle as an idea
  • Navies repeatedly not taking the idea of submarines seriously
  • Philippe Lebon failing to raise interest in his "thermolamp"

So that's 8/50 off the top of my head (9/50 including Blockbuster, mentioned by another commenter).

I also have some examples of technology timelines here and some technology anecdotes from my sample of 50 technologies here, which might serve as inspiration. 

ESRogs

Dec 15, 2020


Car companies have done too little, too late, to switch to making EVs.

See also: The Innovator's Dilemma.

ThomasJ

Dec 15, 2020


The Moneyball story would be a good example of this. Essentially all of sports dismissed the quantitative approach until the A's started winning with it in 2002. Now quantitative management has spread to other sports like basketball, soccer, etc. 

You could make a similar case for quantitative asset management. Pairs trading, one of the most basic kinds of quantitative trading, was discovered in the early 1980s (claims differ as to whether it was Paul Wilmott, Bamberger & Tartaglia at Morgan Stanley, or someone else). While the computational power to make this kind of trading easy certainly became more widely available starting in the 80s, nothing would have prevented someone from investing sooner in the research required for this style of trading. (Instead of, for instance, sending their analysts to become registered pseudoscientists.)
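
For concreteness, here is a minimal sketch of the core pairs-trading signal (my own illustration, not from the thread; the window and threshold are arbitrary): track the spread between two historically correlated stocks and bet on mean reversion when it drifts unusually far from its recent average.

```python
# Toy pairs-trading signal (illustrative only). Given price histories for two
# historically correlated stocks, z-score the current spread against its recent
# history and bet on mean reversion when it diverges.
import statistics

def pairs_signal(prices_a, prices_b, window=20, z_entry=2.0):
    """Return +1 (long A / short B), -1 (short A / long B), or 0 (no trade)."""
    spread = [a - b for a, b in zip(prices_a, prices_b)]
    recent = spread[-window:]
    mean = statistics.mean(recent)
    stdev = statistics.stdev(recent)
    z = (spread[-1] - mean) / stdev
    if z > z_entry:
        return -1   # spread unusually wide: short A, long B, expect reversion
    if z < -z_entry:
        return +1   # spread unusually narrow: long A, short B
    return 0
```

Real implementations estimate a hedge ratio and test for cointegration rather than using a raw price difference, but the point stands: the arithmetic is trivial, so the barrier in the early 1980s was organizational, not computational.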

CronoDAS

Dec 16, 2020


The Xerox Palo Alto Research Center produced many innovations in information technology - including the graphical user interface and mouse - that Xerox as a corporation completely failed to capitalize on. (There were PARC innovations that Xerox did successfully exploit, but the ones it missed the boat on are the ones that get all the attention.)

Multicore

Dec 15, 2020


Blockbuster failed to invest in internet tech for their movie rental business and was outcompeted by smaller, more savvy startups.

Ericf

Dec 16, 2020

  1. How do you know they aren't investing in developing AI? Corporate research goals are proprietary.

  2. There is just as much evidence of corporations developing quantum computers, astrological calculators, geocentric ocean navigation, and a whole host of other unproven technologies, some of which don't actually work. So the prior on "people with money aren't doing it" needs to include a healthy dose of "there's no there there."

6 comments

either they are all incompetent at recognizing that AGI is close, or AGI isn't close.

Or they don't see how being first helps them very much, or they don't know how to spend the research money on being first, or ...

There's a fairly straightforward optimization process that occurs in product development that I don't often see talked about in the abstract. It goes something like this:

It seems like bigger firms should be able to produce higher-quality goods. They can afford longer product development cycles, can hire a broader variety of specialized labor, etc. In practice, it's smaller firms that compete on quality. Why is this?

One of the reasons is that the pressure to cut corners increases enormously at scale, along more than one dimension. As a product scales, eking out smaller efficiency gains is still worth enough money that each particular gain can have an entire employee or team devoted to it. The incentive is to cut costs in all the ways that are illegible to the consumer. But the average consumer changes as a product scales up in popularity: early adopters and people with more specialized needs are more sensitive to quality. As the product scales to less sensitive buyers, the firm can cut corners that would have cost it sales earlier in the product cycle, but which now aren't a large enough effect to show up while revenues and profits go up. This process continues up the curve as the product serves an ever larger and less sensitive market. Fewer things move the needle, and now the firm is milking its cash cow, which brings in a different sort of optimization (bean counters) that continues the process.

Now, some firms, rather than allow their lunch to get eaten, do engage in market segmentation to capture more value. The most obvious case is when a brand has a sub-brand that is a luxury line, as with basically all car makers. The luxury line will take advantage of some of the economies of scale from the more commoditized product lines but do things like manufacture key components in, say, Germany instead of China. But with the same management running the whole show, it's hard for a large firm to insulate the market segmentation from exactly the same forces already described.

All of this is to answer the abstract question of why large firms don't generate the sort of culture that can do innovation, even when they seemingly throw a lot of money and time at it. The incentives flow down from the top, and the 'top' of a firm is answerable to the wrong set of metrics/incentives. This is 100% true of most of academia as well as of private R&D.

So to answer the original question: I see micro examples of failing to invest in the right things everywhere. Large firms could be hotbeds of experimentation in large-scale project coordination, but in practice individuals within an org are forced to conform to internal APIs to maintain legibility to management, which explains why something like Slack didn't emerge as an internal tool at any big company.

Ofer

(I'm not an economist but my understanding is that...) The EMH works in markets that fulfill the following condition: If Alice is way better than the market at predicting future prices, she can use her superior prediction capability to gain more and more control over the market, until the point where her control over the market makes the market prices reflect her prediction capability.
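
As a toy illustration of that condition (my own sketch, with made-up numbers): if the market price is a wealth-weighted average of traders' beliefs, a better predictor wins her bets on average, accumulates wealth, and thereby drags the price toward her belief.

```python
# Toy sketch of the condition above (numbers are made up). The price of a
# binary contract is a wealth-weighted average of traders' beliefs. Alice's
# belief matches the true probability, so she wins bets on average, her wealth
# share grows, and the price converges toward her belief.
import random

random.seed(0)
TRUE_P  = 0.7                              # true probability the event occurs
beliefs = {"alice": 0.7, "crowd": 0.4}     # each trader's predicted probability
wealth  = {"alice": 1.0, "crowd": 99.0}    # Alice starts with 1% of the money

def market_price():
    total = sum(wealth.values())
    return sum(wealth[t] * beliefs[t] for t in wealth) / total

for _ in range(300):
    q = market_price()
    event = random.random() < TRUE_P
    for t in wealth:
        stake = 0.05 * wealth[t]           # each trader risks 5% per round
        if beliefs[t] > q:                 # buy "yes" contracts at price q
            wealth[t] += stake / q * (1 - q) if event else -stake
        elif beliefs[t] < q:               # buy "no" contracts at price 1 - q
            wealth[t] += stake / (1 - q) * q if not event else -stake

print(round(market_price(), 2))  # drifts from ~0.40 toward Alice's 0.70
```

The feedback loop here runs entirely through accumulated wealth: being right hands Alice more influence over the price.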

If Alice is way better than anyone else at predicting AGI, how can she use her superior prediction capability to gain more control over big corporations? I don't see how an EMH-based argument applies here.

Yeah, maybe it's not really EMH-based but rather EMH-inspired or EMH-adjacent. The core idea is that if AI is close, lots of big corporations are really messing up big time; it's in their self-interest (at least, given their lack of concern for AI risk) to pursue it aggressively. And the other part of the core idea is that that's implausible.

Ofer

And the other part of the core idea is that that's implausible.

I don't see why that's implausible. The condition I gave is also my explanation for why the EMH holds (in markets where it does), and it doesn't explain why big corporations should be good at predicting AGI.

it's in their self-interest (at least, given their lack of concern for AI risk) to pursue it aggressively

So the questions I'm curious about here are:

  1. What mechanism is supposed to cause big corporations to be good at predicting AGI?
  2. How come that mechanism doesn't also cause big corporations to understand the existential risk concerns?

I think the idea is that, in general, they are good at doing things that are in their self-interest. Since they don't currently think AI is an existential threat, they should think it's in their self-interest to make AGI if possible; and if it is possible, they should be able to recognise that, since the relevant expertise in AI and AI forecasting is something they can acquire.

To be honest, I don't put much stock in this argument, which is why I'm asking this question.