In [Prediction] We are in an Algorithmic Overhang, I made technical predictions without much explanation. In this post I explain my reasoning. The prediction is contingent on there not being a WWIII or an equivalent disaster disrupting semiconductor fabrication.

I wouldn't be surprised if an AI takes over the world in my lifetime. The idea makes me uncomfortable. I question my own sanity. At first I think "no way could the world change that quickly". Then I remember that technology is advancing exponentially. The world is changing faster than it ever has before, and the pace is accelerating.

Superintelligence is possible. The laws of physics demand it. If superintelligence is possible then it is inevitable. Why haven't we built one yet? There are four[1] candidate limitations:

  • Data. We lack sufficient training data.
  • Hardware. We lack the ability to push atoms around.
  • Software. The core algorithms are too complicated for human beings to code.
  • Theoretical. We're missing one or more major technical insights.

We're not limited by data

There is more data available on the Internet than in the genetic code and life experience of a single human being.

We're not (yet) limited by hardware

This is controversial but I believe throwing more hardware at existing algorithms won't bring them to human level.

I don't think we're limited by our ability to write software

I suspect that the core learning algorithm of human beings could be written in a handful of scientific papers comparable to the length and complexity of Einstein's Annus Mirabilis. I can't prove this. It's just gut instinct. If I'm wrong and the core learning algorithm(s) of human beings is too complicated to write in a handful of scientific papers then superintelligence will not be built by 2121.

Porting a mathematical algorithm to a digital computer is straightforward. Individual inputs like snake detector circuits can be learned by existing machine learning algorithms and fed into the core learning algorithm.

We are definitely limited theoretically

We don't know how mammalian brains work.

I don't think there's a big difference of fundamental architecture between human brains and e.g. mouse brains. Humans do have specialized brain regions for language like Broca's area but I expect language comprehension would be easy to solve if we had an artificial mouse brain running on a computer.

Figuring out how mammalian brains work would constitute a disruptive innovation. It would rewrite the rules of machine learning overnight. The instant this algorithm becomes public, it would start a race to superintelligent AI.

What happens next depends on the algorithm. If it can be scaled efficiently on CPUs and GPUs then a small group could build the first superintelligence. If sufficient hardware is required then it might be possible to restrict AGI to nation-states the way private ownership of nuclear weapons is regulated. I think such a future is possible but unlikely. More precisely, I predict with >50% confidence that the algorithm will run efficiently enough on CPUs or GPUs (or whatever we have on the shelf) for a venture-backed startup to build a superintelligence on off-the-shelf hardware even though specialized hardware would be far more efficient.

  1. A fifth explanation is we're good at pushing atoms around but our universal computers are too inefficient to run a superintelligence because the algorithms behind superintelligence run badly on the von Neumann architecture. This is a variant on the idea of being hardware limited. While plausible, I don't think it's very likely because universal computers are universal. ANNs may not (always) run efficiently on them but ANNs do run on them. ↩︎


29 comments

If I'm wrong and the core learning algorithm(s) of human beings is too complicated to write in a handful of scientific papers then superintelligence will not be built by 2121.


It is possible for evolution to have stumbled upon a really complicated algorithm for humans. Deep learning is fairly simple. AIXI is simple. Evolution is simple. If the human brain is incredibly complicated, even in its core learning algorithm, we could make something else. (Or possibly copy lots of data with little understanding)

The brain may also be excessively complicated to defend against parasites.

You could also likely build superintelligence by wiring up human brains with brain-computer interfaces, then using reinforcement learning to generate some pattern of synchronized activations and brain-to-brain communication that prompts the brains to collectively solve problems more effectively than a single brain can - a sort of AI-guided super-collaboration. That would bypass both the algorithmic complexity and the hardware issues.

The main constraints here are the bandwidths of brain computer interfaces (I saw a publication that derived a Moore’s law-like trend for this, but now can’t find it. If anyone knows where to find such a result, please let me know.) and the difficulty of human experiments.

"Accelerating progress in brain recording tech"? One reason to be optimistic about brain imitation learning: we may just be in the knee of the curve, well before the curves cross.

Thanks! I’m pretty sure this isn’t the one I saw, but it works even better for my purposes.

Edit: I'm working on an AI timeline / risk scenario where BCIs and neuro-imitative AI play a big role. I've sent you the draft if you're interested.

The set of designs that look like "Human brains + BCI + Reinforcement learning" is large. There is almost certainly something superintelligent in that design space, and a lot of things that aren't. Finding a superintelligence in this design space is not obviously much easier than finding a superintelligence in the space of all computer programs.

I am unsure how this bypasses algorithmic complexity and hardware issues. I would not expect human brains to be totally plug-and-play compatible. It may be that the results of wiring 100 human brains together (with little external compute) are no better than the same 100 people just talking. It may be you need difficult algorithms and/or lots of hardware as well as BCIs.

I think using AI + BCI + human brains will be easier than straight AI for the same reason that it’s easier to finetune pretrained models for a specific task than it is to create a pretrained model. The brain must have pretty general information processing structure, and I expect it’s easier to learn the interface / input encoding for such structures than it is to build human level AI.

Part of that intuition comes from how adaptable the brain is to injury, new sensory modalities, controlling robotic limbs, etc. Another part of the intuition comes from how much success we’ve seen even with relatively unsophisticated efforts to manipulate brains, such as curing depression.

It's easier to couple a cart to a horse than to build an internal combustion engine.

It's easier to build a modern car than to cybernetically enhance a horse to be that fast and strong.

Humans plus BCI are not too hard; if keyboards count as crude BCIs, it's easy. Making something substantially superhuman, though, is harder than building an ASI from scratch.

You can easily combine multiple horses into a “super-equine” transport system by arranging for fresh horses to be available periodically across the journey and pushing each horse to unsustainable speeds.

Also, I don’t think it’s very hard to reach somewhat superhuman performance with BCIs. The difference between keyboards and the BCIs I’m thinking of is that my BCIs can directly modify neurology to increase performance. E.g., modifying motivation/reward to make the brains really value learning about/accomplishing assigned tasks. Consider a company where every employee/manager is completely devoted to company success, fully trust each other and have very little internal politicking/empire building. Even without anything like brain-level, BCI enabled parallel problem solving or direct intelligence augmentation, I’m pretty sure such a company would perform far better than any pure human company of comparable size and resources.

Firstly we already have humans working together.

Secondly, do BCIs mean brainwashing for the good of the company? I think most people wouldn't want to work for such a company. Companies probably could substantially increase productivity with psychoactive substances, but that's illegal and a good way to lose all your employees.

Also, something Moloch-like has a tendency to pop up in a lot of unexpected ways. I wouldn't be surprised if you get direct brain-to-brain politicking.

Also this is less relevant for AI safety research, where there is already little empire building because most of the people working on it already really value success. 

“… do BCI's mean brainwashing for the good of the company? I think most people wouldn't want to work for such a company.”

I think this is a mistake lots of people make when considering potentially dystopian technology: that dangerous developments can only happen if they’re imposed on people by some outside force. Most people in the US carry tracking devices with them wherever they go, not because of government mandate, but simply because phones are very useful.

Adderall use is very common in tech companies, esports gaming, and other highly competitive environments. Directly manipulating reward/motivation circuits is almost certainly far more effective than Adderall. I expect the potential employees of the sort of company I discussed would already be using BCIs to enhance their own productivities, and it’s a relatively small step to enhancing collaborative efficiency with BCIs.

The subjective experience for workers using such BCIs is probably positive. Many of the straightforward ways to increase workers’ productivity seem fairly desirable. They’d be part of an organisation they completely trust and that completely trusts them. They’d find their work incredibly fulfilling and motivating. They’d have a great relationship with their co-workers, etc.

Brain to brain politicking is of course possible, depending on the implementation. The difference is that there’s an RL model directly influencing the prevalence of such behaviour. I expect most unproductive forms of politicking to be removed eventually.

Finally, such concerns are very relevant to AI safety. A group of humans coordinated via BCI with unaligned AI is not much more aligned than the standard paper-clipper AI. If such systems arise before superhuman pure AI, then I expect them to represent a large part of AI risk. I’m working on a draft timeline where this is the case.

This makes sense. I like that you brought the topic up.

I predict that brain-computer interfaces will advance too slowly to matter much in the race to superintelligence, but I'd be excited to be proven wrong. A world where brain-computer interfaces advance faster than AI would be extremely interesting.

I’m actually working on an AI progress timeline / alignment failure story where the big risk comes from BCI-enabled coordination tech (I've sent you the draft if you're interested). I.e., instead of developing superintelligence, the timeline develops models that can manipulate mood/behavior through a BCI, initially as a cure for depression, then gradually spreading through society as a general mood booster / productivity enhancer, and finally being used to enhance coordination (e.g., make everyone super dedicated to improving company profits without destructive internal politics). The end result is that coordination models are trained via reinforcement learning to maximize profits or other simple metrics and gradually remove non-optimal behaviors in pursuit of those metrics.

This timeline makes the case that AI doesn’t need to be superhuman to pose a risk. The behavior modifying models manipulate brains through BCIs with far fewer electrodes than the brain has neurons and are much less generally capable than human brains. We already have a proof of concept that a similar approach can cure depression, so I think more complex modifications like loyalty/motivation enhancement are possible in the not too distant future.

You may also find the section of my timeline addressing progress in standard AI interesting:

My rough mental model for AI capabilities is that they depend on three inputs:

  1. Compute per dollar. This increases at a somewhat sub-exponential rate. The time between 10x increases is increasing. We were initially at ~10x increase every four years, but recently slowed to ~10x increase every 10-16 years (source).
  2. Algorithmic progress in AI. Each year, the compute required to reach a given performance level drops by a constant factor (so far, a factor of 2 every ~16 months) (source). I think improvements to training efficiency drive most of the current gains in AI capabilities, but they'll eventually fall off as we exhaust low-hanging fruit.
  3. The money people are willing to invest in AI. This increases as the return on investment in AI increases. There was a time when money invested in AI rose exponentially and very fast, but it’s pretty much flattened off since GPT-3. My guess is this quantity follows a sort of stutter-stop pattern where it spikes as people realize algorithmic/hardware improvements make higher investments in AI more worthwhile, then flattens once the new investments exhaust whatever new opportunities progress in hardware/algorithms allowed.

When you combine these somewhat sub-exponentially increasing inputs with the power-law scaling laws so far discovered (see here), you probably get something roughly linear, but with occasional jumps in capability as willingness to invest jumps.
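The interaction described above can be sketched numerically. This is a toy model, not a forecast: the constants (a 10x hardware improvement every 12 years, a 2x training-efficiency gain every 16 months, flat investment, and the power-law exponent `alpha`) are illustrative assumptions chosen to match the rough trends named in the list, not measured values.

```python
import math

def effective_compute(year):
    """Effective training compute (arbitrary units) available in a given year.

    Assumes ~10x more compute per dollar every 12 years, training
    efficiency doubling every ~16 months, and flat investment.
    """
    t = year - 2020
    hardware = 10 ** (t / 12)        # sub-exponential-ish hardware trend
    algorithms = 2 ** (t * 12 / 16)  # algorithmic efficiency gains
    investment = 1.0                 # willingness to invest held constant
    return hardware * algorithms * investment

def capability(year, alpha=0.05):
    """Power-law scaling: capability grows as a small power of effective
    compute, so exponential compute growth yields roughly linear progress."""
    return effective_compute(year) ** alpha

# Log-capability increases by a roughly constant amount per decade,
# i.e. progress looks approximately linear in time:
for y in (2020, 2030, 2040):
    print(y, round(math.log10(capability(y)), 2))
```

With these assumptions the per-decade capability gain is constant, which is the "roughly linear" behavior claimed above; a one-time jump in the `investment` term would produce the occasional jumps in capability.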

I think there's a reasonable case that AI progress will continue at approximately the same trajectory as it has over the last ~50 years.

What metric would you use to capture the trajectory of AI progress over the last 50 years? And would such a metric be able to bridge the transition from GOFAI to deep learning?

My preferred algorithmic metric would be compute required to reach a certain performance level. This doesn’t really work for hand-crafted expert systems. However, I don’t think those are very informative of future AI trajectories.

We're not (yet) limited by hardware

There are two questions here: the intelligence of existing algorithms with new hardware, and the intelligence of new algorithms with existing hardware.

We could be (and probably are) in a world where existing algorithms + more hardware and existing hardware + better algorithms can both lead to superintelligence. In which case the question is how much progress is needed, and how long it will take. 

When you say "[w]e could be (and probably are) in a world where existing algorithms + more hardware…can…lead to superintelligence" are you referring to popular algorithms like GPT or obscure algorithms buried in a research paper somewhere?

Possibly GPT3 x 100. Or RL of similar scale.

Very likely evolution (with enough compute, though you might need a lot of it).

AIXI. You will need a lot of compute. 

I was kind of referring to the disjunction.

  • Theoretical. We're one or more major technical insights.

"missing" missing?

Fixed. Thanks.

If I'm wrong and the core learning algorithm(s) of human beings is too complicated to write in a handful of scientific papers then superintelligence will not be built by 2121.


Note that even if it is complicated it might be an attractor, so that a not-yet-AGI meta-learning algorithm might stumble upon it.
