This One Equation May Be the Root of Intelligence

How does intelligence work?

According to Dr. Joe Tsien, a leading neuroscientist at Augusta University in Georgia, the key lies in one simple, unassuming equation: N = 2^i – 1.

At its core, Tsien’s theory of connectivity describes how our billions of neurons flexibly assemble to not only gather knowledge, but to crystalize concepts and extrapolate from learned ideas to reason about things we have not yet experienced.

“Intelligence is really about dealing with uncertainty and infinite possibilities,” Tsien said in a press release.

If you’re staring at the equation in disbelief, you’re not alone. The theory is so seemingly banal that it’s easy to dismiss as another pompous attempt at solving the brain’s neural code — all theory, no evidence.

But in a new paper published in Frontiers in Systems Neuroscience, Tsien and his team put the theory to the test in a series of animal experiments and found it at work in seven different brain regions, governing basic functions such as feeding, memory and fear.

And simplicity isn’t the most shocking aspect of Tsien’s idea.

Even more controversially, the theory goes head-to-head with a fundamental teaching in neuroscience: cells that fire together, wire together.

The decades-old idea, often traced to Donald Hebb’s work in 1949, is so widely accepted it may as well be dogma. It suggests that when neurons activate together to encode an object, concept or memory, their connections strengthen. If any part of the ensemble activates in the future, it triggers recall of the entire memory. In other words, cells fire randomly, but connect non-randomly through learning.

The theory makes sense from both the computational and cellular perspectives, but is “beautifully vague,” according to Tsien.

In stark contrast, Tsien predicts the brain runs on a series of pre-programmed, conserved networks. These networks are not learned; they come pre-wired, assembled according to a simple mathematical principle.

In other words, at a fundamental level the brain’s wiring is innate — the motifs, established by genetics, underlie our ability to extract features, discover relational patterns, abstract knowledge and ultimately, reason.

“In my view, Joe Tsien proposes an interesting idea that proposes a simple organizational principle of the brain, and that is supported by intriguing and suggestive evidence,” said Dr. Thomas C. Südhof, a Stanford neuroscientist studying memory formation and a winner of the 2013 Nobel Prize in Physiology or Medicine.

“This idea is very much worth testing further,” he said.

The theory of connectivity

Tsien is no stranger to the study of intelligence.

While working at Princeton University 17 years ago, Tsien was among the first to genetically engineer “smart mice” that learned faster, remembered longer and solved complex maze problems more quickly than their ordinary brethren.

The creation of the Doogie mouse, named after the genius teen in the TV show Doogie Howser, M.D., sparked an idea: if tinkering with just a few genes can drastically alter cognition regardless of training, it may be because the studies were messing with the brain’s fundamental wiring.

Years later, while studying how mice form different types of fear memories, Tsien discovered that cells in the hippocampus — the “memory center” of the brain — varied in their activation patterns.

Some cells fired to any type of fearful event — an air-blow on the back (simulating an owl attack), an earthquake-like shake or a sudden free fall. Others responded to a subset of events, such as to a shake and drop, but not to an air blow. Yet others were even pickier, only activating to context-specific events, such as an earthquake in a blue but not red room.

When mapped out, the neurons formed clusters ranging from specific to general.

“This seed of an idea led to the theory of connectivity,” said Tsien.

At the core of the theory is N = 2^i – 1, a power-of-two-based mathematical wiring logic that describes how neural networks go from specific to general.

Each neural network is called a “clique.” A simple clique includes neurons that receive a specific input. Contrary to the popular belief that individual neurons are the brain’s basic computational unit, Tsien argues that these neuron clusters should take that role.

“This allows the system to avoid a catastrophic failure in the event of losing a single neuron,” explains Tsien.

These simple neural cliques then wire up into larger networks called functional connectivity motifs (FCMs) according to N = 2^i – 1. Here, N is the number of neural cliques connected in different ways and i is the number of distinct types of information they receive.

For example, say you have an animal that wants food and mates (i = 2). Fully representing its needs takes three neural cliques (N = 2^2 – 1 = 3): one for food, one for mates and one for the combination of both.
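
This counting generalizes: the cliques correspond to the nonempty subsets of the input types, and there are always 2^i – 1 of those. Here is a minimal Python sketch of that counting logic (illustrative only; the function name and input labels are my assumptions, not code from Tsien’s paper):

```python
from itertools import combinations

def predicted_cliques(inputs):
    """List every nonempty combination of input types.

    The theory of connectivity predicts one neural clique per
    combination, giving N = 2**i - 1 cliques for i input types.
    """
    return [combo
            for size in range(1, len(inputs) + 1)
            for combo in combinations(inputs, size)]

needs = ["food", "mates"]  # i = 2
for clique in predicted_cliques(needs):
    print(clique)  # ('food',), ('mates',), ('food', 'mates')

print(len(predicted_cliques(needs)))  # 3, i.e. N = 2^2 - 1
```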

“According to this equation, each FCM is predicted to consist of a full range of neural cliques that extract and process a variety of inputs in a combinatorial manner,” said Tsien.

For an animal that deals with more complex inputs, each neural clique handles a different aspect of the incoming information. Together, the cliques assemble into diverse larger motifs capable of processing higher-level input.

These motifs are pre-programmed, not learned, and according to Tsien are the basic computational building blocks of the brain.

In this way, the brain can turn combinations of specific features, such as “earthquake” and “landslide,” into more generalized knowledge, such as “natural disasters.”

Because neurons network together in this particular way, they form circuits that can find patterns from all sorts of information. By combining these patterns, the brain can build new ideas and concepts about the world, said Tsien. In a way, it’s kind of like flexibly recombining Lego blocks to make new structures.

Testing the theory

If the brain really operates on N = 2^i – 1, the theory should hold for multiple types of cognitive tasks. Putting the idea to the test, the researchers fitted mice with arrays of electrodes to listen in on their neural chatter.

In one experiment, they gave the animals different combinations of four types of food — standard chow, sugar pellets, rice and skim milk droplets. According to the theory, the mice should have 15 (N = 2^4 – 1) neuronal cliques to fully represent each food type and their various combinations.

And that’s what they found.

When recording from the amygdala, a brain area that processes emotions, some neurons responded generally to all kinds of food, whereas others were more specific. When clustered by their activity patterns, a total of 15 cliques emerged — just as the theory predicted.
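
The predicted count follows from the same subset-counting logic as before. A brief, purely illustrative sketch (the shortened food labels are mine, not the study’s):

```python
from itertools import combinations

# Four food types (i = 4); labels shortened for readability.
foods = ["chow", "sugar", "rice", "milk"]

# One clique per nonempty combination of foods: 2**4 - 1 = 15.
combos = [c for r in range(1, len(foods) + 1)
          for c in combinations(foods, r)]
print(len(combos))  # prints 15
```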

In another experiment aimed at triggering fear, the animals were subjected to four scary scenarios: a sudden puff of air, an earthquake-like shake, an unexpected free fall or a light electrical zap to the feet. This time, recordings from a region of the cortex important for controlling fear also revealed 15 cell cliques.

Similar results were found in other areas of the brain — altogether, seven distinct regions. The notable exception was dopamine neurons in the reward circuit, which tend to fire in a more binary manner to encode things like good or bad.

This suggests the equation is at work in many — though not all — cognitive modalities, say the researchers.

They then moved on to testing the prediction that the algorithm is pre-configured by evolution and development, rather than learned. Here, they repeated the above experiments in genetically modified mice that lacked the NMDA receptor — a master switch necessary for learning-induced network changes.

Surprisingly, the mathematical rule remained intact even after the genetic deletion.

Given that neurons in mice without NMDA receptors cannot “fire together, wire together,” the authors concluded that the theory of connectivity is fundamentally different from our current notion of plasticity, in that it’s not learned, but innate.

Now what?

Tsien believes the theory can be immediately used to reexamine data regarding how memories are physically stored in the brain and potentially lead to new insights about how disease and aging affect the brain at the cell-assembly level.

With a well-described algorithm ready for testing, the theory could also potentially inform neuromorphic computing, teaching artificial circuits to discover knowledge and generate flexible behaviors.

But for someone who studies intelligence, Tsien is rather hesitant to take his algorithm into the realm of machines.

“It is important to note that artificial general intelligence based on brain principles can come with great benefits,” he says, “and potentially even greater risks.”


Image Credit: Shutterstock

Shelly Fan (https://neurofantastic.com/)
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, "Will AI Replace Us?" (Thames & Hudson) was published in 2019.