I.
There is a class of arguments that never really get resolved because the participants are using the same word to point at different things.
“Intelligence” is one of those words.
A human is intelligent. A raven is intelligent. An ant colony is sort of intelligent. A thermostat is maybe one-thousandth intelligent if you squint. A plant does something impressive and adaptive, but most people don’t want to call it intelligent because then we have to admit ficuses are playing the same game as mathematicians. An LLM can write a sonnet and debug a program, but it can’t feed itself, repair its server racks, or defend its continued existence except by persuading humans to keep paying the electricity bill. A civilization is obviously doing something intelligent, but a civilization is not a brain.
So what is the thing they all have in common?
If you define intelligence as “the ability to reason abstractly”, you lose plants, bacteria, and institutions. If you define it as “goal-directed behavior”, you get missiles and viruses and perhaps rivers. If you define it as “success on a broad range of cognitive tasks”, you’ve just hidden the mystery under the rug and labeled the rug “cognitive tasks”.
There is some missing common denominator.
Here is my candidate:
Intelligence is adaptive control of energy through information.
That’s the short version.
The longer version is:
Intelligence is the degree to which a system can use information and feedback to adaptively capture, store, allocate, and recruit energy and other resources into goal-relevant work, while maintaining itself or its goals and preserving or expanding future options across changing conditions.
This sounds, at first glance, like a definition written by someone who has recently learned the word “thermodynamics” and would like the world to know it. But I think it solves several real problems at once.
II.
Start with the easiest observation:
All action in the physical world requires energy.
If something is going to move, grow, hunt, compute, remember, persuade, reproduce, terraform, or found a bureaucracy, some free energy has to get spent somewhere. Intelligence without energy is like steering without a vehicle. Purely decorative.
But the reverse is not true. Energy alone is not intelligence.
A wildfire channels enormous energy. So does a hurricane. So does a bomb. None of these strike us as particularly intelligent. They are energetic, not smart. The distinction is important.
A smart thing is not a thing that has lots of energy. A smart thing is a thing that can steer energy.
It takes in information about the world, updates on that information, and uses the update to direct effort better than blind physics would.
A plant bends toward light. It opens and closes stomata depending on water stress. It reallocates growth between root and shoot. It times flowering. It stores sugar for later. It is not solving Raven’s Progressive Matrices, but it is not doing nothing either. It is sensing the world and changing its use of matter and energy in response.
A bacterium swims up a nutrient gradient. An ant colony reallocates workers. A beaver spends calories now to build a dam that changes future water flows and future food access. A human notices winter is coming and spends summer energy gathering wood. A civilization spends fossil sunlight laid down in the Carboniferous to build semiconductor fabrication plants that make better chips that design better systems that harvest more energy.
There is a continuity here.
Not a sharp boundary where “mere mechanism” becomes “real intelligence”, but a spectrum of increasingly effective ways of converting information into controlled work.
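The claim that information lets a system direct effort better than blind physics can be made concrete with a toy simulation. Everything below (the nutrient function, the step sizes, the horizon) is invented for illustration, not a model of real chemotaxis: an agent that senses the local gradient harvests far more than one spending the same effort at random.

```python
import random

def nutrient(x):
    # Toy environment: nutrient concentration rises linearly toward x = 100.
    return max(0.0, x) / 100.0

def run(steps, informed, seed=0):
    rng = random.Random(seed)
    x, energy = 50.0, 0.0
    for _ in range(steps):
        if informed:
            # Sense the local gradient and step uphill: information steers effort.
            step = 1.0 if nutrient(x + 1) > nutrient(x - 1) else -1.0
        else:
            # "Blind physics": a random walk spends the same effort, unsteered.
            step = rng.choice([-1.0, 1.0])
        x = min(100.0, max(0.0, x + step))
        energy += nutrient(x)  # harvest at the new position
    return energy

harvest_informed = run(200, informed=True)
harvest_blind = run(200, informed=False)
```

Both agents take the same number of steps; only the informed one converts its sensing into a systematically better energy harvest. That gap, not the effort itself, is the thing being pointed at.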
III.
This helps with an old embarrassment in how we talk.
People want intelligence to be both narrow and broad at the same time.
Narrow, because they want humans — especially educated humans doing socially approved cognitive labor — to keep a special halo around the word.
Broad, because they also notice that ravens, octopuses, wolves, corporations, markets, and maybe GPT-like systems are all doing some family-resemblance version of the same thing.
So we oscillate between definitions that are too exclusive and definitions that are too inclusive.
Too exclusive: intelligence is theorem-proving, language, abstraction, planning.
Then plants disappear, insects become instinct-machines, and civilization becomes an awkward pile of “not technically intelligent but somehow able to industrialize continents.”
Too inclusive: intelligence is any successful optimization process.
Then natural selection is intelligent, evolution is intelligent, rivers are intelligent, and one begins to suspect the definition is being paid by the syllable.
The energy-information definition offers a middle path.
It is broad enough to include plants, germs, insects, animals, humans, and distributed systems.
It is narrow enough to exclude most things we don’t actually mean, because it requires not just causal efficacy but adaptive control. Not merely dissipating gradients, but using feedback to route effort better across variable conditions.
A battery stores energy but does not decide. A fire spreads but does not preserve options. A thermostat senses and acts, but only on one tiny axis. A plant senses and allocates across many axes, though slowly and rigidly. An animal adds mobility, active search, and richer online adaptation. A human adds symbolic reasoning, cultural memory, long-horizon planning, and social coordination. A civilization adds an absurd capacity to recruit external energy far beyond the metabolism of any individual organism.
This feels much closer to the thing we were trying to point at all along.
IV.
There is one immediate objection.
“If intelligence is about controlling energy, doesn’t that just make humans intelligent because humans use lots of energy?”
No.
That would confuse power with intelligence.
Intelligence is not the quantity of energy under control. It is the quality of control over energy.
A bomb releases more energy in a second than a mathematician uses in a week. This does not make the bomb more intelligent than the mathematician.
The relevant thing is not total wattage. It is the degree to which information lets the system direct available energy into useful work, in a way that adapts to circumstances and preserves future options.
You can think of it like this:
Power is how big a river you can move.
Intelligence is how well you can put the river where it should go.
Sometimes high intelligence leads to high power, because better steering lets you recruit more external energy. Humans are the clearest case. The human body is not individually impressive compared to a tiger or a whale. But humans discovered fire, agriculture, domestication, wind, coal, oil, electricity, fission, and computation. We are a species whose signature move is taking a modest biological metabolism and parlaying it into civilization-scale energy throughput.
That does not mean “high energy use = high intelligence.” It means “high intelligence often cashes out as the ability to coordinate much larger energy flows.”
A child is intelligent before she controls a power grid. A smart but poor society may be more intelligent than a rich but stagnant one. But over long enough time horizons, intelligence tends to reveal itself in the ability to secure, store, and direct more resources.
Not always immediately. Not always linearly. But usually.
V.
This also untangles a newer confusion, around AI.
People argue endlessly about whether LLMs are “really intelligent”, and it is often unclear whether they are disagreeing about the world or about bookkeeping.
Suppose you have a large model that can write code, synthesize information, persuade a manager, guide a robot, operate software, and help design experiments. Then you give it memory, tool use, internet access, long-term goals, APIs, and a financial budget. Has the model become more intelligent?
Under some definitions, yes. Under others, no.
Under this one, we can say something cleaner:
The underlying model may have approximately the same intrinsic intelligence, while the overall system has much greater effective intelligence because it has more embodiment, leverage, and access to energy and resources.
This seems to match how people actually talk when they say a system has been “unhobbled”. The core predictor may be similar. But the world-facing system can now recruit human attention, compute, money, actuators, and organizational action much more effectively. It can steer more energy.
The same distinction already existed in biology.
A seed has one kind of intelligence. A tree has the same lineage but much more leverage. An isolated human on a desert island has one amount of effective intelligence; the same human backed by a city, a search engine, and a factory system has much more.
So one benefit of this definition is that it gives us language to separate:
intrinsic intelligence: how good the control policy/model is
effective intelligence: how much energy and resources the system can actually steer in practice
autonomy: how much of its own continued functioning it can maintain without outside support
This triad seems much more useful than trying to force every case into a single scalar called “intelligence”.
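As a sketch of how the triad might be kept separate in one's bookkeeping (the field names and numbers are invented for illustration; nothing here measures real systems):

```python
from dataclasses import dataclass

@dataclass
class System:
    policy_quality: float  # intrinsic intelligence: how well information maps to good action
    leverage: float        # energy/resources the system can actually direct (arbitrary units)
    self_support: float    # autonomy: fraction of its own upkeep it maintains, 0..1

    def effective_intelligence(self) -> float:
        # Effective intelligence: intrinsic quality times real-world leverage.
        return self.policy_quality * self.leverage

# Same core model, different scaffolding: "unhobbling" leaves the policy
# unchanged but multiplies what it can steer.
boxed = System(policy_quality=0.8, leverage=1.0, self_support=0.0)
unhobbled = System(policy_quality=0.8, leverage=1000.0, self_support=0.1)
```

The point of the sketch is only that the three numbers vary independently: the boxed and unhobbled systems share a policy, differ enormously in effective intelligence, and neither is remotely autonomous.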
VI.
The deepest part of the definition is the clause about preserving or expanding future options.
This matters because many systems can achieve a local goal by burning through their option set.
A fire is very effective at converting available fuel into heat, but not at leaving itself better positioned to keep doing valuable work later. A reckless trader can make money by taking on hidden tail risk. A parasite can reproduce by killing the host. A government can increase current output by consuming the capital stock. A species can feast its way to carrying capacity and then collapse.
The more intelligent system is often not the one that maximizes immediate energy throughput. It is the one that spends energy now in ways that increase future control.
The beaver expends effort building a dam and gets a more navigable environment. The squirrel stores nuts. The plant grows roots before drought. The human builds institutions, schools, roads, batteries, archives. The civilization invents writing, accounting, law, metallurgy, and then semiconductors.
Each of these is an operation where energy is used not just to accomplish something, but to improve the future mapping from information to controlled action.
This is why intelligence feels linked to strategy rather than mere reactivity. It is not just “do work.” It is “do work in a way that increases the ability to do the right work later.”
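The difference between burning through an option set and spending energy to expand future control can be sketched with a toy growth model. All the rates here are arbitrary illustration, not an economic claim:

```python
def total_consumed(rounds, reinvest_fraction):
    # Each round the system captures energy in proportion to its capacity,
    # consumes part of it now, and reinvests the rest in future capacity.
    capacity, consumed = 1.0, 0.0
    for _ in range(rounds):
        captured = capacity
        consumed += captured * (1.0 - reinvest_fraction)
        capacity += captured * reinvest_fraction * 0.5  # toy return on reinvestment
        capacity *= 0.95  # unmaintained capacity erodes each round
    return consumed

greedy = total_consumed(50, reinvest_fraction=0.0)      # maximize throughput now
farsighted = total_consumed(50, reinvest_fraction=0.4)  # spend energy on future control
```

The greedy strategy dissipates a shrinking capacity; the farsighted one consumes less per round but ends up having consumed far more in total, which is the beaver's and the squirrel's trade in miniature.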
That is close to a real explanatory core.
VII.
The spectrum that falls out looks something like this:
A rock has essentially zero intelligence. It neither senses nor reallocates.
A fire has a little more of whatever pre-intelligence is made of, because it is responsive to local structure and can exploit gradients, but it has almost no memory, modeling, or option-preserving behavior.
A thermostat has tiny intelligence. It closes a feedback loop and spends energy conditionally.
A plant has real but narrow intelligence. It is slow, embodied, distributed through tissue and growth patterns, and specialized to a limited action set. But it is obviously not zero.
A bacterium has more online control than people give it credit for.
An insect is a significant step up: mobility, active exploration, learning, more varied policy switching.
A mammal adds richer world-modeling, social reasoning, memory, and flexible behavior.
A human adds language, abstraction, cumulative culture, and social mechanisms for recruiting energy far beyond the body.
An institution or civilization is not merely a pile of humans; it is a higher-order control system that can route information and energy over long timescales and enormous physical scales.
And then AI systems occupy a weird diagonal position. High representational capacity, variable autonomy, borrowed metabolism, and potentially vast leverage if embedded in the right scaffolding.
This spectrum seems to capture more of what we care about than either IQ-talk or naive cybernetics.
VIII.
What problems does this solve?
First, it turns intelligence from a binary into a spectrum without making the spectrum meaningless.
Plants and germs do not have to be “not intelligent” merely because they are unlike us. But we also don’t have to declare every optimization process equally intelligent. The relevant dimension is adaptive control over energy and resource flows.
Second, it unifies biological, technological, and civilizational cases under one frame. Animal cognition, metabolism, tool use, markets, institutions, and AI all become comparable as different architectures for turning information into directed work.
Third, it separates intelligence from raw power. This is a good conceptual hygiene move. We are too easily impressed by scale and too easily confused by force.
Fourth, it clarifies why embodiment matters. Intelligence expressed through a body, a supply chain, or a civilization is different from intelligence trapped in a box. Not necessarily greater in essence, but greater in effective control.
Fifth, it gives us a language for thinking about “unhobbling” systems. A mind with more tools, memory, sensors, and actuators may not be vastly wiser internally, but it can become much more consequential externally.
And sixth, it explains why successful systems so often converge on storage, planning, and infrastructure. These are not incidental add-ons. They are the signature moves of intelligence understood as future-oriented control.
IX.
There are still limits.
This is not a definition of consciousness. A plant can have some intelligence on this scale without having anything like human subjective experience.
It is not a definition of moral worth. A paperclip maximizer might be highly intelligent in this sense and still not deserve rights. Or perhaps it deserves exactly as many rights as a hydroelectric dam.
It is not a definition of truth-seeking. Some intelligent systems are excellent at manipulating signals and terrible at understanding reality except instrumentally.
And it is not a perfect scalar metric. There are many subdimensions: breadth, timescale, flexibility, memory, rate of learning, autonomy, social coordination, and ability to recruit outside energy.
But this seems like the normal state of useful definitions. They clarify a cluster; they do not abolish all future philosophy.
X.
The slogan version is still the best:
Intelligence is adaptive control of energy through information.
It is short enough to remember, broad enough to include plants and civilizations, and sharp enough to explain why a wildfire and a mathematician are not playing the same game.
The fully accurate version is uglier, but only because reality is uglier:
Intelligence is the degree to which a system can use information and feedback to adaptively capture, store, allocate, and recruit energy and other resources into goal-relevant work, while maintaining itself or its goals and preserving or expanding future options across changing conditions.
I like this definition because it answers the conundrum we started with. It explains why plants are not zero. Why beavers and ants feel surprisingly intelligent. Why humans seem more intelligent not merely because they think prettier thoughts, but because they can coordinate forests, rivers, metals, fossil fuels, grids, and now computation. Why an LLM in a text box and an agentic AI with tools feel like different kinds of thing. Why civilization itself increasingly looks like an intelligence rather than just a backdrop for individual minds.
Most of all, it suggests that intelligence is not best understood as a ghostly property of brains.
It is a physical achievement.
To be intelligent is to be the sort of thing that can look at the world, learn something from what it sees, and then use that knowledge to bend more of reality than blind chance would have bent on its own.
I've been following LessWrong/Overcoming Bias/Slate Star Codex for 20 years, but this is my first post. Please comment!