Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Epistemic status: Trying to explain why I have certain intuitions. Not sure whether people will find this obvious vs controversial.

Part 1: Brains probably do some useful things in utterly inscrutable ways

I'm not so much interested in arguing the strong claim that the brain does some useful things in infinitely inscrutable ways—i.e., that understanding them is fundamentally impossible. I merely want to make the weaker claim that the brain probably does some useful things in ways that are for all intents and purposes inscrutable.

Where did I get this intuition? A few places:

  • Evolved FPGA circuits - see the awesome blog post On the Origin of Circuits, which focuses on the classic 1996 paper by Adrian Thompson. An evolved circuit of just 37 logic gates managed to perform a function which kinda seems impossible with those components. It turned out that the components were used in weird ways—the circuit ran differently on nominally-identical FPGAs, the transistors were not all used as on/off switches, there was some electromagnetic coupling or power-line coupling going on, etc. Can we understand how this circuit works? In the paper, they didn't try. I imagine that a good physicist, given enough time and experimental data, could get at least a vague idea of the most important aspects. But there might be subtleties that can't really be explained better than a simulation, or maybe some component has 17 unrelated functions that occur at different parts of the cycle, or maybe you need to account for a microscopic bump in some wire, or whatever. If it were 370 components instead of 37, and there were limits on what you can measure experimentally, it would be that much harder.
  • The Busy Beaver Function Σ(n) is unknown for n as low as 5. So we have a bunch of really simple computer programs, and no one knows whether they run forever or halt (see the toy simulator sketch just after this list). When you get to larger n it gets even worse: for n≥1919 (and perhaps much smaller n too), the value of Σ(n) is independent of the standard (ZFC) axioms of mathematics, so it can never be formally pinned down. While that's not exactly the same as saying that we will never understand these programs, I kinda expect that there are in fact programs whose asymptotic behavior really is "infinitely inscrutable", i.e. programs which don't halt, but where there is fundamentally no way to understand why they don't halt, short of actually running them forever, and that's true even if you have a brain the size of Jupiter. (I could be wrong, and this is not an important part of my argument.)
  • Riemann hypothesis: We have a simple-to-define function, the Riemann zeta function, that exhibits an obvious pattern of behavior: every nontrivial zero computed so far lies on the critical line. Like those busy beaver Turing machines, the answer to "why" is "I dunno, we ran the calculation, and that's what we've found, at least so far". In this case, I assume that an explanation probably exists, but I find it interesting that we haven't discovered it yet, after more than 150 years of intense effort.
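
To make the busy-beaver point a bit more concrete, here is a toy Turing machine simulator (a rough sketch; the two-state table below is the known 2-state champion, which halts after 6 steps leaving 4 ones). The unsolved 5-state machines mentioned above are tables in exactly this format, where nobody knows whether this loop ever terminates, no matter how large you set max_steps:

```python
def run_turing_machine(table, max_steps=10_000_000):
    """Run a Turing machine given as {(state, symbol): (write, move, next_state)}.

    Returns (halted, steps_taken, ones_on_tape). 'H' is the halt state.
    """
    tape = {}            # sparse tape; unwritten cells hold 0
    head, state = 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        if state == "H":
            return True, step, sum(tape.values())
    return False, max_steps, sum(tape.values())   # gave up; maybe it never halts?

# The 2-state busy beaver champion: halts after 6 steps, leaving 4 ones on the tape.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run_turing_machine(bb2))   # (True, 6, 4)
```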

In summary, my intuition is that:

  1. Simple components can give rise to recognizable emergent patterns of behavior for inscrutably complicated reasons that can't necessarily be distilled down to any "explanation" beyond "we simulated it and that's what happens", and
  2. Neurons are not simple components, in that even if they have a legible primary input-output function, they probably have dozens of "side-channel" input-output functions that probably get sporadically used by evolution as well. (If you tug on a dendrite, then it's a spring!)[1]

These two considerations coalesce to give me a prior expectation that there may be large numbers of very deep rabbit holes when you try to work out low-level implementation details of how the brain does any particular thing. The brain might do that thing by a beautiful, elegant, simple design ... or it might do that thing in some bizarre, ridiculous way, which we will not understand except by looking in weird places, like measuring mechanical stresses on cell membranes, or by measuring flows of chemicals that by all accounts ought to have no relation whatsoever to neuron firing, or by simulating systems of 492 components which interact in a complicated way that can't really be boiled down into anything simpler.

The book The Idea of the Brain has some great examples of the horrors facing neuroscientists trying to understand seemingly-simple neural circuits:

…Despite having a clearly established connectome of the thirty-odd neurons involved in what is called the crustacean stomatogastric ganglion, Marder's group cannot yet fully explain how even some small portions of this system function. ...in 1980 the neuroscientist Allen Selverston published a much-discussed think piece entitled "Are Central Pattern Generators Understandable?"...the situation has merely become more complex in the last four decades...The same neuron in different [individuals] can also show very different patterns of activity—the characteristics of each neuron can be highly plastic, as the cell changes its composition and function over time...

…Decades of work on the connectome of the few dozen neurons that form the central pattern generator in the lobster stomatogastric system, using electrophysiology, cell biology and extensive computer modelling, have still not fully revealed how its limited functions emerge.

Even the function of circuits like [frog] bug-detecting retinal cells—a simple, well-understood set of neurons with an apparently intuitive function—is not fully understood at a computational level. There are two competing models that explain what the cells are doing and how they are interconnected (one is based on a weevil, the other on a rabbit); their supporters have been thrashing it out for over half a century, and the issue is still unresolved. In 2017 the connectome of a neural substrate for detecting motion in Drosophila was reported, including information about which synapses were excitatory and which were inhibitory. Even this did not resolve the issue of which of those two models is correct.

I haven't chased down these references, and can't verify that understanding these things is really as difficult as this author says. On the other hand, these are really really simple systems; if they're even remotely approaching the limits of our capabilities, imagine an interacting bundle of 10× or 100× more neurons, doing something more complicated, in a way that is harder to experimentally measure.

So anyway, maybe scientists will eventually understand how the brain does absolutely everything it does, at the “implementation level”. I don't think that's ruled out. But I sure don't think it's likely, even for the simplest worm nervous system, in the foreseeable future.

Part 2: …But that doesn't mean brain-inspired AGI is hard!

Side note 1: I use "brain-inspired AGI" in the sense of copying (or reinventing) high-level data structures and algorithms, not in the sense of copying low-level implementation details, e.g. neurons that spike. "Neuromorphic hardware" is a thing, but I see no sign that neuromorphic hardware will be relevant for AGI. Most neuromorphic hardware researchers are focused on low-power sensors, as far as I understand.

Side note 2: The claim “brain-inspired AGI is likely” is unrelated to the claim “brain-inspired AGI will bring about a better future for humankind than other types of AGIs”, although these two claims sometimes get intuitively bundled together under the heading of "cheerleading for brain-like AGI". I have grown increasingly sympathetic to the former claim, but am undecided about the latter claim, and see it as an open research question—indeed, a particularly urgent open question, as it informs high-leverage research prioritization decisions that we can act on immediately.

OK, back to the main text. I want to argue something like this:

If some circuit in the brain is doing something useful, then it's humanly feasible to understand what that thing is and why it's useful, and to write our own CPU code that does the same useful thing.

In other words, the brain's implementation of that thing can be super-complicated, but the input-output relation cannot be that complicated—at least, the useful part of the input-output relation cannot be that complicated.

The crustacean stomatogastric ganglion central pattern generators discussed above are a great example: their mechanisms are horrifically complicated, but their function is simple: they create a rhythmic oscillation. Hey, you need a rhythmic oscillation in your AGI? No problem! I can do that in one line of Python.
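
For concreteness, here is roughly the one-liner I have in mind (just a sketch; the name and the sine-wave form are illustrative, not a claim about what an AGI would actually need):

```python
import math

rhythmic_oscillation = lambda t, freq_hz=1.0: math.sin(2 * math.pi * freq_hz * t)  # oscillator output at time t (seconds)
```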

At the end of the day, we survive by exploiting regularities in our ecological niche and environment. If the brain does something that's useful, I feel like there has to be a legible explanation in those terms; and from that, that there has to be legible CPU code that does the same thing.

I feel most strongly about the boldface statement above in regards to the neocortex. The neocortex is a big uniform-ish machine that learns patterns in inputs and outputs and rewards, builds a predictive model, and uses that model to choose outputs that increase rewards, using some techniques we already understand, and others we don’t. If the neocortex does some information-processing thing, and the result is that it does its job better, then I feel like there has to be some legible explanation for what it's doing, why, and how, in terms of that primary prediction-and-action task … there has to be some reason that it systematically helps run smarter searches, or generates better models, or makes more accurate predictions, etc.
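
To be concrete about the kind of job description I mean, here is a deliberately crude caricature (a two-armed bandit learner). Every name and number in it is a placeholder, and nothing about it is a claim about how the neocortex actually implements anything:

```python
import random

# Toy version of "learn a predictive model of reward, use it to choose outputs".
true_reward_prob = {"left": 0.3, "right": 0.8}   # hidden regularity in the environment
predicted_reward = {"left": 0.5, "right": 0.5}   # the agent's learned model
counts = {"left": 0, "right": 0}
random.seed(0)

for step in range(1000):
    # Choose the output the current model predicts is most rewarding (with a little exploration).
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(predicted_reward, key=predicted_reward.get)

    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0

    # Update the predictive model from the prediction error.
    counts[action] += 1
    predicted_reward[action] += (reward - predicted_reward[action]) / counts[action]

print(predicted_reward)   # drifts toward the true reward probabilities
```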

I feel much less strongly about the boldface statement above in regards to the brainstem and hypothalamus (the home of evolved instinctive responses to different situations, I would argue, see here). For example, I can definitely imagine that the human brain has an instinctual response to a certain input which is adaptive in 500 different scenarios that ancestral humans typically encountered, and maladaptive in another 499 scenarios that ancestral humans typically encountered. So on average it's beneficial, and our brains evolved to have that instinct, but there's no tidy story about why that instinct is there and no simple specification for exactly what calculation it's doing.

By the same token, in this sense, I expect that understanding the key operating principles of human intelligence will be dramatically easier than understanding the key operating principles of the nervous system of a 100-neuron microscopic worm!! Weird thought, right?! But again, every little aspect of those worm neurons could be a random side-effect of something else, or it could be an adaptive strategy for some situation that comes up in the worm's environment once every 5 generations, and how on earth are you ever going to figure out which is which?? And if you can't figure out which is which, how can you hope to “understand” the system in any way besides running a molecule-by-molecule simulation?? By contrast, “human intelligence” is a specific suite of capabilities, including things like “can carry on conversations, invent new technology, etc.”—a known target to aim for.

(Added for clarification: The point of the previous paragraph is that “understanding how a nervous system gives rise to a particular identifiable set of behaviors” is tractable, whereas “understanding the entire design spec of a nervous system”—i.e., every way that it optimizes inclusive genetic fitness—is not tractable. And I'm saying that this is such a big factor that it outweighs even the many-orders-of-magnitude difference in complexity between microscopic worms' and humans' nervous systems.)

Conclusions

I guess I have a not-terribly-justified gut feeling that we already vaguely understand how neocortical algorithms work to create human intelligence, and that “soon” (few decades?) this vague understanding will develop into full-fledged AGIs, assuming that the associated R&D continues. On the other hand, I acknowledge that this is very much not a common view, including among people far more knowledgeable than myself, and in particular there are plenty of neuroscientists who view the project of understanding the human brain as a centuries-long endeavor. I guess this post is a little piece of how I reconcile those two facts: At least in some cases, when neuroscientists talk about understanding the brain, I think they mean understanding what all the calculations are and how they are implemented—like what those researchers have been trying and failing to do with the crustacean stomatogastric ganglion in that book quote from part 1 above—but for a human brain with 10⁹× more neurons. Yup, that sounds like a centuries-long endeavor to me too! But I think understanding human intelligence well enough to make a working AGI algorithm is dramatically easier than that. (Update: See further discussion in my later post series, Sections 2.8, 3.7, and 3.8.)

…And I do think that latter type of work is actually getting done, particularly by those researchers who go in armed with an understanding of (1) what useful algorithms might look like in general, (2) neuroscience, and (3) psychology / behavior, and then go hunting for ways that those three ingredients might come together, without getting too bogged down in explaining every last neuron spike.

  1. ^

    Incidentally, this is also the lens through which I think about the arguments over whether or not glial cells (in addition to neurons) do computations. If glial cells are predictable systems that interact with neurons, of course they'll wind up getting entrained in computations! That's what evolution does, just like an evolved PCB circuit would probably use the board itself as a mechanical resonator or whatever other ridiculous things you can imagine. So my generic expectation is: (1) If you removed the glial cells, it would break lots of brain computations; (2) If there were no such thing as glial cells, a functionally-identical circuit would have evolved, and I bet it wouldn't even look all that different. By the way, I know almost nothing about glial cells, I'm just speculating. :-)

Comments

FWIW I have come to similar conclusions along similar lines. I've said that I think human intelligence minus rat intelligence is probably easier to understand and implement than rat intelligence alone. Rat intelligence requires a long list of neural structures fine-tuned by natural selection, over tens of millions of years, to enable the rat to do very specific survival behaviors right out of the womb. How many individual fine-tuned behaviors? Hundreds? Thousands? Hard to say. Human intelligence, by contrast, cannot possibly be this fine tuned, because the same machinery lets us learn and predict almost arbitrarily different* domains.

I also think that recent results in machine learning have essentially proven the conjecture that moar compute regularly and reliably leads to moar performance, all things being equal. The human neocortical algorithm probably wouldn't work very well if it were applied in a brain 100x smaller because, by its very nature, it requires massive amounts of parallel compute to work. In other words, the neocortex needs trillions of synapses to do what it does for much the same reason that GPT-3 can do things that GPT-2 can't. Size matters, at least for this particular class of architectures.

*I think this is actually wrong - I don't think we can learn arbitrary domains, not even close. Humans are not general. Yann LeCun has repeatedly said this and I'm inclined to trust him. But I think that the human intelligence architecture might be general. It's just that natural selection stopped seeing net fitness advantage at the current brain size.

The human neocortical algorithm probably wouldn't work very well if it were applied in a brain 100x smaller

I disagree; as I discussed here, I think the neocortex is uniform-ish and that a cortical column in humans is doing a similar calculation to a cortical column in rats or to the equivalent bundle of cells (arranged not as a column) in a bird pallium or lizard pallium. I do think you need lots and lots of cortical columns, initialized with appropriate region-to-region connections, to get human intelligence. Well, maybe that's what you meant by "human neocortical algorithm", in which case I agree. You also need appropriate subcortical signals guiding the neocortex, for example to flag human speech sounds as being important to attend to.

human intelligence minus rat intelligence is probably easier to understand and implement than rat intelligence alone…

Well, I do think that there's a lot of non-neocortical innovations between humans and rats, particularly to build our complex suite of social instincts, see here. I don't think understanding those innovations is necessary for AGI, although I do think it would be awfully helpful to understand them if we want aligned AGI. And I think they are going to be hard to understand, compared to the neocortex.

I don't think we can learn arbitrary domains, not even close

Sure. A good example is temporal sequence learning. If a sequence of things happens, we expect the same sequence to recur in the future. In principle, we can imagine an anti-inductive universe where, if a sequence of things happens, then it's especially unlikely to recur in the future, at all levels of abstraction. Our learning algorithm would crash and burn in such a universe. This is a particular example of the no-free-lunch theorem, and I think it illustrates that, while there are domains that the neocortical learning algorithm can't learn, they may be awfully weird and unlikely to come up.
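
(A toy illustration of that point, taking "predict that whatever followed x before will follow x again" as a stand-in for temporal sequence learning; the particular predictor and sequences here are just made up for the example:)

```python
from collections import Counter, defaultdict

def inductive_predictor(history):
    """Predict the next bit: whatever has most often followed the current bit so far."""
    followers = defaultdict(Counter)
    for a, b in zip(history, history[1:]):
        followers[a][b] += 1
    last = history[-1]
    return followers[last].most_common(1)[0][0] if followers[last] else 0

# In a world where past patterns recur, the predictor does great.
patterned = [0, 1] * 500
hits = sum(inductive_predictor(patterned[:t]) == patterned[t] for t in range(10, len(patterned)))
print("accuracy, patterned world:", hits / (len(patterned) - 10))   # ~1.0

# In an anti-inductive world, whatever the predictor expects is exactly what doesn't happen.
anti = [0, 1]
while len(anti) < 1000:
    anti.append(1 - inductive_predictor(anti))
hits = sum(inductive_predictor(anti[:t]) == anti[t] for t in range(10, len(anti)))
print("accuracy, anti-inductive world:", hits / (len(anti) - 10))   # ~0.0
```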

The human neocortical algorithm probably wouldn't work very well if it were applied in a brain 100x smaller because, by its very nature, it requires massive amounts of parallel compute to work.

Human beings have larger brains than most animal species on Earth. It seems to me that if large brains weren't very important to evolving language and composite tool use then insects would already have these abilities.

If some circuit in the brain is doing something useful, then it's humanly feasible to understand what that thing is and why it's useful, and to write our own CPU code that does the same useful thing.
In other words, the brain's implementation of that thing can be super-complicated, but the input-output relation cannot be that complicated—at least, the useful part of the input-output relation cannot be that complicated.

Robin Hanson makes a similar argument in "Signal Processors Decouple":

The bottom line is that to emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.
This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.
We are confident that the number of relevant signal dimensions in a human brain is vastly smaller than its physical degrees of freedom. But we do not know just how many are those dimensions. The more dimensions, the harder it will be to emulate them. But the fact that human brains continue to function with nearly the same effectiveness when they are whacked on the side of the head, or when flooded with various odd chemicals, shows they have been designed to decouple from most other physical brain dimensions.
The brain still functions reasonably well even flooded with chemicals specifically designed to interfere with neurotransmitters, the key chemicals by which neurons send signals to each other! Yes people on “drugs” don’t function exactly the same, but with moderate drug levels people can still perform most of the functions required for most jobs.

I largely agree with the main thrust of the argument. What would this line of thought imply for the possibility of mind-uploading? Do we need to simulate every synapse to recreate a person, or might there be a way to take advantage of certain regularities in the computational structure of the brain to convert someone's memories/behavioral policies/personality/etc. into some standard format that could be imprinted on a more generic architecture?

A couple of quibbles, though:

Side note 1: I use "brain-inspired AGI" in the sense of copying (or reinventing) high-level data structures and algorithms, not in the sense of copying low-level implementation details, e.g. neurons that spike. "Neuromorphic hardware" is a thing, but I see no sign that neuromorphic hardware will be relevant for AGI. Most neuromorphic hardware researchers are focused on low-power sensors, as far as I understand.

Depending on what exactly you mean by "neuromorphic", I take issue with this. If you want to use traditional CPU/GPU technology, I imagine that you could simulate an AGI on a small server farm and use that to control a robot body (physically or virtually embedded). However, if you want to have anywhere near human-level power/space efficiency, I think that something like neuromorphic hardware will be essential.

You can run a large neural network in software using continuous values for neuron activations, but the hardware it's running on is only optimized for generic computations. "Neurons that spike" offer many advantages like power efficiency and event-based Monte Carlo sampling. Dedicated hardware that runs on spiking neuron analogs could implement brain-like AGI models far better than existing CPUs/GPUs in terms of efficiency, at the cost of generality of computation (no free lunch).
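
(For concreteness: the canonical toy model of such a spiking unit is the leaky integrate-and-fire neuron, which communicates only via discrete spike events. A minimal simulation sketch, with made-up parameter values:)

```python
import random

# Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
# integrates its input, and emits a discrete "spike" whenever it crosses threshold.
# Between spikes nothing needs to be communicated -- that's the event-based part.
dt, tau = 0.001, 0.020                    # timestep and membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v = v_rest
spike_times = []
random.seed(0)

for step in range(1000):                  # simulate one second
    drive = 1.2 + 0.3 * random.gauss(0.0, 1.0)   # arbitrary noisy input current
    v += (dt / tau) * (v_rest - v + drive)       # leak + integrate
    if v >= v_thresh:                            # threshold crossing = spike event
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in one simulated second")
```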

Does AGI itself require neuromorphic hardware *per se*? No. Will the first implementation of scalable AGI algorithms and data structures be done in software running on non-AGI-dedicated hardware? Probably. Will those algorithms involve simulating Na/K/Ca currents, gene regulation, etc. directly? Probably not. But will it be necessary to convert those algorithms and data structures into something that could be run on spiking/event-based neuromorphic hardware to make it competitive, affordable, and scalable? I think so. Eventually. At least if you want to have robots with human-level intelligence running on human-brain-sized computers.

By the same token, in this sense, I expect that understanding the key operating principles of human intelligence will be dramatically easier than understanding the key operating principles of the nervous system of a 100-neuron microscopic worm!! Weird thought, right?!

This is wrong unless "key operating principles" means something different each time you say it (i.e. it refers to the algorithms and data structures running on the human brain, but then it refers to the molecular-level causal graph describing the worm's nervous system). Which is what I assume you meant.

Thanks for the comment!

What would this line of thought imply for the possibility of mind-uploading?

In my mind it implies that we'll invent human-level AGIs before we invent mind-uploading technology. And therefore we should all be working on the problem of creating safe and beneficial AGI! And then they can help us figure out about mind-uploading :-P

But since you ask… I guess I'm intimidated by the difficulty of uploading a mind at sufficiently high fidelity that when you turn it on the "person" reports feeling the same and maintains the same personality and inclinations. I don't think we would reach that even with a scan that measured every neuron and every synapse, because I suspect that there are enough sorta quasi-analog and/or glia-involving circuits or whatever, especially in the brainstem, to mess things up at that level of precision.

if you want to have anywhere near human-level power/space efficiency, I think that something like neuromorphic hardware will be essential.

I think a computer can be ~10,000× less energy-efficient than a human brain before the electricity costs reach my local minimum wage, right? So I don't see near-human-level energy efficiency as a requirement for practical transformative AGI. Ditto space efficiency. If we make an AI that could automate any remote-work job, and one instantiation of the model occupies one server rack, well that would be maybe 1000× less space-efficient than a human brain, but I think it would hardly matter for the majority of applications, including the most important applications. (And it would still probably be less space-inefficient than "a human in a cubicle"!)
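
(Sanity-checking that arithmetic with assumed round numbers; your local electricity price and minimum wage will of course differ:)

```python
brain_watts = 20                    # rough power budget of a human brain
usd_per_kwh = 0.10                  # assumed electricity price
min_wage_usd_per_hour = 15.0        # assumed local minimum wage

# How many times less energy-efficient than a brain can a computer be
# before its hourly electricity bill reaches minimum wage?
brain_cost_per_hour = (brain_watts / 1000) * usd_per_kwh      # ~$0.002/hour
print(min_wage_usd_per_hour / brain_cost_per_hour)            # ~7,500
```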

Dedicated hardware that runs on spiking neuron analogs could implement brain-like AGI models far better than existing CPUs/GPUs in terms of efficiency, at the cost of generality of computation (no free lunch).

That's possible, though in my mind it's not certain. The other possibility is that the algorithms underlying human intelligence are just fundamentally not very well suited to implementation via spiking neurons!! But spiking neurons are the only thing that biology has to work with! So evolution found a way to shoehorn these algorithms to run on spiking neurons. :-P

I'm not trying to troll here—I don't have a good sense for how probable that is, but I do see that as one legitimate possibility. To take an example, a faster more-serial processor can emulate a slower more-parallel processor but not vice-versa. We engineers can build either style of processor, but biology is stuck with the latter. The algorithms of human intelligence could have massive computational shortcuts that involve spawning a fast serial subroutine, and we would never know it just by looking at biology, because biology has never had that as an option!

I agree that "literally existing CPUs/GPUs" are going to work slower and less scalably than an ASIC tailor-made to the algorithms that we have in mind. And I do assume that people will start making and using such ASICs very quickly. I guess I'd just be surprised if those ASICs involve spikes. Instead I'd expect the ASIC to look more like a typical digital ASIC, with a clock and flip-flops and registers and whatnot. I mean, I could be wrong, that's just what I would guess, because I figure it would probably be adequate, and that's what people are currently really good at designing. When we're many years into superhuman AGIs designing improved chips for even-more-superhuman AGIs, I have no clue what those chips would look like. But I also don't think it's useful to think that far ahead. :-P

This is wrong unless "key operating principles" means something different each time you say it (i.e. it refers to the algorithms and data structures running on the human brain, but then it refers to the molecular-level causal graph describing the worm's nervous system). Which is what I assume you meant.

Sorry, I guess that was a bit unclear. I meant "key operating principles" as something like "a description that is sufficiently detailed to understand how the system meets a design spec". Then the trick is that I was comparing two very different types of design specs. One side of the comparison was "human intelligence", which (in my mind) is one particular class of human capabilities. So the "design spec" would be things like "it can learn to use language and program computers and write poetry and tell jokes etc. etc." Can we give a sufficiently detailed description to understand how the human brain does those things? Not yet, but I think eventually.

Then the other side of my comparison was "nervous system of the worm". The "design spec" there was (implicitly) "maximize inclusive genetic fitness", i.e. it includes the entire set of evolutionarily-adaptive behaviors that the worm does. And that's really hard because we don't even know what those behaviors are! There are astronomically many quirks of the worm's nervous system, and we have basically no way to figure out which of those quirks are related to evolutionarily-adaptive behaviors, because maybe it's adaptive only in some exotic situation that comes up once every 12 generations, or it's ever-so-slightly adaptive 50.1% of the time and ever-so-slightly maladaptive 49.9% of the time, etc.

Y'know, some neuron sends out a molecule that incidentally makes some vesicle slightly bigger, which infinitesimally changes how the worm twitches, which might infinitesimally change how noticeable the worm is to predators when it's in a certain type of light. So maybe sending out that molecule is an adaptive behavior—a computational output of the nervous system, and we need to include it in our high-level algorithm description. …Or maybe not! That same molecule is also kinda a necessary waste product. So it's also possibly just an "implementation detail". And then there are millions more things just like that. How are you ever going to sort it out? It seems hopeless to me.

If instead you name a specific adaptive behavior that the worm does (say, when it sees a predator it runs away), then I would certainly agree with you that understanding the key operating principles of that specific worm behavior will probably be much much much easier than understanding the key operating principles of human intelligence.

Thanks for the feedback. To be clear, I also have trouble trying to think of how one might implement certain key brain algorithms (e.g., hierarchical free-energy minimization) using spiking neurons. We might even see the first "neuromorphic AGIs" using analog chips that simulate neural networks with ReLU and sigmoid activation functions rather than spiking events. And these would probably not come until well after the first "software AGIs" have been built and trained. However, I still think it's way too early to be ruling out neuromorphic hardware, spiking or not. Eventually energy efficiency will become a big enough deal that someone (maybe an AGI?) whose headspace is saturated with thinking about event-based neuromorphic algorithms will create something that outcompetes other forms of AGI. And all the work being done with neuromorphic hardware today will feed into the inspiration for that future design. /speculation

As far as understanding worm vs. human brain key operating principles goes, it's important to remember that the human brain is hundreds of millions of times larger and more complex than the worm's whole nervous system. It's easy to think about (approaching) human intelligence as a bunch of abstract data structures and algorithms, rather than as an astronomically complex causal web of biological implementation details, in part because we are humans. We spend our whole lives using our intelligence and, as social animals, inferring the internal mental processes of other humans. Approaching either the human brain or the worm brain from the perspective of low-level implementation details as being the "key operating principles" is going to result in an investigation vastly more complex and hopeless than approaching either from a more abstract cognitive/behavioral level. And for each perspective separately, the human is vastly more complicated to figure out than the worm. Just to illustrate my point:

Sorry, I guess that was a bit unclear. I meant "key operating principles" as something like "a description that is sufficiently detailed to understand how the system meets a design spec". Then the trick is that I was comparing two very different types of design specs. One side of the comparison was "worm intelligence", which (in my mind) is one particular class of worm capabilities. So the "design spec" would be things like "it can learn to modify its rate of reversals and omega and delta turns in response to a conditioned stimulus and eat food and poop and evade predators etc. etc." Can we give a sufficiently detailed description to understand how the worm brain does those things? Not yet, but I think eventually.

Then the other side of my comparison was "nervous system of the human". The "design spec" there was (implicitly) "maximize inclusive genetic fitness", i.e. it includes the entire set of evolutionarily-adaptive behaviors that the human does. And that's really hard because we don't even know what those behaviors are! There are astronomically many quirks of the human's nervous system, and we have basically no way to figure out which of those quirks are related to evolutionarily-adaptive behaviors, because maybe it's adaptive only in some exotic situation that comes up once every 12 generations, or it's ever-so-slightly adaptive 50.1% of the time and ever-so-slightly maladaptive 49.9% of the time, etc.

Y'know, some neuron sends out a molecule that incidentally makes some vesicle slightly bigger, which infinitesimally changes the human's facial expression, which might infinitesimally change how noticeable the human's cognitive/emotional state is to other humans in a particular social context. So maybe sending out that molecule is an adaptive behavior—a computational output of the nervous system, and we need to include it in our high-level algorithm description. …Or maybe not! That same molecule is also kinda a necessary waste product. So it's also possibly just an "implementation detail". And then there are millions more things just like that. How are you ever going to sort it out? It seems hopeless to me.

My point was simply to draw attention to the need to compare apples to apples. It's more about deconfusing things for future readers of this post than for correcting your actual understanding of the situation.

I still think it's way too early to be ruling out neuromorphic hardware, spiking or not.

Sure, I wouldn't say "rule out", it's certainly a possibility, especially if we're talking about the N'th generation of ASICs. I guess I'd assign <10% probability that the first-generation ASIC that can run a "human-level AGI algorithm" is based on spikes. (Well, depending on the exact definitions I guess.) But I wouldn't feel comfortable saying <1%. Of course that probability is not really based on much, I'm just trying to communicate what I currently think.

draw attention to the need to compare apples to apples

In an apples-to-apples comparison, it's super duper ridiculously blindingly obvious that a human nervous system is harder to understand than a worm nervous system. In fact I'm somewhat distressed that you thought I was disagreeing with that!!!

I added a paragraph to the article to try to make it more clear—if you found it confusing then it's a safe bet that other people did too. Thanks!

xuan:

This was a great read! I wonder how much you're committed to "brain-inspired" vs "mind-inspired" AGI, given that the approach to "understanding the human brain" you outline seems to correspond to Marr's computational and algorithmic levels of analysis, as opposed to the implementational level (see link for reference). In which case, some would argue, you don't necessarily have to do too much neuroscience to reverse engineer human intelligence. A lot can be gleaned by doing classic psychological experiments to validate the functional roles of various aspects of human intelligence, before examining in more detail their algorithms and data structures (perhaps this time with the help of brain imaging, but also carefully designed experiments that elicit human problem solving heuristics, search strategies, and learning curves).

I ask because I think "brain-inspired" often gets immediately associated with neural networks, and not say, methods for fast and approximate Bayesian inference (MCMC, particle filters), which are less the AI zeitgeist nowadays, but still very much how cognitive scientists understand the human mind and its capabilities.

https://onlinelibrary.wiley.com/doi/full/10.1111/tops.12137

Thanks! I guess my feeling is that we have a lot of good implementation-level ideas (and keep getting more), and we have a bunch of algorithm ideas, and psychology ideas and introspection and evolution and so on, and we keep piecing all these things together, across all the different levels, into coherent stories, and that's the approach I think will (if continued) lead to AGI.

Like, I am in fact very interested in "methods for fast and approximate Bayesian inference" as being relevant for neuroscience and AGI, but I wasn't really interested in it until I learned a bunch of supporting ideas about what part of the brain is doing that, and how it works on the neuron level, and how and when and why that particular capability evolved in that part of the brain. Maybe that's just me.

I haven't seen compelling (to me) examples of people going successfully from psychology to algorithms without stopping to consider anything whatsoever about how the brain is constructed. Hmm, maybe very early Steve Grossberg stuff? But he talks about the brain constantly now.

One reason it's tricky to make sense of psychology data on its own, I think, is the interplay between (1) learning algorithms, (2) learned content (a.k.a. "trained models"), (3) innate hardwired behaviors (mainly in the brainstem & hypothalamus). What you especially want for AGI is to learn about #1, but experiments on adults are dominated by #2, and experiments on infants are dominated by #3, I think.

xuan:

I haven't seen compelling (to me) examples of people going successfully from psychology to algorithms without stopping to consider anything whatsoever about how the brain is constructed.

Some recent examples, off the top of my head!

One reason it's tricky to make sense of psychology data on its own, I think, is the interplay between (1) learning algorithms, (2) learned content (a.k.a. "trained models"), (3) innate hardwired behaviors (mainly in the brainstem & hypothalamus). What you especially want for AGI is to learn about #1, but experiments on adults are dominated by #2, and experiments on infants are dominated by #3, I think.

I guess this depends on how much you think we can make progress towards AGI by learning what's innate / hardwired / learned at an early age in humans and building that into AI systems, vs. taking more of a "learn everything" approach! I personally think there may still be a lot of interesting human-like thinking and problem solving strategies that we haven't figured out how to implement as algorithms yet (e.g. how humans learn to program, and edit + modify programs and libraries to make them better over time), and that adult and child studies would be useful in order to characterize what we might even be aiming for, even if ultimately the solution is to use some kind of generic learning algorithm to reproduce it. I also think there's a fruitful in-between of (1) and (3), which is to ask, "What are the inductive biases that guide human learning?", which I think you can make a lot of headway on without getting to the neural level.

Good post! I actually hold similar-ish views myself.

However, I'd be interested if you elaborated more on the last paragraph - what specific examples of that kind of research do you know about/recommend checking out?

For example, I mentioned 1, 2, 3 earlier in the article... That should get you started but I'm happy to discuss more. :-)