Sahil and I will be discussing these ideas Monday, November 17, at this link. SF 9:00 AM | NYC 12:00 PM | London 5:00 PM | Delhi 10:30 PM | Sydney 4:00 AM (Tue)
Daniel Dennett, late in his life, changed his mind about a significant aspect of his computationalist perspective. This seems noteworthy; I am told his mind was exceptionally difficult to change.
This change is indicated briefly in a book review: in Aching Voids and Making Voids, Dennett reviews Terrence Deacon's book Incomplete Nature. He identifies two camps, which he terms "Enlightenment" versus "Romanticism". It seems that Dennett spent his life mostly on the Enlightenment side, but Deacon won him over to Romanticism.
The short review does not give a very thorough indication of what he changed his mind about. He seems to be lamenting that computers don't eat sandwiches? He has a newfound sympathy for Searle's Chinese Room argument??
I believe a talk he gave the same year provides somewhat more clarity.
In the talk, Dennett compares computer chips to a communist economy: transistors are all supplied power by the power supply, regardless of how useful they've been. He compares brains to a capitalist economy: neurons, he thinks, have to work for their food.[1] Dennett suggests that the competitive nature of neurons, and the need to autonomously strive for basic energy requirements, may be essential for intelligence.[2]
I found this somewhat baffling. Surely we can just simulate competition at the software level. Genetic algorithms have been doing this very thing for decades! Artificial neural networks are also, perhaps, close enough.
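To gesture at what "simulate competition at the software level" could mean, here is a minimal sketch of a genetic algorithm in which individuals must earn their keep or be culled; every name and number in it is illustrative rather than drawn from any particular system:

```python
import random

# Toy genetic algorithm in which individuals must "earn" their keep:
# each generation an individual gains energy according to how well its
# genome matches a target, pays a fixed metabolic cost, and is removed
# from the population if its energy runs out.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # arbitrary target bit-pattern

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [{"genome": [random.randint(0, 1) for _ in TARGET], "energy": 5.0}
              for _ in range(20)]

for generation in range(50):
    for individual in population:
        individual["energy"] += fitness(individual["genome"])  # "food" earned by working
        individual["energy"] -= 6.0                            # metabolic cost of existing
    # Individuals that can't cover their costs die out.
    population = [ind for ind in population if ind["energy"] > 0]
    # Survivors reproduce (with mutation) back up to the population cap.
    while population and len(population) < 20:
        parent = random.choice(population)
        population.append({"genome": mutate(parent["genome"]), "energy": 5.0})

if population:
    best = max(population, key=lambda ind: fitness(ind["genome"]))
    print("best fitness:", fitness(best["genome"]), "out of", len(TARGET))
```

Nothing here requires the hardware to know anything about the competition; the "striving" is entirely a software-level bookkeeping device.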
However, people in my social network echoed Dennett's fascination with Deacon, so I have attempted to continue engaging with these ideas.
What Dennett called "Enlightenment" vs "Romanticism" I have more commonly encountered as "coarse-grained functionalism" vs "fine-grained functionalism". I'm very far from being in a position to say, but from my limited perspective, it seems like there is a growing movement in philosophy to back down from some of the previous claims of hard-core computationalist theories of mind (now branded coarse-grained functionalism), perhaps in part as a reaction to seeing LLMs and trying to articulate what feels missing. The newly emerging view often goes by the name "fine-grained functionalism", an indication that the basic functionalist world-view hasn't changed, but low-level biology is more relevant than previously thought.
This essay isn't attempting to provide a thorough overview of fine-grained functionalism, however; I'm going to focus on a particular argument (due to Sahil).
The thought experiment goes like this:
Suppose you are lying in bed, trying to adjust your sheet/blanket situation to regulate your body temperature, maybe opening or closing a nearby window, adjusting the AC or heater, etc.
Instantly, and without your knowledge, an alien superintelligence scans you, your room, perhaps the whole Earth, down to the molecular level of detail. You and your surroundings are subsequently simulated (for all practical purposes, a perfect simulation) on an alien computer.
Like Earth computers, this computer needs its temperature to be regulated. If it overheats, it will stop working, and sim-you will cease to function.
However, because the simulation is perfect, sim-you doesn't know this. Like you, sim-you has concerns about continuing to live. Sim-you will continue to adjust the window, the AC, the blankets, etc to help regulate body temperature.
The moral of the story is that referentiality has been cut. When sim-you thinks about temperature, the referentiality points at sim-temperature. As a result, sim-you lacks the referentiality it needs to care for its bodily autonomy.
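To make the severed reference concrete, here is a toy sketch (all classes and numbers invented for illustration): sim-you's control loop only ever reads and writes the temperature variable inside its own world, while the host machine's temperature, the thing sim-you's continued existence actually depends on, is never referenced by anything in the simulation:

```python
# Toy illustration of "referentiality has been cut": sim-you's control loop
# only ever refers to the temperature *inside* the simulation, while the
# host computer's temperature (the thing sim-you's survival actually
# depends on) is never consulted by anything in the simulated world.

class SimWorld:
    def __init__(self):
        self.room_temperature = 30.0   # sim-temperature, in sim-degrees
        self.window_open = False

    def step(self):
        # Opening the sim-window cools the sim-room; otherwise it warms up.
        self.room_temperature += -1.0 if self.window_open else 0.5


class SimPerson:
    """Sim-you: regulates body temperature as referred to *within the sim*."""
    def act(self, world):
        world.window_open = world.room_temperature > 25.0


class HostComputer:
    """The alien computer running the simulation."""
    def __init__(self):
        self.cpu_temperature = 60.0

    def run_one_tick(self, world, person):
        person.act(world)
        world.step()
        self.cpu_temperature += 0.7   # running the sim heats the host
        if self.cpu_temperature > 90.0:
            raise RuntimeError("host overheated; sim-you ceases to function")


host, world, person = HostComputer(), SimWorld(), SimPerson()
try:
    for tick in range(100):
        host.run_one_tick(world, person)
except RuntimeError as err:
    # Sim-you kept the sim-room perfectly comfortable right up to the end.
    print(f"tick {tick}: {err} (sim-room was at {world.room_temperature:.1f})")
```

Sim-you does everything right by its own lights, and the host overheats anyway.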
Sahil's broader lesson for computationalists is supposed to be this: the very multiple-realizability that makes computation such a powerful concept also makes it an unsuitable tool for analyzing some of the most important issues.
The multiple-realizability of computation "cuts the ties" to the substrate. These ties to the substrate are important. This idea leads Sahil to predict, for example, that LLMs will be too "stuck in simulation" to engage very willfully in their own self-defense.
To give another example, consider the Glymour/Pylyshyn brain-replacement thought experiment:
In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.
At first, such a tiny computer might sound conceptually straightforward, if technologically difficult: it simply has to reproduce the electro-chemical behavior of a neuron. Consider, though: how is the device powered? Does it match the metabolism of a neuron, as well as its firing & (re)wiring behavior? If we continue to insist that the artificial neurons integrate into the surrounding brain in exactly the same way as a biological neuron, it begins to strain credulity.
If a person gets sick, do the artificial neurons respond to care in the same way, or do they require a different sort of maintenance? Suppose they go to the hospital. If the doctors don't know about the artificial neurons, will this be a problem? If they put the person into an MRI machine, will something go wrong?
The computationalist claim that replacing neurons would "clearly do nothing to change conscious awareness" depends on the technicality that we don't tell the person. As in the room-temperature example, a "perfect simulation" must not be told. (If we tell the person, then the simulation will no longer perfectly track the non-simulated version.) This presents a dilemma for computationalism: either we tell the person (the simulation can no longer be called "perfect"), or we sever the referentiality required for self-care (the person's physical substrate has been changed, but their ability to care for that physical substrate has not correspondingly been updated).
It seems important to mention Sahil's preferred language for talking about this: he terms the phenomenon failure of integration. Human intelligence is very integrated with human bodies. The multiple-realizability of computation implies a lack of such integration.
My take: I think that this is, yes, a technical flaw in some computationalist arguments. However, it has not yet won me over to Sahil's skepticism about AI autonomy risks.
I reason as follows: Sure, if you upload me without telling me, sim-me will fail to realize that the temperature of the computer is important. However, this seems easily repaired: you can just tell sim-me!
To be clear, Sahil's argument is one of difficulty, not impossibility. He also expects that sim-me could become integrated with the physical substrate; he just anticipates this being more difficult than I do. This also extends to self-interested action by AI.
In a recent post, I said:
One might naively anticipate that old video games do not take very much processing power to emulate faithfully, because those games ran on consoles with very little processing power compared to modern standards. However, emulators actually require significantly more powerful hardware to faithfully emulate older systems.
This somewhat illustrates Sahil's (and perhaps Dennett's) position. Yes, it is possible to "just simulate". However, it may be more costly than you naively think.
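To make the emulation point a bit more concrete, here is some rough, purely illustrative arithmetic (the per-cycle costs are made up, not measurements): a cycle-accurate emulator doesn't just run the old program, it re-enacts the console's hardware on every emulated clock tick, so each original cycle fans out into many host instructions:

```python
# Rough, illustrative arithmetic (made-up per-cycle costs, not measurements)
# for why faithful emulation costs more than the original hardware's specs
# suggest: a cycle-accurate emulator re-enacts every hardware component on
# every emulated clock tick.

emulated_clock_hz = 1.79e6        # illustrative: a ~1.8 MHz 8-bit-era CPU
host_ops_per_emulated_cycle = {
    "decode_and_execute_cpu_instruction": 40,
    "update_video_chip_state": 60,
    "update_audio_chip_state": 20,
    "memory_mapping_and_timing_checks": 30,
}

host_ops_per_second = emulated_clock_hz * sum(host_ops_per_emulated_cycle.values())
print(f"roughly {host_ops_per_second / 1e6:.0f} million host operations per second")
# Even with these modest made-up overheads, faithfully emulating a ~1.8 MHz
# machine already demands hundreds of millions of host operations per second,
# orders of magnitude more than the machine being emulated performed.
```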
This is not a real dialogue between me and Sahil, but it serves to somewhat illustrate the type of back-and-forth we have about these issues. Most of this is written by me, and some by him. FGF represents something similar to fine-grained functionalism, and CGF represents something similar to coarse-grained functionalism (computationalism), although I don't claim perfect agreement of these sides with the views as they appear in the literature. "FGF" here also doesn't perfectly reflect Sahil, nor does "CGF" perfectly reflect my own opinions.
Also, note that the dialogue doesn't have a satisfying conclusion.
FGF: We have never yet seen a life-like system composed of not-life-like parts!
CGF: But cells are just made up of atoms.
FGF: It's legitimately difficult to draw the line between biology and chemistry. It is not similarly difficult to draw the line between hardware and software. On my model, the critical difference between living things and machines has to do with how life-processes at different "levels of abstraction" are tightly integrated across levels,[3] whereas machines neatly segregate their levels of abstraction.
CGF: But here's this RL-based robot I made. See how life-like it is?
FGF: It's not life-like enough!
CGF: Well, the tech keeps getting better. Where do you see it stopping?
FGF: I don't really think it will stop; I just think, at some point, it'll need to transition to more bio-inspired designs.
CGF: Yeah, people have thought that for a while, really, but the von Neumann architecture still does pretty well.
FGF: Hmph. The idea of substrate-independent code was always an illusion, especially for robotics. Why would you want to first figure out what you physically want to happen for your physical robot, then translate that into a programming language designed to be substrate-independent, then have a CPU designed with no knowledge of the physical problems you're trying to solve, translate that substrate-independent code back down into your substrate?
CGF: Isn't that kind of how the genetic code works, too?
FGF: It's true that the genetic code rarely changes.[4] But there's no equivalent of a universal Turing machine. Protein folding is just this weird complex thing. DNA is more like instructions for a 3D printer than it is like code for a robot. Except the 3D printer is building a robot around itself. And re-building itself. And building more 3D printers.
CGF: I'm not saying I can never see a future where software and hardware blend back together, but I am saying that I don't see why a software/hardware distinction is going to block "true intelligence" or whatever.
FGF: Imagine the disadvantages of your whole body being numb. Trying to learn to interact with the world that way. Humans who are born without pain often die before reaching adulthood.
CGF: My robot can't feel when it is grinding its gears right now, but we can always add more sensors.
FGF: That's not the point. It's an analogy.
CGF: What's it an analogy for?
FGF: Well, every single cell in your body is trying to maintain homeostasis. Doesn't that seem somehow important? Like each little cell is a wellspring of agency, and they're all getting combined together into one big agent?
CGF: If it's important, what's it important for? Couldn't I just simulate all of that, with a sufficiently powerful CPU?
FGF: That is as absurd as a thermostat trying to simulate sensor inputs. It wouldn't really know when to turn on and off. It wouldn't be reactive.
CGF: You're back to the analogy. Like I said, I can just add more sensors. The robot can be plenty reactive. But I don't actually see the benefit of covering it in cell-sized sensors. The first thing I would do would be throw a large fraction away, just out of sheer redundancy management.
FGF: Well, not every cell is directly connected up to the nervous system.
CGF: So what's the point? What does it all add up to?
FGF: It's got this deep self-preservation instinct. When you're going through mental anguish, your whole body participates. Your cells suffer and die. I don't know why that's important, exactly, but are you really going to tell me that it doesn't matter?
CGF: No, I'm going to tell you that I can just simulate it.
FGF: But then it will be numb to the actual substrate.
CGF: It still seems like I could solve that just by adding more sensors.
FGF: The robot wouldn't work. It could learn to respond to the kind of situation it has seen over and over again, like ChatGPT, but it couldn't creatively solve problems. It couldn't get desperate. It couldn't love, or hate, or any of those things. The simulated precarity will only be able to care about simulated beings. Its "reach" will be limited.
CGF: Where do those skills break down, according to you?
FGF: Somewhere in the interface with the cells. Because the cells are simulated, so they're not connected with the real environment in the right way. Obviously, love and hate and desperation go through the cells.
CGF: Sure... but I'm not seeing why simulated cells won't do the trick. Frankly, I expect we don't even need to simulate cells; even 'neurons' will keep on being jargon for matrix operations with nonlinearities thrown in. And I want to object, about ChatGPT. I saw a study which showed that ChatGPT is more creative than humans.
FGF: Bah! Any way you twist it, you've got to admit that ChatGPT is missing the spark. It's just not like us.
CGF: No, I'm serious. Throw just about any chat-based test you can think of at it, and more probably than not, it'll perform within human range. Sure, it's not exactly humanlike in every respect, but to me, any quibbling over failures is just human-chauvinism at this point. I've really got to insist that it counts as AGI: AI which can perform well on a broad range of tasks which it hasn't been specifically trained on.
FGF: It's been trained on everything.
CGF: Did you see the stuff where GPT was asked to write about Harry Potter in the style of Lovecraft? You can always stretch the definition of "what it's been trained to do" -- but the real question is how well it can approach new situations it hasn't seen before by generalizing/recombining ideas it has seen before. And wrt this, it clearly falls within human range.
FGF: We're getting off-topic.
CGF: No, this is entirely on-topic. If you claim that true creativity isn't possible without cells, then I think you'd better show me GPT's cells.
FGF: But it's not consequentialist. It's not trying to do anything.
CGF: They're constantly getting more agentic. An LLM by itself might not seem like much of an agent, but give it a little scaffolding, like Claude Code or its many alternatives, and it'll go and do things for you. People have been hooking it up to robots, hooking it up to Minecraft, ... it'll only get better from here.
FGF: I'm not very impressed by what I've seen. LLMs can be made to play characters for a while, but they're liable to confabulate (what most people call "hallucinate"). It looks similar to anosognosia. They're stuck in a simulation.
CGF: Importantly, anosognosia is caused by brain damage, in humans -- not by some kind of damage to the cross-level connectivity that's so characteristic of biology.
FGF: Well, anosognosia is brain damage plus bodily damage. And it is also almost always damage to the right hemisphere of the brain, which is the more embodied, holistic hemisphere.
CGF: Still, it doesn't seem like positive evidence for your thesis. LLMs might be like brain-damaged humans, which would suggest we just need to "repair the damage" by finding the equivalent of right-hemisphere-like neural architectures. I mean I don't exactly buy it, but ... *shrug*
FGF: ChatGPT's mental integrity does seem to improve with more and more data, but progress is notably slow. Confabulations seem like a basic consequence of the overall methodology, not something to be solved with new neural architectures. This is obviously a reference-penetration problem, to me. ChatGPT doesn't care about the real world because it doesn't have a reason to care -- no skin in the game. Trying to get ChatGPT to care about the external world is like trying to get machine learning to do well on out-of-distribution cases.
CGF: I mean, I do agree with that, in a way. I just expect that it can be solved with things like sensors and reinforcement learning.
FGF: It would then be stuck in some small-world idealization, unable to care about, or have something to protect, beyond its simulated pain. Don't you think something to protect must go all the way to the territory? Otherwise you get Sokal.
CGF: I don't see why we need to replicate the cell structure at the hardware level. Say we do functionally need what you're asking for. We simulate it. Or, even better, the search algorithms discover it themselves as a solution in the course of optimization, if it is indeed a good solution. Then the hardware is just like the dead sodium ions that pass through your membranes to get your neurons going. The dead sodium, heck even dead carbon, is simulating the higher levels that make you up. You turned out fine!
FGF: And maybe that is indeed my reach of consequentialism. Humans don't care about precise atom-configurations, except when they critically impact biology. In fact, my reach might be even coarser, because some parts of me might think that I can do without some other parts, and so I lose integrity when given the chance to low-fidelity upload myself. The abstractness of my "values" is precisely the fine-grainedness of my ability to attend to things, for my referentiality to penetrate, for my value pointers to point.
CGF: What do you mean by "reach of consequentialism", here? Because it seems to me like uploading yourself can be a consequentialist plan which "goes through atoms" -- specifically, your plan may need to correctly reason about atoms. The cold/unfeeling nature of your relationship with atoms does not seem to impair you in this respect.
FGF: Well, somehow I think that's in part thanks to the way I am embedded in the world, not stuck in a simulation of it.
CGF: I do not know what you mean. Surely you're stuck working with your brain's conception of the world, right?
FGF: But my brain is richly embedded in the world.
CGF: So is a computer.
FGF: No. The software is insulated from the world through layers of abstraction, which are only possible (at least, in their present form) because the hardware is insulated from the world through careful engineering. When I say my brain is richly embedded, I mean that it has rich interactions at every layer of abstraction.
CGF: You seem to be saying that the common functional picture of the brain, where its significant inputs and outputs are nerves, and it serves as an information-processing machine which consumes those inputs to produce those outputs, is wrong.
FGF: Dead wrong.
CGF: Would it be fair to characterize your position as a claim that there are significant inputs and outputs at lower levels of abstraction?
FGF: I'm a little bit worried that "input vs output" will be the wrong frame, but basically yes.
CGF: I'm still struggling to get what you think all of those other inputs and outputs are for. Sure, blood can carry adrenaline and other such chemicals -- even sugar and other stuff from food, so the brain is constantly in-tune with the body to some extent. But for a robot, the sugar thing is basically taken care of by checking battery fullness. The function of adrenaline can be simulated. What'll be missing?? Why are these relationships to lower levels critical rather than incidental??
FGF: They're for grounding my references. The proverbial brain-in-a-vat is not richly connected to its world. But we are. Descartes said that we had to doubt everything, because he thought we were a non-bodily intelligence beaming in from some other plane of existence. But we are bodies. It's the next evolution of "I think, therefore I am".
CGF: So suppose GPT6 is highly multimodal. It's been trained on all file formats, not just text. That includes sensorimotor logs for robots, so I can just go ahead and hook it up to a robot. It understands the visual scene, the auditory scene, the combination thereof. And it can understand the motor controls after a bit of experimentation gives it enough evidence. But every time I say "understand", I mean in the same way that GPT4 understands words.
FGF: So not real understanding.
CGF: Well, make me care about real understanding, here. What don't you think it can do?
FGF: Have an honest conversation where it tells you what it really thinks?
CGF: That's not doing a thing.
FGF: Oh come on. You're defining the problem away? Honesty isn't "task performance"-y enough for you?
CGF: alright, fine. So let's say we've trained honesty into it.
FGF: How do you mean?
CGF: All this mechinterp stuff. We've looked close at the NN. We've got automated tools that tell us what's going on in there. So we can make honesty detectors -- we compare the explanation of the AI's beliefs to the real AI beliefs, and we train for the two to line up.
FGF: Sounds like magic and fairy dust to me.
CGF: What do you mean?
FGF: You don't know what understanding means. You don't have a theory of what it is to refer to something. You can't benchmark success! How can you be confident that your mechinterp tools do what you want them to do, if you've got no idea what exactly you want them to do? You can only optimize for what you can get feedback on, right? That's why natural selection works. Because survival is an actual feedback signal. It's obvious, right? Panning for gold only works because you have physical access to the stuff, and you have a physical test which can separate the gold from the sand. AI has the symbol grounding problem, because it can't get feedback on the world outside of the computer. But brains are more "in the world". Bodies have this rich interconnected physicality, which then also connects us with the outside world.
CGF: Hm, I disagree with your whole philosophy, there. Agency is optimization at a distance. RL isn't the only game in town. Decision theory is all about optimization under uncertainty. I can never be completely certain about how much money is in my bank account. Everything is a proxy. But we're pretty good at optimizing the real thing, even though we can only get feedback on the proxies. Think about heat-seeking missiles. An individual missile just hits, or doesn't hit. It can't get direct feedback about that, because it's destroyed. But it can still intelligently home in on a target! I don't get direct feedback about the impact of charitable donations, but I give to charity anyway, and I do try to make use of the indirect feedback to intelligently choose!
FGF: But if you're using indirect feedback like that, you can't learn to correct your errors, in principle. You could just generally be wrong about how to judge charities.[5]
CGF: That's true, but I can intelligently balance that risk, as well. I could get increasingly good information for increasingly high effort. But be reasonable! I don't have to shove my face right up against the needy children. There's nothing special about "direct" feedback.
FGF: You're still living in the Cartesian dream. You're the brain in a vat. You're the epiphenomenal consciousness. You think you "don't ever get direct feedback"? What are you, living with a layer of insulation between you and the world?
CGF: Essentially, yes. Skin, muscle, skull...
FGF: You think you're a disembodied brain!
CGF: I'm just a realist. I know what I know thanks to my nerves. The signal isn't perfect. If you're claiming that "seeing something with your own eyes" counts as direct feedback, I've got some studies on court witness reliability to show you.
FGF: Well, a filter doesn't have to be perfect in order to perform its function. Direct feedback doesn't have to be perfect.
CGF: What was """direct""" supposed to mean, then?
FGF: Alright, fine, I shall refine my statement. You need fairly direct feedback in order to optimize.
CGF: That's all the concession I need. So long as feedback is "fairly direct" rather than absolutely direct, we're out of the range of model-free learning and firmly into model-based, because we need a model to interpret the feedback we're getting.
FGF: Where does the correctness of the model originally come from, though, if not from contact with reality? Humans are the only things launching space-probes that we know of, and we're the product of natural selection, which is as model-free as you can get.
CGF: Well, we're arguing about what artificial systems might look like. So an obvious potential answer could be: humans could bake it in.
FGF: On my hypothesis, the baked-in understanding of the world will be brittle and unable to adapt. It's like if humans give some of their referential "reach" to robots, but don't give robots the wellspring where we got our "reach" from.
CGF: I think we've already moved beyond that era of AI. Systems used to be brittle, but now they're flexible. Also, theories like Infra-Bayesianism and Logical Induction show in theory how model-based agents don't need to be brittle like that.
FGF: The importance of skin-in-the-game is obviously clear to rationalists. Betting is a tax on bullshit. So why wouldn't skin-in-the-game be relevant to cognition and consequentialism? It is, but it isn't apparent to the computationalist. Instead, the cognition is "inside" the hardware, "independent" of it. You just have to have "enough compute" and then mindness happens "within".
(FGF's caricature of CGF, above.)
CGF: I don't buy that biological precarity of my brain cells counts as "skin-in-the-game" for the purpose of, say, launching a highly engineered probe at Jupiter. Besides, I don't want to argue against the concept of feedback entirely; I only want to argue for the standard information-processing view of how feedback helps!
FGF: Think of your skin-in-the-game like a basis in linear algebra. You can add the basis vectors together to reach much further than the basis vectors, and in different directions. But you can't reach beyond the span. Making a 3rd vector from two basis vectors doesn't mean you've gone three-dimensional. Jupiter is within the "span" of human care, reachable from what skin-in-the-game we have. We can look at the insides of stars in a way that ants can't.
CGF: I think that's mostly because we have more processing power than ants!
FGF: This is about mattering, not just processing power. You wouldn't be horrified if I moved twenty molecules around in your stomach—it feels like it actually doesn't matter. And it's true, from the perspective of your values, that it doesn't matter. But for something that might have a precarious house in those twenty molecules, it might. Additionally, if that thing is "connected" to you and can be amplified up to your attention and caringness, it will matter to you too.
CGF: Give a human sufficiently powerful sensors and actuators, and we start making art out of atoms.
FGF: Is it trivial to add new sensory modalities to your brain? Integration needs to happen! You can attach a datastream to your cochlea, and yet you will not be able to make much use of it short of building a miniature civilization that sensemakes that stream. In that case, you are attaching an intelligence, nearly a being, to process it. To do your integration at train time and then claim "see, no integration" is passing the buck.
CGF: I don't think integration is the hard part? Like, I can just look in the microscope.
FGF: Sure, but that just shows the microscopic to be within your existing span. There are other sensory modalities which it's much more difficult for humans to visualize (or analogize to any other sensory modalities than the visual, for that matter).
CGF: I feel like this idea of "span" is as much of a concession to my hypothesis as your earlier concession where you admitted that feedback doesn't have to be "direct". You don't have any solid suggestions for how to predict the "span" of an agent. So I've got room to argue that the "span" of von-neumann-architecture agents will turn out to be the same as the "span" of humans.
FGF: Ah, the classical sin of model-based approaches; trying to argue that the whole world is already inside of your model.
CGF: I do agree that there's some sort of universality argument sitting behind the success of von Neumann architecture, yes!
FGF: A universality argument which only deals with abstract mathematical functions, and says nothing about the physical world! This doesn't change the fact that you can't use two basis vectors to span a three-dimensional space. Von Neumann architectures are "flat". They only live at one level of abstraction.
CGF: I agree that we can talk about something like your "reach" or "span" idea. A Chess-playing bot, let's say one using classic tree-search algorithms only, with no neural networks -- it obviously "lives in" its own small world, with no conception of greater reality. But I claim that it's actually really, really hard to insulate things from greater reality as they get increasingly intelligent. At some level of sophistication, a Go-playing bot could start to guess about humans, simply because it's that smart and has seen that much evidence about how humans play Go.[6] In the analogy, I suppose you could say that von-neumann-architecture agents aren't quite "2d" when you zoom in really really close; they have a little bit of the third dimension, which can add up as they gain intelligence. Or you could say, it turns out we aren't doing linear algebra in a perfectly linear space overall; the space is warped enough that you can go out in one direction and eventually loop back and return from a new direction, with enough steps of reasoning.
FGF: Von Neumann architectures won't be able to do Gendlin's Focusing. They won't be able to access the larger intelligence of their bodies by sitting with themselves and patiently, gently trying to communicate with themselves. And this will turn out to be an important missing capability.
CGF: I think we have very different models of what goes on in Gendlin's Focusing. The simplest way I can state my view is that it's system 1 and system 2 communicating, not the brain and the body. Implicit things are being made explicit.
FGF: Maybe you haven't meditated very much. It's possible to just be with your body or be with your breath without conceptualizing it at all.
CGF: The fact that you need to meditate in order to do that shows that it's something going on inside the brain.
FGF: I think the brain is more getting out of the way, than anything else.
CGF: But why would the brain need to get out of the way? Get out of the way so that what can happen? Attention on the breath is not the breath itself. It's a model of the breath. I think it feels like you've stopped modeling the breath because you've stopped modeling modeling the breath, that's all. You're still living inside your own mental model; there's nowhere else that you can live.
FGF: By analogy, you should be able to run a large company without communicating with the employees. That's basically the same thing as your von Neumann picture of agency. The low-level employees can't raise issues. Or you "add sensors" so they can tell you everything, and then it's a huge mess. You don't have a natural concept of salience network so that things can flow up hierarchically. You don't know how to delegate.
CGF: I reject the analogy, there. I wouldn't try to run a company in the same way I'd try to build a robot. It is important for humans and animals to have some reflexes that happen in the spinal cord, because the round-trip signal-time to the brain is too slow for quick reflexes. But the same speed limits do not apply to electronics. I can't buy a smarter CEO to process all of the input from low-level employees of a company, but I can buy a faster chip.
FGF: You're imagining as if intelligence can have some single point-source which merely needs to be consulted. When in fact, intelligence is more like a living ecosystem, a jungle of reactivity.
CGF: I'm basing my model on the empirical observation of encephalization. It seems like there is some significant advantage to aggregating all of the intelligence together, as much as possible given communication-speed and reaction-time constraints. I would say that's probably because there are significant patterns to take advantage of, when we aggregate the information together like that.
FGF: I still think you should take corporate structure as some evidence about the nature of intelligence. A centralized brain still needs salience networks. Whatever else you think about active inference, this is one thing they get right.
CGF: I'm fine with that from a computational perspective! But I can understand all of that in terms of sensors and information processing. You're claiming we have to move beyond that, for some reason.
[1] I'm aware of ideas such as neural Darwinism that support something like this, but is it true? Neurons do receive a baseline of metabolic support; I don't think neurons actually die if they're not doing anything. I'm not sure how literal Dennett needs the analogy to be to support his thesis.
[2] This view can also be found in a brief 2008 essay, calling into question the idea that Deacon changed Dennett's mind about this particular thing. (Incomplete Nature was published in 2011.) However, in the book review I mentioned, Dennett does mention that he was grasping at similar ideas before reading Deacon:
I myself have been trying in recent years to say quite a few of the things Deacon says more clearly here. So close and yet so far! I tried; he succeeded, a verdict I would apply to the other contenders in equal measure.
[3] Real Sahil is against this kind of levels-ism but does think the paragraph here points at an important sort of integration between minds and their substrates, which is absent in modern computers.
[4] That is, the way genes are interpreted into proteins rarely changes.
[5] (This particular line of argument, about feedback, doesn't feel very Sahil-like, fyi; at this point I'm just exploring some hypothetical version of FGF.)
[6] CGF is imagining a Go-playing AI with a library of human games, here. Deducing something about humans from only the rules of Go itself might also be possible in principle, but would be a far more monumental task, at best.