Rafael Harth

I'm an independent researcher currently working on a sequence of posts about consciousness. You can send me anonymous feedback here: https://www.admonymous.co/rafaelharth. If it's about a post, you can add [q] or [nq] at the end to indicate whether or not you want me to quote it in the comment section.

Sequences

Consciousness Discourse
Literature Summaries
Factored Cognition
Understanding Machine Learning

Comments

As someone who expects LLMs to be a dead end, I nonetheless think this post makes a valid point and does so using reasonable, easy-to-understand arguments. I voted +1.

As I already commented, I think the numbers here are such that the post should be considered quite important, even though I agree that it fails at establishing that fish can suffer (and perhaps lacks a comparison to fish in the wild). If there were another post with a more nuanced stance on this point, I'd vote for that one instead, but there isn't. I think fish wellbeing should be a bigger part of the conversation than it is right now.

It's also very unpleasant to think or write about these things, so I'm more willing to overlook flaws than I'd be by default.

Shape can most certainly be emulated by a digital computer. The theory in the paper you linked would make a brain simulation easier, not harder, and the authors would agree with that.

Would you bet on this claim? We could probably email James Pang to resolve a bet. (Edit: I put about 30% on Pang saying that it makes simulation easier, but not necessarily 70% on him saying it makes simulation harder, so I'd primarily be interested in a bet if "no idea" also counts as a win for me.)

It is not proposing that we need to think about something other than neuronal axons and dendrites passing information, but rather about how to think about population dynamics.

Really? Isn't the shape of the brain something other than axons and dendrites?

The model used in the paper doesn't take any information about neurons into account, it's just based on a mesh of the geometry of the particular brain region.

So this is the opposite of proposing that a more detailed model of brain function is necessary; rather, it proposes a coarser-grained approximation.

And they're not addressing what it would take to perfectly understand or reproduce brain dynamics, just a way to approximately understand them.

The results (at least the flagship result) are about a coarse approximation, but the claim that anatomy restricts function still seems to me to contradict the neuron doctrine.

Admittedly the neuron doctrine isn't well-defined, and there are interpretations where there's no contradiction. But shape in particular is a property that can't be emulated by digital computers, so it's a contradiction as far as the OP goes (if in fact the paper is onto something).

I mean, we have formalized simplicity metrics (Solomonoff Induction, minimal description length) for a reason, and that reason is precisely so that we don't need to rely on vague intuitions to determine whether a given theory (like wave function collapse) is plausible.
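To spell out the kind of formalization I mean (these are the standard textbook definitions, nothing specific to this exchange): the Solomonoff prior weights a hypothesis by the length of the shortest program that produces it, and minimal description length picks the hypothesis that compresses the data best.

```latex
% Solomonoff prior: a hypothesis h, encoded as a program of length \ell(h)
% for a universal machine, gets prior weight
P(h) \propto 2^{-\ell(h)}

% Minimal description length: prefer the hypothesis minimizing the combined
% code length of the hypothesis and of the data given the hypothesis
h^{*} = \arg\min_{h} \big[ L(h) + L(D \mid h) \big]
```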

No reputable neuroscientist argued against it to any strong degree, just for additional supportive methods of information transmission.

I don't think this is correct. This paper argues explicitly against the neuron doctrine (enough so that they've put it into the first two sentences of the abstract), is published in a prestigious journal, has a far-above-average citation count, and, as far as I can see, is written by several authors who are considered perfectly fine/serious academics. Not any huge names, but I think enough to clear the "reputable" bar.

I don't think this is very strong evidence, since I think you can find people with real degrees supporting all sorts of contradictory views. So I don't think it really presents an issue for your position, just for how you've phrased it here.

Two thoughts here:

  • I feel like the actual crux between you and OP is with the claim in post #2 that the brain operates outside the neuron doctrine to a significant extent. This seems to be what your back-and-forth is heading toward; OP is fine with pseudo-randomness as long as it doesn't play a nontrivial computational function in the brain, so the actual important question is not anything about pseudo-randomness but just whether such computational functions exist. (But maybe I'm missing something; also, I kind of feel like this is what most people's objection to the sequence 'should' be, so I might have tunnel vision here.)

  • (Mostly unrelated to the debate, just trying to improve my theory of mind, sorry in advance if this question is annoying.) I don't get what you mean when you say stuff like "would be conscious (to the extent that I am), and it would be my consciousness (to a similar extent that I am)," since afaik you don't actually believe that there is a fact of the matter as to the answers to these questions. Some possibilities for what I think you could mean:

    1. I don't actually think these questions are coherent, but I'm pretending as if I did for the sake of argument
    2. I'm just using consciousness/identity as fuzzy categories here because I assume that the realist conclusions must align with the intuitive judgments (i.e., if it seems like the fuzzy category 'consciousness' applies similarly to both the brain and the simulation, then probably the realist will be forced to say that their consciousness is also the same)
    3. Actually there is a question worth debating here even if consciousness is just a fuzzy category because ???
    4. Actually I'm genuinely entertaining the realist view now
    5. Actually I reject the strict realist/anti-realist distinction because ???

I think causal closure of the kind that matters here just means that the abstract description (in this case, of the brain as performing an algorithm/computation) captures all relevant features of the physical description, not that it has no dependence on inputs. It should probably be renamed something like "abstraction adequacy" (making this up right now, I don't have a term on the shelf for this property). Abstraction (in)adequacy is relevant for CF, I believe (I think it's straightforward why?). Randomness probably doesn't matter, since you can include it in the abstract description.
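To illustrate what I mean by abstraction adequacy, here's a toy sketch (the function names and toy dynamics are made up purely for illustration; the real fine-grained description would of course be vastly more detailed): the abstraction is adequate if, for every input we care about, it reproduces the relevant features of the fine-grained description, and a random seed can simply be treated as another input.

```python
import random

def fine_grained_system(inputs, noise_seed):
    """Stand-in for the detailed physical description (hypothetical toy dynamics)."""
    rng = random.Random(noise_seed)
    return sum(inputs) + rng.gauss(0, 1e-9)  # input-dependent result plus tiny noise

def abstract_description(inputs, noise_seed):
    """Stand-in for the coarse algorithmic description.
    Randomness is just included in the abstraction (the seed is another input)."""
    rng = random.Random(noise_seed)
    return sum(inputs) + rng.gauss(0, 1e-9)

def abstraction_is_adequate(test_cases, tolerance=1e-6):
    """Adequate = the abstraction captures all relevant features of the
    fine-grained description across every input we care about."""
    return all(
        abs(fine_grained_system(x, seed) - abstract_description(x, seed)) < tolerance
        for x, seed in test_cases
    )

print(abstraction_is_adequate([([1.0, 2.0, 3.0], 0), ([4.0, 5.0], 42)]))  # True here
```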

You're using 'bridging law' differently from how I was, so let me rephrase.

To explain subjective experience, you need bridging-laws-as-you-define-them. But it could be that consciousness is functional and the bridging laws are implicit in the description of the universe, rather than explicit. Put differently, the bridging laws follow as a logical consequence of how the remaining universe is defined, rather than being an additional degree of freedom.[1]

In that case, since bridging laws do not add to the length of the program,[2] Solomonoff Induction will favor a universe in which they're the same for everyone, since this is what happens by default (you'd have a hard time imagining that bridging laws follow by logical necessity but are different for different people). In fact, there's a sense in which the program that SI finds is the same as the program SI would find for an illusionist universe; the difference is just about whether you think this program implies the existence of implicit bridging laws. But in neither case is there an explicit set of bridging laws that adds to the length of the program.
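In description-length terms, the argument compresses to the following (this is just a restatement of the paragraph above and of footnote 2, not an additional claim):

```latex
% If the bridging laws B have to be written out explicitly, the
% universe-program pays for them:
\ell(U_{\text{explicit}}) = \ell(U_{\text{physics}}) + \ell(B)

% If they follow as a logical consequence of the physics, they cost nothing:
\ell(U_{\text{implicit}}) = \ell(U_{\text{physics}})

% Solomonoff Induction weights programs by 2^{-\ell}, so it favors the
% implicit version by a factor of
\frac{P(U_{\text{implicit}})}{P(U_{\text{explicit}})} = 2^{\ell(B)}
```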


  1. Most of Eliezer's anti-zombie sequence, especially Zombies Redacted, can be viewed as an argument for bridging laws being implicit rather than explicit. He phrases this as "consciousness happens within physics" in that post. ↩︎

  2. Also arguable, but something I feel very strongly about; I have an unpublished post where I argue at length that (and why) logical implications shouldn't increase program length in Solomonoff Induction. ↩︎

I reacted locally invalid (but didn't downvote either comment) because I think "computation" as OP is using it is about the level of granularity/abstraction at which consciousness is located, and I think it's logically coherent to believe both (1) materialism[1] and (2) that consciousness is located at a fundamental/non-abstract level.

To make a very unrealistic analogy that I think nonetheless makes the point: suppose you believed that all ball-and-disk integrators were conscious. Do you automatically believe that consciousness can be defined with a computation? Not necessarily -- you could have a theory according to which a digital computer computing the same integrals is not conscious (since, again, consciousness is about the fine-grained physical steps, rather than the abstracted computational steps, and a digital computer calculating them performs very different physical steps than a ball-and-disk integrator doing the same). The only way you now care about "computation" is if you think "computation" does refer to low-level physical steps. In that case, your implication is correct, but this isn't what OP means, and OP did define their terms.
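As a minimal sketch of the contrast (the code is mine, purely illustrative): here is a digital computer computing the same kind of integral a ball-and-disk integrator computes mechanically. The abstract computation is the same, but the physical steps (floating-point additions in silicon vs. a ball rolling on a spinning disk) are completely different.

```python
def integrate_trapezoid(f, a, b, n=10_000):
    """Approximate the integral of f over [a, b] with the trapezoid rule.
    A ball-and-disk integrator accumulates the same quantity mechanically;
    here it is accumulated via floating-point arithmetic instead."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Abstractly the "same" computation the mechanical device performs for this input:
print(integrate_trapezoid(lambda x: x**2, 0.0, 1.0))  # ≈ 0.3333
```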


  1. as OP defines the term; in my terminology, materialism means something different ↩︎
