Benjamin Spiegel

Whole Brain Emulation: No Progress on C. elegans After 10 Years

I agree with you, though I personally wouldn't classify this as purely an intuition, since it is informed by reasoning grounded in scientific knowledge about the world. Chalmers doesn't think that Joe could exist because it doesn't seem right to him. You believe your statement because you know some scientific truths about how things in our world come to be (i.e., natural selection) and use this knowledge to reason about other things that exist in the world (consciousness), not merely because the assertion seems right to you.

Whole Brain Emulation: No Progress on C. elegans After 10 Years

Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?

No, we cannot, just as we cannot know with certainty whether a mind-upload is conscious. But the fact that we presume our 2021 brain to be a conscious agent related to our 2011 brain, while being unable to verify the properties that enabled the conscious connection between the two, does not mean that those properties do not exist.

It seems to me that this can't be verified by any experiment, and thus must be cut off by the Newton's Flaming Laser Sword.

Perhaps we presently have no way of testing whether some matter is conscious. That is not equivalent to saying that, in principle, the conscious state of some matter cannot be tested. We may one day make progress on the hard problem of consciousness and become able to perform these experiments. Imagine making this argument throughout history, before microscopes, telescopes, and hadron colliders. We can now sheathe Newton's Flaming Laser Sword.

I can't say the same about any introspection-based observations that can't be experimentally verified.

I believe this hinges on an epistemic question about whether we can have knowledge of anything using our observations alone. I think even a skeptic would say that she has consciousness, as the fact that one is conscious may be the only thing one can know with certainty about oneself. You don't need to verify any specific introspective observation; the act of introspection itself should be enough for someone to verify that they are conscious.

The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.

This claim concerns the reliability of the human brain in verifying the truth value of certain propositions or identifying specific, individuable experiences. Knowing whether one is conscious is not strictly a matter of verifying a proposition, nor of identifying an individuable experience. It is only a matter of verifying whether one has any experience whatsoever, which should be possible. Whether I believe your claim to consciousness is a different problem.

Whole Brain Emulation: No Progress on C. elegans After 10 Years

What a great read! I suppose I'm not convinced that Fading Qualia are an empirical impossibility, and therefore that there must exist a moment of Suddenly Disappearing Qualia when the last neuron is replaced with a silicon chip. And even if there were such a moment, if consciousness is quantized (just like other things in the universe), then there is nothing wrong in principle with Suddenly Disappearing Qualia: removing a single quantum of qualia from a system with no other qualia is like removing the last photon from a vacuum.
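
To make the quantized picture concrete, here is a minimal toy model (my own sketch, not anything from Chalmers or the original post; the linear intensity function is an assumption purely for illustration):

```python
# Toy model (purely illustrative): treat qualia as discrete quanta, like
# photons in a cavity. "Experience" fades smoothly while many quanta
# remain, but the final step from one quantum to zero is abrupt.

def experience_intensity(num_quanta: int) -> float:
    """Total intensity of experience, assuming each quantum adds one unit."""
    return float(num_quanta)

for n in range(5, -1, -1):
    print(f"{n} quanta -> intensity {experience_intensity(n)}")
# 5 -> 4 -> 3 -> 2 -> 1 looks like Fading Qualia, but the last transition,
# 1 -> 0, is a sudden disappearance: there is no half-quantum state, just
# as a cavity contains either one photon or none.
```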

Joe is an interesting character whom Chalmers finds implausible, but aside from his rubbing up against a faint intuition, I have no reason to believe that Joe is experiencing Fading Qualia. There is no indication that the workings of consciousness should obey whatever intuitions we may have about them.

Whole Brain Emulation: No Progress on C. elegans After 10 Years

There are a lot of interesting points here, but I disagree (or am hesitant to agree) with most of them.

If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner. Because you've already survived a mind uploading into a new brain.

Of course, I'm not disputing that mind-uploading is theoretically possible. It seems likely that it is, although it will probably be extremely complex. There's something to be said about the substrate independence of computation and, separately, of consciousness. No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved, such as spin, charge, entanglement, or some yet-undiscovered state of matter. So long as we do not know which of these properties are necessary for consciousness (or even whether they are relevant to consciousness at all), we cannot know with certainty that we have uploaded a conscious mind.

If a machine behaves like me, it is me. Whatever we share some unmeasurable sameness - is of no importance for me. 

The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another.

These statements ring some loud alarm bells for me. It seems that you are rejecting consciousness itself. I suppose you could do that, but I don't think many reasonable people would agree with you. To truly gauge whether you believe you are conscious, ask yourself, "have I ever experienced pain?" If you believe the answer is "yes," then you should at least be convinced that you are conscious.

What you are suggesting at the end there is that WBE = mind-uploading. I'm not sure many people would agree with that assertion.

Whole Brain Emulation: No Progress on C. elegans After 10 Years

You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.

I'm pretty sure the problem with this is that we don't know what it is about the human brain that gives rise to consciousness, and therefore we don't know whether we are actually emulating the consciousness-generating thing when we do WBE. Human conscious experience could be the biological computation of neurons + X. We might be able to emulate the biological computation perfectly, but if X is necessary for conscious experience, then we've just created a philosophical zombie. To find out whether our emulation is sufficient to produce consciousness, we would need to find out what X is and how to emulate it. That is precisely the hard problem of consciousness.

Even if biological computation is sufficient for generating consciousness, we will have no way of knowing until we solve the hard problem of consciousness.
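
To put the worry in concrete terms, here is a toy sketch (the classes and the property X are invented purely for illustration, not a claim about any real emulation): two systems can share an identical input-output mapping while differing in X, so no behavioral test can tell them apart.

```python
# Toy sketch (invented names, purely illustrative): two systems with an
# identical input -> output mapping, where only one carries the extra
# property X that consciousness might require.

class BiologicalBrain:
    has_X = True   # whatever X is: the consciousness-generating ingredient

    def respond(self, stimulus: str) -> str:
        return f"reaction to {stimulus}"

class WholeBrainEmulation:
    has_X = False  # behaviorally perfect copy, but X was never emulated

    def respond(self, stimulus: str) -> str:
        return f"reaction to {stimulus}"  # identical mapping

def behavioral_test(system, stimuli):
    """Everything an outside experimenter can observe about the system."""
    return [system.respond(s) for s in stimuli]

stimuli = ["a pinprick", "a sunset", "a question about qualia"]
assert behavioral_test(BiologicalBrain(), stimuli) == \
       behavioral_test(WholeBrainEmulation(), stimuli)
# The assertion passes: no behavioral experiment distinguishes the two,
# even though they differ in has_X, which is exactly the thing we care about.
```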

Whole Brain Emulation: No Progress on C. elegans After 10 Years

This really depends on whether you believe a mind-upload retains the same conscious agent as the original brain. To establish that it does, we would need to solve the hard problem of consciousness, which seems significantly harder than WBE itself. The gap between solving WBE and solving the hard problem of consciousness is, in my opinion, so vast that getting excited about mind-uploading when WBE progress is made is like getting excited about self-propelled cars after making progress on horse-drawn wagons. In both cases, little progress has been made on the most significant component of the desired thing.

The Best Software For Every Need

I second this! I love writing essays in Typora; it's great for note-taking as well.

The Apprentice Thread

[APPRENTICE] Working on and thinking about major problems in neurosymbolic AI / AGI. I:

  • am three months from finishing undergrad with a BS in Computers and Minds (I designed this degree to be a comprehensive AI degree). I have 1.5 years of academic research experience working with some core RL folks at my university. Considering grad schools for next fall but still unsure.
  • have an academic background in:
    • AI subfields ([hierarchical] reinforcement learning (options framework), more general sequential decision-making, grounding language in behavior). Interested in re-integrating the many subfields of AI; I have submitted a first-author paper on language-assisted policy search to a conference.
    • linguistics (pragmatics, compositional semantics)
    • psychology (personality [there's plenty of insight from psychology that we have yet to integrate into AI])
    • philosophy (a few ethical dimensions of AI [I co-designed a course on the ethical dilemmas of the future of technology], philosophy of mind, epistemology)
  • am additionally interested in these things, which I don't know as much about:
    • representation learning
    • planning
    • POMDPs
    • more structured MDP formalisms
  • need help thinking through research topics and ideas worth pursuing.
  • am looking to sharpen my focus and understanding of how AGI might work under the hood (I happen to think the most robust and intuitive route is via neurosymbolic AI).


[MENTOR] Anything I mentioned in my academic background in the apprentice section. I should have more time at the start of 2022.