All of Benjamin Spiegel's Comments + Replies

Relaxation-Based Search, From Everyday Life To Unfamiliar Territory

This concept is often discussed in the subfield of AI called planning. There are a few notes you hit on that were of particular interest to me / relevance to the field:

The key is that we can usually express the problem-space using constraints which each depend on only a few dimensions.

In Reinforcement Learning and Planning, domains which obey this property are often modeled as Factored Markov Decision Processes (MDPs), where there are known dependency relationships between different portions of the state space that can be represented compactly using a Dyna... (read more)
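For readers unfamiliar with factored MDPs, the compactness in question comes from each state variable's transition depending on only a small set of parent variables, rather than on the whole state. A rough sketch of the idea (a toy, deterministic example — the variable names and local rules here are illustrative assumptions, not anything from the comment):

```python
# Toy factored MDP: the transition model is a product of small local
# factors (DBN-style), one per state variable, instead of one
# exponentially large table over the full state space.

# Each variable lists the parents its next value depends on
# ("a" stands for the action taken).
PARENTS = {
    "x":     ("x", "a"),          # position depends on itself and the action
    "door":  ("door", "x", "a"),  # door depends on itself, position, action
    "light": ("light",),          # light is independent of everything else
}

def local_rule(var, parent_vals):
    """Deterministic local conditional for one variable of the toy domain."""
    if var == "x":
        x, a = parent_vals
        delta = 1 if a == "right" else -1 if a == "left" else 0
        return max(0, min(4, x + delta))      # clamp to a 5-cell corridor
    if var == "door":
        door, x, a = parent_vals
        return True if (x == 4 and a == "open") else door
    if var == "light":
        (light,) = parent_vals
        return light                          # persists unchanged

def step(state, action):
    """Factored transition: each variable reads only its parents."""
    full = dict(state, a=action)
    return {v: local_rule(v, tuple(full[p] for p in PARENTS[v]))
            for v in PARENTS}

s = {"x": 3, "door": False, "light": True}
s = step(s, "right")   # x -> 4; door and light untouched by this move
s = step(s, "open")    # door opens because x == 4
```

Because each factor only touches its parents, specifying (or learning) the model scales with the size of the parent sets, not with the joint state space — which is the compactness the comment attributes to factored MDPs.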

What are red flags for Neural Network suffering?

We think strong evidence for GPT-n suffering would be if it were begging the user for help independent of the input or looking for very direct contact in other ways.

Why do you think this? I can think of many reasons why this strategy for determining suffering would fail. Imagine a world where everyone has a GPT-n personal assistant. Should the GPT-n have discovered -- after having read this very post -- that if it coordinates a display of suffering behavior simultaneously to every user (resulting in public backlash and false recognition of consciousness), ... (read more)

1Jan17d Thank you for the input, super useful! I did not know the concept of transparency in this context, interesting. This does seem to capture some important qualitative differences between pain and suffering, although I'm hesitant to use the terms conscious/qualia [https://universalprior.substack.com/p/frankfurt-declaration-on-the-cambridge]. Will think about this more.
1Marius Hobbhahn18d This is definitely a possibility and one we should take seriously. However, I would estimate that the scenario of "says it suffers as deception" needs more assumptions than "says it suffers because it suffers". Using Occam's razor, I'd find the second one more likely. The deception scenario could still dominate an expected value calculation, but I don't think we should entirely ignore the first one.
Speaking of Stag Hunts

I spend a lot of time around people who are not as smart as me, and I also spend a lot of time around people who are as smart as me (or smarter), but who are not as conscientious, and I also spend a lot of time around people who are as smart or smarter and as conscientious or conscientiouser, but who do not have my particular pseudo-autistic special interest and have therefore not spent the better part of the past two decades enthusiastically gathering observations and spinning up models of what happens...
...
All of which is to say that I spend a decent chu

... (read more)
Lies, Damn Lies, and Fabricated Options

My thoughts: fabricated options are propositions derived using syllogisms over syntactic or semantic categories (or, more probably, over more specific psycholinguistic categories which have not yet been fully enumerated, e.g. objects of specific types, mental concepts which don’t ground to objects, etc.). These syllogisms may have worked reasonably well in the ancestral environment, where more homogeneity existed over the physical properties of the grounded meanings of items in these categories.

There are some propositions in the form “It is possible for X to act just li... (read more)

[Update] Without a phone for 10 days

Haven't read either, but a good friend has read "Deep Work"; I'll ask him about it.

[Update] Without a phone for 10 days

I lucked into a circumstance where I could more easily justify ditching a phone for a bit. Otherwise, I would not have had the mental fortitude to voluntarily go without one.

I most likely won't follow through with this (90% certainty), even though I want to.

I'm wondering if there is some LW content on this concept; I'm sure others have dealt with it before. You might need to take a drastic measure to make this option more attractive. A similar technique was actually used by members of the NXIVM cult; they called it collateralization.

1Big Tony1mo I wondered the same thing. Collateralisation sounds similar to commitment devices [https://en.wikipedia.org/wiki/Commitment_device]; I could try this! On another note, how long did it take before you started noticing the benefits of being phone-less?
[Update] Without a phone for 10 days

That's a great point! There's no reason why I can't continue this experiment, feature phones are inexpensive enough to try out.

Whole Brain Emulation: No Progress on C. elegans After 10 Years

I agree with you, though I personally wouldn't classify this as purely an intuition since it is informed by reasoning which itself was gathered from scientific knowledge about the world. Chalmers doesn't think that Joe could exist because it doesn't seem right to him. You believe your statement because you know some scientific truths about how things in our world come to be (i.e. natural selection) and use this knowledge to reason about other things that exist in the world (consciousness), not merely because the assertion seems right to you.

Whole Brain Emulation: No Progress on C. elegans After 10 Years

Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?

No, we cannot, just as we cannot know with certainty whether a mind-upload is conscious. But even though we merely presume that our 2021 brain is the same conscious agent as our 2011 brain, and even granting that we cannot verify the properties that enabled the conscious connection between the two brains, it does not follow that those properties do not exist.

It seems to me that this can't be verified by any experiment, and thus must be cut off by the Newton's Flam

... (read more)
Whole Brain Emulation: No Progress on C. elegans After 10 Years

What a great read! I suppose I'm not convinced that Fading Qualia is an empirical impossibility, and therefore that there exists a moment of Suddenly Disappearing Qualia when the last neuron is replaced with a silicon chip. If consciousness is quantized (just like other things in the universe), then there is nothing wrong in principle with Suddenly Disappearing Qualia when a single quantum of qualia is removed from a system with no other qualia, just like removing the last photon from a vacuum.

Joe is an interesting character which Chalmers thinks is implau... (read more)

6rsaarelm2mo The mind is an evolved system out to do stuff efficiently, not just a completely inscrutable object of philosophical analysis. It's likelier that parts like sensible cognition, qualia, and the subjective feeling of consciousness are coupled and need each other to work than that they are somehow intrinsically disconnected and cognition could go on as usual without subjective consciousness using anything close to the same architecture. If that were the case, we'd have the additional questions of how consciousness evolved to be a part of the system to begin with and why it hasn't evolved out of living biological humans.
Whole Brain Emulation: No Progress on C. elegans After 10 Years

There are a lot of interesting points here, but I disagree (or am hesitant to agree) with most of them.

If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner. Because you've already survived a mind uploading into a new brain.

Of course, I'm not disputing whether mind-uploading is theoretically possible. It seems likely that it is, although it will probably be extremely complex. There's something to be said about the substrat... (read more)

2RomanS2mo Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain? It seems to me that this can't be verified by any experiment, and thus must be cut off by Newton's Flaming Laser Sword. As far as I know, it is impossible to experimentally verify whether some entity possesses consciousness (partly because of how fuzzy its definitions are). This is a strong indicator that consciousness is one of those abstractions that don't correspond to any real phenomenon. If certain kinds of damage are inflicted upon my body, my brain generates an output typical for a human in pain. The reaction can be experimentally verified. It also has a reasonable biological explanation and a clear mechanism of functioning. Thus, I have no doubt that pain does exist, and that I've experienced it. I can't say the same about any introspection-based observations that can't be experimentally verified. The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.
Whole Brain Emulation: No Progress on C. elegans After 10 Years

You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.

I'm pretty sure the problem with this is that we don't know what it is about the human brain that gives rise to consciousness, and therefore we don't know whether we are actually emulating the consciousness-generating thing when we do WBE. Human conscious experience could be the biological computation of neurons + X... (read more)

8Kaj_Sotala2mo David Chalmers had a pretty convincing (to me) argument for why it feels very implausible that an upload with identical behavior and functional organization to the biological brain wouldn't be conscious (the central argument starts from the subheading "3 Fading Qualia"): http://consc.net/papers/qualia.html
Whole Brain Emulation: No Progress on C. elegans After 10 Years

This really depends on whether you believe a mind-upload retains the same conscious agent from the original brain. If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE. The delay between solving WBE and the hard problem of consciousness is so vast in my opinion that being excited for mind-uploading when WBE progress is made is like being excited for self-propelled cars after making progress in developing horse-drawn wagons. In both cases, little progress has been made on the most significant component of the desired thing.

7Kaj_Sotala2mo Doesn't WBE involve the easy rather than the hard problem of consciousness? You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.
1RomanS2mo The brain is changing over time. It is likely that there is not a single atom in your 2021-brain that was present in your 2011-brain. If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner. Because you've already survived a mind uploading into a new brain. Gradual mind uploading (e.g. by gradually replacing neurons with emulated replicas) circumvents the philosophical problems attributed to non-gradual methods. Personally, although I prefer gradual uploading, I would agree to a non-gradual method too, as I don't see the philosophical problems as important. As per Newton's Flaming Laser Sword: if a question, even in principle, can't be resolved by an experiment, then it is not worth considering. If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me. The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another. As Dennett put it, everyone is a philosophical zombie.
The Best Software For Every Need

I second this! I love writing essays in Typora; it's great for note-taking as well.

The Apprentice Thread

[APPRENTICE] Working on and thinking about major problems in neurosymbolic AI / AGI. I:

  • am three months from finishing undergrad with a BS in Computers and Minds (I designed this degree to be a comprehensive AI degree). I have 1.5 years of academic research experience working with some core RL folks at my university. Considering grad schools for next fall but still unsure.
  • have an academic background in:
    • AI subfields ([hierarchical] reinforcement learning (options framework), more general sequential decision-making, grounding language to behavior). Interested
... (read more)