Since the early 21st century, some transhumanists and futurist researchers have claimed that Whole Brain Emulation (WBE) is not merely science fiction: although still hypothetical, it is said to be a potentially viable technology in the near future. Such beliefs have attracted significant attention in tech communities such as LessWrong.

In 2011 on LessWrong, jefftk did a literature review on the emulation of a worm, C. elegans, as an indicator of WBE research progress.

Because the human brain is so large, and we are so far from having the technical capacity to scan or emulate it, it's difficult to evaluate progress.  Some other organisms, however, have much smaller brains: the nematode C. elegans has only 302 cells in its entire nervous system.  It is extremely well studied and well understood, having gone through heavy use as a research animal for decades. Since at least 1986 we've known the full neural connectivity of C. elegans, something that would take decades and a huge amount of work to get for humans.  At 302 neurons, simulation has been within our computational capacity for at least that long.  With 25 years to work on it, shouldn't we be able to 'upload' a nematode by now?

Three research projects ran from the 1990s through the 2000s, but all were dead ends that fell short of their research goals, painting a rather pessimistic picture of WBE. However, immediately after the initial publication of that post, LW readers Stephen Larson (slarson) and David Dalrymple (davidad) pointed out in the comments that they were working on the problem; their two ongoing projects made the future look promising again.

The first was the OpenWorm project, coordinated by slarson. Its goal was to create a complete model and simulation of C. elegans, and to release all tools and data as free and open-source software. An early task completed by the project was implementing a structural model of all 302 C. elegans neurons in the NeuroML description language.

The second was a research effort at MIT by davidad. David explained that the OpenWorm project focused on anatomical data from dead worms, but very little data exists from living animals' cells. Such data can't tell scientists about the relative importance of the connections between neurons within the worm's nervous system, only that a connection exists.

  1. The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
  2. What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
  3. With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
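Point 1 can be made concrete with a toy model (my own illustration, not from the thread; the rate model and parameters are arbitrary). Fix the wiring, vary only the unknown weights, and the same "connectome" produces different dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 302                        # C. elegans neuron count
# A hypothetical "connectome": which pairs are connected (binary adjacency).
adjacency = rng.random((n, n)) < 0.05

def simulate(weights, steps=100):
    """Run a toy rate-model network and return the final activity vector."""
    x = np.zeros(n)
    x[0] = 1.0                 # identical initial stimulus every run
    for _ in range(steps):
        x = np.tanh(weights @ x)
    return x

# Two weight assignments consistent with the SAME connectome...
w1 = adjacency * rng.normal(0.0, 1.0, (n, n))
w2 = adjacency * rng.normal(0.0, 1.0, (n, n))
# ...give different dynamics: the wiring alone doesn't pin down behavior.
print(np.allclose(simulate(w1), simulate(w2)))  # False
```

This is the "wires without component symbols" problem in miniature: the adjacency matrix is known, but the functional behavior lives in the weights.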

He believed such an automated device could be built within a year or two. And he was confident:

I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

When asked by gwern for a statement for PredictionBook.com, davidad said:

  • "A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08." 76% confidence
  • "A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01." 99.8% confidence

(Disappointingly, these statements were never actually recorded on PredictionBook.)
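For calibration intuition (my own illustration, not from the post): under a standard logarithmic scoring rule, a miss at 99.8% confidence is punished far more severely than a miss at 76%:

```python
import math

def log_loss(p_assigned: float, outcome: bool) -> float:
    """Negative log-likelihood (in nats) of the actual outcome under a forecast."""
    p = p_assigned if outcome else 1 - p_assigned
    return -math.log(p)

# Both forecasts turned out false:
for conf in (0.76, 0.998):
    print(f"{conf:.3f} confident, wrong -> {log_loss(conf, False):.2f} nats")
# 0.760 confident, wrong -> 1.43 nats
# 0.998 confident, wrong -> 6.21 nats
```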

Unfortunately, ten years later, both projects appear to have made no significant progress, and neither produced a working simulation that resembles biological behavior. In a 2015 CNN interview, slarson said the OpenWorm project was "only 20 to 30 percent of the way towards where we need to get", and it seems to have been stuck in development hell ever since. Meanwhile, I was unable to find any breakthrough from davidad before his project ended; David personally left the project in 2012.

When the initial review was published, there had already been 25 years of work on C. elegans, and now yet another decade has passed, yet we are still unable to "upload" a nematode. Therefore, I have to end my post by quoting the original post's pessimistic vision of WBE:

This seems like a research area where you have multiple groups working at different universities, trying for a while, and then moving on.  None of the simulation projects have gotten very far: their emulations are not complete and have some pieces filled in by guesswork, genetic algorithms, or other artificial sources.  I was optimistic about finding successful simulation projects before I started trying to find one, but now that I haven't, my estimate of how hard whole brain emulation would be has gone up significantly.  While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

This is discouraging.

Closing thoughts: What went wrong? What are the unsolvable difficulties here?

Update

Some technical insight into the failure was given in a 2014 update ("We Haven't Uploaded Worms"), where jefftk showed the major problems are:

  1. Knowing the connections isn't enough; we also need to know the weights and thresholds. We don't know how to read them from a living worm.
  2. C. elegans is able to learn by changing the weights. We don't know how weights and thresholds are changed in a living worm.

The best we can do is model a generic worm: pretrain the neural network, then run it with fixed weights. Thus, no worm is "uploaded", because we can't read the weights, and the simulations are far from realistic, because they are not capable of learning. The result is merely a boring artificial neural network, not a brain emulation.
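The distinction can be sketched in code (an editorial toy model with entirely made-up parameters, not any project's actual implementation). The weights and thresholds below are exactly the data we cannot read from a living worm, and nothing in the model ever updates them:

```python
import numpy as np

class GenericWormNet:
    """Toy fixed-weight threshold network: the 'generic worm' described above."""

    def __init__(self, weights, thresholds):
        self.w = weights        # hand-fit or GA-fit, not measured from any worm
        self.t = thresholds
        # Note: no learning rule anywhere; w and t never change.

    def step(self, activity):
        """One update tick: each unit fires if its weighted input exceeds threshold."""
        return (self.w @ activity > self.t).astype(float)

rng = np.random.default_rng(0)
n = 302                         # C. elegans neuron count
net = GenericWormNet(rng.normal(size=(n, n)), rng.normal(size=n))

state = (rng.random(n) > 0.5).astype(float)
for _ in range(10):             # run the network; its behavior is forever fixed
    state = net.step(state)
```

A real upload would need `w` and `t` read from one specific, possibly trained, worm, plus a plasticity rule so the simulated worm could keep learning.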

To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.

Furthermore, in a Quora answer, davidad hinted that his project was discontinued partly due to a lack of funding.

If I'd had $1 million seed, I wouldn't have had to cancel the project when I did...

Conclusion: the neural recording technologies needed to collect data from living worms remain undeveloped, and the funding simply isn't there.

Update 2

I just realized David actually gave an in-depth talk about his work and the difficulties he encountered at MIRI's AI Risk for Computer Scientists workshop in 2020, according to this LW post ("AIRCS Workshop: How I failed to be recruited at MIRI").

Most discussions were pretty high level. For example, someone presented a talk where they explained how they tried and failed to model and simulate a brain of C. Elegansis. A worm with an extremely simple and well understood brain. They explained to us a lot of things about biology, and how they had been trying and scanning precisely a brain. If I understood correctly, they told us they failed due to technical constraints and what those were. They believe that, nowadays, we can theoretically create the technology to solve this problem. However there is no one interested in said technology, so it won't be developed and be available to the market.

Does anyone know any additional information? Is the content of that talk available in paper form?

Update 3

Note to future readers: within a week of this post's initial publication, I received some helpful insider comments, including from David himself, on the status of this field. The following are especially worth reading.



Let's look at a proxy task: "rockets landing on their tail". The first automated landing of an airliner was in 1964. Using a similar system of guidance signals from antennas on the ground, surely a rocket could have landed after boosting a payload around the same time period. Yet SpaceX first pulled it off in 2015.

In 1970, if a poorly funded research lab said they would get a rocket to land on its tail by 1980, and in 1980 they had not succeeded, would you update your estimated date of success to "centuries"? C. elegans has 302 neurons, and it takes (I think I read) about 10 ANN nodes to mimic the behavior of one biological neuron. With a switching frequency of 1 kHz and full connectivity, you would need 302 * 100^2 * 1000 operations per second. This is about 0.003 TOPS, and embedded cards that do 200-300 TOPS are readily available.
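The back-of-envelope can be redone explicitly (a sketch using the commenter's assumptions; counting a fully connected net of 302 × 10 nodes gives a slightly larger figure than the quoted expression, but the conclusion is unchanged):

```python
neurons = 302                      # C. elegans neuron count
ann_nodes_per_neuron = 10          # the commenter's assumed conversion factor
switching_hz = 1_000               # assumed 1 kHz update rate

nodes = neurons * ann_nodes_per_neuron
# Worst case: fully connected, every node reads every other node each tick.
ops_per_second = nodes * nodes * switching_hz
tops = ops_per_second / 1e12
headroom = 200 / tops              # vs. a 200-TOPS embedded accelerator

print(f"{tops:.4f} TOPS")          # 0.0091 TOPS
print(f"{headroom:,.0f}x headroom")
```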

So the computational problem is easy. Did David build an automated device to collect data from living cells? If not, was the reason it wasn't done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn't solve, or was it because...those people weren't there and neither was the funding?

My larger point is that if the '...

Did David build an automated device to collect data from living cells? If not, was the reason it wasn't done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn't solve, or was it because...those people weren't there and neither was the funding?

Good points. I did more digging and found some relevant information I had initially missed; see "Update". He didn't, and funding was indeed a major factor.

CraigMichael (1y):
The DC-X did this first in 1993, although this video is from 1995: https://youtube.com/watch?v=wv9n9Casp1o (And their budget was 60 million 1991 dollars; Wolfram Alpha says that's 117 million in 2021 dollars.) https://en.m.wikipedia.org/wiki/McDonnell_Douglas_DC-X
jefftk (1y):
That isn't the right interpretation of the proxy task. In 2011, I was using progress on nematodes to estimate the timing of whole brain emulation for humans. That's more similar to using progress in rockets landing on their tail to estimate the timing of a self-sustaining Mars colony. (I also walked back from "probably hundreds of years" to "I don't think we'll be uploading anyone in this century" after the comments on my 2011 post, writing the more detailed https://www.jefftk.com/p/whole-brain-emulation-and-nematodes)
Gerald Monroe (1y):
Ok. Hypothetical experiment: in 2042 someone does demonstrate a convincing dirt-dynamics simulation and a flock of emulated nematodes, and the emulated firing patterns correspond well with experimentally observed nematodes. With that information, would you still feel safe concluding the solution is 58 years away for human scale?
jefftk (1y):
I'm not sure what you mean by "convincing dirt dynamics simulation and a flock of emulated nematodes"? I'm going to assume you mean the task I described in my post: teach one something, upload it, verify it still has the learned behavior. Yes, I would still expect it to be at least 58 years away for human scale. The challenges are far larger for humans, and it taking over 40 years from people starting on simulating nematodes to full uploads would be a negative timeline update for me. Note that in 2011 I expected this for around 2021, which we are nowhere near on track for: https://www.jefftk.com/p/whole-brain-emulation-and-nematodes
Gerald Monroe (1y):
Ok. I have thought about it further, and here is the reason I think you're wrong. You have implicitly assumed that the tools available to neuroscientists today are good, and that we have a civilization with the excess resources to support such an endeavor. This is false. Today the available resources for such endeavors are only enough to fund small teams, while research that is profitable, like silicon chip improvement, gets hundreds of billions invested into it. So any extrapolation is kind of meaningless.

It would be like asking in 1860 how many subway tunnels there would be in NYC in 1940. The heavy industry to build them simply didn't exist, so you would have to conclude it would be slow going. Similarly, the robotic equipment to do bioscience is currently limited and specialized; it's why a genome can be sequenced for a few thousand dollars but graduate students still use pipettes. Oh, and if you wanted to know in 1920 when the human genome would be sequenced, and in 1930 learned that zero genes had been sequenced, you might reach a similar conclusion.
jefftk (1y):
Do you have a better way of estimating the timing of new technologies that require many breakthroughs to reach?
Gerald Monroe (1y):
I'll try to propose one.

  • Is the technology feasible with demonstrated techniques at a laboratory level?
  • Will there likely be gain to the organization that sells or deploys this technology in excess of its estimated cost?
  • Does the technology run afoul of existing government regulation that will slow research into it?
  • Does the technology have a global market that will result in a sigmoidal adoption curve?

Electric cars should have been predictable this way. They were feasible since 1996, or 1990 (LFP is the first lithium chemistry with the lifespan to be a net gain for an EV; 1990 saw the first 'modern' lithium battery assembled in a lab). The gain is reduced fuel cost, reduced maintenance cost, and supercar acceleration and vehicle performance with much cheaper drivetrains. Governments perceive a benefit in EVs, so they have subsidized the research. And yes, the adoption curve is sigmoidal.

Smartphones follow a similar set of arguments, and the chips that made them possible were only low-power enough around the point that the sigmoidal adoption started; they were not really possible much before that. Also, Apple made a large upfront investment to deliver an acceptable user experience all at once, rather than incrementally adding features like other early smartphone manufacturers did.

I will put my stake in the sand and say that autonomous cars fit all these rules:

  • Feasible, as the fundamental problem of assessing collision risk for a candidate path, the only part the car has to get perfect, is a simple and existing algorithm
  • Enormous gain: easily north of a trillion dollars in annual revenue, or hundreds of billions in annual profit, will be available to the industry
  • Governments are reluctant but are obviously allowing the research and testing
  • The adoption curve will be sigmoidal, because it has obvious self-gain. The first shipping autonomous EVs will likely produce a cost advantage for a taxi firm or be rented directly, and
slugbait93 (7mo):
"it takes, I think I read 10 ANN nodes to mimic the behavior of one biological neuron" More like a deep network of 5-8 layers of 256 ANN nodes, according to recent work (it turns out lots of computation is happening in the dendrites): https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
Gerald Monroe (7mo):
Thank you for the link. Given some neurons have 1000+ inputs and complex timing behavior, this is perfectly reasonable.
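Redoing the earlier back-of-envelope with this revised figure (a 5-8 layer, 256-unit-wide network per biological neuron; a rough sketch that ignores the smaller input/output layers and inter-neuron wiring) raises the estimate by a couple of orders of magnitude, but it remains small:

```python
neurons = 302
layers = 8                  # upper end of the cited 5-8 layer range
width = 256                 # assumed width of each hidden layer
hz = 1_000                  # same assumed 1 kHz update rate as before

# Dense ops per biological neuron per tick: (width x width) per layer.
ops_per_neuron = layers * width * width
total_tops = neurons * ops_per_neuron * hz / 1e12
print(f"{total_tops:.3f} TOPS")   # ~0.16 TOPS: still tiny vs. 200-300 TOPS cards
```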
FCCC (1y):
Were they trying to simulate a body and an environment? Seems to me that would make the problem much harder, as you’d be trying to simulate reality. (E.g. How does an organic body move through physical space based on neural activity? How does the environment’s effects on the body stimulate neural changes?)
Gerald Monroe (1y):
You have to, or you haven't really solved the problem. But it's very much bounded: you do not need to simulate "reality", you need to approximate the things the nematode can experience to slightly higher resolution than it can perceive. So basically you need some kind of dirt-dynamics model with sufficient fidelity to the nematode's very crude senses. It might be easier with an even smaller organism.
FCCC (1y):
Maybe someone should ask the people who were working on it what their main issues were.

I might have time for some more comments later, but here's a few quick points (as replies):

  1. I was certainly overconfident about how easy Nemaload would be, especially given the microscopy and ML capabilities of 2012, but even more so I was overconfident that people would continue to work on it. I think there was very little work toward the goal of a nematode-upload machine from 2013 to 2017. Once or twice an undergrad doing a summer internship at the Boyden lab would look into it for a month, and my sense is that accounted for something like 3-8% of the total global effort.
mukashi (1y):
Why wasn't a postdoc or a PhD student hired to do this work? Was it due to the lack of funding?

I can't say for sure why Boyden or others didn't assign grad students or postdocs to a Nemaload-like direction; I wasn't involved at that time, there are many potential explanations, and it's hard to distinguish limiting/bottleneck or causal factors from ancillary or dependent factors.

That said, here's my best explanation. There are a few factors for a life-science project that make it a good candidate for a career academic to invest full-time effort in:

  1. The project only requires advancing the state of the art in one sub-sub-field (specifically the one in which the academic specializes).
  2. If the state of the art is advanced in this one particular way, the chances are very high of a "scientifically meaningful" result, i.e. it would immediately yield a new interesting explanation (or strong evidence for an existing controversial explanation) about some particular natural phenomenon, rather than just technological capabilities. Or, failing that, at least it would make a good "methods paper", i.e. establishing a new, well-characterized, reproducible tool which many other scientists can immediately see is directly helpful for the kind of "scientifically meaningful" experiments they alre
...

Note, (1) is less bad now, post-2018-ish. And there are ways around (2b) if you're determined enough. Michael Skuhersky is a PhD student in the Boyden lab who is explicitly working in this direction as of 2020. You can find some of his partial progress here https://www.biorxiv.org/content/biorxiv/early/2021/06/10/2021.06.09.447813.full.pdf and comments from him and Adam Marblestone over on Twitter, here: https://twitter.com/AdamMarblestone/status/1445749314614005760

  2. There has been some serious progress in the last few years on full functional imaging of the C. elegans nervous system (at the necessary spatial and temporal resolutions and ranges).

However, despite this I haven't been able to find any publications yet where full functional imaging is combined with controlled cellular-scale stimulation (e.g. as I proposed via two-photon excitation of optogenetic channels), which I believe is necessary for inference of a functional model.

  3. I got initial funding from Larry Page on the order of 10^4 USD and then funding from Peter Thiel on the order of 10^5 USD. The full budget for completing the Nemaload project was 10^6 USD, and Thiel lost interest in seeing it through.

The tone of strong desirability for progress on WBE in this post was surprising to me. The author seems to treat progress in WBE as a highly desirable thing; a perspective I expect most on LW do not endorse.

The lack of progress here may be a quite good thing.

Like many other people here, I strongly desire to avoid death. Mind uploading is an efficient way to prevent many causes of death, as it could make a mind practically indestructible (thanks to backups, distributed computing, etc.). WBE is a path towards mind uploading, and thus is desirable too.

Mind uploading could help mitigate OR increase the AI X-risk, depending on circumstances and implementation details. And the benefits of uploading as a mitigation tool seem to greatly outweigh the risks. 

The most preferable future for me is one where mind uploading is ubiquitous, while X-risk is avoided.

Although unlikely, it is still possible that mind uploading will emerge sooner than AGI. Such a future is much more desirable than the future without mind uploading (some possible scenarios).

Benjamin Spiegel (1y):
This really depends on whether you believe a mind-upload retains the same conscious agent as the original brain. If it does, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE. The delay between solving WBE and the hard problem of consciousness is so vast, in my opinion, that being excited for mind-uploading when WBE progress is made is like being excited for self-propelled cars after making progress in developing horse-drawn wagons. In both cases, little progress has been made on the most significant component of the desired thing.

If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE.

Doesn't WBE involve the easy rather than hard problem of consciousness? You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.

Benjamin Spiegel (1y):
I'm pretty sure the problem with this is that we don't know what it is about the human brain that gives rise to consciousness, and therefore we don't know whether we are actually emulating the consciousness-generating thing when we do WBE. Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we've just created a philosophical zombie. To find out whether our emulation is sufficient to produce consciousness, we would need to find out what X is and how to emulate it. I'm pretty sure this is exactly the hard problem of consciousness. Even if biological computation is sufficient for generating consciousness, we will have no way of knowing until we solve the hard problem of consciousness.

Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we've just created a philosophical zombie.

David Chalmers had a pretty convincing (to me) argument for why it feels very implausible that an upload with identical behavior and functional organization to the biological brain wouldn't be conscious (the central argument starts from the subheading "3 Fading Qualia"): http://consc.net/papers/qualia.html

Benjamin Spiegel (1y):
What a great read! I suppose I'm not convinced that Fading Qualia is an empirical impossibility, and therefore that there exists a moment of Suddenly Disappearing Qualia when the last neuron is replaced with a silicon chip. If consciousness is quantized (just like other things in the universe), then there is nothing wrong in principle with Suddenly Disappearing Qualia when a single quantum of qualia is removed from a system with no other qualia, just like removing the last photon from a vacuum. Joe is an interesting character which Chalmers thinks is implausible, but aside from it rubbing up against a faint intuition, I have no reason to believe that Joe is experiencing Fading Qualia. There is no indication for any reason that the workings of consciousness should obey any intuitions we may have about it.
rsaarelm (1y):
The mind is an evolved system out to do stuff efficiently, not just a completely inscrutable object of philosophical analysis. It's likelier that the parts like sensible cognition and qualia and the subjective feeling of consciousness are coupled and need each other to work than that they were somehow intrinsically disconnected and cognition could go on as usual without subjective consciousness using anything close to the same architecture. If that were the case, we'd have the additional questions of how consciousness evolved to be a part of the system to begin with and why hasn't it evolved out of living biological humans.
Benjamin Spiegel (1y):
I agree with you, though I personally wouldn't classify this as purely an intuition since it is informed by reasoning which itself was gathered from scientific knowledge about the world. Chalmers doesn't think that Joe could exist because it doesn't seem right to him. You believe your statement because you know some scientific truths about how things in our world come to be (i.e. natural selection) and use this knowledge to reason about other things that exist in the world (consciousness), not merely because the assertion seems right to you.
RomanS (1y):
The brain changes over time. It is likely that not a single atom of your 2011-brain remains in your 2021-brain. If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner, because you've already survived a mind upload into a new brain.

Gradual mind uploading (e.g. by gradually replacing neurons with emulated replicas) circumvents the philosophical problems attributed to non-gradual methods. Personally, although I prefer gradual uploading, I would agree to a non-gradual method too, as I don't see the philosophical problems as important. As per Newton's Flaming Laser Sword: if a question, even in principle, can't be resolved by an experiment, then it is not worth considering.

If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me. The brain is but a computing device: you give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one with the other. As Dennett put it, everyone is a philosophical zombie.
Benjamin Spiegel (1y):
There are a lot of interesting points here, but I disagree (or am hesitant to agree) with most of them. Of course, I'm not disputing whether mind-uploading is theoretically possible. It seems likely that it is, although it will probably be extremely complex.

There's something to be said about the substrate independence of computation and, separately, consciousness. No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved, such as spin, charge, entanglement, or some yet undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.

These statements are ringing some loud alarm bells for me. It seems that you are rejecting consciousness itself. I suppose you could do that, but I don't think any reasonable person would agree with you. To truly gauge whether you believe you are conscious or not, ask yourself, "have I ever experienced pain?" If you believe the answer to that is "yes," then at least you should be convinced that you are conscious.

What you are suggesting at the end there is that WBE = mind uploading. I'm not sure many people would agree with that assertion.
RomanS (1y):
Can we know with certainty that the same properties were preserved between the 2011-brain and the 2021-brain? It seems to me that this can't be verified by any experiment, and thus must be cut off by Newton's Flaming Laser Sword.

As far as I know, it is impossible to experimentally verify whether an entity possesses consciousness (partly because of how fuzzy its definitions are). This is a strong indicator that consciousness is one of those abstractions that don't correspond to any real phenomenon.

If certain kinds of damage are inflicted upon my body, my brain generates an output typical of a human in pain. The reaction can be experimentally verified. It also has a reasonable biological explanation and a clear mechanism of functioning. Thus, I have no doubt that pain does exist and that I've experienced it. I can't say the same about any introspection-based observations that can't be experimentally verified. The human brain is a notoriously unreliable computing device, known to produce many falsehoods about the world and (especially!) about itself.
Benjamin Spiegel (1y):
No, we cannot. Just as we cannot know with certainty whether a mind-upload is conscious. Just because we presume that our 2021 brain is a conscious agent continuous with our 2011 brain, and granting that we cannot verify the properties that enabled the conscious connection between the two brains, does not mean that those properties do not exist.

Perhaps we presently have no way of testing whether some matter is conscious or not. This is not equivalent to saying that, in principle, the conscious state of some matter cannot be tested. We may one day make progress toward the hard problem of consciousness and be able to perform these experiments. Imagine making this argument throughout history before microscopes, telescopes, and hadron colliders. We can now sheathe Newton's Flaming Laser Sword.

I believe this hinges on an epistemic question about whether we can have knowledge of anything using our observations alone. I think even a skeptic would say that she has consciousness, as the fact that one is conscious may be the only thing one can know with certainty about oneself. You don't need to verify any specific introspective observation. The act of introspection itself should be enough for someone to verify that they are conscious.

This claim refers to the reliability of the human brain in verifying the truth value of certain propositions or identifying specific and individuable experiences. Knowing whether oneself is conscious is not strictly a matter of verifying a proposition, nor of identifying an individuable experience. It's only about verifying whether one has any experience whatsoever, which should be possible. Whether I believe your claim to consciousness is a different problem.
niconiconi (1y):
Did I miss some subtle cultural change at LW? I know rationalism and AI safety have been founding principles of LW from the start. But in my mind, LW has always had all kinds of adjacent topics and conversations, with many different perspectives. Or at least that was my impression of the 2010s LW threads on the Singularity and transhumanism. Have these discussions become increasingly focused on AI safety and de-risking over time? I'm not a regular reader of LW; any explanation would be greatly appreciated.
1J Thomas Moros1y
While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.
9Matt Goldenberg1y
It's not just an AI safety risk, it's also an S-risk in its own right.
1RomanS1y
While discussing a new powerful tech, people often focus on what could go horribly wrong, forgetting to consider what could go gloriously right. What could go gloriously right with mind uploading? It could eliminate involuntary death, saving trillions of future lives. This consequence alone massively outweighs the corresponding X- and S-risks.
7davidad1y
At least from the orthodox QALY perspective on "weighing lives", the benefits of WBE don't outweigh the S-risks, because for any given number of lives, the resources required to make them all suffer are less than the resources required for the glorious version. The benefits of eventually developing WBE do outweigh the X-risks, if we assume that

* human lives are the only ones that count,
* WBE'd humans still count as humans, and
* WBE is much more resource-efficient than anything else that future society could do to support human life.

However, orthodox QALY reasoning of this kind can't justify developing WBE soon (rather than, say, after a Long Reflection), unless there are really good stories about how to avoid both the X-risks and the S-risks.
1RomanS1y
As far as I know, mind uploading is the only tech that can reduce the risk of death (from all causes) to almost zero. It is almost impossible to destroy a mind that is running on resilient distributed hardware with tons of backups hidden in several star systems. There is a popular idea that some very large amount of suffering is worse than death. I don't subscribe to it. If I'm tortured for X billion years, and then my mind is repaired, that fate is still much better than permanent death. There is simply nothing worse than permanent death, because (by definition) it cannot be repaired. Everything else can be repaired, including the damage from any amount of suffering. In such calculations, I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe. I also don't see how eliminating any arbitrarily large amount of suffering could be preferable to saving 1 life. Unless the suffering leads to permadeath, the sufferers can get over it. The dead cannot. Bad feelings are vastly less important than saved lives. It's a good idea to reduce suffering. But the S-risk is trivially eliminated from the equation if the tech in question is life-saving. There are currently ~45 million permadeaths per year. Thus, any additional year without widely accessible mind uploading means the equivalent of 45+ million more humans experiencing the worst possible suffering until the end of the universe. In 10 years, it's half a billion. In 1000 years, it's half a trillion. This high cost of the Long Reflection is one more reason why it should never be forced upon humanity.

I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.

This is so incredibly far from where I would place the equivalence, and I think where almost anyone would place it, that I'm baffled. You really mean this?

5RomanS1y
There is an ancient and (unfortunately) still very popular association between death and sleep / rest / peace / tranquility. The association is so deeply ingrained that it is routinely used by most people who have to speak about death, e.g. "rest in peace", "put to sleep", "he is in a better place now". The association is harmful: it suggests that death could be a valid solution to pain, which is deeply wrong. It's the same wrongness as suggesting we kill a child to make the child less sad. Technically, the child will not experience sadness anymore. But infanticide is not a sane person's solution to sadness. The sane solution is to find a way to make the child less sad (without killing them!). The sane solution to suffering is to reduce suffering, without killing the sufferer. For example, if a cancer patient is in great pain, the most ethical solution is to cure them of cancer, and use efficient painkillers during the process. If there is no cure, then utilize cryonics to transport them into the future where such a cure becomes available. Killing the patient because they're in pain is a sub-optimal solution (to put it mildly). I can't imagine any situation where permadeath is preferable to suffering. With enough tech and time, all kinds of suffering can be eliminated, and their effects can be reversed. But permadeath is, by definition, non-reversible and non-repairable. If one must choose between a permanent loss of human life and some temporary discomfort, it doesn't make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort.
6Duncan_Sabien1y
(I agree wholeheartedly with almost everything you've said here [https://medium.com/@ThingMaker/why-against-death-12ef0775e038], and have strong upvoted, but I want to make space for the fact that some people don't make sense, and some people reflectively endorse not making sense, and so while I will argue against their preference for death over discomfort, I will also fight for their right to make the wrong choice for themselves, just as I fight for your and my right to make the correct choice for ourselves. Unless there is freedom for people to make wrong choices, we can never move beyond a socially-endorsed "right" choice to something Actually Better.)
2lc4mo
Something is handicapping your ability to imagine what the "worst possible discomfort" would be.
1RomanS4mo
The thing is: regardless of how bad the worst possible discomfort is, dying is still a rather stupid idea, even if you have to endure the discomfort for millions of years. Because if you live long enough, you can find a way to fix any discomfort. I wrote about this in more detail here [https://www.lesswrong.com/posts/AnhwmSxtzvgk39Xdr/a-fate-worse-than-death-1].
6Matt Goldenberg1y
This sweeps a large number of philosophical issues under the rug by assuming the conclusion (that death is the worst thing), and then using that conclusion to justify itself (death is the worst thing, and if you die, you're stuck dead, so that's the worst thing).
0RomanS1y
I predict that most (all?) ethical theories that assume some amount of suffering is worse than death have internal inconsistencies. My prediction is based on the following assumption:

* permanent death is the only brain state that can't be reversed, given sufficient tech and time

The non-reversibility is the key. For example, if your goal is to maximize the happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans have periods of intense and prolonged suffering. You can increase the happiness of the humans who suffered, but you can't increase the happiness of the humans who are non-reversibly dead. If your goal is to minimize suffering (without killing people), then you should avoid killing people. Killing people includes withholding life-extension technologies (like mind uploading), even if radical life extension causes some people to suffer for millions of years. You can decrease the suffering of the humans who are suffering, but you can't do that for the humans who are non-reversibly dead. The mere existence of the option of voluntary immortality necessitates some quite interesting changes to ethical theories. Personally, I simply don't want to die, regardless of the circumstances. The circumstances might include any arbitrarily large amount of suffering. If a future-me ever begs for death, consider him in need of some brain repair, not in need of death.
6Erhannis11d
While I (a year late) tentatively agree with you (though a million years of suffering is a hard thing to swallow compared to the instinctually almost mundane matter of death) I think there's an assumption in your argument that bears inspection. Namely, I believe you are maximizing happiness at a given instance in time - the present, or the limit as time approaches infinity, etc. (Or, perhaps, you are predicating the calculations on the possibility of escaping the heat death of the universe, and being truly immortal for eternity.) A (possibly) alternate optimization goal - maximize human happiness, summed over time. See, I was thinking, the other day, and it seems possible we may never evade the heat death of the universe. In such a case, if you only value the final state, nothing we do matters, whether we suffer or go extinct tomorrow. At the very least, this metric is not helpful, because it cannot distinguish between any two states. So a different metric must be chosen. A reasonable substitute seems to me to be to effectively take the integral of human happiness over time, sum it up. The happy week you had last week is not canceled out by a mildly depressing day today, for instance - it still counts. Conversely, suffering for a long time may not be automatically balanced out the moment you stop suffering (though I'll grant this goes a little against my instincts). If you DO assume infinite time, though, your argument may return to being automatically true. I'm not sure that's an assumption that should be confidently made, though. If you don't assume infinite time, I think it matters again what precise value you put on death, vs incredible suffering, and that may simply be a matter of opinion, of precise differences in two people's terminal goals. (Side note: I've idly speculated about expanding the above optimization criteria for the case of all-possible-universes - I forget the exact train of thought, but it ended up more or less behaving in a manner such that y
9RomanS10d
Our current understanding of physics (and of our future capabilities) is so limited that I assume our predictions about how the universe will behave trillions of years from now are worthless. I think we can safely postpone the entire question until after we achieve a decent understanding of physics, after we become much smarter, and after we can allow ourselves to invest some thousands of years of deep thought on the topic.

While we're sitting around waiting for revolutionary imaging technology or whatever, why not try to make progress on the question of how much, and what type of, information we can obscure about a neural network and still approximately infer meaningful details of that network from its behavior. For practice, start with ANNs and keep it simple. Take a smallish network which does something useful, record the outputs as it's doing its thing, then add just enough random noise to the parameters that the output deviates noticeably from the original. Now train the perturbed version to match the recorded data. What do we get here: did we recover the weights and biases almost exactly? Assuming yes, how far can this go before we might as well have trained the thing from scratch? Assuming success, does it work equally well on different types and sizes of networks; if not, what kind of scaling laws does this process obey? Assuming some level of success, move on to a harder problem: a sparse network, where this time we throw away everything but connectivity information and try to repeat the above. How about something biologically realistic, where we try to simulate the spiking neurons with groups of standard artificial ones... you get the drift.
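A minimal numpy sketch of that first experiment, under toy assumptions: a hypothetical 4-8-2 tanh network stands in for the "smallish network", and the perturbation scale, learning rate, and step count are arbitrary choices, not tuned values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "reference network" whose parameters we will try to recover:
# 4 inputs -> 8 hidden (tanh) -> 2 outputs.
W1 = rng.normal(0, 1.0, (8, 4))
W2 = rng.normal(0, 0.5, (2, 8))

def forward(A, B, X):
    return np.tanh(X @ A.T) @ B.T

# Record the outputs as the network is "doing its thing".
X = rng.normal(0, 1.0, (500, 4))
Y = forward(W1, W2, X)

# Add just enough parameter noise that the output deviates noticeably.
P1 = W1 + rng.normal(0, 0.1, W1.shape)
P2 = W2 + rng.normal(0, 0.1, W2.shape)
init_err = np.mean((forward(P1, P2, X) - Y) ** 2)

# Retrain the perturbed copy to match the recorded outputs
# (plain gradient descent on MSE, gradients written out by hand).
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ P1.T)              # (500, 8) hidden activations
    err = H @ P2.T - Y                 # (500, 2) output error
    gP2 = err.T @ H / len(X)
    gH = err @ P2                      # backprop through the readout
    gP1 = ((gH * (1 - H ** 2)).T @ X) / len(X)
    P1 -= lr * gP1
    P2 -= lr * gP2

final_err = np.mean((forward(P1, P2, X) - Y) ** 2)
weight_gap = max(np.abs(P1 - W1).max(), np.abs(P2 - W2).max())
print(init_err, final_err, weight_gap)
```

If the retrained copy matches the recordings while `weight_gap` stays large, that's evidence the behavior under-determines the parameters, which is exactly the quantity these proposed experiments would measure.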

It seems to me that this task has an unclear goal. Imagine I linked you to a GitHub repo and said "this is a 100% accurate and working simulation of the worm." How would you verify that? If we had a WBE of Ray Kurzweil, we could at least say "this emulated brain does/doesn't produce speech that resembles Kurzweil's speech." What can you say about the emulated worm? Does it wiggle in some recognizable way? Does it move towards the scent of food?

Quote jefftk.

To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.

(just included the quotation in my post)

A study by Alcor trained C. elegans worms to react to the smell of a chemical, then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same test would distinguish a successfully uploaded worm with that memory from one without it.

Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans

3jacobjacob1y
You could imagine an experimental design where you train worms to accomplish a particular task (e.g. learning which scent means food, if indeed they use scent; I don't know). You then upload both trained and untrained worms. If trained uploads perform better from the get-go than untrained ones, that's some evidence it's the same worm. To make it more granular, there are a lot of learning tasks from behavioural neuroscience you could adapt. You could also do single-neuron studies: train the worm on some task, find a neuron that seems to correspond to a particular abstraction, upload the worm, and check that the neuron still corresponds to the abstraction. Or ablation studies: you selectively impair certain neurons in the live trained worm, uploaded worm, and control worms, in a way that causes the same behaviour change only in the target individual. Or you can get causal evidence via optogenetics [https://en.wikipedia.org/wiki/Optogenetics].
4niconiconi1y
Optogenetics was exactly the method proposed by David; I just updated the article to include a full quote. I originally thought that since my post was already a summary of jefftk's previous LW posts, excessive quotation would make it too unoriginal, and interested readers could simply read more by following the links. But I've just realized that giving sufficient context is important when you're restarting a forgotten discussion.

I want to point out that there has been a small amount of progress in the last 10 years on the problem of moving from connectome to simulation, rather than no progress at all.

First, there has been interesting work at the JHU Applied Physics Lab which extends what Busbice was trying to do when he ran a simulation of C. elegans in a Lego Mindstorms robot (by the way, that work was very much overhyped by Busbice and in the media, so it's fitting that you didn't mention it). They use a basic integrate and fire model to simulate the n... (read more)

4niconiconi1y
Thanks for the info. Your comment is the reason why I'm on LessWrong.

Imagine you have two points, A and B. You're at A, and you can see B in the distance. How long will it take you to get to B?

Well, you're a pretty smart fellow. You measure the distance, you calculate your rate of progress, maybe you're extra clever and throw in a factor of safety to account for any irregularities during the trip. And you figure that you'll get to point B in a year or so.

Then you start walking.

And you run into a wall. 

Turns out, there's a maze in between you and point B. Huh, you think. Well that's ok, I put a factor of safety into my ... (read more)

4orthonormal1y
On the other hand, sometimes people end up walking right through what the established experts thought to be a wall. The rise of deep learning from a stagnant backwater in 2010 to a dominant paradigm today (crushing the old benchmarks in basically all of the most well-studied fields) is one such case. In any particular case, it's best to expect progress to take much, much longer than the Inside View indicates. But at the same time, there's some part of the research world where a major rapid shift is about to happen.

As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress, and I would like to understand the reasons for it better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading one. A few sentences hint that both are unsolved, but they should be clearly separated.

Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. No... (read more)

There’s a scan of 1 mm^3 of a human brain, 1.4 petabytes with hundred(s?) of millions of synapses

‪https://ai.googleblog.com/2021/06/a-browsable-petascale-reconstruction-of.html‬

Do we at least have some idea of what kind of technology would be needed for reading out connection weights?

6RomanS1y
A possible indirect way of doing that is by recording the worm's behavior:

1. Record the behavior under many conditions.
2. Design an ANN that has the same architecture as the real worm.
3. Train many instances of the ANN on one half of the recordings.
4. Select an instance that shows the right behavior on the withheld half of the recordings.
5. If the recordings are long and diverse enough, and if the ANN is realistic enough, the selected instance will have weights that are functionally equivalent to the weights of the real worm.

The same approach could be used to emulate the brain of a specific human, although the required compute in that case might be too large to be practical in the next decades.
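A toy numpy sketch of steps 1-4, where a hypothetical random tanh network stands in for the real worm; all sizes, seeds, and hyperparameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the worm: a fixed 4->6->1 tanh network whose weights play
# the role of the real connectome's unknown connection strengths.
true_W1 = rng.normal(0, 1.0, (6, 4))
true_w2 = rng.normal(0, 0.5, 6)

def respond(W1, w2, X):
    return np.tanh(X @ W1.T) @ w2

# 1. Record the behavior under many conditions.
X = rng.normal(0, 1.0, (400, 4))
Y = respond(true_W1, true_w2, X)

# Split the recordings: half for fitting, half withheld for selection.
X_fit, Y_fit = X[:200], Y[:200]
X_val, Y_val = X[200:], Y[200:]

# 2-3. Train many instances of the same architecture on the fitting half.
def train_instance(seed, steps=3000, lr=0.05):
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 1.0, (6, 4))
    w2 = r.normal(0, 0.5, 6)
    for _ in range(steps):
        H = np.tanh(X_fit @ W1.T)          # (200, 6) hidden activations
        err = H @ w2 - Y_fit               # (200,) output error
        gw2 = err @ H / len(X_fit)
        gH = np.outer(err, w2)             # backprop through the readout
        W1 -= lr * ((gH * (1 - H ** 2)).T @ X_fit) / len(X_fit)
        w2 -= lr * gw2
    return W1, w2

instances = [train_instance(s) for s in range(5)]

# 4. Select the instance that behaves best on the withheld recordings.
val_errs = [np.mean((respond(W1, w2, X_val) - Y_val) ** 2)
            for W1, w2 in instances]
best = instances[int(np.argmin(val_errs))]
print("held-out MSE of selected instance:", min(val_errs))
```

Whether the selected instance's weights are also *functionally equivalent* to the true ones (step 5) is the open question; matching held-out behavior is necessary but not obviously sufficient.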
4niconiconi1y
David believed one can develop optogenetic techniques to do this. Just added David's full quote to the post.

There's a fundamental difficulty with these sorts of attempts to emulate entire nervous systems (which gets exponentially worse as you scale up) that I don't think gets enough attention: failure of averaging. See this paper on simulating single neurons: https://pubmed.ncbi.nlm.nih.gov/11826077/

The abstract:

"Parameters for models of biological systems are often obtained by averaging over experimental results from a number of different preparations. To explore the va... (read more)

Curated. The topic of uploads and whole-brain emulation is a frequent one, and one whose feasibility is always assumed to be true. While this post doesn't argue otherwise, it's fascinating to hear where we are with the technology for this.

4Rob Bensinger1y
Post is about tractability / difficulty, not feasibility
5ESRogs1y
What's the distinction you're making? A quick google suggests this as the definition of "feasibility": This matches my understanding of the term. It also sounds a lot like tractability / difficulty. Are you thinking of it as meaning something more like "theoretical possibility"?
4Rob Bensinger1y
That isn't the definition I'm familiar with -- the one I was using is in Webster's [https://www.merriam-webster.com/dictionary/feasible]:
3Ruby1y
Indeed! By "feasibility is assumed to be true", I meant in other places and posts.

One complication glossed over in the discussion (both above and below) is that a single synapse, even at a single point in time, may not be well characterized as a simple "weight". Even without what we might call learning per se, the synaptic efficacy seems, upon closer examination, to be a complicated function of the recent history, as well as of the modulatory chemical environment. Characterizing and measuring this is very difficult. It may be more complicated in C. elegans than in a mammal, since it's such a small but highly optimized hunk of circuitry.
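As a rough illustration of history-dependent efficacy, here is a sketch in the spirit of short-term-depression models; the update rule and the parameter values (U, tau_rec) are illustrative, not fitted to C. elegans or any real synapse.

```python
import math

# A toy history-dependent synapse: each spike consumes a fraction U of
# the available resources x, and x recovers toward 1 with time constant
# tau_rec, so the effective "weight" of a spike depends on recent history.
def synaptic_responses(spike_times, U=0.5, tau_rec=0.3):
    x = 1.0               # fraction of resources currently available
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            # exponential recovery since the previous spike
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        efficacies.append(U * x)   # effective strength of this spike
        x -= U * x                 # resources consumed by the spike
        last_t = t
    return efficacies

# The same synapse probed two ways: a rapid burst depresses it,
# while widely spaced spikes see it fully recovered.
burst = synaptic_responses([0.00, 0.01, 0.02, 0.03])
spaced = synaptic_responses([0.0, 10.0])
print(burst)
print(spaced)
```

Even this minimal rule means a scan would need to capture at least two numbers per synapse plus its instantaneous state, rather than one static weight.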

Hey, 

TL;DR I know a researcher who's going to start studying C. elegans worms in a way that seems interesting as far as I can tell. Should I do something about that?

 

I'm trying to understand whether this is interesting for our community, specifically as a path to brain emulation, which I wonder could be used to (A) prevent people from dying and/or (B) create a relatively-aligned AGI.

This is the most relevant post I found on LW/EA (so far).

I'm hoping someone with more domain expertise can say something like:

  • "OMG we should totally extra fund this
... (read more)

OpenWorm seems to be a project with realistic goals but unrealistic funding, in contrast to the EU's Human Brain Project (HBP): a project with an absurd amount of funding and absurdly unrealistic goals. Even ignoring the absurd endpoint, any €1 billion project should be split up into multiple smaller ones, with time to take stock of things in between.

What could the EU have achieved by giving $50 million to OpenWorm to spend over 3 years (before getting more ambitious)? 

Would it not have done so in the first place because of hubris? The worm is somehow... (read more)

My impression of OpenWorm was that it was not focused on WBE. It tried to be a more general-purpose platform for biological studies. It attracted more attention than a pure WBE project would, by raising vague hopes of also being relevant to goals such as studying Alzheimer's.

My guess is that the breadth of their goals led them to become overwhelmed by complexity.

It's possible to donate to OpenWorm via their website to accelerate development.

Maybe the problem is figuring out how to realistically simulate a SINGLE neuron, which could then be extended 302 or 100,000,000,000 times. Also, due to shorter generation times, any given C. elegans has 50 times more ancestors than any human, so evolution may have had time to make their neurons more complex.
