Since the early 21st century, some transhumanists and futurist researchers have claimed that Whole Brain Emulation (WBE) is not merely science fiction: although still hypothetical, it is said to be a potentially viable technology in the near future. Such claims have attracted significant attention in tech communities such as LessWrong.

In 2011, jefftk posted a literature review on LessWrong covering the emulation of the worm C. elegans as an indicator of WBE research progress.

Because the human brain is so large, and we are so far from having the technical capacity to scan or emulate it, it's difficult to evaluate progress.  Some other organisms, however, have much smaller brains: the nematode C. elegans has only 302 cells in its entire nervous system.  It is extremely well studied and well understood, having gone through heavy use as a research animal for decades. Since at least 1986 we've known the full neural connectivity of C. elegans, something that would take decades and a huge amount of work to get for humans.  At 302 neurons, simulation has been within our computational capacity for at least that long.  With 25 years to work on it, shouldn't we be able to 'upload' a nematode by now?

The review found three research projects from the 1990s and 2000s, but all were dead ends that never reached their full research goals, painting a rather pessimistic picture of WBE. However, immediately after that post was published, LW readers Stephen Larson (slarson) and David Dalrymple (davidad) pointed out in the comments that they were working on the problem; their two ongoing projects made the future look promising again.

The first was the OpenWorm project, coordinated by slarson. Its goal is to create a complete model and simulation of C. elegans and to release all tools and data as free and open-source software. An early task completed by the project was implementing a structural model of all 302 C. elegans neurons in the NeuroML description language.

The second was another research effort at MIT by davidad. David explained that the OpenWorm project focused on anatomical data from dead worms, but very little data exists from the cells of living animals. Anatomical data can tell scientists that a connection between two neurons exists, but not the relative importance of that connection within the worm's nervous system.

  1. The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
  2. What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
  3. With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
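
Point 2 of David's list is, in effect, an active-learning loop: perturb a neuron, record the whole network's response, update a statistical model, and let the model choose the next perturbation. Here is a minimal sketch of that loop, with a made-up linear stand-in for the worm and a trivial sampling policy in place of the "very very good statistical modeling framework"; all names and numbers are illustrative, not from Nemaload or any real rig:

```python
import numpy as np

N = 302  # neurons in the C. elegans nervous system
rng = np.random.default_rng(0)

# Hypothetical stand-in for the real experiment: perturb one neuron and
# return the (noisy) steady-state response of all neurons.
true_coupling = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.05)

def stimulate_and_record(neuron: int) -> np.ndarray:
    stim = np.zeros(N)
    stim[neuron] = 1.0
    return true_coupling @ stim + rng.normal(0, 0.1, N)

# Statistical model: a running-mean estimate of each column of the
# coupling matrix, plus a count of how often each neuron was perturbed.
estimate = np.zeros((N, N))
trials = np.zeros(N)

def pick_next_perturbation() -> int:
    # Greedy "probe the least-characterized neuron" policy; a real
    # framework would weigh expected information gain instead.
    return int(np.argmin(trials))

for _ in range(5000):
    j = pick_next_perturbation()
    response = stimulate_and_record(j)
    trials[j] += 1
    estimate[:, j] += (response - estimate[:, j]) / trials[j]

err = np.abs(estimate - true_coupling).mean()
print(f"mean absolute error of inferred couplings: {err:.3f}")
```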

He believed that an automated device to gather such data could be built within a year or two. And he was confident:

I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

When asked by gwern for a statement for PredictionBook.com, davidad said:

  • "A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08." 76% confidence
  • "A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01." 99.8% confidence

(Disappointingly, these statements were never actually recorded on PredictionBook.)

Unfortunately, 10 years later, both projects appear to have made no significant progress, and neither has produced a working simulation that reproduces biological behavior. In a 2015 CNN interview, slarson said the OpenWorm project was "only 20 to 30 percent of the way towards where we need to get", and it seems to have been stuck in development hell ever since. Meanwhile, I was unable to find any breakthrough from davidad before his project ended; David personally left the project in 2012.

When the initial review was published, there had already been 25 years of work on C. elegans; now yet another decade has passed, and we are still unable to "upload" a nematode. I therefore have to end my post with the same pessimistic vision of WBE, quoting the original review.

This seems like a research area where you have multiple groups working at different universities, trying for a while, and then moving on.  None of the simulation projects have gotten very far: their emulations are not complete and have some pieces filled in by guesswork, genetic algorithms, or other artificial sources.  I was optimistic about finding successful simulation projects before I started trying to find one, but now that I haven't, my estimate of how hard whole brain emulation would be has gone up significantly.  While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

This is discouraging.

Closing thoughts: What went wrong? What are the unsolvable difficulties here?

Update

Some technical insight into the failure was given in jefftk's 2014 update ("We Haven't Uploaded Worms"), which showed that the major problems are:

  1. Knowing the connections isn't enough; we also need to know the weights and thresholds, and we don't know how to read them from a living worm.
  2. C. elegans is able to learn by changing those weights, and we don't know how the weights and thresholds change in a living worm.

The best we can do is model a generic worm: pretrain the neural network and run it with fixed weights. No individual worm is "uploaded", because we cannot read its weights, and the simulations are far from realistic because they are incapable of learning. The result is merely an ordinary artificial neural network, not a brain emulation.
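
To make the distinction concrete, here is a toy sketch (not code from any of the actual projects) of what "a generic worm with fixed weights" means: a rate network whose weights are guessed from a connectome-like wiring mask and never change at runtime, so no individual worm's learned state is captured and no learning can happen.

```python
import numpy as np

N = 302                      # neurons in the C. elegans nervous system
rng = np.random.default_rng(1)

# Stand-in for a connectome-derived weight matrix: connectivity tells us
# *which* entries are nonzero; the weight values themselves are guesses.
mask = rng.random((N, N)) < 0.05
weights = mask * rng.normal(0.0, 1.0, (N, N))   # fixed forever
threshold = np.zeros(N)

def step(rate: np.ndarray, sensory_input: np.ndarray) -> np.ndarray:
    """One update of a simple rate model with frozen weights."""
    drive = weights @ rate + sensory_input - threshold
    return np.tanh(drive)

rate = np.zeros(N)
for t in range(1000):
    sensory = np.zeros(N)
    sensory[0] = np.sin(0.1 * t)     # toy stimulus on one sensory neuron
    rate = step(rate, sensory)

# Nothing in this loop ever modifies `weights` or `threshold`, which is
# exactly why such a simulation cannot reproduce the learning described
# in the quote below.
```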

To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.

Furthermore, in a Quora answer, davidad hinted that his project was discontinued partly due to a lack of funding.

If I'd had $1 million seed, I wouldn't have had to cancel the project when I did...

Conclusion: Relevant neural recording technologies are needed to collect data from living worms, but they remain undeveloped, and the funding simply isn't there. 

Update 2

I just realized that David actually gave an in-depth talk about his work and the difficulties he encountered at MIRI's AI Risk for Computer Scientists workshop in 2020, according to this LW post ("AIRCS Workshop: How I failed to be recruited at MIRI").

Most discussions were pretty high level. For example, someone presented a talk where they explained how they tried and failed to model and simulate a brain of C. Elegansis. A worm with an extremely simple and well understood brain. They explained to us a lot of things about biology, and how they had been trying and scanning precisely a brain. If I understood correctly, they told us they failed due to technical constraints and what those were. They believe that, nowadays, we can theoretically create the technology to solve this problem. However there is no one interested in said technology, so it won't be developed and be available to the market.

Does anyone know any additional information? Is the content of that talk available in paper form?

Update 3

Note to future readers: within a week of this post's initial publication, I received some helpful insider comments on the status of this field, including from David himself. The following are especially worth reading.

Comments (87)

Let's look at a proxy task: rockets landing on their tail. The first automated landing of an airliner was in 1964. Using a similar system of guidance signals from antennas on the ground, a rocket could surely have landed after boosting a payload around the same time period. Yet SpaceX first pulled it off in 2015.

In 1970, if a poorly funded research lab said they would get a rocket to land on its tail by 1980, and in 1980 they had not succeeded, would you update your estimated date of success to "centuries"? C. elegans has 302 neurons, and it takes, I think I read, 10 ANN nodes to mimic the behavior of one biological neuron. With a switching frequency of 1 kHz and full connectivity, you would need 302 * 100^2 * 1000 operations per second. This is 0.003 TOPS, and embedded cards that do 200-300 TOPS are readily available.
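
For what it's worth, the arithmetic can be checked directly; here is the back-of-the-envelope version under one reading of the assumptions (10 ANN nodes per neuron, full connectivity, 1 kHz updates), which are the commenter's assumptions rather than measured values:

```python
# Back-of-the-envelope check of the comment's compute estimate.
neurons = 302
ann_nodes_per_neuron = 10        # commenter's assumed expansion factor
update_rate_hz = 1_000           # assumed 1 kHz switching frequency

total_nodes = neurons * ann_nodes_per_neuron
# Fully connected: one multiply-accumulate per ordered node pair per step.
ops_per_second = total_nodes ** 2 * update_rate_hz

print(f"{ops_per_second:.2e} ops/s = {ops_per_second / 1e12:.4f} TOPS")
# ~9.1e9 ops/s, i.e. ~0.009 TOPS under this grouping (the comment's own
# grouping gives ~0.003 TOPS); either way it is negligible next to the
# 200-300 TOPS of readily available embedded accelerators.
```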

So the computational problem is easy. Did David build an automated device to collect data from living cells? If not, was the reason it wasn't done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn't solve, or was it because...those people weren't there and neither was the funding?

My larger point is that if the '... (read more)

Did David build an automated device to collect data from living cells? If not, was the reason it wasn't done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn't solve, or was it because...those people weren't there and neither was the funding?

Good points, I did more digging and found some relevant information I initially missed, see "Update". He didn't, and funding was indeed a major factor. 

Let's look at a proxy task. "Rockets landing on their tail"… While SpaceX first pulled it off in 2015.

The DC-X did this first in 1993, although this video is from 1995.

https://youtube.com/watch?v=wv9n9Casp1o

(And their budget was 60 million 1991 dollars, Wolfram Alpha says that’s 117 million in 2021 dollars) https://en.m.wikipedia.org/wiki/McDonnell_Douglas_DC-X

jefftk (2y):
That isn't the right interpretation of the proxy task. In 2011, I was using progress on nematodes to estimate the timing of whole brain emulation for humans. That's more similar to using progress in rockets landing on their tail to estimate the timing of a self-sustaining Mars colony. (I also walked back from "probably hundreds of years" to "I don't think we'll be uploading anyone in this century" after the comments on my 2011 post, writing the more detailed: https://www.jefftk.com/p/whole-brain-emulation-and-nematodes)
Gerald Monroe (2y):
Ok. Hypothetical experiment: in 2042 someone does demonstrate a convincing dirt-dynamics simulation and a flock of emulated nematodes. The emulated firing patterns correspond well with experimentally observed nematodes. With that information, would you still feel safe concluding the solution is 58 years away for human scale?
jefftk (2y):
I'm not sure what you mean by "convincing dirt dynamics simulation and a flock of emulated nematodes"? I'm going to assume you mean the task I described in my post: teach one something, upload it, verify it still has the learned behavior. Yes, I would still expect it to be at least 58 years away for human scale. The challenges are far larger for humans, and its taking over 40 years from people starting on simulating nematodes to full uploads would be a negative timeline update for me. Note that in 2011 I expected this for around 2021, which we are nowhere near on track to do: https://www.jefftk.com/p/whole-brain-emulation-and-nematodes
Gerald Monroe (2y):
Ok. I have thought about it further and here is the reason I think you're wrong. You have implicitly assumed that the tools available to neuroscientists today are good, and that we have a civilization with the excess resources to support such an endeavor. This is false. Today the available resources for such endeavors are only enough to fund small teams. Research that is profitable, like silicon chip improvement, gets hundreds of billions invested in it. So any extrapolation is kind of meaningless. It would be like asking in 1860 how many subway tunnels would be in NYC in 1940. The heavy industry to build them simply didn't exist, so you would have to conclude it would be slow going. Similarly, the robotic equipment for bioscience is currently limited and specialized. It's why a genome can be sequenced for a few thousand dollars but graduate students still use pipettes. Oh, and if you had wanted to know in 1920 when the human genome would be sequenced, and in 1930 learned that zero genes had been sequenced, you might reach a similar conclusion.
jefftk (2y):
Do you have a better way of estimating the timing of new technologies that require many breakthroughs to reach?
Gerald Monroe (2y):
I'll try to propose one.

* Is the technology feasible with demonstrated techniques at a laboratory level?
* Will there likely be gain to the organization that sells or deploys this technology in excess of its estimated cost?
* Does the technology run afoul of existing government regulation that will slow research into it?
* Does the technology have a global market that will result in a sigmoidal adoption curve?

Electric cars should have been predictable this way:

- They were feasible since 1996, or 1990. (The LFP battery is the first lithium chemistry with the lifespan to be a net gain for an EV; 1990 is when the first 'modern' lithium battery was assembled in a lab.)
- The gain is reduced fuel cost, reduced maintenance cost, and supercar acceleration and vehicle performance with much cheaper drivetrains.
- Governments perceive a benefit in EVs, so they have subsidized the research.
- Yes, and the adoption curve is sigmoidal.

Smartphones follow a similar set of arguments, and the chips that made them possible were only low-power enough around the point that the sigmoidal adoption started. They were not really possible much prior. Also, Apple made a large upfront investment to deliver an acceptable user experience all at once, rather than incrementally adding features like other early smartphone manufacturers did.

I will put my stake in the sand and say that autonomous cars fit all these rules:

- Feasible, as the fundamental problem of assessing collision risk for a candidate path, the only part the car has to have perfect, is a simple and existing algorithm
- Enormous gain: easily north of a trillion dollars in annual revenue, or hundreds of billions in annual profit, will be available to the industry
- Governments are reluctant but are obviously allowing the research and testing
- The adoption curve will be sigmoidal, because it has obvious self gain. The first shipping autonomous EVs will lik
Ben (1y):
This doesn't give any real help in guessing the timing. But I think the curve to imagine is much closer to a step function than to a linear slope. So not seeing an increase just means we haven't reached the step, not that there is a linear slope that is too small to see.
slugbait93 (2y):
"it takes, I think I read 10 ANN nodes to mimic the behavior of one biological neuron" More like a deep network of 5-8 layers of 256 ANN nodes, according to recent work (it turns out a lot of computation is happening in the dendrites): https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
Gerald Monroe (2y):
Thank you for the link.  Given some neurons have 1000+ inputs and complex timing behavior, this is perfectly reasonable.  
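
Plugging that heavier per-neuron model into the same back-of-the-envelope estimate (layer count and width taken loosely from the linked article, so treat them as assumptions) still gives a small number:

```python
# Revised estimate with a deep-network surrogate per biological neuron.
neurons = 302
layers_per_neuron = 8            # upper end of the "5-8 layers" figure
units_per_layer = 256
update_rate_hz = 1_000

# Dense layer-to-layer multiply-accumulates inside each surrogate network.
ops_per_neuron_per_step = layers_per_neuron * units_per_layer ** 2
ops_per_second = neurons * ops_per_neuron_per_step * update_rate_hz

print(f"{ops_per_second / 1e12:.3f} TOPS")   # ~0.16 TOPS: still tiny
```
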
FCCC (2y):
Were they trying to simulate a body and an environment? It seems to me that would make the problem much harder, as you'd be trying to simulate reality. (E.g., how does an organic body move through physical space based on neural activity? How do the environment's effects on the body stimulate neural changes?)
Gerald Monroe (2y):
You have to, or you haven't really solved the problem. It's very much bounded: you do not need to simulate "reality", you need to approximate the things that the nematode can experience to slightly higher resolution than it can perceive. So basically you need some kind of dirt-dynamics model with sufficient fidelity to the nematode's very crude senses to be equivalent. It might be easier with an even smaller organism.
FCCC (2y):
Maybe someone should ask the people who were working on it what their main issues were.

I might have time for some more comments later, but here are a few quick points (as replies):

  1. There has been some serious progress in the last few years on full functional imaging of the C. elegans nervous system (at the necessary spatial and temporal resolutions and ranges).

However, despite this I haven't been able to find any publications yet where full functional imaging is combined with controlled cellular-scale stimulation (e.g. as I proposed via two-photon excitation of optogenetic channels), which I believe is necessary for inference of a functional model.

  1. I was certainly overconfident about how easy Nemaload would be, especially given the microscopy and ML capabilities of 2012, but more so I was overconfident that people would continue to work on it. I think there was very little work toward the goal of a nematode-upload machine for the four years from 2014 through 2017. Once or twice an undergrad doing a summer internship at the Boyden lab would look into it for a month, and my sense is that accounted for something like 3-8% of the total global effort.
mukashi (2y):
Why wasn't a postdoc or a PhD student hired to do this work? Was it due to a lack of funding?

I can't say for sure why Boyden or others didn't assign grad students or postdocs to a Nemaload-like direction; I wasn't involved at that time, there are many potential explanations, and it's hard to distinguish limiting/bottleneck or causal factors from ancillary or dependent factors.

That said, here's my best explanation. There are a few factors for a life-science project that make it a good candidate for a career academic to invest full-time effort in:

  1. The project only requires advancing the state of the art in one sub-sub-field (specifically the one in which the academic specializes).
  2. If the state of the art is advanced in this one particular way, the chances are very high of a "scientifically meaningful" result, i.e. it would immediately yield a new interesting explanation (or strong evidence for an existing controversial explanation) about some particular natural phenomenon, rather than just technological capabilities. Or, failing that, at least it would make a good "methods paper", i.e. establishing a new, well-characterized, reproducible tool which many other scientists can immediately see is directly helpful for the kind of "scientifically meaningful" experiments they alre
... (read more)

Note, (1) is less bad now, post-2018-ish. And there are ways around (2b) if you're determined enough. Michael Skuhersky is a PhD student in the Boyden lab who is explicitly working in this direction as of 2020. You can find some of his partial progress here https://www.biorxiv.org/content/biorxiv/early/2021/06/10/2021.06.09.447813.full.pdf and comments from him and Adam Marblestone over on Twitter, here: https://twitter.com/AdamMarblestone/status/1445749314614005760

Laël Cellier (1y):
For point 2, would it be possible to use the system to advance computer AI by studying the impact of large modifications of the connectome or the synapses in silico instead of in vivo, to get an EEG equivalent? Of course, I understand the system might have to virtually sleep from time to time, unlike today's purely mathematical, matrix-based probabilistic systems. It would be a matter of making the simulation more debuggable, instead of only being able to study muscle outputs as a function of inputs (senses).
Laël Cellier (1y):
Also, isn't the whole project making some completely wrong assumptions? I have heard the idea that neurons don't make synapses on their own, and that astrocytes, rather than just being support cells, act like sculptors on their sculptures, with research having focused on neurons mainly because of EEG detectability. And, in support of this, that this is the underlying reason different species with similar numbers of neurons show smaller or larger connectomes, and there is research claiming to have improved the number of synapses per neuron and the memorization capabilities of rodents (compared to controls) by introducing genes controlling the astrocytes of primates (though I recognise this theory is left uninvestigated for protostomes, with their simpler neuroglia instead of the full-fledged astroglia of vertebrates). Of course, this would add even more difficulty to the project: https://www.frontiersin.org/articles/10.3389/fcell.2022.931311/full. Having results built on completely wrong assumptions doesn't mean it doesn't work. For example, geocentric models were good enough to predict the position of planets like Jupiter during medieval times, but they later proved inadequate, hence the shift to simpler heliocentric models. All clinical trials on Alzheimer's over the past 25 years failing or performing poorly in humans might suggest we are completely wrong about the inner workings of brains somewhere. As an undergraduate student, please correct me if I said garbage.
  1. I got initial funding from Larry Page on the order of 10^4 USD and then funding from Peter Thiel on the order of 10^5 USD. The full budget for completing the Nemaload project was 10^6 USD, and Thiel lost interest in seeing it through.
wassname (1y):
Do you know why they lost interest? Assuming their funding decisions were well thought out, it might be interesting.

While we're sitting around waiting for revolutionary imaging technology or whatever, why not try to make progress on the question of how much, and what type of, information we can obscure about a neural network and still approximately infer meaningful details of that network from behavior. For practice, start with ANNs and keep it simple. Take a smallish network which does something useful, record the outputs as it's doing its thing, then add just enough random noise to the parameters that the output deviates noticeably from the original. Now train the perturbed version to match the recorded data. What do we get here: did we recover the weights and biases almost exactly? Assuming yes, how far can this go before we might as well have trained the thing from scratch? Assuming success, does it work equally on different types and sizes of networks; if not, what kind of scaling laws does this process obey? Assuming some level of success, move on to a harder problem, a sparse network: this time we throw away everything but connectivity information and try to repeat the above. How about something biologically realistic, where we try to simulate the spiking neurons with groups of standard artificial ones... you get the drift.
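
This experiment is easy to prototype at toy scale; here is a minimal numpy sketch of the "record, perturb, retrain on recordings" loop (network size, noise level, and training schedule are arbitrary choices, not tuned):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(params, x):
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def mse(params, x, y):
    return ((forward(params, x) - y) ** 2).mean()

def grad_step(params, x, y, lr=0.05):
    # Manual backprop for the tiny tanh network above.
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    pred = h @ w2 + b2
    d_pred = 2 * (pred - y) / len(x)
    d_w2 = h.T @ d_pred
    d_b2 = d_pred.sum(0)
    d_h = d_pred @ w2.T * (1 - h ** 2)
    d_w1 = x.T @ d_h
    d_b1 = d_h.sum(0)
    return [w1 - lr * d_w1, b1 - lr * d_b1, w2 - lr * d_w2, b2 - lr * d_b2]

# "Ground truth" network whose behaviour we record.
true_params = [rng.normal(0, 1, (4, 16)), np.zeros(16),
               rng.normal(0, 1, (16, 1)), np.zeros(1)]
x = rng.normal(0, 1, (2000, 4))
recorded = forward(true_params, x)           # behavioural recordings

# Obscure the parameters with noise, then retrain on the recordings only.
noisy = [p + rng.normal(0, 0.3, p.shape) for p in true_params]
params = [p.copy() for p in noisy]
for _ in range(3000):
    params = grad_step(params, x, recorded)

def param_distance(a, b):
    return sum(np.abs(p - q).sum() for p, q in zip(a, b))

print("parameter distance before retraining:", param_distance(noisy, true_params))
print("parameter distance after retraining: ", param_distance(params, true_params))
print("behavioural error after retraining:  ", mse(params, x, recorded))
```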

The tone of strong desirability for progress on WBE in this post was surprising to me. The author seems to treat progress in WBE as a highly desirable thing, a perspective I expect most on LW do not endorse.

The lack of progress here may be a quite good thing.

Like many other people here, I strongly desire to avoid death. Mind uploading is an efficient way to prevent many causes of death, as it could make a mind practically indestructible (thanks to backups, distributed computing, etc). WBE is a path towards mind uploading, and thus is desirable too.

Mind uploading could help mitigate OR increase the AI X-risk, depending on circumstances and implementation details. And the benefits of uploading as a mitigation tool seem to greatly outweigh the risks. 

The most preferable future for me is the one where mind uploading is ubiquitous, while X-risk is avoided.

Although unlikely, it is still possible that mind uploading will emerge sooner than AGI. Such a future is much more desirable than the future without mind uploading (some possible scenarios).

niconiconi (2y):
Did I miss some subtle cultural changes at LW? I know the founding principles of LW have been rationalism and AI safety from the start. But in my mind, LW has always had all types of adjacent topics and conversations, with many different perspectives. Or at least this was my impression of the 2010s LW threads on the Singularity and transhumanism. Did these discussions become more and more focused on AI safety and de-risking over time? I'm not a regular reader of LW; any explanation would be greatly appreciated.
J Thomas Moros (2y):
While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.
Matt Goldenberg (2y):
It's not just an AI safety risk, it's also an S-risk in its own right.
RomanS (2y):
While discussing a powerful new tech, people often focus on what could go horribly wrong, forgetting to consider what could go gloriously right. What could go gloriously right with mind uploading? It could eliminate involuntary death, saving trillions of future lives. This consequence alone massively outweighs the corresponding X- and S-risks.
davidad (2y):
At least from the orthodox QALY perspective on "weighing lives", the benefits of WBE don't outweigh the S-risks, because for any given number of lives, the resources required to make them all suffer are less than the resources required for the glorious version. The benefits of eventually developing WBE do outweigh the X-risks, if we assume that

* human lives are the only ones that count,
* WBE'd humans still count as humans, and
* WBE is much more resource-efficient than anything else that future society could do to support human life.

However, orthodox QALY reasoning of this kind can't justify developing WBE soon (rather than, say, after a Long Reflection), unless there are really good stories about how to avoid both the X-risks and the S-risks.
RomanS (2y):
As far as I know, mind uploading is the only tech that can reduce the risk of death (from all causes) to almost zero. It is almost impossible to destroy a mind that is running on resilient distributed hardware with tons of backups hidden in several star systems.

There is a popular idea that some very large amount of suffering is worse than death. I don't subscribe to it. If I'm tortured for X billions of years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death, because (by definition) it cannot be repaired. And everything else can be repaired, including the damage from any amount of suffering.

In such calculations, I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.

I also don't see how eliminating any arbitrarily large amount of suffering could be preferable to saving 1 life. Unless the suffering leads to permadeath, the sufferers can get over it. The dead cannot. Bad feelings are vastly less important than saved lives. It's a good idea to reduce suffering. But the S-risk is trivially eliminated from the equation if the tech in question is life-saving.

There are currently ~45 million permadeaths per year. Thus, any additional year without widely accessible mind uploading means there is an equivalent of 45+ million more humans experiencing the worst possible suffering until the end of the universe. In 10 years, it's half a billion. In 1000 years, it's half a trillion. This high cost of the Long Reflection is one more reason why it should never be forced upon humanity.

I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.

This is so incredibly far from where I would place the equivalence, and I think where almost anyone would place it, that I'm baffled. You really mean this?

RomanS (2y):
There is an ancient and (unfortunately) still very popular association between death and sleep / rest / peace / tranquility. The association is so deeply engraved, it is routinely used by most people who have to speak about death, e.g. "rest in peace", "put to sleep", "he is in a better place now", etc.

The association is harmful. It suggests that death could be a valid solution to pain, which is deeply wrong. It's the same wrongness as suggesting to kill a child to make the child less sad. Technically, the child will not experience sadness anymore. But infanticide is not a sane person's solution to sadness. The sane solution is to find a way to make the child less sad (without killing them!).

The sane solution to suffering is to reduce suffering, without killing the sufferer. For example, if a cancer patient is in great pain, the most ethical solution is to cure them of cancer, and use efficient painkillers during the process. If there is no cure, then utilize cryonics to transport them into the future where such a cure becomes available. Killing the patient because they're in pain is a sub-optimal solution (to put it mildly).

I can't imagine any situation where permadeath is preferable to suffering. With enough tech and time, all kinds of suffering can be eliminated, and their effects can be reversed. But permadeath is, by definition, non-reversible and non-repairable. If one must choose between a permanent loss of human life and some temporary discomfort, it doesn't make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort.
[DEACTIVATED] Duncan Sabien (2y):
(I agree wholeheartedly with almost everything you've said here, and have strong upvoted, but I want to make space for the fact that some people don't make sense, and some people reflectively endorse not making sense, and so while I will argue against their preference for death over discomfort, I will also fight for their right to make the wrong choice for themselves, just as I fight for your and my right to make the correct choice for ourselves.  Unless there is freedom for people to make wrong choices, we can never move beyond a socially-endorsed "right" choice to something Actually Better.)
lc (2y):
Something is handicapping your ability to imagine what the "worst possible discomfort" would be.
RomanS (2y):
The thing is: regardless of how bad the worst possible discomfort is, dying is still a rather stupid idea, even if you have to endure the discomfort for millions of years. Because if you live long enough, you can find a way to fix any discomfort. I wrote about it in more detail here.
Matt Goldenberg (2y):
This sweeps a large number of philosophical issues under the rug by begging the conclusion (that death is the worst thing), and then using that to justify itself (death is the worst thing, and if you die, you're stuck dead, so that's the worst thing).
RomanS (2y):
I predict that most (all?) ethical theories that assume that some amount of suffering is worse than death have internal inconsistencies. My prediction is based on the following assumption:

* permanent death is the only brain state that can't be reversed, given sufficient tech and time

The non-reversibility is the key. For example, if your goal is to maximize the happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans will have periods of intense and prolonged suffering. Because you can increase the happiness of the humans who suffered, but you can't increase the happiness of the humans who are non-reversibly dead.

If your goal is to minimize suffering (without killing people), then you should avoid killing people. Killing people includes withholding life-extension technologies (like mind uploading), even if radical life extension will cause some people to suffer for millions of years. You can decrease the suffering of the humans who are suffering, but you can't do that for the humans who are non-reversibly dead.

The mere existence of the option of voluntary immortality necessitates some quite interesting changes in ethical theories. Personally, I simply don't want to die, regardless of the circumstances. The circumstances might include any arbitrarily large amount of suffering. If a future-me ever begs for death, consider him in need of some brain repair, not in need of death.
Erhannis (2y):
While I (a year late) tentatively agree with you (though a million years of suffering is a hard thing to swallow compared to the instinctually almost mundane matter of death) I think there's an assumption in your argument that bears inspection.  Namely, I believe you are maximizing happiness at a given instance in time - the present, or the limit as time approaches infinity, etc.  (Or, perhaps, you are predicating the calculations on the possibility of escaping the heat death of the universe, and being truly immortal for eternity.)  A (possibly) alternate optimization goal - maximize human happiness, summed over time.  See, I was thinking, the other day, and it seems possible we may never evade the heat death of the universe.  In such a case, if you only value the final state, nothing we do matters, whether we suffer or go extinct tomorrow.  At the very least, this metric is not helpful, because it cannot distinguish between any two states.  So a different metric must be chosen.  A reasonable substitute seems to me to be to effectively take the integral of human happiness over time, sum it up.  The happy week you had last week is not canceled out by a mildly depressing day today, for instance - it still counts.  Conversely, suffering for a long time may not be automatically balanced out the moment you stop suffering (though I'll grant this goes a little against my instincts).  If you DO assume infinite time, though, your argument may return to being automatically true.  I'm not sure that's an assumption that should be confidently made, though.  If you don't assume infinite time, I think it matters again what precise value you put on death, vs incredible suffering, and that may simply be a matter of opinion, of precise differences in two people's terminal goals. (Side note: I've idly speculated about expanding the above optimization criteria for the case of all-possible-universes - I forget the exact train of thought, but it ended up more or less behaving in a manne
RomanS (2y):
Our current understanding of physics (and of our future capabilities) is so limited, I assume that our predictions of how the universe will behave trillions of years from now are worthless. I think we can safely postpone the entire question until after we achieve a decent understanding of physics, after we become much smarter, and after we can allow ourselves to invest some thousands of years of deep thought on the topic.

Connectome scanning continues to scale up drastically, particularly on fruit flies. davidad highlights some very recent work:

I could be wrong, but we're still currently unable to get that original C. elegans neural map to do anything (like run a simulated worm body), right?

I think @AndrewLeifer is almost there but, yes, still hasn’t gone all the way to a demonstration of behavior in a virtual environment: "A11.00004 : A functional connectivity atlas of C. elegans measured by neural activation".

Neural processing and dynamics are governed by the details of how neural signals propagate from one neuron to the next through the brain. We systematically measured functional properties of neural connections in the head of the nematode Caenorhabditis elegans by direct optogenetic activation and simultaneous calcium imaging of 10,438 neuron pairs. By measuring responses to neural activation, we extracted the strength, sign, temporal properties, and causal direction of the connections and created an atlas of causal functional connectivity.

We find that functional connectivity differs from predictions based on anatomy, in part, because of extrasynaptic signaling. The measured properties o

... (read more)
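
As a toy illustration of what "measuring functional properties by activation plus imaging" means computationally, here is a sketch that recovers the sign and strength of one directed connection from stimulus-triggered averages of a synthetic calcium trace (the data and response kernel are invented for the example, not taken from the cited atlas):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.1, 600.0                      # seconds
t = np.arange(0, T, dt)

# Synthetic experiment: neuron j is optogenetically activated at random
# times; neuron i responds with a delayed, noisy calcium transient.
stim_times = np.sort(rng.choice(len(t), 40, replace=False))
stim = np.zeros(len(t))
stim[stim_times] = 1.0
kernel = -0.8 * np.exp(-np.arange(0, 5, dt) / 1.5)   # inhibitory, tau = 1.5 s
response = np.convolve(stim, kernel)[:len(t)] + rng.normal(0, 0.05, len(t))

# Stimulus-triggered average of the responder's trace.
window = int(5 / dt)
snippets = [response[s:s + window] for s in stim_times if s + window < len(t)]
sta = np.mean(snippets, axis=0)

strength = sta[np.abs(sta).argmax()]     # peak of the averaged response
sign = "inhibitory" if strength < 0 else "excitatory"
print(f"estimated strength {strength:.2f}, sign: {sign}")
```
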
PeterMcCluskey (7mo):
Another report of progress: Mapping the Mind: Worm’s Brain Activity Fully Decoded (full paper).

I want to point out that there has been a small amount of progress in the last 10 years on the problem of moving from connectome to simulation, rather than no progress.

First, there has been interesting work at the JHU Applied Physics Lab which extends what Busbice was trying to do when he ran a simulation of C. elegans in a Lego Mindstorms robot (by the way, that work was very much overhyped by Busbice and in the media, so it's fitting that you didn't mention it). They use a basic integrate-and-fire model to simulate the neurons (which is probably not very accurate here, because C. elegans neurons don't actually seem to spike much and seem to rely on subthreshold activity more than in other organisms). To assign weights to the different synapses they used what appears to be a very crude metric: the weight was determined in proportion to the total number of synapses the two neurons on either side of the synapse share. Despite the crudeness of their approach, the simulated worm did manage to reverse its direction when bumping into walls. I believe this work was a project that summer interns did and didn't have a lot of funding, whic... (read more)

niconiconi (2y):
Thanks for the info. Your comment is the reason why I'm on LessWrong.

Imagine you have two points, A and B. You're at A, and you can see B in the distance. How long will it take you to get to B?

Well, you're a pretty smart fellow. You measure the distance, you calculate your rate of progress, maybe you're extra clever and throw in a factor of safety to account for any irregularities during the trip. And you figure that you'll get to point B in a year or so.

Then you start walking.

And you run into a wall. 

Turns out, there's a maze in between you and point B. Huh, you think. Well that's ok, I put a factor of safety into my calculations, so I should be fine. You pick a direction, and you keep walking. 

You run into more walls.

You start to panic. You figured this would only take you a year, but you keep running into new walls! At one point, you even realize that the path you've been on is a dead end — it physically can't take you from point A to point B, and all of the time you've spent on your current path has been wasted, forcing you to backtrack to the start.

Fundamentally, this is what I see happening, in various industries: brain scanning, self-driving cars, clean energy, interstellar travel, AI development. The list goes on.

Laymen see a point... (read more)

orthonormal (2y):
On the other hand, sometimes people end up walking right through what the established experts thought to be a wall. The rise of deep learning from a stagnant backwater in 2010 to a dominant paradigm today (crushing the old benchmarks in basically all of the most well-studied fields) is one such case. In any particular case, it's best to expect progress to take much, much longer than the Inside View indicates. But at the same time, there's some part of the research world where a major rapid shift is about to happen.

It seems to me that this task has an unclear goal. Imagine I linked you a github repo and said "this is a 100% accurate and working simulation of the worm." How would you verify that? If we had a WBE of Ray Kurzweil, we could at least say "this emulated brain does/doesn't produce speech that resembles Kurzweil's speech." What can you say about the emulated worm? Does it wiggle in some recognizable way? Does it move towards the scent of food?

Quoting jefftk:

To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.

(just included the quotation in my post)

A study by Alcor trained C. elegans worms to react to the smell of a chemical. They then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same exact test would show that you had successfully uploaded a worm with that memory vs. one without that memory.

Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans

jacobjacob (2y):
You could imagine an experimental design where you train worms to accomplish a particular task (e.g. learning which scent means food, if indeed they use scent; I don't know). You then upload both trained and untrained worms. If trained uploads perform better from the get-go than untrained ones, that's some evidence it's the same worm. To make it more granular, there are a lot of learning tasks from behavioural neuroscience you could adapt. You could also do single-neuron studies: train the worm on some task, find a neuron that seems to correspond to a particular abstraction, upload the worm, and check that the neuron still corresponds to the abstraction. Or ablation studies: selectively impair certain neurons in the live trained worm, the uploaded worm, and control worms, in a way that causes the same behaviour change only in the target individual. Or you can get causal evidence via optogenetics.
niconiconi (2y):
Optogenetics was exactly the method proposed by David; I just updated the article and included a full quote. I originally thought my post was already a mere summary of the previous LW posts by jefftk, that excessive quotation could make it too unoriginal, and that interested readers could simply read more by following the links. But I've just realized that giving sufficient context is important when you're restarting a forgotten discussion.

As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress. I would like to understand the reasons for this better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading a worm. There are a few sentences that hint both are unsolved, but they should be clearly separated.

Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. No... (read more)

This post was a great dive into two topics:

  • How an object-level research field has gone, and what are the challenges it faces.
  • Forming a model about how technologically optimistic projects go.

I think this post was good in its first edition, but became great after the author displayed an admirable ability to update their mind and a willingness to update their post in light of new information.

Overall I must reluctantly give this post only a +1 vote for inclusion, as I think the books are better served by more general rationality content, but in terms of what I would like to see more of on this site, +9. Maybe I'll compromise and give +4.

One complication glossed over in the discussion (both above and below) is that a single synapse, even at a single point in time, may not be well characterized as a simple "weight". Even without what we might call learning per se, the synaptic efficacy seems, upon closer examination, to be a complicated function of the recent history, as well as the modulatory chemical environment. Characterizing and measuring this is very difficult. It may be more complicated in a C. elegans than in a mammal, since it's such a small but highly optimized hunk of circuitry.
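
One standard way to express that point is a Tsodyks-Markram-style short-term plasticity model, in which each spike both consumes synaptic resources and transiently boosts release probability, so the "weight" seen by the next spike depends on recent history; a minimal sketch with arbitrary parameters:

```python
import numpy as np

# Short-term plasticity (Tsodyks-Markram style): the response to a spike
# depends on the recent spike history, not on a single fixed "weight".
U, tau_rec, tau_facil, A = 0.2, 0.5, 0.8, 1.0   # arbitrary parameters (s)

def synaptic_responses(spike_times):
    u, x, last = U, 1.0, None
    out = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            x = 1 + (x - 1) * np.exp(-dt / tau_rec)     # resources recover
            u = U + (u - U) * np.exp(-dt / tau_facil)   # facilitation decays
        u = u + U * (1 - u)      # facilitation jump on spike arrival
        out.append(A * u * x)    # postsynaptic response amplitude
        x = x * (1 - u)          # resources consumed by release
        last = t
    return out

print(synaptic_responses([0.0, 0.05, 0.10, 0.15, 1.5]))
# Successive responses within the 20 Hz burst differ from each other, and
# the response at t = 1.5 s differs again: same synapse, history-dependent
# efficacy rather than one static weight.
```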

There’s a scan of 1 mm^3 of a human brain, 1.4 petabytes with hundred(s?) of millions of synapses

‪https://ai.googleblog.com/2021/06/a-browsable-petascale-reconstruction-of.html‬

Do we at least have some idea of what kind of technology would be needed for reading out connection weights?

RomanS (2y):
A possible indirect way of doing that is by recording the worm's behavior:

1. record the behavior under many conditions
2. design an ANN that has the same architecture as the real worm
3. train many instances of the ANN on half of the recordings
4. select an instance that shows the right behavior on the withheld half of the recordings
5. if the records are long and diverse enough, and if the ANN is realistic enough, the selected instance will have weights that are functionally equivalent to the weights of the real worm

The same approach could be used to emulate the brain of a specific human, although the required compute in that case might be too large to become practical in the next decades.
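
A toy sketch of steps 3-5 of that recipe (the "real worm" is itself just a random nonlinear map here, and random search around its weights stands in for actually training ANN instances, purely to keep the sketch short):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 6, 3

# Stand-in for the real worm: an unknown input-output mapping we can only
# observe through behavioural recordings.
true_w = rng.normal(0, 1, (n_in, n_out))
def real_worm(stimuli):
    return np.tanh(stimuli @ true_w)

stimuli = rng.normal(0, 1, (400, n_in))
behaviour = real_worm(stimuli) + rng.normal(0, 0.05, (400, n_out))

train, test = slice(0, 200), slice(200, 400)
def score(w, split):
    pred = np.tanh(stimuli[split] @ w)
    return ((pred - behaviour[split]) ** 2).mean()

# Step 3: "train" many candidate instances on the first half of the
# recordings (random search over perturbed weights stands in for training).
candidates = [true_w + rng.normal(0, s, true_w.shape)
              for s in np.linspace(0.05, 2.0, 200)]
fitted = sorted(candidates, key=lambda w: score(w, train))[:10]

# Step 4: select the instance that behaves best on the withheld half.
best = min(fitted, key=lambda w: score(w, test))

# Step 5 (the hope): behavioural equivalence implies functional weights.
print("withheld-half behavioural error:", score(best, test))
print("weight error vs the real worm:  ", np.abs(best - true_w).mean())
```
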
niconiconi (2y):
David believed one can develop optogenetic techniques to do this. Just added David's full quote to the post. 

One review criticized my post for being inadequate at world modeling, saying readers who wish to learn more about predictions are better served by other books and posts (but also praised my willingness to update the post's content after new information arrived). I don't disagree, but I felt it was necessary to clarify the background of my writing it.

First and foremost, this post was meant specifically as (1) a review of the research progress on Whole Brain Emulation of C. elegans, and (2) a request for more information from the community. I became aware of this resea... (read more)

Two years later, there are now brain-wide recordings of C. elegans via calcium imaging. This includes models apparently at least partially predictive of behavior, and analysis of individual neurons' contributions to behavior.

If you want the "brain-wide recordings and accompanying behavioral data" you can apparently download them here!

It is very exciting to finally have measurements for this. I still need to do more than skim the paper though. While reading it, here are the questions on my mind:
* What are the simplest individual neuron models that ... (read more)

This is a total nitpick but there is a typo in the title of both this post and the one from jefftk referenced by it. It's "C. elegans", not "C. elgans".

niconiconi (1y):
Typo has been fixed.

There's a fundamental difficulty with these sorts of attempts to emulate entire nervous systems (which gets exponentially worse as you scale up) that I don't think gets enough attention: failure of averaging. See this paper on simulating single neurons: https://pubmed.ncbi.nlm.nih.gov/11826077/

The abstract:

"Parameters for models of biological systems are often obtained by averaging over experimental results from a number of different preparations. To explore the va... (read more)

Curated. The topic of uploads and whole-brain emulation is a frequent one, and one whose feasibility is always assumed to be true. While this post doesn't argue otherwise, it's fascinating to hear where we are with the technology for this.

Rob Bensinger (2y):
Post is about tractability / difficulty, not feasibility
ESRogs (2y):
What's the distinction you're making? A quick Google suggests this as the definition for "feasibility": This matches my understanding of the term. It also sounds a lot like tractability / difficulty. Are you thinking of it as meaning something more like "theoretical possibility"?
Rob Bensinger (2y):
That isn't the definition I'm familiar with -- the one I was using is in Webster's:
Ruby (2y):
Indeed! By "feasibility is assumed to be true", I meant in other places and posts.

Hey, 

TL;DR I know a researcher who's going to start studying C. elegans worms in a way that seems interesting as far as I can tell. Should I do something about that?

 

I'm trying to understand whether this is interesting for our community, specifically as a path to brain emulation, which I wonder could be used to (A) prevent people from dying, and/or (B) create a relatively-aligned AGI.

This is the most relevant post I found on LW/EA (so far).

I'm hoping someone with more domain expertise can say something like:

  • "OMG we should totally extra fund this
... (read more)

OpenWorm seems to be a project with realistic goals but unrealistic funding, in contrast to the EU's Human Brain Project (HBP): a project with an absurd amount of funding and absurdly unrealistic goals. Even ignoring the absurd endpoint, any 1-billion-euro project should be split up into multiple smaller ones, with time to take stock of things in between.

What could the EU have achieved by giving $50 million to OpenWorm to spend in 3 years (before getting more ambitious)?

Would it not have done so in the first place because of hubris? The worm is somehow... (read more)

My impression of OpenWorm was that it was not focused on WBE. It tried to be a more general-purpose platform for biological studies. It attracted more attention than a pure WBE project would, by raising vague hopes of also being relevant to goals such as studying Alzheimer's.

My guess is that the breadth of their goals led them to become overwhelmed by complexity.

It's possible to donate to OpenWorm to accelerate its development, via their website.

Maybe the problem is figuring out how to realistically simulate a SINGLE neuron, which could then be extended 302 or 100,000,000,000 times. Also, due to shorter generation times, any random C. elegans has 50 times more ancestors than any human, so evolution may have had time to make their neurons more complex.