Whole Brain Emulation: No Progress on C. elegans After 10 Years

by niconiconi · 8 min read · 1st Oct 2021 · 67 comments

174

Whole Brain Emulation · Mind Uploading · Forecasts (Specific Predictions) · Cryonics · World Modeling
Curated

Since the early 21st century, some transhumanist proponents and futurist researchers have claimed that Whole Brain Emulation (WBE) is not merely science fiction - although still hypothetical, it's said to be a potentially viable technology in the near future. Such beliefs have attracted significant fanfare in tech communities such as LessWrong.

In 2011 at LessWrong, jefftk did a literature review on the emulation of a worm, C. elegans, as an indicator of WBE research progress.

Because the human brain is so large, and we are so far from having the technical capacity to scan or emulate it, it's difficult to evaluate progress.  Some other organisms, however, have much smaller brains: the nematode C. elegans has only 302 cells in its entire nervous system.  It is extremely well studied and well understood, having gone through heavy use as a research animal for decades. Since at least 1986 we've known the full neural connectivity of C. elegans, something that would take decades and a huge amount of work to get for humans.  At 302 neurons, simulation has been within our computational capacity for at least that long.  With 25 years to work on it, shouldn't we be able to 'upload' a nematode by now?

There were three research projects from the 1990s to the 2000s, but all were dead ends that failed to reach their full research goals, giving a rather pessimistic vision of WBE. However, immediately after the initial publication of that post, LW readers Stephen Larson (slarson) and David Dalrymple (davidad) pointed out in the comments that they were working on the problem, and their two ongoing projects made the future look promising again.

The first was the OpenWorm project, coordinated by slarson. Its goal is to create a complete model and simulation of C. elegans, and to release all tools and data as free and open source software. Implementing a structural model of all 302 C. elegans neurons in the NeuroML description language was an early task completed by the project.

The next was another research effort at MIT by davidad. David explained that the OpenWorm project focused on anatomical data from dead worms, while very little data exists from the cells of living animals. Anatomical data can only tell scientists that a connection between neurons exists, not the relative importance of connections within the worm's nervous system.

  1. The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
  2. What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
  3. With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
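The loop described in point 2 can be caricatured in a few lines. This is a toy sketch under invented assumptions (a single hidden "gain" per neuron and a crude uncertainty heuristic standing in for the statistical framework), not a description of Nemaload's actual methods:

```python
import random

random.seed(1)
N = 302  # neurons in the C. elegans nervous system

# Hidden per-neuron dynamics we are trying to infer (invented for illustration).
true_gain = [random.uniform(0.5, 2.0) for _ in range(N)]

estimate = [1.0] * N     # the model's current guess for each neuron
uncertainty = [1.0] * N  # crude stand-in for a posterior uncertainty

def perturb_and_record(i):
    """Stimulate neuron i (e.g. optogenetically) and record a noisy response."""
    return true_gain[i] + random.gauss(0.0, 0.05)

for _ in range(1000):
    # Active learning: perturb the neuron the model is least sure about.
    i = max(range(N), key=lambda k: uncertainty[k])
    obs = perturb_and_record(i)
    estimate[i] += 0.5 * (obs - estimate[i])  # simple running-average update
    uncertainty[i] *= 0.7                     # we learned something about i
```

Real inference would replace the running average with a proper statistical model, but the shape of the loop (perturb, record, update, choose the next perturbation) is the point.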

He believed an automated device could be built within a year or two to gather such data. And he was confident:

I'm a disciple of Kurzweil, and as such I'm prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.

When asked by gwern for a statement for PredictionBook.com, davidad said:

  • "A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08." 76% confidence
  • "A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01." 99.8% confidence

(disappointingly, these statements were not actually recorded on PredictionBook).
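For calibration, those confidences imply steep odds. A short calculation (mine, not from the thread) of the implied betting odds and the surprisal, in bits, each prediction would incur by failing:

```python
import math

# davidad's two stated confidences from the comment above
predictions = {"2014-06-08": 0.76, "2020-01-01": 0.998}

surprisal = {}
for date, p in predictions.items():
    odds = p / (1 - p)                    # implied betting odds
    surprisal[date] = -math.log2(1 - p)   # bits of surprise if the prediction fails
    print(f"{date}: {odds:.0f}:1 odds, {surprisal[date]:.2f} bits if wrong")
```

At 99.8%, failure costs about 9 bits of surprise (499:1 odds), which is why such an extreme number would have been worth recording.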

Unfortunately, 10 years later, both projects appear to have made no significant progress, and neither has produced a working simulation that resembles biological behavior. In a 2015 CNN interview, slarson said the OpenWorm project was "only 20 to 30 percent of the way towards where we need to get", and it seems to have been in development hell ever since. Meanwhile, I was unable to find any breakthrough from davidad before the project ended; David personally left the project in 2012.

When the initial review was published, there were already 25 years of work on C. elegans behind us, and now yet another decade has passed, yet we're still unable to "upload" a nematode. Therefore, I have to end my post with the pessimistic vision of WBE by quoting the original post.

This seems like a research area where you have multiple groups working at different universities, trying for a while, and then moving on.  None of the simulation projects have gotten very far: their emulations are not complete and have some pieces filled in by guesswork, genetic algorithms, or other artificial sources.  I was optimistic about finding successful simulation projects before I started trying to find one, but now that I haven't, my estimate of how hard whole brain emulation would be has gone up significantly.  While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

This is discouraging.

Closing thoughts: What went wrong? What are the unsolvable difficulties here?

Update

Some technical insights behind the failure were given in a 2014 update ("We Haven't Uploaded Worms"), in which jefftk showed that the major problems are:

  1. Knowing the connections isn't enough; we also need to know the weights and thresholds. We don't know how to read them from a living worm.
  2. C. elegans is able to learn by changing the weights, but we don't know how weights and thresholds are changed in a living worm.

The best we can do is model a generic worm: pretrain the neural network and run it with fixed weights. Thus, no worm is "uploaded", because we can't read the weights, and these simulations are far from realistic because they are incapable of learning. Hence, the result is merely a boring artificial neural network, not a brain emulation.

To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.
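The gap described above can be caricatured in a toy sketch. Everything here is invented for illustration (random weights, a placeholder Hebbian-style update); the real per-worm weights, thresholds, and plasticity rules are exactly what we cannot read out:

```python
import math
import random

random.seed(0)
N = 302  # neurons in the C. elegans nervous system

# Invented parameters: for a real upload these would have to be read
# from a specific living worm, which we don't know how to do.
W = [[random.gauss(0.0, 0.1) for _ in range(N)] for _ in range(N)]
theta = [0.0] * N

def step(x):
    """One synchronous update of a simple rate-based network."""
    return [1.0 / (1.0 + math.exp(-(sum(W[i][j] * x[j] for j in range(N)) - theta[i])))
            for i in range(N)]

# "Generic worm": W stays frozen, so repeated stimuli never change the
# response -- the simulation cannot learn, and no individual is uploaded.
x = [random.random() for _ in range(N)]
for _ in range(3):
    x = step(x)

# A living worm updates W as it learns; an illustrative Hebbian-style rule:
eta = 0.01
y = step(x)
w00_before = W[0][0]
for i in range(N):
    for j in range(N):
        W[i][j] += eta * y[i] * x[j]
```

The frozen-weights run is all the existing simulations do; the final loop is the part that would require reading and modeling plasticity in a living animal.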

Furthermore, in a Quora answer, davidad hinted that his project was discontinued partially due to the lack of funding.

If I'd had $1 million seed, I wouldn't have had to cancel the project when I did...

Conclusion: Relevant neural recording technologies are needed to collect data from living worms, but they remain undeveloped, and the funding simply isn't there. 

Update 2

I just realized David actually gave an in-depth talk about his work and the difficulties he encountered at MIRI's AI Risk for Computer Scientists workshop in 2020, according to this LW post ("AIRCS Workshop: How I failed to be recruited at MIRI").

Most discussions were pretty high level. For example, someone presented a talk where they explained how they tried and failed to model and simulate a brain of C. Elegansis. A worm with an extremely simple and well understood brain. They explained to us a lot of things about biology, and how they had been trying and scanning precisely a brain. If I understood correctly, they told us they failed due to technical constraints and what those were. They believe that, nowadays, we can theoretically create the technology to solve this problem. However there is no one interested in said technology, so it won't be developed and be available to the market.

Does anyone know any additional information? Is the content of that talk available in paper form?

Update 3

Note to future readers: within a week of the initial publication of this post, I received some helpful insider comments on the status of this field, including from David himself. The following are especially worth reading.


67 comments

Let's look at a proxy task. "Rockets landing on their tail". The first automated landing of an airliner was in 1964. Using a similar system of guidance signals from antennas on the ground, surely a rocket could have landed after boosting a payload around the same time period. While SpaceX first pulled it off in 2015.

In 1970 if a poorly funded research lab said they would get a rocket to land on its tail by 1980 and in 1980 they had not succeeded, would you update your estimated date of success to "centuries"? C. elegans has 302 neurons, and it takes, I think I read, 10 ANN nodes to mimic the behavior of one biological neuron. With a switching frequency of 1 kHz and full connectivity, you would need 302 * 100^2 * 1000 operations per second. This is 0.003 TOPS, and embedded cards that do 200-300 TOPS are readily available.
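Redoing that arithmetic under the commenter's own assumptions (10 ANN nodes per biological neuron, full connectivity, 1 kHz), the fully connected count comes out to (302 × 10)² × 1000 ≈ 9.1 × 10⁹ ops/s, or about 0.009 TOPS rather than 0.003; either way the conclusion is unchanged:

```python
# Assumptions taken from the comment above; only the arithmetic is checked here.
neurons = 302
nodes_per_neuron = 10      # the commenter's "10 ANN nodes per biological neuron"
rate_hz = 1_000            # 1 kHz switching frequency

nodes = neurons * nodes_per_neuron       # 3,020 ANN nodes
ops_per_second = nodes ** 2 * rate_hz    # one multiply-add per connection per tick
tops = ops_per_second / 1e12

print(f"{ops_per_second:.2e} ops/s = {tops:.4f} TOPS")  # ~0.0091 TOPS
```

A 200-300 TOPS embedded card exceeds this requirement by more than four orders of magnitude.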

So the computational problem is easy. Did David build an automated device to collect data from living cells? If not, was the reason it wasn't done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn't solve, or was it because...those people weren't there and neither was the funding?

My larger point is that if the 'math checks out' on the basic feasibility of an idea, either there is something about the problem that makes it enormously harder than it appears, or simply not enough resources were invested to make progress. SpaceX, for example, had spent approximately 2-5 billion dollars by 2015 and the first rocket landing. A scrappy startup with, say, 1 million dollars might not get anywhere. How much funding did these research labs working on C. elegans have?

Did David build an automated device to collect data from living cells? If not, was the reason it wasn't done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn't solve, or was it because...those people weren't there and neither was the funding?

Good points. I did more digging and found some relevant information I initially missed; see "Update". He didn't, and funding was indeed a major factor.

Let's look at a proxy task. "Rockets landing on their tail"… While SpaceX first pulled it off in 2015.

The DC-X did this first in 1993, although this video is from 1995.

https://youtube.com/watch?v=wv9n9Casp1o

(And their budget was 60 million 1991 dollars, Wolfram Alpha says that’s 117 million in 2021 dollars) https://en.m.wikipedia.org/wiki/McDonnell_Douglas_DC-X

In 1970 if a poorly funded research lab said they would get a rocket to land on its tail by 1980 and in 1980 they had not succeeded, would you update your estimated date of success to "centuries"

That isn't the right interpretation of the proxy task. In 2011, I was using progress on nematodes to estimate the timing of whole brain emulation for humans. That's more similar to using progress in rockets landing on their tail to estimate the timing of a self-sustaining Mars colony.

(I also walked back from "probably hundreds of years" to "I don't think we'll be uploading anyone in this century" after the comments on my 2011 post, writing the more detailed: https://www.jefftk.com/p/whole-brain-emulation-and-nematodes)

Ok. Hypothetical experiment. In 2042 someone does demonstrate a convincing dirt dynamics simulation and a flock of emulated nematodes. The emulated firing patterns correspond well with experimentally observed nematodes.

With that information, would you still feel safe in concluding the solution is 58 years away for human scale?

I'm not sure what you mean by "convincing dirt dynamics simulation and a flock of emulated nematodes". I'm going to assume you mean the task I described in my post: teach one something, upload it, verify it still has the learned behavior.

Yes, I would still expect it to be at least 58 years away for human scale. The challenges are far larger for humans, and it taking over 40 years from people starting on simulating nematodes to full uploads would be a negative timeline update for me. Note that in 2011 I expected this for around 2021, which is nowhere near on track: https://www.jefftk.com/p/whole-brain-emulation-and-nematodes

Ok. I have thought about it further, and here is the reason I think you're wrong. You have implicitly assumed that the tools available to neuroscientists today are good, and that we have a civilization with the excess resources to support such an endeavor.

This is false. Today the available resources for such endeavors are only enough to fund small teams. Research that is profitable, like silicon chip improvement, gets hundreds of billions invested into it.

So any extrapolation is kinda meaningless. It would be like asking in 1860 how many subway tunnels would be in NYC in 1940. The heavy industry to build it simply didn't exist so you would have to conclude it would be slow going.

Similarly, the robotic equipment to do bioscience is currently limited and specialized. It's why a genome can be sequenced for a few thousand dollars but graduate students still use pipettes.

Oh, and if you had wanted to know in 1920 when the human genome would be sequenced, and in 1930 had learned that zero genes had been sequenced, you might have drawn a similar conclusion.

Do you have a better way of estimating the timing of new technologies that require many breakthroughs to reach?

I'll try to propose one.

  • Is the technology feasible with demonstrated techniques at a laboratory level?
  • Will there likely be gain to the organization that sells or deploys this technology in excess of its estimated cost?
  • Does the technology run afoul of existing government regulation that will slow research into it?
  • Does the technology have a global market that will result in a sigmoidal adoption curve?

Electric cars should have been predictable this way:

  • Feasible: yes, since 1996, or arguably 1990. (The LFP battery is the first lithium chemistry with the lifespan to be a net gain for an EV; 1990 saw the first 'modern' lithium battery assembled in a lab.)
  • Gain: reduced fuel cost, reduced maintenance cost, and supercar acceleration and vehicle performance with much cheaper drivetrains.
  • Government: governments perceive a benefit in EVs, so they have subsidized the research.
  • Adoption: yes, and the adoption curve is sigmoidal.

Smartphones follow a similar set of arguments, and the chips that made them possible were only low-power enough around the point that the sigmoidal adoption started; they were not really possible much prior. Also, Apple made a large upfront investment to deliver an acceptable user experience all at once, rather than incrementally adding features like other early smartphone manufacturers did.

I will put my stake in the sand and say that autonomous cars fit all these rules:

  • Feasible: the fundamental problem of assessing collision risk for a candidate path, the only part the car has to get perfect, is a simple and existing algorithm.
  • Enormous gain: easily north of a trillion dollars in annual revenue, or hundreds of billions in annual profit, will be available to the industry.
  • Governments are reluctant but are obviously allowing the research and testing.
  • The adoption curve will be sigmoidal, because it has obvious self-gain. The first shipping autonomous EVs will likely produce a cost advantage for a taxi firm or be rented directly, and will be immediately adopted; the reinvested revenue makes their cost advantage grow until, on the upward climb of the adoption curve, the limit is simply how fast the hardware can be manufactured.

I will take it a step further and say that general robots that solve problems of the same form as autonomous driving fit all the same rules: they will be adopted, adoption will be sigmoidal, and other reports have estimated that about half of all jobs will be replaced.

 

Anyways, for uploading a nematode, the optical I/O techniques to debug a living neural system are, I think, still in the laboratory prototype stage. Does anyone have this working in any lab animal anywhere? So it doesn't even meet condition 1. And what's the gain if you upload a nematode? Not much; certainly not in excess of the tens of millions of dollars it will likely cost. Governments are uninterested, as nematodes are not vertebrates. And there's no "self-gain": upload a nematode, and no one is going to start uploading nematodes all over the planet.

 

There still will be progress, and with advances in other areas - the general robotics mentioned above - this would free up resources and make possible something like a human upload project.

And that, if you had demonstrations of feasibility, does meet all 4 conditions.

  • Assume you have demonstrated feasibility with neuroscience experiments that will be performed and can "upload" a tiny section of human brain tissue.
  • The gain is that you get to charge each uploaded human all the assets they accumulated in life, and/or they will likely have superhuman capabilities once uploaded. This gain is more like "divide by zero" gain: uploaded humans would be capable of organizing projects to tear down the solar system for raw materials, or essentially "near-infinite money".
  • Governments will have to push it with all-out efforts near the end-game, because to not have uploaded humans or AI is to lose all sovereignty.
  • Adoption curve is trivially sigmoidal.

 

I don't know when all the conditions will be met, but uploading humans is a project similar to nuclear weapons in terms of gain, and in how, up until just 29 months (!) before detonation, the amount of fission done on Earth by humans was zero. In 1900 you might have felt safe predicting no fission before the end of the century, just as you do now.

 

Also, you can use this method to disprove personal jetpacks or fusion power.

  • Feasible: personal jetpacks, no; rocket jetpacks of the 1960s had 22-second flight times. Fusion power: no; no experiments were developing significant fusion gain without a fission device to provide the containment pressure.
  • Gain: no. Jetpacks would guzzle jet fuel even in more practical forms that worked more like a VTOL aircraft, and the value of this fuel is going to exceed the value of the time saved for almost all users. Fusion power is a method of boiling water using high-energy laboratory equipment and is unlikely to be cheaper than the competition over any feasible timescale.
  • Government: no. Jetpacks cause extreme noise pollution and extra fires and deaths when they fall out of the sky. Fusion is a nuclear proliferation risk, as a fusion reactor provides a source of neutrons that could be used to transmute material into plutonium.
  • Sigmoidal: no; you can't have this without large gain. Maybe this criterion is redundant.

 

If you got this far in reading, one notable fault of this proposed algorithm is that it does not predict technologies requiring a true breakthrough. You could not predict lasers, for instance, as these were not known to be feasible until the 1960s, when the first working models existed. That's a real breakthrough. The distinction I am making is that if you do not know whether physics will allow something to work, or allow it to work well, then you need a breakthrough to get it working.

Ditto for math algorithms; neural networks, I would say, are another real breakthrough, as they are much "better" than they should be, given how little we know about what we are doing.

We do know that physics will allow us to build a computer big enough to emulate a brain, to scan at least the synaptome of a once-living brain, and to get some detail on the weights. We also know that learning means we do not really have to be all that exact.

Were they trying to simulate a body and an environment? Seems to me that would make the problem much harder, as you’d be trying to simulate reality. (E.g. How does an organic body move through physical space based on neural activity? How does the environment’s effects on the body stimulate neural changes?)

You have to, or you haven't really solved the problem. It's very much bounded: you do not need to simulate "reality", you need to approximate the things that the nematode can experience to slightly higher resolution than it can perceive.

So basically you need some kind of dirt dynamics model that has sufficient fidelity to the nematode's very crude senses to be equivalent. It might be easier with an even smaller organism.

Maybe someone should ask the people who were working on it what their main issues were.

I might have time for some more comments later, but here's a few quick points (as replies):

  1. I was certainly overconfident about how easy Nemaload would be, especially given the microscopy and ML capabilities of 2012, but more so I was overconfident that people would continue to work on it. I think there was very little work toward the goal of a nematode-upload machine from 2013 to 2017. Once or twice an undergrad doing a summer internship at the Boyden lab would look into it for a month, and my sense is that accounted for something like 3-8% of the total global effort.

Why wasn't a postdoc or a PhD student hired to do this work? Was it due to lack of funding?

I can't say for sure why Boyden or others didn't assign grad students or postdocs to a Nemaload-like direction; I wasn't involved at that time, there are many potential explanations, and it's hard to distinguish limiting/bottleneck or causal factors from ancillary or dependent factors.

That said, here's my best explanation. There are a few factors for a life-science project that make it a good candidate for a career academic to invest full-time effort in:

  1. The project only requires advancing the state of the art in one sub-sub-field (specifically the one in which the academic specializes).
  2. If the state of the art is advanced in this one particular way, the chances are very high of a "scientifically meaningful" result, i.e. it would immediately yield a new interesting explanation (or strong evidence for an existing controversial explanation) about some particular natural phenomenon, rather than just technological capabilities. Or, failing that, at least it would make a good "methods paper", i.e. establishing a new, well-characterized, reproducible tool which many other scientists can immediately see is directly helpful for the kind of "scientifically meaningful" experiments they already do or know they want to do.
  3. It is easy to convince people that your project is plausibly on a critical path in the roadmap towards one of the massive medical challenges that ultimately motivate most life-science funding, such as finding more effective treatments for Alzheimer's, accelerating the vaccine pipeline, preventing heart disease, etc.

The more of these factors are present, the more likely your effort as an academic will lead to career advancement and recognition. Nemaload unfortunately scored quite poorly on all three counts, at least until recently:

(1) It required advancing the state-of-the-art in, at least: C. elegans genetic engineering, electro-optical system integration, computer vision, quantitative structural neuroanatomy of C. elegans, mathematical modeling, and automated experimental design.

(2) Even the final goal of Nemaload (uploading worms who've learned different behaviors and showing that the behaviors are reproduced in simulations) is barely "scientifically meaningful". All it would demonstrate scientifically (as opposed to technically) is that learned behaviors are encoded in some way in neural dynamics. This hypothesis is at the same time widely accepted and extremely difficult to convince skeptics of. Of course, studying the uploaded dynamics might yield fascinating insights into how nature designs minds, but it also might be pretty black-boxy and inexplicable without advancing the state of the art in yet further ways.

(2b) Worse, partial progress is even less scientifically meaningful, e.g. "here's a time-series of half the neurons, I guess we can do unsupervised clustering on it, oh look at that, the neural activity pattern can predict whether the worm is searching for food or not, as can, you know, looking at it." To get an upload, you need all the components of the uploading machine, and you need them all to work at full spec. And partial progress doesn't make a great methods paper either, for the following reason. Any particular experiment that worm neuroscientists want to do, they can do more cheaply and effectively in other ways, like genetically engineering only the specific neurons they care about for that experiment to fluoresce when they're active. Even if they're interested in a lot of neurons, they're going to average over a population anyway, so they can just look at a handful of neurons at a time. And they also don't mind doing all kinds of unnatural things to the worms like microfluidic immobilization to make the experiment easier, even though that makes the worms' overall mental-state very, shall we say, perturbed, because they're just trying to probe one neural circuit at a time, not to get a holistic view of all behaviors across the whole mental-state-space.

(3) The worm nervous system is in most ways about as far as you can get from a human nervous system while still being made of neural cells. C. elegans is not the model organism of choice for any human neurological disorder. Further, the specific technical problems and solutions are obviously not going to generalize to any creature with a bony skull, or with billions of neurons. So what's the point? It's a bit like sending a lander to the Moon when you're trying to get to Alpha Centauri. There are some basic principles of celestial mechanics and competencies of system design and integration that will probably mostly generalize, and you have to start acquiring those with feedback from attempting easier missions. Others may argue that Nemaload on a roadmap to any science on mammals (let alone interventions on humans) is more like climbing a tree when you're trying to get to the Moon. It's hard to defend against this line of attack.

If a project has one or two of these factors but not all three, then if you're an ambitious postdoc with a good CV already in a famous lab, you might go for it. But if it has none, it's not good for your academic career, and if you don't realize that, your advisor has a duty of care to guide you towards something more likely to keep your trajectory on track. Advisors don't owe the same duty of care to summer undergrads.

Adam Marblestone might have more insight on this question; he was at the Boyden lab in that time. It also seems like the kind of phenomenon that Alexey Guzey likes to try to explain.

Note, (1) is less bad now, post-2018-ish. And there are ways around (2b) if you're determined enough. Michael Skuhersky is a PhD student in the Boyden lab who is explicitly working in this direction as of 2020. You can find some of his partial progress here https://www.biorxiv.org/content/biorxiv/early/2021/06/10/2021.06.09.447813.full.pdf and comments from him and Adam Marblestone over on Twitter, here: https://twitter.com/AdamMarblestone/status/1445749314614005760

  2. I got initial funding from Larry Page on the order of 10^4 USD and then funding from Peter Thiel on the order of 10^5 USD. The full budget for completing the Nemaload project was 10^6 USD, and Thiel lost interest in seeing it through.
  3. There has been some serious progress in the last few years on full functional imaging of the C. elegans nervous system (at the necessary spatial and temporal resolutions and ranges).

However, despite this I haven't been able to find any publications yet where full functional imaging is combined with controlled cellular-scale stimulation (e.g. as I proposed via two-photon excitation of optogenetic channels), which I believe is necessary for inference of a functional model.

The tone of strong desirability for progress on WBE in this post was surprising to me. The author seems to treat progress in WBE as a highly desirable thing, a perspective I expect most on LW do not endorse.

The lack of progress here may be a quite good thing.

Like many other people here, I strongly desire to avoid death. Mind uploading is an efficient way to prevent many causes of death, as it could make a mind practically indestructible (thanks to backups, distributed computing, etc.). WBE is a path towards mind uploading, and thus is desirable too.

Mind uploading could help mitigate OR increase the AI X-risk, depending on circumstances and implementation details. And the benefits of uploading as a mitigation tool seem to greatly outweigh the risks. 

The most preferable future for me is the future there mind uploading is ubiquitous, while X-risk is avoided. 

Although unlikely, it is still possible that mind uploading will emerge sooner than AGI. Such a future is much more desirable than the future without mind uploading (some possible scenarios).

This really depends on whether you believe a mind-upload retains the same conscious agent from the original brain. If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE. The delay between solving WBE and the hard problem of consciousness is so vast in my opinion that being excited for mind-uploading when WBE progress is made is like being excited for self-propelled cars after making progress in developing horse-drawn wagons. In both cases, little progress has been made on the most significant component of the desired thing.

If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE.

Doesn't WBE involve the easy rather than hard problem of consciousness? You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.

You don't need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.

I'm pretty sure the problem with this is that we don't know what it is about the human brain that gives rise to consciousness, and therefore we don't know whether we are actually emulating the consciousness-generating thing when we do WBE. Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we've just created a philosophical zombie. To find out whether our emulation is sufficient to produce consciousness, we would need to find out what X is and how to emulate it. I'm pretty sure this is exactly the hard problem of consciousness.

Even if biological computation is sufficient for generating consciousness, we will have no way of knowing until we solve the hard problem of consciousness.

Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we've just created a philosophical zombie.

David Chalmers had a pretty convincing (to me) argument for why it feels very implausible that an upload with identical behavior and functional organization to the biological brain wouldn't be conscious (the central argument starts from the subheading "3 Fading Qualia"): http://consc.net/papers/qualia.html

What a great read! I suppose I'm not convinced that Fading Qualia is an empirical impossibility, and therefore that there exists a moment of Suddenly Disappearing Qualia when the last neuron is replaced with a silicon chip. If consciousness is quantized (just like other things in the universe), then there is nothing wrong in principle with Suddenly Disappearing Qualia when a single quantum of qualia is removed from a system with no other qualia, just like removing the last photon from a vacuum.

Joe is an interesting character whom Chalmers thinks is implausible, but aside from the scenario rubbing up against a faint intuition, I have no reason to believe that Joe is experiencing Fading Qualia. There is no indication for any reason that the workings of consciousness should obey any intuitions we may have about it.

There is no indication for any reason that the workings of consciousness should obey any intuitions we may have about it.

The mind is an evolved system out to do stuff efficiently, not just a completely inscrutable object of philosophical analysis. It's likelier that the parts like sensible cognition and qualia and the subjective feeling of consciousness are coupled and need each other to work than that they were somehow intrinsically disconnected and cognition could go on as usual without subjective consciousness using anything close to the same architecture. If that were the case, we'd have the additional questions of how consciousness evolved to be a part of the system to begin with and why hasn't it evolved out of living biological humans.

I agree with you, though I personally wouldn't classify this as purely an intuition since it is informed by reasoning which itself was gathered from scientific knowledge about the world. Chalmers doesn't think that Joe could exist because it doesn't seem right to him. You believe your statement because you know some scientific truths about how things in our world come to be (i.e. natural selection) and use this knowledge to reason about other things that exist in the world (consciousness), not merely because the assertion seems right to you.

The brain is changing over time. It is likely that there is not a single atom in your 2021-brain that was present in your 2011-brain. 

If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner. Because you've already survived a mind uploading into a new brain. 

Gradual mind uploading (e.g. by gradually replacing neurons with emulated replicas) circumvents the philosophical problems attributed to non-gradual methods.

Personally, although I prefer gradual uploading, I would agree to a non-gradual method too, as I don't see the philosophical problems as important. As per Newton's Flaming Laser Sword: 

if a question, even in principle, can't be resolved by an experiment, then it is not worth considering. 

If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me. 

The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another. 

As Dennett put it, everyone is a philosophical zombie.

There are a lot of interesting points here, but I disagree (or am hesitant to agree) with most of them.

If you agree that the natural replacements haven't killed you (2011-you and 2021-you are the same conscious agent), then it's possible to transfer your mind to a machine in a similar manner. Because you've already survived a mind uploading into a new brain.

Of course, I'm not disputing whether mind-uploading is theoretically possible. It seems likely that it is, although it will probably be extremely complex. There's something to be said about the substrate independence of computation and, separately, consciousness. No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved such as spin, charge, entanglement, or some yet undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.

If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me. 

The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another.

These statements are ringing some loud alarm bells for me. It seems that you are rejecting consciousness itself. I suppose you could do that, but I don't think any reasonable person would agree with you. To truly gauge whether you believe you are conscious or not, ask yourself, "have I ever experienced pain?" If you believe the answer to that is "yes," then at least you should be convinced that you are conscious.

What you are suggesting at the end there is that WBE = mind uploading. I'm not sure many people would agree with that assertion.

No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved such as spin, charge, entanglement, or some yet undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.

Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?

It seems to me that this can't be verified by any experiment, and thus must be cut off by Newton's Flaming Laser Sword. 

It seems that you are rejecting consciousness itself.

As far as I know, it is impossible to experimentally verify whether some entity possesses consciousness (partly because of how fuzzy its definitions are). That is a strong indicator that consciousness is one of those abstractions that don't correspond to any real phenomenon. 

"have I ever experienced pain?"

If certain kinds of damage are inflicted upon my body, my brain generates an output typical for a human in pain. The reaction can be experimentally verified. It also has a reasonable biological explanation, and a clear mechanism of functioning. Thus, I have no doubt that pain does exist, and I've experienced it. 

I can't say the same about any introspection-based observations that can't be experimentally verified. The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.

Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?

No, we cannot. Just as we cannot know with certainty whether a mind-upload is conscious. The fact that we presume our 2021 brain is a conscious agent related to our 2011 brain, while being unable to verify the properties that enabled the conscious connection between the two, does not mean that those properties do not exist.

It seems to me that this can't be verified by any experiment, and thus must be cut off by Newton's Flaming Laser Sword.

Perhaps we presently have no way of testing whether some matter is conscious or not. This is not equivalent to saying that, in principle, the conscious state of some matter cannot be tested. We may one day make progress on the hard problem of consciousness and be able to perform these experiments. Imagine making this argument throughout history before microscopes, telescopes, and hadron colliders. We can now sheathe Newton's Flaming Laser Sword.

I can't say the same about any introspection-based observations that can't be experimentally verified.

I believe this hinges on an epistemic question about whether we can have knowledge of anything using our observations alone. I think even a skeptic would say that she has consciousness, as the fact that one is conscious may be the only thing one can know with certainty about oneself. You don't need to verify any specific introspective observation. The act of introspection itself should be enough for someone to verify that they are conscious.

The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.

This claim refers to the reliability of the human brain in verifying the truth value of certain propositions or identifying specific and individuable experiences. Knowing whether oneself is conscious is not strictly a matter of verifying a proposition, nor identifying an individuable experience. It's only about verifying whether one has any experience whatsoever, which should be possible. Whether I believe your claim to consciousness or not is a different problem.

The lack of progress here may be a quite good thing.

Did I miss some subtle cultural changes at LW?

I know rationalism and AI safety have been founding principles of LW from the start. But in my mind, LW has always had all kinds of adjacent topics and conversations, with many different perspectives. Or at least that was my impression of the 2010s LW threads on the Singularity and transhumanism. Have these discussions become increasingly focused on AI safety and derisking over time?

I'm not a regular reader of LW, any explanation would be greatly appreciated. 

While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.

It's not just an AI safety risk, it's also an S-risk in its own right.

While discussing a new powerful tech, people often focus on what could go horribly wrong, forgetting to consider what could go gloriously right. 

What could go gloriously right with mind uploading? It could eliminate involuntary death, saving trillions of future lives. This consequence alone massively outweighs the corresponding X- and S-risks.

At least from the orthodox QALY perspective on "weighing lives", the benefits of WBE don't outweigh the S-risks, because for any given number of lives, the resources required to make them all suffer are smaller than the resources required for the glorious version.

The benefits of eventually developing WBE do outweigh the X-risks, if we assume that

  • human lives are the only ones that count,
  • WBE'd humans still count as humans, and
  • WBE is much more resource-efficient than anything else that future society could do to support human life.

However, orthodox QALY reasoning of this kind can't justify developing WBE soon (rather than, say, after a Long Reflection), unless there are really good stories about how to avoid both the X-risks and the S-risks.

As far as I know, mind uploading is the only tech that can reduce the risk of death (from all causes) to almost zero. It is almost impossible to destroy a mind that is running on resilient distributed hardware with tons of backups hidden in several star systems. 

There is a popular idea that some very large amount of suffering is worse than death. I don't subscribe to it. If I'm tortured for X billions of years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death - because (by definition) it cannot be repaired. And everything else can be repaired, including the damage from any amount of suffering. 

In such calculations, I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe. 

I also don't see how eliminating any arbitrarily large amount of suffering could be preferable to saving 1 life. Unless the suffering leads to permadeath, the sufferers can get over it. The dead cannot. Bad feelings are vastly less important than saved lives.

It's a good idea to reduce suffering. But the S-risk is trivially eliminated from the equation if the tech in question is life-saving.

orthodox QALY reasoning of this kind can't justify developing WBE soon (rather than, say, after a Long Reflection)

There are currently ~45 million permadeaths per year. Thus, any additional year without widely accessible mind uploading means there is an equivalent of 45+ million more humans experiencing the worst possible suffering until the end of the universe. In 10 years, it's half a billion. In 1000 years, it's half a trillion. This high cost of the Long Reflection is one more reason why it should never be forced upon humanity. 

I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.

This is so incredibly far from where I would place the equivalence, and I think where almost anyone would place it, that I'm baffled. You really mean this?

There is an ancient and (unfortunately) still very popular association between death and sleep / rest / peace / tranquility.

The association is so deeply engraved, it is routinely used by most people who have to speak about death. E.g. "rest in peace", "put to sleep", "he is in a better place now" etc. 

The association is harmful. 

The association suggests that death could be a valid solution to pain, which is deeply wrong. 

It's the same wrongness as suggesting to kill a child to make the child less sad. 

Technically, the child will not experience sadness anymore. But infanticide is not a sane person's solution to sadness. 

The sane solution is to find a way to make the child less sad (without killing them!). 

The sane solution to suffering is to reduce suffering. Without killing the sufferer.

For example, if a cancer patient is in great pain, the most ethical solution is to cure them of cancer, using efficient painkillers during the process. If there is no cure, then utilize cryonics to transport them into the future where such a cure becomes available. Killing the patient because they're in pain is a sub-optimal solution (to put it mildly).

I can't imagine any situation where permadeath is preferable to suffering. With enough tech and time, all kinds of suffering can be eliminated, and their effects can be reversed. But permadeath is, by definition, non-reversible and non-repairable. 

If one must choose between a permanent loss of human life and some temporary discomfort, it doesn't make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort. 

(I agree wholeheartedly with almost everything you've said here, and have strong upvoted, but I want to make space for the fact that some people don't make sense, and some people reflectively endorse not making sense, and so while I will argue against their preference for death over discomfort, I will also fight for their right to make the wrong choice for themselves, just as I fight for your and my right to make the correct choice for ourselves.  Unless there is freedom for people to make wrong choices, we can never move beyond a socially-endorsed "right" choice to something Actually Better.)

There is a popular idea that some very large amount of suffering is worse than death. I don't subscribe to it. If I'm tortured for X billions of years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death - because (by definition) it cannot be repaired.

This sweeps a large number of philosophical issues under the rug by assuming the conclusion (that death is the worst thing) and then using it to justify itself (death is the worst thing, and if you die, you're stuck dead, so that's the worst thing).

I predict that most (all?) ethical theories that assume some amount of suffering is worse than death have internal inconsistencies. 

My prediction is based on the following assumption:

  • permanent death is the only brain state that can't be reversed, given sufficient tech and time

The non-reversibility is the key. 

For example, if your goal is to maximize happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans will have periods of intense and prolonged suffering. Because you can increase happiness of the humans who suffered, but you can't increase happiness of the humans who are non-reversibly dead.

If your goal is to minimize suffering (without killing people), then you should avoid killing people. Killing people includes withholding life extension technologies (like mind uploading), even if radical life extension will cause some people to suffer for millions of years. You can decrease suffering of the humans who are suffering, but you can't do that for the humans who are non-reversibly dead.

The mere existence of the option of voluntary immortality necessitates some quite interesting changes in ethical theories. 

Personally, I simply don't want to die, regardless of the circumstances. The circumstances might include any arbitrary large amount of suffering. If a future-me ever begs for death, consider him in the need of some brain repair, not in the need of death. 

While we're sitting around waiting for revolutionary imaging technology or whatever, why not try to make progress on the question of how much and what type of information we can obscure about a neural network and still approximately infer meaningful details of that network from behavior.

For practice, start with ANNs and keep it simple. Take a smallish network which does something useful, record the outputs as it's doing its thing, then add just enough random noise to the parameters that the output deviates noticeably from the original. Now train the perturbed version to match the recorded data. What do we get here: did we recover the weights and biases almost exactly? Assuming yes, how far can this go before we might as well have trained the thing from scratch? Assuming success, does it work equally on different types and sizes of networks; if not, what kind of scaling laws does this process obey?

Assuming some level of success, move on to a harder problem: a sparse network, where this time we throw away everything but connectivity information and try to repeat the above. How about something biologically realistic, where we try to simulate the spiking neurons with groups of standard artificial ones... you get the drift.
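A toy version of this perturb-and-retrain experiment fits in a few lines. Everything below (the linear stand-in "network", the noise scale, the learning rate) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Ground truth" network: a single linear layer stands in for the
# smallish useful network in the comment.
W_true = rng.normal(size=(4, 3))

# Record the network's outputs on a batch of probe inputs.
X = rng.normal(size=(200, 4))
Y = X @ W_true

# Add just enough parameter noise that the output deviates noticeably.
W = W_true + 0.5 * rng.normal(size=W_true.shape)

# Retrain the perturbed network to match the recorded data.
for _ in range(2000):
    grad = X.T @ (X @ W - Y) / len(X)  # gradient step toward the recordings
    W -= 0.1 * grad

# Did we recover the weights almost exactly?
recovery_error = np.max(np.abs(W - W_true))
print(recovery_error)
```

In this linear case the recorded data pins down the weights uniquely, so recovery is essentially exact; the interesting question in the comment is how far that survives nonlinearity, sparsity, and scale.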

Imagine you have two points, A and B. You're at A, and you can see B in the distance. How long will it take you to get to B?

Well, you're a pretty smart fellow. You measure the distance, you calculate your rate of progress, maybe you're extra clever and throw in a factor of safety to account for any irregularities during the trip. And you figure that you'll get to point B in a year or so.

Then you start walking.

And you run into a wall. 

Turns out, there's a maze in between you and point B. Huh, you think. Well that's ok, I put a factor of safety into my calculations, so I should be fine. You pick a direction, and you keep walking. 

You run into more walls.

You start to panic. You figured this would only take you a year, but you keep running into new walls! At one point, you even realize that the path you've been on is a dead end — it physically can't take you from point A to point B, and all of the time you've spent on your current path has been wasted, forcing you to backtrack to the start.

Fundamentally, this is what I see happening, in various industries: brain scanning, self-driving cars, clean energy, interstellar travel, AI development. The list goes on.

Laymen see a point B in the distance, where we have self-driving cars running on green energy powered by AGIs. They see where we are now. They figure they can estimate how long it'll take to get to that point B, slap on a factor of safety, and make a prediction. 

But the real world of problem solving is akin to a maze. And there's no way to know the shape or complexity of that maze until you actually start along the path. You can think you know the theoretical complexity of the maze you'll encounter, but you can't. 

On the other hand, sometimes people end up walking right through what the established experts thought to be a wall. The rise of deep learning from a stagnant backwater in 2010 to a dominant paradigm today (crushing the old benchmarks in basically all of the most well-studied fields) is one such case.

In any particular case, it's best to expect progress to take much, much longer than the Inside View indicates. But at the same time, there's some part of the research world where a major rapid shift is about to happen.

It seems to me that this task has an unclear goal. Imagine I linked you a github repo and said "this is a 100% accurate and working simulation of the worm." How would you verify that? If we had a WBE of Ray Kurzweil, we could at least say "this emulated brain does/doesn't produce speech that resembles Kurzweil's speech." What can you say about the emulated worm? Does it wiggle in some recognizable way? Does it move towards the scent of food?

Quoting jefftk:

To see why this isn't enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don't do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can't learn. They also don't read weights off of any individual worm, which means we can't talk about any specific worm as being uploaded.

If this doesn't count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.

(just included the quotation in my post)

A study by Alcor trained C. elegans worms to react to the smell of a chemical. They then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same exact test would show that you had successfully uploaded a worm with that memory vs. one without that memory.

Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans

You could imagine an experimental design where you train worms to accomplish a particular task (e.g. learning which scent means food, if indeed they use scent; I don't know). You then upload both trained and untrained worms. If trained uploads perform better from the get-go than untrained ones, it's some evidence it's the same worm. To make it more granular, there are a lot of learning tasks from behavioural neuroscience you could adapt. 

You could also do single neuron studies: train the worm on some task, find a neuron that seems to correspond to a particular abstraction. Upload worm; check that the neuron still corresponds to the abstraction. 

Or ablation studies: you selectively impair certain neurons in the live trained worm, uploaded worm, and control worms, in a way that causes the same behaviour change only in the target individual.   

Or you can get causal evidence via optogenetics

Optogenetics was exactly the method proposed by David, I just updated the article and included a full quote.

I originally thought my post was already a mere summary of the previous LW posts by jefftk, excessive quotation could make it too unoriginal, interested readers could simply read more by following the links. But I just realized giving sufficient context is important when you're restarting a forgotten discussion.

I want to point out that there has been some very small amounts of progress in the last 10 years on the problem of moving from connectome to simulation rather than no progress. 

First, there has been interesting work at the JHU Applied Physics Lab which extends what Busbice was trying to do when he ran a simulation of C. elegans in a Lego Mindstorms robot (by the way, that work was very much overhyped by Busbice and in the media, so it's fitting that you didn't mention it). They use a basic integrate-and-fire model to simulate the neurons (which is probably not very accurate here, because C. elegans neurons don't actually seem to spike much and seem to rely on subthreshold activity more than in other organisms). To assign weights to the different synapses they used what appears to be a very crude metric: the weight was determined in proportion to the total number of synapses the two neurons on either side of the synapse share. Despite the crudeness of their approach, the simulated worm did manage to reverse its direction when bumping into walls. I believe this work was a project done by summer interns without much funding, which makes it more impressive in my mind than it might seem at first glance. 
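For readers unfamiliar with the term, a basic leaky integrate-and-fire neuron is only a few lines. This is a generic textbook sketch with made-up parameters, not the APL team's code:

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential integrates input,
    leaks toward rest, and emits a spike (then resets) at threshold."""
    v = v_reset
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leak + integrate
        if v >= v_thresh:             # threshold crossing
            spike_times.append(t)
            v = v_reset               # reset after the spike
    return spike_times

# A constant suprathreshold drive produces regular spiking.
spikes = simulate_lif([0.2] * 50)
print(spikes)
```

The commenter's caveat is worth repeating: C. elegans neurons appear to rely largely on graded, subthreshold signaling, so this spiking abstraction is a questionable fit for the worm specifically.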

Another line of work worth pointing out is this 2018 work from Janelia simulating "hexagonal cells" in the Drosophila visual system: "A Connectome Based Hexagonal Lattice Convolutional Network Model of the Drosophila Visual System". They claim: "Our work is the first demonstration, that knowledge of the connectome can enable in silico predictions of the functional properties of individual neurons in a circuit". I skimmed this paper and found it a bit underwhelming, since it appears the validation of the model was mostly in terms of summary statistics. 

Finally, for anyone who wants to learn what happened with the OpenWorm project, the CarbonCopies Foundation did a workshop in June 2021 with Steven Larson. A recording of the 4-hour event is online. I was present for a bit of it at the time it aired, but my recollection is dim. I believe part of the issue they ran into was figuring out how to simulate the physiology of the worm's body (i.e. all the non-neuronal cells). Some people in the OpenWorm open source community managed to build a 3D model (you can view it here). If I recall correctly, he mentioned there was some work to embed that model in a fluid dynamics simulation and "wire it" with a crude simulation of the nervous system, and they got it to wiggle in a way that looked plausible. 

Thanks for the info. Your comment is the reason why I'm on LessWrong.

As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress, and I would like to understand the reasons for it better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading a worm. There are a few sentences that hint both are unsolved, but they should be clearly separated.

Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. Note that it isn't accurate to say it must be alive. It would be sufficient to freeze an individual worm and then spend extensive time and effort reading that information. Nevertheless, I can imagine that might be very difficult to do. According to wormbook.org, C. elegans has on the order of 7,000 synapses. I am not sure we know how to read the weight and threshold of a synapse. This strikes me as a task requiring significant technological development that isn't in line with existing research programs. That is, most research is not attempting to develop the technology to read specific weights and thresholds. So it would require a significant well-funded effort focused specifically on it. I am not surprised this has not been achieved given reports of lack of funding. Furthermore, I am not surprised there is a lack of funding for this.

Simulating a worm should only require an accurate model of the behavior of the worm nervous system and a simulation environment. Given that all C. elegans have the same 302 neurons this seems like it should be feasible. Furthermore, the learning mechanism of individual neurons, operation of synapses, etc. should all be things researchers outside of the worm emulation efforts should be interested in studying. Were I wanting to advance the state of the art, I would focus on making an accurate simulation of a generic worm that was capable of learning. Then simulate it in an environment similar to its native environment and try to demonstrate that it eventually learned behavior matching real C. elegans including under conditions which C. elegans would learn. That is why I was very disappointed to learn that the "simulations are far from realistic because they are not capable of learning." It seems to me this is where the research effort should focus and I would like to hear more about why this is challenging and hasn't already been done.

I believe that worm uploading is not needed to make significant steps toward showing the feasibility of WBE. The kind of worm simulation I describe would be more than sufficient. At that point, reading the weights and thresholds of an individual worm becomes only an engineering problem that should be solvable given a sufficient investment or level of technological advancement.

There’s a scan of 1 mm^3 of a human brain, 1.4 petabytes with hundred(s?) of millions of synapses

https://ai.googleblog.com/2021/06/a-browsable-petascale-reconstruction-of.html

Do we at least have some idea of what kind of technology would be needed for reading out connection weights?

David believed one can develop optogenetic techniques to do this. Just added David's full quote to the post. 

With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.

A possible indirect way of doing that is by recording the worm's behavior:

  1. record the behavior under many conditions
  2. design an ANN that has the same architecture as the real worm
  3. train many instances of the ANN on one half of the recordings
  4. select an instance that shows the right behavior on the withheld half of the recordings
  5. if the recordings are long and diverse enough, and if the ANN is realistic enough, the selected instance will have weights functionally equivalent to those of the real worm

The same approach could be used to emulate the brain of a specific human, although the required compute in that case might be too large to be practical in the next few decades.
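A toy sketch of steps 1-5 above (my own illustration, not from the comment): the "worm" here is just a fixed random linear readout, the "ANN instances" are linear models trained by gradient descent on one half of synthetic recordings, and the instance that generalizes best to the withheld half is selected. All names and parameters are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 8          # stand-in for the 302-neuron architecture
N_RECORDINGS = 200
N_INSTANCES = 5
LR, EPOCHS = 0.1, 500

# "Real worm": a fixed, unknown readout we are trying to recover.
true_W = rng.normal(size=(N_NEURONS, 1))

# Step 1: record behavior under many conditions (here: stimulus -> response).
stimuli = rng.normal(size=(N_RECORDINGS, N_NEURONS))
behavior = stimuli @ true_W + 0.01 * rng.normal(size=(N_RECORDINGS, 1))

# Step 3: train on one half of the recordings ...
train_x, train_y = stimuli[:100], behavior[:100]
# Step 4: ... select on the withheld half.
test_x, test_y = stimuli[100:], behavior[100:]

def train_instance(seed):
    """Train one ANN instance (here: a linear model) by gradient descent."""
    w = np.random.default_rng(seed).normal(size=(N_NEURONS, 1))
    for _ in range(EPOCHS):
        grad = train_x.T @ (train_x @ w - train_y) / len(train_x)
        w -= LR * grad
    return w

# Steps 2-4: many instances with the same architecture, pick the best.
instances = [train_instance(s) for s in range(N_INSTANCES)]
errors = [float(np.mean((test_x @ w - test_y) ** 2)) for w in instances]
best = instances[int(np.argmin(errors))]

print(f"held-out MSE of selected instance: {min(errors):.4f}")
```

In this toy setting the selected instance's weights end up functionally equivalent to `true_W` (step 5); the open question in the comment is whether anything like this scales to a nonlinear, stateful nervous system.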

Curated. The topic of uploads and whole-brain emulation is a frequent one, and one whose feasibility is always assumed to be true. While this post doesn't argue otherwise, it's fascinating to hear where we are with the technology for this.

Post is about tractability / difficulty, not feasibility

What's the distinction you're making? A quick google suggests this as the definition for "feasibility":

the state or degree of being easily or conveniently done

This matches my understanding of the term. It also sounds a lot like tractability / difficulty.

Are you thinking of it as meaning something more like "theoretical possibility"?

That isn't the definition I'm familiar with -- the one I was using is in Webster's:

1: capable of being done or carried out

Indeed! By "feasibility is assumed to be true", I meant in other places and posts.

One complication glossed over in the discussion (both above and below) is that a single synapse, even at a single point in time, may not be well characterized as a simple "weight". Even without what we might call learning per se, synaptic efficacy seems, upon closer examination, to be a complicated function of recent history as well as the modulatory chemical environment. Characterizing and measuring this is very difficult. It may be more complicated in C. elegans than in a mammal, since its nervous system is such a small but highly optimized hunk of circuitry.
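To make the history-dependence concrete, here is a minimal sketch (my example, not from the comment) of a Tsodyks-Markram-style depressing synapse: each spike releases a fraction of a recovering resource pool, so the efficacy of a given spike depends on the recent spike history rather than on a single fixed weight. The parameter values are illustrative only.

```python
import math

TAU_REC = 0.5   # resource recovery time constant (s)
U = 0.4         # fraction of available resources released per spike
A = 1.0         # absolute synaptic strength

def efficacies(spike_times):
    """Per-spike synaptic efficacy under short-term depression."""
    x, last_t, out = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # resources recover exponentially between spikes
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / TAU_REC)
        out.append(A * U * x)   # efficacy of this spike
        x -= U * x              # release depletes the resource pool
        last_t = t
    return out

# The same nominal "weight" yields different efficacies depending on history:
print(efficacies([0.0, 0.05, 0.10]))   # rapid train: successively weaker
print(efficacies([0.0, 2.0, 4.0]))     # slow train: nearly constant
```

A scan that reads out only a static weight per synapse would miss exactly this kind of dynamics, which is the measurement problem the comment is pointing at.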

My impression of OpenWorm was that it was not focused on WBE. It tried to be a more general-purpose platform for biological studies. It attracted more attention than a pure WBE project would, by raising vague hopes of also being relevant to goals such as studying Alzheimer's.

My guess is that the breadth of their goals led them to become overwhelmed by complexity.

It's possible to donate to OpenWorm via their website to help accelerate development.

Maybe the problem is figuring out how to realistically simulate a SINGLE neuron, which could then be extended 302 or 100,000,000,000 times. Also, due to much shorter generation times, any given C. elegans has far more ancestral generations behind it than any human, so evolution may have had more time to make its neurons complex.
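Even the simplest standard single-neuron model hints at the gap this comment points to. Below is a leaky integrate-and-fire sketch (my own illustration, with made-up parameters): one differential equation per neuron, integrated with Euler steps. Real neurons, with dendritic trees, diverse ion channels, and neuromodulation, go far beyond this, which is what makes "realistically simulate a single neuron" hard.

```python
# Leaky integrate-and-fire: dV/dt = (V_rest - V + R*I) / tau,
# with a spike and reset whenever V crosses threshold.

DT = 1e-3        # time step (s)
TAU = 0.02       # membrane time constant (s)
V_REST = -65.0   # resting potential (mV)
V_TH = -50.0     # spike threshold (mV)
V_RESET = -65.0  # post-spike reset (mV)

def simulate(input_current, r_m=10.0):
    """Euler-integrate the membrane voltage; return spike times in seconds."""
    v, spikes = V_REST, []
    for i, current in enumerate(input_current):
        v += DT * (V_REST - v + r_m * current) / TAU
        if v >= V_TH:
            spikes.append(i * DT)
            v = V_RESET
    return spikes

# Constant suprathreshold drive for 100 ms yields a regular spike train.
spikes = simulate([2.0] * 100)
print(len(spikes), "spikes in 100 ms")
```

The entire model state is one number per neuron; whether 302 copies of something in this family can reproduce real C. elegans behavior is precisely what the emulation projects above have struggled to show.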