Should We Shred Whole-Brain Emulation?

Would uploads avoid self-improvement? Before addressing this question, we should first consider the plausibility and importance of the upload concept itself.

Given the power and relatively young age of some Silicon Valley executives who seem to see uploading as part of their future, we might want to check whether the pursuit of uploading would have any side effects.

If we believe that uploads are malleable and improvable, then the technology to create uploads would also permit the creation of more powerful minds, with all the consequences.

Uploads, and those creating a WBE-like entity as progeny, would most likely prefer to add improvements to a greater or lesser extent rather than insist on complete fidelity.

Some people may argue that WBEs should lead as natural an existence as possible, one very much like the lives of flesh-and-blood people.

On the assumption that these people value their uploads or progeny, however, some aspects of life experience would be edited out. For example, what would motivate one of these creators to pass their WBEs through an unpleasant end-of-life experience, like vascular dementia?

The emulated lives of uploads and progeny would, to a greater or lesser extent, be edited. We should reason further about what that editing implies.

Suppose that emulations will be created to study how the brains of flesh-and-blood people work in general, or to study and forecast how a particular, living person will react to stimuli.

This is a reasonable application of high-fidelity whole-brain emulation. To use such emulations to forecast behavior, though, the emulation would have to be "run" on a multi-dimensional distribution of possible future sets of environmental stimuli. The variation in these distributions grows combinatorially, so even tens of thousands of runs would only provide some information about what the person is likely to do next.
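The combinatorial growth can be made concrete with some illustrative arithmetic. In this simplified, assumed model, an emulation's environment offers k possible stimuli at each of T decision points, so the number of distinct stimulus sequences is k^T:

```python
# Illustrative arithmetic for the scenario-space explosion. The model
# (k stimuli per decision point, T decision points) is an assumption
# for illustration, not a claim about real emulation workloads.
def scenario_count(k: int, T: int) -> int:
    """Number of distinct stimulus sequences of length T."""
    return k ** T

def coverage(runs: int, k: int, T: int) -> float:
    """Fraction of the scenario space a fixed number of runs samples."""
    return runs / scenario_count(k, T)

# Even a modest branching factor swamps tens of thousands of runs:
print(scenario_count(10, 10))    # 10,000,000,000 distinct sequences
print(coverage(50_000, 10, 10))  # 5e-06 -- a vanishing fraction
```

This is why even very large numbers of runs yield only statistical information about likely behavior rather than a complete forecast.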

Such WBEs would be only one tool in a toolbox to predict human behavior. However, they would be useful for that purpose. Your WBE could be fed many possible future lives, allowing you to make better choices about your future in the physical world, if using WBEs in that manner were considered ethical.

People on this site generally seem to agree, though, that using a high-fidelity WBE as a guinea pig to test out life scenarios is ethically problematic. If these life scenarios were biased in favor of delivering positive outcomes to the WBEs, maybe we would not have as much of a problem with that. Perhaps the interaction of two WBEs could be observed over many scenarios, allowing people to better choose companions.

WBEs could end up being used for this purpose, ethical or not. Again, though, I suspect that more data about people's reactions could be gained if modified WBEs were used in some of the tests.

It's worth exploring, but high-performance neuromorphic or algorithmic minds would still be the better choice for actually controlling physical conditions.

There is a tremendous amount of good material in here, thanks...

The thing that I would like to see added is a perspective on how changeable, or malleable, WBEs would be, once created.

One of the main reasons I am challenging WBEs is that I think brain emulations would be very easy to alter and improve along the axes of defined performance metrics.

If they will be highly malleable, those who wish to use them to generate productivity would instead use improved (neuromorphic) versions. Additionally, a WBE which had some control of its own make-up would consider improving itself.

I believe that we can show that WBEs would be very readily improvable and changeable. Once they are improved and changed enough, they are no longer WBEs and are instead neuromorphic entities.

For the next few years and possibly decades, the development of brain emulation technology will occur alongside the development of neuromorphic technology.

Some teams will be primarily focused on achieving extremely accurate renditions of sections of actual brain tissue, as well as increasingly accurate neural maps, sometimes based on high-throughput scans of actual brain tissue. These teams will wish to base their work on models of individual neurons and glia that are very much like actual cells.

However, Henry Markram, director of Europe's Human Brain Project, has asserted that we need not model anything like the full complexity of gene expression and protein formation in human neurons in order to accurately represent firing patterns. Those pursuing the path toward WBEs will be willing to compromise, to varying degrees, on the level of detail at which individual cells are modeled. Perhaps some will discover ways to measure whether these simplifications generate a statistically significant difference in how the simulated brain reacts to stimuli.
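As one concrete example of such a simplification, a leaky integrate-and-fire (LIF) neuron models nothing but membrane voltage, leak, and a firing threshold, with no gene expression or protein dynamics at all, yet it reproduces basic firing patterns. The parameters below are illustrative, not fitted to any real neuron:

```python
# Leaky integrate-and-fire neuron: a standard simplified model of the
# kind Markram's remark points toward. All parameter values here are
# illustrative defaults, not measurements.
def lif_spike_times(input_current, dt=0.001, tau=0.02, v_rest=-65.0,
                    v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Return the time steps at which the model neuron fires."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        # Voltage leaks toward rest and integrates the input current.
        v += dt / tau * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset  # fire, then reset
    return spikes

# A constant 2.0 nA drive produces a regular spike train.
print(lif_spike_times([2.0] * 200))
```

The open empirical question in the paragraph above is exactly how much behavior survives this kind of abstraction.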

Other teams, more concerned with using models of groups of neurons as a calculation tool, may be less concerned with whether those models accurately represent individual neurons. "Neural net" technology was not intended to accurately model the brain, and the individual elements in a neural net are nothing like cells.

Nonetheless, these teams will learn everything they can from those who are trying to simulate actual brain function, and some of the same people will work in both sub-disciplines at different points in their careers.

If people from the simulation and human connectome camps develop an understanding of some new aspect of brain function, those who are just trying to find new AI methods to build into software will be able to take advantage of the results. However, they may be able to shave off a lot of compute cycles by applying abstractions of the newly-gained insight about structure and function to idealized neurons and glia that do not attempt to approximate the function of living tissue at all.

We cannot entirely predict whether extremely detailed models of individual cells are necessary for neuromorphic AI. However, I am interested in whatever evidence is available.

Engineers attempting to improve either a WBE or a piece of neuromorphic tissue would have considerable advantages that are unavailable to medical teams working with actual brains and nerves.

Medical teams who work to repair spinal injuries are able to stimulate nerve fibers and trace the nerves into the brain. However, a vastly larger set of experimental tools would be available to WBE or neuromorphic engineers.

These engineers would be able to write programs that cause any specific neuron or group of neurons to fire at any time. They would be able to select the firing pattern for each neuron, and the relative timing across a group.

They would be able to configure neurotransmitter output at will, and, importantly, they would also be able to set the number of neurotransmitter receptors on cell surfaces.

Altering the concentration of cell surface receptors would, for example, allow the neuromorphic tissue engineers to greatly influence what stimuli are pleasurable. They would be able to set patterns for these cell surface receptors which never occur in the natural course of gene expression in the brain.
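To make the capability gap concrete, here is a sketch of the kind of instrumentation interface such engineers might script against. Every class and method name below is assumed for illustration; no such API exists today:

```python
# Hypothetical instrumentation interface for an emulated tissue section.
# All names (EmulatedNeuron, fire_at, set_receptors) are assumptions
# made for this sketch, not a real WBE API.
from dataclasses import dataclass, field

@dataclass
class EmulatedNeuron:
    neuron_id: int
    # Receptor counts per neurotransmitter, settable at will --
    # including patterns that never occur under natural gene expression.
    receptor_counts: dict = field(default_factory=dict)
    scheduled_spikes: list = field(default_factory=list)

    def fire_at(self, t_ms: float):
        """Schedule this neuron to fire at an exact simulated time."""
        self.scheduled_spikes.append(t_ms)

    def set_receptors(self, transmitter: str, count: int):
        """Directly set surface receptor density -- impossible in vivo."""
        self.receptor_counts[transmitter] = count

# Drive a group with a precise relative timing pattern, then rewire
# one neuron's receptor profile.
group = [EmulatedNeuron(i) for i in range(5)]
for offset, n in enumerate(group):
    n.fire_at(100.0 + 0.5 * offset)   # staggered by 0.5 ms
group[0].set_receptors("dopamine", 40_000)
```

Nothing remotely like this per-cell, per-millisecond control is available to a medical team working with living tissue.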

We have already done a fair amount of mapping of the neural basis of pain and pleasure. Within the next ten years, we will also have results from NIH's Human Connectome Project. Neuromorphic tissue engineers will begin their work with vast resources of data on the generation of pain and pleasure, and on the purpose and use of these sensations.

If they had either a WBE, or a differently-configured piece of neural tissue available to them, seemingly they would have a strong ability to re-wire what causes pain and pleasure in order to suit their needs.

Such techniques alone could allow a WBE to cross the line from an accurate representation of a human mind to something fundamentally different.

Apparently, an advantage of creating a thread with a controversial and heterodox first entry is that for a time you get to write all of it yourself! :) That's OK, because I have a fair amount of brain dumping to do on this subject.

Brain grafts are very difficult to achieve in actual brain tissue today.

One of the key reasons, however, will begin to become a non-factor: tissue rejection. We can now grow neurons in the lab that have the same genetic code as yours or mine (I have actually done this). One method is to turn induced pluripotent stem cells (iPSCs), which may have been created from your own skin cells, into nerve cells.

I grew a small plate of such cells. I did not try to distinguish which among them were neurons and which were glia. I am not sure how far along we are toward growing a complete neural column, or a section of brain.

Assuming the neurons were grown, however, installation would still be very difficult. I am not willing to say impossible, but we have some challenges.

The balance and configuration of the glia would be difficult to control. Blood flow through both large vessels and capillaries would have to be restored to the added section.

Another important issue, perhaps, is that neurons in the brain have long axons. The "white matter" of the brain contains portions of these axons, which run from brain region to brain region. It is a tangled net. Replacement neurons might have to be literally "woven into" this net. The axons already there are sometimes bundled, which would help, but they are stuck together; it is not like stripping a large wire and seeing many filaments pop out.

Physically "weaving" new neurons into the brain is a lot more challenging than weaving them into a WBE or a piece of neuromorphic tissue.

At any given point in the early history of neuromorphic engineering, there will be a greater or lesser understanding of the relationship between structure and function. However, using WBEs and neuromorphic tissue in experiments to try to elicit function from structure will be very inexpensive. Tens of thousands or even billions of experiments could be run with a single set of macros.
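The "single set of macros" idea amounts to a parameter sweep over structural variants. In the sketch below, `run_experiment` and its parameters are placeholders I am assuming for illustration, not a real emulation API; the point is only that one loop covers an arbitrarily large grid of variants:

```python
# Parameter sweep over structural variants of a simulated tissue
# section. run_experiment, synapse_scale, and receptor_density are
# assumed placeholders; the toy scoring function stands in for an
# actual emulation run.
import itertools

def run_experiment(synapse_scale: float, receptor_density: int) -> float:
    """Placeholder: score a performance metric for one variant."""
    # Toy stand-in with an assumed optimum at (1.2, 300).
    return -abs(synapse_scale - 1.2) - abs(receptor_density - 300) / 1000

synapse_scales = [0.8 + 0.1 * i for i in range(10)]
receptor_densities = range(100, 600, 50)

# One loop ("macro") covers every combination; enlarging the grid is
# how experiment counts reach the thousands or billions.
results = {
    (s, d): run_experiment(s, d)
    for s, d in itertools.product(synapse_scales, receptor_densities)
}
best = max(results, key=results.get)
print(best)  # the structural variant with the highest metric
```

Each cell of such a grid is an experiment that would be prohibitively expensive, or impossible, on living tissue.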

For this reason, I forecast, with considerable but not complete certainty, that the existence of either WBEs or functional neuromorphic tissue would quickly lead to a much greater understanding of the relationship between structure and function.

Can we be absolutely certain that this understanding would very quickly permit designs of purpose-built brain configurations that improve along the dimensions of particular performance metrics? We should try to build the case for and against that hypothesis. My instinct is to believe that these experiments would facilitate mind design, but people could present other evidence.

The future of AI will come out very differently if sections of neural tissue cannot be made to function usefully, separately from a WBE.

Similarly, the future of AI will come out very differently if removing parts of the brain from an emulation causes the brain to become non-functional.

We know from studies of stroke and other forms of brain damage that brain function does not immediately degrade if a small section of brain is injured. Therefore, removing sections from a WBE might reduce the functionality of the WBE, but would not eliminate it entirely.

There is no precedent for adding sections of brain matter to an existing brain. If such operations were performed on a WBE, however, the changes would be very different than they would be to actual brain tissue.

If we are able to conclude that alteration or removal of part of the WBE would be desirable for the purposes of the emulation's controllers, then we should conclude that WBE technology in a sense flows into neuromorphic technology, and is not separate from it in a fundamental way.
