Reading this article on how much processing power it takes to emulate the SNES accurately made me think that we will likely run into similar issues when emulating humans.

I'd imagine the brain makes use of weird timing and chemical interactions; as an adaptable system, it might adapt to exploit them if they turn out to be helpful.

This suggested a few issues to me, with no easy answers that I could see:

  • Is it better to emulate 1 human faithfully, or 10 humans with occasional glitches (for example, one who could no longer appreciate music in the same way)?
  • How glitch-free would you want the emulation to be before you gave up your body?
  • How glitch-free would you want the emulation to be before letting it use heavy machinery?
  • How glitch-free would you want the emulation to be before you had it working on FAI?
Also, please ignore the 3 GHz vs. 25 MHz comparison; it perpetuates the myth that computational power is about clock speed rather than operations per second and memory bandwidth.


How glitch-free would you want the emulation to be before you gave up your body?

If we're going to keep the analogy going, I'd jump on it as soon as it doesn't hard-BSOD. Let's take a look at the advantages even poorly emulated ROMs have over the originals (and presumably uploaded humans have over their meatspace analogues):

-The ability to make copies and store them remotely, in case my house explodes

-No degradation of the stored data, so it'll still be around in 1000 years as long as we make copies

-The ability to easily inject code or edit hex values on the fly, so we can change anything we want

-The ability to view any layer independently, so we can isolate/examine/troubleshoot any process we want

-Save/Reload states, in case we want to explore theoretical situations or past states

-Time acceleration in case we want to skip boring parts or spend a year of calculation in one realside minute

-Disassembly and reassembly, in case we want to tear something down, fully mod it, then build something new

-Convenience, in case we want to use it somewhere we don't or can't take the original, like Alpha Centauri

-Online Multiplayer, because the original made you travel there while the emulator lets you do it over the interwebs

-Efficiency, since we can have a dozen copies running at triple speed for the same cost as running one of the originals

Last but not least - One day the original will be gone, and it's better to have it emulated than not at all.

I think I would wait a little while longer, myself. I might get glitches in my gullibility section and give away all my money, or make other very bad long-term decisions, such as modifying my code on the fly in ways that are silly-hard to recover from.

I'd imagine the brain makes use of weird timing and chemical interactions; as an adaptable system, it might adapt to exploit them if they turn out to be helpful.

Human designers have every reason to work very hard to make sure they understand their own designs and that those designs are free from weird issues. Chips aren't designed [by humans] to have strange EM interactions, but sometimes they do anyway, and that occasionally gets exploited (not often, though). Evolution, on the other hand, has no such motive, so I imagine that weird edge cases are vastly more important in biological brains. It seems quite possible that whereas only a few NES games are rendered unplayable on lower-fidelity emulators, human brains just won't work at all until we represent most of what happens at a lower level.

However, I hope that's not the case, since if it is, we have that much longer to wait for whole-brain emulation. I also suspect we will still find heuristics that perform adequately at speeds many orders of magnitude faster than molecular simulation (and that quantum effects are negligible).

Also, please ignore the 3 GHz vs. 25 MHz comparison; it perpetuates the myth that computational power is about clock speed rather than operations per second and memory bandwidth.

If we were comparing clock speeds, I would be more interested in the 3 GHz to 1.79 MHz comparison (the actual NES processor, rather than the first emulation).

Is it better to emulate 1 human faithfully, or 10 humans with occasional glitches (for example, one who could no longer appreciate music in the same way)?

Depends on what you want the emulations for.

Am I the only one who thinks the main problem isn't determining what level of accuracy is important, but the fact that in order to assess accuracy you have to run the emulation?


1: Is it better to emulate 1 human faithfully, or 10 humans with occasional glitches (for example, one who could no longer appreciate music in the same way)?

This really depends on whether you are emulating the humans for personal reasons or for industrial reasons. If I want more time to spend with my soon-to-be-dead wife, I will probably want a faithful reproduction. If I want mass-produced servants, it's probably simpler to go for the 10 with occasional glitches, as long as those glitches meet certain standards of industrial safety. If the glitches are only minor aesthetic differences, it wouldn't really bother me. It would be entirely different if the glitch was "violent uprising."

2: How glitch-free would you want the emulation to be before you gave up your body?

This would substantially depend on how many years I likely had left to live, and also seems heavily dependent on the proportion and types of the glitches. For instance, "accidentally goes corrupt and violently insane, attacking loved ones in an irreparable manner" can be considered a glitch. So can "a memory leak requires a shutdown and clean boot every 2 hours; in a few years we think we can upgrade this to 4 hours." But my tolerance for the glitches of incurable violent psychosis and curable narcolepsy is substantially different.

It's hard to say what kinds of glitches we might theoretically run into. I would imagine narcolepsy-like glitches would be common, because computers have those types of problems right now, where they have been running too long and need to be rebooted. Brains themselves have a very similar process (needing to sleep periodically). But that doesn't necessarily mean this will be the biggest problem with brain emulations.

I suppose it comes down to a utility calculation weighing my disvalue of various types of glitches, how long brain emulations can run glitch-free, what kinds of glitches they have, and my projected lifespan. I do want to note that the first people who try emulation seem likely to include a substantial portion of old and sick people. If you are going to die next week of terminal cancer, you have much less to lose from a failed emulation. If this guess is correct, people may be permitted to try brain uploading gradually, in situations that are incrementally further from death. If brain uploading happens in the near future, it is likely that at some point my health and age will be comparable to those of other people trying it, in which case I may consider it then, based on the glitch history.

I could go into more detail, but I don't know if it would be relevant to anyone other than me, since most people do not have identical utility functions.

3: How glitch-free would you want the emulation to be before letting it use heavy machinery?

There are already tests we give to humans to qualify them for heavy machinery. As an example: http://www.dot.state.tx.us/hrd/tdp/skills/skills.htm

If human brain emulation software in a robotic body passes all of these tests, then it at least appears equally competent to a human at first glance, and we can allow it to pilot vehicles on a provisional basis while we collect accident statistics, such as serious accidents per unit of operator time.

If human-operated asphalt spreaders have 1 serious accident per 10,000 operator-hours, and brain-emulation-operated asphalt spreaders have 1 serious accident every 100 operator-hours, then clearly we need to design more stringent tests for the brain-emulation-controlled robots.

On the other hand, if human-operated asphalt spreaders have 1 serious accident per 10,000 operator-hours, and brain-emulation-operated asphalt spreaders have 1 serious accident every 1,000,000 operator-hours, then trying to further increase brain emulation robot safety has likely hit the point of diminishing returns.
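As a minimal sketch of that decision rule (the accident figures are the made-up ones above, and the human-parity threshold is my assumption, not an established standard):

```python
# Sketch: compare serious-accident rates per operator-hour.
# Numbers are the illustrative figures from the comment, not real data.
HUMAN_RATE = 1 / 10_000  # serious accidents per operator-hour

def provisional_verdict(accidents: int, operator_hours: int) -> str:
    """Crude rule: match or beat the human rate, or go back to testing."""
    rate = accidents / operator_hours
    if rate > HUMAN_RATE:
        return "needs more stringent testing"
    return "at or past the point of diminishing returns"

print(provisional_verdict(1, 100))        # -> needs more stringent testing
print(provisional_verdict(1, 1_000_000))  # -> at or past the point of diminishing returns
```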

4: How glitch-free would you want the emulation to be before you had it working on FAI?

This is going to be similar to #3, but the idea is "less glitchy than humans." http://en.wikipedia.org/wiki/Mental_disorder Based on Wikipedia, mental disorders can be surprisingly common: a third of people in most countries report meeting criteria for the major categories at some point in their lives. If you assume a lifetime is roughly 33,000 days (dying in your early 90s), then three people's lifetimes come to about 100,000 days, and somewhere in there we should expect one onset of a mental glitch. This seems to mean that a human working on FAI has roughly a 1 in 100,000 chance of coming down with some form of glitch on any given day. (This is a rough estimate; I am aware that there are confounding factors and am not going to be perfectly accurate.)
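A back-of-the-envelope check of that arithmetic (a sketch; the 91-year lifespan and the one-third lifetime prevalence are the rough assumptions above, not real epidemiology):

```python
# Rough numbers only, taken from the assumptions in the comment above.
lifetime_years = 91                        # "dying in your early 90s"
days_per_lifetime = lifetime_years * 365   # ~33,000 days
lifetime_prevalence = 1 / 3                # a third of people, per the Wikipedia figure

# One expected onset per (1 / prevalence) lifetimes' worth of days:
person_days_per_onset = days_per_lifetime / lifetime_prevalence
print(round(person_days_per_onset))        # ~99,645, i.e. roughly 1 in 100,000 per day
```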

If human brain emulations are more resistant to glitches than this, and if the glitches do not seem qualitatively worse, then it doesn't seem like it should hurt our chances statistically to hand development over to them. Obviously, I would want to run this by other mathematicians for the finer details, because the cost of failure might be very high. As I mentioned, there are a number of confounding factors, such as: can we diagnose glitches before they strike? How frequently? I'm well aware I'm not going to list every possible confounding factor, which is why I'd want expert advice.

I agree that these answers aren't easy, since they require a lot of detail about circumstances and are context-dependent. Not only that, they may require a substantial amount of in-field knowledge. While I don't see any immediate flaws in my answers, I would not be surprised if I turned out to be completely wrong on multiple of them. But I hope that trying to break the questions down like this helps as a starting point.

Given that this universe is pretty big, and that biotechnology will probably let us avoid terminal issues with biological bodies for a while, I would be inclined to keep a modified human body at least until emulations are pretty damn glitch free. There doesn't seem to be any reason to rush this.

Disclaimer: my formal background here consists only of an undergraduate intro to neuroscience course taken to fulfill a distribution requirement.

I'm wondering if this is actually a serious problem. Assuming we are trying to perform a very low-level emulation (say, electro-chemical interactions in and amongst neurons, or lower), I'd guess that one of two things would happen.

0) The emulation isn't good enough, meaning every interaction between neurons has a small but significant error in it. The errors would compound very, very quickly, and the emulated mind's thought processes would be easily distinguishable from a human's within minutes, if not seconds. In the long term, if the emulation is even stable at all, its behavior would fall deep into the trough of the mental uncanny valley, or else be completely inhuman. (I don't know if anyone has talked about a mental uncanny valley before, but it seems like one would probably exist.)

1) The emulation is good enough, so the local emulation errors are suppressed by negative feedback instead of accumulating. In this case, the emulation would be effectively indistinguishable from the original brain-implemented mind, from both the outside and the inside.

My reason for rejecting borderline cases as unlikely is basically that I think an "uncanny valley" effect would occur whenever local errors accumulate into larger and larger discrepancies, and that for a sufficiently high-fidelity emulation, errors would be suppressed by negative feedback. (I know this isn't a very concrete argument, but my intuition strongly suggests that the brain already relies on negative feedback to keep thought processes relatively stable.) The true borderline cases would be ones in which the errors accumulate so slowly that it would take a long time before a behavioral discrepancy is noticeable; but once it is noticeable, that would be the end of it, in that no one could take seriously the idea that the emulation is the same person (at least in the sense of personal identity we're currently used to). But even this might not be possible, if the negative feedback effect is strong.
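Here's a toy numerical illustration of the two regimes (a sketch, not a brain model; the state variable, noise scale, and damping strength are all arbitrary assumptions):

```python
import random

def deviation_after(steps: int, damping: float, noise: float = 0.01) -> float:
    """Deviation of a toy state variable from its 'true' trajectory.

    Each step injects a small emulation error; negative feedback pulls
    the deviation back toward zero with strength `damping`.
    """
    x = 0.0
    for _ in range(steps):
        x = (1.0 - damping) * x + random.gauss(0.0, noise)
    return abs(x)

random.seed(0)
# damping = 0: errors random-walk, and the deviation grows without bound
# (typically on the order of noise * sqrt(steps)).
print(deviation_after(1_000_000, damping=0.0))
# damping > 0: the deviation stays bounded, roughly noise / sqrt(2 * damping).
print(deviation_after(1_000_000, damping=0.1))
```

In the first regime even tiny per-step errors eventually dominate; in the second they never add up to much, which is the intuition behind rejecting the borderline cases.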

I would love to hear from someone who knows better.

I wonder if a lossy emulation might feel like/act like a human with a slightly altered brain chemistry. We have lots of examples of what it's like to have your neurons operating abnormally, due to emotion, tiredness, alcohol, other chemicals, etc etc. I'm not sure "uncanny valley" is the best term to capture that.

But I think those are examples of neurons operating normally, not abnormally. Even in the case of mind-influencing drugs, mostly the drugs just affect the brain on its own terms by altering various neurotransmitter levels. On the other hand, a low-level emulation glitch could distort the very rules by which information is processed in the brain.

Note that I am distinguishing "design shortcomings" from "bugs" here.

I don't quite see how you'd get "the overall rules" wrong. I figure standard software engineering is all that's required to make sure the low-level pieces are put together properly. Possibly this is just a failure of imagination on my part, but I can't think of an example of a defect more pervasive than "we got the neuron/axon model wrong." And if you're emulating at the neuron level or below, I'd figure that an emulation shortcoming would look exactly like altered neural behavior.

  • Is it better to emulate 1 human faithfully, or 10 humans with occasional glitches (for example, one who could no longer appreciate music in the same way)?

Of those, the 10. But a single entity using glitchy neurons, only 10 times as many of them, would probably be better than both. (For the sake of argument, assume the same general quality of design, so this isn't because it'd imply superintelligence, although it probably would.)

  • How glitch-free would you want the emulation to be before you gave up your body?

"Duuude, my eyes have .jpg artefacts, why is that hat insisting it's my wife?"

  • How glitch-free would you want the emulation to be before letting it use heavy machinery?

Depends on what other safety precautions are present, and what kinds of glitches they are.

  • How glitch-free would you want the emulation to be before you had it working on FAI?

Depends on the time available, but if it's feasible: simulated down to the molecular level, and inhabiting a fairly faithfully simulated body in a simulated natural environment.

Edit: changed my mind after reading the comments, but my new opinions are boring so I'll leave these here instead.

Just do whatever results in the most happiness. I don't care if the resulting mind is nothing like mine. Minds don't have to be like mine to be valuable.

It seems plausible that similar issues could occur when emulating a human brain. But if they do, wouldn't it be probable that they could be resolved with a simple increase in processing power? (Or possibly by buffering of sensory input.)


If you read the article, you'll see the answer is simply "no". The whole point of the article is that throwing more resources at the problem doesn't, by itself, increase ease of emulation.

I spent three years working on a product whose only function was to take binaries compiled for a GNU/Linux distro on one CPU and make them runnable on the same distro on another CPU. Having seen how difficult this is even when you're talking about the same OS, to which you have the source code, and two human-designed von Neumann architecture chips, I know that "uploading" will take far, far, far more effort than most people on this site currently believe.