Jay

Comments

Jay

I certainly agree that brains are complicated.

I think part of the difference is that I'm considering the uploading process; it seems to me that you're skipping past it, which amounts to assuming it works perfectly.

Consider the upload of Bob the volunteer. The claim that the software is Bob rests on the assumption that Bob's connectome of roughly 100 trillion synapses is accurately captured by the upload process. It seems fairly obvious to me that this process will not capture every single synapse with no errors (at least in early versions). It will miss some fraction of the synapses and probably also invent some that meat-Bob doesn't have.

This raises the question of how good a copy is good enough.  If brains are chaotic, and I would expect them to be, even small error rates would have large consequences for the output of the simulation.  In short, I would expect that for semi-realistic upload accuracy (whatever that means in this context), simulated Bob wouldn't think or behave much like actual Bob.  
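
For a sense of what "small errors, large consequences" looks like, here is a toy numerical sketch. It uses the logistic map as a stand-in for some chaotic dynamics; it is not a brain model, and the one-part-in-a-million "measurement error" is just an illustrative number. Two runs that start almost identically track each other for a couple of dozen steps and are then completely unrelated.

```python
# Toy illustration only: the chaotic logistic map standing in for "some
# chaotic dynamics", not a model of a brain. Two runs start one part in a
# million apart and soon decorrelate entirely.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_orig, x_copy = 0.400000, 0.400001  # original vs. slightly mis-measured copy

for step in range(1, 51):
    x_orig, x_copy = logistic(x_orig), logistic(x_copy)
    if step % 10 == 0:
        print(f"step {step:2d}: original = {x_orig:.6f}, copy = {x_copy:.6f}, "
              f"difference = {abs(x_orig - x_copy):.6f}")
```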

Jay

Surely both (1) and (2) are true, each to a certain extent.

"Are the random thermal fluctuations pushing me around somehow better than the equally random measurement errors pushing my soft-copy around?"

It depends.  We know from experience how meat brains change over time.  We have no idea how software brains change over time; it surely depends on the details of the technology used.  The changes might be comparable, but they might be bizarre.  The longer you run the program, the more extreme the changes are likely to be.

I can't rule it out either.  Nor can I rule it in.  It's conceivable, but there are enough issues that I'm highly skeptical.  

Jay

Let's try again.  Chaotic systems usually don't do exactly what you want them to, and they almost never do the right thing 1000 times in a row.  If you model a system using ordinary modeling techniques, chaos theory can tell you whether the system is going to be finicky and unreliable (in a specific way).  This saves you the trouble of actually building a system that won't work reliably.  Basically, it marks off certain areas of solution space as not viable.

Also, there's Lavarand.  It turns out that lava lamps are chaotic.

Jay

That wasn't well phrased.  Oops.

Jay

Any physical system has a finite amount of mass and energy that limits its possible behaviors. If you take the log of (one variable of) the system, its full range of behaviors will span a narrower range of numbers, but that's all that changes. For example, the wind is usually between 0.001 m/s (quite still) and 100 m/s (unprecedented hurricane). If you take the base-10 log, it's usually between -3 and 2. A change of 2 on that scale can mean a change from 0.001 to 0.1 m/s (quite still to barely noticeable breeze) or a change from 1 m/s to 100 m/s (modest breeze to everything's gone). For lots of common phenomena, log scales are too imprecise to be useful.
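
To spell out the arithmetic (same numbers as above, nothing new assumed):

```python
import math

# The wind example in numbers: the same log-scale change of 2 corresponds
# to very different absolute changes in wind speed.
for low, high in [(0.001, 0.1), (1.0, 100.0)]:
    log_change = math.log10(high) - math.log10(low)
    print(f"{low} m/s -> {high} m/s: log10 change = {log_change:.1f}, "
          f"absolute change = {high - low:.3f} m/s")
```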

Chaotic systems can't be predicted in detail, but physics and common sense still apply.  Chaotic weather is just ordinary weather.

Jay

That's the point.  Nobody thought such tiny variations would matter.  The fact that they can matter, a lot, was the discovery that led to chaos theory.

Jay

Consider: a typical human brain has ~100 trillion synapses. Any attempt to map it would have some error rate. Is it still "you" if the error rate is 0.1%? 1%? 10%? Do positive vs. negative errors make a difference (i.e. missing connections vs. spurious connections)?
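
For scale, here is what those error rates mean in absolute numbers (straightforward arithmetic on the ~100 trillion figure, nothing else assumed):

```python
# Even "small" error rates are enormous absolute numbers when the
# connectome has ~100 trillion synapses.
synapses = 100e12
for error_rate in (0.001, 0.01, 0.10):  # 0.1%, 1%, 10%
    errors = error_rate * synapses
    print(f"{error_rate:.1%} error rate -> about {errors:.0e} missing or spurious synapses")
```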

Is this a way to get new and exciting psychiatric disorders?

I don't know the answers, or even how we'd try to figure out the answers, but I don't want to spend eternity as this guy.  

Jay

"an exponential decrease in measurement error will only buy you a linear increase in how long that simulation is good for."

True, and in the real world attempts to measure with extreme precision eventually hit limits imposed by quantum mechanics. Quantum systems are unpredictable in a way that has nothing to do with chaos theory, but that unpredictability cashes out as tiny amounts of randomness injected into basically every physical system. In a chaotic system those tiny perturbations would eventually have macroscopic effects, even in the absence of any other sources of error.
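
The standard back-of-the-envelope version of that point, assuming a single positive Lyapunov exponent λ (a simplification, and the numbers below are purely illustrative): an initial error δ0 grows roughly like δ0·e^(λt), so a prediction stays useful until about t ≈ (1/λ)·ln(Δ/δ0), where Δ is the largest error you can tolerate. Each extra order of magnitude of precision adds only a fixed increment to that horizon.

```python
import math

# Sketch under the simplifying assumption of exponential error growth:
# an initial error delta0 grows like delta0 * exp(lam * t), so the time
# until it reaches a tolerance DELTA is t = ln(DELTA / delta0) / lam.
lam = 1.0     # illustrative Lyapunov exponent, in 1/time units
DELTA = 1.0   # error size at which the prediction becomes useless

for delta0 in (1e-2, 1e-4, 1e-8, 1e-16):
    horizon = math.log(DELTA / delta0) / lam
    print(f"initial error {delta0:.0e} -> useful for about {horizon:.1f} time units")
```

Squaring the precision (e.g. going from 1e-8 to 1e-16) only doubles the useful horizon.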

Jay

The seminal result for chaos theory came from weather modeling. Edward Lorenz wanted to re-examine part of an atmospheric simulation, so he restarted it partway through by typing the state back in from a printout. The printout rounded the numbers to three decimal places, while the computer carried six. The tiny rounding errors compounded into larger errors, and over the course of an in-model month or two the predictions completely diverged. A perturbation that small is roughly comparable to the flap of a butterfly's wing, which led to the cliché about butterflies and hurricanes.
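
A minimal sketch of that experiment, using the simplified three-variable system Lorenz later published rather than the original weather model (crude forward-Euler integration; the initial condition and rounding are modeled on the anecdote, but the specific numbers here are illustrative only):

```python
# Two runs of the Lorenz '63 system that differ only in a small rounding
# of the initial x, integrated with a crude forward-Euler step.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

full = (0.506127, 1.0, 1.0)   # "full precision" initial condition
rounded = (0.506, 1.0, 1.0)   # the same state, rounded to three decimals

for step in range(1, 8001):
    full = lorenz_step(*full)
    rounded = lorenz_step(*rounded)
    if step % 2000 == 0:
        t = step * 0.005
        print(f"t = {t:4.1f}: x_full = {full[0]:8.3f}, x_rounded = {rounded[0]:8.3f}")
```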

If you're trying to model a system, and the results of your model are extremely sensitive to minuscule data errors (i.e. the system is chaotic), and there is no practical way to obtain extremely accurate data, then chaos theory limits the usefulness of the model. The model may still have some value; using standard models and available data it's possible to predict the weather rather accurately for a few days and semi-accurately for a few days more, but beyond that it can't tell you what you need to know.

This is one reason I've always been skeptical of the "uploaded brain" idea.  My intuition is that inevitable minor errors in the model of the brain would cause the model to diverge from the source in a fairly short time.
