Your high-capacity Einstein, left to those parameters, would come to the conclusion that the picture never changes. Thinking so quickly, the pattern of the unchanging picture is infinitely stronger than any of the smaller patterns within it. Indeed, processing the same information so many times, it will encounter information miscopies nigh-infinitely more often than it encounters a change in the data itself - because, after all, a quantum computer will be operating on information storage mechanisms sensitive enough to be altered by a microwave oven a mile away.

You have a severe bootstrapping problem which you're ignoring - thought requires a subject. Consciousness requires something to be conscious of. You can't design a consciousness and then supply things for it to be conscious of after the fact. You have to start with the webcam and build up to the mind - otherwise the bits flowing in are meaningless. No amount of pattern recognition will give meaning to patterns.

I think you read something which left something out; Bell's Theorem disproved "neo-realism," which is the idea that there was a classical-physics explanation, i.e., one with real particles with real properties. It's the model EPR was trying to assert over the Copenhagen interpretation - and that, indeed, was its only purpose, and I find it odd that you bring that thought experiment up out of the context of its intent.

Well, actually, Everett's Many-Worlds re-permits classical physics within its confines, and hence real particles, as do other superdimensional interpretations - within his model, you're still permitted all the trappings of classical physics. (They break an assumption of normality in Bell's Theorem - namely, that there is only one universe, or, in the case of superdimensionality, that the universe doesn't extend in other directions we can only detect abstractly.)

Eliezer -

"Information" in this case is the properties; my apologies, I am loose with language. The properties were transformed - and, in the case of a splitting beam, with a 1-1 function. The properties were "lost" when they were split - they weren't the same as they were before. But they weren't irrecoverably lost. (At least close enough for testing; you may have medium degradation, i/e, property attenuation, depending upon the quality of the crystals and the intermediate material provided it isn't in a vacuum.)

To irrecoverably lose properties, you need a non-1-1 function - which is exactly what we had when we sent them through the filter rather than the splitter.
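
To make the 1-1 distinction concrete, here's a minimal sketch in Python - the function names and numbers are illustrative stand-ins, not a physics model. An invertible (1-1) transform leaves the original property recoverable; a many-to-one transform, like the filter, destroys it:

```python
# Illustrative sketch: invertible (1-1) vs. many-to-one transforms.
# "Polarization" here is just a number; this is not a physical model.

def split(polarization_deg):
    # 1-1 transform: every input maps to a distinct output,
    # so the original value is recoverable.
    return polarization_deg + 45.0

def unsplit(shifted_deg):
    return shifted_deg - 45.0

def filter_measure(polarization_deg):
    # Non-1-1 transform: a whole range of inputs maps to each of
    # two outcomes, so the input cannot be recovered from the output.
    return "pass" if (polarization_deg % 180.0) < 90.0 else "absorb"

original = 30.0
assert unsplit(split(original)) == original        # recoverable
print(filter_measure(30.0), filter_measure(60.0))  # both "pass" - input lost
```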

The fundamental descriptive mathematics are known - the interpretations are still debated, as has been the case for nearly a century now, and I don't see that changing anytime in the immediate future. And if you recombine all four sets of split beams, there isn't anything interesting going on there, either; half still goes through, same as before, and predictably so. That is, if you direct one polarization in one direction, and another in another, and then recombine them - and there's the snag. You can't combine them without re-emitting both of them; you're performing an additional operation which is generating/modifying information. You aren't reproducing lost information; you're generating new information which is equivalent to the lost information.

For the fundamental physics to be known, they must be falsifiable, and have passed that test. This is not the case. The mathematics are passing with flying colors, of course - nobody is entirely sure what the mathematics mean, however. (Everybody thinks they do, though.)

There is, of course, a fairly simple alternative solution, dealing with "real" particles; the photons coming out of the filters are not the photons that went in. Photons don't travel through the sheet; the energy is absorbed, and the properties of individual components of energy determine what happens next. The properties of some chunks of energy cause similarly-propertied energy to be re-emitted on the other side. It's not that the photons have mysteriously lost the information about their "spin" in the middle sheet - it's that we're dealing with new photons with new property sets, which are being re-emitted with the emission properties of the second sheet, rather than the first.

With this interpretation, the phenomenon makes perfect sense, and the old textbooks are right - after a fashion - that the second measurement destroyed the information that the first measurement generated.

"Well, it's physics, and physics is math, and you've got to come to terms with thinking in pure mathematical objects."

  • Physics is modeled as math - physics and math are not the same thing. Math is a functionally complete descriptive language - but it is not a definitive one. Newton's laws were mathematically perfect; it was the physical things they incorrectly represented which ultimately broke their backs. Newton's laws also had perfect predictive power over everything in their realm - the macroscopic - for more than a century, before we developed instruments finely tuned enough to detect their inaccuracies.

If you're trying to convince anybody here, you're going to fail, because you start by assuming the very mathematical models which they challenge - asserting repeatedly that particles have no definition, and therefore particles have no definition, is an advancement towards nowhere. If you're trying to enlighten people, you're doing so from the perspective of someone biased in favour of a particular mathematical model.

I can't prove my position, but I generally favour a variant of multiverse theory in which the uncertainty principle is the result of consciousness. That is, the human mind as a conscious entity is a functional quantum computer, and the uncertainty principle is a result of that, rather than a fundamental property of the universe. (The uncertainty is not about what state the particle is in, but what spectrum of probability space the mind inhabits, and thus what spectrum of particle states the mind observes.)

You'll notice that this is a functionally equivalent interpretation. Which is the problem with quantum mechanics - the mathematics describe something, but interpretation is, for now, still up in the air.

You'll also notice that this interpretation suggests that a 'slice' of probability space produces a universe of zombies. But a slice of probability space as an independent structure is no less ridiculous in this model than a slice of 2D space taken out of our "normal" three dimensions when treated independently.

Will - field theory is pretty good, yup, although...

We're basically at the same point in physics we were a little more than a century ago. Back then, there were two major camps - the atomicists, and the energists. The energists' position was essentially that everything was made of energy, the atomicists' position was that there were these tiny particles we hadn't seen yet, but they were in fact real.

Now, at the time, both camps had equally valid positions, although the energists had the stronger support - but there was a very interesting distinction between the two. If the energists were right, we were in a position where we knew all the basic rules of the universe, and it was just a matter of sorting out a few weird details. If the atomicists were right, there was a LOT of stuff we didn't know yet.

The atomicists, as history will back me, were right, and physics went right on trucking. Well, actually, that isn't quite correct - the atomicists were mostly right; the particles they thought existed weren't quite what we found. We did indeed find the particles, but not the fundamental indivisible particles much of their camp had been expecting. A few years later, the roles were reversed; the atomicist position had some smaller particles, and everything, except for a few weird details, was sorted out. (One can say something of the amazing predictive power of quantum physics - well, it wasn't any more remarkable than the amazing predictive power of Newtonian physics.) And the energists owned the next age, although not quite the way they had ever expected.

We've reached that same point again today. The atomicists for the most part no longer believe in an atomic (indivisible) particle, but the fundamentals are otherwise the same; if the energists are right, then we basically know all the basic principles of physics, and it is just a matter of sorting out a few really weird details. Meanwhile, you have the atomicists, now called neo-realists, inspired by the late giants Einstein and Feynman, finding some curious approaches to handling those few weird details - although pushed into a much harder corner this time by Bell's theorem. Third time is the charm, I suppose?

Anybody who is proposing we know all the fundamentals of a field should arouse your instant suspicions - this is a hubris from which men have fallen every time they've mounted it. It's a very seductive idea to those who chase order. It is also a mindkiller.

Will -

The reasoning is better understood in terms of wave mechanics; if the particle states diverged in the least, the cancellation wouldn't be complete, and the experimental results would differ.

That is, they must be identical, not merely indistinguishable, for wave cancellation to operate. (sin^-1(sin(x) + 0.0000000000001) isn't x.)
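
A quick numerical illustration of this, sketched in Python (the offset and sample count are arbitrary choices, not experimental parameters): identical waves cancel exactly, while even a tiny phase divergence leaves a nonzero residue.

```python
import math

# Sketch: complete cancellation requires identical waveforms.
# Summing sin(x) with -sin(x + offset) leaves a residue unless offset == 0.

def max_residue(phase_offset, samples=1000):
    """Largest leftover amplitude across one period."""
    return max(
        abs(math.sin(x) - math.sin(x + phase_offset))
        for x in (i * 2 * math.pi / samples for i in range(samples))
    )

print(max_residue(0.0))    # 0.0: identical waves cancel exactly
print(max_residue(1e-13))  # ~1e-13: the slightest divergence shows up
```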

However, again, this depends upon a particular mathematical definition of the particles - in particular, a model which has already defined that particles have no discrete existence. Eliezer is by far my favorite author here, but he has a consistent fault in confusing mathematical descriptions with mathematical definitions. That is, he seems to believe a model which accurately describes and even predicts behavior must be the "correct" model.

Equivalence is not correctness. To put it in programming terms, two functions which return the same result are equivalent - you can describe one function with the other. But you cannot define the behavior of one by the other, because they may operate by completely different processes to arrive at the same result.

You also can't make inferences, by looking at the algorithm of one, as to what data is acceptable input to both, if it's not data you have the capability of putting in. In terms of programming, this is like saying a blackbox text algorithm can't operate on Unicode input because the equivalent function you've written can't, and your operating system only has ASCII installed. In terms of the argument, this is saying the universe can't have particles because the mathematical model you utilize will throw up non-numbers if you put them in (not that this is any special behavior in a field of physics where the canceling out of infinities is a regular exercise), and you don't have a universe where you know particles exist to compare ours to.
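
As a toy illustration of both points - equivalence without shared definition, and the input-domain trap - here's a Python sketch; the function names are made up for the example:

```python
# Two "equivalent" character counters: they agree on every ASCII input,
# but arrive at the answer by different processes, and one rejects
# input the other handles fine.

def count_chars_direct(text):
    # Process A: walks the string one character at a time.
    n = 0
    for _ in text:
        n += 1
    return n

def count_chars_via_ascii(text):
    # Process B: counts bytes after an ASCII encoding.
    return len(text.encode("ascii"))

assert count_chars_direct("hello") == count_chars_via_ascii("hello")
print(count_chars_direct("héllo"))  # 5 - handles Unicode fine
# count_chars_via_ascii("héllo")    # raises UnicodeEncodeError
```

Agreement on the inputs you can test tells you nothing about the inputs that never made it through your encoding.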

"But the "electrons" we see today, would still be computed as amplitude flows between simulated configuration"

- Eliezer, the argument being posted against you is that the MODEL could be wrong. Remember, it's a mathematical model - it describes, it doesn't define.

Remember, there are quite a few models of quantum physics that describe the behavior of quantum "particles" - and that presumes on the particles' very existence. It is quite possible to invent a model which describes physics perfectly but which omits the existence of electrons, photons, and other quantum particles, treating them as nothing more than artifacts of interactions between fields. (The math gets ugly in a way that is reminiscent of the models of universal motion which pushed a geocentric model, but the models can still function descriptively.)

There are currently a dozen mathematical models which accurately describe quantum physics - predictive power is nonexistent for a couple of them (generally the more obviously taoist nonsense), but curiously correlative among the others.

Superdimensional models, such as those derived from Hilbert space, can be defined both to permit and to deny individual particles; it depends upon the assumptions you put in. You're assuming special cases for "normal" dimensions; i.e., that the additional n-to-infinity dimensions don't behave exactly the same way our usual four (three and a half) dimensions operate.

If you remove special behavior from the extra dimensions - permit particles to move along them, rather than have characteristics defined on them (phase space) - then you can derive an interference model which exactly parallels the one a configuration space will generate, without defeating the individuality of particles in the process, similar in nature to multiverse theory. (Although you end up with some other curiosities as a result - i.e., wave behavior must be defined as rotation against an arbitrary pair of additional dimensions.)

In other words, your proof makes the assumption that the mathematical model IS the universe, rather than merely describing it. And remember that any finite set of data can be described by an infinite set of formulas; that is, we can never be certain that a mathematical model is "the one." This is a mathematical - not a philosophical, as you imply - limitation.

(Or, in other words - the universe doesn't have to be a lie for the sun to turn into chocolate cake - you'll still have a finite data set, you can still write formulas which will describe the transformation behavior.)
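
That mathematical limitation is easy to demonstrate. A minimal Python sketch (the data points and the constant c are arbitrary choices for the example): given any formula that fits a finite data set exactly, you can construct infinitely many others that fit it just as exactly.

```python
# Sketch: a finite data set never pins down a unique formula.
# If f fits the points exactly, then f(x) + c*(x-x0)(x-x1)... fits
# them exactly too, for every choice of the constant c.

points = [(0.0, 1.0), (1.0, 3.0), (2.0, 7.0)]

def f(x):
    # One exact fit for the points above: x^2 + x + 1.
    return x * x + x + 1

def vanishing(x):
    # A polynomial that is zero at every observed x.
    prod = 1.0
    for xi, _ in points:
        prod *= x - xi
    return prod

def g(x, c=1000.0):
    # A different formula with an identical fit to the data.
    return f(x) + c * vanishing(x)

for xi, yi in points:
    assert f(xi) == yi and g(xi) == yi  # agree on every data point...

print(f(3.0), g(3.0))  # 13.0 vs. 6013.0 - ...and diverge everywhere else
```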

"Bayes-language can represent statements with very small probabilities, but then, of course, they will be assigned very small probabilities. You cannot assign a probability of .1% to the Sun rising without fudging the evidence (or fudging the priors, as Eli pointed out)."

  • Yes, you can. You can have insufficient evidence. (Your probability "assignment" will have a very low probability of being correct, but the assignment itself could still easily be 0.1%.)

"So much for begging the question. Please do a calculation, using the theorems of Bayes (or theorems derived from Bayesian theorems), which gives an incorrect number given correct numbers as input."

  • How about this as a counterchallenge - produce a correct number, any correct number at all, as it relates to the actual universe.

Incorrect numbers are generated constantly using probabilistic methods - they're eliminated or refined as more evidence comes along.
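
A contrived Python sketch of that refinement (the unlucky opening streak and the uniform prior are assumptions made purely for the example): a fair coin that happens to open with five tails gets assigned a badly wrong probability of heads, which later evidence corrects.

```python
# Sketch: an estimate assigned from insufficient evidence, then refined.
# The coin is actually fair, but the first five flips come up tails.

observations = [0] * 5 + [1, 0] * 500  # unlucky start, then ~fair data

heads = 0
for n, flip in enumerate(observations, start=1):
    heads += flip
    if n in (5, 25, 1005):
        # Posterior mean of P(heads) under a uniform Beta(1,1) prior.
        print(n, round((heads + 1) / (n + 2), 3))

# 5 0.143    <- wrong, but it's what the evidence so far supports
# 25 0.407
# 1005 0.498 <- refined toward the truth as evidence accumulates
```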

"Using mathematics to describe the universe goes all the way back to Ptolemy. It isn't going away anytime soon."

  • If you're going to address a single statement, you should really pay attention to context.

"Ah, here we have found one who does not comprehend the beauty of math. Alas, it is beyond my ability to impart such wisdom in a blog comment. Just drive down to your local university campus and start taking math classes- you'll get it eventually."

  • Beauty is truth, truth beauty? If you're going to argue reality, you'll have to do better than the aesthetic value of mathematics.

"Neither GR nor QED requires a coordinate system of any sort. This is, admittedly, hard to wrap your head around, especially without going into the math. To name a simple example, it is mathematically impossible to cover the surface of a sphere (or, by topological extension, any closed surface) with a single coordinate system without creating a singularity. Needless to say, this does not mean that there must be some point on Earth where numbers go to infinity."

  • Everything requires a coordinate system. For every value that HAS a value, there is an axis upon which its values are calculated. It might be a very simple boolean axis, or it might be a more complex one, representing a logarithmic function. But if a value has a value, that value will be stored in some sort of mathematical concept space.

"We can predict that they won't violate the earlier ones."

  • No, we can't.

"You simply flip the sign on the gravitational constant G. No geometric transformations required."

  • Which is utterly irrelevant to the point I was making. Yes, there are simpler transformations, and less lossy ones in many cases. But the point was that any model can represent the universe, not that all are equally messy.