I am reading through the sequence on quantum physics and have had some questions which I am sure have been thought about by far more qualified people. If you have any useful comments or links about these ideas, please share.

Most of the strongest resistance to ideas about rationalism that I encounter comes not from people with religious beliefs per se, but from mathematicians or philosophers who want to assert arguments about the limits of knowledge, the fidelity of sensory perception as a means of gaining knowledge, and various (what I consider to be) pathological examples (such as the zombie argument). Among other things, people tend to reduce the argument to the existence of proper names à la Wittgenstein and then go on to assert that the meaning of mathematics or mathematical proofs constitutes something fundamentally not part of the physical world.

As I read through the quantum physics sequence, I am struck by the thought that all thoughts are themselves fundamentally just amplitude configurations, and by extension, that all claims of knowledge about things are also statements about amplitude configurations. (Keep in mind that I am not a physicist; I am an applied mathematician and statistician, so the mathematical framework of Hilbert spaces and amplitude configurations makes vastly more sense to me than billiard balls or waves, yet connecting it to reality is still very hard for me.) For example, my view is that the color red does not exist in and of itself; rather, the experience of the color red is a statement about common configurations of particle amplitudes. When I say "that sign is red", one could unpack this into a detailed statement about statistical properties of configurations of particles in my brain.

The same reasoning seems to apply just as well to something like group theory. States of knowledge about the Sylow theorems, just as an example, would be properties of particle amplitude configurations in a brain. The Sylow theorems are not separately existing entities which are in and of themselves "true" in any sense.

Perhaps I am way off base in thinking this way. Can any philosophers of mind point me in the right direction to read more about this?

 

9 comments

You really shouldn't bring quantum physics into this, even if, strictly speaking, it would be incorrect to use any other physics.

Just call it "states of knowledge as states of one's brain" or something.

[anonymous] 13y

But if you call it "states of one's brain", then anyone would (rightfully) just ask what "states of one's brain" means. Calling it "states of one's brain" seems to me to be the same as fake causality, no? The people I am discussing this with are not the type to happily accept some abstracted, black-boxy level of quantum physics where we can treat a mind as we treat the wing of an airplane.

My original question was more to the point of: is the description of brain states in terms of quantum mechanics a sufficient rebuttal to positive ontological assertions about cognitive objects? If someone argues that "mathematical objects actually exist", do we merely have to begrudgingly dismiss this as "unsupported by any evidence" or can we go further and actually make a case that cognitive objects are just fuzzy clusters in some space of arrangements-of-particle-configurations-in-brains?

[anonymous] 13y

Quoting from Eliezer's post on the second law of thermodynamics:

And don't tell me that knowledge is "subjective". Knowledge has to be represented in a brain, and that makes it as physical as anything else. For M to physically represent an accurate picture of the state of Y, M's physical state must correlate with the state of Y. You can take thermodynamic advantage of that - it's called a Szilard engine.

Or as E.T. Jaynes put it, "The old adage 'knowledge is power' is a very cogent truth, both in human relations and in thermodynamics."

And conversely, one subsystem cannot increase in mutual information with another subsystem, without (a) interacting with it and (b) doing thermodynamic work. Otherwise you could build a Maxwell's Demon and violate the Second Law of Thermodynamics - which in turn would violate Liouville's Theorem - which is prohibited in the standard model of physics.

Which is to say: To form accurate beliefs about something, you really do have to observe it. It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort.

(It is sometimes said that it is erasing bits in order to prepare for the next observation that takes the thermodynamic work - but that distinction is just a matter of words and perspective; the math is unambiguous.)

(Discovering logical "truths" is a complication which I will not, for now, consider - at least in part because I am still thinking through the exact formalism myself. In thermodynamics, knowledge of logical truths does not count as negentropy; as would be expected, since a reversible computer can compute logical truths at arbitrarily low cost. All this that I have said is true of the logically omniscient: any lesser mind will necessarily be less efficient.)
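(As a concrete aside of my own, not something from the quoted post: the thermodynamic cost alluded to above is the Landauer bound of kT ln 2 joules per erased bit. A minimal sketch of the arithmetic, with an illustrative room-temperature choice for T:)

```python
from math import log

# Landauer's principle: erasing one bit of information dissipates at least
# k_B * T * ln(2) joules. (Standard physics, not a claim from the quoted post.)
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # illustrative room temperature, K

landauer_bound = k_B * T * log(2)
print(f"Minimum work to erase one bit at {T:.0f} K: {landauer_bound:.3e} J")
# ~2.9e-21 J per bit: tiny but strictly nonzero, which is why a reversible
# computer (which never erases) can compute logical truths at arbitrarily low cost.
```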

I think it is exactly this last "complication" with logical truths that I am asking about. Are there later LW posts with more fully formulated thoughts or comments about this?

Added: I found this post and I would be very eager to hear thoughts on how this connects to claims about mathematical truths. I think many arguments about ontology conflate mathematical entities with the ontologically basic mental things of this post. This quote seems to support what I am saying:

A "supernatural" explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

I don't see the advantage of treating states of knowledge as arbitrary complex numbers (quantum amplitudes) rather than real numbers on the closed interval [0,1] (probabilities).

[anonymous] 13y

I think that for there to be an advantage of one type or another, you have to have some kind of goal or cost functional in mind. If you're talking about survival, belief propagation, etc., then it certainly is often advantageous to compress large, unwieldy descriptors of states of knowledge down into probabilities.
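To make that concrete, here is a minimal sketch (toy numbers of my own choosing, nothing more) of the standard compression from complex amplitudes to real probabilities, the Born rule:

```python
import numpy as np

# A toy two-outcome system: complex amplitudes for "sign is red" vs "sign is not red".
# (Illustrative values only; any normalized complex vector works.)
amplitudes = np.array([0.6 + 0.0j, 0.8j])

# Born rule: probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(amplitudes) ** 2
assert np.isclose(probabilities.sum(), 1.0)  # normalization check

print(probabilities)  # [0.36 0.64] -- ordinary real numbers in [0, 1]
```

The phases of the amplitudes carry information that the probabilities discard, which is what makes this a lossy but often sufficient encoding.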

We categorize knowledge into different types. What comes to my mind is the difference between a conclusion drawn by investigating axioms of logic and a conclusion drawn from empirical evidence. When facing the claim that "empirical reasoning cannot play any role in conclusions determined by investigating logical axioms", I am curious about the rebuttal: "but conclusions determined by investigating logical axioms are themselves in principle experimentally detectable, and thus, in the extreme limit of sensitivity of measuring devices, one could draw conclusions of logic experimentally."

There is no such thing as a conclusion drawn from logic which differs from that conclusion's instantiation on some brain-hardware somewhere. I guess what I am saying is that either we embrace logically proper names (something philosophy seems to have abandoned), in the sense that we agree that a cognitive object is an ontologically existing entity and that our local instantiation of that object is merely an encoded representation of it... or else, what we call "a conclusion from the axioms of logic" is really just the label we attach to a cluster over a subspace of the all-of-physics amplitude distribution.

Just because it's complicated doesn't mean it has that particular complicated feature.

You can build a non-yourself machine that does logic, but knowing that the machine's function corresponds to logical reasoning requires that you can do logic using the same machine that is the referent of "you".

I guess I don't really see where you're going with this. In what circumstances might you need to know the answer to your question? Can you reduce it to an empirical or decision-theoretic question?

[anonymous] 13y

I am not trying to assess whether or not it is "good" or "practically useful" to pose questions about knowledge in terms of quantum mechanical descriptions of brains. I'm trying to find resources for questions about philosophy of mind and discovery of logical knowledge.

For example, someone might say that for propositions A and B, (if A -> B then ~B -> ~A) is a discovered piece of knowledge about the way all truth functions work. Thus, the (if ... then ...) I just mentioned is "true", and its truth exists in a wholly separate magisterium from propositions that can be subjected to empirical inquiry, arise as the arg max of some posterior probability distribution, and be thought of as "true" (or "exceedingly probable given current evidence") in that sense.
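(As an aside, the rule itself is mechanically checkable; here is a trivial sketch of my own, in Python, verifying contraposition by brute-force enumeration of truth assignments:)

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Contraposition: (A -> B) agrees with (~B -> ~A) under every truth assignment.
for A, B in product([True, False], repeat=2):
    assert implies(A, B) == implies(not B, not A)

print("(A -> B) <-> (~B -> ~A) holds for all four truth assignments")
```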

My point is that, fundamentally, the knowledge that "(if A -> B then ~B -> ~A) is a discovered piece of knowledge about the way all truth functions work" is itself subjectable to empirical inquiry, arises as the arg max of some posterior probability distribution, and can be thought of as "true" (or "exceedingly probable given current evidence") in exactly that sense (i.e., the empirical evidence would be some examination of amplitudes in a quantum configuration subspace dealing with human minds).

I'm specifically trying to get at aspects of the theory of knowledge which whole branches of philosophy claim are outside the magisterium in which Bayesian decision theory is applicable, a claim that supposedly entitles them to hold certain beliefs on the basis that these beliefs are "true" in a magisterium that can't be touched by Bayes. My counterargument is that such knowledge about the alleged other magisteria must itself be (at least in principle) experimentally detectable in brains, at the level of QM.

Whether we can do such detection or have useful, specific models for it is a whole different ball of wax that doesn't concern me in this specific question.

In general, if you find yourself stuck or confused on a question of philosophy, try the following things in order:

  1. Try to reduce it to a decision problem.

  2. Walk away and come back to it later with a fresh perspective.

  3. Ignore the question; it probably didn't matter anyway.

I'd agree to the extent that I think I've understood your (difficult to winnow out) point. You can have cognitive objects that point to cognitive objects, and cognitive objects are facts about the world. There is perhaps some trickiness if you refer to outside facts like the validity of something like logic (one of the possible uses of the word "true") but don't account for it; that might lead people to think that "this theorem is true" doesn't refer to the observable universe, when in fact it just doesn't refer only to cognitive objects.