All of mlionson's Comments + Replies

David Deutsch: A new way to explain explanation

I don't think he is saying, "good explanations are hard to vary while preserving their predictions".

As described above, the statement "Everyone just acts in his own interest" very easily preserves its predictive power in a multitude of situations. Indeed, the problem with it is that it preserves its predictive power in too many situations! The explanation is consistent with just about anything that happens, so one cannot design a test that could show the statement to be false. So it is too easy to vary and hence a bad explanation.

Against Modal Logics

Evolution does not increase a species' implicit knowledge of the niche by replicating genes. Mutation (evolution's conjectures) creates potential new knowledge of the niche. Selection decreases the "false" implicit conjectures of mutations and previous genetic models of the niche.

So induction does not increase the implicit knowledge of gene sequences.
Trial (mutation) and error (falsification) of implicit theories does. This is the process that the critical rationalist says also happens in humans, only more efficiently.

Many Worlds, One Best Guess

I think I see where we are disagreeing.

Consider a quantum computer. If the laws of physics say that only our lack of knowledge limits the amount of complexity in a superposition, and the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe. My quote above from David Deutsch makes that point...

4 [anonymous] 12y
Certainly the default extrapolation is that quantum computers can efficiently perform some types of computation that would on a classical computer take more cycles than the number of atoms in the universe. But that's not quite what you asserted.

Suppose I have a classical random access machine that runs a given algorithm in time O(N), where the best equivalent algorithm for a classical 1D Turing machine takes O(N^2). Would you say that I really performed N^2 arithmetic ops, and theorize about where the extra calculation happened? Or would you say that the Turing machine isn't a good model of the computational complexity class of classical physics?

I do subscribe to Everett, so I don't object to your conclusion. But I don't think exponential parallelism is a good description of quantum computation, even in the cases where you do get an exponential speedup.

Edit: I said that badly. I think I meant that the parallelism is not inferred from the class of problems you can solve, except insofar as the latter is evidence about the implementation method.
Many Worlds, One Best Guess

“To really make progress here, what we need is a thought-experiment in which a macroscopic superposition is made to yield information about more than one branch, as the counterfactualist rhetoric claims. Unfortunately, your needle-in-the-arm experiment is not there yet, because we haven't gone into the exact details of how it's supposed to work. You can't just say, 'If we did a quantum experiment where we could produce data about glucose levels in someone's bloodstream, without the needle having gone into their arm, why, that would prove that the multivers...

2 Mitchell_Porter 12y
Somehow I never examined these experiments and arguments. But what I've learned so far is to reject counterfactualism.

If you have an Everett camera in your Schrodinger cat-box which sometimes takes a picture of a dead cat, even when the cat later walks out of the box alive, then as a single-world theorist I should say the cat was dead when the photo was taken, and later came back to life. That may be a thermodynamic miracle, but that's why I need to know exactly how your Everett camera is supposed to work. It may turn out that it works so rarely that this is the reasonable explanation. Or it may be that you are controlling the microscopic conditions in the box so tightly – in order to preserve quantum coherence – that you are just directly putting the cat's atoms back into the living arrangement yourself.

Such an experiment allegedly involves a superposition of histories, one of the form |alive> -> |alive> -> |alive> and the other |alive> -> |dead> -> |alive>. And then the camera is supposed to have registered the existence of the |dead> component of the superposition during the intermediate state. But how did that second history even happen? Either it happened by itself, in which case there was the thermodynamic miracle (dead cat spontaneously became live cat). Or, it was caused to happen, in which case you somehow made it happen!

Either way, my counter-challenge would be: what's the evidence that the cat was also alive at the time it was photographed in a dead state?
6 Mitchell_Porter 12y
Slow down there. In order to "simulate" the behavior of an entity X using counterfactual measurement, you need X (both actually and counterfactually) to be isolated from the rest of the universe (interactions must be weak enough to not decohere the superposition). To say that we must be able to simulate the rest of the universe because we could instead be measuring Y, Z, etc. is confusing the matter.

The basic claim of the counterfactualists is: we can find out about a possible state of X – call it X' – by inducing a temporary superposition in X – schematically, |X> goes to |X> + |X'>, and then back to |X> – while it is coupled to some other quantum system. We find out something about X' by examining the final state of that other system, but X' itself never actually existed, just X. So the core claim is that by having quantum control over an entity, you can find out about how it would behave, without actually making it behave that way. This applies to any entity or combination of entities, though it will be much easier for some than others.

Now first I want to point out that being a single-world theorist does not immediately make you a counterfactualist about these measurements. All a single-world theorist has to do is to explain quantum mechanics without talking about a multiverse. Suppose someone were to say of the above process that what actually existed was X, then X', and then X again, and that X' while it existed interacted a little with the auxiliary quantum system. Suddenly the counterfactualist magic is gone, and we know about X' simply because situation X' really did exist for a while, and it left a trace of its existence in something else.

So here is the real issue: the discourse of quantum mechanics is full of "superpositions". Not just E-V bomb-testing and a superposition which goes from one component to two and back to one – but superpositions in great multitudes. Quantum computers in exponentially large superpositions; atoms and molecules in pers...
Update Yourself Incrementally

"What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something)."

I do appreciate your honesty in making this assumption. Usually inductivists are less candid (but secretly believe exactly as you do; we call them crypto-inductivists!).

But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past. There also is no law ...

8 Jack 12y
Of course not. Though I'm pretty sure induction occurs in humans without them willing it. This is just Hume's view: certain perceptions become habitual to the point where we are surprised if we do not experience them. We have no choice but to do induction. But none of this matters. Induction is just what we're doing when we do science. If we can't trust it we can't trust science.

I'm sorry, my "a priori" theory? In what sense could I possibly know about Thanksgiving a priori? It certainly isn't an analytic truth and it isn't anything like math or something Kant would have considered a priori. Where exactly are these theories coming from if not from induction? And how come inductivists aren't allowed to have theories? I have lots of theories – probably close to the same theories you do. The only difference between our positions is that I'm explaining how those theories got here in the first place.

I'm afraid I don't know what to make of your calendar and number examples. Just because I think science is about induction doesn't mean I don't think that social conventions can be learned. Someone explaining math, that after 1999 comes 2000, counts as pretty good Bayesian evidence that that is how the rest of the world counts. Of course most children aren't great Bayesians and just accept what they are told as true. But the fact that people aren't actually naturally perfect scientists isn't relevant. Rationality is just the process of doing induction right. You have to explain what you mean if you mean something else by it :-)

(And obviously induction does not mean everything stays the same but that there are enough regularities to say general things about the world and make predictions. This is crucial. If there were no regularities the notion of a "theory" wouldn't even make sense. There would be nothing for the theory to describe. Theories explain large classes of phenomena over many times. They can't do that absent regularities.)
Many Worlds, One Best Guess

And if no law of physics precludes something from being done, then only our lack of knowledge prevents it from being done.

So if there are no laws of physics that preclude developing bomb testing and sugar measuring devices, our arguments against this have nothing to do with the laws of physics, but instead have to do with other parameters, like lack of knowledge or cost. So if the laws of physics do not preclude things from happening, we might as well assume that they can happen, in order to learn from the physics of these possible situations.

So for th...

In the Elitzur-Vaidman bomb test, information about whether the bomb has exploded does not feed into the experiment at any point. When you shoot photons through the interferometer, you are not directly testing whether the bomb would explode or has exploded elsewhere in the multiverse; you are testing whether the sensitive photon detector in the bomb trigger works.
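The arithmetic behind this is short enough to sketch. The following is a toy model of the standard Mach-Zehnder setup usually used to present the bomb test; the phase conventions and port labels are my own choices, not anything specified in this thread:

```python
import numpy as np

# 50/50 beamsplitter: transmit amplitude 1/sqrt(2), reflect with a factor of i
BS = np.array([[1, 1j],
               [1j, 1]], dtype=complex) / np.sqrt(2)

psi_in = np.array([1, 0], dtype=complex)   # photon enters input port 0

# Empty interferometer: both arms recombine at the second beamsplitter,
# and destructive interference keeps the "dark" detector silent.
psi_out = BS @ BS @ psi_in
p_dark_empty = abs(psi_out[0]) ** 2        # exactly 0

# Live bomb in arm 1: its trigger measures which arm the photon took.
psi_arms = BS @ psi_in                     # (|arm0> + i|arm1>) / sqrt(2)
p_explode = abs(psi_arms[1]) ** 2          # photon took arm 1, bomb goes off: 1/2
# Conditional on no explosion, the photon was definitely in arm 0,
# so no superposition reaches the second beamsplitter:
psi_survived = np.array([1, 0], dtype=complex)
psi_out_bomb = BS @ psi_survived
p_dark_bomb = (1 - p_explode) * abs(psi_out_bomb[0]) ** 2   # dark detector fires: 1/4
```

A click at the dark detector (probability 1/4 per trial here) is only possible when a working detector sits in arm 1, which is the sense in which the test probes the trigger rather than the explosion.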

As wnoise said, to directly gather information from a possible history, the history has to end in a physical configuration identical to the one it is being compared with. The two histories repres...

Many Worlds, One Best Guess

The Elitzur-Vaidman bomb testing device is an example of a similar phenomenon. What law of physics precludes the construction of a device that measures blood sugar but with the needle (virtually never) penetrating the skin?

-2 mlionson 12y
And if no law of physics precludes something from being done, then only our lack of knowledge prevents it from being done. So if there are no laws of physics that preclude developing bomb testing and sugar measuring devices, our arguments against this have nothing to do with the laws of physics, but instead have to do with other parameters, like lack of knowledge or cost. So if the laws of physics do not preclude things from happening, we might as well assume that they can happen, in order to learn from the physics of these possible situations.

So for the purposes of understanding what our physics says can happen, it becomes reasonable to posit that devices have been constructed that can test the activity of Elitzur-Vaidman bombs without (usual) detonation or measure blood sugars without needles (usually) penetrating the skin. It is reasonable to posit this because the known laws of physics do not forbid this.

So those who do not believe in the multiverse but still believe in their own rationality do need to answer the question, "Where is the arm from which the blood was drawn?" Or, individuals denying the possibility of such a measuring device being constructed need to posit a new law of physics that prevents Elitzur-Vaidman bomb testing devices and blood sugar measuring devices (that do not penetrate the skin) from being constructed. If they posit this new law, what is it?
Update Yourself Incrementally

"And if the event happens even more when you expect it to, then it is even more evidence for the theory."

I am not sure you agreed with this based on your response but I will assume that you did. But correct me if I am wrong!

If you did agree, then consider the Bayesian turkey. Every time he gets fed in November, he concludes that his owner really wants what's best for him and likes him, because he enjoys eating and keeps getting food. Every day more food is provided, exactly as he expects given his theory, so he uses Bayesian statistical inference...

4 Jack 12y
A perfect Bayesian turkey would produce multiple hypotheses to explain why he is being fed. One hypothesis would be that his owner loves him; another would be that he is being fattened for eating. Let us stipulate that those are the only possibilities. When the turkey continues to be fed, that is new data. But that data doesn't favor one hypothesis over the other. Both hypotheses are about equally consistent with the turkey continuing to be fed, so little updating will occur in either direction.

But this gives the game away. What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise, like a rise in veganism or something). If the turkey had this information it isn't even close. The probability distribution immediately shifts drastically in favor of the Thanksgiving meal hypothesis. Then, if Thanksgiving comes and goes and the turkey is still being fed, he can update on that information and the probability his owner loves him goes up again.
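Jack's two-hypothesis turkey can be put in code. This is a toy sketch with made-up likelihood numbers of mine, not anything from the thread, but it shows both points: equally well-predicted data leaves the posterior untouched, and the background fact about Thanksgiving does all the work:

```python
# Two hypotheses for why the turkey keeps getting fed (likelihood numbers
# are invented, purely for illustration):
#   "love":  the owner loves the turkey
#   "feast": the turkey is being fattened for Thanksgiving
prior = {"love": 0.5, "feast": 0.5}

def update(belief, likelihoods):
    """One step of Bayes' rule: posterior proportional to prior times likelihood."""
    unnorm = {h: belief[h] * likelihoods[h] for h in belief}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Each daily feeding is predicted equally well by both hypotheses,
# so twenty feedings leave the posterior where it started (0.5 / 0.5):
posterior = dict(prior)
for _ in range(20):
    posterior = update(posterior, {"love": 0.95, "feast": 0.95})

# The background fact that turkeys are usually eaten at Thanksgiving is the
# evidence the feeding data alone cannot supply; it shifts the distribution:
posterior = update(posterior, {"love": 0.1, "feast": 0.9})
```

After the last update the "feast" hypothesis dominates, even though every individual feeding was exactly what the turkey expected.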
5 wnoise 12y
You are badly confused. When you describe things as being in superposition, then only what happened (the entire superposition) affects what does happen (in the entire superposition). If you take some sort of "coherent histories" view, then, again, all coherent histories can equally well be said to have happened.

Correct.

No. We get a superposition of the result being recorded and the result not being recorded.

I do accept the reality of the multiverse. But I know how to use quantum mechanics to make predictions, and I get different ones than you do.
Update Yourself Incrementally

"evidence they've gathered adds up to a sufficiently high probability for P"

Perhaps I should ask what you mean by "evidence"? By evidence do you mean examples of an event happening that corroborates a particular theory that someone holds?

So if

  1. you have an expectation of something happening, and
  2. that something happens,

then you are saying that the event is evidence in favor of the theory. And if the event happens even more when you expect it to, then

  1. it is even more evidence for the theory, and this increased probability is c...
0 Tyrrell_McAllister 12y
All input that you have access to is potentially evidence. That is, ideally, all your input would figure into your evaluation of the probability of any proposition whatsoever. And if some input E weren't evidence with respect to some particular proposition H, you would still have to run the Bayesian updating computation to determine that E didn't change the probability that you ought to assign to H.

Obviously, in practice, computing the upshot of all your input is so ideal as to be physically impossible. But, in principle, everything is evidence. Contradicting prior expectation is a particularly potent kind of evidence. But it is only a special case. Search for "Popper" at Eliezer's An Intuitive Explanation of Bayes' Theorem.
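The point that even "non-evidence" has to be run through the computation can be shown with Bayes' theorem directly. A minimal sketch with toy numbers of my own: when E is equally likely under H and not-H the posterior equals the prior, and a large likelihood ratio gives the "potent" update:

```python
from fractions import Fraction

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

half = Fraction(1, 2)

# E is equally likely under H and not-H: we still had to run the computation,
# and it tells us the probability of H is unchanged (stays 1/2).
no_news = posterior(half, Fraction(8, 10), Fraction(8, 10))

# E is what H predicts and what not-H strongly disfavors: a potent update
# (the posterior jumps to 8/9).
big_news = posterior(half, Fraction(8, 10), Fraction(1, 10))
```

In both cases the same updating rule was applied; "E is not evidence about H" is just the special case where the two likelihoods coincide.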
Update Yourself Incrementally

"For example I don't know how something that is true cannot ever be justified (how else do you know it's true!)"

You can't know that something is true. We are fallible. And our best theories are often wrong. We gain knowledge by arguing with each other and trying to point out logical contradictions in our explanations. Experiments can help us to show that competing explanations are wrong (or that ours is!).

Induction as a scientific methodology has been known (since Hume) to be impossible. Happy to discuss this further if you like. I will cert...

9 Jack 12y
I agree with Hume about just about everything. You're misreading him. Induction definitely isn't impossible. We do it all the time. Scientists do it for a living. Hume certainly didn't think it was impossible. What he thought was that there was no deductive reason for expecting that today will be like yesterday. The only justification is induction itself. Thus, any inductive argument begs the question. But his solution definitely wasn't to throw it out and wallow in extreme skepticism. He thought induction was inevitable (not even something we will, just part of psychological habit formation) and was pretty much the only way of having knowledge about anything. Hume's position is basically my position. Though I have some sketchy arguments in my head that might let us go farther than Hume, I'm more than comfortable with that.

Now it turns out that if your psychological habit formation occurs in a certain way (the Bayesian way) you'll start winning bets against those who form beliefs in different ways. It also lets us do statistical/probabilistic experimentation, which would never falsify anything but can provide evidence for and against theories. It also explains why we like unfalsified theories that have been tested many, many times more than unfalsified theories that have rarely been tested.

If Deutsch has other arguments you can spell out here I'd be happy to hear them.
6 Tyrrell_McAllister 12y
This is true if you take "know" to mean "absolute certainty". And, precisely because absolute certainty never happens, taking "know" in this sense would be pointless. We would never have the opportunity to use such a word, so why bother having it?

For that reason, people on this site take the assertion that they "know" a proposition P to mean that the evidence they've gathered adds up to a sufficiently high probability for P. Here,

  1. "sufficiently high" depends on the context – for example, the expected cost/benefit of acting as though P is true; and
  2. the evidence that they've gathered "adds" in the sense of Bayesian updating.

That's all that they mean by "know". On the Bayesian interpretation, induction is just a certain mathematical computation. The only limits on its possibility are the limits on your ability to carry out the computations.
6 wnoise 12y
True, but many will say it is impossible for all practical purposes. The situation resolves into either:

  1. The measuring apparatus pierces the skin, has a bloody needle, and reports the result.
  2. The measuring apparatus does not pierce the skin, does not have a bloody needle, and does not report the result.

Histories only interfere when they come to the same end result. That doesn't happen in this case.
5 Jack 12y
This is pretty muddled and wrong. You use a lot of terms in an unorthodox way. For example, I don't know how something that is true cannot ever be justified (how else do you know it's true!). Also, there is no such thing as science without induction, no laws of physics or predictions. So I'm pretty confused about what your position is. That's okay though, because it looks like you've never heard of Bayesian inference. In which case this is a really important day in your life.

The Wikipedia entry
The SEP entry
Eliezer's explanation of the math
Also: the "Rationality and Science" subsection at the bottom here.

Who has better links?

Edit: Welcome to Less Wrong, btw! Feel free to introduce yourself.

Edit again: This PDF looks good.
David Deutsch: A new way to explain explanation

He does not mean "lacking unnecessary details". For example, the statements "Everyone just acts in his own interest" or "Everyone is really an altruist" are simple, lack unnecessary details, explain quite a lot, and are consistent with Occam's razor. But by Deutsch's criteria they are bad explanations because they are too easy to vary. For example, someone who believes in the self-interest theory could say, "John gave to charity because he would have felt guilty otherwise. So he really was selfish."

We see that...