At least some of the arguments offered by Richard Rorty in Philosophy and the Mirror of Nature are great. Understanding the arguments takes time because they are specific criticisms of a long tradition of philosophy. A neophyte might respond to his arguments by saying "Well, the position he's attacking sounds ridiculous anyway, so I don't see why I should care about his criticisms." To really appreciate and understand the argument, the reader needs to have a sense of why prior philosophers were driven to these seemingly ridiculous positions in the first place, and how their commitment to those positions stems from commitment to other very common-sensical positions (like the correspondence theory of truth). Only then can you appreciate how Rorty's arguments are really an attack on those common-sensical positions rather than on some outré philosophical ideas.

Perhaps explicitly thinking of them as systems of equations (or transformations on a vector) would be helpful.

As an example, suppose you are asked to multiply matrices A and B, where A is [1 2, 0 4, -1 2] (the commas represent the end of a row) and B is [2 1 0, 3 1 2]. Start out by taking the rightmost matrix (B in this case) and converting it into a series of linear expressions in x, y and z, one for each row. Since the first row is 2 1 0, the corresponding expression is 2x + 1y + 0z, which simplifies to 2x + y. Assign each of these expressions to a new variable. So we now have

X = 2x + y

Y = 3x + y + 2z

Now do the same thing with the matrix on the left, except this time use the new variables you've introduced (X and Y), so the three expressions you end up with (one for each row) will be

X + 2Y

4Y

-X + 2Y

Now that you have these formulae, substitute in the values of X and Y based on your earlier equations. You get

(2x + y) + 2(3x + y + 2z)

4(3x + y + 2z)

-(2x + y) + 2(3x + y + 2z)

Simplifying, you get

8x + 3y + 4z

12x + 4y + 8z

4x + y + 4z

The coefficients of these equations are the result of the multiplication. So the product of the two matrices is [8 3 4, 12 4 8, 4 1 4].

I'll admit this is not the quickest way to go about multiplying matrices, but it might be easier for you to remember since it doesn't seem as arbitrary. And maybe once you get used to thinking about multiplication this way, the usual visual rule will start making more sense to you.
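
If you want to check the mechanics, here is a minimal sketch of the same substitution procedure in Python, assuming sympy is available (the variable names are just my own labels for the steps above):

```python
# Symbolic version of the substitution method described above.
from sympy import symbols, expand

x, y, z = symbols("x y z")

A = [[1, 2], [0, 4], [-1, 2]]   # left matrix, one inner list per row
B = [[2, 1, 0], [3, 1, 2]]      # right matrix, one inner list per row

# Step 1: each row of B becomes a linear expression in x, y, z
#   X = 2x + y + 0z,  Y = 3x + y + 2z
forms = [b0 * x + b1 * y + b2 * z for b0, b1, b2 in B]

# Step 2: each row of A combines those expressions; substitute and expand
rows = [expand(a0 * forms[0] + a1 * forms[1]) for a0, a1 in A]

# Step 3: read off the coefficients; these are the rows of the product A*B
product = [[row.coeff(v) for v in (x, y, z)] for row in rows]
print(product)   # [[8, 3, 4], [12, 4, 8], [4, 1, 4]]
```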

I think Bostrom's argument applies even if they aren't "highly accurate". If they are simulated at all, you can apply his argument.

I don't think that's true. The SSA will have different consequences if the simulated minds are expected to be very different from ours.

If we suppose that simulated minds will have very different observations, experiences and memories from our own, and we consider the hypothesis that the vast majority of minds in our universe will be simulated, then SSA simply disconfirms the hypothesis. If I should reason as if I am a random sample from the pool of all observers, then any theory which renders my observations highly atypical will be heavily disconfirmed. SSA will simply tell us it is unlikely that the vast majority of minds are simulated. Which means that either civilizations don't get to the point of simulating minds or they choose not to run a significant number of simulations.

If, on the other hand, we suppose that a significant proportion of simulated minds will be quite similar to our own, with similar thoughts, memories and experiences, and we further assume that the vast majority of minds in the universe are simulated, then SSA tells us that we are likely simulated minds. It is only under those conditions that SSA delivers this verdict.
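
To see the difference numerically, here is a toy sketch (my own illustrative numbers, not anything from Bostrom) of how SSA treats the hypothesis "the vast majority of minds are simulated" in the two cases:

```python
# Toy SSA calculation (illustrative numbers only): under SSA, a hypothesis on
# which most minds are simulated AND simulated minds have observations very
# unlike ours is heavily disconfirmed by our ordinary observations.

def posterior_most_simulated(frac_simulated, p_obs_given_sim,
                             p_obs_given_unsim=1.0, prior=0.5):
    """Posterior of 'most minds are simulated', given that I make ordinary,
    human-like observations, treating myself as a random sample of observers.

    frac_simulated    -- fraction of minds simulated if the hypothesis is true
    p_obs_given_sim   -- chance a random simulated mind has observations like mine
    p_obs_given_unsim -- chance a random unsimulated mind has observations like mine
    prior             -- prior probability of the hypothesis
    """
    # Likelihood of my observations if most minds are simulated
    like_h = frac_simulated * p_obs_given_sim + (1 - frac_simulated) * p_obs_given_unsim
    # Likelihood if almost no minds are simulated (the complementary fraction)
    like_not_h = (1 - frac_simulated) * p_obs_given_sim + frac_simulated * p_obs_given_unsim
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Simulated minds very unlike us: the hypothesis is crushed.
print(posterior_most_simulated(0.999, p_obs_given_sim=1e-6))   # ~0.001
# Simulated minds just like us (ancestor-simulations): the hypothesis is
# untouched, and conditional on it, I am almost certainly simulated.
print(posterior_most_simulated(0.999, p_obs_given_sim=1.0))    # 0.5
```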

This is why, when Bostrom describes the Simulation Argument, he focuses on "ancestor-simulations". In other words, he focuses on post-human civilizations running detailed simulations of their evolutionary history, not just simulations of any arbitrary mind. It is only under the assumption that post-human civilizations run ancestor-simulations that the SSA can be used to conclude that we are probably simulations (assuming that the other two possible conclusions of the argument are rejected).

So I think it matters very much to the argument that the simulated minds are a lot like the actual minds of the simulators' ancestors. If not, the argument does not go through. This is why I said you seem to simply be accepting (2), the conclusion that post-human civilizations will not run a significant number of ancestor-simulations. Your position seems to be that the simulations will probably be radically dissimilar to the simulators (or their ancestors). That is equivalent to accepting (2), and does not conflict with the simulation argument.

You seem to consider the Simulation Argument similar to the Boltzmann brain paradox, which would raise the same worries about empirical incoherence that arise in that paradox, worries you summarize in the parent post. The reliability of the evidence that seems to point to me being a Boltzmann brain is itself predicated on me not being a Boltzmann brain. But the restriction to ancestor-simulations makes the Simulation Argument importantly different from the Boltzmann brain paradox.

I am taking issue with the conclusion that we are living in a simulation even given premise (1) and (2) being true.

(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

He's saying that (3) doesn't hold if we are not in a simulation, so either (1) or (2) is true. He's not saying that if we're not in a simulation, we somehow are actually in a simulation given this logic.

Right. When I say "his conclusion is still true", I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not "we are living in a simulation".

If I conclude that there are more simulated minds than real minds in the universe, I simply do not think that implies that I am probably a simulated mind.

This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post).

I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest. In that case, your argument is not incompatible with Bostrom's conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors. If that's all you're claiming, then you're not disagreeing with the simulation argument.

First, Bostrom is very explicit that the conclusion of his argument is not "We are probably living in a simulation". The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won't reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.

Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other places). He essentially flips your disjunctive reasoning around. He argues that we are either in a simulation or we are not. If we are in a simulation, then claim 3 is obviously true, by hypothesis. If we are not in a simulation, then our ordinary empirical evidence is a veridical guide to the universe (our universe, not some other universe). This means the evidence and assumptions used as the basis for the simulation argument are sound in our universe. It follows that, since claim 3 is false by hypothesis, either claim 1 or claim 2 is very likely to be true. It's worth noting that these two are claims about our universe, not about some parent universe.

In other words, your objection is based on the argument that if we are in a simulation, there is no good reason to trust the assumptions of the simulation argument (such as assumptions about how our simulators will behave). Bostrom's reply is that if we are in a simulation, then his conclusion is true anyway, even if the specific reasoning he uses doesn't apply. If we are not in a simulation, then the reasoning he uses does apply, so his conclusion is still true.

There does seem to be some sort of sleight-of-mind going on here, if you want my opinion. I generally feel that way about most non-trivial uses of anthropic reasoning. But the exact source of the sleight is not easy for me to detect. At the very least, Bostrom has a prima facie response to your objection, so you need to say something about why his response is flawed. Making your objection and Bostrom's response mathematically precise would be a good way to track down the flaw (if any).

"Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all?"

-- David Chalmers

These questions may be a product of conceptual confusion, but they don't seem that way to me. Perhaps I am confused in the same way.

When you update, you're not simply imagining what you would believe in a world where E was true, you're changing your actual beliefs about this world. The point of updates is to change your behavior in response to evidence. I'm not going to change my behavior in this world simply because I'm imagining what I would believe in a hypothetical world where E is definitely true. I'm going to change my behavior because observation has led me to change the credence I attach to E being true in this world.

Updating by Bayesian conditionalization does assume that you are treating E as if its probability is now 1. If you want an update rule that is consistent with maintaining uncertainty about E, one proposal is Jeffrey conditionalization. If P1 is your initial (pre-evidential) distribution, and P2 is the updated distribution, then Jeffrey conditionalization says:

P2(H) = P1(H | E) P2(E) + P1(H | ~E) P2(~E).

Obviously, this reduces to Bayesian conditionalization when P2(E) = 1.
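
In code, the update is just a weighted average of the two conditional credences. A minimal sketch, with made-up numbers for illustration:

```python
# Jeffrey conditionalization: P2(H) = P1(H|E) * P2(E) + P1(H|~E) * P2(~E)

def jeffrey_update(p1_h_given_e, p1_h_given_not_e, p2_e):
    """Update credence in H when the evidence only shifts P(E) to p2_e (not to 1)."""
    return p1_h_given_e * p2_e + p1_h_given_not_e * (1 - p2_e)

# Illustrative values: P1(H|E) = 0.9, P1(H|~E) = 0.2, and observation moves
# my credence in E to 0.7 rather than all the way to certainty.
print(jeffrey_update(0.9, 0.2, 0.7))   # 0.69

# With P2(E) = 1 this reduces to ordinary Bayesian conditionalization:
print(jeffrey_update(0.9, 0.2, 1.0))   # 0.9, i.e. P1(H|E)
```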

Credit and accountability seem like good things to me, and so I want to live in a world where people/groups receive credit for good qualities, and are held accountable for bad qualities.

If this is your concern, then you should take into account what sorts of groups are appropriate loci for credit and accountability. This will, of course, depend on what you think is the point of credit/accountability.

If you believe, as I do, that the function of credit and accountability is to influence future behavior, then it seems that the appropriate loci of credit/accountability should be "agential". In other words, objects of credit and blame should be capable of something resembling goal-directed alteration of behavior. Individual people are appropriate loci on this account, since they are (at least, mostly) paradigmatic agents.

Some groups might also qualify as agential, and thus as appropriate loci of credit and blame. Corporations come to mind, as do nations. But that is because those groups have a particular organizational structure that makes them somewhat agent-like. Not every group has this quality. The group of all left-handed people, for instance, is not agent-like in any relevant sense, so I don't see the point of assigning credit or blame to it. Similarly for racial groups or genders.

It seems to me that your objection here is driven mainly by a general dislike of Gleb's contributions (and perhaps his presence on LW), rather than a sincere conviction about the importance of your point. I mean, this is a ridiculous nitpick, and the hostility of your call-out is completely disproportionate to the severity of Gleb's supposed infraction.

While Gleb's aside might be a "lie" by some technical definition, it certainly doesn't match the usual connotations of the term. I see virtually zero harm in the kind of "lie" you're focusing on here, so I'm not sure about the value of your piece of advice, other than signalling your aversion towards Gleb.
