I think that logical positivism is generally self-refuting. It typically makes claims about what is meaningful that would be meaningless under its own standards. It also depends on ideas about what counts as observable or analytically true that are not defensible, again under its own standards. Nor does it change things to formulate it as a methodological imperative. If the methodology of logical positivism is imperative, then on what grounds? Because other stuff seems silly?

I am obviously reading something into lukeprog's post that may not be there. But the materials on his curriculum don't seem very useful in answering a broad class of questions in what is normally considered philosophy. And when he's mocking philosophy abstracts, he dismisses the value of thinking about what counts as knowledge. But if that's not worthwhile, then, um, how does he know?

This is a very good point. If we agree cognitive biases make our understanding of the world flawed, why should we assume that our moral intuitions aren't equally flawed? That assumption makes sense only if you actually equate morality with our moral intuitions. This isn't what I mean by the word "moral" at all—and as a matter of historical fact many behaviors I consider completely reprehensible were at one time or another widely considered to be perfectly acceptable.

I agree that there is good work to be done with math in all of those fields. But there's plenty of good work in most of them that can be done without math too.

The things on your curriculum don't seem like philosophy at all in the contemporary sense of the word. They are certainly very valuable for figuring out the answers to concrete questions within their particular domains. But they are less useful for understanding broader questions about the domains themselves, or about the appropriateness of the questions. Learning formal logic, for example, isn't much help in understanding what logic is. Likewise, knowing how people make moral decisions is not at all the same as knowing what the moral thing to do would be. I gather your point is that only certain concrete questions have any real meaning.

This naive logical positivism is dismaying in a blog about rationality. I certainly agree that there is plenty of garbage philosophy, and that most of Aristotle's scientific claims were wrong. But the problem with logical positivism is that its claim about what's meaningful and what isn't fails to be a meaningful claim under its own criteria.

Your dismissal of certain types of philosophy inevitably rests on particular implicit answers to the kinds of philosophical questions you dismiss as worthless (for example: what makes a philosophical idea wrong?). Dismissing those questions, and failing to think through the assumptions on which your viewpoint rests, only guarantees that your answers to those questions will be pretty bad. And that's something you could learn from a careful reading of Plato.

If your point is that it isn't necessarily useful to try to say in what sense our procedures "correspond," "represent," or "are about" what they serve to model, I completely agree. We don't need to explain why our model works, although some theory may help us to find other useful models.

But then I'm not sure I see what is at stake when you talk about what makes a proof correct. Obviously we can have a valuable discussion about what kinds of demonstration we should find convincing. But ultimately the procedure that guides our behavior either gives satisfactory results or it doesn't; we were either right or wrong to be convinced by an argument.

I thought of a better way of putting what I was trying to say. Communication may be orthogonal to the point of your question, but representation is not. An AI needs to use an internal language to represent the world or the structure of mathematics, whether or not it ever attempts to communicate; this is the crux of Wittgenstein's famous "private language argument." You can't evaluate "syntactic legality" except within a particular language, whose correspondence to the world is not given as a matter of logic (although it may be more or less useful pragmatically).

The mathematical realist concept of "the structure of mathematics" (at least as something separate from the physical world) is problematic if you cannot describe what that structure might be in any non-arbitrary way. But I see your point. I guess my response would be that the concept of "a proof," which implies that you have demonstrated something beyond the possibility of contradiction, is not what really matters for your purposes. Ultimately, how an AI manipulates its representations of the world and how it internally represents the world are inextricably related problems. What matters is how well the AI can predict, retrodict, and manipulate physical phenomena. Your AI can be a pragmatist about the concept of "truth."

The motivation for the extremely unsatisfying idea that proofs are social is that no language, not even the formal languages of math and logic, is self-interpreting. In order to understand a syllogism about kittens, I have to understand the language you use to express it. You could try to explain to me the rules of your language, but you would have to do that in some language, which I would also have to understand. Unless you assume some a priori linguistic agreement, explaining how to interpret your syllogism requires explaining how to interpret the language of your explanation, and explaining how to interpret the language of that explanation, and so on ad infinitum. This is essentially the point Lewis Carroll was making when he argued that "A" and "if A then B" are not enough to conclude B. You also need "if 'A' and 'if A then B', then B," and so on without end. You and I may understand that as implied already. But it is not logically necessary that we do so if we don't already understand the English language the same way.
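
To see the regress concretely, here is a minimal sketch in Lean (the choice of a proof assistant is mine, purely for illustration): even when Carroll's extra conditional is added as an explicit premise, using it is itself an application of the very inference rule it was supposed to replace.

```lean
-- Modus ponens: the step from A and A → B to B is carried out by
-- function application, a rule built into the system rather than a premise.
example (A B : Prop) (ha : A) (hab : A → B) : B := hab ha

-- Carroll's move: add "if A and (A → B), then B" as a further premise.
-- We can still derive B, but only by applying hcarroll, which is another
-- instance of the same built-in rule, so the regress never bottoms out
-- in premises alone.
example (A B : Prop) (ha : A) (hab : A → B)
    (hcarroll : A ∧ (A → B) → B) : B :=
  hcarroll ⟨ha, hab⟩
```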

This is, I think, the central point that Ludwig Wittgenstein makes in the Philosophical Investigations and Remarks on the Foundations of Mathematics. It's not that your awesome kitten pictures are problematic. You and I might well see the same physical world but not agree on how your syllogism applies to it. Along the same lines, Saul Kripke argued that the statement 2+2=4 is problematic, not because of any disagreement about the physical world, but because we could disagree about the meaning of the statement in ways that no discussion could ever necessarily clear up (and that it is therefore impossible to really say what the meaning of that simple statement is).
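
Kripke's concrete illustration is the deviant function he calls "quus," which agrees with ordinary addition on every sum small enough to have actually been computed and diverges beyond that. A quick sketch (in Python purely for concreteness; the bound 57 and the answer 5 are Kripke's own):

```python
def plus(x, y):
    """Addition as we all take ourselves to mean it."""
    return x + y

def quus(x, y):
    """Kripke's deviant rule: agrees with plus on every case small
    enough to have actually been computed, then diverges."""
    if x < 57 and y < 57:
        return x + y
    return 5

# For a speaker who has never computed with numbers as large as 57,
# every past use of "+" is equally consistent with both functions,
# so past usage alone cannot settle which rule was being followed.
```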

In math and logic, this is typically not much of a problem. We generally all read the statement 2+2=4 in a way that seems the same as the way everyone else reads it. But that is a fact about us as humans, with our particular education and cognitive abilities, not an intrinsic feature of math or logic. Proof of your kitten syllogism is social in the sense that we agree socially on the meaning of the syllogism itself, not because the states of affairs you represent in your pictures are in any way socially determined.

And if this is not normally much of an issue in mathematics, it is incredibly problematic in any kind of metamathematical or metalogical philosophy, that is, once we start trying to explain why math and logic must necessarily be true. The key point is not that the world is socially constructed, but that any attempt to talk about the world is socially constructed, in a way that makes final, necessary judgments about its truth or falsity impossible.