Not long ago, I added a tag to LessWrong for the problem of the criterion. Shortly after that its text received an edit claiming it is an open problem. However, I think the problem of the criterion is not really an open problem. Here's why.

Here on LessWrong we say open problems are "the things in a field that haven't yet been figured out". On Wikipedia, an open problem is described as "a known problem which can be accurately stated, and which is assumed to have an objective and verifiable solution, but which has not yet been solved (i.e., no solution for it is known)." By either of these standards, does the problem of the criterion qualify as an open problem?

If our standard is "has it been figured out?", I'd say yes, it has. Not everyone likes what was figured out, which creates an incentive to claim the problem is still open, but disliking the answer is insufficient reason to make such a claim. After all, I don't exactly like that the answer to the halting problem is that no general program can determine whether an arbitrary program halts, but so be it; that's the universe we find ourselves in, so let's move on. So it is, I think, with the problem of the criterion: yeah, it kind of sucks that the sort of truth we can achieve when we restrict ourselves to mathematical thinking can't be achieved everywhere, but that's how the world is, so I guess we'll figure out how to live with it, because we already are.
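(For readers who want the halting-problem analogy spelled out: the classic diagonal argument can be sketched in a few lines of Python. This is my own illustrative sketch, not anything from the tag page; `make_diagonal` and `always_loops` are hypothetical names.)

```python
def make_diagonal(claimed_halts):
    """Given any purported halting decider, construct a program that
    the decider must misjudge (the classic diagonal argument)."""
    def diagonal():
        if claimed_halts(diagonal):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return diagonal

# Feed it a candidate decider that claims every program loops forever:
always_loops = lambda program: False
d = make_diagonal(always_loops)
d()  # halts immediately, proving always_loops wrong about d
```

Whatever answer a candidate decider gives about its own diagonal program, the program does the opposite, so no decider can be right about every input. That's a solved problem whose solution is "no solution of the kind you wanted exists."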

What if we use Wikipedia's definition of an open problem? The problem of the criterion is certainly "a known problem which can be accurately stated". Is it "assumed to have an objective and verifiable solution"? The whole point of my and others' analyses of the problem is to show that this assumption is unfounded: no objective and verifiable solution can be found, because the question is fundamentally flawed by asking for something impossible. In this way it seems similar to the "complete and consistent problem" or the "momentum and position problem": Gödel's incompleteness theorems and Heisenberg's uncertainty principle, respectively, show these are problems that don't have "objective and verifiable solutions". In this same way, I and others claim the problem of the criterion is "solved" because we've shown that solving it is impossible.
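(For concreteness, the "momentum and position problem" alluded to here is Heisenberg's uncertainty relation, which puts a hard lower bound on the product of the standard deviations of a particle's position and momentum, so no measurement scheme can make both arbitrarily precise:)

$$\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}$$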

This is a small issue, so why have I bothered to take the time to talk about it? Because I think letting something like calling the problem of the criterion an "open problem" slip by is on par with, say, letting belief in astrology as a causal force slip by: it's bad epistemic hygiene to leave this kind of thing lying around in your mind or in a community, and it provides a door through which more such stuff can enter. So although this may seem a minor point, it's a point worth clarifying in my ongoing project to upgrade the epistemology of the LessWrong community.

If you click through to the problem of the criterion tag page on the date of publication (2022-01-06), you'll notice it still contains the open problem claim. Rather than unilaterally making this edit, I'm publishing this post to see what the response is. Since there's some existing disagreement (I wouldn't call it an open problem, and at least one fellow LessWronger would), it's reasonable that I may be mistaken and should at least check with others first. So if you disagree, make the case below and see if you can convince me! I hope that out of the conversation will fall some compelling evidence either for changing the tag or for leaving it as is.


5 comments

I don't really disagree with the main claim here, but I'll steelman the opposite claim for a moment. Why call the problem of the criterion open?

To my knowledge (and please tell me if I'm wrong here), there is no widely accepted mathematical framework for the problem of the criterion in which the problem has been proved unsolvable. In that regard it is not analogous to e.g. Gödel's theorems. This is important: if some formal version of the problem of the criterion comes up when I'm working on a theorem about agency, or trying to design an AI architecture with some property, then I want the formal argument, not just a natural-language argument that my problem is intractable. Such natural-language arguments are not particularly reliable; they tend to sneak in a bunch of hidden premises, and a mathematical version of the problem which shows up in practice can violate those hidden premises.

For example: for most of the 20th century, it was basically-universally accepted that no statistical analysis of correlation could reliably establish causation. Post-Judea-Pearl, this is clearly wrong. The formal arguments that correlation cannot establish causation had loopholes in them - most importantly, they were only about two variables, and largely fell apart with three or more variables. If I were working on some theorem about AI or agency, and wanted to show something about an agent's ability to deduce causation from observation of a large number of variables, I might have noticed my inability to prove the theorem I wanted. At the very least, I would have noticed the lack of a robust mathematical framework for talking about what causality even is, and likely would have needed to develop one. (Indeed, this is basically what Pearl and others did.) But the natural language arguments glossed over such subtleties; it wasn't until people actually started developing the mathematical framework for talking about causality that we noticed correlative data could be sufficient to deduce it.
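(The three-variable loophole can be demonstrated with a toy simulation. The sketch below is my construction, not the commenter's: it simulates a collider X → Z ← Y, where X and Y are uncorrelated on their own but become correlated once you condition on Z. That asymmetry is exactly the kind of three-variable signature that lets observational data distinguish causal structures where two-variable correlation alone cannot.)

```python
import random

random.seed(0)
n = 100_000

# Ground truth: X and Y are independent causes of Z (a collider X -> Z <- Y).
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]
zs = [x + y + random.gauss(0, 0.1) for x, y in zip(xs, ys)]

def corr(a, b):
    """Pearson correlation, computed by hand to stay dependency-free."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((y - mb) ** 2 for y in b) / len(b)
    return cov / (va * vb) ** 0.5

marginal = corr(xs, ys)      # near zero: X and Y look independent

# Condition on the collider by selecting high-Z samples:
sel = [(x, y) for x, y, z in zip(xs, ys, zs) if z > 1]
sx, sy = (list(t) for t in zip(*sel))
conditional = corr(sx, sy)   # strongly negative (Berkson's paradox)
```

Only the orientation X → Z ← Y predicts this pattern (marginal independence, dependence given Z), which is why Pearl-style algorithms can orient that edge from observational data alone, contra the old natural-language argument.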

By contrast, I find it hard to imagine something like that being overlooked by Gödel's theorems. There, we do have a mathematical framework, and we know what kinds-of-things allow loopholes, and roughly how big those loopholes can be.

I don't see any framework for the problem of the criterion which would make me confident that we won't have a repeat of "correlation doesn't imply causation", the way Gödel's theorems give me such confidence. Again, this may just be my ignorance in not having read up on the topic much; please correct me if so.

Mathematics is not exempt from the problem of the criterion.

I'm not aware of anything quite so rigorous beyond what we might call "philosophical math": using words in a precise way to evaluate doxastic logic. Maybe this is enough, but it does feel like we should at least write it down somewhere in formal notation to make sure there are no gaps.

I have to admit I'm not actually sure what lesson most people take from the problem of the criterion.

Do people think circularity is a mistake, and that we should use a typed theory of truth (so we might have an algorithm that accepts statements about the world as input and evaluates empirical_truth, but returns a type error when run on algorithms with the same input/output structure as itself)? Or do people think that circularity is untrustworthy but not a sign of failure, and that it shows we necessarily have some non-truth-based process by which we arrive at true algorithms for evaluating statements about the world? Or do people think our algorithms for determining truth can in fact be arrived at for truth-related reasons, and that there's merely some essential self-reference inherent in how we define "truth" and "truth-seeking"?
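(One way to make the typed-theory option concrete: a toy evaluator, my own hypothetical sketch, that happily judges claims about ground-level objects but raises a type error when asked to judge a claim about an evaluator like itself.)

```python
def evaluate(claim):
    """Judge a (subject, predicate) claim about the world, but refuse
    claims whose subject is itself an evaluator/algorithm -- the move a
    typed theory of truth makes to block self-reference."""
    subject, predicate = claim
    if callable(subject):
        raise TypeError("this evaluator does not range over evaluators")
    return predicate(subject)

print(evaluate((4, lambda n: n % 2 == 0)))   # first-order claim "4 is even": True
# evaluate((evaluate, lambda e: True))       # raises TypeError: self-application blocked
```

The cost, as with any typed theory, is completeness: there are claims (those about evaluators) the system simply refuses to assess.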

Will people fight contingency to their dying breath, or are they mostly okay with Neurath's boat?

I hope they are okay with Neurath's boat, because that seems to be the world we live in. That is, the problem of the criterion shows us there is no solid foundation because we are born into contingency.

There are certainly folks who wanted a typed theory of truth (the logical positivists), and there are folks who refashion truth in the image of referential integrity (the coherentists), but even for them it means either giving up completeness (to get a typed theory) or giving up objectivity (since truth must cohere with our subjective experience). So circularity isn't really a sign of failure; it's just how it is. And truth isn't really about truth, it's about signaling (just kidding: it's about what we care about).