(These are the touched up notes from a class I took with CMU's Kevin Kelly this past semester on the Topology of Learning. Only partially optimized for legibility)

One of the whole points of this project is to create a clear description of various forms of "success", and to be able to make claims about what is the highest form of success one can hope for given the problem that is being faced. The ultimate point of this is to have a useful frame for justifying the use of different methods. Now I'll introduce the gist of our formalization of methods so that we can get back to the good stuff.

In its most general form, a method is just a function from info states to hypotheses.

Often I might use notation that builds the question into the method, to highlight "this method is responding to this yes-or-no question, given this evidence." This is mostly useful to be able to talk about certain relations between the question being asked and your answers.
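Since no formal machinery has been set up yet, here's a minimal Python sketch of one way to encode this (the encoding and the world names are my own, purely for illustration): info states and hypotheses are both sets of possible worlds, and a method is literally just a function between them.

```python
# Toy encoding (not from the notes): possible worlds are strings, an info state
# is the set of worlds still compatible with the evidence so far, and a
# hypothesis is a set of worlds. A method is a function from info states to
# hypotheses.
from typing import FrozenSet

InfoState = FrozenSet[str]
Hypothesis = FrozenSet[str]

def skeptical_method(evidence: InfoState) -> Hypothesis:
    """A deliberately boring method: assert exactly what the evidence says."""
    return frozenset(evidence)

# Usage: the evidence has ruled out w3.
print(skeptical_method(frozenset({"w1", "w2"})))  # frozenset({'w1', 'w2'})
```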

Gettier Problem

We are going to look at methods that respond with an articulation of an answer: instead of outputting a bare answer $H$, the method outputs some $A$ such that $A \subseteq H$. There's an interesting reason this matters, and part of it has to do with the Gettier Problem.

The Gettier problem has to do with believing the right thing for the wrong reasons. Consider this example.

There are three possible worlds, boxes are possible info states, and the two hypotheses are outlined. We haven't given a formal notion of what it means to be Occam / "to act in accord with simplicity". But pretend we have. An Occam method would say:

the first hypothesis in the initial info state, the other one in the next info state, and the first one again after that. In a bit, we're going to make a big deal about the criterion of progressive learning (never drop the truth once you have it). The method I just outlined drops the truth in this problem. Suppose the true world is one where the first hypothesis holds. In the initial info state the method says that hypothesis, which is true, but then we drop the truth and say the other one in the next info state, only to return to it later. Can you see why this is sorta a Gettier problem? In the initial info state we proclaim the hypothesis, but we do it for inductive reasons: it's the simplest hypothesis right now. So what we say is true, but we don't have a super sure reason for saying it. That means that when we get more info, we drop the truth, because our previous "bad reason" for saying it has been disconfirmed.

Our way around these sorts of Gettier problems is to not restrict the method to only giving a yes or no answer. That's what an articulation is. A method that gives an articulation answers with the specific way it currently thinks the hypothesis is true, not just the bare hypothesis. This makes a lot of sense. The reason you're saying the hypothesis when you're in the initial info state is different from the reason you say it in the later one. Letting methods give articulations instead of a flat yea or nay lets you not lose that information.
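As a toy illustration of the difference (all names here are hypothetical, since the original picture isn't reproduced): the flat method's track record loses the reason behind each assertion, while the articulating method's track record keeps it.

```python
# Hypothetical nested info states E0 ⊇ E1 ⊇ E2 from the example above.
# The flat method asserts the bare hypothesis, drops it, then reasserts it;
# nothing in its record shows the two assertions rest on different reasons.
flat_method = {"E0": "H", "E1": "not H", "E2": "H"}

# The articulating method names the specific way H could be true that it is
# currently betting on, so the two assertions of H are visibly different answers.
articulating_method = {
    "E0": "H, via the simple H-world",
    "E1": "not H",
    "E2": "H, via the remaining H-world",
}
```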

Success, Convergences, and Verification

Convergence to an articulation

$M$ converges to an articulation of $H$ in $w$ $\iff$ there is an info state $E$ with $w \in E$ such that for every further info state $E' \subseteq E$ with $w \in E'$, $M(E')$ is an articulation of $H$.

Plain English: a method converges to (an articulation of) a hypothesis in a given world iff that world has some information state such that, no matter what further info you get, your method will stick to its guns and give an articulation of $H$.

Convergence to a true articulation

$M$ converges to a true articulation of $H$ in $w$ $\iff$ there is an info state $E$ with $w \in E$ such that for every further info state $E' \subseteq E$ with $w \in E'$, $M(E')$ is an articulation of $H$ and $w \in M(E')$.

Plain English: same as converging to an articulation of $H$, with the added stipulation that your articulations must include the world $w$.
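To make these two definitions concrete, here's a minimal Python sketch in a toy encoding of my own (worlds as strings, hypotheses and answers as sets of worlds, and a finite prefix of the method's answers standing in for the infinite info stream, so this only approximates real convergence):

```python
# An "articulation" of H is read as a nonempty subset of H; it is *true* in
# world w when it contains w. We check whether, from some point in the finite
# answer prefix onward, every answer is an articulation of H (and contains w,
# if a world is supplied).
def converges_to_articulation(answers, H, world=None):
    for start in range(len(answers)):
        tail = answers[start:]
        if all(a and a <= H and (world is None or world in a) for a in tail):
            return True
    return False

# Usage with hypothetical worlds w1, w2, w3 and H = {w1, w2}:
H = {"w1", "w2"}
answers = [{"w1", "w2", "w3"}, {"w2"}, {"w1", "w2"}, {"w1"}]
print(converges_to_articulation(answers, H))              # True: settles inside H
print(converges_to_articulation(answers, H, world="w1"))  # True: and keeps w1 from then on
```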

Verification in the Limit

$M$ verifies $H$ in the limit in $w$ $\iff$ $M$ converges to a true articulation of $H$ if $w \in H$, and does not converge to a true articulation of $H$ if $w \notin H$.

Strong Verification in the Limit

$M$ strongly verifies $H$ in the limit in $w$ $\iff$ $M$ converges to a true articulation of $H$ if $w \in H$, and does not converge to an articulation of $H$ (true or not) if $w \notin H$.
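Continuing the toy sketch above (and reusing converges_to_articulation from it), the two definitions differ only in what is forbidden in the worlds where the hypothesis is false:

```python
# answer_streams: {world: finite prefix of the method's answers in that world}
# (a finite stand-in for quantifying over all worlds and all info streams).

def verifies_in_the_limit(answer_streams, H):
    # Weak clause: converge to a TRUE articulation exactly in the H-worlds.
    return all(
        converges_to_articulation(answers, H, world=w) == (w in H)
        for w, answers in answer_streams.items()
    )

def strongly_verifies_in_the_limit(answer_streams, H):
    # Strong clause: in the non-H worlds, don't converge to ANY articulation
    # of H, true or otherwise.
    return all(
        converges_to_articulation(answers, H, world=w)
        if w in H
        else not converges_to_articulation(answers, H)
        for w, answers in answer_streams.items()
    )
```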

Difference between strong and normal verification:

Consider someone pondering whether the true form of the law is polynomial, and if so, what degree it is. This question can be verified in the limit, but it can't be strongly verified. Forever shifting the polynomial degree that you think the law is still counts as converging to an articulation of "The true law is polynomial". To strongly verify, at some point your method would have to say "It's not polynomial!" But if you had to keep interspersing those in between your other guesses, you wouldn't get to converge at all.
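Here's a hedged toy version of that degree-shifting behavior (my own construction, not from the notes): the method conjectures the lowest degree whose finite differences look constant on the data seen so far. On every finite prefix this succeeds, which is exactly the trouble: each guess is an articulation of "the law is polynomial", even in a world where the law isn't.

```python
# A degree-d polynomial sampled at 1, 2, 3, ... has constant d-th finite
# differences, so "lowest d whose differences look constant so far" is a
# natural conjecture from finite data.
def conjectured_degree(ys):
    d, diffs = 0, list(ys)
    while len(set(diffs)) > 1:
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
        d += 1
    return d

polynomial_world = [n ** 2 for n in range(1, 10)]      # true law: degree 2
non_polynomial_world = [2 ** n for n in range(1, 10)]  # true law: exponential

for k in range(3, 10):
    # Stays at 2 in the polynomial world; climbs 2, 3, 4, ... forever in the other.
    print(conjectured_degree(polynomial_world[:k]), conjectured_degree(non_polynomial_world[:k]))
```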

Retractions

Here's a picture.

Basically a retraction is any time you get more information and say something that isn't strictly a refinement of your earlier hypothesis. Retractions are really important because they are going to be a key measure of success, one which we connect to various topological properties of a question.

Some brief philosophical motivation for caring about retractions: At first glance, minimizing retractions sounds like being closed-minded, and that sounds like a bad quality to have. Luckily, retractions aren't the only thing we're paying attention to when we talk about success. Often we'll talk about converging to the truth while also minimizing retractions. The closed-minded curmudgeon who sticks to their guns forever doesn't even converge to the truth in most scenarios, and is thus not appealing to us. One way to think about minimizing retractions is as "getting to the truth with minimum fuss". It's like missile pursuit.

It's totally expected that for most scientific problems, you're going to have to dodge and weave. But the more circuitous a path you take in pursuing the truth, the less it feels like it's even right to call what you are doing "pursuit". Converging to the truth while minimizing retractions is like pursuing a target with minimal waste.

A retraction sequence is a sequence of info states $E_1 \supseteq E_2 \supseteq \dots \supseteq E_n$ such that each consecutive pair is a retraction. We'd call this a retraction sequence of length $n$.
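In the same toy encoding as the earlier sketches (answers as sets of worlds), counting retractions along a single answer sequence is just counting the steps where the new answer fails to refine the old one:

```python
# A retraction: the next answer is not a refinement (subset) of the previous one.
def count_retractions(answers):
    return sum(1 for prev, new in zip(answers, answers[1:]) if not new <= prev)

# The flat method from the Gettier example retracts twice (hypothetical world
# names): it asserts the hypothesis, drops it, then reasserts it.
print(count_retractions([{"w1"}, {"w2", "w3"}, {"w1"}]))  # 2
```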

N-verify

Now to the most important definition.

$M$ $n$-verifies $H$ in $w$ $\iff$ $M$ verifies $H$ in the limit in $w$ and the longest possible retraction chain for $M$ in $w$ is of length $n$.
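Gluing the earlier sketches together, with one big simplification flagged loudly: in this finite toy model I read "longest possible retraction chain" as the most retractions the method actually makes along any single answer stream, and the bound as "at most $n$".

```python
# Reuses verifies_in_the_limit and count_retractions from the sketches above.
def n_verifies(answer_streams, H, n):
    worst_case_retractions = max(
        count_retractions(answers) for answers in answer_streams.values()
    )
    return verifies_in_the_limit(answer_streams, H) and worst_case_retractions <= n
```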

This concept is about to become very important. A sneak peek at the rest: we have some notion of different types of success you could achieve on a problem. You can verify, refute, or decide a question with some number $n$ of retractions. Next we're going to hop back to topology and construct a topological notion of complexity, one that allows us to make claims like

$Q$ is $n$-topologically complex $\iff$ there exists a method $M$ such that $M$ $n$-verifies $Q$.

If we could do that, then we'd have a way to talk about scientific problems in terms of their complexity, and have a strong way that it cashes out. For a given problem, you might be able to prove upper or lower bounds on the topological complexity, and thus be able to re-calibrate expectations about what sort of success you can expect from your methods. You might be able to show that a given method achieves the best possible success, given the topological complexity of the problem. That would be pretty dope. Let's get to it.

(note: So far, for every definition of verification we have given, you can create an analogous definition for refutability and decidability)

Comments

Upvote for attempting foundational work on reference class forecasting which seems underexplored in terms of implementable by humans heuristics.

Meta: I think it would have been better to post these 1 per day?

1) noting that all the research is Kevin Kelly's, I'm just taking his class 2) I agree that it seems underexplored and interesting.

meta: agreed. I'm putting all the posts up now for logistical reasons related to the class.

Errata:

If we could do that, then we'd have a way to talk about scientific problems in terms of [their] complexity,
