'Finishable philosophy' is Eliezer Yudkowsky's term for philosophical reasoning aimed at arriving at a correct answer to some question in time to build an AI, which avoids what he believes to be failure modes of standard academic philosophy.
Many problems of philosophy have seemingly remained open for at least several centuries. If one of those problems turns out to be important to AI, we cannot go on attacking it with the same methodology that has failed to resolve it over those centuries. The notion that we can nonetheless resolve these problems, correctly, in time to settle the questions of AI design that seem to depend on them, is thus sometimes a source of skepticism by others about Yudkowsky's approach.
Finishable philosophy has several core concepts that render it distinct from philosophy as usually practiced in academia:
A final trope of finishable philosophy is not to be intimidated by how long a problem has been left open (since this sense of intimidation would itself be a motion). "Ignorance exists in the mind, not in reality; uncertainty is in the map, not in the territory; if I don't know whether a coin landed heads or tails, that's a fact about me, not a fact about the coin." There can't be any unresolvable confusions out there in reality, or any inherently confusing substances in the mathematically lawful unified low-level physical process we call the universe. Any seemingly unresolvable or impossible question must represent a place where we are confused, not an actually impossible question out there in reality. This doesn't mean we can quickly or immediately solve the problem, but it does mean that there's some way to wake up out of the confusing dream.
Although all confusing questions must be places where our own cognitive algorithms are running skew to reality, this, again, doesn't mean that we can immediately see and right the skew. Nor is it "finishable philosophy" to insist in a very loud voice that a problem is solvable, nor to immediately seize on a proffered solution because the problem must be solvable and behold, here is a solution. An important step in the method is to check whether there is any lingering sense of something that didn't get resolved; whether we really feel less confused; whether it seems like we could write out the code for an AI that would be confused in the same way we were; whether there is any sense of dissatisfaction or feeling that we have taken a photograph of the problem and chopped off all the parts that didn't fit in the photograph.
Finishable philosophy relies heavily on the skills of noticing confusion and noticing dissonance.
Tutorial: finishable philosophy applied to 'free will'. (Don't forget to distinguish plausible wrong ways to do it at each step. Is there a good example besides free will that could serve as a homework problem?)