'AI-grade philosophy' is Eliezer Yudkowsky's term for philosophical discourse meant for use in actually building an AGI. This has two related components.
To put it another way, the reason we can't plug the products of standard analytic philosophy into AI problems is that:
One. The academic incentives favor continuing to dispute small possibilities because everyone gets a paper out of them. As a sheerly cultural matter, this means that academic philosophy hasn't accepted that everything is made out of quarks without any non-natural or irreducible properties attached. In turn, this means that when academic philosophers have tried to do metaethics, the result has been a proliferation of different theories that are mostly about non-natural or irreducible properties, with a few philosophers taking a stand on trying to do metaethics for a strictly natural and reducible universe. Even these philosophers are still having to argue for a natural universe, rather than being able to accept it as settled and move on to further analysis within that framework.
Two. Many academic philosophers haven't learned the programmers' discipline of distinguishing concepts that might compile, or what constitutes progress toward a concept that might compile. If we imagine rewinding the state of understanding of computer chess to what obtained in the days when Edgar Allan Poe argued that no mere automaton could play chess, then the modern style of philosophy would produce, among other papers, a lot of papers considering the 'goodness' of a chess move as a non-reduced property and arguing about the relation of goodness to reducible properties like controlling the center of a chessboard. There's a particular mindset that programmers have for realizing which of their own thoughts are going to compile and run, and which of their thoughts are not getting any closer to compiling.
Talking about the non-reduced 'goodness' of a chess move, and properties of this mysterious goodness predicate, may not be getting you any closer to compiling a chess program. This similarly holds for a number of other philosophical analyses in terms of non-reduced predicates. "AI-grade philosophy", according to Yudkowsky, needs to import this mindset - without falling prey to greedy reductionism, where you simply redefine the "goodness" of a chess move as "center control" even though these two concepts don't perfectly overlap. There is a reductionist concept that perfectly captures the notion of a good chess move, but "controlling the center of the board" is not that concept.
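The contrast above can be made concrete with a toy sketch. The function below is purely illustrative (the names and the heuristic are assumptions, not anything from the source): it shows what a reducible, computable proxy like "center control" looks like, and why equating that proxy with the "goodness" of a move would be the greedy-reductionist mistake the text warns against.

```python
# Illustrative sketch only: "center control" as a concept that compiles.
# It is computable, but it is a proxy -- not a definition -- of a good move.

CENTER_SQUARES = {"d4", "d5", "e4", "e5"}

def center_control(move_target: str) -> int:
    """Score 1 if the move lands on a central square, else 0.

    This is a crude reducible predicate. Redefining 'good move' as
    center_control == 1 would be greedy reductionism: the two concepts
    come apart (e.g. a central move that hangs the queen, or a quiet
    edge move that wins the game).
    """
    return 1 if move_target in CENTER_SQUARES else 0

# The proxy compiles and runs, unlike a non-reduced 'goodness' predicate:
print(center_control("e4"))  # 1: occupies the center
print(center_control("h4"))  # 0: no center control, yet h4 can still be best
```

A real chess engine replaces this one-term proxy with a much richer reducible evaluation, but the lesson is the same: each added term is still a proxy, and the reductionist concept that perfectly captures "good move" is not any short list of board features.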
In more detail, Yudkowsky lists these as some tenets or practices of what he sees as AI-grade philosophy:
A final trope of AI-grade philosophy is to not be intimidated by how long a problem has been left open. "Ignorance exists in the mind, not in reality; uncertainty is in the map, not in the territory; if I don't know whether a coin landed heads or tails, that's a fact about me, not a fact about the coin." There can't be any unresolvable confusions out there in reality. There can't be any inherently confusing substances in the mathematically lawful, unified, low-level physical process we call the universe. Any seemingly unresolvable or impossible question must represent a place where we are confused, not an actually impossible question out there in reality. This doesn't mean we can quickly or immediately solve the problem, but it does mean that there's some way to wake up from the confusing dream.
Although all confusing questions must be places where our own cognitive algorithms are running skew to reality, this, again, doesn't mean that we can immediately see and correct the skew; nor that it is "AI-grade philosophy" to insist in a very loud voice that a problem is solvable; nor that when a solution is presented we should immediately seize on it because the problem must be solvable and behold here is a solution. An important step in the method is to check whether there is any lingering sense of something that didn't get resolved; whether we really feel less confused; whether it seems like we could write out the code for an AI that would be confused in the same way we were; whether there is any sense of dissatisfaction; whether we have merely taken a photograph of the problem and chopped off all the parts that didn't fit in the photograph.
An earlier guide to some of the same ideas was the Reductionism Sequence.