Naturalized induction is an open problem in Friendly AI: build an algorithm for producing accurate generalizations and predictions from data sets that treats itself, its data inputs, and its hypothesis outputs as reducible to its physical posits. More broadly, design a workable reasoning method that allows the reasoner to treat itself as fully embedded in the world it's reasoning about.
Related: Embedded agency
Naturalized inductors are associated with naturalism, in contrast to 'Cartesian' inductors: reasoners that assume a strict boundary between themselves and their environments. The standard idealized example of Cartesian induction is Solomonoff induction, an uncomputable but theoretically fruitful specification of a hypothesis space, a prior probability distribution over that space, and a rule for consistently updating probabilities on data inputs. As Solomonoff induction is currently the leading contender for a formalization of universally correct (albeit physically unrealizable) inductive reasoning, an essential step in formally defining the problem of naturalized induction will be evaluating the limitations of Solomonoff inductors such as AIXI.
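The Bayesian machinery that Solomonoff induction idealizes can be illustrated with a toy Cartesian inductor. The sketch below is not Solomonoff induction itself (which sums over all programs and is uncomputable); the hypothesis class here, repeating binary patterns standing in for programs, and the crude length-penalizing prior are simplifications chosen for this illustration only:

```python
# Toy Cartesian inductor in the spirit of Solomonoff induction.
# Assumptions (not part of the formalism): hypotheses are binary
# sequences generated by repeating a pattern of length p, and the
# prior on each hypothesis is 4^-p, a stand-in for the 2^-length
# weighting Solomonoff induction assigns to programs.
from fractions import Fraction

def toy_solomonoff_predict(observed, max_period=8):
    """Return the posterior probability that the next bit is 1."""
    mass = {0: Fraction(0), 1: Fraction(0)}  # posterior mass on next bit
    for p in range(1, max_period + 1):
        for code in range(2 ** p):
            pattern = [(code >> i) & 1 for i in range(p)]
            # Likelihood is 1 if the hypothesis reproduces the data, else 0,
            # so updating just discards inconsistent hypotheses.
            if all(pattern[i % p] == bit for i, bit in enumerate(observed)):
                prior = Fraction(1, 4 ** p)  # shorter 'programs' weigh more
                mass[pattern[len(observed) % p]] += prior
    total = mass[0] + mass[1]
    return mass[1] / total

# After seeing 0,1,0,1,0,1 the simple period-2 hypothesis dominates,
# so the inductor assigns most probability to the next bit being 0.
print(float(toy_solomonoff_predict([0, 1, 0, 1, 0, 1])))
```

Note that the inductor's self-model is nowhere in this picture: the hypotheses describe the data stream, not the physical machine computing the posterior, which is exactly the Cartesian assumption that naturalized induction asks us to drop.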
Naturalized induction is a particular angle of approach on larger Friendly AI superproblems such as the problem of hypotheses ('what formalism should a Friendly AI's hypotheses be expressed in? how wide a range of possibilities should a Friendly AI be able to consider?') and the problem of priors ('before receiving any data, what prior probabilities should a Friendly AI assign to its hypotheses?'). Here the emphasis is on making sure the AI has a realistic conception of nature and of its own place in nature, whereas other angles of approach to the problem of hypotheses and the problem of priors put the emphasis on issues like computational tractability, leverage penalties, logical uncertainty, or epistemic stability under self-modification. Subproblems specific to naturalized induction include: