**Naturalized induction** is an open problem in Friendly AI: Build an algorithm for producing accurate generalizations and predictions from data sets, that treats itself, its data inputs, and its hypothesis outputs as reducible to its physical posits. More broadly, design a workable reasoning method that allows the reasoner to treat itself as fully embedded in the world it's reasoning about.

*Related:* Embedded agency


Created by Rob Bensinger

- Building Phenomenological Bridges by Rob Bensinger
- Bridge Collapse: Reductionism as Engineering Problem by Rob Bensinger
- Can We Do Without Bridge Hypotheses? by Rob Bensinger
- Solomonoff Cartesianism by Rob Bensinger
- The Problem with AIXI by Rob Bensinger
- Formalizing Two Problems of Realistic World-Models by Nate Soares



Naturalized inductors are associated with naturalism in contrast to 'Cartesian' inductors, reasoners that assume a strict boundary between themselves and their environments. The standard example of an idealization of Cartesian induction is Solomonoff induction, an uncomputable but theoretically fruitful specification of a hypothesis space, prior probability distribution, and consistent reassignment of probabilities given data inputs. As Solomonoff induction is currently the leading contender for a formalization of universally correct — albeit physically unrealizable — inductive reasoning, an essential step in formally defining the problem of naturalized induction will be evaluating the limitations of Solomonoff inductors such as AIXI.
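The flavor of Solomonoff-style induction can be conveyed with a drastically simplified sketch. Real Solomonoff induction ranges over all computable programs and is uncomputable; here, purely for illustration, the hypothesis class is shrunk to periodic bit patterns, and a 2^(-description length) weight stands in for the universal prior. All names and parameters below are invented for this toy:

```python
# Toy, hypothetical sketch of Solomonoff-style induction: hypotheses are
# repeating bit patterns (NOT arbitrary programs), weighted by a
# 2^-length description-length prior. The prior is unnormalized; the
# normalization constant cancels in the posterior prediction.
from fractions import Fraction
from itertools import product

def hypotheses(max_len):
    """Yield (pattern, prior) pairs; shorter patterns get higher prior."""
    for n in range(1, max_len + 1):
        for pat in product("01", repeat=n):
            yield "".join(pat), Fraction(1, 2 ** n)

def predict_next(observed, max_len=6):
    """Posterior-weighted probability that the next bit is '1'."""
    num = den = Fraction(0)
    for pat, prior in hypotheses(max_len):
        # Unroll the pattern far enough to cover the data plus one more bit.
        stream = pat * (len(observed) // len(pat) + 2)
        if stream.startswith(observed):  # hypothesis consistent with the data
            den += prior
            if stream[len(observed)] == "1":
                num += prior
    return num / den

print(float(predict_next("0101")))  # 1/7: most posterior mass predicts '0'
```

Note how the short pattern "01" dominates the posterior after seeing "0101": simpler consistent hypotheses carry exponentially more weight, which is the core of the Solomonoff prior's bias toward simplicity.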

Naturalized induction is a particular angle of approach on larger Friendly AI superproblems such as the problem of hypotheses ('what formalism should a Friendly AI's hypotheses look like? how wide a range of possibilities should a Friendly AI be able to consider?') and the problem of priors ('before receiving any data, what prior probabilities should a Friendly AI assign to its hypotheses?'). Here the emphasis is on making sure the AI has a realistic conception of nature and of its own place in nature, whereas other angles of approach to the problem of hypotheses and the problem of priors will put the emphasis on issues like computational tractability, leverage penalties, logical uncertainty, or epistemic stability under self-modification. Subproblems specific to naturalized induction include:

- **Solomonoff bug-spotting**: finding limits on the robustness of AIXI approximations, e.g., formalizing or generalizing the anvil problem
- **hypothesis idiom selection**: selecting the right formalism for representing hypotheses, e.g., algorithmic, automata-theoretic, or model-theoretic
- **expressivity**: setting upper and lower bounds on the diversity of hypotheses given human uncertainty about exotic physics scenarios (e.g., time-travel, hypercomputation, or unusual mathematical structures)
- **first-person reductionism**: formalizing and defining reasonable priors for bridge hypotheses linking agent-internal representations to physical posits
- **anthropics**: conditioning on the reasoner's existence, e.g., in scenarios of indexical uncertainty or self-replication
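To make the bridge-hypothesis subproblem concrete, here is a hypothetical toy in which the agent jointly infers a world model and a bridge map saying which physical variable its sensor register corresponds to. The variable names (photodiode, thermistor) and the uniform prior are invented for illustration, not drawn from any published formalism:

```python
# Hypothetical toy of "first-person reductionism": each full hypothesis is a
# pair (world model, bridge map), where the bridge map says which physical
# variable the agent's single sensor bit is reading.
from itertools import product

worlds = {                # physical states: values of two physical variables
    "w1": {"photodiode": 1, "thermistor": 0},
    "w2": {"photodiode": 0, "thermistor": 1},
}
bridges = ["photodiode", "thermistor"]   # candidate sensor-to-world mappings

# Uniform joint prior over (world, bridge) pairs.
prior = {(w, b): 0.25 for w, b in product(worlds, bridges)}

def update(prior, sensor_reading):
    """Bayes-update the joint (world, bridge) posterior on one sensor bit.

    The likelihood is 0/1: a pair survives iff the bridge-mapped variable
    in that world matches the sensor reading.
    """
    post = {wb: p for wb, p in prior.items()
            if worlds[wb[0]][wb[1]] == sensor_reading}
    z = sum(post.values())
    return {wb: p / z for wb, p in post.items()}

post = update(prior, 1)
# Sensor reads 1: the surviving pairs are (w1, photodiode) and
# (w2, thermistor) at 0.5 each -- the data alone cannot separate the
# world hypothesis from the bridge hypothesis.
print(post)
```

The residual ambiguity in the posterior is the point: without a prior over bridge hypotheses, no amount of sense data settles which physical system the agent's representations refer to.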


In the spirit of: https://www.lesswrong.com/posts/Zp6wG5eQFLGWwcG6j/focus-on-the-places-where-you-feel-shocked-everyone-s

Why do we need naturalized induction? Sorry that I’m showing up late and probably asking a dumb question here.

We seem to be doing it for the purpose of, like, constructing idealized, infinitely powerful models which are capable of self-modeling… are we making the problem harder than it needs to be?

Since we eventually want to apply this in "reality", can we just use the time dimension to make the history of the world partially ordered?

Reality is indistinguishable from a version of itself that is constantly blinking in/out of existence, with every object immediately destroyed and then instantiated as a new copy from that point forward.

So, if we instantiate our agent within the set at time t, and its models concern only the new copies of itself which “aren’t really itself” in times t-, t+… are we now technically all good? Badabing, badaboom? Or is this cheating?

Since we’re assuming we have infinite computing power, this seems fine to me?

And it also seems totally amenable to establishing finite approximations of the process later.

And I thought that was kind of the whole point of doing set-theory shenanigans...

Are we unhappy that we’re destroying the agent by instantiating a copy? But the possibility of that destruction/modification seemed to also be the point of embedding it in a universe with physics.

To what specific problem is this not an adequate solution?