Naturalized Induction

mikes

In the spirit of: https://www.lesswrong.com/posts/Zp6wG5eQFLGWwcG6j/focus-on-the-places-where-you-feel-shocked-everyone-s

Why do we need naturalized induction? Sorry that I’m showing up late and probably asking a dumb question here.

We seem to be doing it for the purpose of, like, constructing idealized infinitely-powerful models which are capable of self-modeling…

…are we making the problem harder than it needs to be?

Since we eventually want to apply this in "reality", can we just use the time dimension to make the history of the world partially ordered?

Reality is indistinguishable from a version of itself that is constantly blinking in/out of existence, with every object immediately destroyed and then instantiated as a new copy from that point forward.

So, if we instantiate our agent within the set at time t, and its models concern only the new copies of itself which “aren’t really itself” at times t-, t+… are we now technically all good? Badabing, badaboom? Or is this cheating?

Since we’re assuming we have infinite computing power, this seems fine to me?
And it also seems totally amenable to establishing finite approximations of the process later. 
And I thought that was kind of the whole point of doing set-theory shenanigans...

Are we unhappy that we’re destroying the agent by instantiating a copy? But the possibility of that destruction/modification seemed to also be the point of embedding it in a universe with physics.
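
In case it helps to be concrete, here is a minimal sketch of the construction I have in mind, assuming discrete time; every name in it (Snapshot, step, model_of_future_copies) is made up for illustration, and the dynamics are a trivial placeholder.

```python
# A minimal sketch of the proposal above, assuming discrete time.
# Nothing here is from the MIRI literature; all names are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    """The whole world at one tick. Immutable, so each tick's world is
    destroyed and a new copy instantiated, agent included."""
    t: int
    environment: tuple   # everything that isn't the agent
    agent_state: tuple   # the agent's bits, on the same footing

def step(s: Snapshot) -> Snapshot:
    """Physics: build the world at t+1 as a brand-new object. The
    'agent' inside it is a fresh copy, not the agent at time t."""
    return Snapshot(s.t + 1, s.environment, s.agent_state)  # placeholder dynamics

def model_of_future_copies(s: Snapshot, horizon: int) -> list[Snapshot]:
    """What the agent instantiated at time s.t reasons about: the
    copies of 'itself' at t+1, t+2, ..., none of which is literally
    the reasoner doing the modeling."""
    future, current = [], s
    for _ in range(horizon):
        current = step(current)
        future.append(current)
    return future
```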

To what specific problem is this not an adequate solution?

Naturalized induction is an open problem in Friendly AI: Build an algorithm for producing accurate generalizations and predictions from data sets, that treats itself, its data inputs, and its hypothesis outputs as reducible to its physical posits. More broadly, design a workable reasoning method that allows the reasoner to treat itself as fully embedded in the world it's reasoning about.

Related: Embedded agency

Created by Rob Bensinger

Naturalized inductors are associated with naturalism in contrast to 'Cartesian' inductors, reasoners that assume a strict boundary between themselves and their environments. The standard example of an idealization of Cartesian induction is Solomonoff induction, an uncomputable but theoretically fruitful specification of a hypothesis space, prior probability distribution, and consistent reassignment of probabilities given data inputs. As Solomonoff induction is currently the leading contender for a formalization of universally correct — albeit physically unrealizable — inductive reasoning, an essential step in formally defining the problem of naturalized induction will be evaluating the limitations of Solomonoff inductors such as AIXI.
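
As an illustration of the three ingredients just named (a hypothesis space, a prior probability distribution, and consistent reassignment of probabilities given data), here is a drastically simplified, finite stand-in for Solomonoff induction. The hypothesis list and description lengths below are invented for the example; the real construction sums over all programs for a universal Turing machine and is uncomputable.

```python
# A toy, finite stand-in for Solomonoff induction, purely illustrative.

from fractions import Fraction

# Hypothesis space: (description length in bits, deterministic bit generator).
HYPOTHESES = [
    (3, lambda t: 0),                       # "all zeros"
    (3, lambda t: 1),                       # "all ones"
    (5, lambda t: t % 2),                   # "alternate 0101..."
    (8, lambda t: t % 2 if t < 4 else 1),   # "alternate, then all ones"
]

def posterior(observed):
    """Prior 2^-length, then Bayes: a deterministic hypothesis has
    likelihood 1 if it reproduces the observed prefix, else 0."""
    weights = []
    for length, gen in HYPOTHESES:
        consistent = all(gen(t) == bit for t, bit in enumerate(observed))
        weights.append(Fraction(1, 2 ** length) if consistent else Fraction(0))
    total = sum(weights)
    return [w / total for w in weights]

def predict_next(observed):
    """Posterior-weighted probability that the next bit is 1."""
    t = len(observed)
    return sum(p for p, (_, gen) in zip(posterior(observed), HYPOTHESES)
               if gen(t) == 1)

print(predict_next([0, 1, 0, 1]))   # 1/9: Occam weighting favors "alternate"
```

The posterior concentrates on the shortest hypotheses consistent with the data, the Occam-like behavior that makes the idealization attractive. What the sketch shares with AIXI is the Cartesian framing: the data stream arrives through a privileged channel, and no hypothesis describes the predictor itself.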

Naturalized induction is a particular angle of approach on larger Friendly AI superproblems such as the problem of hypotheses ('what formalism should a Friendly AI's hypotheses look like? how wide a range of possibilities should a Friendly AI be able to consider?') and the problem of priors ('before receiving any data, what prior probabilities should a Friendly AI assign to its hypotheses?'). Here the emphasis is on making sure the AI has a realistic conception of nature and of its own place in nature, whereas other angles of approach to the problem of hypotheses and the problem of priors will put the emphasis on issues like computational tractability, leverage penalties, logical uncertainty, or epistemic stability under self-modification. Subproblems specific to naturalized induction include:

  1. Solomonoff bug-spotting: finding limits on the robustness of AIXI approximations, e.g., formalizing or generalizing the anvil problem
  2. hypothesis idiom selection: selecting the right formalism for representing hypotheses, e.g., algorithmic, automata-theoretic, or model-theoretic
  3. expressivity: setting upper and lower bounds on the diversity of hypotheses given human uncertainty about exotic physics scenarios (e.g., time-travel, hypercomputation, or unusual mathematical structures)
  4. first-person reductionism: formalizing and defining reasonable priors for bridge hypotheses linking agent-internal representations to physical posits (see the toy sketch after this list)
  5. anthropics: conditioning on the reasoner's existence, e.g., in scenarios of indexical uncertainty or self-replication
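
For concreteness, here is a toy sketch of subproblem 4 (which immediately runs into subproblem 5); every detail in it (the candidate worlds, the cell-indexed sensors, the uniform joint prior) is invented for illustration. The point is that a naturalized reasoner does not receive percepts through a privileged channel: it must jointly infer what the world is and which piece of that world it is.

```python
# A toy rendering of bridge hypotheses: the reasoner scores *pairs* of
# a physical hypothesis (what the world is) and a bridge hypothesis
# (which part of that world constitutes its own sense data).

from fractions import Fraction
from itertools import product

# Physical hypotheses: the world is four cells evolving over time.
WORLDS = {
    "all-zeros": lambda t: [0, 0, 0, 0],
    "blinker":   lambda t: [t % 2, 1, t % 2, 1],
}

# Bridge hypotheses: "my sensor reads cell i of the world".
BRIDGES = {f"sensor=cell{i}": (lambda state, i=i: state[i])
           for i in range(4)}

def joint_posterior(observations):
    """Uniform joint prior over (world, bridge) pairs; keep the pairs
    whose implied sense stream matches what was actually observed."""
    weights = {}
    for (wname, world), (bname, bridge) in product(WORLDS.items(),
                                                   BRIDGES.items()):
        consistent = all(bridge(world(t)) == obs
                         for t, obs in enumerate(observations))
        weights[(wname, bname)] = Fraction(1 if consistent else 0)
    total = sum(weights.values())
    return {pair: w / total for pair, w in weights.items() if w}

# After observing 0,1,0 the reasoner knows it lives in the blinker
# world but not whether it is cell 0 or cell 2: the indexical
# uncertainty of subproblem 5.
print(joint_posterior([0, 1, 0]))
```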
