In the spirit of: https://www.lesswrong.com/posts/Zp6wG5eQFLGWwcG6j/focus-on-the-places-where-you-feel-shocked-everyone-s
Why do we need naturalized induction? Sorry that I’m showing up late and probably asking a dumb question here-
We seem to be doing it for the purpose of, like, constructing idealized, infinitely powerful models which are capable of self-modeling…
…are we making the problem harder than it needs to be?
Since we eventually want to apply this in "reality", can we just use the time dimension to make the history of the world partially ordered?
Reality is indistinguishable from a version of itself that is constantly blinking in/out of existence, with every object immediately destroyed and then instantiated as a new copy from that point forward.
So, if we instantiate our agent within the set at time t, and its models concern only the new copies of itself which “aren’t really itself” at times t-, t+… are we now technically all good? Badabing, badaboom? Or is this cheating?
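To make what I mean a bit more concrete, here’s a rough sketch of the setup I have in mind (the notation is mine, purely for illustration):

$$W = \{\, w_t : t \in T \,\}, \qquad w_t \preceq w_{t'} \iff t \le t'$$

i.e. the world-history is just a family of time-indexed states, ordered by the time dimension, and the agent $A_{t_0}$ instantiated as part of $w_{t_0}$ builds models that quantify only over the copies $A_t$ with $t \ne t_0$, treating them as ordinary objects in the environment rather than as “itself.”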
Since we’re assuming we have infinite computing power, this seems fine to me? And it also seems totally amenable to establishing finite approximations of the process later. And I thought that was kind of the whole point of doing set-theory shenanigans...
Are we unhappy that we’re destroying the agent by instantiating a copy? But the possibility of that destruction/modification seemed to also be the point of embedding it in a universe with physics.
To what specific problem is this not an adequate solution?