Ah, I understand what you're getting at now, dxu; thanks for taking the time to clarify. Yes, there likely are no extra bits of information hiding away somewhere, unless there really are hidden variables in space-time (one of the possible resolutions to Bell's theorem).
When I said ‘there will always be’ I meant it as ‘any conceivable observer will always encounter an environment with extra bits of information outside of their observational capacity’, and thus beyond any model or mapping. I can see how it could have been misinterpreted.
In r... (read more)
Well, if we knew that the assertion is unprovable, or undecidable, then we could treat it like any other unprovable assertion.
Interesting idea, though the first round seems unverifiable. “How many nuclear weapons will states possess on December 31, 2022?”
Referencing Leo Szilard is amusing for this topic, as his moment of genius insight, that a nuclear chain reaction could be used to generate enormous explosions, is one of those few ideas so genuinely beyond the then-current paradigm (the 1930s) that it seems like real precognition.
Allegedly he spent a significant fraction of every day ruminating in a hotel bathtub, and he lived permanently in hotels for decades. I assume that is how he developed that depth of thinking.
It seems that your comment got cut off at the end there.
"All Nature is but art, unknown to thee;
All chance, direction, which thou canst not see;
All discord, harmony not understood;
All partial evil, universal good."
When you said 'not directly extensible' I understood that as meaning 'logistically impossible to perfectly map onto a model communicable to humans', with the fish fluctuating in weight, in reality, between and during every observation, and between every batch. So even if perfect weight information were somehow obtained, it would only hold for that specific Planck second. And then averaging, etc., will always carry some inherent error. So every step along the way is a 'loose coupling', so that the final product, a mental model of what we just read, i... (read more)
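A small numeric sketch of that averaging point, using made-up toy numbers: repeating noisy weighings narrows the error of the summary, but never eliminates it for any finite number of observations.

```python
import statistics

# Hypothetical repeated weighings of one fish batch (kg); the "true"
# weight is fluctuating between and during every observation.
weighings = [10.2, 9.8, 10.1, 9.9, 10.0]

mean = statistics.mean(weighings)
# Standard error of the mean: shrinks like 1/sqrt(n) as samples
# accumulate, but is never exactly zero for finite n, so the summary
# always carries residual error.
sem = statistics.stdev(weighings) / len(weighings) ** 0.5
print(f"mean = {mean:.2f} kg, standard error = {sem:.3f} kg")
```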
You are misunderstanding the post. There are no "extra bits of information" hiding anywhere in reality; where the "extra bits of information" are lurking is within the implicit assumptions you made when you constructed your model the way you did.
As long as your model is making use of abstractions--that is, using "summary data" to create and work with a lower-dimensional representation of reality than would be obtained by meticulously tracking every variable of relevance--you are implicitly making a choice about what information you are summarizing and how ... (read more)
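To make that concrete, here is a minimal Python sketch (my own illustration, not from the post) of how a summary statistic collapses distinct underlying states, so the distinguishing bits live entirely in the implicit choice of what to discard:

```python
import statistics

# Two distinct "realities": very different fish-weight batches (kg)
batch_a = [1.0, 1.0, 1.0, 1.0]
batch_b = [0.1, 0.1, 1.9, 1.9]

# The abstraction: summarize each batch by its mean alone
mean_a = statistics.mean(batch_a)  # 1.0
mean_b = statistics.mean(batch_b)  # 1.0

# The summaries are identical, but the batches are not. The bits
# that distinguished them were discarded by the implicit choice
# to track only the mean.
print(mean_a == mean_b)   # True
print(batch_a == batch_b) # False
```

Any agent working only with the means has no way to recover which batch it is looking at; that information was spent the moment the abstraction was chosen.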
Right, for determinism to work in practice, some method of determining that 'previous world state' must be viable. But if there are no viable methods, and if that could somehow be proven, then we could be confident that determinism is impossible, or at the very least that it is a faulty idea.
You've really put some thought into this; thanks for sharing.
Though I don't want to make a critique, I would like to save you a bit of future trouble, as a courtesy from someone who has trodden the same path.
The issue with basing a philosophy on Mozi is that there are no 'fixed standards'. All standards, like the rest of the universe, are forever in flux; universal frameworks cannot exist.
For the next stage, I found reading the Liezi helpful.
Therefore, determinism is impossible? You've demonstrated quite a neat way of showing that reality is of unbounded complexity, whereas the human nervous system is of course finite; as such, everything we 'know', and everything that can be 'known', is necessarily, in some portion, illusory.
That is true: the desired characteristics may not develop as one would hope in the real world. But that is the case for all training, not just AGI training. Humans, animals, even plants do not always develop along optimal lines, even with the best 'training', when exposed to the real environment. Perhaps the solution you are seeking, one without the risk of error, does not exist.
Could the hypothetical AGI be developed in a simulated environment and trained with proportionally lower consequences?
“We can group projects into subprojects without changing the overall return”
What if this were not true? Would that make the problem intractable?
Query: how do you define 'feasibly'? As in 'Incentive landscapes that can't feasibly be induced by a reward function'.
From my perspective, all possible incentive landscapes can be induced by reward, given sufficient time and energy. Of course, a large set of these is beyond the capacity of present human civilization.
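A toy sketch of what I mean by 'inducing an incentive landscape by reward' in the finite case (the state names and values here are hypothetical, purely for illustration):

```python
# Toy sketch: over a finite state space, any target "incentive
# landscape" can be induced trivially by setting the reward at each
# state equal to the desired incentive value.
desired_landscape = {"s0": 0.0, "s1": 2.5, "s2": -1.0}

def reward(state: str) -> float:
    # The reward function simply reads off the target landscape,
    # so the induced incentives match it by construction.
    return desired_landscape[state]

# An agent greedily following reward now prefers s1 > s0 > s2,
# exactly the ordering the landscape specified.
ranking = sorted(desired_landscape, key=reward, reverse=True)
print(ranking)  # ['s1', 's0', 's2']
```

Once the state space becomes reality-sized, writing that table down is exactly the part that exceeds present capacity, which is why I ask how 'feasibly' is being defined.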