
Comments

Another constraint comes from computational complexity: should we treat things that are not polynomial-time computable as basically unknowable? Humans certainly can't solve NP-complete problems efficiently.

Generalized chess is EXPTIME-complete, and while an exact solution to chess may be unavailable, we are pretty good at building chess engines.

When I read the word "bargaining" I assume we are talking about entities that have preferences, an action set, and beliefs about the relation between actions and preferences, and that exchange information (modulo acausal interaction) with other entities of the same kind. For example, Kelly betting is good because it is equivalent to Nash bargaining between versions of yourself inside different outcomes, and that is good because we assume that you-in-different-outcomes are actually agents with all the attributes of an agentic system. Saying "systems consist of parts, these parts interact, and sometimes the result is a horrific incoherent mess" is true, but doesn't convey much useful information.
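
To spell out the correspondence for a single binary bet (notation mine, a sketch of the standard identity rather than anything from the thread): betting a fraction $f$ of wealth at odds $b$ with win probability $p$, maximizing expected log wealth is the same as maximizing the probability-weighted product of your wealth across the two outcomes:

\[
\arg\max_f \;\bigl[\, p \log(1 + b f) + (1 - p)\log(1 - f) \,\bigr]
\;=\;
\arg\max_f \;(1 + b f)^{p}\,(1 - f)^{1 - p},
\]

which is the Nash product between the you who wins and the you who loses (disagreement point at zero wealth); either form gives the usual Kelly fraction $f^{*} = p - (1 - p)/b$.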

I feel like the whole "subagent" framework suffers from the homunculus problem: we fail to explain behavior using the abstraction of a coherent agent, so we move to the abstraction of multiple coherent agents, and while that can be useful, I don't think it reflects the actual mechanistic truth about minds.

When I plan something and then fail to execute the plan, it's mostly not a "failure to bargain". It's that when I plan, I usually imagine the good consequences of the plan, those consequences get me excited, and then I start executing and get hit by the many unpleasant details of reality. Coherent structure emerges from multiple not-really-agentic pieces.

It doesn't matter? Like, if your locations are identical (say, simulations of the entire observable universe, and you never find any difference no matter "where" you are), your weight is exactly the weight of the program. If you expect differences, you can pick some kind of simplicity prior to weight those differences, because that is basically the same thing as "list all programs for this UTM and run them in parallel".
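
For concreteness, the weighting I mean is the standard Solomonoff one (standard notation, nothing specific to this exchange): each program $p$ for the UTM $U$ gets weight $2^{-|p|}$, and an observation string $x$ gets the combined weight of the programs that produce it,

\[
M(x) \;=\; \sum_{p \,:\, U(p)\text{ starts with }x} 2^{-|p|},
\]

so identical copies inside one program share that program's weight, and any differences you do expect are weighted by how cheaply they can be specified.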

Okay, we have wildly different models of the tech tree. In my understanding, to make mind uploads you need Awesome Nanotech, and if you have misaligned AIs and even not-so-awesome nanotech, that is sufficient to kill all humans and start disassembling Earth. The only coherent scenario I can imagine in which misaligned AIs meaningfully participate in the human economy is one where you can't design nanotech without continent-sized supercomputers.

But it still feels like the lesson could be summarized as: "talk like everyone outside the rationalist community does all the time".

If non-rationalist people knew it all along, there would be no need to write such books.

On the other hand, I think if the average rationalist tries to give a speech from pure inspiration, the result is going to be weird. Take, for example, HJPEV's speech before the first battle. HJPEV got away with it because he had the reputation of the Boy Who Lived and had already pulled off some awesome shenanigans, so his weird speech earned him weirdness points instead of costing them, but it's not a trick the average rationalist should try on their first attempt at an inspiring speech.

It's kind of an ill-formed question, because you can get the same performance if you compute moves for longer at lower power. I guess you are looking for something like "energy per move".
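
To make the power/time trade-off explicit (a back-of-the-envelope identity, my notation): energy per move is just power times thinking time,

\[
E_{\text{move}} \;=\; P \cdot t_{\text{move}},
\]

so at a fixed nodes-per-joule efficiency, the strength of a move is set by the energy it gets, not by the power draw alone: halve the power, think twice as long, and you end up in roughly the same place.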

You somehow managed to misunderstand me in the completely opposite direction. I'm not talking about the size of the universe, I'm talking about the complexity of the description of the universe. A description of the universe consists of initial conditions plus laws of evolution. The problem with hidden-variable hypotheses is that they postulate initial conditions of enormous complexity (literally, that at the start of the universe there exists a list of the coordinates and velocities of all particles) and then postulate laws of evolution that don't allow you to observe any difference between these enormously complex initial conditions and maximum-entropy initial conditions. Both parts add complexity, but the hidden variables contain most of it.
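
Putting that argument in symbols (a rough decomposition, not a precise theorem): the description length of the universe splits into laws plus initial conditions,

\[
K(\text{universe}) \;\approx\; K(\text{laws}) + K(\text{initial conditions}),
\]

and hidden-variable initial conditions must encode a position and velocity for each of roughly 10^80 particles, so the second term picks up on the order of 10^80 extra bits relative to a maximum-entropy initial state that can be specified with a handful of parameters.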

The general problem with "more intuitive metaphysics" is that your intuition is not my intuition. My intuition finds zero problem with the many-worlds interpretation.

And I think you underestimate the complexity issues. The many-worlds interpretation requires as much information as the wave function contains, but the pilot-wave interpretation requires, in addition, as much information as is needed to describe the position and velocity of every particle compatible with that wave function, which for a universe with 10^80 particles is c*10^80 additional bits (c >= 1), driving the Solomonoff probability of the pilot-wave interpretation down to essentially nothing.
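
Reading those numbers literally (a rough estimate, not a precise calculation): a hypothesis $H$ gets Solomonoff prior weight on the order of $2^{-K(H)}$, so c*10^80 extra bits of description length multiply the prior by

\[
2^{-\,c \cdot 10^{80}},
\]

which is what drives the posterior on the pilot-wave interpretation to essentially nothing.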
