This is a special post for short-form writing by Martín Soto. Only they can create top-level comments. Comments here also appear on the Shortform Page and All Posts page.


Re embedded agency, and related problems like finding the right theory of counterfactuals:

I feel like these are just the kinds of philosophical questions that don’t ever get answered? (And are instead "dissolved" in the Wittgensteinian sense.) Consider, for instance, the Sorites paradox: well, that’s just how language works, man. Why’d you expect to have a solution for that? Why’d you expect every semantically meaningful question to have an answer adequate to the standards of science?

(A related perspective I've heard: "To tell an AI to produce a cancer cure and do nothing else, let's delineate all consequences that are inherent, necessary, intended or common for any cancer cure" (which might be equivalent to solving counterfactuals). Again, by Wittgenstein's intuitions this will be a fuzzy family resemblance type of thing, instead of there existing a socratic "simple essence" (simple definition) of the object/event.)

Maybe I just don’t understand the mathematical reality with which these issues seem to present themselves, with a missing slot for an answer (and some answers sought by embedded agency do seem not to be at odds with the nature of physical reality). But on some level they just feel like “getting well-defined enough human concepts into the AI”, and such well-defined human concepts (given all at once, factual and complete, as contrasted to being potentially encoded in human society) might not exist, similar to how a satisfying population ethics doesn’t exist, or maybe the tails come apart, etc.

Take as an example “defining counterfactuals correctly”. It feels like there’s not an ultimate say in the issue, just “whatever is most convenient for our reasoning, or for predicting correctly etc.”. And there might not be a definition as convenient as we expect there to be. Maybe there’s no mathematically robust definition of counterfactuals, and every conceivable definition fails in different corners of example space. That wouldn’t be so surprising. After all, reality doesn’t work that way. Maybe our apparent sense of “if X had been the case then Y would have happened” being intuitive, and correct, and useful is just a jumble of lived and hard-coded experience, and there’s no compact core for it other than “approximately the whole of human concept-space”.

The problem of counterfactuals is not just the problem of defining them.

The problem of counterfactuals exists for rationalists only: it is not considered a problem in mainstream philosophy.

The rationalist problem of counterfactuals is eminently dissolvable. You start by making realistic assumptions about agents: that they have incomplete world-models and imperfect self-knowledge.

I agree about embedded agency. The way in which agents are traditionally defined in expected utility theory requires assumptions (e.g. logical omniscience and lack of physical side effects) that break down in embedded settings, and if you drop those assumptions you're left with something that's very different from classical agents and can't be accurately modeled as one. Control theory is a much more natural framework for modeling reinforcement learner (or similar AI) behavior than expected utility theory.

In the past I had the thought: "probably there is no way to simulate reality that is more efficient than reality itself". That is, no procedure implementable in physical reality is faster than reality at the task of, given a physical state, computing the state after t physical ticks. This was motivated by intuitions about the efficiency of computational implementation in reality, but it seems like we can prove it by diagonalization (similarly to how we can prove two systems cannot perfectly predict each other), because the machine could in particular predict itself.

Indeed, suppose you have a machine M that calculates physical states faster than reality. Modify it into M', which first uses M to calculate physical states, then takes some bits from the resulting physical state, applies some non-identity operation to them (for example, negates them), and outputs the result. Then feed M' the physical description of M', of its environment, and of this very input, and suppose those privileged bits of the physical state are such that they perfectly correspond to the outputs of M' in-simulation. This is a contradiction, because M' will simulate everything up until simulated-M' finishes its computation, and then output something different from simulated-M'.
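The diagonalization above can be sketched in a few lines of code (a toy model, not a physics simulation: `make_contrarian` and `some_predictor` are hypothetical stand-ins for M' and M):

```python
# Toy diagonalization: no predictor can be right about a process that
# consults the predictor and then does the opposite.

def make_contrarian(predict):
    """Build M': ask M what bit M' will output, then output the negation."""
    def m_prime():
        forecast = predict(m_prime)   # M "simulating" M''s final bit
        return 1 - forecast           # do the opposite of the forecast
    return m_prime

# Any candidate "faster-than-reality" predictor M we might propose:
def some_predictor(process):
    # Stand-in: whatever bit M claims the process will output.
    return 0

m_prime = make_contrarian(some_predictor)

# Whatever M predicted, M' outputs the opposite bit:
assert m_prime() != some_predictor(m_prime)
```

Whatever bit M forecasts for M', M' outputs the other one, so no candidate M can be a perfect predictor of systems that are allowed to contain M — the same structure as the classic halting-problem diagonalization and the two-mutual-predictors argument mentioned above.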

It seems like the relevant notion of "faster" here is causality, not time.

Wait, the input needs to contain the whole information in the input, plus some more (M' and the environment), which should be straightforwardly impossible information-theoretically? Unless somehow the input is a hash which generates both a copy of itself and the description of M' and the environment. But then would something already contradictory happen when M decodes the hash? I think not necessarily. But maybe getting the hash (having fixed the operation performed by M in advance) is already impossible, because we need to calculate what the hash would produce when that operation is run on it. But this seems possible through some fixed-point design, or just a very big brute-force trial and error (given that reality has finite complexity). Wait, but whatever M generates from the hash won't contain more information than the system hash+M contained (at time 0), and the generated thing contains hash+M+E information. So it's not possible unless the environment is nothing (that is, the whole isolated environment's initial state is just the machine performing operations on the hash? but isn't that trivially always the case?...). I'm not clear on this.

In any event, it seems like the paradox could truly reside here, in the assumption that something could semantically carry all the information about its own physical instantiation (and that does resonate with the efficiency intuition above), and we don't even need to worry about calculating the laws of physics, just about encoding the information of static physical states.
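The counting worry in the previous paragraphs can be made concrete under one very naive assumption: that the input must contain the descriptions of M, E, and itself verbatim, side by side. Then a simple length check rules it out (`fits` is a hypothetical helper for this sketch):

```python
# Counting argument (a toy, assuming a naive side-by-side encoding):
# if the input must literally contain descriptions of M, E, and a copy
# of itself, its length must satisfy
#     len(inp) == len(M) + len(E) + len(inp),
# which is impossible unless len(M) + len(E) == 0.

def fits(len_M, len_E, len_inp):
    """Can an input of length len_inp contain M, E, and itself verbatim?"""
    return len_inp == len_M + len_E + len_inp

assert not fits(10, 5, 1000)   # any nonempty M or E fails
assert fits(0, 0, 1000)        # only the degenerate empty case works
```

Note this only blocks literal containment: quine-style constructions (programs that output their own source) show that *generating* a copy of yourself plus extra data is possible, which is what the hash idea above would need.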

Other things to think about:

  • What do we mean by "given a physical state, compute the state after t physical ticks"? Do I give you a whole universe, or a part of the universe completely isolated from the rest so that the rest doesn't enter the calculations? (that seems impossible) What do t physical ticks mean? Allegedly they should be fixed by our theory. What if the ticks are continuous, and so any non-zero length of time is infinitely expensive to calculate? What about relativity messing up simultaneity? (probably in all of these there are already contradictions without even needing to do the calculation, similarly to the thing above)
  • If the complexity of the universe never bottoms out, that is, if after atoms there are particles, then quarks, then fields, and so on ad infinitum (this had a philosophical name I don't remember now), then the claim is immediately true.
  • How does this interact with that "infinite computation" thing?

The Singularity

Why is a rock easier to predict than a block of GPUs computing? Because the block of GPUs is optimized so that its end-state depends on a lot of computation.
[Maybe by some metric of “good prediction” it wouldn’t be much harder, because “only a few bits change”, but we can easily make it the case that those bits get augmented to affect whatever metric we want.]
Since prediction is basically “replicating / approximating in my head the computation made by physics”, it’s to be expected that if there’s more computation that needs to be finely predicted, the task is more difficult.
In reality, there is (at the low level of quantum physics) just as much total computation going on, but most of it (those lower levels) is screened off enough from macro behavior (in some circumstances) that we can use very accurate heuristics to ignore it, and go “the rock will not move”. This is purposefully subverted in the GPU case: to cram a lot of useful computation into a small amount of space and resources, the micro computations (at the level of circuitry) are carefully preserved and amplified, instead of getting screened off due to chaos.
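A toy illustration of the contrast (all names hypothetical; the “GPU” is modeled as a xorshift-style bit mixer, chosen because every output bit comes to depend on many input bits):

```python
# Toy contrast (illustrative): a "rock" whose macro-state ignores its
# micro-state, vs a "GPU" whose final state mixes every input bit.

def rock_step(state):
    return state                      # micro-details screened off: nothing moves

def gpu_step(state, bits=64):
    # A simple invertible bit-mixer (xorshift-style): each step is a
    # bijection on 64-bit states, so distinct inputs never merge.
    mask = (1 << bits) - 1
    state ^= (state << 13) & mask
    state ^= state >> 7
    state ^= (state << 17) & mask
    return state

def run(step, state, t):
    for _ in range(t):
        state = step(state)
    return state

s = 0x0123456789ABCDEF
# Predicting the rock is trivial (a zero-cost heuristic suffices):
assert run(rock_step, s, 1000) == s
# For the mixer, even a one-bit change in the input survives to the end,
# so "ignore the micro-state" heuristics fail:
assert run(gpu_step, s, 1000) != run(gpu_step, s ^ 1, 1000)
```

For the rock, “the state won’t change” is a perfect free prediction; for the mixer, flipping a single input bit yields a different (in practice thoroughly scrambled) output, so the cheapest known way to get the answer is to run the computation itself.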

Say we define the Singularity as “when the amount of computation / gram of matter (say, on Earth) exceeds a certain threshold”. What’s so special about this? Well, exactly for the same reason as above, an increase in this amount makes the whole setup harder to predict. Some time before the threshold, maybe we can confidently predict some macro properties of Earth for the next 2 months. Some time after it, maybe we can barely predict that for 1 minute.

But why would we care about this change in speed? After all, for now (against the backdrop of real clock time in physics) it doesn’t really matter whether a change in human history takes 1 year or 1 minute to happen.
[In the future maybe it does start mattering because we want to cram in more utopia before heat death, or because of some other weird quirk of physics.]
What really matters is how far we can predict “in terms of changes”, not “in terms of absolute time”. Both before and after the Singularity, I might be able to predict what happens to humanity for the next X FLOP (of total cognitive labor employed by all humanity, including non-humans). And that’s really what I care about, if I want to steer the future. The Singularity just makes it so these FLOP happen faster. So why be worried? If I wasn’t worried before about not knowing what happens after X+1 FLOP, and I was content with doing my best at steering given that limited knowledge, why should that change now?
[Of course, an option is that you were already worried about X FLOP not being enough, even if the Singularity doesn’t worsen it.]

The obvious reason is changes in differential speed. If I am still a biological human, then it will indeed be a problem that all these FLOP happen faster relative to clock time, since they are also happening faster relative to me, and I will have much less of my own FLOP to predict and control each batch of X FLOP made by humanity-as-a-whole.

In a scenario with uploads, my FLOP will also speed up. But the rest of humanity/machines won’t only speed up, they will also build way more thinking machines. So unless I speed up even more, or my own cognitive machinery also grows at that rate (via tools, or copies of me or enlarging my brain), the ratio of my FLOP to humanity’s FLOP will still decrease.

But there’s conceivable reasons for worry, even if this ratio is held constant:

  • Maybe prediction becomes differentially harder with scale. That is, maybe using A FLOP (my cognitive machinery pre-Singularity) to predict X FLOP (that of humanity pre-Singularity) is easier than using 10A FLOP (my cognitive machinery post-Singularity) to predict 10X FLOP (that of humanity post-Singularity). But why? Can’t I just split the 10X into 10 bins, and use an A to predict each of them as satisfactorily as before? Maybe not, due to the newly complex interconnections between these bins. Of course, such complex interconnections also benefit my cognitive machinery. But maybe the benefit for prediction from having those interconnections in my machinery is lower than the handicap from having them in the predicted computation.

[A priori this seems false if we extrapolate from past data, but who knows if this new situation has some important difference.]

  • Maybe some other properties of the situation (like the higher computation-density in the physical substrate requiring the computations to take on a slightly different, more optimal shape [this seems unlikely]) lead to the predicted computation having some new properties that make it harder to predict. Such properties need not even be something absolute that “literally makes prediction harder for everyone” (even for intelligences with the right tools/heuristics). It could just be “if I had the right heuristics I might be able to predict this just as well as before (or better), but all my heuristics have been selected for the pre-Singularity computation (which didn’t have this property), and now I don’t know how to proceed”. [I can re-run a selection for heuristics (for example, re-running a copy of me growing up), but that takes a lot more FLOP.]

Another way to think of this is not speed, but granularity: the amount of variation in a given 4D bounding box (volume and timeframe). A rock is using no power, is pretty uniform in information, and therefore easy to predict. A microchip is turning electricity into heat and MANY TINY changes of state, which are obviously much more detailed than a rock's.