Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Thanks to Paul Christiano, Mark Xu, Abram Demski, Kate Woolverton, and Beth Barnes for some discussions which informed this post.

In the ELK report, Paul, Mark, and Ajeya express optimism about penalizing computation time as a potentially viable way to select the direct translator over the human imitator:

Human imitation requires doing inference in the entire human Bayes net to answer even a single question. Intuitively, that seems like much more work than using the direct translator to simply “look up” the answer.

[...]

Compared to all our previous counterexamples, this one offers much more hope. We can’t rule out the possibility of a clever dataset where the direct translator has a large enough computational advantage to be preferred, and we leave it as an avenue for further research.

I am more skeptical—primarily because I am more skeptical of the speed prior's ability to do reasonable things in general. That being said, the speed prior definitely has a lot of nice things going for it, and I do think it's worth taking a careful look at both the good and the bad that the speed prior has to offer. Conceptually, what we want to pay attention to when evaluating a prior from an AI safety perspective is threefold: it needs to favor good models over bad models (e.g. the direct translator over the human imitator), it needs to be competitive to implement, and it needs to favor models with good generalization over models with bad generalization (e.g. the resulting models need to themselves be performance competitive).

Before I do that, however, an important preliminary: there are multiple different forms/types of speed priors, so when I say “the speed prior,” I really mean a class of priors including:

  • various combinations of circuit-size and circuit-depth complexity and
  • various combinations of Turing machine time and description complexity, where Turing machine time complexity can be measured either by taking the max number of steps taken across all inputs or the average number of steps taken on a particular distribution of inputs.
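To make that a bit more concrete, here is one illustrative way to write down the Turing-machine-style members of this class (the weighting parameters α and β and the notation are my own, not anything standard):

```latex
% One illustrative Turing-machine-style speed prior (notation and weights are mine).
% K(M) is the description length of machine M, T(M) is its time cost, and
% \alpha, \beta > 0 trade off simplicity against speed.
\[
  P(M) \;\propto\; 2^{-\bigl(\alpha\, K(M) \;+\; \beta\, T(M)\bigr)},
  \qquad
  T(M) = \max_{x}\ \mathrm{steps}_M(x)
  \quad\text{or}\quad
  T(M) = \mathbb{E}_{x \sim D}\bigl[\mathrm{steps}_M(x)\bigr].
\]
```

The circuit-style members of the class swap out description length and Turing machine time for combinations of circuit size and circuit depth, traded off analogously.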

The basic structure of this post will be a dialog of sorts between a pro-speed-prior and an anti-speed-prior perspective. I'll start with some of the arguments in favor of the speed prior and how the anti-speed-prior perspective might respond, then give some arguments against the speed prior and how the pro-speed-prior perspective might respond.

Why you should love the speed prior

  • As in the ELK report, there is a plausible argument for why there exists a speed prior that would prefer the direct translator to the human imitator. Naively, the problem with the speed prior here is that the computation required for the human imitator is proportional to the size of the human's Bayes net, whereas the computation required for the direct translator is proportional to the size of the model's Bayes net—and in the superhuman limit we should expect the latter to be substantially larger than the former. The argument in the ELK report, however, is that while this argument is valid in the limit, there's reason to believe it might be invalid for all the finite cases that we care about. That's because perfect inference in either Bayes net, and thus perfect loss, shouldn't be possible in any finite case. Thus, the performance of the ontology mapping function, and thus its loss, should be proportional to how much computation it puts into its inference task—for which the direct translator has a big advantage, since it gets to reuse the computation performed by the model.
    • Response: The obvious response here, and the response given in the ELK report, is that the above argument is very fragile—it relies on inference in the human's Bayes net being too hard to always get right on the training distribution, which is a strong assumption both about the difficulty of inference and about the complexity of the training data. Furthermore, as the ELK report also notes, it's not enough for the direct translator to just be more efficient than the human imitator: the direct translator has to be a cost-effective improvement (in terms of how much loss/computation it saves per increase in description complexity) compared to all other possible mappings. I think this is where the argument really fails. To start with, note that if you push your speed bias far enough (e.g. a strong enough circuit depth complexity or Turing machine time complexity penalty), you just get a lookup table that memorizes everything. Thus, to get this to work, you have to use a prior with a pretty meaningful simplicity component—but then you're back at the original problem that the direct translator could be substantially more complex than the human imitator, potentially so much so that it outweighs whatever loss/time advantage the direct translator might have. One thing I have learned in spending a lot of time thinking about ELK is that there are a lot of extra terms you can add to a simplicity prior that help the direct translator more than the human imitator—the problem is that none of the ones I have found so far work on their own, without the simplicity prior term there as well, which means that as long as you're working in the worst-case world where the direct translator can be arbitrarily more complex than the human imitator, they don't constitute full solutions—and I think that speed-prior-based solutions should be put in this same category. (The toy cost sketch after this list makes this speed/description-complexity tradeoff concrete.)
  • Not only might there exist a speed prior which solves ELK, there might exist a speed prior that prevents deceptive alignment as well. That's because planning how to be deceptive is one step removed from planning how to accomplish the intended goal. Thus, while both deceptive and non-deceptive models have to spend time on how to optimize for the base objective, deceptive models also have to spend time deducing that optimizing for the base objective is the right thing for them to be doing.
    • Response: Unfortunately, there are three pretty major issues with the above argument. The first is the same issue that we had with the ELK argument: once you include a simplicity component, it's very hard to know whether the non-deceptive model is actually a cost-effective improvement in speed, given how much additional description complexity that improvement in speed costs. The second problem is that the above argument becomes substantially weaker in the presence of memory—if the deceptive model can just cache its notion of the base objective, the speed penalty reduces to the difficulty of that lookup rather than the difficulty of fully rediscovering the base objective each time. The last problem is that the above argument assumes that both the deceptive and non-deceptive models will in some sense be performing optimization “directly” rather than doing their own search over algorithms to find one which solves the problem effectively—with such a search being a pretty plausible mechanism if the model is trained to solve complex learning problems. The problem with this, however—as I detail in “Are minimal circuits deceptive?” (also summarized here)—is that the fastest way to search for an algorithm that solves a problem isn't to find the fastest possible algorithm. Rather, the fastest search procedure over algorithms is one that's going to be heavily biased towards finding the most “reachable” algorithms that are easiest for the search procedure to find quickly—which are highly likely to be simple, easy-to-specify algorithms. Thus, if you use a speed prior and end up with a model that's doing its own internal search over algorithms, that search might be heavily simplicity-biased, resulting in the algorithm it eventually executes being strongly simplicity-biased and thus highly likely to be deceptive.
  • Speed priors are much closer to what our brains do than simplicity priors. Compared to computers, the human brain has a huge amount of accessible, local memory, but runs at a very slow serial computation speed. Thus, storing lots of bits of information is quite cheap for the brain, whereas doing lots of serial computation is quite expensive for it. Therefore, if we think of the brain's architecture as imposing a prior over what algorithms it can learn, that prior is pretty heavily speed-weighted. Thus, to the extent that we like the prior used to select us—e.g. to the extent that humans are aligned, good at generalization, etc.—we should like the speed prior.
    • Response: The obvious response to the above is that it's not clear that we should actually like the prior used by human brains all that much once it's divorced from all of the random hard-coded bits that produce human values. That being said, this does seem like a reasonably strong argument for why the speed prior has the potential to be competitive at solving complex tasks, since human brains are clearly capable of that—though the key word there is “potential,” since it's very unclear how important the other aspects of the human brain prior are relative to the speed part.
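As promised above, here is a toy cost model for the direct-translator-versus-human-imitator tradeoff. All numbers, names, and the scoring rule are illustrative assumptions on my part, not anything from the ELK report; the point is just to show how the preferred reporter flips depending on how speed and simplicity are weighted.

```python
# Toy cost model (all numbers and names are illustrative, not from the ELK report).
# Score = alpha * description_complexity + beta * inference_time; lower is preferred.

def score(description_bits: float, inference_steps: float, alpha: float, beta: float) -> float:
    """Smaller is more preferred under a combined simplicity + speed prior."""
    return alpha * description_bits + beta * inference_steps

# Hypothetical reporters:
human_imitator = dict(description_bits=1e4, inference_steps=1e7)     # cheap to specify, must infer in the human net
direct_translator = dict(description_bits=1e6, inference_steps=1e4)  # reuses the model's computation, but complex to specify

for alpha, beta, label in [(1.0, 0.0, "pure simplicity"),
                           (0.0, 1.0, "pure speed"),
                           (1.0, 0.5, "some mixture")]:
    s_imitator = score(**human_imitator, alpha=alpha, beta=beta)
    s_translator = score(**direct_translator, alpha=alpha, beta=beta)
    winner = "direct translator" if s_translator < s_imitator else "human imitator"
    print(f"{label:>16}: prefers the {winner}")
```

The worry in the response above is exactly that, in the worst case, the direct translator's description_bits can be arbitrarily large, so no fixed choice of alpha and beta is guaranteed to make it win.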

Why you should hate the speed prior

  • Speed priors often don't generalize properly even for very simple learning problems. In particular, speed priors heavily incentivize procedures like lookup-table-style memorization (which doesn't generalize to anything not in the lookup table) and loop unrolling (which doesn't generalize to any case that needs more iterations of the loop than were ever needed previously). As a simple example, suppose you want to use a speed prior to model Newtonian physics. We'll say you have perfect spheres bouncing around in a fixed cube. However, suppose that, during training, most spheres only interact locally with their close neighbors. In such a case, a simplicity prior should learn a general algorithm for handling collisions between any two spheres—trying to only handle local collisions would be more complex from a simplicity perspective. For a speed prior, however, the exact opposite is the case—the speed prior is heavily incentivized to avoid checking for collisions between every pair of objects and instead check only for local collisions, which substantially improves speed during training but completely breaks down once the spheres start mixing. (I sketch this shortcut in code after this list.)
    • Response: It's unclear how bad this generalization problem really is, especially for a sufficiently diverse training set and a reasonable speed/simplicity mixture. For example, a circuit size complexity prior on the physics example above should generalize correctly for collisions at new positions within a local region, so it's not as if it has no generalization ability—and for a dataset diverse enough to include substantial non-local mixing, it should be able to fully learn the problem. Furthermore, though it might seem like this is a major competitiveness hit relative to a simplicity prior, remember that an actual Kolmogorov-style simplicity prior is impossible to implement competitively, so that's not really a valid basis for comparison. Once you start comparing to more concrete priors, it is entirely plausible that speed penalties can be introduced while still staying on the Pareto frontier between competitiveness and generalization.
  • Most speed priors come with tunable parameters, e.g. how to balance between speed and simplicity, that you have to get right. In many of the arguments in favor of the speed prior above, I led with “there exists a speed prior such that”—but that's hardly comforting if we don't know how to find such a speed prior, a problem we saw in many of my responses above. It's worth pointing out that this problem is much more pronounced for Turing-machine-style speed priors than it is for circuit-complexity-style speed priors, as Turing machine speed alone is insufficient to give you an integrable probability distribution, requiring the inclusion of something like simplicity—whereas circuit size does give you an integrable distribution all on its own. Conceptually, however, that's just because circuit size complexity includes its own implicit simplicity prior, as the total number of logic gates is a type of description length.
    • Response: As above, I think the best way out of this issue is through circuit size complexity, since it uniquely doesn't have a tunability problem. As I just pointed out, however, that's not because it has no simplicity component, but rather because the simplicity component is implicit in circuit size. Unfortunately, it's not currently clear why the specific balance struck by circuit size complexity would end up in the right spot to solve the ELK and deception problems above—but I also don't currently have a strong reason why circuit size complexity wouldn't do the right thing in those cases, which leaves open the possibility that circuit size really is the right way to go here.
  • Evidence from double descent implies that strongly selecting for speed gives substantially worse performance. In the standard double descent setup, as you increase the size of your model, you first get better performance (less underfitting), then worse performance (more overfitting), then very bad performance right when you hit zero training error (the interpolation threshold), then better and better performance as you make your model larger after that (the interpolation regime). If we equate model size to speed (which is a reasonably good proxy, since larger models require strictly more computation to run), selecting the fastest model that fits the data—which is essentially what it means to use a speed prior—would put you exactly on the interpolation threshold, which double descent implies is a uniquely bad place to be for generalization. Thus, double descent seems to provide concrete, empirical evidence that speed priors don't generalize very well when translated into neural networks and used on real-world machine learning tasks, which seems like a strong competitiveness argument to avoid them.
    • Response: While the above does seem true for overall model size—and thus total computation time—the same does not seem to be true if you just look at model depth as your proxy for speed instead. That does mean we're looking at circuit depth rather than circuit size and max Turing machine steps rather than average Turing machine steps, so we do have to give up half of our possible speed priors. If we do that, however, scaling laws find that you can train a model with a very large range of possible width/depth ratios and get equivalent performance, indicating that heavily penalizing depth is clearly compatible with good performance across a wide range of possible situations.
  • As I pointed out previously, the speed prior has the unfortunate property that the fastest way to do a search over algorithms isn't to search for the fastest algorithm, as I detail in “Are minimal circuits deceptive?” (again also summarized here). Furthermore, it seems likely that fast methods of searching over algorithms would bias towards simple algorithms, since simple algorithms are by definition the easiest to specify and thus likely to be the easiest to find. Therefore, any guarantees regarding the speed prior that we might start with could collapse if we end up with a model doing its own internal search over algorithms, resulting in the prior that actually determines the final algorithm being far more simplicity-biased than was originally intended.
    • Response: I think the best response to this problem is that it's unclear if this problem is really meaningfully unique to the speed prior. Even for a simplicity prior, the simplest search algorithm isn't necessarily going to use the exact same criteria for simplicity in its search compared to the original simplicity prior.
  • The simplicity prior does a much better job at predicting modern physics than the speed prior. Quantum mechanics is notoriously hard to simulate, but quite compact to specify, with the full standard model Lagrangian being small enough to fit on a single page. Furthermore, despite there being many known techniques of approximating quantum systems to get essentially the same result much faster, we see no evidence that nature ever actually takes such shortcuts—and before anyone claims that wave function collapse is such a shortcut, in fact it's the exact opposite: the phenomenon of apparent collapse is a facet of the universe computing an entire Everettian multiverse, which is definitely not a fast thing to do. Thus, if you want to effectively model the physical world, it seems that the simplicity prior is likely to do a much better job.
    • Response: While it seems clear that fundamental physics is heavily simplicity-biased, I think it's very unclear whether the same thing is true for more high-level phenomena. There are clearly a lot of things in the world that are heavily speed-biased—e.g. humans, as I pointed out previously—such that using a speed prior might do very well at modeling them. Furthermore, there might even be a general principle behind this (and the same general principle behind why the brain is so speed-biased): energy efficiency is heavily correlated with speed, so when trying to model phenomena that were selected to be highly energy efficient, a speed prior seems like a pretty good choice.
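To make the collision-checking shortcut from the physics example concrete, here is a minimal sketch of the two strategies. The cell-based scheme and all names are illustrative choices of mine, not a claim about what a learned model would literally implement.

```python
# Toy illustration of the collision-checking shortcut (the cell-based scheme and
# all names are illustrative assumptions, not a claim about learned models).
import itertools

def collisions_all_pairs(positions, radius):
    """General algorithm: check every pair. Slower, but works for any configuration."""
    hits = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        if sum((a - b) ** 2 for a, b in zip(p, q)) <= (2 * radius) ** 2:
            hits.append((i, j))
    return hits

def collisions_local_only(positions, radius, cell_size):
    """Speed-favored shortcut: only compare spheres in the same coarse grid cell.
    Fast when interactions are local during training, but silently misses collisions
    between spheres in different cells once the spheres start mixing."""
    cells = {}
    for i, p in enumerate(positions):
        cells.setdefault(tuple(int(c // cell_size) for c in p), []).append(i)
    hits = []
    for members in cells.values():
        for i, j in itertools.combinations(members, 2):
            p, q = positions[i], positions[j]
            if sum((a - b) ** 2 for a, b in zip(p, q)) <= (2 * radius) ** 2:
                hits.append((i, j))
    return hits
```

The local-only version does strictly less work per step, so a strong per-step compute penalty pushes toward it, even though it silently drops collisions that cross cell boundaries off-distribution.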

Conclusion?

I don't really have a single, unified conclusion to take away from all of this. Like I said at the beginning, I think I tend towards skepticism of the speed prior's ability to solve AI safety problems, at least singlehandedly, but I can't dismiss it completely and I think there are clearly strong and compelling reasons to like it. I do feel like moving in the direction of speed bias is likely to increase safety all things considered, though I also feel like there's a reasonable chance that doing so might also reduce competitiveness—in which case it's very unclear to me if that's where we want to place our alignment tax.

Comments
Comment by A Ray:

I think there's a lot going on with the way you equivocate between the speed prior over circuits and a speed prior over programs.

I think a lot of the ideas in this direction either are confused by the difference between circuit priors and program priors or at least treat them as equivalent. Unfortunately a lot of this is vague until you start specifying the domain of the model. I think specifying this more clearly will help communication about these ideas. To start with this myself, when I talk about circuit induction, I'm talking about things that look like large randomly initialized Bayes nets (or deep neural networks).

Program Induction Priors are bad:  I would claim that any program induction priors (simplicity prior, speed prior, others) are almost always a bad fit for developing useful intuitions about the behavior of large random bayes net machines.

Confusion between circuit induction speed prior and simplicity prior:  I think your point about double descent is wrong — in particular, the speed is largely unchanged in double descent experiments, since the *width* is the parameter being varied, and all deep neural networks of the same depth have approximately the same speed (unless you mean something weird by speed).

Circuit Simplicity:  You give circuit-size and circuit-depth as examples of a “speed prior”, which seems pretty nonstandard, especially when describing it as “not the simplicity prior”.

More than Speed and Simplicity: I think there are other metrics that provide interesting priors over circuits, like likelihood under some initialization distribution.  In particular, I think “likelihood under the initialization distribution” is the prior that matters most, until we develop techniques that let us “hack the prior”.

Connection to Infinite-Size Neural Networks: I think research about neural networks approaching/at the infinite limit looks a lot like physics about black holes — and similarly can tell us interesting things about dynamics we should expect.  In particular, for systems optimized by gradient descent, we end up with infinitesimal/nonexistent feature learning in the limit — which is interesting because all of the sub-modules/sub-circuits we start with are all we’ll ever have!  This means that even if there are “simple” or “fast” circuits, if they’re not likely under the initialization distribution, then we expect they’ll have a vanishingly small effect on the output.  (One way of thinking about this is in terms of the NTK, that even if we have extremely powerfully predictive modules, their predictive power will be overwhelmed by the much more common and simple features)

Hacking the Prior: Right now we don’t have a good understanding of the behavior of partially-hand coded neural networks, but I think they could serve as a new/distinct class of models (with regards to what functions are likely under the initialization distribution).  Concretely, this could look like us “hand-programming” circuits or parts of neural networks, then randomly initializing the rest, and see if during training the model learns to use those programmed functions.

Comment by TLW:

To start with, note that if you push your speed bias far enough (e.g. a strong enough circuit depth complexity or Turing machine time complexity penalty), you just get a lookup table that memorizes everything.

This is true in the TM model[1]. This is not true in the circuit-depth complexity model. Remember that an arbitrary lookup table is O(log n) circuit depth. If the function I'm trying to memorize is f(x) = (x & 1), the fastest circuit is O(1), whereas a lookup table is O(log n).

(This gets even worse in models where lookup is O(n^(1/3))[2] or O(n^(1/2))[3].)
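As a rough, purely illustrative comparison of how these lookup costs scale (glossing the models in footnotes 2 and 3; constants are dropped, so only the growth rates are meaningful):

```python
# Rough comparison of lookup-latency scalings under different cost models.
# "wire_delay_3d" and "heat_limited" are a gloss of the models in footnotes 2 and 3;
# constants are dropped, so only the growth rates are meaningful.
import math

def lookup_latency(n_entries: int, model: str) -> float:
    if model == "circuit_depth":   # balanced mux tree over n entries
        return math.log2(n_entries)
    if model == "wire_delay_3d":   # signal must cross a memory of volume ~n
        return n_entries ** (1 / 3)
    if model == "heat_limited":    # usable gates within radius r scale only with r^2
        return n_entries ** 0.5
    raise ValueError(f"unknown model: {model}")

for n in (10**3, 10**6, 10**9):
    print(n, {m: round(lookup_latency(n, m), 1)
              for m in ("circuit_depth", "wire_delay_3d", "heat_limited")})
```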

I humbly suggest a variant family of priors: realizable-speed[4] priors. That is:

  1. Pick a particular physics model
    1. This could even be things like "Conway's Game of Life".
  2. Don't generally worry too much about constant factors[5].
  3. The algorithm that gives an answer in the least time is the most probable.

A simplish example of this prior might be the following: assume a simple 3d circuit model where gates take O(1) time and space and wires take O(length) time and space, and neither gates nor wires can overlap.
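For concreteness, here is a minimal sketch of how a circuit might be scored under this kind of prior, assuming a fixed 3D placement of gates. The unit costs, the critical-path definition, and the tiny example are all illustrative choices, not a definitive specification.

```python
# Sketch of a "realizable-speed" cost for a circuit with a fixed 3D placement.
# Gates cost 1 unit of time; wires cost time proportional to their Euclidean length.
# (All units, names, and the example circuit are illustrative assumptions.)
import math

def realizable_latency(gates, wires):
    """gates: {gate_id: (x, y, z)}; wires: list of (src_gate, dst_gate).
    Returns the critical-path latency: the longest gate-plus-wire delay path.
    Assumes the wiring graph is acyclic (a combinational circuit)."""
    children = {g: [] for g in gates}
    for src, dst in wires:
        children[src].append(dst)

    def dist(p, q):
        return math.dist(p, q)

    memo = {}
    def latency_from(g):
        if g not in memo:
            memo[g] = 1 + max(
                (dist(gates[g], gates[c]) + latency_from(c) for c in children[g]),
                default=0,
            )
        return memo[g]

    return max(latency_from(g) for g in gates)

# A tiny example: a 3-gate chain laid out along a line.
gates = {"a": (0, 0, 0), "b": (1, 0, 0), "c": (5, 0, 0)}
wires = [("a", "b"), ("b", "c")]
print(realizable_latency(gates, wires))  # 3 gate delays + 1 + 4 units of wire delay = 8.0
```

The key design choice is that latency, not gate count, is what gets penalized: long wires are charged for explicitly.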

This prior discourages giant lookup tables even in the limiting case, while retaining many of the advantages of speed priors.

(It does have the interesting issue that it says nothing about what happens outside the lightcone of the computation...)

(I realize here that I'm being very sloppy with asymptotic notation. I'm bouncing between bits / gates / elements / etc.)

  1. ^

    It also isn't true in the TM model variant where you include the depth of the FSM decode circuit instead of treating a single TM step as constant-time.

  2. ^

    Volume accessible within time t scales as t^3, so looking up one of n entries takes O(n^(1/3)) time.

  3. ^

    Assuming power/heat scales linearly with #gates[6], max gates within a radius r scales with r^2, and so you get O(n^(1/2)) instead. Ditto, eventually you run into e.g. Bekenstein bound issues, although this isn't a particularly realistic concern.

  4. ^

    This is a terrible name. I am fully open to better ones.

  5. ^

    Which is a terrible idea given that galactic algorithms are a thing.

  6. ^

    Assuming it scales superlinearly, you scale even worse...

Reply by evhub:

This is not true in the circuit-depth complexity model. Remember that an arbitrary lookup table is O(log n) circuit depth. If my function I'm trying to memorize is f(x) = (x & 1), the fastest circuit is O(1), whereas a lookup table is O(log n).

Certainly, I'm assuming that the intended function is not in O(log n), though I think that's a very mild assumption for any realistic task.


I think the prior you're suggesting is basically a circuit size prior. How do you think it differs from that?

Reply by TLW:

Certainly, I'm assuming that the intended function is not in O(log n), though I think that's a very mild assumption for any realistic task.

In O(t) time, the brain (or any realistic agent) can do O(t) processing... but receives O(t) sensory data.

I think the prior you're suggesting is basically a circuit size prior. How do you think it differs from that?

Realizable-speed priors are certainly correlated with circuit size priors to some extent, but there are some important differences:

  • The naive circuit size prior assumes gates take O(1) space and wiring takes zero space, and favors circuits that take less space.
    • There are more complex circuit size priors that e.g. assign O(1) space to a gate and O(length) space to wiring.
  • The O(log n) variant of the realizable-speed prior has no simple analog in the circuit size prior, but roughly corresponds to the circuit-depth prior.
  • The O(n^(1/3)) variant of the realizable-speed prior has no simple analog in the circuit size prior.
  • The O(n^(1/2)) variant of the realizable-speed prior roughly corresponds to the complex circuit-size prior described above, with differences described below.
  • Circuit size priors ignore the effects of routing congestion.
    • A circuit-size prior will prefer one complex circuit of N-1 gates over two simpler circuits of N/2 gates.
    • A realizable-speed prior will tend to prefer the two simpler circuits, as, essentially, they are easier to route (read: lower overall latency due to shorter wiring)
    • My (untested) intuition here is that a realizable-speed prior will be better at structuring and decomposing problems than a circuit-size prior, as a result.
  • Circuit size priors prefer deeper circuits than realizable-speed priors.
    • A circuit-size prior will prefer a circuit of max depth 10D and N-1 gates over a circuit of max depth D and N gates.
    • A realizable-speed prior will (typically) prefer the slightly larger far shallower circuit.
      • Note that the human brain is surprisingly shallow, when you consider speed of neuron activation versus human speed of response. But also very wide...