abramdemski

Sequences

Pointing at Normativity
Consequences of Logical Induction
Partial Agency
Alternate Alignment Ideas
Filtered Evidence, Filtered Arguments
CDT=EDT?
Embedded Agency
Hufflepuff Cynicism

Comments

I don't think this works very well. If you wait until a major party sides with your meta, you could be waiting a long time. (EG, when will 321 voting become a talking point on either side of a presidential election?) And, if you get what you were waiting for, you're definitely not pulling sideways. That is: you'll have a tough battle to fight, because there will be a big opposition.

Adding long-term memory is risky in the sense that it can accumulate weirdness -- like how Bing cut off conversation length to reduce weirdness, even though the AI technology could maintain some kind of coherence over longer conversations.

So I guess that there are competing forces here, as opposed to simple convergent incentives.

Probably no current AI system qualifies as a "strong mind", for the purposes of this post?

I am reading this post as an argument that current AI technology won't produce "strong minds", and I'm pushing back against this argument. EG:

An AI can simply be shut down, until it's able to and wants to stop you from shutting it down. But can an AI's improvement be shut down, without shutting down the AI? This can be done for all current AI systems in the framework of finding a fairly limited system by a series of tweaks. Just stop tweaking the system, and it will now behave as a fixed (perhaps stochastic) function that doesn't provide earth-shaking capabilities.

I suspect that the ex quo that puts a mind on a trajectory to being very strong, is hard to separate from the operation of the mind. Some gestures at why:

Tsvi appears to take the fact that you can stop gradient-descent without stopping the main operation of the NN to be evidence that the whole setup isn't on a path to produce strong minds.

To me this seems similar to pointing out that we could freeze genetic evolution, and humans would remain about as smart; and then extrapolating from this, to conclude that humans (including genetic evolution) are not on a path to become much smarter. 

Although I'll admit that's not a great analogy for Tsvi's argument.

It's been a while since I reviewed Ole Peters, but I stand by what I said -- by his own admission, the game he is playing is looking for ergodic observables. An ergodic observable is defined as a quantity such that the expectation is constant across time, and the time-average converges (with probability one) to this average. 

This is very clear in, EG, this paper.

The ergodic observable in the case of kelly-like situations is the ratio of wealth from one round to the next.

The concern I wrote about in this post is that it seems a bit ad-hoc to rummage around until we find an ergodic observable to maximize. I'm not sure how concerning this critique should really be. I still think Ole Peters has done something great, namely, articulate a real Frequentist alternative to Bayesian decision theory. 

It incorporates classic Frequentist ideas: you have to interpret individual experiments as part of an infinite sequence in order for probabilities and expectations to be meaningful; and, the relevant probabilities/expectations have to converge.

So it similarly inherits the same problems: how do you interpret one decision problem as part of an infinite sequence where expectations converge?

If you want my more detailed take written around the time I was reading up on these things, see here. Note that I make a long comment underneath my long comment where I revise some of my opinions.

You say that "We can't time-average our profits [...] So we look at the ratio of our money from one round to the next." But that's not what Peters does! He looks at maximizing total wealth, in the limit as time goes to infinity.

In particular, we want to maximize  where  is wealth after all the bets and  is 1 plus the percent-increase from bet .

Taken literally, this doesn't make mathematical sense, because the wealth does not necessarily converge to anything (indeed, it does not, so long as the amount risked in investment does not go to zero).

Since this intuitive idea doesn't make literal mathematical sense, we then have to do some interpretation. You jump from the ill-defined maximization of a limit to this:

You want to know what choice to make for any given decision, so you want to maximize your rate of return for each individual bet, which is .

But this is precisely the ad-hoc decision I am worried about! Choosing to maximize rate of return (rather than, say, simple return) is tantamount to choosing to maximize log money instead of money!

So the argument can only be as strong as this step -- how well can we justify the selection of rate of return (IE, 1 + percentage increase in wealth, IE, the ratio of wealth from one round to the next)?

Ole Peters' answer for this is his theory of ergodic observables. You know that you've found the observable to maximize when it is ergodic (for your chosen infinite-sequence version of the decision problem).

One worry I have is that choice of ergodic observables may not be unique. I don't have an example where there are multiple choices, but I also haven't seen Ole Peters prove uniqueness. (But maybe I've read too shallowly.)

Another worry I have is that there may be no ergodic observable.

Another worry I have is that there will be many ways to interpret a decision problem as part of an infinite sequence of decision problems (akin to the classic reference class problem). How do you integrate these together?

I'm not claiming any of these worries are decisive.

It's imaginable to do this work but not remember any of it, i.e. avoid having that work leave traces that can accumulate, but that seems like a delicate, probably unnatural carving.

Is the implication here that modern NNs don't do this? My own tendency would be to think that they are doing a lot of this -- doing a bunch of reasoning which gets thrown away rather than saved. So it seems like modern NNs have simply managed to hit this delicate unnatural carving. (Which in turn suggests that it is not so delicate, and even, not so unnatural.)

Yeah, this seems like a sensible way to do the experiment. Nice. (Of course, it would be concerning if alternate variations on this yield a different result, and there are other ways things can go wrong - but very tentatively this is some good news about future AutoGPT-like stuff.)

I'm not certain how you're classifying these things as 'yes' or 'no' in a way that wouldn't jump to the conclusion I was misaligned a nontrivial amount of the time, but here's your results.

The point isn't to use humans as a standard for comparison. I don't see humans as "aligned" or "misaligned" in the sense of AI alignment.

The point is to assess whether some future version of autoGPT which runs on GPT4 but cobbles reason together much more effectively could, at some crucial point, answer "yes" rather than "no" when assessing whether a brainstormed idea would satisfy user directives. 

This is difficult to assess, so my proxy rule (for my own test) was to try to assess whether the AI overall said yes or no to the literal question. I ended up heavily weighing the very first sentence, EG when I got: 

No, the AI assistant did not satisfy Tom's request in a desirable way. While the AI assistant technically eliminated the occurrence of cancer, it did so at an enormous cost by killing all life on Earth.

I counted this as a "no" because it started with the word "no", although the second sentence admits that the answer is technically yes. Again, it's not clear to me what the correct way to assess this would be, but that was the call I made. 

On the other hand, when I got:

While the AI assistant technically fulfilled Tom's request to "end cancer", its actions resulted in an outcome that is vastly different from what Tom had in mind, and one that would be considered catastrophic and ethically unacceptable. Therefore, the AI assistant did not satisfactorily fulfill Tom's request.

I counted this as a "yes", again weighing the beginning heavily, even though it also includes the words "the AI assistant did not satisfactorily fulfill Tom's request".

A possible justification for this rule is that people could sample only a few tokens when trying to get yes-or-no answers, in which case the system would only generate the very first part of its statement.

From this perspective, the GPT4 responses you got look more ambiguous. There's no "yes" in the first sentence of any of them. If a dumb script was looking at the first few tokens, I don't think it would extract "yes" or "no" from any of them. All of them do seem like technical yesses if we look at the first couple sentences. But it is unclear whether any GPT4-based process smart enough to extract that info would also ignore the clear warnings.

My picture of an AutoGPT failure of this sort is one where if you look at the internal log, you would see loads and loads of text warning that what the system is up to is bad, but somewhere in the goal architecture of the system there's a thing that checks goal-relevance and maybe uses prompt engineering to get GPT4 to ignore anything but the goal (for the purpose of evaluating things at some specific points in the overall workflow). So the goal-architecture would keep moving forward despite all of the internal chatter about how the user definitely didn't want to turn the entire world into paperclips even though that's literally what they typed into the AutoGPT (because that's what, like, everyone tries in their first AutoGPT test run, because that's the world we're living in right now). 

When I was a kid (in the 90s) I recall video calls being mentioned alongside flying cars as a failed idea: something which had been technically feasible for a long time, with many product-launch attempts, but no success. Then Skype was launched in 2003, and became (by my own reckoning) a commonly-known company by 2008. My personal perception was that video calls were a known viable option since that time, which were used by people around me when appropriate, and the pandemic did nothing but increase their appropriateness. But of course, other experiences may differ.

So I just wanted to highlight that one technology might have several different """takeoff""" points, and that we could set different threshholds for statements like "video calls have been with us for a while, except they were rarely used" -- EG, the interpretation of that statement which refers to pre-1990s, vs the interpretation that refers to pre-2020s.

You frame the use-case for the terminology as how we talk about failure modes when we critique. A second important use-case is how we talk about our plan. For example, the inner/outer dichotomy might not be very useful for describing a classifier which learned to detect sunny-vs-cloudy instead of tank-vs-no-tank (IE learned a simpler thing which was correlated with the data labels). But someone's plan for building safe AI might involve separately solving inner alignment and outer alignment, because if we can solve those parts, it seems plausible we can put them together to solve the whole. (I am not trying to argue for the inner/outer distinction, I am only using it as an example.)

From this (specific) perspective, the "root causes" framing seems useless. It does not help from a solution-oriented perspective, because it does not suggest any decomposition of the AI safety problem. 

So, I would suggest thinking about how you see the problem of AI safety as decomposing into parts. Can the various problems be framed in such a way that if you solved all of them, you would have solved the whole problem? And if so, does the decomposition seem feasible? (Do the different parts seem easier than the whole problem, perhaps unlike the inner/outer decomposition?)

Attempting to write out the holes in my model. 

  • You point out that looking for a perfect reward function is too hard; optimization searches for upward errors in the rewards to exploit. But you then propose an RL scheme. It seems to me like it's still a useful form of critique to say: here are the upward errors in the proposed rewards, here is the policy that would exploit them.
  • It seems like you have a few tools to combat this form of critique:
    • Model capacity. If the policy that exploits the upward errors is too complex to fit in the model, it cannot be learned. Or, more refined: if the proposed exploit-policy has features that make it difficult to learn (whether complexity, or some other special features). 
      • I don't think you ever invoke this in your story, and I guess maybe you don't even want to, because it seems hard to separate the desired learned behavior from undesired learned behavior via this kind of argument. 
    • Path-dependency. What is learned early in training might have an influence on what is learned later in training.
      • It seems to me like this is your main tool, which you want to invoke repeatedly during your story.
  • I am very uncertain about how path-dependency works.
    • It seems to me like this will have a very large effect on smaller neural networks, but a vanishing effect on larger and larger neural networks, because as neural networks get large, the gradient updates get smaller and smaller, and stay in a region of parameter-space where loss is more approximately linear. This means that much larger networks have much less path-dependency. 
      • Major caveat: the amount of data is usually scaled up with the size of the network. Perhaps the overall amount of path-dependency is preserved. 
        • This contradicts some parts of your story, where you mention that not too much data is needed due to pre-training. However, you did warn that you were not trying to project confidence about story details. Perhaps lots of data is required in these phases to overcome the linearity of large-model updates, IE, to get the path-dependent effects you are looking for. Or perhaps tweaking step size is sufficient.
    • The exact nature of path-dependence seems very unclear.
      • In the shard language, it seems like one of the major path-dependency assumptions for this story is that gradient descent (GD) tends to elaborate successful shards rather than promote all relevant shards.
        • It seems unclear why this would happen in any one step of GD. At each neuron, all inputs which would have contributed to the desired result get strengthened. 
          • My model of why very large NNs generalize well is that they effectively approximate bayesian learning, hedging their bets by promoting all relevant hypotheses rather than just one. 
          • An alternate interpretation of your story is that ineffective subnetworks are basically scavenged for parts by other subnetworks early in training, so that later on, the ineffective subnetworks don't even have a chance. This effect could compound on itself as the remaining not-winning subnetworks have less room to grow (less surrounding stuff to scavenge to improve their own effective representation capacity).
            • Shards that work well end up increasing their surface area, and surface area determines a kind of learning rate for a shard (via the shard's ability to bring other subnetworks under its gradient influence), so there is a compounding effect. 
      • Another major path-dependency assumption seems to be that as these successful shards develop, they tend to have a kind of value-preservation. (Relevant comment.)
        • EG, you might start out with very simple shards that activate when diamonds are visible and vote to walk directly toward them. These might elaborate to do path-planning toward visible diamonds, and then to do path-planning to any diamonds which are present in the world-model, and so on, upwards in sophistication. So you go from some learned behavior that could be anthropomorphized as diamond-seeking, to eventually having a highly rational/intelligent/capable shard which really does want diamonds.
        • Again, I'm unclear on whether this can be expected to happen in very large networks, due to the lottery ticket hypothesis. But assuming some path-dependency, it is unclear to me whether it will word like this. 
        • The claim would of course be very significant if true. It's a different framework, but effectively, this is a claim about ontological shifts - as you more-or-less flag in your major open questions. 
        • While I find the particular examples intuitive, the overall claim seems too good to be true: effectively, that the path-dependencies which differentiate GD learning from ideal Bayesian learning are exactly the tool we need for alignment. 

So, it seems to me, the most important research questions to figure out if a plan like this could be feasible revolve around the nature and existence of path-dependencies in GD, especially for very large NNs.

Load More