Same person as nostalgebraist2point0, but now I have my account back.
Elsewhere:
I have signed no contracts or agreements whose existence I cannot mention.
Yeah, you're totally right -- actually, I was reading over that section just now and thinking of adding a caveat about just this point, but I worried it would be distracting.
But this is just a flaw in my example, not in the point it's meant to illustrate (?), which is hopefully clear enough despite the flaw.
Very interesting paper!
A fun thing to think about: the technique used to "attack" CLIP in section 4.3 is very similar to the old "VQGAN+CLIP" image generation technique, which was very popular in 2021 before diffusion models really took off.
VQGAN+CLIP works in the latent space of a VQ autoencoder, rather than in pixel space, but otherwise the two methods are nearly identical.
For instance, VQGAN+CLIP also ascends the gradient of the cosine similarity between a fixed prompt vector and the CLIP embedding of the image averaged over various random augmentations like jitter/translation/etc. And it uses an L2 penalty in the loss, which (via Karush–Kuhn–Tucker) means it's effectively trying to find the best perturbation within an $\epsilon$-ball, albeit with an implicitly determined $\epsilon$ that varies from example to example.
I don't know if anyone tried using downscaling-then-upscaling the image by varying extents as a VQGAN+CLIP augmentation, but people tried a lot of different augmentations back in the heyday of VQGAN+CLIP, so it wouldn't surprise me.
(One of the augmentations that was commonly used with VQGAN+CLIP was called "cutouts," which blacks out everything in the image except for a randomly selected rectangle. This obviously isn't identical to the multi-resolution thing, but one might argue that it achieves some of the same goals: both augmentations effectively force the method to "use" the low-frequency Fourier modes, creating interesting global structure rather than a homogeneous/incoherent splash of "textured" noise.)
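To make the comparison concrete, here's a minimal sketch of the VQGAN+CLIP-style loss I have in mind. The names vqgan_decode, clip_encode_image, and text_embed are placeholders of mine (stand-ins for the real model components), and the jitter augmentation is just illustrative:

```python
import torch
import torch.nn.functional as F

def vqgan_clip_loss(latent, vqgan_decode, clip_encode_image, text_embed,
                    n_augments=8, l2_weight=0.1):
    """Simplified VQGAN+CLIP objective: maximize cosine similarity between the
    CLIP embedding of the (augmented) decoded image and a fixed prompt
    embedding, with an L2 penalty on the latent perturbation."""
    image = vqgan_decode(latent)  # decode VQ latent to pixels
    sims = []
    for _ in range(n_augments):
        # crude stand-in for the jitter/translation augmentations used in practice
        dx, dy = torch.randint(-8, 9, (2,)).tolist()
        augmented = torch.roll(image, shifts=(dx, dy), dims=(-2, -1))
        sims.append(F.cosine_similarity(clip_encode_image(augmented),
                                        text_embed, dim=-1))
    # descend this loss = ascend the augmentation-averaged similarity,
    # while the L2 term keeps the latent from wandering too far
    return -torch.stack(sims).mean() + l2_weight * latent.pow(2).mean()
```

The real implementations used fancier augmentations and optimizers, but the core objective was essentially this.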
Cool, it sounds like we basically agree!
But: if that is the case, it's because the entities designing/becoming powerful agents considered the possibility of con men manipulating the UP, and so made sure that they're not just naively using the unfiltered (approximation of the) UP.
I'm not sure of this. It seems at least possible that we could get an equilibrium where everyone does use the unfiltered UP (in some part of their reasoning process), trusting that no one will manipulate them because (a) manipulative behavior is costly and (b) no one has any reason to expect anyone else will reason differently from them, so if you choose to manipulate someone else you're effectively choosing that someone else will manipulate you.
Perhaps I'm misunderstanding you. I'm imagining something like choosing one's own decision procedure in TDT, where one ends up choosing a procedure that involves "the unfiltered UP" somewhere, and which doesn't do manipulation. (If your procedure involved manipulation, so would your copy's procedure, and you would get manipulated; you don't want this, so you don't manipulate, nor does your copy.) But you write
the real difference is that the "dupe" is using causal decision theory, not functional decision theory
whereas it seems to me that TDT/FDT-style reasoning is precisely what allows us to "naively" trust the UP, here, without having to do the hard work of "filtering." That is: this kind of reasoning tells us to behave so that the UP won't be malign; hence, the UP isn't malign; hence, we can "naively" trust it, as though it weren't malign (because it isn't).
More broadly, though -- we are now talking about something that I feel like I basically understand and basically agree with, and just arguing over the details, which is very much not the case with standard presentations of the malignity argument. So, thanks for that.
The universal distribution/prior is lower semi-computable, meaning there is one Turing machine that can approximate it from below, converging to it in the limit. Also, there is a probabilistic Turing machine that induces the universal distribution. So there is a rather clear sense in which one can “use the universal distribution.”
Thanks for bringing this up.
However, I'm skeptical that lower semi-computability really gets us much. While there is a TM that converges to the UP, we have no (computable) way of knowing how close the approximation is at any given time. That is, the UP is not "estimable" as defined in Hutter 2005.
So if someone hands you a TM and tells you it lower semi-computes the UP, this TM is not going to be of much use to you: its output at any finite time could be arbitrarily bad for whatever calculation you're trying to do, and you'll have no way of knowing.
In other words, while you may know the limiting behavior, this information tells you nothing about what you should expect to observe, because you can never know whether you're "in the limit yet," so to speak. (If this language seems confusing, feel free to ignore the previous sentence -- for some reason it clarifies things for me.)
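To make the obstruction concrete, here's a toy sketch of what lower semi-computing a UP-style quantity looks like. The helper halts_within(p, x, t) is hypothetical, standing in for step-bounded simulation of a universal machine:

```python
def up_lower_approximation(x, programs, max_steps, halts_within):
    """Monotone-from-below estimate of the universal prior mass on x:
    sum of 2^(-len(p)) over programs p observed to output x within max_steps
    steps. Enlarging `programs` or `max_steps` only ever adds terms, so the
    estimate converges to the true value from below -- but nothing here tells
    you how much mass is still missing at any finite stage."""
    total = 0.0
    for p in programs:
        if halts_within(p, x, max_steps):  # does p output x within the step budget?
            total += 2.0 ** (-len(p))
    return total
```

The estimate only ever increases and converges in the limit, but the gap between it and the true value at any finite stage is itself uncomputable -- which is exactly the non-"estimability" problem.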
(It's possible that this non-"estimability" property is less bad than I think for some reason, IDK.)
I'm less familiar with the probabilistic Turing machine result, but my hunch is that one would face a similar obstruction with that TM as well.
Thanks.
I admit I'm not closely familiar with Tegmark's views, but I know he has considered two distinct things that might be called "the Level IV multiverse": the full ensemble of all mathematical structures, and a restricted ensemble containing only computable structures (his "Computable Universe Hypothesis").
(I'm getting this from his paper here.)
In particular, Tegmark speculates that the computable universe is "distributed" following the UP (as you say in your final bullet point). This would mean e.g. that one shouldn't be too surprised to find oneself living in a TM of any given K-complexity, despite the fact that "almost all" TMs have higher complexity (in the same sense that "almost all" natural numbers are greater than any given number $n$).
When you say "Tegmark IV," I assume you mean the computable version -- right? That's the thing which Tegmark says might be distributed like the UP. If we're in some uncomputable world, the UP won't help us "locate" ourselves, but if the world has to be computable then we're good[1].
With that out of the way, here is why this argument feels off to me.
First, Tegmark IV is an ontological idea, about what exists at the "outermost layer of reality." There's no one outside of Tegmark IV who's using it to predict something else; indeed, there's no one outside of it at all; it is everything that exists, full stop.
"Okay," you might say, "but wait -- we are all somewhere inside Tegmark IV, and trying to figure out just which part of it we're in. That is, we are all attempting to answer the question, 'what happens when you update the UP on my own observations?' So we are all effectively trying to 'make decisions on the basis of the UP,' and vulnerable to its weirdness, insofar as it is weird."
Sure. But in this picture, "we" (the UP-using dupes) and "the consequentialists" are on an even footing: we are both living in some TM or other, and trying to figure out which one.
In which case we have to ask: why would such entities ever come to such a destructive, bad-for-everyone (acausal) agreement?
Presumably the consequentialists don't want to be duped; they would prefer to be able to locate themselves in Tegmark IV, and make decisions accordingly, without facing such irritating complications.
But, by writing to "output channels"[2] in the malign manner, the consequentialists are simply causing the multiverse to be the sort of place where those irritating complications happen to beings like them (beings in TMs trying to figure out which TM they're in) -- and what's more, they're expending time and scarce resources to "purchase" this undesirable state of affairs!
In order for malignity to be worth it, we need something to break the symmetry between "dupes" (UP users) and "con men" (consequentialists), separating the world into two classes, so that the would-be con men can plausibly reason, "I may act in a malign way without the consequences raining down directly on my head."
We have this sort of symmetry-breaker in the version of the argument that postulates, by fiat, a "UP-using dupe" somewhere, for some reason, and then proceeds to reason about the properties of the (potentially very different, not "UP-using"?) guys inside the TMs. A sort of struggle between conniving, computable mortals and overly-innocent, uncomputable angels. Here we might argue that things really will go wrong for the angels, that they will be the "dupes" of the mortals, who are not like them and who do not themselves get duped. (But I think this form of the argument has other problems, the ones I described in the OP.)
But if the reason we care about the UP is simply that we're all in TMs, trying to find our location within Tegmark IV, then we're all in this together. We can just notice that we'd all be better off if no one did the malign thing, and then no one will do it[3].
In other words, in your picture (and Paul's), we are asked to imagine that the computable world abounds with malign, wised-up consequentialist con men, who've "read Paul's post" (i.e. re-derived similar arguments) and who appreciate the implications. But if so, then where are the marks? If we're not postulating some mysterious UP-using angel outside of the computable universe, then who is there to deceive? And if there's no one to deceive, why go to the trouble?
I don't think this distinction actually matters for what's below, I just mention it to make sure I'm following you.
I'm picturing a sort of acausal I'm-thinking-about-you-thinking-about-me situation in which, although I might never actually read what's written on those channels (after all, I am not "outside" Tegmark IV looking in), nonetheless I can reason about what someone might write there, and thus it matters what is actually written there. I'll only conclude "yeah that's what I'd actually see if I looked" if the consequentialists convince me they'd really pull the trigger, even if they're only pulling the trigger for the sake of convincing me, and we both know I'll never really look.
Note that, in the version of this picture that involves abstract generalized reasoning rather than simulation of specific worlds, defection is fruitless: if you are trying to manipulate someone who is just thinking about whether beings will do X as a general rule, you don't get anything out of raising your hand and saying "well, in reality, I will!" No one will notice; they aren't actually looking at you, ever, just at the general trend. And of course "they" know all this, which raises "their" confidence that no one will raise their hand; and "you" know that "they" know, which makes "you" less interested in raising that same hand; and so forth.
I hope I'm not misinterpreting your point, and sorry if this comment comes across as frustrated at some points.
I'm not sure you're misinterpreting me per se, but there are some tacit premises in the background of my argument that you don't seem to hold. Rather than responding point-by-point, I'll just say some more stuff about where I'm coming from, and we'll see if it clarifies things.
You talk a lot about "idealized theories." These can of course be useful. But not all idealizations are created equal. You have to actually check that your idealization is good enough, in the right ways, for the sorts of things you're asking it to do.
In physics and applied mathematics, one often finds oneself considering a system that looks like

(some simpler "base" system) + (some additional nuance).

We quantify the size of the additional nuance with a small parameter $\epsilon$. If $\epsilon$ is literally 0, that's just the base system, but we want to go a step further: we want to understand what happens when the nuance is present, just very small. So, we do something like formulating the solution as a power series in $\epsilon$, and truncating to first order. (This is perturbation theory, or more generally asymptotic analysis.)
This sort of approximation gets better and better as $\epsilon$ gets closer to 0, because this magnifies the difference in size between the truncated terms (of size $\epsilon^2$ and smaller) and the retained term. In some sense, we are studying the $\epsilon \to 0$ limit.
But we're specifically interested in the behavior of the system given a nonzero, but arbitrarily small, value of $\epsilon$. We want an approximation that works well if $\epsilon = 0.1$, and even better if $\epsilon = 0.01$, and so on. We don't especially care about the literal $\epsilon = 0$ case, except insofar as it sheds light on the very-small-but-nonzero behavior.
Now, sometimes the limit of the "very-small-but-nonzero behavior" simply is the $\epsilon = 0$ behavior, the base system. That is, what you get at very small $\epsilon$ looks just like the base system, plus some little $\epsilon$-sized wrinkle.
But sometimes – in so-called "singular perturbation" problems – it doesn't. Here the system has qualitatively different behavior from the base system given any nonzero $\epsilon$, no matter how small.
Typically what happens is that $\epsilon$ ends up determining, not the magnitude of the deviation from the base system, but the "scale" of that deviation in space and/or time. So that in the $\epsilon \to 0$ limit, you get behavior with an $O(1)$-sized difference from the base system's behavior that's constrained to a tiny region of space and/or oscillating very quickly.
Boundary layers in fluids are a classic example. Boundary layers are tiny pockets of distinct behavior, occurring only in small $\epsilon$-sized regions and not in most of the fluid. But they make a big difference just by being present at all. Knowing that there's a boundary layer around a human body, or an airplane wing, is crucial for predicting the thermal and mechanical interactions of those objects with the air around them, even though it takes up a tiny fraction of the available space, the rest of which is filled by the object and by non-boundary-layer air. (Meanwhile, the planetary boundary layer is tiny relative to the earth's full atmosphere, but, uh, we live in it.)
In the former case ("regular perturbation problems"), "idealized" reasoning about the $\epsilon = 0$ case provides a reliable guide to the small-but-nonzero behavior. We want to go further, and understand the small-but-nonzero effects too, but we know they won't make a qualitative difference.
In the singular case, though, the "idealization" is qualitatively, catastrophically wrong. If you make an idealization that assumes away the possibility of boundary layers, then you're going to be wrong about what happens in a fluid – even about the big, qualitative, stuff.
You need to know which kind of case you're in. You need to know whether you're assuming away irrelevant wrinkles, or whether you're assuming away the mechanisms that determine the high-level, qualitative, stuff.
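A standard toy example of the singular case, not specific to our situation: consider the root-finding problem

\[
\epsilon x^2 + 2x - 1 = 0, \qquad x_\pm = \frac{-1 \pm \sqrt{1+\epsilon}}{\epsilon}, \qquad x_+ \approx \tfrac{1}{2} - \tfrac{\epsilon}{8}, \quad x_- \approx -\tfrac{2}{\epsilon} - \tfrac{1}{2}.
\]

The idealized $\epsilon = 0$ equation, $2x - 1 = 0$, captures $x_+$ just fine, but $x_-$ runs off to infinity as $\epsilon \to 0$ and is simply invisible from inside the idealization.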
Back to the situation at hand.
In reality, TMs can only do computable stuff. But for simplicity, as an "idealization," we are considering a model where we pretend they have a UP oracle, and can exactly compute the UP.
We are justifying this by saying that the TMs will try to approximate the UP, and that this approximation will be very good. So, the approximation error is an $\epsilon$-sized "additional nuance" in the problem.
Is this more like a regular perturbation problem, or more like a singular one? Singular, I think.
The $\epsilon = 0$ case, where the TMs can exactly compute the UP, is a problem involving self-reference. We have a UP containing TMs, which in turn contain the very same UP.
Self-referential systems have a certain flavor, a certain "rigidity." (I realize this is vague, sorry, I hope it's clear enough what I mean.) If we have some possible behavior of the system, most ways of modifying it (even slightly) will not produce behaviors which are themselves possible. The effect of the modification as it "goes out" along the self-referential path has to precisely match the "incoming" difference that would be needed to cause exactly this modification in the first place.
"Stable time loop"-style time travel in science fiction is an example of this; it's difficult to write, in part because of this "rigidity." (As I know from experience :)
On the other hand, the situation with a small-but-nonzero $\epsilon$ is quite different.
With literal self-reference, one might say that "the loop only happens once": we have to precisely match up the outgoing effects ("UP inside a TM") with the incoming causes ("UP[1] with TMs inside"), but then we're done. There's no need to dive inside the UP that happens within a TM and study it, because we're already studying it, it's the same UP we already have at the outermost layer.
But if the UP inside a given TM is merely an approximation, then what happens inside it is not the same as the UP we have at the outermost layer. It does not contain the same TMs we already have.
It contains some approximate thing, which (and this is the key point) might need to contain an even more coarsely approximated UP inside of its approximated TMs. (Our original argument for why approximation is needed might hold, again and equally well, at this level.) And the next level inside might be even more coarsely approximated, and so on.
To determine the behavior of the outermost layer, we now need to understand the behavior of this whole series, because each layer determines what the next one up will observe.
Does the series tend toward some asymptote? Does it reach a fixed point and then stay there? What do these asymptotes, or fixed points, actually look like? Can we avoid ever reaching a level of approximation that's no longer $O(\epsilon)$ but $O(1)$, even as we descend through an $O(1/\epsilon)$ number of series iterations?
I have no idea! I have not thought about it much. My point is simply that you have to consider the fact that approximation is involved in order to even ask the right questions, about asymptotes and fixed points and such. Once we acknowledge that approximation is involved, we get this series structure and care about its limiting behavior; this qualitative structure is not present at all in the idealized case where we imagine the TMs have UP oracles.
I also want to say something about the size of the approximations involved.
Above, I casually described the approximation errors as $O(\epsilon)$, and imagined an $\epsilon \to 0$ limit.
But in fact, we should not imagine that these errors can come as close to zero as we like. The UP is uncomputable, and involves running every TM at once[2]. Why would we imagine that a single TM can approximate this arbitrarily well?[3]
Like the gap between the finite and the infinite, or between polynomial and exponential runtime, the gap between the uncomputable and the computable is not to be trifled with.
Finally: the thing we get when we equip all the TMs with UP oracles isn't the UP, it's something else. (As far as I know, anyway.) That is, the self-referential quality of this system is itself only approximate (and it is by no means clear that the approximation error is small – why would it be?). If we have the UP at the bottom, inside the TMs, then we don't have it at the outermost layer. Ignoring this distinction is, I guess, part of the "idealization," but it is not clear to me why we should feel safe doing so.
The thing outside the TMs here can't really be the UP, but I'll ignore this now and bring it up again at the end.
In particular, running them all at once and actually using the outputs, at some ("finite") time at which one needs the outputs for making a decision. It's possible to run every TM inside of a single TM, but only by incurring slowdowns that grow without bound across the series of TMs; this approach won't get you all the information you need, at once, at any finite time.
There may be some result along these lines that I'm unaware of. I know there are results showing that the UP and SI perform well relative to the best computable prior/predictor, but that's not the same thing. Any given computable prior/predictor won't "know" whether or not it's the best out of the multitude, or how to correct itself if it isn't; that's the value added by UP / SI.
I agree with you that these behaviors don't seem very alarming. In fact, I would go even further.
Unfortunately, it's difficult to tell exactly what was going on in these screenshots. They don't correspond to anything in the experiment logs in the released codebase, and the timeout one appears to involve an earlier version of the code where timeouts were implemented differently. I've made a github issue asking for clarification about this.
That said, as far as I can tell, here is the situation with the timeout-related incident:
- During each run, AI Scientist iteratively edits a script called experiment.py, starting from a human-written "template."
- The timeout is implemented in the harness code, in ai_scientist/perform_experiments.py. This is a completely different thing from experiment.py -- one is the experiment script that's being edited, the other is the harness that it runs in.
- As far as I can tell, AI Scientist is never asked to modify ai_scientist/perform_experiments.py, and nothing in the released code or logs shows it doing so.
- In the screenshot, AI Scientist is writing code in experiment.py as usual. The timeout check it touches works by calling time.time() and comparing the result to a variable named og_t0. This looks nothing like how the timeout is implemented in the released harness code, but it does look like the timing code that's included in the released experiment.py templates (which includes the variable name og_t0).
- The released templates don't implement a timeout in their og_t0-related sections, but I suppose some earlier version of them might have. If so, modifying this stuff would very clearly not constitute "AI Scientist modifying its own code / its own execution script." It would be modifying a part of the experiment that it's conducting, as it was asked to do.
- Alternatively, maybe an earlier version of the harness used og_t0-based timeout code -- no way to know for sure -- but even then, this would still not constitute AI Scientist "modifying its own execution script."

The situation in the self-invoking edit is much simpler. It looks like AI Scientist was just confused by the (confusing, ambiguous) prompt template, and interpreted "After you complete each change, we will run the command python experiment.py --out_dir=run_i" to mean "the change should include causing that command to be run, if that's not happening already."
(By "we will run" the prompt means "the code will be run for you, by a process external to you that you can't see, and you don't have to do anything to make this happen, it's already set up." But this is not explained clearly.
Note that the prompt kind of encourages this confusion by adding "YOUR PROPOSED CHANGE MUST USE THIS COMMAND FORMAT, DO NOT ADD ADDITIONAL COMMAND LINE ARGS." By this it means that the proposed change to the code implementing the command must be compatible with this command format, but the phrase "use this command format" sounds like it's talking about invoking the command, which is precisely the "bad" behavior under discussion.)
That was a lot of details, but at a higher level I just want to emphasize that nothing here involved the AI violating some line between a sandbox and the outside world, or modifying a resource limit that was imposed "as a rule of the game the AI is playing" rather than as a mere "move" in that same game which could be made one way or another -- or anything like that.
There was no sandbox. There was (as far as we know) no rule against modifying the timeout, or even any indication that this might be a different sort of thing than any other part of the code.
And (to address something from OP) the checkpoint thing was just the AI being dumb, wasting time and storage space for no good reason. This is very obviously not a case of "using extra resources" in the sense relevant to instrumental convergence. I'm surprised that this needs pointing out at all, but apparently it does.
What does the paper mean by "slope"? The term appears in Fig. 4, which is supposed to be a diagram of the overall methodology, but it's not mentioned anywhere in the Methods section.
Intuitively, it seems like "slope" should mean "slope of the regression line." If so, that's kind of a confusing thing to pair with "correlation," since the two are related: if you know the correlation, then the only additional information you get from the regression slope is the ratio of the standard deviations of the dependent and independent variables. (And IIUC the independent variable here is normalized to unit variance [?] so it's really just about the variance of the benchmark score across models.)
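Concretely (a standard OLS fact, in my notation): if $r$ is the correlation, $\sigma_x$ the standard deviation of the capabilities score, and $\sigma_y$ that of the benchmark score, the regression slope is

\[
\hat{\beta} = r \, \frac{\sigma_y}{\sigma_x},
\]

so once $\sigma_x$ is normalized to 1, the slope is just the correlation rescaled by the spread of the benchmark score across models.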
I understand why you'd care about the variance of the benchmark score -- if it's tiny, then no one is going to advertise their model as a big advance, irrespective of how much of the (tiny) variance is explained by capabilities. But IMO it would be clearer to present things this way directly, by measuring "correlation with capabilities" and "variance across models," rather than capturing the latter indirectly by measuring the regression slope. (Or alternatively, measure only the slope, since it seems like the point boils down to "a metric can be used for safetywashing iff it increases a lot when you increase capabilities," which is precisely what the slope quantifies.)
The bar for Nature papers is in many ways not so high. The latest one says that if you train indiscriminately on recursively generated data, your model will probably exhibit what they call model collapse. They purport to show that the amount of such content on the Web is enough to make this a real worry, rather than something that happens only if you employ some obviously stupid intentional recursive loops.
According to Rylan Schaeffer and coauthors, this doesn't happen if you append the generated data to the rest of your training data and train on this (larger) dataset. That is:
- model collapse happens when, at each generation, you throw away the earlier data and train only on the previous model's outputs ("replacing" the data);
- it doesn't happen when each generation's synthetic data is simply added to a growing pool that still contains the original data ("accumulating" the data).
As a simplified model of what will happen in future Web scrapes, the latter seems obviously more appropriate than the former.
I found this pretty convincing.
(See tweet thread and paper.)
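In pseudocode, the two regimes being contrasted look roughly like this; train and generate are placeholders for an actual training/generation pipeline, not anything from either paper:

```python
def replace_regime(real_data, train, generate, n_generations):
    """Each generation trains only on the previous model's outputs;
    the earlier data is thrown away (the collapse-prone setting)."""
    data = real_data
    model = train(data)
    for _ in range(n_generations):
        data = generate(model, n=len(real_data))  # earlier data discarded
        model = train(data)
    return model

def accumulate_regime(real_data, train, generate, n_generations):
    """Each generation's synthetic data is appended to a growing pool that
    still contains the original real data (the setting Schaeffer et al. study)."""
    data = list(real_data)
    model = train(data)
    for _ in range(n_generations):
        data = data + generate(model, n=len(real_data))  # pool keeps growing
        model = train(data)
    return model
```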
To be clear, it did not do anything of the sort to its actual developers/testers.
What it did was deceive some (non-interactive) roleplay characters, who were labeled "developers" in the roleplay scenario. But these fictitious developers did not have the same points of leverage as the real developers of o1: they apparently can't do something as simple as prompting the model! For if they could have done that, then they could have selected the goal which the model would (faithfully!) pursue, like Apollo did with their prompt.
And of course -- for the deception to work within the roleplay scenario -- one also has to imagine that these so-called "developers" somehow cannot read the model's chain of thought. (As OpenAI puts it, "based on these findings, Apollo Research believes that o1-preview has the basic capabilities needed to do simple in-context scheming—scheming which tends to be legible in the model outputs." Obviously this sort of thing is toothless as-is, and at best provides an indication about the qualities of other, hypothetical models which do not have to spell out all their evil plans within earshot of their would-be marks.)
As noted by others, the roleplay setup arguably makes sense as a capabilities eval, testing what would happen in a hypothetical and counterfactual scenario in which some model with o1's capabilities has its own misaligned goal (as o1 seemingly does not), and hides its deceptive logic from its developers (as o1 structurally cannot). But the experimental methodology depends on the things in the quotation above not occurring in real life, not occurring when the model is being controlled and observed using the levers and sensors available to its real-life developers.
We don't get to say "look, the Yudkowskian prophecies are coming true!" if the evidence is an experiment that relies on the Yudkowskian prophecies not yet having come true, in the name of simulating a counterfactual scenario in which they have.
(One could construct a roleplay scenario in which the model is told to behave in some way we find more to our taste, and in which it goes on to do so; it would of course be wrong to react to such an experiment with "suck it, doomers.")
OK, but now this isn't "deceptive alignment" or "lying to the developers," this is doing what the user said and perhaps lying to someone else as a consequence.
Which might be bad, sure! -- but the goalposts have been moved. A moment ago, you were telling me about "misalignment bingo" and how "such models should be assumed, until proven otherwise, to be schemers." Now you are saying: beware, it will do exactly what you tell it to!
So is it a schemer, or isn't it? We cannot have it both ways: the problem cannot both be "it is lying to you when it says it's following your instructions" and "it will faithfully follow your instructions, which is bad."
Meta note: I find I am making a lot of comments similar to this one, e.g. this recent one about AI Scientist. I am increasingly pessimistic that these comments are worth the effort.
I have the sense that I am preaching to some already-agreeing choir (as evidenced by the upvotes and reacts these comments receive), while not having much influence on the people who would make the claims I am disputing in the first place (as evidenced by the clockwork regularity of those claims' appearance each time someone performs the sort of experiment which those claims misconstrue).
If you (i.e. anyone reading this) find this sort of comment valuable in some way, do let me know. Otherwise, by default, when future opportunities arise I'll try to resist the urge to write such things.