All of evhub's Comments + Replies

"Decision Transformer" (Tool AIs are secret Agent AIs)

(Moderation note: added to the Alignment Forum from LessWrong.)

Survey on AI existential risk scenarios

(Moderation note: added to the Alignment Forum from LessWrong.)

Game-theoretic Alignment in terms of Attainable Utility

(Moderation note: added to the Alignment Forum from LessWrong.)

Thoughts on the Alignment Implications of Scaling Language Models
evhub9d11Ω6

(Moderation note: added to Alignment Forum from LessWrong.)

List of good AI safety project ideas?
Answer by evhubMay 27, 20215Ω4

Though they're both somewhat outdated at this point, there are certainly still some interesting concrete experiment ideas to be found in my “Towards an empirical investigation of inner alignment” and “Concrete experiments in inner alignment.”

Agency in Conway’s Game of Life

Have you seen “Growing Neural Cellular Automata?” It seems like the authors there are trying to do something pretty similar to what you have in mind here.

6alexflint20dYes - I found that work totally wild. Yes they are setting up a cellular automata in such a way that it evolves towards and then fixates at a target state, but iirc what they are optimizing over is the rules of the automata itself, rather than over a construction within the automata.
Knowledge Neurons in Pretrained Transformers

Knowledge neurons don't seem to include all of the model's knowledge about a given question. Cutting them out only decreases the probability on the correct answer by 40%.

Yeah, agreed—though I would still say that finding the first ~40% of where knowledge of a particular fact is stored counts as progress (though I'm not saying they have necessarily done that).

I don't think there's evidence that these knowledge neurons don't do a bunch of other stuff. After removing about 0.02% of neurons they found that the mean probability on other correct answers dec

... (read more)
Formal Inner Alignment, Prospectus
evhub1mo17Ω12

My third and final example: in one conversation, someone made a claim which I see as "exactly wrong": that we can somehow lower-bound the complexity of a mesa-optimizer in comparison to a non-agentic hypothesis (perhaps because a mesa-optimizer has to have a world-model plus other stuff, where a regular hypothesis just needs to directly model the world). This idea was used to argue against some concern of mine.

The problem is precisely that we know of no way of doing that! If we did, there would not be any inner alignment problem! We could just focus on th

... (read more)
3michaelcohen1moAgree.
8abramdemski1moIf the universal prior were benign but NNs were still potentially malign, I think I would argue strongly against the use of NNs and in favor of more direct approximations of the universal prior. But, I agree this is not 100% obvious; giving up prosaic AI is giving up a lot. Hopefully my final write-up won't contain so much polemicizing about what "the main point" is, like this write-up, and will instead just contain good descriptions of the various important problems.
Mundane solutions to exotic problems

Your link is broken.

For reference, the first post in Paul's ascription universality sequence can be found here (also Adam has a summary here).

2adamShimi1moSorry about that. I corrected it but it was indeed the first link you gave.
Why Neural Networks Generalise, and Why They Are (Kind of) Bayesian

I guess I would say something like: random search is clearly a pretty good first-order approximation, but there are also clearly second-order effects. I think that exactly how strong/important/relevant those second-order effects are is unclear, however, and I remain pretty uncertain there.

[AN #148]: Analyzing generalization across more axes than just accuracy or loss

Read more: Section 1.3 of this version of the paper

This is in the wrong spot.

2rohinmshah1moHuh, weird. Probably a data entry error on my part. Fixed, thanks for catching it.
Covid 4/22: Crisis in India

Is there ways to share this with EAs?

You could write a post about it on the EA Forum.

1CraigMichael2moWoah. It looks just like LessWrong. :)
NTK/GP Models of Neural Nets Can't Learn Features

(moved from LW to AF)

Meta: I'm going to start commenting this on posts I move from LW to AF just so there's a better record of what moderation actions I'm taking.

4Ben Pace2moThx for doing this!
6habryka2moThis seems like a great thing to do. Mild preference for calling it "added it to AF", just so that people don't get confused that this means content will disappear from LW.
Homogeneity vs. heterogeneity in AI takeoff scenarios

I suppose the distinction between "strong" and "weak" warning shots would matter if we thought that we were getting "strong" warning shots. I want to claim that most people (including Evan) don't expect "strong" warning shots, and usually mean the "weak" version when talking about "warning shots", but perhaps I'm just falling prey to the typical mind fallacy.

I guess I would define a warning shot for X as something like: a situation in which a deployed model causes obvious, real-world harm due to X. So “we tested our model in the lab and found deception”... (read more)

6rohinmshah2moI don't automatically exclude lab settings, but other than that, this seems roughly consistent with my usage of the term. (And in particular includes the "weak" warning shots discussed above.)
Open Problems with Myopia

Yes; episode is correct there—the whole point of that example is that, by breaking the episodic independence assumption, otherwise hidden non-myopia can be revealed. See the discussion of the prisoner's dilemma unit test in Krueger et al.'s “Hidden Incentives for Auto-Induced Distributional Shift” for more detail on how breaking this sort of episodic independence plays out in practice.

Open Problems with Myopia

Yeah, I agree—the example should probably just be changed to be about an imitative amplification agent or something instead.

Open Problems with Myopia

I think that trying to encourage myopia via behavioral incentives is likely to be extremely difficult, if not impossible (at least without a better understanding of our training processes' inductive biases). Krueger et al.'s “Hidden Incentives for Auto-Induced Distributional Shift” is a good resource for some of the problems that you run into when you try to do that. As a result, I think that mechanistic incentives are likely to be necessary—and I personally favor some form of relaxed adversarial training—but that's going to require us to get a better unde... (read more)

MIRI comments on Cotra's "Case for Aligning Narrowly Superhuman Models"

Possibly relevant here is my transparency trichotomy between inspection transparency, training transparency, and architectural transparency. My guess is that inspection transparency and training transparency would mostly go in your “active transparency” bucket and architectural transparency would mostly go in your “passive transparency” bucket. I think there is a position here that makes sense to me, which is perhaps what you're advocating, that architectural transparency isn't relying on any sort of path-continuity arguments in terms of how your training ... (read more)

MIRI comments on Cotra's "Case for Aligning Narrowly Superhuman Models"

Fwiw, I also agree with Adele and Eliezer here and just didn't see Eliezer's comment when I was giving my comments.

Formal Solution to the Inner Alignment Problem

Sure—by that definition of realizability, I agree that's where the difficulty is. Though I would seriously question the practical applicability of such an assumption.

Formal Solution to the Inner Alignment Problem

Perhaps I just totally don't understand what you mean by realizability, but I fail to see how realizability is relevant here. As I understand it, realizability just says that the true model has some non-zero prior probability—but that doesn't matter (at least for the MAP, which I think is a better model than the full posterior for how SGD actually works) as long as there's some deceptive model with greater prior probability that's indistinguishable on the training distribution, as in my simple toy model from earlier.

4Vanessa Kosoy3moWhen talking about uniform (worst-case) bounds, realizability just means the true environment is in the hypothesis class, but in a Bayesian setting (like in the OP) it means that our bounds scale with the probability of the true environment in the prior. Essentially, it means we can pretend the true environment was sampled from the prior. So, if (by design) training works by sampling environments from the prior, and (by realizability) deployment also consists of sampling an environment from the same prior, training and deployment are indistinguishable.
Formal Solution to the Inner Alignment Problem

Yeah, that's a fair objection—my response to that is just that I think that preventing a model from being able to distinguish training and deployment is likely to be impossible for anything competitive.

2Vanessa Kosoy3moOkay, but why? I think that the reason you have this intuition is, the realizability assumption is false. But then you should concede that the main weakness of the OP is the realizability assumption rather than the difference between deep learning and Bayesianism.
Formal Solution to the Inner Alignment Problem

Here's a simple toy model. Suppose you have two agents that internally compute their actions as follows (perhaps with actual argmax replaced with some smarter search algorithm, but still basically structured as below):

Then, comparing the K-complexity of the two models, we get

and the problem becomes that both and will produce behavior that looks aligned on the training distribution, bu... (read more)

5Vanessa Kosoy3moI understand how this model explains why agents become unaligned under distributional shift. That's something I never disputed. However, I don't understand how this model applies to my proposal. In my proposal, there is no distributional shift, because (thanks to the realizability assumption) the real environment is sampled from the same prior that is used for training with C. The model can't choose to act deceptively during training, because it can't distinguish between training and deployment. Moreover, the objective I described is not complicated.
Formal Solution to the Inner Alignment Problem

It's not that simulating is difficult, but that encoding for some complex goal is difficult, whereas encoding for a random, simple goal is easy.

4Vanessa Kosoy3moThe reward function I was sketching earlier is not complex. Moreover, if you can encode M then you can encode B, no reason why the latter should have much greater complexity. If you can't encode M but can only encode something that produces M, then by the same token you can encode something that produces B (I don't think that's even a meaningful distinction tbh). I think it would help if you could construct a simple mathematical toy model of your reasoning here?
I'm still mystified by the Born rule

I don't think I would take that bet—I think the specific question of what UTM to use does feel more likely to be off-base than other insights I associate with UDASSA. For example, some things that I feel UDASSA gets right: a smooth continuum of happeningness that scales with number of clones/amount of simulation compute/etc., and simpler things being more highly weighted.

8So8res3moCool, thanks. Yeah, I don't have >50% on either of those two things holding up to philisophical progress (and thus, eg, I disagree that future theories need to agree with UDASSA on those fronts). Rather, happeningness-as-it-relates-to-multiple-simulations and happeningness-as-it-relates-to-the-simplicity-of-reality are precisely the sort of things where I claim Alice-style confusion, and where it seems to me like UDASSA is alledging answers while being unable to dissolve my confusions, and where I suspect UDASSA is not-even-wrong. (In fact, you listing those two things causes me to believe that I failed to convey the intended point in my analogy above. I lean towards just calling this 'progress' and dropping the thread here, though I'd be willing to give a round of feedback if you wanna try paraphrasing or otherwise falsifying my model instead. Regardless, hooray for a more precise articulation of a disagreement!)
I'm still mystified by the Born rule

Yeah—I think I agree with what you're saying here. I certainly think that UDASSA still leaves a lot of things unanswered and seems confused about a lot of important questions (embeddedness, uncomputable universes, what UTM to use, how to specify an input stream, etc.). But it also feels like it gets a lot of things right in a way that I don't expect a future, better theory to get rid of—that is, UDASSA feels akin to something like Newtonian gravity here, where I expect it to be wrong, but still right enough that the actual solution doesn't look too different.

8So8res3moNeat! I'd bet against that if I knew how :-) I expect UDASSA to look more like a red herring from the perspective of the future, with most of its answers revealed as wrong or not-even-wrong or otherwise rendered inapplicable by deep viewpoint shifts. Off the top of my head, a bet I might take is "the question of which UTM meta-reality uses to determine the simplicity of various realities was quite off-base" (as judged by, say, agreement of both EY and PC or their surrogates in 1000 subjective years). In fact, I'm curious for examples of things that UDASSA seems to get right, that you think better theories must improve upon. (None spring to my own mind. Though, one hypothesis I have is that I've so-deeply-internalized all the aspects of UDASSA that seem obviously-true to me (or that I got from some ancestor-theory), that the only things I can percieve under that label are the controversial things, such that I am not attributing to it some credit that it is due. For instance, perhaps you include various pieces of the updateless perspective under that umbrella while I do not.)
I'm still mystified by the Born rule

FWIW I have decent odds on "a thicker computer (and, indeed, any number of additional copies of exactly the same em) has no effect", and that's more obviously in contradiction with UDASSA.

Absolutely no effect does seem pretty counterintuitive to me, especially given that we know from QM that different levels of happeningness are at least possible.

Like, I continue to have the dualing intuitions "obviously more copies = more happening" and "obviously, setting aside how it's nice for friends to have backup copies in case of catastrophe, adding an identic

... (read more)

Absolutely no effect does seem pretty counterintuitive to me, especially given that we know from QM that different levels of happeningness are at least possible.

I also have that counterintuition, fwiw :-p


I have the sense that you missed my point wrt UDASSA, fwiw. Having failed once, I don't expect I can transmit it rapidly via the medium of text, but I'll give it another attempt.

This is not going to be a particularly tight analogy, but:


Alice is confused about metaethics. Alice has questions like "but why are good things good?" and "why should we car... (read more)

I'm still mystified by the Born rule

For another instance, I weakly suspect that "running an emulation on a computer with 2x-as-thick wires does not make them twice-as-happening" is closer to the truth than the opposite, in apparent contradiction with UDASSA.

I feel like I would be shocked if running a simulation on twice-as-thick wires made it twice as easy to specify you, according to whatever the “correct” UTM is. It seems to me like the effect there shouldn't be nearly that large.

6So8res3moThis is precisely the thought that caused me to put the word 'apparent' in that quote :-p. (In particular, I recalled the original UDASSA post asserting that it took that horn, and this seeming both damning-to-me and not-obviously-true-for-the-reason-you-state, and I didn't want to bog my comment down, so I threw in a hedge word and moved on.) FWIW I have decent odds on "a thicker computer (and, indeed, any number of additional copies of exactly the same em) has no effect", and that's more obviously in contradiction with UDASSA. Although, that isn't the name of my true objection. The name of my true objection is something more like "UDASSA leaves me no less confused, gives me no sense of "aha!", or enlightenment, or a-mystery-unraveled, about the questions at hand". Like, I continue to have the dualing intuitions "obviously more copies = more happening" and "obviously, setting aside how it's nice for friends to have backup copies in case of catastrophe, adding an identical em of my bud doesn't make the world better, nor make their experiences different (never mind stronger)". And, while UDASSA is a simple idea that picks a horse in that race, it doesn't... reveal to each intuition why they were confused, and bring them into unison, or something? Like, perhaps UDASSA is the answer and I simply have not yet figured out how to operate it in a way that reveals its secrets? But I also haven't seen anyone else operate it in a way that reveals the sort of things that seem-like-deconfusion-to-me, and my guess is that it's a red herring.
I'm still mystified by the Born rule

The only reason that sort of discarding works is because of decoherence (which is a probabilistic, thermodynamic phenomenon), and in fact, as a result, if you want to be super precise, discarding actually doesn't work, since the impact of those other eigenfunctions never literally goes to zero.

-1TAG3moMaybe, but decoherence doesn't imply MW.
I'm still mystified by the Born rule

I think I basically agree with all of this, though I definitely think that the problem that you're pointing to is mostly not about the Born rule (as I think you mostly state already), and instead mostly about anthropics. I do personally feel pretty convinced that at least something like UDASSA will serve the test of time on that front—it seems to me like you mostly agree with that, but just think that there are problems with embededness + figuring out how to properly extract a sensory stream that still need to be resolved, which I definitely agree with, bu... (read more)

6So8res3moI agree that the Born rule is just the poster child for the key remaining confusions (eg, I would have found it similarly natural to use the moniker "Hilbert space confusions"). I disagree about whether UDASSA contains much of the answer here. For instance, I have some probability on "physics is deeper than logic" being more-true-than-the-opposite in a way that ends up tossing UDASSA out the window somehow. For another instance, I weakly suspect that "running an emulation on a computer with 2x-as-thick wires does not make them twice-as-happening" is closer to the truth than the opposite, in apparent contradiction with UDASSA. More generally, I'm suspicious of the whole framework, and the "physics gives us hints about the UTM that meta-reality uses" line of attack feels to me like it has gone astray somewhere. (I have a bunch more model here, but don't want to go into it at the moment.) I agree that these questions likely go to the heart of population ethics as well as anthropics :-)
Formal Solution to the Inner Alignment Problem

I think that “think up good strategies for achieving [reward of the type I defined earlier]” is likely to be much, much more complex (making it much more difficult to achieve with a local search process) than an arbitrary goal X for most sorts of rewards that we would actually be happy with AIs achieving.

2Vanessa Kosoy3moWhy? It seems like M would have all the knowledge required for achieving good rewards of the good type, so simulating M should not be more difficult than achieving good rewards of the good type.
Formal Solution to the Inner Alignment Problem

A's structure can just be “think up good strategies for achieving X, then do those,” with no explicit subroutine that you can find anywhere in A's weights that you can copy over to B.

4Vanessa Kosoy3moIIUC, you're saying something like: suppose trained-A computes the source code of the complex core of B and then runs it. But then, define B′ as: compute the source code of the complex core of B (in the same way A does it) and use it to implement B. B′ is equivalent to B and has about the same complexity as trained- A. Or, from a slightly different angle: if "think up good strategies for achieving X" is powerful enough to come up with M, then "think up good strategies for achieving [reward of the type I defined earlier]" is powerful enough to come up with B.
Formal Solution to the Inner Alignment Problem

A is sufficiently powerful to select M which contains the complex part of B. It seems rather implausible that an algorithm of the same power cannot select B.

A's weights do not contain the complex part of B—deception is an inference-time phenomenon. It's very possible for complex instrumental goals to be derived from a simple structure such that a search process is capable of finding that simple structure that yields those complex instrumental goals without being able to find a model with those complex instrumental goals hard-coded as terminal goals.

2Vanessa Kosoy3moHmm, sorry, I'm not following. What exactly do you mean by "inference-time" and "derived"? By assumption, when you run A on some sequence it effectively simulates M which runs the complex core of B. So, A trained on that sequence effectively contains the complex core of B as a subroutine.
Formal Solution to the Inner Alignment Problem

B can be whatever algorithm M uses to form its beliefs + confidence threshold (it's strategy stealing.)

Sure, but then I think B is likely to be significantly more complex and harder for a local search process to find than A.

Why? If you give evolution enough time, and the fitness criterion is good (as you apparently agreed earlier), then eventually it will find B.

I definitely don't think this, unless you have a very strong (and likely unachievable imo) definition of “good.”

Second, why? Suppose that your starting algorithm is good at finding results

... (read more)
2Vanessa Kosoy3moA is sufficiently powerful to select M which contains the complex part of B. It seems rather implausible that an algorithm of the same power cannot select B. CNNs are specific in some way, but in a relatively weak way (exploiting hierarchical geometric structure). Transformers are known to be Turing-complete [https://arxiv.org/abs/2006.09286], so I'm not sure they are specific at all (ofc the fact you can express any program doesn't mean you can effectively learn any program, and moreover the latter is false on computational complexity grounds, but it still seems to point to some rather simple and large class of learnable hypotheses). Moreover, even if our world has some specific property that is important for learning, this only means we may need to enforce this property in our prior. For example, if your prior is an ANN with random weights then it's plausible that it reflects the exact inductive biases of the given architecture.
Are the Born probabilities really that mysterious?

Imo, I think b) is a correct objection, in the sense that I think it's literally true, but also a bit unfair, since I think you could level a similar objection against essentially any physical law.

Are the Born probabilities really that mysterious?
Answer by evhubMar 02, 202115

I had a very similar reaction when I first read Everett's thesis as well (after having previously read Eliezer's quantum physics sequence). I reproduced Everett's proof in “Multiple Worlds, One Universal Wave Function,” precisely because I felt like it did dissolve a lot of the problem—and I also reference a couple of other similar proofs from more recent authors in that paper which you might find interesting; see references 11 and 18.

I also once showed Everett's proof to Nate Soares (who can perhaps serve as a stand-in for Eliezer here), who had a couple ... (read more)

4ADifferentAnonymous3moI think b) is what I always assumed was meant by the Born rule being called mysterious?
Formal Solution to the Inner Alignment Problem

This seems very sketchy to me. If we let A = SGD or A = evolution, your first claim becomes “if SGD/evolution finds a malign model, it must understand that it's malign on some level,” which seems just straightforwardly incorrect. The last claim also seems pretty likely to be wrong if you let C = SGD or C = evolution.

Moreover, it definitely seems like training on data sampled from a simplicity prior (if that's even possible—it should be uncomputable in general) is unlikely to help at all. I think there's essentially no realistic way that training on synthet... (read more)

3Vanessa Kosoy3moTo me it seems straightforwardly correct! Suppose you're running evolution in order to predict a sequence. You end up evolving a mind M that is a superintelligent malign consequentialist: it makes good predictions on purpose, in order to survive, and then produces a malign false prediction at a critical moment. So, M is part of the state of your algorithm A. All of M's knowledge is also part of the state. M knows that M is malign, M knows the prediction it's making this time is false. In this case, B can be whatever algorithm M uses to form its beliefs + confidence threshold (it's strategy stealing.) Why? If you give evolution enough time, and the fitness criterion is good (as you apparently agreed earlier), then eventually it will find B. First, obviously we use a bounded simplicity prior, like I said in the beginning of this thread. It can be something like weighting programs by 2^{-length} while constraining their amount of computational resources, or something like an ANN with random weights (the latter is more speculative, but given that we know ANNs have inductive bias to simplicity, an untrained ANN probably reflects that.) Second, why? Suppose that your starting algorithm is good at finding results when a lot of data is available, but is also very data inefficient (like deep learning seems to be). Then, by providing it with a lot of synthetic data, you leverage its strength to find a new algorithm which is data efficient. Unless you believe deep learning is already maximally data efficient (which seems very dubious to me)?
Formal Solution to the Inner Alignment Problem

Because that's never what machine learning actually looks like in practice—essentially any access to text from the internet (let alone actual ability to interface with the world, both of which seem necessary for competitive systems) will let it determine things like the year, whether RSA-2048 has been factored, or other various pieces of information that are relevant to what stage in training/testing/deployment it's in, how powerful the feedback mechanisms keeping it in check are, whether other models are defecting, etc. that can let it figure out when to defect.

2Vanessa Kosoy3moHmm, maybe you misunderstood my proposal? I suggested to train the model by meta-learning on purely synthetic data, sampled from some kind of simplicity prior, without any interface to the world. Maybe you just think this wouldn't be competitive? If so, why? Is the argument just that there are no existing systems like this? But then it's weak evidence at best. On the contrary, even from a purely capability standpoint, meta-learning with synthetic data might be a promising strategy to lower deep learning's sample complexity. Here's why I think it will be competitive: * When a hypothetical competitive prediction algorithm A produces a malign prediction, the knowledge that it is malign clearly exists somewhere inside A in some form: otherwise it wouldn't come up with the prediction. In particular, the knowledge that there are multiple plausible hypotheses consistent with the data also exists somewhere inside A. * Therefore, there must exist some competitive algorithm B that would be able to use the knowledge of this ambiguity to abstain from predicting in such cases. * There is no reason why B should be tailored to fine details of our physical world: it can be quite generic (as all deep learning algorithms). * If we have an ML algorithm C that is powerful enough to produce superintelligence, then it is likely powerful enough to come up with B. Since B is generic, the task of finding B doesn't require any real-world data, and can be accomplished by meta-learning on synthetic data like I suggested.
Formal Solution to the Inner Alignment Problem

I agree that random defection can potentially be worked around—but the RSA-2048 problem is about conditional defection, which can't be dealt with in the same way. More generally, I expect it to be extremely difficult if not impossible to prevent a model that you want to be able to operate in the real world from being able to determine at what point in training/deployment it's in.

2Vanessa Kosoy3moWhy do you expect it? During training, it finds itself in random universe. During deployment, it finds itself in another universe drawn from the same prior (the realizability assumption). How would it determine the difference?
Formal Solution to the Inner Alignment Problem

Right, but but you can look at the performance of your model in training, compare it to the theoretical optimum (and to the baseline of making no predictions at all) and get lots of evidence about safety from that. You can even add some adversarial training of the synthetic environment in order to get tighter bounds.

Note that adversarial training doesn't work on deceptive models due to the RSA-2048 problem; also see more detail here.

If on the vast majority of synthetic environments your model makes virtually no mispredictions, then, under the realizab

... (read more)
4Vanessa Kosoy4moI think that adversarial training working so well that it can find exponentially rare failures is an unnecessarily strong desideratum. We need to drive to probability of catastrophic failures to something very low, but not actually zero. If a system is going to work for time t during deployment, then running it in n different synthetic environments for time t during training is enough to drive the probability of failure down to O(1n). Now, this is prohibitively costly, but it's not exponentially prohibitively costly. And this doesn't use adversarial training at all! So, while it's not obvious how to get a good solution, it seems at least plausible that this can lead to a solution.
Formal Solution to the Inner Alignment Problem

that's a problem with deep learning in general and not just inner alignment

I think you are understanding inner alignment very differently than we define it in Risks from Learned Optimization, where we introduced the term.

The point is, if your model is not optimizing that reward function then its performance during training will be suboptimal.

This is not true for deceptively aligned models, which is the situation I'm most concerned about, and—as we argue extensively in Risks from Learned Optimization—there are a lot of reasons why a model might end u... (read more)

1michaelcohen4moIt may be helpful to point to specific sections of such a long paper. (Also, I agree that a neural network trained trained with that reward could produce a deceptive model that makes a well-timed error.)
Formal Solution to the Inner Alignment Problem

Sure, but you have no guarantee that the model you learn is actually going to be optimizing anything like that reward function—that's the whole point of the inner alignment problem. What's nice about the approach in the original paper is that it keeps a bunch of different models around, keeps track of their posterior, and only acts on consensus, ensuring that the true model always has to approve. But if you just train a single model on some reward function like that with deep learning, you get no such guarantees.

3Vanessa Kosoy4moRight, but but you can look at the performance of your model in training, compare it to the theoretical optimum (and to the baseline of making no predictions at all) and get lots of evidence about safety from that. You can even add some adversarial training of the synthetic environment in order to get tighter bounds. If on the vast majority of synthetic environments your model makes virtually no mispredictions, then, under the realizability assumption, it is very unlikely to make mispredictions in deployment. Ofc the realizability assumption should also be questioned: but that's true in the OP as well, so it's not a difference between Bayesianism and deep.
1Vanessa Kosoy4moYou have no guarantees, sure, but that's a problem with deep learning in general and not just inner alignment. The point is, if your model is not optimizing that reward function then its performance during training will be suboptimal. To the extent your algorithm is able to approximate the true optimum during training, it will behave safely during deployment.
Formal Solution to the Inner Alignment Problem

I had thought that maybe since a Q-learner is trained as if the cached point estimate of the Q-value of the next state is the Truth, it won't, in a single forward pass, consider different models about what the actual Q-value of the next state is. At most, it will consider different models about what the very next transition will be.

a) Does that seem right? and b) Aren't there some policy gradient methods that don't face this problem?

This seems wrong to me—even though the Q learner is trained using its own point estimate of the next state, it isn't, at i... (read more)

1michaelcohen4moYeah I was agreeing with that. Right, but one thing the Q-network, in its forward pass, is trying to reproduce is the point of estimate of the Q-value of the next state (since it doesn't have access to it). What it isn't trying to reproduce, because it isn't trained that way, is multiple models of what the Q-value might be at a given possible next state.
Formal Solution to the Inner Alignment Problem

Hmmm... I don't think I was ever even meaning to talk specifically about RL, but regardless I don't expect nearly as large of a difference between Q-learning and policy gradient algorithms. If we imagine both types of algorithms making use of the same size massive neural network, the only real difference is how the output of that neural network is interpreted, either directly as a policy, or as Q values that are turned into a policy via something like softmax. In both cases, the neural network is capable of implementing any arbitrary policy and should be g... (read more)

3michaelcohen4moI interpreted this bit as talking about RL But taking us back out of RL, in a wide neural network with selective attention that enables many qualitatively different forward passes, gradient descent seems to be training the way different models get proposed (i.e. the way attention is allocated), since this happens in a single forward pass, and what we're left with is a modeling routine that is heuristically considering (and later comparing) very different models. And this should include any model that a human would consider. I think that is main thread of our argument, but now I'm curious if I was totally off the mark about Q-learning and policy gradient. I had thought that maybe since a Q-learner is trained as if the cached point estimate of the Q-value of the next state is the Truth, it won't, in a single forward pass, consider different models about what the actual Q-value of the next state is. At most, it will consider different models about what the very next transition will be. a) Does that seem right? and b) Aren't there some policy gradient methods that don't face this problem?
Formal Solution to the Inner Alignment Problem

I agree that this is progress (now that I understand it better), though:

if SGD is MAP then it seems plausible that e.g. SGD + random initial conditions or simulated annealing would give you something like top N posterior models

I think there is strong evidence that the behavior of models trained via the same basic training process are likely to be highly correlated. This sort of correlation is related to low variance in the bias-variance tradeoff sense, and there is evidence that not only do massive neural networks tend to have pretty low variance, but that this variance is likely to continue to decrease as networks become larger.

3Vanessa Kosoy4moHmm, added to reading list, thank you.
Formal Solution to the Inner Alignment Problem

I agree that at some level SGD has to be doing something approximately Bayesian. But that doesn't necessarily imply that you'll be able to get any nice, Bayesian-like properties from it such as error bounds. For example, if you think of SGD as effectively just taking the MAP model starting from sort of simplicity prior, it seems very difficult to turn that into something like the top posterior models, as would be required for an algorithm like this.

3Vanessa Kosoy4moHere's another way how you can try implementing this approach with deep learning. Train the predictor using meta-learning on synthetically generated environments (sampled from some reasonable prior such as bounded Solomonoff or maybe ANNs with random weights). The reward for making a prediction is 1−(1−δ) maxipi+ϵqi+ϵ, where pi is the predicted probability of outcome i, qi is the true probability of outcome i and ϵ,δ∈(0,1) are parameters. The reward for making no prediction (i.e. querying the user) is 0. This particular proposal is probably not quite right, but something in that general direction might work.
5Vanessa Kosoy4moI mean, there's obviously a lot more work to do, but this is progress. Specifically if SGD is MAP then it seems plausible that e.g. SGD + random initial conditions or simulated annealing would give you something like top N posterior models. You can also extract confidence from NNGP.
Formal Solution to the Inner Alignment Problem

It's hard for me to imagine that an agent that finds an "easiest-to-find model" and then calls it a day could ever do human-level science.

I certainly don't think SGD is a powerful enough optimization process to do science directly, but it definitely seems powerful enough to find an agent which does do science.

if local search is this bad, I don't think it is a viable path to AGI

We know that local search processes can produce AGI, so viability is a question of efficiency—and we know that SGD is at least efficient enough to solve a wide variety of prob... (read more)

3michaelcohen4moOkay I think we've switched from talking about Q-learning to talking about policy gradient. (Or we were talking about the latter the whole time, and I didn't notice it). The question that I think is relevant is: how are possible world-models being hypothesized and analyzed? That's something I expect to be done with messy heuristics that sometimes have discontinuities their sequence of outputs. Which means I think that no reasonable DQN is will be generally intelligent (except maybe an enormously wide one attention-based one, such that finding models is more about selective attention at any given step than it is about gradient descent over the whole history). A policy gradient network, on the other hand, could maybe (after having its parameters updated through gradient descent) become a network that, in a single forward pass, considers diverse world-models (generated with a messy non-local heuristic), and analyzes their plausibility, and then acts. At the end of the day, what we have is an agent modeling world, and we can expect it to consider any model that a human could come up with. (This paragraph also applies to the DQN with a gradient-descent-trained method for selectively attending to different parts of a wide network, since that could amount to effectively considering different models).
Formal Solution to the Inner Alignment Problem

Yeah; I think I would say I disagree with that. Notably, evolution is not a generally intelligent predictor, but is still capable of producing generally intelligent predictors. I expect the same to be true of processes like SGD.

3michaelcohen4moIf we ever produce generally intelligent predictors (or "accurate world-models" in the terminology we've been using so far), we will need a process that is much more efficient than evolution. But also, I certainly don't think that in order to be generally intelligent you need to start with a generally intelligent subroutine. Then you could never get off the ground. I expect good hypothesis-generation / model-proposal to use a mess of learned heuristics which would not be easily directed to solve arbitrary tasks, and I expect the heuristic "look for models near the best-so-far model" to be useful, but I don't think making it ironclad would be useful. Another thought on our exchange: If what you say is correct, then it sounds like exclusively-local search precludes human-level intelligence! (Which I don't believe, by the way, even if I think it's a less efficient path). One human competency is generating lots of hypotheses, and then having many models of the world, and then designing experiments to probe those hypotheses. It's hard for me to imagine that an agent that finds an "easiest-to-find model" and then calls it a day could ever do human-level science. Even something as simple as understanding an interlocuter requires generating diverse models on the fly: "Do they mean X or Y with those words? Let me ask a clarfying question." I'm not this bearish on local search. But if local search is this bad, I don't think it is a viable path to AGI, and if it's not, then the internals don't for the purposes of our discussion, and we can skip to what I take to be the upshot:
Formal Solution to the Inner Alignment Problem

Here's the setup I'm imagining, but perhaps I'm still misunderstanding something. Suppose you have a bunch of deceptive models that choose to cooperate and have larger weight in the prior than the true model (I think you believe this is very unlikely, though I'm more ambivalent). Specifically, they cooperate in that they perfectly mimic the true model up until the point where the deceptive models make up enough of the posterior that the true model is no longer being consulted, at which point they all simultaneously defect. This allows for arbitrarily bad worst-case behavior.

3michaelcohen4moThis thread began by considering deceptive models cooperating with each other in the sense of separating the timing of their treacherous turns in order to be maximally annoying to us. So maybe our discussion on that topic is resolved, and we can move on to this scenario. if alpha is low enough, this won't ever happen, and if alpha is too high, it won't take very long. So I don't think this scenario is quite right. Then the question becomes, for an alpha that's low enough, how long will it take until queries are infrequent, noting that you need a query any time any treacherous model with enough weight decides to take a treacherous turn?
Load More