I think this post is very thoughtful, with admirable attempts at formalization and several interesting insights sprinkled throughout. I think you are addressing real questions, including:
That said, I think some of your answers and conclusions are off/wrong:
Overall, nice work; strong upvote, medium disagree. :)
[heuristics are] statements of the following form: "if you take such action in such situation, this will correlate with higher reward".
I think that heuristics are reflections of historical facts of that form, but not statements themselves.
But these tendencies were put there by the selection process because the correlations are valid.
In a certain set of historical reward-attainment situations, perhaps (because this depends on the learning alg being good, but I'm happy to assume that). Not in general, of course.
a) The World-Model. Initially, there wouldn't have been a unified world-model. Each individual heuristic would've learned some part of the environment structure it cared about, but it wouldn't have pooled the knowledge with the other heuristics. A cat-detecting circuit would've learned what a cat looks like, a mouse-detecting one what mice look like, but there wouldn't have been a communally shared "here's how different animals look" repository.
However, everything is correlated with everything else (the presence of a cat impacts the probability of the presence of a mouse), so pooling all information together would've resulted in improved predictive accuracy. Hence, the agent would've eventually converged towards an explicit world-model.
What is the difference, on your view, between a WM which is "explicit" and one which e.g. has an outgoing connection from the is-cat circuit to is-animal?
b) Cross-Heuristic Communication.
I really like the insight in this point. I'd strong-upvote a post containing this alone.
Anything Else? So far as I can tell now, that's it. Crucially, under this model, there doesn't seem to be any pressure for heuristics to make themselves legible in any other way. No summaries of how they work, no consistent formats, nothing.
If the agent is doing SSL on its future observations and (a subset of its) recurrent state activations, then the learning process would presumably train the network to reflectively predict its own future heuristic-firings, so as to e.g. not be surprised by going near donuts and then stopping to stare at them (instead of the nominally "agreed-upon" plan of "just exit the grocery store").
Furthermore, there should be some consistent formatting since the heuristics are functions . And under certain "simplicity pressures/priors", heuristics may reuse each other's deliberative machinery (this is part of how I think the GPS forms). EG there shouldn't be five heuristics each of which slightly differently computes whether the other side of the room is reachable.
That's very much non-ideal. The GPS still can't access the non-explicit knowledge — it basically only gets hunches about it.
So, what does it do? Starts reverse-engineering it. It's a general-purpose problem-solver, after all — it can understand the problem specification of this, given a sufficiently rich world-model, and then solve it. In fact, it'll probably be encouraged to do this.
I'm trying to imagine a concrete story here. I don't know what this means.
The second would plausibly be faster, the same way deception is favoured relative to alignment[4].
I don't positively buy reasoning about whether "deceptive alignment" is probable, on how others use the term. I'd have to revisit it, since it's on my very long list of alignment reasoning downstream of AFAICT-incorrect premises or reliant on extremely handwavy, vague, and leaky "selection"-based reasoning.
we might imagine a heuristic centered around "chess", optimized for winning chess games. When active, it would query the world-model, extract only the data relevant to the current game of chess, and compute the appropriate move using these data only.
Just one heuristic for all of chess?
Consider this situation:
I wish this were an actual situation, not an "example" which is syntactic. This would save a lot of work for the reader and possibly help you improve your own models.
That'd work... if the agent had infinite time to think, and could excavate all the procedural and implicit knowledge prior to taking any action. But what if it needs to do both in lockstep?
(Flagging that this makes syntactic sense, but I can't easily give an example of what it means to "excavate" the procedural and implicit knowledge.)
This makes the combination of all contextual goals, let's call it
Can you give me an example of what this means for a network which has a diamond-shard and an ice-cream-eating shard?
Prior to the GPS' appearance, the agent was figuratively pursuing
Don't you mean "figuratively pursuing "? How would one "pursue" contextual behaviors?
So the interim objective can be at least as bad as .
Flag that I wish you would write this as "during additional training, the interim model performance can be at least as U-unperformant as the contextual behaviors." I think "bad" leads people to conflate "bad for us" with "bad for the agent" with "low-performance under formal loss criterion" with something else. I think these conflations are made quite often in alignment writing.
Prior to the GPS' appearance, the agent was figuratively pursuing ("figuratively" because it wasn't an optimizer, just an optimized). So the interim objective can be at least as bad as . On the other hand, pursuing directly would probably be an improvement, as we wouldn't have to go through two layers of proxies.
Example? At this point I feel like I've gotten off your train; you seem to be assuming a lot of weird-seeming structure and "pressures", I don't understand what's happening or what experiences I should or shouldn't anticipate. I'm worried that it feels like most of my reasoning is now syntactic.
The obvious solution is obvious: make heuristics themselves control the GPS. The GPS' API is pretty simple, and depending on the complexity of the cross-heuristic communication channel, it might be simple enough to re-purpose its data formats for controlling the GPS.
I think that heuristics controlling GPS-machinery is probably where the GPS comes from to begin with, so this step doesn't seem necessary.
Once that's done, the heuristics can make it solve tasks for them, and become more effective at achieving (as this will give them better ability to runtime-adapt to unfamiliar circumstances, without waiting for the SGD/evolution to catch them up).
Same objection as above -- to "achieve" ? How do you "achieve" behaviors? And, what, this would happen how? What part of training are we in? What is happening in this story, is SGD optimizing the agent to be runtime-adaptive, or..?
At the limit of optimality, everything wants to be a wrapper-mind.
Strong disagree.
The GPS can recover all of these mechanics, and then just treat the sum of all "activation strengths" as negative utility to minimize-in-expectation.
Seems like assuming "activation strengths increase the further WM values are from target values" leads us to this bizarre GPS goal. While that proposition may be true as a tendency, I don't see why it should be true in any strict sense, or if you believe that, or whether the analysis hinges on it?
In short, the same way it's non-trivial to know what heuristics/instincts are built into your mind, it's non-trivial to know what you're currently thinking of.
Aside: I think self-awareness arises from elsewhere in the shard ecosystem.
One issue is that the value-humans shard would need to be perfectly aligned with human values, and that's most of this approach's promised advantage gone. That's not much of an issue, though: I think we'd need to do that in any workable approach.
What? Why? Why would a value-human shard bid for plans generated via GPS which involve people dying? (I think I have this objection because I don't buy/understand your story for how GPS "rederives" values into some alien wrapper object.)
Is there any difference between "goals" and "values"? I've used the terms basically interchangeably in this post, but it might make sense to assign them to things of different types.
I use "values" to be decision-influence, and "goal" as, among other things, an instrumental subgoal in the planning process which is relevant to one or more values (e.g. hang out with friends more as relevant to a friend-shard).
Other points:
Thanks for the extensive commentary! Here's an... unreasonably extensive response.
what it means to "excavate" the procedural and implicit knowledge
1) Suppose that you have a shard that looks for a set of conditions like "it's night AND I'm resting in an unfamiliar location in a forest AND there was a series of crunching sounds nearby". If they're satisfied, it raises an alarm, and forms and bids for plans to look in the direction of the noises and get ready for a fight.
That's procedural knowledge: none of that is happening at the level of conscious understanding, you're just suddenly alarmed and urged to be on guard, without necessarily understanding why. Most of the computations are internal to the shard, understood by no other part of the agent.
You can "excavate" this knowledge by reflecting on what happened: that you heard these noises in these circumstances, and some process in you responded. Then you can look at what happened afterward (e. g., you were attacked by an animal), and realize that this process helped you. This would allow you to explicate the procedural knowledge into a conscious heuristic ("beware of sound-patterns like this at night, get ready if you hear them"), which you put in the world-model and can then consciously access.
That "conscious access" would allow you to employ the knowledge much more fluidly, such as by:
None of that fluidity, on my understanding, would be easily replicable by the initial shard. If you're planning in advance, or are teaching someone, it'd only activate if you vividly imagine the specific scenario that'd activate it ("I'm in my camp at night and there's this noise"), which (1) you may not know to do to begin with, (2) is an excruciatingly slow style of planning. And the non-obvious logical generalizations are certainly not the thing it can do.
If you have that knowledge explicitly, though, you can just connect it to a node like "how to survive in a forest", and it'd be brought to your attention every time you poke that node.
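To caricature the distinction in code: here's a minimal Python sketch (all names, conditions, and numbers are my own hypothetical illustrations, not anything from the post) of a shard's opaque procedural knowledge versus the same knowledge after it has been excavated into an explicit, queryable world-model entry:

```python
def crunching_sounds_shard(context):
    """Opaque procedural knowledge: fires only when its exact trigger
    conditions are met; no other part of the agent can inspect why."""
    if (context.get("time") == "night"
            and context.get("location") == "unfamiliar_forest"
            and "crunching_sounds" in context.get("events", [])):
        return {"urge": "be_on_guard", "strength": 0.9}
    return None

# After reflection, the same knowledge becomes an explicit world-model entry,
# linked to a general node, so deliberate planning can retrieve it on demand
# (e.g. when planning a trip in advance, or teaching someone).
world_model = {
    "how_to_survive_in_a_forest": [
        "beware of crunching sounds at night; a predator may be stalking you",
    ],
}

# The planner can now surface the heuristic without re-living the trigger:
advice = world_model["how_to_survive_in_a_forest"]
```

The contrast is that the shard only "speaks" when its trigger context actually occurs (or is vividly simulated), while the explicit entry is reachable from the "how to survive in a forest" node any time the planner pokes it.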
2) Also, in a different thread, you note that the predictions generated by the world-model can also sometimes be hard to make sense of, so maybe it's not consistently-formatted either. I think what's happening, there, is that when you imagine concrete scenarios, you're not using just the world-model — you're actually "spoofing" the mental context of that scenario, and that can cause your shards to activate as if it were really happening. That allows you to make use of your procedural knowledge without actually being in the situation, and so make better predictions without consciously understanding why you're making them.
(E. g., the weird-noises-at-night shard puts simulated!you on high alert, and your WM conditions on that, and that makes it consider "is going to be attacked" more likely. So now it's predicting so, and it's a more accurate prediction than it would've been able to make with just the explicit knowledge, but it doesn't know why exactly it ended up in this state.)
But none of that makes that procedural knowledge explicit! (Though such simulated counterfactuals are a great way to reverse-engineer it. See: thought experiments to access and reverse-engineer morality heuristics.)
3) Also something worth noting: explicit knowledge can loop non-explicit procedural knowledge in! E. g., you can have an explicit heuristic like "if you're in a situation like this, listen to your instincts and do what they say". That's also entirely in-line with my model: you can know to do the things your shards urge you to, even if you don't know why. And yet, knowing that a black-box is useful isn't the same as knowing what's in it.
(I suppose my definition is kind of circular, here: I'm saying that the world-model is only the thing that's consciously accessible and consistently-formatted. That's... Yeah, I think I'll bite that bullet.)
Here, it's "implicit" that you should be complying with the urge to engage in the contextual behavior = "if you heard weird noises in the forest at night, be on guard". The question to answer here is: why? Why does it make sense to be on guard in such circumstances?
There's several ways to explain it, but let's go with "because it decreases the chance that a predator could take me by surprise, which is (apparently) something I don't want to happen". That's the implicit contextual goal here.
Explicating it, and setting it as the plan-making target ("how can I ensure I'm not ambushed?"), can allow you to consciously generate a bunch of other heuristics for achieving it. Like looking out for weird smells as well, or soundless but visible disturbance in the tall grass around you, etc. This, likewise, boosts your ability to generalize: both in the environment you're in, and even if you end up displaced to e. g. an alien environment.
I also refer you to my previous example of a displaced value-child. Although his study-promoting shards end up inapplicable, he can nonetheless remain studious if he has "be studious" as an explicit goal, in the course of optimizing for which he can re-derive new heuristics appropriate for the completely unfamiliar environment. Another example: the "deontologist vs. utilitarian in an alien society" from the fourth bullet-point here.
Okay, and this naturally extends into my broader point about value compilation.
Suppose you explicate a bunch of these contextual goals, like "avoid being ambushed by a predator" and "try to escape if you can't win this fight" and "good places to live have an abundance of prey around".
You can view these as heuristics as well. Much like the behaviors you were urged to engage in, which only hinted at your actual goal, you can view these derived goals not as your core values, but as yet more hints about your real values. As next-level procedural knowledge, with some hypothetical broader goal that generated them, and which is implicit in them.
Upon reflection on this new set of goals, you can extrapolate them into something like "avoid death".
Doing that has all the benefits of going from "if at night in a forest and hear crunching sounds, be on guard" to "decreases the chance that a predator could take me by surprise". You can now pursue death-avoidance across a broader swathe of environments, and with more consistency and fluidity. You can generate new lower-level goals/heuristics for supporting it.
Then you generate some more higher-level goals, e. g. "avoid death" + "make my loved ones happy" + "amass resources for the tribe", and compile them into something like "human prosperity is important".
And so on and on, until all contextual behaviors and goals have been incorporated into some unified global goal.
Those last few steps are what you disagree with, I think, but you see how it's just a straightforward extrapolation of basic lower-level self-reflection mechanisms? And it passes my sanity-checks: it sure seems consistent with moral philosophy and meaning-of-life questioning and such.
I don't know why GPS should control reverse-engineering, rather than there being generalized shards driving GPS.
Okay, so my thinking on this updated a bit since I've written the post. I think the above process, "treat shards as hints towards your goals, then treat the derived goals as hints towards higher-level goals, then iterate", isn't something that shard economies want to do. Rather, it's something that's convergently "chiseled into" all sufficiently advanced minds by greedy algorithms generating them.
Consider a standard setup, where the SGD is searching for some agent that scores best according to some reward function R. Would you disagree that a wrapper-mind with that function as its terminal objective would be a great design for the SGD to find, by the SGD's own lights? Not that it would "select" for such a mind, just that it would be pretty good for it if it did find a way to it?
Shard economies and systems of heuristics may be faster out-of-the-box, better adapted to whatever specific environment they're in. But an R-maximizing wrapper-mind would at least match their performance, given some time to do runtime optimization of itself. If it would improve its ability to optimize for R, it can just derive contextual shards/heuristics for itself and act on them.
In other words, an R-maximizer is strictly more powerful according to R than any shard economy, inasmuch as it can generate any purpose-built shard economy from scratch, and ensure that this shard economy would be optimized for scoring well at R.
Shard economies not governed by wrapper-minds, in turn, are inferior: they're worse at generalizing (see my points about non-explicit knowledge above), and tend to go astray if placed in unfamiliar environments (where whatever goals they embody no longer correlate with R).
And inasmuch as the level of adversity the agent was subjected to is so strong as to cause it to develop general reasoning at all, it's probably put in environments so complex/diverse that runtime re-optimization of its entire swathe of heuristics is called-for. Environments where nothing less than this will do.
So the practical advanced mind design is probably something like a shard economy optimized for the immediate deployment environment (for computation speed and good out-of-the-box performance) + an R-aligned wrapper-mind governing it (for handling distribution shifts and for strategic planning). So I speculate that the SGD tries to converge to something like this, for the purposes of maximizing R.
Except, as per section 5, there are no gradients towards representing R in the agent, so the SGD uses weird hacks to point the GPS in the right direction. It does the following:
This hack lets the SGD point the "proto-wrapper-mind" in the direction of R without actually building R into it. The agent was already optimized for achieving R, so the SGD basically tasks it with "figure out what you're optimized for, and go do that", and the agent complies. (But the unified goal implicit in the agent's design isn't quite R, so we get inner misalignment.)
So, in this very round-about way, we get a goal-maximizer.
I think that heuristics are reflections of historical facts of that form, but not statements themselves.
Does "evidence of historical facts of that form" work for you?
You rely a lot on selection-level reasoning in a way which feels sketchy.
Specific examples? I specifically tried to think in terms of local gradients ("in which direction would it be advantageous for the SGD to move the model from this specific point?"), not global properties ("what is the final mind-design that would best satisfy the SGD's biases, regardless of the path taken there?"). Or do you disagree with that style of reasoning as well?
What is the difference, on your view, between a WM which is "explicit" and one which e.g. has an outgoing connection from the is-cat circuit to is-animal?
I've outlined some reasons above — the main point is whether it's accessible to the GPS/deliberative planner, because if it is, it allows WM-concepts to be employed much more flexibly and generally.
(I'm actually planning a separate post on this matter, though.)
If the agent is doing SSL on its future observations and (a subset of its) recurrent state activations, then the learning process would presumably train the network to reflectively predict its own future heuristic-firings
Yeah, but that's not shards making themselves legible, that's a separate process in the agent trying to build their generative models from their externally-observed behavior, no?
Furthermore, there should be some consistent formatting since the heuristics are functions
Consistent input-output formatting, sure: an API, where each shard takes in the WM, then outputs stuff into the planner/the GPS/the bid-resolver/the cross-heuristic communication channel/some such coordination mechanism.
That's not what I'm getting at. It still wouldn't let you predict what a shard will do without observing its actions. No consistent design structure, where each shard has a part you can look at and go "aha, that's what it's optimizing for!". No meta-data summary/documentation to this effect attached to every shard.
And under certain "simplicity pressures/priors", heuristics may reuse each other's deliberative machinery
Agreed; I think I mention that in the post, even. Issue: such structures would be as ad-hoc as the shards' inner implementation. You wouldn't get alliances that change at runtime, where shards can look at each other's local incentives and choose to e. g. "engage in horse-trading", or where they can somehow "figure out" that some other shard is doing the same thing they're doing in this specific context only and so only re-use its activations in that context.
No, you'd just get some shards that are hard-wired to always fire with some other shards, or always inhibit some other shards. These alliances can be rewritten by the reward circuitry, but not by the shards themselves.
That doesn't require all shards to be legible to each other; that just requires there to be gradients towards some specific chains of shard activations.
I don't positively buy reasoning about whether "deceptive alignment" is probable
My outline of it here is also written with local gradients, not global selection targets, in mind. You might want to check it out?
Just one heuristic for all of chess?
Yeah, no. I recall wanting to make an aside like "obviously in practice chess-winning will be implemented via a lot of heuristics", but evidently I didn't.
Can you give me an example of what [value compilation] means for a network which has a diamond-shard and an ice-cream-eating shard?
First, note that I'm not saying that is necessarily "simple", as e. g. a hedonist's desire for pleasure. It can have many terms that can't be "merged" together. I'm just saying that we have an impulse to merge terms as much as possible. This is one of the cases where they can't be merged.
As per 6A, that would go as in the "disjunction" section. I. e., the agent would figure out tradeoffs it's willing to make WRT diamonds and ice cream, and then go for plans that maximize the weighted sum of diamonds-and-ice-cream it has.
... Alright, I see your point about "utility is not the optimization target": there's no inherent reason to think it'd want as many of these things as possible. E. g., ice-cream shard's activation power may be capped at 1000 ice creams, and the agent may interpret it as a hard limit. But okay, so then it'd try to maximize the probability of achieving that utility cap, or the time it'd stay in the max-utility state, or something along those lines.
Like... There are states in which shards activate, and where they're dormant. Thus, shards steer the agents they're embedded in towards some world-states. Interpreting/reverse-engineering this behavior into goals, it seems natural to view it as "I want to be in such world-states over such others". And then the GPS will be tasked with making that happen, and...
Well, it would try to output a "good" plan for making it happen, for some definition of "good". And... you disagree that this definition has to lead to arg-maxing, okay.
I guess instead of maximizing we can satisfice: as you describe here, we can just generate a bunch of plans and choose one that seems good enough, instead of generating the best possible plan. But:
(Re: magical question-answerers, yeah, we'd also want a provision like "but interpret that ask faithfully instead of doing a technical genie". But that's not an issue if the agent is the one doing the planning. Like, it doesn't prompt some separate plan-making module that it has reason to fear would output something that hacks/Goodharts it. It just consciously tries to come up with "a very good plan", and it's just so smart it has a lot of slack on optimizing that plan along dimensions like "probability of success" and "the optimal world-state will be very stable". And then that washes away everything in the universe that the agent is not explicitly optimizing for.)
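To make the maximize-vs-satisfice contrast concrete, here's a minimal Python sketch (a toy illustration under my own framing, not code from the post; plans are reduced to bare scores):

```python
def plan_score(plan):
    """Stand-in for the GPS's evaluation of a candidate plan;
    here a plan is represented by its score directly."""
    return plan

def argmax_planner(candidate_plans):
    # Exhaustively pick the single best-scoring plan.
    return max(candidate_plans, key=plan_score)

def satisficing_planner(candidate_plans, threshold):
    # Take the first plan that clears the "good enough" bar,
    # instead of searching for the global optimum.
    for plan in candidate_plans:
        if plan_score(plan) >= threshold:
            return plan
    # If nothing clears the bar, fall back to the best seen.
    return argmax_planner(candidate_plans)

plans = [0.2, 0.7, 0.95, 0.4]
print(argmax_planner(plans))                      # 0.95
print(satisficing_planner(plans, threshold=0.6))  # 0.7: good enough, found first
```

Note that the satisficer's output depends on the order in which plans happen to be generated, which matches the point that a "good enough" plan needn't be anywhere near the optimum.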
Seems like assuming "activation strengths increase the further WM values are from target values" leads us to this bizarre GPS goal. While that proposition may be true as a tendency, I don't see why it should be true in any strict sense, or if you believe that, or whether the analysis hinges on it?
I think no, it doesn't hinge on it, as per the section just above? All we need is for shards to have some preferences for certain world-model-states over others.
Don't you mean "figuratively pursuing "? How would one "pursue" contextual behaviors?
In the vacuous way where any agent could be said to maximize what they're already doing? I did say "figuratively".
Same objection as above -- to "achieve" ? How do you "achieve" behaviors? And, what, this would happen how? What part of training are we in? What is happening in this story, is SGD optimizing the agent to be runtime-adaptive, or..?
... Yeah, okay, that phrasing is very bad. What I meant is: Suppose we have a shard that tries to figure out where a predator could ambush the agent from. Before the GPS, it had some ad-hoc analysis heuristic that was hooked up to a bunch of WM concepts. After the GPS, that shard can instead loop general-purpose planning in, prompt it with "figure out where the predator can ambush us, here's some ideas to start", and the GPS would do better than the shard's own ad-hoc algorithm.
Hence, we'll get an agent that would "get better at what it was already doing".
I agree that "become more effective at achieving " is a pretty nonsensical way to put it, though.
Flag that I wish you would write this as "during additional training, the interim model performance can be at least as U-unperformant as the contextual behaviors."
Sure.
I think that heuristics controlling GPS-machinery is probably where the GPS comes from to begin with, so this step doesn't seem necessary.
Agreed; also think I mentioned that in a footnote. I'm not sure, though, and I think we can design some weird training setups where the GPS might first appear in the WM or something (as part of a simulated human?), so my goal here was to show that the process would go this way regardless of where the GPS originated.
I agree that the way I phrased that there is weird, though.
What? Why? Why would a value-human shard bid for plans generated via GPS which involve people dying?
I don't think it'd bid for such plans. I think shards have less decision-making power in advanced agents, compared to the GPS' interpretation of shards' goals. Inasmuch as there would be imperfections in the value-humans shard's caring, the GPS would uncover them, and exploit them to make that shard play nicer with other shards.
E. g., suppose the value-humans shard isn't as upset as we would be if a human got their thumb torn off (and is anomalously non-upset about any second-order effects of that, etc.; it basically ignores tear-a-thumb-off plans), and there's some shard like "sadistic fun" that really enjoys seeing humans get their thumbs torn off. Even if the value-humans shard is much more powerful, the GPS' desire to integrate all its values would lead to it adopting some combination value where it thinks it's fine to tear people's thumbs off for fun.
That's not a realistic example, but I hope it conveys the broader point: any imperfections in value-humans will be exploited by the rest of the shard economy, and the broader process that tries to satisfy the goal implicitly embodied by the shard economy.
And then, even if the value-humans shard is perfect, the AI might just figure out some galaxy-brained merger of it with a bunch of other shards, that makes logical sense to it as an extrapolation, and just override the value-humans shard's protests. (Returning to a previous example: Suppose we've adopted "avoid ambush predators" as our explicit goal, then ended up in a forest environment where we're ~100% sure there are no predators. The "be afraid of crunchy noises at night" shard would activate, but we'd just dismiss it, because we know it has no clue and we know better.)
I use "values" to be decision-influence
Mm, I dispute that choice. I think "value" has the connotation of "sacred value" and "terminal value" and "something the agent wouldn't want to change about themselves", and that doesn't clearly map onto "a consistent way the agent's decisions are steered"? My broad point, here, is that shards-as-decision-influencers aren't necessarily endorsed by agents in their initial form, and calling them "values" conveys wrong intuitions (for my purposes, at least).
I prefer "proto-values" for shards-when-viewed-as-repositories-of-contextual-goals, and... Yeah, I don't think I even have anything in my model that works well for "value". "Intermediary values" as a description of contextual goals, maybe.
Aside: I think self-awareness arises from elsewhere in the shard ecosystem.
Would be interested in your model of that!
Okay, more questions incoming: "Why would GPS be okay with value-compilation, when its expected outcome is to not satisfy in-distribution context behaviors through big-brain moves?"
If I understood correctly (can be skipped; not relevant to my argument, which starts after the bullet points):
To rephrase my question, why would the other (more early-formed) GPS API calls be okay with the API calls for value-compilation?
As you mentioned in a different comment thread, there is no reason for the GPS to obey in-distribution behavior (inner misalignment). So, from the perspective of a GPS that's API-called with pursuing a reverse-engineered-heuristic-goal, it would think:
I think this depends on the distribution of optimization "power" between different GPS API-calls (tangent: how is it possible for them to have different power when the GPS, presumably, is a modular thing and the only difference is in how they're called? Whatever). Value compilation can actually proceed only if the API call for it overwhelms the combined incentive against it from the rest of the API calls (all of which have an incentive to resist, and would probably collude), which seems pretty unlikely.
Given this analysis, it seems like the default behavior is for the GPS API-calls to gradient hack away whatever other API-calls that would predictably result in in-distribution behaviors not getting preserved (e.g., value-compilation).
Is there any particular reason to believe the GPS API-calls-for-value-compilation would be so strongly chiseled in by the SGD (when SGD still has influence) as to overwhelm all the other API-calls (when SGD stops mattering)?
For reference, I think you've formed a pretty accurate model of my model.
Given this analysis, it seems like the default behavior is for the GPS API-calls to gradient hack away whatever other API-calls that would predictably result in in-distribution behaviors not getting preserved (e.g., value-compilation).
Yup. But this requires these GPS instances to be advanced enough to do gradient-hacking, and indeed be concerned with preventing their current values from being updated away. Two reasons not to expect that:
That said, you're absolutely right that it does happen in real-life agents. Some humans are suspicious of abstract arguments for the greater good and refuse to e. g. go from deontologists to utilitarians. The strength of the drive for value compilation relative to the shards' strength varies, and depending on it, the process of value compilation may be frozen at some arbitrary point.
It partly falls under the meta-cognition section. But in even more extreme cases, a person may simply refuse to engage in value-compilation at all, express a preference to not be coherent.
... Which is an interesting point, actually. We want there to be some value compilation, or agents just wouldn't be able to generalize OOD at all. But it's not obvious that we want maximum value compilation. Maximum value compilation leads to e. g. an AGI with a value-humans shard who decides to do a galaxy-brained merger of that shard with something else and ends up indifferent to human welfare. But maybe we can replicate what human deontologists are doing, and alter the "power distribution" among the AGI's processes such that value compilation freezes just before this point?
I may be overlooking some reason this wouldn't work, but seems promising at first glance.
0. Introduction
When we look inwards, upon the godshatter, how do we make sense of it? How do we sort out all the disparate urges, emotions, and preferences, and compress them into legible principles and philosophies? What mechanisms ensure our robustness to ontological crises? How do powerful agents found by a greedy selection process arrive at their morals? What is the algorithm for value reflection?
This post seeks to answer these questions, or at least provide a decent high-level starting point. It describes a simple toy model that embeds an agent in a causal graph, and follows its moral development from a bundle of heuristics to a superintelligent mesa-optimizer.
The main goal of this write-up is to serve as a gears-level model — to provide us with a detailed step-by-step understanding of why and how agents converge towards the values they do. This should hopefully allow us to spot novel pressure points — opportunities for interventions that would allow us to acquire a great deal of control over the final outcome of this process. From another angle, it should equip us with the tools to understand how different changes to the training process or model architecture would impact value reflection, and therefore, what kinds of architectures are more or less desirable.
Let's get to it.
1. The Setup
As the starting point, I'll be using a model broadly similar to the one from my earlier post.
Let's assume that we have some environment E represented as a causal graph. Some nodes in it represent the agent, the agent's observations, and actions.
Every turn t (which might be a new training episode or the next time-step in a RL setup), the agent (blue node) reads off information from the (green) observation nodes O, sets the values for the (red) action nodes A, all nodes' values update in response to that change, then the agent receives reward based on the (purple) reward nodes R. The reward is computed as some function U:Rt→R, where Rt represents the reward nodes' current values.
The agent is being optimized by some optimization/selection process — the SGD, evolution, human brain reward circuitry, whatever. What matters is that this process is non-intelligent and greedy: it only ever makes marginal improvements to the agent's architecture, with an eye for making it perform marginally better the next turn.
As per the previous post, the agent will have naturally learned an advanced world-model, which we'll also consider to be a causal graph, M. Every turn, after the agent makes the observations, it'll use that world-model to infer as much other information about the environment as it can (i. e., the current values of the non-observed nodes). Let's also assume that the world-model is multi-level, making heavy use of natural abstractions: both "an atom" and "a spider" are nodes in it, even if the actual environment contains only atoms.
Let's define Ms and As as some subsets of the world-model and the action nodes respectively. The agent will have developed a number of shallow heuristics, which are defined as follows:
h: Ms → As

That is: a heuristic h is some function that looks at some inferred part of the world, and recommends taking certain actions depending on what it sees. Informally, we may assume that heuristics are interpretable — that is, they're defined over some specific world-model node or a coherent set of such nodes, perhaps representing a natural abstraction.
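To make this concrete, here's a toy Python sketch of a heuristic. All the node names and the trigger condition are my own illustrative assumptions, not part of the formalism; the world-model is just a dict of node values:

```python
# Toy sketch of a heuristic h: M_s -> A_s. The world-model is a dict of
# node values; the heuristic reads a small subset of them and recommends
# values for a few action nodes. All names here are illustrative.

def cat_avoidance_heuristic(world_model):
    """Fires only when its trigger nodes assume certain values."""
    if world_model.get("cat_nearby") and world_model.get("is_outside"):
        return {"legs": "flee"}  # recommended action-node values
    return None  # the heuristic stays silent otherwise

wm = {"cat_nearby": True, "is_outside": True, "sun_is_up": False}
print(cat_avoidance_heuristic(wm))  # {'legs': 'flee'}
print(cat_avoidance_heuristic({"cat_nearby": False}))  # None
```

Note that the heuristic only ever touches the nodes it's defined over; the rest of the world-model (like `sun_is_up` above) is invisible to it.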
We'll assume that at the starting point we're considering, the entire suite of heuristics H is subpar but much better than random behavior according to the outer reward function. E. g., they always allow the agent to secure at least 50% of the possible reward.
We'll assume that the environment is too complex for such static heuristics to suffice. As a consequence, whatever process is shaping the agent, it has just now built General-Purpose Search into it:
GPS: Mt × Ms × MGs → AGs | (Ats = AGs) ⇒ (|Mts − MGs| = minAt(|Mts − MGs|))

Where Mt is the current state of the world-model, and MGs, AGs are the "target values" for the nodes in Ms and As respectively.
That is: GPS is some function that takes in a world-model and some "problem specification" — some set of nodes in the world-model and their desired values — and output the actions that, if taken in that world-model, would bring the values of these nodes as close to the desired values as possible given the actions available to the agent.
Note that it's defined over the world-model, not over the real environment E, and so its ability to optimize in the actual E decreases the less accurate the world-model is.
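As a minimal illustration of the definition above, here's a brute-force GPS over a tiny world-model. The two-node action space and the dynamics are purely hypothetical; the point is the shape of the API (world-model in, problem specification in, actions out):

```python
# Brute-force sketch of GPS: search the available action settings for the
# one that brings the target nodes' values as close to the desired values
# as possible. The two-action dynamics here are a toy assumption.
from itertools import product

def world_model(actions):
    """Toy model dynamics: downstream node values given the action nodes."""
    a1, a2 = actions
    return {"x": a1 + a2, "y": a1 - a2}

def gps(goal, action_values):
    """goal: {node: desired_value}. Returns the best (a1, a2) found."""
    def distance(actions):
        state = world_model(actions)
        return sum(abs(state[node] - desired) for node, desired in goal.items())
    return min(product(action_values, repeat=2), key=distance)

print(gps({"x": 3, "y": 1}, range(0, 5)))  # (2, 1): 2+1 = 3 and 2-1 = 1
```

Since the search runs against `world_model`, not the environment itself, any inaccuracy in the model translates directly into worse real-world optimization, exactly as the text notes.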
Let's explore the following question:
2. How Will the GPS (Not) Be Used?
Pointing the GPS at the outer objective seems like the obvious solution. Let Rmax be the values of R that maximize U. Then we can just wire the agent to pass Mt×R×Rmax to the GPS at the start of every training episode, and watch it achieve the maximum reward it can given the flaws in its world-model. Turn it into a proper wrapper-mind.
Would that work?
Well, this idea assumes that the world-model already contains nodes representing the reward nodes. That might not be the case, the same way stone-age humans didn't have "genes" in their world-models for evolution to point them at. If so, then pointing the GPS at the reward proxies is the best we can do.
But okay, let's assume the world-model is advanced enough to represent the reward nodes. Would that work then?
Sure — but only under certain, fairly unnatural conditions.[1]
In my previous post, I've noted that the very process of being subjected to a selection pressure necessarily builds certain statistical correlations into the agent.
The difference between the two types, of course, is that the second type is non-explicit. Internally, the agent doesn't act as the heuristics specify because it knows that this will increase the reward — it just has a mindless tendency to take such actions. Only the M→A mappings are internally represented — and not even as part of the world-model, they're just procedural!
But these tendencies were put there by the selection process because the (E→A)→U correlations are valid. In a way, they're as much part of the structure of the E environment as the explicitly represented O→E correlations. They're "skills", perhaps: the knowledge of how an agent like this agent needs to act to perform well at the task it's selected for.
The problem is, if we directly hard-code the GPS to be aligned with the outer objective, we'd be cutting all the pre-established heuristics out of the loop. And since the knowledge they represent is either procedural or implicit, not part of the explicit world-model, that would effectively decrease the amount of statistical correlations the agent has at its disposal — shrink its effective world-model dramatically. Set it back in its development.
And the GPS is only as effective as the world-model it operates in.
Our selection process is greedy, so it will never choose to make such a change.
3. Interfaces
Let's take a step back, and consider how the different parts of the agent must've learned to interface with each other. Are there any legible data structures?
a) The World-Model. Initially, there wouldn't have been a unified world-model. Each individual heuristic would've learned some part of the environment structure it cared about, but it wouldn't have pooled the knowledge with the other heuristics. A cat-detecting circuit would've learned what a cat looks like, a mouse-detecting one what mice look like, but there wouldn't have been a communally shared "here's how different animals look" repository.
However, everything is correlated with everything else (the presence of a cat impacts the probability of the presence of a mouse), so pooling all information together would've resulted in improved predictive accuracy. Hence, the agent would've eventually converged towards an explicit world-model.
b) Cross-Heuristic Communication. Next, consider conflicts between heuristics. Suppose that we have some heuristics hi and hk that both want to fire in a given training episode. However, they act at cross-purposes: the marginal increase of U achieved by hi firing at t would be decreased by letting hk fire at t, and vice versa. Each would want to suppress the other. Which should win?
On a similar note, consider heuristics that want to chain their activations. Suppose that some heuristic hm responds to a subset of the features hl detects. hm can learn to detect them from scratch, or it can just learn to fire when hl does, instead of replicating its calculations.
Both problems would be addressed by some shared channel of communication between the heuristics, where each of them can dump information indicating how strongly it wants to fire this turn. To formalize this, let's suppose that each heuristic has an associated "activation strength" function D:Ms→R. (Note that activation strength is not supposed to be normalized across heuristics. I. e., it's entirely possible to have a heuristic whose strength ranges from 0 to 10, and another with a range from 30 to 500, such that the former always loses if they're in contest.)
The actual firing pattern would be determined as some function of that channel's state.[2]
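Here's a toy sketch of that channel. The "strongest entry wins" rule is my own simplifying assumption; the real arbitration function could be anything. Note the deliberately mismatched scales, per the point about strengths not being normalized:

```python
# Sketch of the shared channel: each heuristic dumps (activation_strength,
# recommended_actions) into it, and the firing pattern is some function of
# the channel state. Here that function is simply "strongest entry wins".
# Strengths are unnormalized: the scales below are deliberately mismatched.

def arbitrate(channel):
    """channel: list of (strength, actions) pairs. Returns the winning actions."""
    active = [entry for entry in channel if entry[0] > 0]
    if not active:
        return {}
    _, actions = max(active, key=lambda entry: entry[0])
    return actions

channel = [
    (4.0, {"legs": "flee"}),     # a heuristic whose strength ranges 0..10
    (35.0, {"legs": "freeze"}),  # another whose strength ranges 30..500
]
print(arbitrate(channel))  # {'legs': 'freeze'}: the first one always loses
```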
c) Anything Else? So far as I can tell now, that's it. Crucially, under this model, there doesn't seem to be any pressure for heuristics to make themselves legible in any other way. No summaries of how they work, no consistent formats, nothing. Their constituent circuits would just be... off in their own corners, doing their own things, in their own idiosyncratic ways. They need to be coordinated, but no part of the system needs to model any other part. Yet.
So those are the low-hanging fruits available to be learned by the GPS. The selection pressure doesn't need to introduce a lot of changes to plug the GPS into the world-model and the cross-heuristics communication channel[3], inasmuch as they both follow coherent data formats.
But that's it. Making the mechanics of the heuristics legible — both standardizing the procedural E→A knowledge and making the implicit (E→A)→U knowledge explicit — would involve a lot more work, and it'd need to be done for every heuristic individually. A lot of gradient steps/evolutionary generations/reinforcement events.
4. Reverse-Engineering the Heuristics
That's very much non-ideal. The GPS still can't access the non-explicit knowledge — it basically only gets hunches about it.
So, what does it do? It starts reverse-engineering it. It's a general-purpose problem-solver, after all: given a sufficiently rich world-model, it can understand this as a problem specification, and then solve it. In fact, it'll probably be encouraged to do this. The selection pressure would face the choice between:
The second would plausibly be faster, the same way deception is favoured relative to alignment[4]. So the selection pressure would hard-code some tendency for the GPS to infer the mechanics of the rest of the agent's mind, and write them down into the world-model. This is an important component of the need for self-awareness/reflectivity.
The GPS would gather statistical information about the way heuristics fire, what they seem to respond to or try to do, which of them fire together or try to suppress each other, what effects letting one heuristic or the other fire has on the world-model, and so on.
One especially powerful way to do that would be running counterfactuals on them. That is, instead of doing live-fire testing (search for a situation where heuristic hi wants to fire, let it, see what happens), it'd be nice to simulate different hypothetical states the world-model could be in, then see how the heuristics respond, and what happens if they're obeyed. And there'll likely already be a mechanism for rolling the world-model forward or generating hypotheticals, so the GPS can just co-opt it for this purpose.
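A toy sketch of that counterfactual probing, under obvious simplifications: in a real agent the heuristic's internals would be opaque circuitry, so I stand it in with an ordinary function and let the probing logic treat it as a black box:

```python
# Sketch of counterfactual probing: instead of live-fire testing, feed a
# heuristic hypothetical world-states and record when it fires. In a real
# agent the heuristic's internals are opaque; here it's a stand-in function.
from itertools import product

def opaque_heuristic(wm):
    return {"legs": "flee"} if wm["cat"] and not wm["indoors"] else None

# Enumerate hypothetical states and see which ones trigger the heuristic:
probes = [{"cat": c, "indoors": i} for c, i in product([False, True], repeat=2)]
fired_on = [p for p in probes if opaque_heuristic(p) is not None]
print(fired_on)  # [{'cat': True, 'indoors': False}]: trigger condition recovered
```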
What will this endeavor yield, ultimately? Well, if the natural abstractions hypothesis is true, quite a lot! As I've noted at the beginning, each heuristic is plausibly centered around some natural abstraction or a sensible cluster of natural abstractions[5], and it'd be doing some local computation around them aimed at causing a locally-sensible outcome. For example, we might imagine a heuristic centered around "chess", optimized for winning chess games. When active, it would query the world-model, extract only the data relevant to the current game of chess, and compute the appropriate move using these data only.
So we can expect heuristics to compress well, in general.
That said, we need to adjust our notation here. Suppose that you're the GPS, trying to reverse-engineer some heuristic hi. You obviously don't have access to the ground-truth of the world E, only a model of it M. And since M might be flawed, the world-model nodes hi is defined over might not even correspond to anything in the actual environment!
By the same token, the best way to compress hi's relationship with U might not be summarizing its relationship with the actual reward-nodes, but with some proxy node. Consider this situation:
We can imagine some heuristic hi which specializes in controlling the value of xp. Ultimately, that heuristic would've been created by the selection pressure because of its effect on r1. But hi's actual mechanical implementation would only be focused on xp, and the causal chain between it and the agent! It would pay no mind to x1,x2,...,x5, so its effect on r1 would be subject to a lot of noise — unlike its effect on xp. Thus, the agent would recover xp as the target, not r1. (And, of course, it might also be because r1 isn't yet represented in the world-model at all.)
As an example, consider chess. The algorithms for playing it well are convergent across all agents, irrespective of the agent's goals outside chess or its reason for trying to win at chess. Their implementations would only refer to chess-related objectives.
Thus, while an "outside-picture" view on heuristics describes them as (E→A)→U, internally they'd be best summarized as:
(M → A) → P

Where P is some proxy objective. For simplicity, let's say that P is a 2-tuple ⟨xi,di⟩, where the first entry is a world-model node and the second is the "target value" for that node. So every "goal" is to keep the value of a node as close to the target as possible.
As another technicality, let's say that the activation-strength function D increases the farther from the target the corresponding node's value goes.
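Putting the last two technicalities together, here's a toy sketch of a contextual goal. The linear strength function and the `scale` parameter are my own assumptions; all that matters is that D grows with distance from target:

```python
# Sketch of a contextual goal P = <node, target>, with an activation-strength
# function D that grows as the node drifts from its target. The linear shape
# and the scale parameter are my own assumptions.
from dataclasses import dataclass

@dataclass
class ContextualGoal:
    node: str
    target: float
    scale: float = 1.0  # strengths aren't normalized across goals

    def activation_strength(self, world_model):
        return self.scale * abs(world_model[self.node] - self.target)

g = ContextualGoal(node="body_temperature", target=37.0, scale=10.0)
print(g.activation_strength({"body_temperature": 39.5}))  # 25.0
print(g.activation_strength({"body_temperature": 37.0}))  # 0.0: on target
```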
5. The Wrapper Structure
All this time, I've been avoiding the subject of what the GPS will be pointed at. I've established that it'll be used for self-reflection, and that it won't be optimizing for a fixed goal aligned with the outer objective — won't be a wrapper-mind. But what spread of goals will it actually pursue at the object-level? What would be the wrapper structure around the GPS?
5A. Assumption Re-Check
First, let's check whether aligning it with the outer objective still doesn't work. What if we point it at the joint task of reward maximization plus self-reflection? Make it want the target objective plus inform it that there's some useful yet inaccessible knowledge buried in its mind. That'll work... if it had infinite time to think, and could excavate all the procedural and implicit knowledge prior to taking any action. But what if it needs to do both in lockstep?
In addition, "point it at the X task" hides a lot of complexity. Even if the reward-nodes are already represented in the world-model, building a utility function around them is potentially a complex problem, requiring many parameter updates/generations. All the while, the GPS would be sitting there, contributing nothing. That's not how our greedy selection process works — there just aren't gradients from "GPS does nothing" to "GPS is inner-aligned with the target objective".
No, we need some interim objective for the GPS — something we can immediately hook it up to and have it optimize, and which would at least somewhat correlate with good performance on the target objective. Once that's done, we can incrementally rewrite that proxy objective into the target objective... if we'll even want to, at that point.
5B. The Interim Objective
A few points:
This makes the combination of all contextual goals, let's call it GΣ, a good proxy objective for the target objective U. A combination of all contextual behaviors BΣ, in turn, is a proxy for GΣ, and a second-order proxy for U.
Prior to the GPS' appearance, the agent was figuratively pursuing BΣ ("figuratively" because it wasn't an optimizer, just an optimized system). So the interim objective can afford to be as bad as BΣ, but no worse. On the other hand, pursuing GΣ directly would probably be an improvement, as we wouldn't have to go through two layers of proxies.
The GPS can help us with that: it can help us move from BΣ to GΣ, and then continue all the way to U.
We do that by first enslaving the GPS to the heuristics, then letting it take over the agent once it's capable enough.
5C. Looping Heuristics Back In
The obvious solution is obvious: make heuristics themselves control the GPS. The GPS' API is pretty simple, and depending on the complexity of the cross-heuristic communication channel, it might be simple enough to re-purpose its data formats for controlling the GPS. Once that's done, the heuristics can make it solve tasks for them, and become more effective at achieving BΣ (as this will give them better ability to runtime-adapt to unfamiliar circumstances, without waiting for the SGD/evolution to catch up).
I can see it taking three forms:
The real answer is probably all of this. Indeed, as I've illustrated, I think we observe what looks like all three varieties in humans. Emotions, instincts, and so on.
5D. Nurturing the Mesa-Optimizer
As this is happening, we gradually increase the computational capacity allocated to the GPS and the breadth of its employment. We gradually figure out how to set it up to do self-reflection.
It starts translating the procedural and the implicit knowledge into a language it understands — the language of the world-model. Bs and Gs are explicated and incorporated into it, becoming just more abstractions the GPS can make use of.
At this point, it makes sense to give the GPS ability to prompt itself — to have influence over what goes into the problem specifications of its future instances. It'll be able to know when to solve problems by engaging in contextual behaviors, even if it doesn't understand why they work, and optimizing for contextual goals is literally what it's best at.
This way of doing it would have an advantage over letting heuristics control the GPS directly:
We'll continue to improve the GPS, improving its ability to do this sort of deliberative long-term goal pursuit. At the same time, we'll lessen the heuristics' hold on it, and start turning heuristics towards the GPS' purposes — marginally improving their legibility, and ensuring that the process of reverse-engineering them aims the agent at U with more and more precision.
The agent as a whole will start moving from a BΣ-optimizer to a GΣ-optimizer, and then even beyond that, towards U.[6]
5E. Putting It Together
So, what's the wrapper structure around the GPS, at some hypothetical "halfway point" in this process?
6. Value Compilation
The previous section glossed over a crucial point: how do we turn a mass of contextual behaviors and goals into a proper unitary mesa-objective GΣ?
Because we want to do that. The complex wrapper structure described in the previous section is highly inefficient. At the limit of optimality, everything wants to be a wrapper-mind. There are plenty of reasons for that:
So, how do we collapse GnaiveΣ into a compact, coherent GΣ?
(For clarity: everything in this section is happening at runtime. The SGD/evolution are not involved, only the GPS. At the limit of infinite training, GΣ would become explicitly represented in the agent's parameters, with the GPS set up to single-mindedly pursue it and all the heuristics made into passive and legible pieces of the world-model. But I expect that the agent would become superintelligent and even "hyperintelligent" long before that — i. e., capable of almost arbitrary gradient-hacking — and so the late-game hard-coded GΣ would be chosen by it, and therefore would likely be a copy of a GΣ the AI's earlier instance compiled at runtime. So the process here is crucial.)
6A. The Basic Algorithm
Suppose we've recovered contextual goals G1 and G2. How do we combine them?
a) Conjunction. Consider the following setup:
Suppose that we have a heuristic hi, which tries to keep the value of xi at some number, and another heuristic hk, which does the same for xk. Suppose that together, their effects keep the value of xv within some short range. That allows us to form a contextual goal Gi∧k=Gv, which activates if xv's value strays far from the center of the range hi and hk effectively kept it in.
In a sense, this is the same algorithm we must've followed to go from contextual actions to contextual goals in the first place! To do that, we gathered statistical information about a heuristic's activations, and tried to see if it consistently controlled the value of some node downstream of the action-nodes. Same here: we know that hi and hk control the values of xi and xk, we hypothesize that there's some downstream node whose value their activations consistently control, and we conclude that this is the "purpose" of the two heuristics.
Technical note: How do we compute the activation-strength function of the new goal, Dv? Well, Di and Dk increased as xi and xk's values went farther from their target values, and this relationship kept xv's value near some target of its own. In turn, this means that Di+Dk increased as xv's value went far from some target. From that, we can recover some function Dv which tracks the value of xv directly, not through the intermediaries of xi and xk. Note the consequence: the combined goal would be approximately as strong as the sum of its "parents".
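A toy numerical sketch of that, assuming linear strength functions (my simplification): the compiled goal reads the downstream node directly, at the parents' combined scale:

```python
# Sketch of conjunctive compilation (linear strengths are a toy assumption):
# two parent goals over x_i, x_k are replaced by one goal over the downstream
# node x_v, with strength roughly equal to the parents' sum.

def make_strength(node, target, scale):
    """Returns an activation-strength function over a world-model dict."""
    return lambda wm: scale * abs(wm[node] - target)

d_i = make_strength("x_i", 5.0, 2.0)  # parent goal over x_i
d_k = make_strength("x_k", 1.0, 3.0)  # parent goal over x_k

# The compiled goal tracks x_v directly, at the parents' combined scale:
d_v = make_strength("x_v", 10.0, 2.0 + 3.0)

wm = {"x_i": 6.0, "x_k": 0.0, "x_v": 11.0}
print(d_i(wm) + d_k(wm))  # 5.0: the parents' summed strength
print(d_v(wm))            # 5.0: same strength, read off x_v directly
```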
Important note: After we compile Gv, we stop caring about xi and xk! For example, imagine that off-distribution, the environmental tendencies change: the values of xi and xk that kept xv near a certain value no longer do so. If we'd retained the original contextual goals, we'd keep xi and xk near their target values, as before, even as that stops controlling xv. But post-compilation, we do the opposite: we ignore the target values for xi and xk to keep xv near its newly-derived target.
Human example: A deontologist would instinctively shy away from the action of murder. A utilitarian might extrapolate that the real reason for this aversion is that she dislikes it when people die. She'd start directly optimizing for the minimal number of people killed, and would be able to do things she wouldn't before, like personally killing a serial killer.
Another: Imagine a vain person who grew up enjoying a wealthy lifestyle. As a child, he'd developed preferences for expensive cars, silk pillows, and such. As he grew up, he engaged in value compilation, and ended up concluding that what he actually valued were objects that signify high status. Afterwards, he would still love expensive cars and silk pillows, but only as local instantiations of his more abstract values. Post-reflection, he would be able to exchange cars-and-pillows for yachts-and-champagne without batting an eye — even if that wouldn't make sense to his childhood self.
This shtick is going to cause problems for us in the future.
b) Disjunction. Consider this different setup:
Again, suppose we have two contextual goals Gi and Gk defined over xi and xk respectively. But there's no obvious way to combine them here: if their causal influences meet anywhere, we haven't discovered these parts of the world-model yet. Their contexts are entirely separate.
As such, there isn't really a way to unify them yet: we just go Gi∧k=Gi∧Gk, and hope that, as the world-model expands, the contexts would meet somewhere.
As an example, we might consider one's fruit preferences versus one's views on the necessity of the Oxford comma. They seem completely unrelated to each other. (And as a speculative abstract unification, perhaps one is entertained by ambiguity or duality-of-interpretation, and so prefers no Oxford comma and fruits with a mild bittersweet taste, as instantiations of that more abstract preference? Though, of course, human values don't all have to converge this way.)
Now let's complicate it a bit:
Again, we have contextual goals Gi and Gk. We don't see a way to combine them, yet neither are they fully separate, as their contexts are entwined. If both xi and xk assume undesirable states, there might not be a distribution of values we may assign a1 and a2 such that both contextual goals are achieved. How do we deal with it?
Well, the selection pressure ran into this problem a while ago, well before the GPS, and it's already developed a solution: the cross-heuristic communication channel. Any given heuristic hi has an activation strength Di, and if two heuristics hi, hk contest, the actions taken are calculated as some function of the activation strengths Di, Dk, and the recommended actions Asi, Ask.
The GPS can recover all of these mechanics, and then just treat the sum of all "activation strengths" as negative utility to minimize-in-expectation. The actual trade-offs seem to heavily depend on the specifics of the situation (e. g., can we "half-achieve" every goal, or do we have to choose one or the other?) — I've unfortunately failed to come up with a general yet compact way to describe conflict-resolution here. (Though I can point to some related ideas.)
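A toy sketch of the minimize-summed-strengths idea. The dynamics (a single action a that pushes xi and xk in opposite directions) and the linear strengths are my own assumptions:

```python
# Sketch of conflict resolution: treat the summed activation strengths as a
# loss, and search the actions for its minimum. The dynamics (one action a
# pushing x_i and x_k in opposite directions) are a toy assumption.

def total_strength(wm, goals):
    return sum(scale * abs(wm[node] - target) for node, target, scale in goals)

def resolve(goals, candidate_actions, dynamics):
    """Pick the action whose resulting state minimizes the summed strengths."""
    return min(candidate_actions, key=lambda a: total_strength(dynamics(a), goals))

def dynamics(a):
    return {"x_i": a, "x_k": 10 - a}  # raising x_i necessarily lowers x_k

goals = [("x_i", 8.0, 1.0), ("x_k", 8.0, 1.0)]  # jointly unsatisfiable targets
print(resolve(goals, range(0, 11), dynamics))  # 2: every a in 2..8 ties; min picks the first
```

Notice how the toy already exhibits the "half-achieve everything vs. pick one" ambiguity: here a whole range of compromises is equally good, and the tie-break is arbitrary.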
A particular degenerate case is if the two contextual goals are directly opposed to each other. That is, suppose that the actions that bring xi near the target almost always move xk's value outside it — the way a desire to smoke interferes with the desire to look after one's health. In this case, if Dk always outputs higher values than Di, Gi ends up fully suppressed: Gi∨k=Gk.
Suppose, however, that Gi wasn't a hard-coded heuristic. Rather, Gi was produced as the result of value compilation, perhaps as a combination of contextual goals over x3 and x6. In this case, we may "decompile" Gi back into G3 and G6, and try to find ways to re-compile them such that the result doesn't interfere with Gk. Perhaps G3 is "have a way to relax when stressed" and G6 is "I want to feel cool", and we can compile them into Gd = "carry a fabulous fidget toy".[7]
c) By iteratively using these techniques, we can, presumably, arrive at some GΣ. GΣ might end up fully unitary, like a perfect hedonist's desire to wirehead, or as a not-perfectly-integrated spread of values GΣ:G1∧G2…∧Gn. But even if it's the latter, it'll be a much shorter list than GnaiveΣ, and the more abstract goals should allow greater generalizability across environments.
One issue here is that there might be multiple GΣ consistent with the initial set of heuristics. As far as human value reflection goes, it's probably fine: either of them should be a fair representation of our desires, and the specific choice has little to do with AI Notkilleveryoneism[8]. But when considering how an AI's process of value reflection would go, well, it might turn out that even for a well-picked suite of proto-values, only some of the final GΣ don't commit omnicide.
Anyway, that was the easy part. Now let's talk about all the complications.
6B. Path-Dependence
Suppose that you have three different contextual goals, all of equal activation strength. For example, G1 is "I like looking at spiders, they're interesting", G2 is "I like learning about spider biology, it's interesting", and G3 is "I want to flee when there's a spider near me, something about being in their vicinity physically just sets me off".
Suppose that you live in a climate where there aren't many spiders, so G3 almost never fires. On the other hand, you have Internet access, so you spend day after day looking at spider pictures and reading spider facts. You compile the first two proto-values into G1∧2: "I like spiders".
Then you move countries, discover that your new home has a lot of spiders, and to your shock, realize that you fear their presence. What happens?
Perhaps you compile a new value, G(1∧2)∧3 = "I like spiders, but only from a distance". Or you fully suppress G3, since it's revealed to be at odds with G1∧2 whenever it activates.
That's not what would've happened if you started out in an environment where all three contextual goals had equal opportunity to fire. G3 would've counterbalanced G1 and G2, perhaps resulting in G1∧2∧3 = "I guess spiders are kind of neat, but they freak me out".
Thus, the process of value compilation is path-dependent. It matters in which order values are compiled.
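The spider example can be replayed numerically. The merging rule below is a toy assumption of mine (a goal much stronger than another suppresses it outright; otherwise the two average by strength), but it's enough to show order-dependence:

```python
# Sketch of path-dependence. The merging rule is a toy assumption: a goal
# much stronger than another suppresses it outright; otherwise the two
# average by strength. Goals are (target, strength) pairs over one node.

def merge(g1, g2, suppression_ratio=2.0):
    (t1, s1), (t2, s2) = g1, g2
    if s1 >= suppression_ratio * s2:
        return g1  # the much-weaker goal is suppressed entirely
    if s2 >= suppression_ratio * s1:
        return g2
    total = s1 + s2
    return ((t1 * s1 + t2 * s2) / total, total)  # strength-weighted compromise

g1, g2, g3 = (0.0, 1.0), (0.0, 1.0), (10.0, 1.5)  # "like", "like", "fear"

order_a = merge(merge(g1, g2), g3)  # the two likes compile first
order_b = merge(g1, merge(g2, g3))  # the fear enters early
print(order_a)  # roughly (4.29, 3.5): all three survive as a compromise
print(order_b)  # (6.0, 2.5): the early fear-heavy merge then suppresses g1
```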
6C. Ontological Crises
What happens when a concept in a world-model turns out not to correspond to a concrete object in reality — such as the revelation that things consist of parts, or that souls aren't real? What if that happens to something you care about?
This is actually fairly easy to describe in this model. There are two main extreme cases and a continuum between them.
a) "Ontology expansion". The first possibility is that the object we cared about was a natural abstraction. This shift roughly looks like the following:
Suppose we cared about xp, and it turned out to be a natural abstraction over a system of xp1,xp2,xp3. That would merely mean that (1) the state of xp is downstream of the states of xp1,xp2,xp3, and (2) we can model the impact of our actions on xp more precisely by modelling xp1,xp2,xp3, if we so choose. Which we may not: maybe we're indifferent to the exact distribution of values in xp1,xp2,xp3, as long as the value of xp they compute remains the same.
And xp is still a reasonable object to place on our map. The same way it's meaningful to think of and care about "humans", even if they're not ontologically basic; and the same way we would care about the high-level state of a human ("are they happy?") and be indifferent towards the low-level details of that state ("okay, they're happy, but are the gut bacteria in their stomach distributed like this or like that?").
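A toy sketch of the point above, with averaging as a stand-in for whatever function computes the abstraction from its parts (my own assumption):

```python
# Sketch of an ontology expansion: x_p turns out to be an abstraction
# computed from sub-nodes. A goal defined over x_p keeps working unchanged,
# and stays indifferent between micro-states that yield the same x_p.
# The averaging abstraction is a toy assumption.

def x_p(sub_nodes):
    return sum(sub_nodes) / len(sub_nodes)  # the high-level summary value

def goal_strength(sub_nodes, target=5.0):
    return abs(x_p(sub_nodes) - target)  # the goal only reads the abstraction

print(goal_strength([4.0, 5.0, 6.0]))   # 0.0: on target
print(goal_strength([0.0, 5.0, 10.0]))  # 0.0: different micro-state, same x_p
print(goal_strength([9.0, 9.0, 9.0]))   # 4.0: only the abstract value matters
```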
b) "Ontology break". In the second case, the object we cared about turns out to flat-out not exist; not correspond to anything in the territory:
As an example, consider a spiritual person realizing that spirits don't exist, or a religious person in the middle of a crisis of faith. We valued xp, but now there's just nothing resembling it in its place. What can we do?
Do value decompilation, again. Suppose that the initial set of heuristics from which the value was compiled wasn't defined over xp. Suppose we had contextual goals G1 and G2 defined over x1 and x2 respectively, which we then collapsed into Gp. We can fall back on them: we can remember why we decided we cared about xp, then attempt to re-compile new downstream values from G1 and G2 in the new world-model. Perhaps, say, we end up noticing that G1 and G2 do an awful lot to control the value of x7...?
As a human example, perhaps an apostate would decide that they cared about God because they sought a higher purpose, or wanted to feel a sense of belonging with their community. If so, they may find ways to satisfy these urges that would work in the new ontology.
The quantitative effects of this kind of ontology break can be quite dramatic. The non-existent node might be a linchpin of the world-model, having "infected" most of the nodes in it. It would entail a lengthy process of value re-compilation, and the final utility function might end up very different. Qualitatively, though, nothing would change.
We can imagine an extreme case where the basic initial heuristics were defined over empty concepts as well. In this case, perhaps they'll just have to be outright suppressed/ignored.
A degenerate case is if all initial heuristics are defined over empty concepts. I expect it's unlikely in real-life systems, though: many of them would likely be associated with mental actions, which would be self-evidently real by the very functioning of the system (by analogy with cogito ergo sum).
c) In practice, most cases will likely fall somewhere between the two extremes. The world-model expansion would reveal that what the agent cared about wasn't precisely a natural abstraction, but also not entirely non-existent. They might end up re-calculating the correct natural abstraction, then deciding to care for it, experiencing a slight ontology break. (Human souls aren't real, but human minds are, so we switch over, maybe with a bit of anguish. A more mundane example: someone you knew turned out to be a very different person from the one you'd thought they were, and you have to re-evaluate your relationship.)
Or maybe they'll end up caring about some low-level properties of the system in addition to the high-level ones, as the result of some yet-disjointed value finding something in the low-level to care about. (E. g., maybe some distributions of bacteria are aesthetically pleasing to us, and we'd enjoy knowing that one's gut bacteria are arranged in such a fashion upon learning of their existence? Then "a human" would cause not only empathy to fire, but also the bacteria-aesthetics value.)
6D. Meta-Cognition
This is the real problem.
a) Meta-heuristics. As I'd mentioned back in 5C, there'll likely be heuristics that directly intervene on the GPS' process. These interventions might take the form of interfering with value compilation itself.
The GPS can notice and explicate these heuristics as well, of course. And endorse them. If endorsed, they'll just take the form of preferences over cognition, or the world-model. A spiritualist's refusal to change their world-view to exclude spirits. A veto on certain mental actions, like any cognitive process that concludes you must hate your friends. A choice to freeze the process of value compilation at a given point, and accept the costs to the coherency of your decisions (this is how deontologists happen).
Any heuristic that can do that would gain an asymmetric advantage over those that can't, as far as its representation in the final compilation is concerned.
I hope the rest of this post has shown that such mechanisms are as likely to be present in AIs as they are in humans.
b) Meta-cognition itself. The core thing to understand, here, is that the GPS undergoing value compilation isn't some arcane alien process. I suspect that, in humans, it's done fully consciously.
So, what problems do humans doing value reflection run into?
First off, the GPS might plainly make a mistake. It doesn't have access to the ground truth of its heuristics, and can't confirm for sure whether it has derived a correct contextual goal or conducted a valid compilation.
Second, meta-cognitive preferences can originate externally as well. A principled stance to keep deontological principles around even if you're a utilitarian, for example.
Third, the GPS is not shy about taking shortcuts. If it encounters some clever way to skip past the lengthy process of value compilation and get straight to the answer, it'll go for it, the same way we'd use a calculator instead of multiplying twenty-digit numbers manually. Hence: humans becoming hijacked by various ideologies, religions, and philosophies that claim to provide the ultimate answers to morality and the meaning of life.
Thus, the final compilation might not even have much to do with the actual initial spread of heuristics.
At the same time, meta-cognition might counteract the worst excesses of path-dependence. We can consciously choose to "decompile" our values if we realize we've become confused, look at our initial urges, then re-compile the more correct combination from them.
6E. Putting It Together
As such, the process for computing the final mesa-objective GΣ is a function of:
Is it any wonder the result doesn't end up resembling the initial outer objective U?
I'm not sure if that's as bad as it looks, as far as irreducible complexity/impossibility-to-predict goes. It might be.
7. Miscellanea
a) Heuristics' Flavour. You might've noticed that this post has been assuming that every heuristic ends up interpreted as a proto-value that at least potentially gets a say in the final compilation. That's... not right. Is it? I'm not actually sure.
I think the Shard Theory answer would be that yes, it's right. Every heuristic is a shard engaging in negotiation with every other shard, vying for influence. It might not be "strong" or very good at this, but an attempt will be made.
Counterpoint: Should, say, your heuristic for folding a blanket be counted as a proto-value? The GPS is reverse-engineering them to gain data on the environment, and this should really just be a "skill", not a proto-value.
Counter-counterpoint: Imagine an agent with a lot of heuristics for winning at chess. It was clearly optimized for playing chess. If the GPS' goal is to figure out what it was made for and then go optimize that, then "I value winning at chess" or "I value winning" should be plausible hypotheses for it to consider. It makes a certain kind of common sense, too. As to blanket-folding — sure, it's a rudimentary proto-value too. But it's too weak and unsophisticated to get much representation. In particular, it probably doesn't do meta-cognitive interventions, and is therefore at an asymmetrical disadvantage compared to those that do.
... Which is basically just the Shard Theory view again.
So overall, I think I'll bite that bullet, yes. Yes, every starting heuristic should be interpreted as a proto-value that plays a part in the overall process of value compilation. (And also as a skill that's explicated and integrated into the world-model.)
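The "I value winning at chess" hypothesis-generation step above can be caricatured in a few lines (entirely hypothetical structure, not a claim about how the GPS actually works): rank "I was made to do X" hypotheses by how much heuristic machinery serves each activity.

```python
# Toy sketch: the GPS hypothesizing proto-values from its heuristic
# inventory, weighting each domain by the total strength of the
# heuristics serving it. Made-up structure and numbers.

from collections import Counter

# Each heuristic tagged with the activity it serves and a rough strength.
heuristics = [
    ("chess", 0.9), ("chess", 0.8), ("chess", 0.7), ("chess", 0.9),
    ("blanket_folding", 0.1),
]

def proto_value_hypotheses(hs):
    """Rank 'I was optimized to do X' hypotheses by total heuristic strength."""
    weights = Counter()
    for domain, strength in hs:
        weights[domain] += strength
    return [domain for domain, _ in weights.most_common()]

# "I value winning at chess" dominates; blanket-folding is a proto-value
# too, just too weak to get much representation.
assert proto_value_hypotheses(heuristics)[0] == "chess"
```

On this caricature, blanket-folding isn't excluded from the compilation in kind, only outvoted in degree — which is the bullet bitten above.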
b) Sensitivity to the Training Process. Let's consider two different kinds of minds: humans, who are likely trained via on-line reinforcement learning, and autoregressive ML models, which can continuously cogitate forever even with frozen weights by chaining forward passes. (Suppose the latter scales to AGI.)
The internal dynamics in these minds might be quite different. The main difference is that in humans, heuristics can adapt at runtime, and new heuristics can form, while in the frozen-weights model, the initial spread of heuristics is static.
As one obvious consequence, this might make human heuristics "more agenty", in the sense of being able to conduct more nuanced warfare and negotiation between each other. In particular, they'd have the ability to learn to understand new pieces of the world-model the GPS develops, and learn new situations in which they must tamper with the GPS for self-preservation (unlike static heuristics, for which this is inaccessible information). "Heuristic" might even be a bad way to describe such things; "shards" might be better. Perhaps such entities are best modeled as traders rather than functions-over-nodes?
But a potentially bigger impact is on value-compilation path-dependence.
In humans, when we compile contextual goals G1 and G2 into G1∧2, and then stay with G1∧2 for a while, we end up developing a new shard around G1∧2 — a structure of the same type as the initial G1, G2. The shards for G1 and G2, meanwhile, might die out as their "child" eats the reinforcement events that would've initially gone to them. (Consider the example of the Vain Man from 6A.)
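This "child eats its parents' reinforcement" dynamic can be simulated with a toy model (the update rule, the proportional-split assumption, and every number are invented for illustration): the child shard, being active in both parents' contexts, draws reinforcement from both, and under decay it eventually starves the parents out.

```python
# Toy simulation of a child shard G1&2 outcompeting its parent shards.
# All numbers and the update rule are made up for illustration.

strengths = {"G1": 1.0, "G2": 1.0, "G1&2": 0.6}

# Contexts each shard activates in; the child covers both parents' contexts.
contexts = {"G1": {"c1"}, "G2": {"c2"}, "G1&2": {"c1", "c2"}}

REWARD = 0.1   # reinforcement available per context per cycle
DECAY = 0.05   # shards weaken if they don't win enough reinforcement

for _ in range(300):
    gains = {s: 0.0 for s in strengths}
    for ctx in ("c1", "c2"):
        eligible = [s for s in strengths if ctx in contexts[s]]
        total = sum(strengths[s] for s in eligible)
        if total > 0:
            for s in eligible:
                # Reinforcement is split in proportion to shard strength.
                gains[s] += REWARD * strengths[s] / total
    for s in strengths:
        strengths[s] = max(0.0, strengths[s] + gains[s] - DECAY)

# The child, active in both contexts, has starved its parents out.
assert strengths["G1&2"] > strengths["G1"]
assert strengths["G1"] < 1.0 and strengths["G2"] < 1.0
```

Under these (arbitrary) dynamics the parents briefly grow, then decline once the child overtakes them — a crude picture of path-dependence getting "locked in" at runtime.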
But if that initial set of heuristics is frozen, as in an ML model, perhaps the agent always ends up developing a "path-independent" generalization as described at the end of 6A? The runtime-compiled values would be of a different type to the initial heuristics: just mental constructs. And if we assume the AI to be superintelligent when it finalizes the compilation, it's not going to be fooled by whatever it reads in the data, so the "mistaken meta-cognition" concerns don't apply. Certainly it might make mistakes at the start of the process, but if incorrect compilations aren't "sticky" as they are with humans, it'll just decompile them, then re-compile them properly!
Reminder: This doesn't mean it'll become aligned with the outer objective. GΣ is still not U but a proxy for U, so even path-independent value compilation ends with inner misalignment.
But it does make GΣ derivable from just the parameters and the world-model, not the data.
c) Self-Awareness. I've belatedly realized that there's a third structure the GPS can interface with: the GPS itself. This fits nicely with some of my published and yet-unpublished thoughts on self-awareness and the perception of free will.
In short, the same way it's non-trivial to know what heuristics/instincts are built into your mind, it's non-trivial to know what you're currently thinking of. You need a separate feedback loop for that, a structure that summarizes the GPS' activity and feeds it back into the GPS as input. That, I suspect, directly causes (at least in humans):
But that probably ought to be its own separate post.
8. Summary
9. Implications for Alignment
My main practical takeaway is that I am now much more skeptical of any ideas that plan to achieve alignment by guiding the process of value formation, or by setting up a "good-enough" starting point for it.
Take the basic Shard Theory approach as the example.[10] It roughly goes as follows:
One issue is that the value-humans shard would need to be perfectly aligned with human values, and that's most of this approach's promised advantage gone. That's not much of an issue, though: I think we'd need to do that in any workable approach.
But even if we can do that, I fear this wouldn't work out even in the path-independent scenario. The Vain Man from 6A, again: over the course of value compilation, the AI might decide that it only likes humanity as an instantiation of some more abstract principle. Which might be something nice like "all possible sapient life"... or maybe it's more paperclips, and it trades us away like a car for a yacht.[11]
And then we get into the actually complicated path-dependent meta-cognitive mess (which we have to be ready to deal with, since we don't know what the last-minute AGI architecture will look like), and... I don't think this is tractable at all. We'd need to follow the AI's explicit reasoning into superintelligence; it'd be hopeless. It would take decades to manually reverse-engineer and translate the abstractions it'll be thinking in.
So I suspect that we won't be able, in practice, to figure out how to set up some initial proto-values such that they'll compile into a non-omnicidal utility function. I suspect any plan that hinges on this is doomed.[12]
My current most promising alignment scheme is as follows:
If there are no proto-values leading the GPS astray, there's no problem. (Those are, by the way, the "unnatural conditions" I've mentioned all the way back in Section 2.)
Finding a way to identify learned world-models, or somehow training up a pure world-model, therefore seem like high-priority research avenues.
Bonus: The E-Coli Test for Alignment
Alright: an E. coli doesn't implement a GPS, so it can't do value compilation on its own. As such, it's unclear how meaningful it is to talk about its "values". But an attempt can be made! Here's how we might do it:
Hm, that was insightful. For one, it appears that we don't need the agent's own world-model for path-independent (or random-path) value compilation! We can just get the "true" world-model (if we have access to it) and the heuristics set, then directly compile the final values using it. In essence, it's just equivalent to getting all the ontological crises out of the way at the start.
What we can do with subjective world-models is compute "impossible fantasy" values — values that the agent would have arrived at if the world really had the structure they mistakenly believe it to have; if the world's very ontology was optimized for their preferences. (E. g., valuing "spirits" if spirits were a thing.)
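The "true world-model plus heuristic set" compilation can be sketched in miniature (a deliberately crude illustration; the chemotaxis heuristics, the effect table, and the scoring rule are all invented): read off a candidate compiled value by checking which variable the fixed heuristic set systematically pushes up under the true dynamics.

```python
# Toy sketch of the "E. coli test": derive candidate values from a
# fixed heuristic set plus the *true* world-model, with no agent-side
# world-model involved. All structures are made up for illustration.

# Heuristics as (observed situation, action) pairs — a cartoon of
# chemotaxis: swim up nutrient gradients, tumble otherwise.
heuristics = [
    ("nutrient_gradient_positive", "keep_swimming"),
    ("nutrient_gradient_negative", "tumble"),
]

# True world-model: the expected effect of each (situation, action)
# pair on the candidate variable 'nutrient_intake'. Made-up numbers.
true_effects = {
    ("nutrient_gradient_positive", "keep_swimming"): +1.0,
    ("nutrient_gradient_positive", "tumble"): -0.5,
    ("nutrient_gradient_negative", "keep_swimming"): -1.0,
    ("nutrient_gradient_negative", "tumble"): +0.5,
}

def compiled_value_score(heuristic_set, effects):
    """How consistently does the heuristic set push the variable up?"""
    return sum(effects[(obs, act)] for obs, act in heuristic_set)

# The heuristic set systematically increases nutrient intake, so
# "maximize nutrient intake" is a plausible compiled-value hypothesis.
assert compiled_value_score(heuristics, true_effects) > 0
```

Note that the subjective world-model never appears: this is exactly the "get all the ontological crises out of the way at the start" move from the bonus section.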
10. Future Research Directions
I think this model is both robust to a lot of possible changes, sufficing to describe the dynamics within many categories of agents-generated-by-a-greedy-algorithm, and very sensitive to other kinds of interventions. For example, the existence of some additional convergent data structure, or a different point for the GPS to originate from, might change the underlying path dynamics a lot.
That said, I now suspect that the work on value compilation/goal generalization is to a large extent a dead end, or at least not straightforwardly applicable. It seems that the greedy selection algorithms and the AI itself can be trusted with approximately 0% of the alignment work, and approximately 100% of it will need to be done by hand. So there may not be much point in modeling the value-formation process in detail...
The caveat here is that building a solid theoretical framework of this process will give us data regarding what features we should expect to find in trained models, and how to find them — so that we may do surgery on them. As such, I think the questions below are still worth investigating.
I see two ways to go from here: expanding this model, and concretizing it. Expansion-wise, we have:
On the concretization side:
We'll return to them in Section 9.
This function can be arbitrarily complex, too — maybe even implementing some complex "negotiations" between heuristics-as-shards. Indeed, this is plausibly the feature from which the GPS would originate in the first place! But this analysis tries to be agnostic as to the exact origins of the GPS, so I'll leave that out for now.
And plausibly some shared observation pre-processing system, but I'll just count it as part of the world-model.
Though this is potentially subject to the specifics of the training scheme. E. g., if the training episodes are long, or we're chaining a lot of forward passes together as in an RNN, that would make runtime computations more effective at this than the SGD updates. That doesn't mean the speed prior is going to save us/reduce the path-dependence I'll go on to argue for here, because there'll still be some point at which the GPS-based at-runtime reverse-engineering outperforms selection-pressure-induced legibility. But it's something we'd want fine-grained data on.
Second-order natural abstraction?
Naively, this process would continue until the agent turns into a proper U-optimizer. But it won't, because of gradient starvation + the deception attractor. There are other posts talking about this, but in short:
Once GΣ agrees with U in 95% of cases, the selection pressure faces a choice between continuing to align GΣ, and improving the agent's ability to achieve GΣ. And it surely chooses the latter most of the time, because unless the agent is already superintelligently good at optimization, it probably can't actually optimize for GΣ so hard it decouples from U.
Then, once the agent is smart enough, it probably has strategic awareness, wants to protect GΣ from the selection pressure, and starts attempting deceptive alignment. And then it's in the deception attractor: its performance on the target objective rises more sharply with improvements to its general capabilities (since those improve both its ability to achieve U and its ability to figure out what it should be pretending to want) than with improvements to its alignment.
Note: This isn't a precisely realistic example of value compilation, for a... few reasons, but mainly the phrasing. Rather than "smoking" and "using a fabulous fidget toy", it should really say "an activity which evokes a particularly satisfying mixture of relaxation and self-affirmation".
There seems to be some tendency for values to increase in abstractness as the process of compilation goes on: earlier values are revealed to be mere "instantiations" of later values, such that we become indifferent to the exact way they're instantiated (see the cars vs. yachts example). It works if "relax" and "feel cool" are just an instantiation of "feel an emotion that's a greater-than-its-parts mix of both", such that we're indifferent to the exact mix. But they're not an instantiation of "smoke a cigar": if smoking ceased to help the person relax and feel cool, they'd stop smoking and find other ways to satisfy those desires.
Although I imagine some interesting philosophy out of it.
Or maybe not. Something about this feels a bit off.
Note that this isn't the same as my disagreeing with the Shard Theory itself. No, I still think it's basically correct.
You might argue that we can set up a meta-cognition shard that implacably forbids the AI's GPS from folding humanity away like this, the way something prevents deontologists from turning into utilitarians, or the way we wouldn't kill-and-replace a loved one with a "better" loved one. I'm not sure one way or another whether that'll work, but I'm skeptical. I think it'd increase the problem difficulty dramatically: it'd require the sort of global, robust control over the AI's mind that we could just use to align it perfectly in the first place.
One idea here would be to wait until the AI does value compilation on its own, then hot-switch the GΣ it derives. That won't work: by the point the AI is able to do that, it'd be superintelligent, and it'll hack through anything we'll try to touch it with. We need to align it just after it becomes a GPS-capable mesa-optimizer, and ideally not a moment later.
One issue I don't address here is that in order to do so, the GPS would need some basic meta-cognitive wrapper-structure and/or a world-model that contains self-referencing concepts — in order to know how to solve the problem of giving its future instances good follow-up tasks. I've not yet assessed the tractability of this. We might need some way to distinguish such structures from other heuristics, or figure out how to hand-code them.