Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Distributed Decisions


Do you think that thinking explicitly about distributed systems (in the theoretical computer science sense) could be useful for having different frames or understanding of the tradeoffs? Or are you mostly using the idea of distributed systems as an intuitive frame without seeing much value in taking it too seriously?

If I may be so bold, the answer should be a guarded yes.

A snag is that the correct theory of what John calls 'distributed systems' or 'Time', and what theoretical CS academics generally call 'concurrency', has not yet been fully constructed. To be sure, there are many quite well-developed theoretical frameworks - e.g. the pi calculus, or the various models of concurrency like Petri nets, transition systems, event structures, etc. They're certainly on my list of 'important things I'd like to understand better'.

Our world, and our sensemaking of it, is fundamentally concurrent. If we had the 'correct' theory of concurrency, and were able to coherently combine it with decision theory under uncertainty, that would be very powerful.

Two answers:

- I agree with Self-Embedded Agent that there are likely powerful frames for thinking about distributed compute which have not yet been discovered, and existing work may hint toward them. That's the sort of thing which is probably not useful for most researchers to think about, but worth at least some thought.
- There's a shared core to distributed models which I do think basically-all technical researchers in the field should be familiar with. That's best picked up by seeing it in a few different contexts, and theory of distributed systems is one possible context to pick it up from. (Some others: Bayes nets/causality, working with structured matrices, distributed programming in practice.)

How technical is the use of the word 'distributed' here?

> While arranging my evening, I may perform some Bayesian updates. Maybe I learn that the movie is not available on Netflix, so I ask a friend if they have a copy, then check Amazon when they don’t. This process is reasonably well-characterized as me having a centralized model of the places I might find the movie, and then Bayes-updating that model each time I learn another place where I can/can’t find it.

It seems more like going through a list of places and checking off 'not there' than Bayesian updating. Sure, that's a special case,

> My friends and I, as a system, are *not* well-modeled as Bayesian updates to a single central knowledge-state; otherwise we wouldn’t check Netflix twice.

but it seems like 'centrality' is less likely to be the thing here than something else. Coordination is mentioned, but it seems more like you both check Netflix because neither of you is asking 'what if _ checks Netflix?'. In other words, maybe you're not acting in a 'Bayesian manner': rather than evaluating the probability, you take the action. I would also guess you didn't pick Netflix because 'the probability points that way'.

If you watch Netflix a lot (or have used it recently) then it might come to mind quickly. If your friend watches something else a lot, maybe they check there first.

There's not much benefit to more elaborate protocols here (beyond texting your friend that it's not on Netflix) if there aren't a lot of services to search. Otherwise you could come up with a list together (or independently) and each handle your part, or each pick some services off the list at random, figuring that if both of you do that you're more likely to find it, even if you don't coordinate more. So I won't go into a lot more detail here, other than mentioning:

There are considerations at play here other than probability: cost. You have Netflix, so you check there.
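As a toy illustration of the split-the-list-vs-pick-at-random tradeoff discussed above (the numbers and setup are my own, not from the comment): with N services and the movie equally likely to be on any one of them, two searchers who each check k services find it with probability min(2k, N)/N when they split a list, versus 1 - ((N-k)/N)² when each samples k services independently at random.

```python
from fractions import Fraction

def p_coordinated(n, k):
    """Two searchers split a shared list: 2k distinct services get checked."""
    return Fraction(min(2 * k, n), n)

def p_independent(n, k):
    """Each searcher independently checks k uniformly random services.
    A single random k-subset misses the movie's service with prob (n-k)/n."""
    miss = Fraction(n - k, n)
    return 1 - miss ** 2

n, k = 10, 3
print(p_coordinated(n, k))  # 3/5
print(p_independent(n, k))  # 51/100
```

Even with zero coordination, independent random picks recover most of the benefit here (0.51 vs 0.6), which matches the comment's point that elaborate protocols buy little when the list of services is short.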

**Summary**

This post considers “distributed decisions” and works through a few of their conceptual implications.

Roughly, distributed decisions are decisions that are not made by “one rational Bayesian agent”, but by a collective of nodes that together “share a goal”. Thus, this post is about the “width” of actions instead of the “depth” (the distance to “the thing impacted”).

Together, the nodes ideally form a “*distributed* Bayesian agent”, which by definition acts so as to rationally use the locally available information:

- Each node has the goal to maximize E[u(X)] over some shared outcome variable X and shared utility function u.
- Each node does that by trying to steer the probability distribution p_A(X, Y, Z, …) over world states by choosing the optimal action A.
- Side remark: the world state (X, Y, Z) can also contain variables encoding information from the past, like Y = “is the movie on Netflix?”.

- If one makes an observation (e.g., Y = y), then what’s steered is the conditional probability distribution p_A(X, Z, … | Y = y) instead, where Y = y is information that may only be accessible to *one* of the nodes. So this node acts locally rationally, even if other nodes maybe have more useful information.
- Sometimes, information is so useful that it is worth it to **invest** in passing a summary of that information around, to cause others to *make more overall useful decisions*.
  - This seems to suggest that *summaries* of local information are *a resource* in the sense of the measuring stick of utility problem.

Overall, this tentatively suggests that natural abstraction summaries are a “resource”. However, this would require showing that they are additive across decisions, which is an open problem in itself.
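The node behavior summarized above can be sketched as a toy model (the source names and prior are my own invention, not from the post): each node acts on its local information alone, so two nodes without communication duplicate work, and passing even a one-bit summary around changes the second node's locally rational choice.

```python
# Toy model: two nodes share a goal and a prior over where the movie is,
# but each sees only its own observations.
prior = {"netflix": 0.5, "friend_dvds": 0.3, "amazon": 0.2}

def best_source(beliefs, already_ruled_out):
    """Locally rational choice: check the most probable source not yet
    ruled out *according to this node's own information*."""
    remaining = {s: p for s, p in beliefs.items() if s not in already_ruled_out}
    return max(remaining, key=remaining.get)

# Without communication, both nodes act on the same prior and duplicate work:
node1 = best_source(prior, set())
node2 = best_source(prior, set())
print(node1, node2)  # both check Netflix

# If node 1 invested in passing its summary ("not on Netflix") along,
# node 2 would condition on it and check elsewhere:
node2_informed = best_source(prior, {"netflix"})
print(node2_informed)  # friend_dvds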

**Confusions/Questions:**

- The telephone theorem seems to say (according to the “optimization at a distance” post) that the low-dimensional abstract summary is “arbitrarily well-preserved” at a distance. If it is, then it doesn’t seem like a “resource”? The following are highly confused elaborations on this:
- I consider a “resource” something that can be “used up” and then isn’t there anymore. If it’s arbitrarily well-preserved, then it isn’t used up.
- On the other hand, I also disagree that what I intuitively consider “actions” are arbitrarily well-preserved: If I do an action, and this changes the world, then the action will only be accessible to “what’s changed”, but not further along. E.g., if I build a house, then sufficiently far in the future, people will not see anymore what actions built that house, they just see the house.
- Maybe the house could be considered an “even lower-dimensional summary”, but it seems to me that this is *also* not arbitrarily well-preserved. At some point, the house will be destroyed, eventually removing all traces of it ever having existed. **This suggests I should understand the telephone theorem better.**
- Rebuttal to my claim that resources should be used up: if I use money to buy something, then money is *used up for me*, but the person to whom I gave the money can use it for something else.
  - Fundamentally, resources were more like… “traded” in this case. How should I think about that?

- Okay, let’s assume that “summary information is the fundamental notion of a resource”. Does this explain all types of resources in our world?
- If I want a house, I need money, so to a first approximation, “money” seems like a relevant resource.
- But then the question is “how do I get money”?
- The relevant answer to that question may be pieces of low-dimensional information that tell me how I can best use the configuration of my physical body to make money.
- So in that sense, information *is* the more fundamental resource.
- Is this related to David Deutsch’s *The Beginning of Infinity*?
  - Wikipedia: “Knowledge here consists of information with a good explanatory function that has proven resistant to falsification. Any real process is physically possible to perform provided the knowledge to do so has been acquired.”

I currently share most of these confusions. It seems to me like "resources" have to be more than *just* well-conserved summaries; there should be some additional condition having to do with controllability/"ownership" and additivity.

Similarly, there are probably additional conditions for conserved summaries to be "actions" - none of these posts have really established a causal direction distinguishing "action" from "observation", for instance, which could be a hairy issue insofar as some summary dimensions are jointly determined.

And finally, there's the question of "conserved over what?". Summaries can be conserved over multiple length/time-scales, or over more abstract notions of "distance", and these posts haven't carefully stated the relevant conditions because I don't yet have a very precise idea of what the relevant conditions are.

Consider two prototypical “agents”: a human, and a company.

The human is relatively centralized and monolithic. As a rough approximation, every 100 ms or so observations flow into the brain from the eyes, ears, etc. This raw input data updates the brain’s world-model, and then decisions flow out, e.g. muscle movements. This is exactly the sort of “state-update model” which Against Time In Agent Models criticized: observations update one central internal state at each timestep, and all decisions are made based on that central state. It’s not even all that accurate a model for a human, but let’s set that aside for now and contrast it to a more obviously decentralized example.

In a company, knowledge and decisions are distributed. A cashier sees and hears customers in the store, and interacts with them in order to sell things. Meanwhile, a marketing editor tweaks some ad copy. Each mostly makes decisions based on their local information; most of that local information is not propagated to other decision-makers. Observations don’t update a single centralized state which informs all decisions. Instead, different decisions have different input information from different sources.

In Optimization at a Distance, I suggested a mental picture of agents kinda like this: it’s like a phased array - there’s lots of little actions distributed over space/time, all controlled in such a way that their influence can add up coherently and propagate over a long distance to optimize some far-away target.

Optimization at a Distance mainly emphasized the “height” of this picture, i.e. the distance between optimizer and target. This post is instead about the “width”: not only are the actions far from the optimization target, the actions themselves are also distributed in spacetime and potentially far apart from each other.

## Contrast: Bayesian Updates

Suppose I want to watch my favorite movie, 10 Things I Hate About You, in the evening. To make this happen, I do some optimization - I steer myself-in-the-evening and my-immediate-environment-in-the-evening into the relatively small set of states in which I’m watching the movie. Via the argument in Utility Maximization = Description Length Minimization, we should expect that I approximately act as though I’m a Bayesian reasoner maximizing some expected utility over myself-in-the-evening and my-immediate-environment-in-the-evening. (Note that it’s a utility function *over* myself-in-the-evening and my-immediate-environment-in-the-evening, not just any old random utility function; something like e.g. a rock would not be well-described by such a utility function.)

While arranging my evening, I may perform some Bayesian updates. Maybe I learn that the movie is not available on Netflix, so I ask a friend if they have a copy, then check Amazon when they don’t. This process is reasonably well-characterized as me having a centralized model of the places I might find the movie, and then Bayes-updating that model each time I learn another place where I can/can’t find it. (If I had checked Netflix, then asked my friend, then checked Netflix again because I forgot whether it was on Netflix, that would not be well-modeled as Bayesian updates.)
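The centralized picture can be sketched in a few lines (the prior below is my own toy choice, not from the post): one belief state over where the movie might be, Bayes-updated by zeroing out each ruled-out location and renormalizing.

```python
# A minimal sketch of the centralized model: a single belief state,
# updated as each place is ruled out. The prior is invented for illustration.
def update_not_found(beliefs, place):
    """Condition on 'the movie is not at `place`': zero it out, renormalize."""
    posterior = {k: (0.0 if k == place else v) for k, v in beliefs.items()}
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

beliefs = {"netflix": 0.5, "friend": 0.3, "amazon": 0.2}
beliefs = update_not_found(beliefs, "netflix")  # learn: not on Netflix
beliefs = update_not_found(beliefs, "friend")   # learn: friend doesn't have it
print(beliefs)  # all remaining probability mass is on "amazon"
```

Note that a single central state like this never rechecks Netflix: once its probability is conditioned to zero, it stays ruled out.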

By contrast, imagine that myself and some friends are arranging to watch 10 Things I Hate About You in the evening. I check to see if the movie is on Netflix, and at the same time my friend checks their parents’ pile of DVDs. My friend doesn’t find it in their parents’ DVD pile, and doesn’t know I already checked Netflix, so they *also* check Netflix. My friends and I, as a system, are *not* well-modeled as Bayesian updates to a single central knowledge-state; otherwise we wouldn’t check Netflix twice. And yet, it’s not obviously suboptimal (like me forgetting whether the movie is on Netflix would be). If there’s a lag in communication between us, it may just be faster and easier for us to both check Netflix independently, and then both check other sources independently if the movie isn’t there. We’re acting independently to optimize the same goal; our actions are chosen “locally” on the basis of whatever information is available, not necessarily based on a single unified knowledge-state.

So, we don’t really have “Bayesian updates” in the usual sense. And yet… we’re still steering the world into a relatively narrow set of states, the argument in

Utility Maximization = Description Length Minimization still applies just fine, and that argument is still an essentially Bayesian argument. It’s still using a Bayesian distribution - i.e. a distribution which is ultimately part of a model, not necessarily a fundamental feature of the territory. It’s still about maximizing expected utility under that distribution. My friends and I, as a system, are still well modeled as a “Bayesian agent” in some sense. Just… not a *monolithic* Bayesian agent. We’re a *distributed* Bayesian agent, one in which different parts have different information.

## Conditioning

Conditional probabilities do still enter the picture, just not as updates to a centralized world-state.

In the movie example, when I’m searching for the movie in various places, how do I steer the world into the state of us-watching-the-movie-in-the-evening? How do I maximize E[u(X)], jointly with my friends? Well, I act on the information I have, plus my priors about e.g. what information my friends will have and how they will act. If I have information Y (e.g. I know that the movie isn’t on Netflix, and know nothing else relevant other than priors) when making a particular decision, then I act to maximize E[u(X)|Y].
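Acting on E[u(X)|Y] can be sketched numerically (the joint distributions and utilities below are invented for illustration, not taken from the post): each candidate action is scored by summing utility over only the worlds consistent with the observation Y, and normalizing by P[Y] - the same constant for every action, since the action doesn't change the already-made observation - leaves the optimal action unchanged.

```python
# Toy sketch: choosing an action given the observation Y = "not on Netflix".
u = {"watch": 1.0, "no_watch": 0.0}

# p[a][(x, y)]: joint probability of outcome X and observation Y under
# action a. The action influences X but not the already-made observation Y.
p = {
    "ask_friend":   {("watch", "not_nf"): 0.42, ("no_watch", "not_nf"): 0.28,
                     ("watch", "nf"): 0.27, ("no_watch", "nf"): 0.03},
    "check_amazon": {("watch", "not_nf"): 0.35, ("no_watch", "not_nf"): 0.35,
                     ("watch", "nf"): 0.28, ("no_watch", "nf"): 0.02},
}

def unnormalized_score(action, y):
    """Sum u(x) over only the worlds consistent with the observation y."""
    return sum(pr * u[x] for (x, yy), pr in p[action].items() if yy == y)

def conditional_eu(action, y):
    """E[u(X) | Y = y]: the same sum, normalized by P[Y = y]."""
    p_y = sum(pr for (x, yy), pr in p[action].items() if yy == y)
    return unnormalized_score(action, y) / p_y

y = "not_nf"
best_raw = max(p, key=lambda a: unnormalized_score(a, y))
best_cond = max(p, key=lambda a: conditional_eu(a, y))
print(best_raw, best_cond)  # normalizing does not change the optimal action
```

With or without dividing by P[Y], the same action wins, which is why the post can treat normalization as "free".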

Why that particular mathematical form? Well, our shared optimization objective E[u(X)] is a sum over worlds (X,Y,…):

E[u(X)] = ∑_{X,Y,…} P[X,Y,…] u(X)

If I know that e.g. the movie is not on Netflix, then I know my current action won’t impact any of the worlds where the movie *is* on Netflix. So I can ignore those worlds while making the current decision, and just sum over all the worlds in which the movie is *not* on Netflix. My new sum is ∑_{X,…} P[X,Y,…] u(X), which becomes E[u(X)|Y] after normalizing the probabilities. (Normalizing doesn’t change the optimal action, so we can do that “for free”.) By ignoring all the worlds I’m not in (based on the input information to the current decision), and taking the expectation over the rest, I’m effectively maximizing expected utility conditional on the information I have when making the decision.

More generally:

each action is chosen to maximize expected utility conditional on whatever information is available as an input to that action (including priors about how the other actions will be taken). That’s the defining feature of a *distributed* Bayesian agent.

This post (and the more dense version here) spells out the mathematical argument in a bit more detail, starting from coherence rather than utility-maximization-as-description-length-minimization.

(Side note: some decision theory scenarios attempt to mess with the “current action won’t impact any of the other worlds” part, by making actions in one world impact other worlds. Something

FDT-like would fix that, but that’s out of scope for the current post.)

## Resources

The "Measuring Stick of Utility" Problem talks about how grounding the idea of “resources” in non-agenty concepts is a major barrier to using coherence theorems to e.g. identify agents in a given system. If we have distributed decisions, optimization at a distance, or both, *and* we expect that information at a distance is mediated by relatively low-dimensional summaries (i.e. the Telephone Theorem), then there’s an intuitively-natural way to recognize “resources” for purposes of coherence arguments.

Let’s go back to the example of a company, in which individual employees make many low-level decisions in parallel. The information relevant to each decision is mostly local - e.g. a cashier at a retail store in upstate New York does not need to know the details of station 13 on the company’s assembly line in Shenzhen. But there is some relevant information - for instance, if an extra 10 cents per item are spent at station 13 on the assembly line in Shenzhen, then the cashier needs to end up charging another ~10 cents per item to customers. Or, if the assembly line shuts down for a day and 10000 fewer items are produced, then the cashiers at all of the company’s stores need to end up selling 10000 fewer items.

So we have this picture where lots of different decisions are made mostly-locally, but with some relatively small summary information passed around between local decision makers. That summary consists mainly of a sum of “resources” gained/lost across each decision. In our example, the resources would be dollars spent/gained, and items created/sold.

The key here is that we have lots of local decisions, with relatively low-dimensional coupling between them. The summary-information through which the decisions couple is, roughly speaking, the “resources”. (In practice, there will probably also be lots of extra summary-information between localities which isn’t controllable via the actions, and therefore needn’t be treated as a resource - e.g. all the facts about concrete one could learn from the store’s walls which would carry over to the concrete in the factory’s walls.)
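That low-dimensional coupling can be sketched concretely (the class and numbers are my own illustration, not from the post): the only information that needs to flow between localities is an additive resource summary, here cents and item counts, while all other local detail stays local.

```python
# Illustrative sketch: localities couple only through an additive
# resource summary (cents and items), not through their full local state.
from dataclasses import dataclass

@dataclass
class ResourceSummary:
    cents: int   # dollars gained (+) or spent (-), in cents
    items: int   # items created (+) or sold (-)

    def __add__(self, other):
        return ResourceSummary(self.cents + other.cents,
                               self.items + other.items)

# Station 13 spends an extra 10 cents per item on 10,000 items; the cashiers
# sell those 10,000 items at $2.10 each. Only these totals need to be shared;
# the details of the assembly line and the store stay local.
factory = ResourceSummary(cents=-10 * 10_000, items=+10_000)
store = ResourceSummary(cents=+210 * 10_000, items=-10_000)

total = factory + store
print(total)  # net gain in cents; items conserved at zero
```

The additivity of the summary across decisions is exactly the property the coherence arguments later in the post require.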

Alternatively, rather than starting from distributed decisions, we could start from optimization at a distance. Because the optimization target is “far away” from the actions, only some relatively-low-dimensional summary of the actions impacts the target. Again, the components of that summary are, roughly speaking, the “resources”.

This picture fits in nicely with coherence theorems. The theorems talk about how a local decision maker needs to act in order to achieve pareto-optimal resource use, while still achieving local goals. For instance, the company’s marketing department should act-as-though it has a utility function over ads, otherwise it could run the same ads while spending pareto-fewer resources.

This picture also fits in nicely with natural abstractions. We have a large system with lots of parts “far away” from each other. The Telephone Theorem then says that they will indeed interact only via some relatively low-dimensional summary. In a decision framing, it says that only a relatively low-dimensional summary of the far-away decisions will be relevant to the local decision. Furthermore, we can in-principle derive that low-dimensional summary from the low-level physics of the world.

But this is still just an intuitive story. To make it rigorous, the Measuring Stick of Utility post argued that we need our resources to have two main properties:

Additivity across decisions, in particular, is the more restrictive condition mathematically. In order to identify natural abstraction summaries as “resources” for coherence purposes, those summaries need to be additive across all the local decisions.

… which is the main claim argued in Maxent and Abstractions: summaries of information relevant at a distance can indeed be represented as sums over local variables/decisions.