Tetraspace Grouping

Comments

Comparing LICDT and LIEDT

The statement of the law of logical causality is:

Law of Logical Causality: If conditioning on any event changes the probability an agent assigns to its own action, that event must be treated as causally downstream.

If I'm interpreting things correctly, this is just because anything that's upstream gets screened off, because the agent knows what action it's going to take.

You say that LICDT pays the blackmail in XOR blackmail because it follows this law of logical causality. Is this because, conditioned on the letter being sent, the agent assigns probability 0 to sending money if there is a disaster and probability 1 to sending money if there isn't, so the disaster must be treated as causally downstream of the decision to send money if the agent is to know whether or not it sends money?
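
To check my reading of the setup, here's a tiny enumeration of the XOR structure (just the logical skeleton, ignoring all the logical-induction machinery):

```python
# The letter is sent exactly when (disaster XOR the agent pays), so conditioning
# on the letter makes "disaster" and "pays" perfectly anti-correlated, whatever
# the underlying probabilities are.

from itertools import product

worlds = []
for disaster, pays in product([False, True], repeat=2):
    letter_sent = disaster != pays  # the blackmailer's XOR condition
    worlds.append((disaster, pays, letter_sent))

# Condition on the letter being sent:
received = [(d, p) for d, p, sent in worlds if sent]
print(received)  # [(False, True), (True, False)]
# Given the letter: disaster -> the agent doesn't pay; no disaster -> it pays.
# So learning "disaster" changes the probability the agent assigns to its own
# action, and the Law of Logical Causality forces "disaster" downstream.
```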

Smoking Lesion Steelman

I didn't find the conclusion about the smoke-lovers and non-smoke-lovers in the EDT case obvious at first glance, so I added in some numbers and ran through the calculations that the robots will do, to see for myself and get a better handle on what not being able to introspect on your utility function, while still gaining evidence about it, actually looks like.

Suppose that, out of the  robots that have ever been built,  are smoke-lovers and  are non-smoke-lovers. Suppose also the smoke-lovers end up smoking with probability  and non-smoke-lovers end up smoking with probability .

Then  robots smoke, and  robots don't smoke. So by Bayes' theorem, if a robot smokes, there is a   chance that it's killed, and if a robot doesn't smoke, there's a chance that it's killed.
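
As a sketch of that Bayes calculation (the numbers here are made-up placeholders, and I'm assuming the blade runner kills a robot exactly when it is a smoke-lover):

```python
frac_smoke_lovers = 0.5      # placeholder fraction of robots that are smoke-lovers
p_smoke_given_lover = 0.8    # placeholder P(smokes | smoke-lover)
p_smoke_given_nonlover = 0.1 # placeholder P(smokes | non-smoke-lover)
kill_prob = 1.0              # placeholder P(killed | smoke-lover)

# P(smokes), by the law of total probability
p_smoke = (frac_smoke_lovers * p_smoke_given_lover
           + (1 - frac_smoke_lovers) * p_smoke_given_nonlover)

# Bayes' theorem: how strongly smoking is evidence of being a smoke-lover
p_lover_given_smoke = frac_smoke_lovers * p_smoke_given_lover / p_smoke
p_lover_given_no_smoke = (frac_smoke_lovers * (1 - p_smoke_given_lover)
                          / (1 - p_smoke))

# Being killed depends only on being a smoke-lover, so:
p_killed_given_smoke = kill_prob * p_lover_given_smoke
p_killed_given_no_smoke = kill_prob * p_lover_given_no_smoke

print(p_killed_given_smoke, p_killed_given_no_smoke)
```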

Hence, the expected utilities are:

  • An EDT non-smoke-lover looks at the possibilities. It sees that if it smokes, it expects to get utilons, and that if it doesn't smoke, it expects to get  utilons.
  • An EDT smoke-lover looks at the possibilities. It sees that if it smokes, it expects to get  utilons, and if it doesn't smoke, it expects to get  utilons.

Now consider some equilibria. Suppose that no non-smoke-lovers smoke, but some smoke-lovers smoke. So  and . So (taking limits as  along the way):

  • non-smoke-lovers expect to get  utilons if they smoke, and  utilons if they don't smoke.  so non-smoke-lovers will choose not to smoke.
  • smoke-lovers expect to get  utilons if they smoke, and  utilons if they don't smoke. Smoke-lovers would be indifferent between the two if . This works fine if at least 90% of robots are smoke-lovers, and equilibrium is achieved. But if fewer than 90% of robots are smoke-lovers, then there is no point at which they would be indifferent, and they will always choose not to smoke.

But wait! This is fine if more than 90% are smoke-lovers, but if fewer than 90% are smoke-lovers, then they would always choose not to smoke, which is inconsistent with the assumption that  is much larger than . So instead suppose that  is only a little bit bigger than , say that . Then:

  • non-smoke-lovers expect to get  utilons if they smoke, and  utilons if they don't smoke. They will choose to smoke if , i.e. if smoke-lovers smoke so rarely that not smoking would make them believe they're a smoke-lover about to be killed by the blade runner.
  • smoke-lovers expect to get   utilons if they smoke, and  utilons if they don't smoke. They are indifferent between these two when . This means that, when  is at the equilibrium point, non-smoke-lovers will not choose to smoke when fewer than 90% of robots are smoke-lovers, which is exactly when this regime applies.

I wrote a quick Python simulation to check these conclusions, and it found the same equilibrium behaviour in both regimes.
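
A rough sketch of the kind of check I mean (not the original script; the payoffs of +10 for a smoke-lover smoking and -100 for being killed are stand-ins, and I again assume the blade runner always kills smoke-lovers and nobody else):

```python
import numpy as np

U_SMOKE = 10.0     # stand-in utility a smoke-lover gets from smoking
U_KILLED = -100.0  # stand-in utility of being killed
q = 0.95           # stand-in fraction of robots that are smoke-lovers

def p_killed(smokes, s, n):
    """P(killed | observed smoking behaviour) for an EDT robot, given the
    population smoking rates s (smoke-lovers) and n (non-smoke-lovers)."""
    p_obs_lover = s if smokes else 1 - s
    p_obs_nonlover = n if smokes else 1 - n
    evidence = q * p_obs_lover + (1 - q) * p_obs_nonlover
    return q * p_obs_lover / evidence  # killed iff smoke-lover

# Regime where non-smoke-lovers don't smoke: scan the smoke-lovers' smoking
# rate s for the point where an EDT smoke-lover is indifferent.
n = 1e-9
s_grid = np.linspace(0.01, 0.99, 99)
gaps = [abs((U_SMOKE + U_KILLED * p_killed(True, s, n))
            - U_KILLED * p_killed(False, s, n)) for s in s_grid]
print("smoke-lovers closest to indifference at s =",
      round(s_grid[int(np.argmin(gaps))], 2))
```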

Reductive Reference

Your reliable thermometer doesn't need to be well-calibrated - it only has to show the same value whenever it's used to measure boiling water, regardless of what that value is. So the dependence isn't quite so circular, thankfully.

Tetraspace Grouping's Shortform

So the definition of myopia given in Defining Myopia was quite similar to my expansion in the But Wait There's More section; you can roughly match them up by saying and , where is a real number corresponding to the amount that the agent cares about rewards obtained in episode and is the reward obtained in episode . Putting both of these into the sum gives , the undiscounted, non-myopic reward that the agent eventually obtains.

In terms of the definition that I give in the uncertainty framing, this is , and .

So if you let be a vector of the reward obtained on each step and be a vector of how much the agent cares about each step then , and thus the change to the overall reward is , which can be negative if the two sums have different signs.
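
A guess at the decomposition I mean, using my own placeholder symbols: write $r$ for the vector of per-step rewards and $c$ for the vector of caring weights, so the overall reward is $R = c \cdot r = \sum_i c_i r_i$. A simultaneous change $(\Delta c, \Delta r)$ then changes the overall reward by roughly

$$\Delta R \approx \sum_i (\Delta c_i)\, r_i + \sum_i c_i\, (\Delta r_i),$$

which can indeed be negative when the two sums have different signs and the negative one dominates.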

I was hoping that a point would reveal itself to me about now but I'll have to get back to you on that one.

Tetraspace Grouping's Shortform

Thoughts on Abram Demski's Partial Agency:

When I read Partial Agency, I was struck with a desire to try formalizing this partial agency thing. Defining Myopia seems like it might have a definition of myopia; one day I might look at it. Anyway,

Formalization of Partial Agency: Try One

A myopic agent is optimizing a reward function where is the vector of parameters it's thinking about and is the vector of parameters it isn't thinking about. The gradient descent step picks the in the direction that maximizes (it is myopic so it can't consider the effects on ), and then moves the agent to the point .

This is dual to a stop-gradient agent, which picks the in the direction that maximizes but then moves the agent to the point (the gradient through is stopped).
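
A sketch of how I'm picturing the myopic gradient step, in code (the names x, y, and f here are my own placeholders, and the stop_gradient stands in for "can't consider the effects on the other parameters"):

```python
import jax
import jax.numpy as jnp

def y_of_x(x):
    # How the "not thought about" parameters respond to the agent's parameters.
    return jnp.sin(x)

def f(x, y):
    # Reward as a function of both parameter blocks.
    return -(x - 1.0) ** 2 + x * y

# Myopic step: take the gradient of f treating y as fixed (no credit for how
# x changes y), even though the world still moves y to y_of_x(x) afterwards.
myopic_grad = jax.grad(lambda x: f(x, jax.lax.stop_gradient(y_of_x(x))))

# Non-myopic step, for comparison: the gradient flows through y as well.
full_grad = jax.grad(lambda x: f(x, y_of_x(x)))

x0 = 0.5
print(myopic_grad(x0), full_grad(x0))
```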

For example,

  • Nash equilibria - are the parameters defining the agent's behavior. are the parameters of the other agents if they go up against the agent parametrized by . is the reward given for an agent going up against a set of agents .
  • Image recognition with a neural network - is the parameters defining the network, are the image classifications for every image in the dataset for the network with parameters , and is the loss function plus the loss of the network described by on classifying the current training example.
  • Episodic agent - are parameters describing the agent's behavior. are the performances of the agent in future episodes. is the sum of , plus the reward obtained in the current episode.

Partial Agency due to Uncertainty?

Is it possible to cast partial agency in terms of uncertainty over reward functions? One reason I'd be myopic is if I didn't believe that I could, in expectation, improve some part of the reward, perhaps because it's intractable to calculate (behavior of other agents) or something I'm not programmed to care about (reward in other episodes).

Let be drawn from a probability distribution over reward functions. Then one could decompose the true, uncertain, reward into defined in such a way that for any ? Then this would be myopia where the agent either doesn't know or doesn't care about , or at least doesn't know or care what its output does to . This seems sufficient, but not necessary.

Now I have two things that might describe myopia, so let's use both of them at once! Since you only end up doing gradient descent on , it would make sense to say , , and hence that .

Since for small , this means that , so substituting in my expression for gives , so . Uncertainty is only over , so this is just the claim that the agent will be myopic with respect to if . So it won't want to include in its gradient calculation if it thinks the gradients with respect to are, on average, 0. Well, at least I didn't derive something obviously false!
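
In symbols (my own placeholders again), the condition I think this lands on: writing $R(x,y)$ for the uncertain reward and $p$ for the distribution it is drawn from, the agent can be myopic with respect to $y$ whenever

$$\mathbb{E}_{R \sim p}\!\left[\nabla_y R(x, y)\right] = 0,$$

i.e. whenever it expects the $y$-gradients to vanish on average.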

But Wait There's More

When writing the examples for the gradient descenty formalisation, something struck me: it seems there's a structure to a lot of them, where is the reward on the current episode, and are rewards obtained on future episodes.

You could maybe even use this to have soft episode boundaries, like say the agent receives a reward on each timestep so , and saying that so that for , which is basically the criterion for myopia up above.
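
One concrete version of this (my own guess at the construction): let the caring weight on timestep $t$, as seen from timestep $k$, fall off smoothly, say

$$c_{k,t} = \gamma^{\,t-k} \quad \text{for } t \ge k,$$

so that rewards far in the future contribute almost nothing to $\sum_t c_{k,t} r_t$ and the agent is approximately myopic beyond a horizon of roughly $1/(1-\gamma)$ timesteps.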

Unrelated Note

On a completely unrelated note, I read the Parable of Predict-O-Matic in the past, but foolishly neglected to read Partial Agency beforehand. The only thing that I took away from PoPOM the first time around was the bit about inner optimisers, coincidentally the only concept introduced that I had been thinking about beforehand. I should have read the manga before I watched the anime.

Open & Welcome Thread—May 2020

The Whole City is Center:

This story had a pretty big impact on me and made me try to generate examples of things that could happen such that I would really want the perpetrators to suffer, even more than consequentialism demanded. I may have turned some very nasty and imaginative parts of my brain, the ones that wrote the Broadcast interlude in Unsong, to imagining crimes perfectly calculated to enrage me. And in the end I did it. I broke my brain to the point where I can very much imagine certain things that would happen and make me want the perpetrator to suffer – not infinitely, but not zero either.

A game designed to beat AI?

The AI Box game, in contrast with the thing it's a metaphor for, is a two-player game played over text chat by two humans, where the goal is for Player A to persuade Player B to let them win (traditionally by getting them to say "I let you out of the box") within a time limit.

Tetraspace Grouping's Shortform

Thoughts on Dylan Hadfield-Menell et al.'s The Off-Switch Game.

  • I don't think it's quite right to call this an off-switch - the model is fully general to the situation where the AI is choosing between two alternatives A and B (normalized in the paper so that U(B) = 0), and to me an off-switch is a hardware override that works whether or not the AI wants you to press it.
  • The wisdom to take away from the paper: An AI will voluntarily defer to a human - in the sense that the AI thinks that it can get a better outcome by its own standards if it does what the human says - if it's uncertain about the utilities, or if the human is rational. (A rough numerical sketch of this trade-off is below this list.)
  • This whole setup seems to be somewhat superseded by CIRL, which has the AI, uh, causally find by learning its value from the human actions, instead of evidentially(?) doing it by taking decisions that happen to land it on action A when is high because it's acting in a weird environment where a human is present as a side-constraint.
    • Could some wisdom to gain be that the high-variance high-human-rationality is something of an explanation as to why CIRL works? I should read more about CIRL to see if this is needed or helpful and to compare and contrast etc.
  • Why does the reward gained drop when uncertainty is too high? Because the prior that the AI gets from estimating the human reward is more accurate than the human decisions, so in too-high-uncertainty situations it keeps mistakenly deferring to the flawed human who tells it to take the worse action more often?
    • The verbal description, that the human just types in a noisily sampled value of , is somewhat strange - if the human has explicit access to their own utility function, they can just take the best actions directly! In practice, though, the AI would learn this by looking at many past human actions (there's some CIRL!) which does seem like it plausibly gives a more accurate policy than the human's (ht Should Robots Be Obedient).
    • The human is Boltzmann-rational in the two-action situation (hence the sigmoid). I assume that it's the same for the multi-action situation, though this isn't stated. How much does the exact way in which the human is irrational matter for their results?
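
Here's the rough numerical sketch promised above - my own construction with made-up parameters, where mu and sigma describe the robot's belief about U(A) (with U(B) = 0) and beta is the human's decision noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def values(mu, sigma, beta, n=200_000):
    U = rng.normal(mu, sigma, n)                # samples of the uncertain U(A)
    act = max(mu, 0.0)                          # act directly on the prior mean
    defer_rational = np.maximum(U, 0.0).mean()  # rational human allows A iff U > 0
    p_allow = 1.0 / (1.0 + np.exp(-U / beta))   # Boltzmann-rational human
    defer_noisy = (U * p_allow).mean()
    return act, defer_rational, defer_noisy

# Deferring to a rational human is never worse than acting directly...
print(values(mu=0.5, sigma=1.0, beta=0.1))
# ...but deferring to a noisy human can be, when the robot is already confident.
print(values(mu=1.0, sigma=0.1, beta=5.0))
```
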
Tetraspace Grouping's Shortform

PMarket Maker

Just under a month ago, I said "web app idea: one where you can set up a play-money prediction market with only a few clicks", because I was playing around on Hypermind and wishing that I could do my own Hypermind. It then occurred to me that I can make web apps, so after getting up to date on modern web frameworks I embarked on creating such a site.

Anyway, it's now complete enough to use, provided that you don't blow on it too hard. Here it is: pmarket-maker.herokuapp.com. Enjoy!

You can create a market, and then create a set of options within that market. Players can make buy and sell limit orders on those options. You can close an option and pay out a specific amount per owned share. There are no market makers, despite the pun in the name, but players start with 1000 internet points that they can use to shortsell.
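
For concreteness, the order logic is roughly this (a minimal sketch, not the actual site code; matching at the resting ask price is just one simple convention):

```python
from dataclasses import dataclass, field

@dataclass
class Book:
    buys: list = field(default_factory=list)   # resting (price, player) bids
    sells: list = field(default_factory=list)  # resting (price, player) asks

    def add(self, side, price, player):
        (self.buys if side == "buy" else self.sells).append((price, player))
        self.match()

    def match(self):
        self.buys.sort(reverse=True)  # best (highest) bid first
        self.sells.sort()             # best (lowest) ask first
        while self.buys and self.sells and self.buys[0][0] >= self.sells[0][0]:
            (bid, buyer), (ask, seller) = self.buys.pop(0), self.sells.pop(0)
            print(f"{buyer} buys 1 share from {seller} at {ask}")

book = Book()
book.add("buy", 60, "alice")
book.add("sell", 55, "bob")  # crosses the spread, so it trades at 55
```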

Tetraspace Grouping's Shortform

Thoughts on Ryan Carey's Incorrigibility in the CIRL Framework (I am going to try to post these semi-regularly).

  • This specific situation looks unrealistic. But it's not really trying to be too realistic, it's trying to be a counterexample. In that spirit, you could also just use , which is a reward function parametrized by that gives the same behavior but stops me from saying "Why Not Just set ", which isn't the point.
    • How something like this might actually happen: you try to have your be a complicated neural network that can approximate any function. But you butcher the implementation and get something basically random instead, and this cannot approximate the real human reward.
  • An important insight this highlights well: An off-switch is something that you press only when you've programmed the AI badly enough that you need to press the off-switch. But if you've programmed it wrong, you don't know what it's going to do, including, possibly, its off-switch behavior. Make sure you know under which assumptions your off-switch will still work!
  • Assigning high value to shutting down is incorrigible, because the AI shuts itself down. What about assigning high value to being in a button state?
  • The paper considers a situation where the shutdown button is hardcoded, which isn't enough by itself. What's really happening is that the human either wants or doesn't want the AI to shut down, which sounds like a term in the human reward that the AI can learn.
    • One way to do this is for the AI to do maximum likelihood with a prior that assigns 0 probability to the human erroneously giving the shutdown command. I suspect there's something less hacky related to setting an appropriate prior over the reward assigned to shutting down. (A toy version of this zero-error-prior idea is sketched below this list.)
  • The footnote on page 7 confuses me a bit - don't you want the AI to always defer to the human in button states? The answer feels like it will be clearer to me if I look into how "expected reward if the button state isn't avoided" is calculated.
    • Also I did just jump into this paper. There are probably lots of interesting things that people have said about MDPs and CIRLs and Q-values that would be useful.
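
Here's the toy version referred to above - my own construction, Bayesian rather than maximum-likelihood and not from the paper, just to show why a zero prior on erroneous shutdown commands makes the AI treat any observed command as genuine:

```python
def posterior_wants_shutdown(prior_wants, p_command_given_wants,
                             p_command_given_not_wants):
    """P(human wants shutdown | shutdown command observed), by Bayes."""
    evidence = (prior_wants * p_command_given_wants
                + (1 - prior_wants) * p_command_given_not_wants)
    return prior_wants * p_command_given_wants / evidence

# Model says the human never gives the command in error, so observing the
# command makes "the human wants me off" certain, however low the prior was.
print(posterior_wants_shutdown(0.01, 0.99, 0.0))  # -> 1.0
```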