Vivek Hebbar

Comments

As I understand Vivek's framework, human value shards explain away the need to posit alignment to an idealized utility function. A person is not a bunch of crude-sounding subshards (e.g. "If food nearby and hunger>15, then be more likely to go to food") and then also a sophisticated utility function (e.g. something like CEV). It's shards all the way down, and all the way up.[10] 

This read to me like you were saying "In Vivek's framework, value shards explain away ..." and I was confused.  I now think you mean "My take on Vivek's framework is that value shards explain away ...".  Maybe reword for clarity?

(Might have a substantive reply later)

Makes perfect sense, thanks!

"Well, what if I take the variables that I'm given in a Pearlian problem and I just forget that structure? I can just take the product of all of these variables that I'm given, and consider the space of all partitions on that product of variables that I'm given; and each one of those partitions will be its own variable.

How can a partition be a variable?  Should it be "part" instead?
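My best guess at the intended reading (toy example mine): each cell of a partition acts as one possible value, so the partition as a whole behaves like a variable.

```latex
% Given binary variables X and Y, the product space has four outcomes.
% The partition below groups them by the value of X \oplus Y, so it carries
% exactly the same information as a new binary variable Z = X \oplus Y.
X \times Y = \{(0,0),(0,1),(1,0),(1,1)\},
\qquad
\Pi = \big\{\{(0,0),(1,1)\},\ \{(0,1),(1,0)\}\big\}
\;\longleftrightarrow\;
Z = X \oplus Y .
```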

ETA: Koen recommends reading Counterfactual Planning in AGI Systems before (or instead of) Corrigibility with Utility Preservation.

Update: I started reading your paper "Corrigibility with Utility Preservation".[1]  My guess is that readers strapped for time should read {abstract, section 2, section 4} then skip to section 6.  AFAICT, section 5 is just setting up the standard utility-maximization framework and defining "superintelligent" as "optimal utility maximizer".

Quick thoughts after reading less than half:

AFAICT,[2] this is a mathematical solution to corrigibility in a toy problem, and not a solution to corrigibility in real systems.  Nonetheless, it's a big deal if you have in fact solved the utility-function-land version which MIRI failed to solve.[3]  Looking to applicability, it may be helpful for you to spell out the ML analog to your solution (or point us to the relevant section in the paper if it exists).  In my view, the hard part of the alignment problem is deeply tied up with the complexities of the {training procedure --> model} map, and a nice theoretical utility function is neither sufficient nor strictly necessary for alignment (though it could still be useful).

So looking at your claim that "the technical problem [is] mostly solved", this may or may not be true for the narrow sense (like "corrigibility as a theoretical outer-objective problem in formally-specified environments"), but seems false and misleading for the broader practical sense ("knowing how to make an AGI corrigible in real life").[4]

Less important, but I wonder if the authors of Soares et al agree with your remark in this excerpt[5]:

"In particular, [Soares et al] uses a Platonic agent model [where the physics of the universe cannot modify the agent's decision procedure]  to study a design for a corrigible agent, and concludes that the design considered does not meet the desiderata, because the agent shows no incentive to preserve its shutdown behavior. Part of this conclusion is due to the use of a Platonic agent model." 

  1. ^

    Btw, your writing is admirably concrete and clear.

    Errata:  Subscripts seem to be broken on page 9, which significantly hurts readability of the equations.  Also there is a double typo "I this paper, we the running example of a toy universe" on page 4.

  2. ^

    Assuming the idea is correct

  3. ^

    Do you have an account of why MIRI's supposed impossibility results (I think these exist?) are false?

  4. ^

    I'm not necessarily accusing you of any error (if the contest is fixated on the utility function version), but it was misleading to me as someone who read your comment but not the contest details.

  5. ^

    Portions in [brackets] are insertions/replacements by me

To be more specific about the technical problem being mostly solved: there are a bunch of papers outlining corrigibility methods that are backed up by actual mathematical correctness proofs

Can you link these papers here?  No need to write anything, just links.

  1. Try to improve my evaluation process so that I can afford to do wider searches without taking excessive risk.

Improve it with respect to what?  

My attempt at a framework where "improving one's own evaluator" and "believing in adversarial examples to one's own evaluator" make sense:

  • The agent's allegiance is to some idealized utility function $V$ (like CEV).  The agent's internal evaluator Eval is "trying" to approximate $V$ by reasoning heuristically.  So now we ask Eval to evaluate the plan "do argmax w.r.t. Eval over a bunch of plans".  Eval reasons that, due to the way that Eval works, there should exist "adversarial examples" that score very highly on Eval but low on $V$.  Hence, Eval concludes that $V(\text{plan})$ is low, where plan = "do argmax w.r.t. Eval".  So the agent doesn't execute the plan "search widely and argmax".  (Toy sketch of this below the list.)
  • "Improving Eval" makes sense because Eval will gladly replace itself with Eval′ if it believes that Eval′ is a better approximation for $V$ (and hence replacing itself will cause the outcome to score better on $V$)
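A runnable toy sketch of the first bullet (entirely my own illustration; the numbers and the exploit-penalty heuristic are made up):

```python
# Toy model (mine, not from the post): Eval is a heuristic stand-in for an
# idealized utility V.  When asked to score the *meta-plan* "argmax Eval over
# a very wide search", Eval penalizes it, because it predicts that wide
# searches surface adversarial examples that score high on Eval but low on V.

def eval_plan(plan):
    """The agent's internal evaluator (heuristic approximation of V)."""
    if plan["kind"] == "object_level":
        return plan["estimated_value"]
    if plan["kind"] == "wide_argmax":
        # Eval's self-model: the wider the search, the more Eval-exploits it
        # expects the argmax to find, so the lower the expected true value.
        expected_exploit_penalty = 0.01 * plan["search_width"]
        return plan["baseline_value"] - expected_exploit_penalty
    raise ValueError(f"unknown plan kind: {plan['kind']}")

candidate_plans = [
    {"kind": "object_level", "estimated_value": 5.0},
    {"kind": "wide_argmax", "baseline_value": 6.0, "search_width": 1000},
]

best = max(candidate_plans, key=eval_plan)
print(best["kind"])  # -> "object_level": the agent declines the wide search
```

The second bullet then corresponds to Eval endorsing a swap to any Eval′ for which it expects this penalty (the gap from $V$) to be smaller.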

Are there other distinct frameworks which make sense here?  I look forward to seeing what design Alex proposes for "value child".

Yeah, the right column should obviously be all 20s.  There must be a bug in my code[1] :/

I like to think of the argmax function as something that takes in a distribution on probability distributions on $W$ with different sigma algebras, and outputs a partial probability distribution that is defined on the set of all events that are in the sigma algebra of (and given positive probability by) one of the components.
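In symbols (my notation): if the components are $(P_i, \sigma_i)$, the output is defined exactly on the events

```latex
\mathcal{D} \;=\; \bigcup_i \,\{\, E \in \sigma_i \;:\; P_i(E) > 0 \,\}.
```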

Take the following hypothesis 3:

If I add this into the mix with some small weight, then the middle column is still nearly zero.  But I can now ask for the probability of the event in hypothesis 3 corresponding to the center square, and I get back an answer very close to zero.  Where did this confidence come from?

I guess I'm basically wondering what this procedure is aspiring to be.  Some candidates I have in mind:

  1. Extension to the coarse case of regular hypothesis mixing (where we go from $P(w)$ and $Q(w)$ to $q_1 P(w) + q_2 Q(w)$)
  2. Extension of some kind of Bayesian update-flavored thing where we go to $P(w)Q(w)$ then renormalize
    1. ETA: a weighted geometric version $P(w)^{q_1} Q(w)^{q_2}$ (renormalized) seems more plausible than plain $P(w)Q(w)$; candidates 1 and 2 are written out explicitly after this list
  3. Some kind of "aggregation of experts who we trust a lot unless they contradict each other", which isn't cleanly analogous to either of the above
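Writing those out (my notation; $q_1 + q_2 = 1$ are the hypothesis weights):

```latex
\text{(1) arithmetic mixture:}\quad P_{\mathrm{mix}}(w) \;=\; q_1\,P(w) + q_2\,Q(w) \\[4pt]
\text{(2) product, renormalized:}\quad P_{\mathrm{prod}}(w) \;=\; \frac{P(w)\,Q(w)}{\sum_{w'} P(w')\,Q(w')} \\[4pt]
\text{(2, ETA) weighted geometric:}\quad P_{\mathrm{geo}}(w) \;=\; \frac{P(w)^{q_1}\,Q(w)^{q_2}}{\sum_{w'} P(w')^{q_1}\,Q(w')^{q_2}}
```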

Even in case 3, the near-zeros are really weird.  The only cases I can think of where it makes sense are things like "The events are outcomes of a quantum process.  Physics technique 1 creates hypothesis 1, and technique 2 creates hypothesis 2.  Both techniques are very accurate, and the uncertainty they express is due to fundamental unknowability.  Since we know both tables are correct, we can confidently rule out the middle column, and thus rule out certain events in hypothesis 3."

But more typically, the uncertainty is in the maps of the respective hypotheses, not in the territory, in which case the middle zeros seem unfounded.  And to be clear, the reason it seems like a real issue[2] is that when you add in hypothesis 3 you have events in the middle which you can query, but the values can stay arbitrarily close to zero if you add in hypothesis 3 with low weight.

  1. ^

    ETA: Found the bug, it was fixable by substituting a single character

  2. ^

    Rather than "if a zero falls in the forest and no hypothesis is around to hear it, does it really make a sound?"

Now, let's consider the following modification: Each hypothesis is no longer a distribution on $W$, but instead a distribution on some coarser partition of $W$. Now [the merge operation] is still well defined

Playing around with this a bit, I notice a curious effect (ETA: the numbers here were previously wrong, fixed now):

The reason the middle column goes to zero is that hypothesis A puts 60% on the rightmost column, and hypothesis B puts 40% on the leftmost, and neither cares about the middle column specifically.
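Concretely, if the merge has to respect both coarse constraints exactly, the middle column gets squeezed out by simple accounting:

```latex
P(\text{right column}) = 0.6, \qquad P(\text{left column}) = 0.4
\;\;\Longrightarrow\;\;
P(\text{middle column}) \;=\; 1 - 0.6 - 0.4 \;=\; 0 .
```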

But philosophically, what does the merge operation represent, which causes this to make sense?  (Maybe your reply is just "wait for the next post")

most egregores/epistemic networks, which I'm completely reliant upon, are much smarter than me, so that can't be right

*Egregore smiles*

Another way of looking at this question:  Arithmetic rationality is shift invariant, so you don't have to know your total balance to calculate expected values of bets.  Whereas for geometric rationality, you need to know where the zero point is, since it's not shift invariant.
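A concrete illustration (my numbers): take a 50/50 bet paying 1 or 3.

```latex
\mathbb{E}[X] \;=\; \tfrac{1}{2}(1+3) \;=\; 2, \qquad G[X] \;=\; \sqrt{1 \cdot 3} \;\approx\; 1.73 .
```

Shifting everything by $+10$ shifts the arithmetic expectation by exactly $10$, but not the geometric one:

```latex
\mathbb{E}[X+10] \;=\; 12 \;=\; \mathbb{E}[X] + 10, \qquad G[X+10] \;=\; \sqrt{11 \cdot 13} \;\approx\; 11.96, \;\text{ whereas }\; G[X] + 10 \;\approx\; 11.73 .
```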
