midco

trying my best

Comments

midco

This comment proposes a strategy for (2) that I'll call Active Matrix Completion (AMC), in which entries of A can be queried at some cost, subject to a budget. In our case, each query takes O(n) time, so we can afford O(m) total queries.

Some notes:

  • O(n) seems really fast - for instance, evaluating a bilinear form x^T A x takes O(n^2). This makes me suspect any solution to (2) will rely on some form of AMC.
  • AMC exists in the literature - this paper gives a nice introduction and (I think) proves that O(n*rank(A)) queries suffice.
  • One strategy that seems intuitively appealing is querying enough elements to guarantee that the resulting adjacency graph is chordal, which (I think) allows for efficient PSD completion (see Vandenberghe; algorithm 10.2). Thus, the following proposition is stronger than (2):

PROPOSITION: given a graph G = (V, E) with n nodes and m edges, can we efficiently (in O(mn) time) determine a small (at most Cm edges, for a universal constant C) chordal completion of G?

The proposition seems "too good to be true", especially since minimum chordal completion (minimum fill-in) is a hard computational problem (NP-complete, with no great approximations known), so I'm most interested in easy counterexamples. For instance, a "small" graph whose chordal completions are all "large" would refute the proposition. (A quick empirical probe of the proposition is sketched below.)
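As a sanity check (my addition, not part of the original question): the snippet below uses networkx's complete_to_chordal_graph, which implements the MCS-M heuristic and returns a minimal (not minimum) chordal completion, to see how many fill edges get added on sparse connected graphs with m = 3n edges. It can only refute the proposition empirically, never confirm it, and the graph family is just an illustrative choice.

```python
# Empirical probe: fill-in added by a heuristic (MCS-M) chordal completion
# on sparse connected graphs with m = 3n edges. Illustrative only.
import networkx as nx

def fill_ratio(n, seed=0):
    # connected Watts-Strogatz graph: every node has degree 6, so m = 3n
    G = nx.connected_watts_strogatz_graph(n, k=6, p=0.3, seed=seed)
    H, _order = nx.complete_to_chordal_graph(G)  # minimal chordal completion
    added = H.number_of_edges() - G.number_of_edges()
    return added / G.number_of_edges()

for n in (50, 100, 200):
    print(n, round(fill_ratio(n), 2))
```

If the fill ratio grows with n on some family of sparse graphs, that family is a candidate counterexample; if it stays bounded, the proposition merely survives the test.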

So, uh, any graph theory experts around?

midco

I think logarithmic dependence on epsilon will be difficult, especially for problem one. To demonstrate this, consider the problem of approximating the maximum eigenvalue of a sparse (let's say m = O(n)) PSD matrix with trace at most n. The obvious algorithms to try aren't fast enough:

  1. power iteration / the Lanczos algorithm compute A^k v in time O(mk) using k sparse matrix-vector multiplies, with precision improving only polynomially in k. Thus, such methods exhibit the undesirable poly(1/epsilon) dependence.
  2. repeated squaring computes A^{2^k} in time O(k n^ω) (ω the exponent of dense matrix multiplication), with precision improving exponentially in k. Though only logarithmically many squarings are needed, the dense matrix multiplication steps render the algorithm too slow.

Intuitively, power iteration-style methods exploit sparsity by taking matrix-vector products exclusively, which yields slow convergence (polynomial in 1/epsilon); a minimal sketch is below.
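To make the cost/precision trade-off concrete, here's a minimal power-iteration sketch in the sparse-matvec model (the function name and the toy PSD matrix are mine, purely illustrative):

```python
# Sketch: estimate lambda_max of a sparse PSD matrix with k sparse matvecs.
# Total work is O(mk); precision improves only polynomially in k, so high
# accuracy (small epsilon) forces many iterations.
import numpy as np
import scipy.sparse as sp

def lambda_max_power_iteration(A, k, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(k):
        w = A @ v                 # one sparse matvec, O(m)
        v = w / np.linalg.norm(w)
    return float(v @ (A @ v))     # Rayleigh quotient estimate

# toy PSD example: A = B B^T for a sparse random B
B = sp.random(2000, 2000, density=0.002, random_state=0)
A = (B @ B.T).tocsr()
print(lambda_max_power_iteration(A, k=200))
```

Lanczos (e.g. scipy.sparse.linalg.eigsh) is the smarter version of the same idea, but it lives in the same matvec model and hits the same polynomial-in-1/epsilon wall in the absence of an eigenvalue gap.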

To argue that problem one is difficult, I claim that one natural candidate completion is the all-zero completion, resulting in a sparse matrix. This suggests (but does not prove!) that the problem of determining existence of PSD completions requires determining whether the resulting sparse completion is PSD - equivalently, whether its minimum eigenvalue is nonnegative. The reduction to the max eigenvalue problem then follows by taking spectral shifts (replacing A with cI - A for sufficiently large c > 0).
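For concreteness, the shift identity being used (standard linear algebra, spelled out rather than assumed): for any scalar $c$,
$$\lambda_{\max}(cI - A) = c - \lambda_{\min}(A), \qquad\text{so}\qquad \lambda_{\min}(A) \ge 0 \iff \lambda_{\max}(cI - A) \le c,$$
and choosing $c \ge \lambda_{\max}(A)$ (e.g. a Gershgorin row-sum bound) makes $cI - A$ itself PSD, so the max-eigenvalue problem above applies to it.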

It's unclear to me whether problem two has a similar dependence, but given the similarity of the problems I'm skeptical that logarithmic dependence on epsilon is possible. If anyone has ideas for a similar reduction, I'd love to discuss!

(comment based on work co-authored with @Somsubhro Bagchi)

Note: Som and I have discussed a similar objection with @paulfchristiano, who seemed OK with solutions whose convergence is polynomial in epsilon but which still run in O(mn) time in the "typical case" (e.g., for problem 1, with enough precision to determine (non)existence of PSD completions). After constraining all diagonal elements of A to be 1, we're somewhat optimistic that such weaker convergence is attainable.

midco

Better: the problem can be phrased as a feasibility problem in semidefinite programming with m linear constraints; I have no idea whether there are (randomized?) algorithms that run fast enough (a formulation sketch is below).
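For concreteness, a minimal sketch of that feasibility formulation using cvxpy (generic SDP solving, so nowhere near the O(mn) target; the helper name and the `known` dict format are mine):

```python
# Sketch: does a PSD completion exist? Phrase it as SDP feasibility with
# one linear (equality) constraint per known entry.
import cvxpy as cp

def psd_completion_feasible(n, known):
    """known: dict mapping (i, j) -> observed value of the n x n matrix."""
    X = cp.Variable((n, n), PSD=True)               # symmetric PSD variable
    constraints = [X[i, j] == v for (i, j), v in known.items()]
    prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# toy usage: a 3x3 matrix with known diagonal and one off-diagonal entry
print(psd_completion_feasible(3, {(0, 0): 1.0, (1, 1): 1.0, (2, 2): 1.0, (0, 1): 0.5}))
```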

[This comment is no longer endorsed by its author]
midco

(possibly trivial) observation: the generalization replacing "entries" with "linear constraints" seems almost as easy? I'd strongly suspect impossibility of the former implies impossibility of the latter (thus showing equivalence)

midco

Re: (1), you might be able to phrase the problem as convex optimization in the unknown parameters, which probably runs in \tilde{O}(n^2) time.

[This comment is no longer endorsed by its author]
midco

Upon reflection, I now suspect that Impact (as defined in the post) is analogous to the Shapley value. In particular, the post could be reformulated using Shapley values and would attain similar results. I'm not sure whether Impact-scarcity holds for Shapley values, but the examples from the post suggest that it does.
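For reference (standard definition, not from the post): the Shapley value of player $i$ in a cooperative game with value function $v$ on player set $N$ is
$$\varphi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \bigl(v(S \cup \{i\}) - v(S)\bigr),$$
i.e. player $i$'s expected marginal contribution under a uniformly random ordering of the players; the analogy would identify Impact with some such marginal-contribution average.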

(thanks to TurnTrout for pointing this out!)

midco

Answering questions one-by-one:

  • I played fast and loose with IEU in the intro section. I think it can be consistently defined in the Bayesian game sense of "expected utility given your type", where the games in the intro section are interpreted as each player having a constant type. In the Bayesian Network section, this is explicitly the definition (in particular, player i's IEU varies as a function of their type).
  • Upon reading the Wiki page, it seems like Shapley value and Impact share a lot of common properties? I'm not sure of any exact relationship, but I'll look into connections in the future.
  • I think what's going on is that the "causal order" of the two variables is switched, which makes one of them "look as though" it controls the value of the other. In terms of game theory the distinction is (I think) definitional; I include it because Impact has to explicitly consider this dynamic.
  • In retrospect: yep, that's conditional expectation! My fault for the unnecessary notation. I introduced it to capture the idea of a vector-space projection on random variables and didn't see the connection to pre-existing notation (the projection view is spelled out below).
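Spelling out the projection view referenced above (standard fact, not specific to the post): for square-integrable $X$, the conditional expectation $E[X \mid Y]$ is exactly the orthogonal projection of $X$ onto the closed subspace of $\sigma(Y)$-measurable random variables in $L^2$, i.e. it is the $\sigma(Y)$-measurable $Z$ minimizing $E[(X - Z)^2]$.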
midco

We conjecture this? We've only proven limiting cases so far (constant-sum, and strongly suspected for common-payoff), but we're still working on formulating a more general claim.

midco

Thank you so much for the comments! I'm pretty new to the platform (and to EA research in general), so feedback is useful for getting a broader perspective on our work.

To add to TurnTrout's comments about power-scarcity and the CCC, I'd say that the broader vision of the multi-agent formulation is to establish a general notion of power-scarcity as a function of "similarity" between players' reward functions (I mention this in the post's final notes). In this paradigm, the constant-sum case is one limiting case of "general power-scarcity", which I see as the "big idea". As a simple example, general power-scarcity would provide a direct motivation for fearing robustly instrumental goals, since we'd have reason to believe an AI with goals orthogonal(ish) to human goals would be incentivized to compete with humanity for Power.

We're planning to continue investigating multi-agent Power and power-scarcity, so hopefully we'll have a more fleshed-out notion of general power-scarcity in the months to come.

Also, re: "as players' strategies improve, their collective Power tends to decrease", I think your intuition is correct? Upon reflection, the effect can be explained reasonably well by "improving your actions has no effect on your Power, but a negative effect on opponents' Power".