I think Proposition 1 is false as stated, because the resulting functional is not always continuous (wrt the KR-metric). The function , with , should be a counterexample. However, the non-continuous functional should still be continuous on the set of sa-measures.
Another thing: the space of measures is claimed to be a Banach space with the KR-norm (in the notation section). Afaik this is not true: the space is a Banach space with the TV-norm, but with the KR-metric/norm it should not be complete and is merely a normed vector space. Also, the claim (in "Basic concepts") that is the dual space of is only true if equipped with the TV-norm, not with the KR-metric.
Another nitpick: in Theorem 5, the type of in the assumption is probably meant to be , instead of .
Regarding direction 17: there might be some potential drawbacks to ADAM. I think it's possible that some very agentic programs have a relatively low score, because explicit optimization algorithms have low complexity.
(Disclaimer: the following argument is not a proof, and appeals to some heuristics/etc. We fix for these considerations too.) Consider a utility function . Further, consider a computable approximation of the optimal policy (AIXI that explicitly optimizes for ) with an approximation parameter n (this could be AIXI-tl, plus some approximation of ; a higher n gives a better approximation). We will call this approximation of the optimal policy . This approximation algorithm has complexity , where is a constant needed to describe the general algorithm (this should not be too large).
We can get a better approximation by using a quickly growing function, such as the Ackermann function, with . Then we have .
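The reason the description length of n can stay small while n itself is astronomically large is that fast-growing functions have short programs. A minimal Python sketch (purely illustrative; the two-argument Ackermann function here stands in for any computable fast-growing function):

```python
def ackermann(m, n):
    """Two-argument Ackermann function: a few lines to write down,
    but its values grow faster than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# This tiny program already names enormous approximation parameters:
# ackermann(4, 2) equals 2**65536 - 3, far beyond anything that could
# arise from enumerating parameters directly.
print(ackermann(3, 3))  # 61
```

So a policy defined as "run the approximation scheme with parameter ackermann(m, m)" has complexity roughly that of the scheme plus a constant, while the parameter itself is huge.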
What is the score of this policy? We have . Let be maximal in this expression. If , then .
For the other case, let us assume that if , the policy is at least as good at maximizing as . Then, we have .
I don't think that the assumption ( maximizes better than ) is true for all and , but plausibly we can select such that this is the case (exceptions, if they exist, would be a bit weird, and ADAM working well only because of these weird exceptions would feel a bit disappointing to me). One thing that is not captured by approximations such as AIXI-tl is programs that halt but have insane runtime (longer than ). Again, it would feel weird to me if ADAM sort of works because of low-complexity, extremely-long-running halting programs.
To summarize, maybe there exist policies which strongly optimize a non-trivial utility function with approximation parameter , but where is relatively small.
I think the claim that "deontological preferences are isomorphic to utility functions" is wrong as presented.
First, the formula has issues with dividing by zero and with probabilities not summing to one (and it re-uses the variable as a local variable in the sum). So you probably meant something like . Even then, I don't think this describes any isomorphism between deontological preferences and utility functions.
Utility functions are invariant when multiplied with a positive constant. This is not reflected in the formula.
Utility maximizers usually take the action with the best utility with probability 1, rather than using different probabilities for different utilities.
Modelling deontological constraints as probability distributions doesn't seem right to me. Say I decide between drinking green tea and black tea, and neither violates any deontological constraint; then assigning some values (which ones?) to P("I drink green tea") or P("I drink black tea") doesn't describe these deontological constraints well.
Any behavior can be encoded as a utility function, so finding some isomorphism to utility functions is usually possible, but not always meaningful.
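Two of the points above can be checked numerically (a hypothetical sketch: the formula is not rendered for me, so the softmax form is my guess at what was intended, and the tea example reuses the scenario above):

```python
import math

def softmax_policy(utilities):
    """Action probabilities proportional to exp(utility) -- my guess
    at the intended formula, not necessarily the author's."""
    weights = [math.exp(u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Multiplying utilities by a positive constant changes the distribution,
# even though the two utility functions are usually considered equivalent.
u = [1.0, 2.0]
print(softmax_policy(u))                   # ~[0.269, 0.731]
print(softmax_policy([3 * x for x in u]))  # ~[0.047, 0.953]

# And any fixed behavior is the argmax of a (vacuous) indicator utility:
actions = ["drink green tea", "drink black tea"]
u_indicator = lambda a: 1.0 if a == "drink green tea" else 0.0
print(max(actions, key=u_indicator))       # drink green tea
```

The indicator construction shows why "there exists a utility function reproducing the behavior" is a very weak statement: the resulting utility function carries no information beyond the behavior itself.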
I am going to assume that in the code, when calculating p_alice_win_given_not_caught, we do not divide the term by two (since dividing is not that consistent with the description). I am also assuming that is a typo and is meant, which would also be more consistent with other stuff. So I am going to assume a symmetrical version.
Here, P(Alice wins) is . Wlog we can assume (otherwise Bob will run everything or nothing in shielded mode).
We claim that is a (pure) Nash equilibrium, where .
To verify, let's first show that Alice cannot make a better choice if Bob plays . We have . Since this only depends on the sum, we can make the substitution . Thus, we want to maximize . We have . Rearranging, we get . Taking logs, we get . Rearranging, we get . Thus, is the optimal choice. This means that if Bob sticks to his strategy, Alice cannot do better than .
Now, let's show that Bob cannot do better. We have . This does not depend on and anymore, so any choice of and is optimal if Alice plays .
(If I picked the wrong version of the question, and you actually want some symmetry: I suspect that the solution will have similarities, or that in some cases the solution can be obtained by rescaling the problem back into a more symmetric form.)
This article talks a lot about risks from AI. I wish the author were more specific about what kinds of risks they are thinking about. For example, it is unclear which parts are motivated by extinction risks and which are not. The same goes for the benefits of open-sourcing these models. (Note: I haven't read the reports this article is based on; those might be more specific.)
Thank you for writing this review.
The strategy assumes we'll develop a good set of safety properties that we're demanding proof of.
I think this is very important. From skimming the paper, it seems that unfortunately the authors do not discuss it much. I imagine that formally specifying safety properties is actually a rather difficult step.
To go with the example of not helping terrorists spread a harmful virus: how would you even go about formulating this mathematically? This seems highly non-trivial to me. Would you need to mathematically formulate what exactly a harmful virus is?
The same holds for Asimov's three laws of robotics: turning these into actual math or code seems quite challenging.
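To make the gap concrete, here is a toy "formal property" one *can* actually check (a hypothetical sketch; the blocklist term is made up), which illustrates how far a checkable specification is from the intended "don't help spread harmful viruses" property:

```python
# A made-up, purely syntactic stand-in for a safety specification.
BLOCKLIST = {"smallpox synthesis protocol"}

def satisfies_toy_spec(output: str) -> bool:
    """Checkable, but syntactic: it bans exact strings,
    not harmful *content*, so paraphrases slip through."""
    return not any(term in output.lower() for term in BLOCKLIST)

print(satisfies_toy_spec("here is a green tea recipe"))  # True
print(satisfies_toy_spec("the smallpox synthesis protocol is ..."))  # False
```

The property we can state formally (string matching) is not the property we care about (semantic harmfulness), and closing that gap is exactly the hard, unaddressed step.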
There's likely some room for automated systems to figure out what safety humans want, and turn it into rigorous specifications.
Probably obvious to many, but I'd like to point out that these automated systems themselves need to be sufficiently aligned to humans, while also accomplishing tasks that are difficult for humans and that probably involve a lot of moral considerations.
A common response is that “evaluation may be easier than generation”. However, this doesn't mean evaluation will be easy in absolute terms, or relative to one’s resources for doing it, or that it will depend on the same resources as generation.
I wonder to what degree this is true for the human-generated alignment ideas that are being submitted to LessWrong/the Alignment Forum?
For mathematical proofs, evaluation is (imo) usually easier than generation: a well-written proof can often be evaluated by reading it once, whereas the person who wrote it typically had to consider many different approaches and discard most of them.
To what degree does this also hold for alignment research?
This sounds like https://www.super-linear.org/trumanprize. It seems like it is run by Nonlinear and not FTX.