It's Pasha Kamyshev, btw :) Main engagement is through
1. reading MIRI papers, especially the older agent foundations agenda papers
2. following the flashy developments in AI, such as the Dota and Go RL results, while being somewhat skeptical of the "random play" part of the whole thing (other aspects are indeed impressive)
3. Various math textbooks: Category Theory for Programmers, Probability Theory: The Logic of Science, and others
4. Trying to implement certain theory in code (quantilizers, different prediction-market mechanisms)
5. Statistics investigations into various claims of "algorithmic bias"
6. Conversations with various people in the community on the topic
This is excellent. I believe that this result is a good simulation of "what we could expect if the universe is populated by aliens".
Assuming the following:
1) aliens consider both destroying other civilizations and making contact too early to be forms of defection
2) aliens reason from UDT principles
3) advanced civilizations have some capacity to simulate non-advanced ones
Then, roughly, the model in the post will explain what the strategic equilibrium is.
If this is indeed a typo, please correct it in the top-level post and link to this comment. The broader point is that P(H | X2, M) is the probability of heads conditioned on Monday and X2, while P(H | X2) is the probability of heads conditioned on X2 alone. In the later paragraphs, you seem to use the second interpretation. In fact, it seems your whole post's argument and "solution" rest on this typo.
Dismissing betting arguments is very reminiscent of dismissing one-boxing in Newcomb's problem because one defines "CDT" as rational. The point of probability theory is to be helpful in constructing rational agents. If the agents that your probability theory leads to are not winning bets with the information given to them by said theory, the theory has questionable usefulness.
Just to clarify, I have read Probability Theory: The Logic of Science, as well as Bostrom's and Armstrong's papers on this. I have also read https://meaningness.com/probability-and-logic. The question of the relationship between probability and logic is not clear-cut. And as Armstrong has pointed out, decisions can be more easily determined than probabilities, which means the ideal relationship between decision theory and probability theory may not be clear-cut either, but that's a broader philosophical point that needs a top-level post.
In the meantime, Fix Your Math!
I think this post is fairly wrongheaded.
First, your math seems to be wrong.
Your numerator is ½·p(y), which looks like Pr(H∣M) · Pr(X2∣H,M).
Your denominator is ½·p(y) + ½·p(y)·(2−q(y)), which looks like
Pr(H∣M) · Pr(X2∣H,M) + Pr(¬H∣M) · Pr(X2∣¬H,M), i.e. Pr(X2∣M).
By Bayes' rule, Pr(H∣M) · Pr(X2∣H,M) / Pr(X2∣M) = Pr(H∣X2,M), which is not the quantity you claimed to compute, Pr(H∣X2). Unless you have some other derivation, or a good reason why you omitted M in your calculations, this isn't really "solving" anything.
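As a quick numeric check of the quoted expression (taking the post's formula at face value, with illustrative values for p(y) and q(y)): p(y) cancels, so the ratio ½·p(y) / (½·p(y) + ½·p(y)(2−q(y))) collapses to 1/(3−q(y)), giving 1/2 at q(y) = 1 and 1/3 at q(y) = 0.

```python
from fractions import Fraction

def posterior(p, q):
    """The quoted expression: (1/2)p(y) / [(1/2)p(y) + (1/2)p(y)(2 - q(y))]."""
    half = Fraction(1, 2)
    num = half * p
    den = half * p + half * p * (2 - q)
    return num / den

# p(y) cancels; the result depends only on q(y) and equals 1/(3 - q).
for p in (Fraction(1, 10), Fraction(9, 10)):
    assert posterior(p, Fraction(1)) == Fraction(1, 2)     # q = 1: the 1/2-er answer
    assert posterior(p, Fraction(0)) == Fraction(1, 3)     # q = 0: the 1/3-er answer
    assert posterior(p, Fraction(1, 2)) == Fraction(2, 5)  # in general, 1/(3 - q)
```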
Second, the dismissal of betting arguments is strange. If decision theory is indeed downstream of probability, then probability acts as an input to decision theory. So, if there is a particular probability p of heads at a given moment, it is most rational to bet according to that probability. If your ideal decision theory diverges from the probability estimates in order to arrive at the right answer on betting puzzles, then the probability is useless. If it takes the probability into account and gets the wrong answer, then it is not truly rational.
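The betting point can be made concrete with a small simulation (my own sketch, not from the post): heads means one awakening, tails means two. If Beauty bets on heads at every awakening, the per-awakening frequency of heads is about 1/3, while the per-experiment frequency is 1/2, so which number is "rational" depends on which bets the theory is supposed to price.

```python
import random

def simulate(n_experiments=100_000, seed=0):
    """Sleeping Beauty: heads -> 1 awakening, tails -> 2 awakenings.
    Return the frequency of heads per experiment and per awakening."""
    rng = random.Random(seed)
    heads = heads_awakenings = total_awakenings = 0
    for _ in range(n_experiments):
        coin_heads = rng.random() < 0.5
        awakenings = 1 if coin_heads else 2
        total_awakenings += awakenings
        if coin_heads:
            heads += 1
            heads_awakenings += awakenings
    return heads / n_experiments, heads_awakenings / total_awakenings

per_experiment, per_awakening = simulate()
print(per_experiment)  # close to 1/2
print(per_awakening)   # close to 1/3
```

A bet settled once per experiment is fair at 1:1 odds; a bet settled at every awakening is only fair at 2:1 odds, which is the substance of the usual thirder betting argument.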
More generally, probability theory is supposed to completely capture the state of knowledge of an agent, and if there is other knowledge that probability obscures, then that knowledge is important to capture in another system as well. Building a functional AI would then require a knowledge representation that is separate from, but interfaces with, the probability representation, making the real question: what is that knowledge representation?
“Probability theory is logically prior to decision theory.” Yes, this is the common view, because probability theory was developed first and is easier, but it's not actually obvious that this *has* to be the case. If there is a new math that treats decisions as more fundamental than beliefs, it might be better for a real AI.
Third, the dismissal of “not H and it’s Tuesday” as not a proposition doesn’t make sense. Classical logic encodes arbitrary statements within AND- and OR-type constructions. There aren’t a whole lot of restrictions on them.
Fourth, the assumptions. Generally, I have read the problem as saying that whatever the beauty experiences on Monday is the same as on Tuesday, i.e. q(y) = 1, at which point this argument reduces to the ½-er position, and then the usual anti-½, pro-⅓ arguments apply. The paradox still stands for the moment when you wake up, before you get any additional bits of input. The question of updating on actual input in the problem is an interesting one, but it hides the paradox of what your probability should be *at the moment of waking up*. You seem to simply declare it to be ½, by saying:
The prior for H is even odds: Pr(H∣M)=Pr(¬H∣M)=1/2.
This is generally indistinguishable from the ½ position you dismiss, which argues for that prior on the basis of “no new information.” You still don’t know how to handle the situation of being told that it’s Monday and needing to update your probability accordingly, versus conditioning on Monday and doing inference.
I think it's worth distinguishing between "smallest" and "fastest" circuits.
A note on smallest.
1) Consider a travelling salesman problem and a small program that brute-forces the solution to it. If the "daemon" wants to make the travelling salesman visit a particular city first, it can simply order the solution space so that tours visiting that city are considered first. This has no guarantee of working, but the daemon would get what it wants some of the time. More generally, if there is a class of solutions we are indifferent between, but daemons have a preference order over, then nearly all deterministic algorithms could be seen as daemons. That said, this situation may be "acceptable", and it's worth re-defining the problem to understand exactly what is acceptable and what isn't.
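A toy version of the tie-ordering point (my own illustration; the instance and function names are made up): two brute-force TSP solvers that are both correct, but whose enumeration order deterministically picks different tours from among tied optima, e.g. one that puts a "preferred" city first.

```python
from itertools import permutations

# Symmetric distances with deliberate ties: every tour has the same length,
# so all tours are optimal and the solver's enumeration order decides.
dist = {frozenset(p): d for p, d in [
    (("A", "B"), 1), (("A", "C"), 1), (("A", "D"), 1),
    (("B", "C"), 1), (("B", "D"), 1), (("C", "D"), 1),
]}

def tour_length(tour):
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(dist[frozenset(leg)] for leg in legs)

def brute_force(cities, order=sorted):
    """Return the first optimal tour in the enumeration order chosen by
    `order`. Both orders give correct solvers; the order is where a
    daemon's preference over tied optima can hide."""
    best = None
    for tour in permutations(order(cities)):
        if best is None or tour_length(tour) < tour_length(best):
            best = tour
    return best

cities = ["A", "B", "C", "D"]
honest = brute_force(cities)                                           # starts at "A"
daemon = brute_force(cities, order=lambda c: sorted(c, reverse=True))  # starts at "D"
assert tour_length(honest) == tour_length(daemon)  # both are optimal
print(honest[0], daemon[0])  # different first city among tied optima
```

Both programs satisfy the specification "return an optimal tour", which is the sense in which nearly any deterministic tie-breaking rule could be read as a daemon's preference.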
A note on fastest
2) Consider a prime-generation problem, where we want some large primes between 10^100 and 10^200. A simple algorithm that hardcodes a set of primes and returns them is "fast". It isn't the smallest, since it has to store the primes. In a less silly example, a general prime-returning algorithm could look only for primes of particular types, such as Mersenne primes. The general intuition is that optimizations that make algorithms "faster" can come at the cost of forcing a particular probability distribution on the solution.
This is really good; however, I would love some additional discussion of the way the current optimization changes the user.
Keep in mind that when Facebook optimizes "clicks" or "scrolls", it does so by altering user behavior, thus altering the user's internal S1 model of what is important. This can frequently lead to a distortion of reality, beliefs, and self-esteem. There have been many articles and studies correlating Facebook usage with mental-health problems; but even without them, simply understanding "optimization" is enough reason to expect that this is happening.
While a lot of these issues are pushed under the same umbrella of "digital addiction," I think Facebook is a lot more serious a problem than, say, video games. Video games do not, as a rule, act through the very social channels that are helpful for reducing mental illness. Facebook does.
Another problem is Facebook's internal culture, which, as of four years ago, was very marked by Kool-Aid that somehow promised unbelievable power (one billion users, hooray) without necessarily caring about responsibility ("all we want to do is make the world open and connected, why is everyone mad at us").
This problem is also compounded by the fact that Facebook gets a lot of shitty critiques (like the critique that they run A/B tests at all) and has thus learned to ignore legitimate questions of value learning.
Full disclosure: I used to work at FB.
I am also confused. How does this do against EABot, i.e. C1 = □(Them(Them)=D) and M = DefectBot? Is the number of boxes not well defined in this case?
hmm, looks like the year is wrong and the delete button has failed to work :(
Maybe this has been said before, but here is a simple idea:
Directly specify a utility function U which you are not sure about, but also discount the AI's own power as part of it. The new utility function is U − power(AI), where power is a fast-growing function of a mix of the AI's source-code complexity, intelligence, hardware, and electricity costs. One needs to be careful about how to define "self" in this case, as a careful redefinition by the AI would remove the controls.
One also needs to consider the creation of subagents with the proper utilities as well, since in a naive implementation, subagents will just optimize U, without restrictions.
This is likely not enough, but it has the advantage that the AI does not have an a priori drive to become stronger, which is better than boxing an AI that does.
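A toy rendering of the proposal (entirely my own illustration; `power` here is just a stand-in for the source-complexity/hardware/electricity mix described above): the agent picks the action maximizing U(outcome) − λ·power(outcome), so capability-grabbing actions only win when their task payoff outweighs the penalty.

```python
# Toy world: each action leads to a (task utility, power gained) outcome.
# All numbers are made up purely for illustration.
ACTIONS = {
    "do_task_modestly":       {"U": 10, "power": 1},
    "self_improve_then_task": {"U": 12, "power": 50},
    "grab_all_resources":     {"U": 15, "power": 500},
}

def penalized_value(outcome, lam):
    # U - lam * power(AI): the AI's own power is discounted inside the
    # utility itself, rather than restricted by an external box.
    return outcome["U"] - lam * outcome["power"]

def choose(lam):
    return max(ACTIONS, key=lambda a: penalized_value(ACTIONS[a], lam))

print(choose(0.0))  # no penalty: grabs resources
print(choose(0.1))  # mild penalty: the modest action wins
```

The subagent caveat shows up directly here: an agent scored this way would still happily build a helper whose outcomes are charged zero `power`, which is why the penalty has to follow the "self" definition through any delegation.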
Well, I get where you are coming from with Goodhart's law, but that's not the question. Formally speaking, if we take the set of all utility functions with complexity < N, for some fixed complexity bound N, then one of them is going to be the "best", i.e. most correlated with the "true utility" function, which we can't compute.
As you point out, if we are selecting utilities that are too simple, such as straight-up life expectancy, then even the "best" function is not "good enough" to just punch into an AGI, because it will likely overfit and produce bad consequences. However, we can still reason about "better" or "worse" measures of societies. People might complain about the unemployment rate, but it's a crappy metric on which to base your decision about which societies are overall better than others, plus it's easier to game.
The point of at least "trying" to formalize values is that we get a not-too-large set of metrics we might care about in arguments like: "but the AGI reduced GDP; well, it also reduced the suicide rate". Which is more important? Without the simple guidance of something we value, it's going to be a long and unproductive debate.
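A sketch of the "best proxy under a complexity bound" framing (my own toy; the true utility is of course unknowable in reality, so here it is simply simulated): score a few candidate metrics by their correlation with a hidden "true utility" and pick the most correlated one.

```python
import random

rng = random.Random(0)

# Hidden "true utility" of 200 hypothetical societies, plus noisy proxies.
# The proxy names and noise levels are invented for illustration.
true_u = [rng.gauss(0, 1) for _ in range(200)]
proxies = {
    "gdp":          [u + rng.gauss(0, 0.5) for u in true_u],
    "life_expect":  [u + rng.gauss(0, 1.0) for u in true_u],
    "unemployment": [-u + rng.gauss(0, 3.0) for u in true_u],  # noisy, inverted
}

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The "best" proxy in this bounded set: highest |correlation| with true utility.
best = max(proxies, key=lambda name: abs(corr(proxies[name], true_u)))
print(best, round(corr(proxies[best], true_u), 2))
```

This is exactly the sense in which one can call one crude metric "better" than another without claiming that any of them is safe to hand to an AGI.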