Hello, this is my first post on LessWrong, a community I deeply value. Feel free to tell me to RTFM, especially if you can point me there.

Yesterday I attended the weekly rationalist meetup at "The Territory" in Seattle for the first time. I attended for many reasons, but I was particularly excited to discuss Yudkowsky's new novel with others. His novel can be found here: https://www.glowfic.com/posts/4582

Only one of the eleven or so people there had heard of it, so this post is my attempt to market the book. Since all marketing should also add value, I will do so by giving you a warm-up problem for some of the math and concepts you will encounter in the novel.

The problem is as follows:

You are playing a game against ROB, the robber. In this game, ROB will choose two distinct arbitrary (real/computable) numbers, A and B. ROB will send both numbers to TABI the arbiter, who will secretly flip a fair coin and then hand you either A or B depending on the outcome of the flip. Your task is to come up with an algorithm which does better than 50% accuracy in determining if you were handed the larger number.
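(If it helps to see the rules as code, here is a minimal, non-spoiler sketch of a single round; the function names and interfaces are just my own illustrative choices, not part of the problem.)

```python
import random

def play_round(rob, player):
    """One round of the game: `rob()` returns two distinct numbers, and
    `player(shown)` returns True to claim "I was handed the larger number"."""
    a, b = rob()                                  # ROB's two distinct numbers
    shown = a if random.random() < 0.5 else b     # TABI's secret fair coin flip
    return player(shown) == (shown == max(a, b))  # True iff the guess was correct
```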

I will post the solution tomorrow, and then I will highlight specific ways I have applied the lessons from Yudkowsky's latest work in real/professional life.

Note that the book will not explicitly spoil this problem for you, and vice versa.

Edit: solution is here.

[-]dxu

 Supposing (without loss of generality) that A > B, we have two cases:

  1. TABI handed you A.
  2. TABI handed you B.

Your job is to ensure that you answer "yes" more often in case 1 than in case 2. For this purpose it suffices to probabilistically answer "yes" according to some function that increases monotonically across the reals; this ensures that the probability of answering "yes" will always be higher for A than for B, given any two numbers A > B.

There are infinitely many functions with this property, of course, but without further knowledge of ROB's algorithm, there is no way of knowing which of them performs best. However, since any such function will result in >50% accuracy, any one of them suffices as an answer to the question as posed.
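A minimal sketch of this strategy, using the logistic function as one arbitrary choice of strictly increasing function (the specific function and the test pair below are purely illustrative): given any pair A > B, answering "yes" with probability f(x) wins with probability 1/2 + (f(A) − f(B))/2, which is strictly above 1/2.

```python
import math
import random

def yes_probability(x: float) -> float:
    """Any strictly increasing map from the reals into (0, 1) works;
    the logistic function is just one convenient choice."""
    return 1.0 / (1.0 + math.exp(-x))

def claim_larger(shown: float) -> bool:
    """Claim "I was handed the larger number" with probability f(shown)."""
    return random.random() < yes_probability(shown)

# Sanity check on one fixed pair: the win rate should be about
# 0.5 + (f(1) - f(-1)) / 2, roughly 0.73, comfortably above 0.5.
A, B = 1.0, -1.0
trials = 100_000
wins = sum(claim_larger(x) == (x == A)
           for x in (random.choice((A, B)) for _ in range(trials)))
print(wins / trials)
```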

[-]TLW

I don't think that works? Or at least I'm missing something.

I do not see how your "without loss of generality" holds. If B > A, your 'correct' response with a monotonically increasing function becomes an incorrect response with a monotonically decreasing function - and with two distinct random reals the probability of this happening is... 50%. (At least in cases where this probability is even defined...)

[This comment is no longer endorsed by its author]
[-]dxu

It's not clear to me what you're trying to say. Do you have a concrete counterexample in mind, e.g. an algorithm for ROB to follow, alongside a monotonically increasing function of your choice, such that the math does not give a strictly >50% accuracy for any agent which replies "yes" with probability determined by the function in question?

[-]TLW

 Long story short, I misinterpreted the question. (I was thinking the task was to predict whether ROB chose A > B or A < B.)

[-]TLW

 Long story short, I misinterpreted the question.

Something along the lines of: if x is your number, say yes with probability something like 1 - 1/x? Anything that makes your likelihood of saying yes increase as x gets bigger. Given two numbers, you would thus have been more likely to say yes to the larger number.

[-]TLW

 Consider a ROB that does the following:

Flips a fair coin (under the same "this is a quick way of saying 'ROB has a private random oracle'" assumption that the arbiter has and that game theory in general requires). If heads: A=0.25/B=0.75; otherwise A=0.75/B=0.25.

This is indistinguishable from a ROB that always returns those two numbers in a consistent order.

I'll commit to my ignorance here, before you give the solution.  I don't think it's possible to get better than 50%, unless ROB has a known and flawed algorithm for choosing.  I see some similarity to https://en.wikipedia.org/wiki/Two_envelopes_problem, which makes the point that there is no uniform distribution over an infinite set.

But there's a lot of wiggle room in "better than 50%" - if you mean 50% + epsilon over hugely many trials, you can probably argue that there is some bound that you can infer with some chance of correctness, and if that happens to be one of the numbers, you know which direction to guess.  If you mean "measurably better than 50%", or "better than 50% for every A and B", then no.

One possible method for ROB: pick a number in the middle-third of a large but finite set of integers.  Flip a coin to add or subtract one.  Those two adjacent integers are what gets sent to TABI.

Edit: and it didn't take long for dxu to show me how I'm wrong. Reason for my error:

I mistakenly focused on the coin flip and the labeling of A and B, rather than the underlying fact that one number MUST be larger than the other - we don't care which is A and which is B, we care which is bigger.

Please spoiler your edit

 The intended answer for this problem is the Frequentist Heresy in which ROB's decisions are treated as nonrandom even though they are unknown, while the output of our own RNG is treated as random, even though we know exactly what it is, because it was the output of some 'random process'.

Instead, use the Bayesian method. Let P({a,b}) be your prior for ROB's choice of numbers. Let x be the number TABI gives you. Compute P({a,b}|x) using Bayes' Theorem. From this you can calculate P(x=max(a,b)|x). Say that you have the highest number if this is over 1/2, and the lowest number otherwise. This is guaranteed to succeed more than 1/2 of the time, and to be optimal given your state of knowledge about ROB.
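A sketch of that computation, assuming (purely for illustration) a discrete prior over three candidate pairs; the prior itself and the off-prior fallback below are my own choices, not part of the parent comment:

```python
# Illustrative prior over ROB's possible pairs {a, b}.
prior = {
    frozenset({1.0, 2.0}):  0.5,
    frozenset({2.0, 7.0}):  0.3,
    frozenset({-3.0, 2.0}): 0.2,
}

def p_shown_is_max(x: float) -> float:
    """P(x = max(a, b) | TABI handed us x), via Bayes' theorem.
    TABI shows each element of the chosen pair with probability 1/2."""
    joint_any = 0.0   # P(pair chosen, x shown)
    joint_max = 0.0   # P(pair chosen, x shown, x is the larger element)
    for pair, p in prior.items():
        if x in pair:
            joint_any += 0.5 * p
            if x == max(pair):
                joint_max += 0.5 * p
    # Off-prior observations get no update; fall back to 1/2 (a choice, not a law).
    return joint_max / joint_any if joint_any > 0 else 0.5

# Decision rule from the parent comment: claim "larger" iff the posterior > 1/2.
print(p_shown_is_max(2.0))  # 0.7: 2 is the max of {1,2} and {-3,2} but the min of {2,7}
```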

This might be "optimal given your state of knowledge about ROB" but it isn't "guaranteed to succeed more than 1/2 of the time". For example:

My prior is that Rob always picks a positive and a negative number.

Actually Rob always picks two positive numbers.

Thus when we play the game I always observe a positive number, guess that it is the larger number, and have a 0.5 probability of winning. 0.5 is not greater than 0.5.

Right, I should have chosen a more Bayesian way to say it, like 'succeeds with probability greater than 1/2'.

How is "better than 50% accuracy" defined?

Does it need to hold for every possible strategy of ROB's?

Or only on average, with respect to some measure on the (uncountable) set of strategies? If so, what measure?

By "better than 50% accuracy" I am trying to convey "Provide an algorithm such that if you ran a casino where the players acted as ROB, the casino can price the game at even money and come out on top, given the law of large numbers". 

(Perhaps?) More precisely, I mean that for any given instantiation of ROB's strategy, and for any given target reward R and payoff probability P<1, there exists a number N such that if you ran N trials betting even money with ROB, you would have probability at least P of ending up with at least R in payoff (assuming you start with 1 dollar or whatever).

You can assume ROB will know your algorithm when choosing his distribution of choices. 
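One way to read this criterion operationally is as a simulation harness; a minimal sketch, using the same illustrative `rob()`/`player()` interface as the sketch in the post (stake sizes and bookkeeping here are my own choices):

```python
import random

def net_winnings(rob, player, n_trials: int) -> float:
    """Even-money betting: stake 1 unit per round, winning 1 unit on a correct
    guess and losing 1 unit otherwise.  Under the criterion above, a good
    `player` should end up ahead with probability at least P once n_trials
    is large enough."""
    bankroll = 0.0
    for _ in range(n_trials):
        a, b = rob()                       # ROB's pair of distinct numbers
        shown = random.choice((a, b))      # TABI's fair coin
        correct = player(shown) == (shown == max(a, b))
        bankroll += 1.0 if correct else -1.0
    return bankroll
```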

This one is a classic, so I can just copy-paste the solution from Google. The more interesting point is that this is one of those cases where math doesn't correspond to reality.

In the spirit of "trying to offer concrete models and predictions", I propose a challenge: write a bot which would consistently beat my ROB implementation over the long run, by enough of a margin that it would show up in numerical experiments. I need some time to work on implementing it (and might disappear for a while, in which case consider this one forfeited by me).

One rule I propose is that neither of us is allowed to use details of the other's implementation, in order to uphold the spirit of the original task.

A simple way to do this is for ROB to output the pair of integers {n, n+1} with probability (1/K)((K-1)/(K+1))^|n|, where K is some large number. Then even if you know ROB's strategy, the best probability you have of winning is 1/2 + 1/(2K).

If you sample an event N times, the standard deviation of your estimate of its probability is about 1/(2√N). So if we pick K >> √N, making the 1/(2K) edge much smaller than that, our probability of success will be statistically indistinguishable from 1/2.

The only difficulty is implementing code to sample from a geometric distribution with a parameter so close to 1.
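A sketch of one way to implement that ROB, assuming the distribution above, P(n) = (1/K)·((K−1)/(K+1))^|n| over the integers: draw n = 0 with probability 1/K, otherwise draw |n| ≥ 1 by inverting the geometric CDF (using log1p so that the ratio being close to 1 doesn't lose precision) and pick a sign with a fair coin.

```python
import math
import random

def rob_pair(K: float):
    """Return the pair {n, n+1} where n has P(n) = (1/K) * r**abs(n),
    with r = (K - 1) / (K + 1).  Decomposition used: P(n = 0) = 1/K;
    otherwise |n| >= 1 is geometric (ratio r, starting at 1) with a fair-coin sign."""
    if random.random() < 1.0 / K:
        n = 0
    else:
        # Inverse-CDF draw of m >= 1 with P(m) = (1 - r) * r**(m - 1).
        # log(r) = log1p(-2 / (K + 1)), which stays accurate for large K.
        u = 1.0 - random.random()                          # u in (0, 1]
        m = 1 + int(math.log(u) / math.log1p(-2.0 / (K + 1.0)))
        n = m if random.random() < 0.5 else -m
    return (n, n + 1)
```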

What is the best place to discuss the story?

I think that the main barrier is that I have no idea how to read this book.

It looks like a forum to me. Do I read all the posts, or are some of them comments from readers? Do I need to pay attention to usernames, post times, history of each comment?

There are no reader comments -- this is a "glowfic"; the whole thread is the story. Each "comment" is a portion of the story. You need to read it all.

You do need to pay attention to the usernames, because they often identify which character is speaking (e.g. "Keltham" or "Carissa Sevar"), and the accompanying graphics give an approximation of the character's facial expression.

(The post times and history of each comment are not important in-story; the "history" is just edits made to that story portion, and the post times are just when each portion was posted.)

One of the authors is "Iarwain" (who is EY); the other author is "lintamande".

Good questions! It's a forum thread with posts from two users, "Iarwain" (Yudkowsky) and "lintamande", who is co-authoring the piece. There are no extraneous posts, although there are (very rarely) OOC posts, for instance announcing the Discord or linking to a thread for a side story.

In each post, either user will post either as a character (i.e. Keltham, Carissa, and others - each user writes multiple characters) or without a character (for third-person exposition). I usually use the avatars, when possible, to quickly identify who is speaking.

You don't need to pay attention to history or post time until you catch up to the current spot and start playing the F5 game (they are writing at a very quick pace).

I'm very interested to see the solution to this problem. I agree that "better than 50%" is a little vague; however, even with the most generous interpretation I'm skeptical about the existence of such an algorithm, as long as we allow the numbers to be arbitrarily selected.

Consider the following strategy. For N rounds of play, before round 1, let ROB arbitrarily select a list of N numbers. For each round, let ROB arbitrarily choose one member x of the list as A, remove that member from the list, and select B as x+1.

Clearly each round is independent since the selection of each round's numbers did not depend on the previous round in any respect.

For the first round, it's clearly impossible to produce an algorithm which gives a better than 50% chance of success. Since subsequent rounds are independent, the same conclusion holds.

Could you explain why it's clearly impossible to produce an algorithm that gives better than 50% chance of success on the first round? I think I follow the rest of your argument.

ROB selects A and B. First suppose A < B, and suppose A is revealed. Further suppose that some deterministic algorithm R exists which takes in A and produces the probability that A is the smaller number.

In round one the only input to the algorithm can be A alone. Furthermore, since we have supposed that R is "better than 50%", R must yield P(A smaller) > 0.5. We can then easily extrapolate P(A bigger) < 0.5.

Now suppose we have the opposite case, that A > B. Again, the only input to our algorithm for the first round can be A. However, we must now receive as output P(A smaller) < 0.5, and thus P(A bigger) > 0.5.

But in both cases our only input was A, so R must not be deterministic, since it produces two different results on the same input. This is a contradiction; hence there is no such deterministic algorithm R.

It is possible that there is a nondeterministic algorithm R'; however, I'm almost certain that no such algorithm can outperform a deterministic one in a case like this.

 Could you explain why you are almost certain?

Other posters have suggested using a function which increases monotonically over the reals. My instinct is that for any such function, and for any choice of epsilon, there is some delta such that n+delta does not provide a benefit of epsilon.

In my example I spoke of selecting n and n+1 as A and B. Instead, let B be n+delta.

Recall that any monotnic increasing bounded function over the reals is continuous. Suppose we want to achieve a success rate with our guesses of 0.5+epsilon. ROB can select delta such that our performance is less than that. Then it follows that our success rate s will satisfy

0.5 < s < 0.5 + epsilon

Since we can produce a delta for any epsilon (because the monotonic function is continuous), it follows that we can define a sequence of success rates whose infimum is 0.5.

Essentially, ROB can select two real numbers sufficiently close together that no stratagem produces a benefit of any substance. He can enact a strategy which pushes us arbitrarily close to a 50/50 shot.
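To make the limiting behaviour concrete (a short derivation, with f denoting the monotone yes-probability function the other posters describe, and ROB's pair written as A < B): the shown number is A or B with probability 1/2 each, so

$$P(\text{win}) = \tfrac{1}{2}\bigl(1 - f(A)\bigr) + \tfrac{1}{2}f(B) = \tfrac{1}{2} + \tfrac{f(B) - f(A)}{2},$$

which is strictly greater than 1/2 whenever f is strictly increasing, but which ROB can push as close to 1/2 as he likes by shrinking B - A (the delta above).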

You made a comment regarding restricting the problem to integers. A solution may exist in that case, I'm not sure.

P.S. Thanks for making this post, it's been an interesting problem to think about.

I think you're right that there is no epsilon such that you can guarantee 50% + epsilon performance. You can however guarantee that your performance is greater than 50%. (Since ROB isn't allowed to use the full limit strategy of giving two identical reals, there will always be some nonzero delta.)

Monotonicity is insufficient; your function needs to be strictly increasing. Otherwise ROB can always pick two numbers from an interval on which your function is constant.

Thanks!

I do not believe that "any monotonically increasing bounded function over the reals is continuous". For instance, choose some monotonically increasing function bounded to (0,0.4) for x<-1, another function bounded to (0.45,0.55) for -1<x<1, and a third function bounded to (0.6,1) for x>1.

I did not check the rest of the argument, sorry

You're right. I misremembered, but I checked, and it is true that a bounded monotonic function on the reals can have only a countable number of discontinuities. So if ROB knows our algorithm, he can select one interval on which it is continuous and take all of his values from there.

A proof that the set of discontinuities is countable is given here:

https://math.stackexchange.com/questions/2793202/set-of-discontinuity-of-monotone-function-is-countable

Sketch of an impossibility proof. Maybe I'm missing something in the problem statement?

If it's possible to guarantee better-than-chance performance independent of ROB's policy for choosing numbers, then ROB has to be unable to adversarially select two numbers for which my algorithm does no better than chance.

The only input my algorithm has is the one real number I get from the arbiter, so my algorithm essentially consists of a function from reals to probabilities.

  1. If my function assigns exactly 50% probability to at least two distinct real numbers, then ROB can choose these two numbers and constrain me to exactly 50% accuracy.

  2. If my function assigns less than 50% probability to at least two distinct real numbers, then ROB can choose these two numbers and constrain me to less than 50% accuracy.

  3. If my function assigns greater than 50% probability to at least two distinct real numbers, then ROB can choose these two numbers and constrain me to less than 50% accuracy.

There are more than three real numbers, therefore my function must fulfill one of (1, 2, 3) above and won't guarantee better-than-chance performance.

Maybe the above doesn't hold if my function can return distributions of probabilities? Not sure.

I don't see how 2 is true. 

If you always answer that your number is lower, you definitely have exactly 50% accuracy, right? So ROB isn't constraining you to less than 50% accuracy.

Even without that section, the modeling of your evaluation as a mapping from input number to a binary "you will say that this number is higher than the other" result is pretty binding.  If ROB knows (or can infer) your mapping/algorithm, it can just pick numbers for which you're wrong, every time.

Which turns this into a "whoever knows the other's algorithm better, wins" situation.  

Yeah, I was confused. I was thinking you had to state a probability of having the larger number (rather than a binary guess) and try to get better than chance according to some scoring rule.

How do you know the story was written by EY?

[-]TLW

Questions:

Must ROB be deterministic? Must your agent be deterministic?

[-]TLW

 Consider a ROB that does the following:

Flips a fair coin (under the same "this is a quick way of saying 'ROB has a private random oracle'" assumption that the arbiter has and that game theory in general requires). If heads: A=0/B=1; otherwise A=1/B=0.

By a symmetry argument no agent can beat this ROB more than 50% of the time. (Any gains when the ROB coinflip is heads are balanced by an equivalent loss when the ROB coinflip is tails, and vice versa.)

(Regardless of what might happen in weird infinite distributions, it doesn't really matter. This ROB should be a sufficient counterexample, and doesn't do anything fancy.)

[This comment is no longer endorsed by its author]

If I am following, it seems like an agent which says "bet 'higher' if positive and 'lower' otherwise" does well.

The goal is not to guess which of A and B is higher, but to guess whether the number TABI gives you is higher than the other one.

ROB's choice of order for A and B is irrelevant.

[-]TLW

"Please use spoiler tags liberally" Spoiler please?

 And yes, I misinterpreted the question.

I have a feeling that "computable" is the key word here. Does ROB implement some halting algorithm? If so, then there are finite bounds on A and B, right?

Computability is not important. I only meant to cut off possibilities like "ROB hands TABI 1 and 'the number which is 0 if the Goldbach conjecture is true and 2 otherwise'".

You can restrict yourself to arbitrary integers, and perhaps I should have.
