Astronomy PhD Student, University of Cambridge. Interested in AGI Alignment and actually understanding AI.
This scenario seems impossible, as in contradictory / not self-consistent. I cannot say exactly where it breaks, but at least these two statements seem mutually inconsistent:
> today they [Omicron] happen to have selected the number X
and
> [Omega puts] a prime number in that box iff they predicted you will take only the big box
Both of these statements constrain X, and they cannot both always hold. The number cannot both be random and be chosen by Omega/you, can it?
From another angle, the statement
> FDT will always see a prime number
demonstrates that something fishy is going on. The "random" number X that Omicron has chosen, which sits in the box and which my FDT agent sees, is "always prime". Then it is not a random number?
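As a sanity check on that intuition: a uniformly random integer is prime only rarely (about 8% of integers below 10^6, falling as roughly 1/ln N), so a "random" number that is always prime is not random. A quick sketch:

```python
import random

def is_prime(n: int) -> bool:
    """Trial division; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

# Fraction of uniform draws below 10**6 that are prime: ~0.08, not 1.0.
draws = [random.randrange(2, 10**6) for _ in range(50_000)]
print(sum(map(is_prime, draws)) / len(draws))
```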
Edit: See my reply below; the contradiction is that Omega cannot predict EDT's behaviour when Omicron has chosen a prime number. Omega's decision depends (via its prediction) on EDT's decision, and EDT's decision depends on Omega's decision (via the "do the numbers coincide" link). On days where Omicron chooses a prime number, this cyclic dependence leads to a contradiction: Omega cannot predict correctly.
Nice argument! My main caveats are:
* Does training scale linearly? Does it take just twice as much time to get someone from zero to 5 bits (top 3% in the world, one in every school class) as from 5 to 10 bits (one in 1000)? (See the quick conversion after this list.)
* Can we train everything? How much of, e.g., maths skill is genetic? I think there is research on this.
* Skills are probably quite highly correlated, especially skills you want in the same job. What about computer skills / programming and maths / science: are they inherently correlated, or is it just that the same people need both? [Edit: see the point made by Gunnar_Zarncke above for a better version of this argument.]
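For concreteness, the bits-to-rarity arithmetic behind the first point (just the 2^b conversion, nothing deeper):

```python
import math

# b bits of selection correspond to a 1-in-2**b outlier.
for bits in (5, 10):
    rarity = 2 ** bits
    print(f"{bits} bits -> 1 in {rarity} (top {100 / rarity:.1f}%)")

# And the reverse direction, from rarity to bits:
for one_in in (30, 1000):  # one per school class, one in a thousand
    print(f"1 in {one_in} -> {math.log2(one_in):.1f} bits")
```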
That is a very broad description. Are you talking about locating Fast Radio Bursts? I would be very surprised if that were easily possible.
Background: Astronomy/Cosmology PhD student
I think I found the problem: Omega is unable to predict your action in this scenario, i.e. the assumption "Omega is good at predicting your behaviour" is wrong / impossible / inconsistent.
Consider a day where Omicron (randomly) chose a prime number; Omega knows this. Now an EDT agent is on their way to the room with the boxes, and Omega has to put a prime or a composite number into the box, predicting the EDT agent's action.
If Omega puts a prime in the box (i.e. the numbers coincide), then EDT two-boxes; but a prime means Omega predicted one-boxing, so the prediction has failed.
If Omega puts a composite number in the box (i.e. the numbers don't coincide), then EDT one-boxes; but a composite means Omega predicted two-boxing, so the prediction has failed.
Edit: To clarify, EDT's policy is: two-box if Omega's and Omicron's numbers coincide, one-box if they don't.
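To make the cyclic dependence explicit, here is a minimal sketch of the two cases (a toy model, assuming as above that on a prime-Omicron day Omega putting a prime in the box is what makes the numbers coincide):

```python
# Toy model of the contradiction on a day where Omicron chose a prime.
# Assumption (as argued above): on such a day, Omega putting a prime in
# the box is equivalent to the two numbers coinciding.

def edt_action(numbers_coincide: bool) -> str:
    """EDT's policy: two-box if the numbers coincide, else one-box."""
    return "two-box" if numbers_coincide else "one-box"

for omega_puts_prime in (True, False):
    # By construction, a prime in the box means Omega predicted one-boxing.
    prediction = "one-box" if omega_puts_prime else "two-box"
    action = edt_action(numbers_coincide=omega_puts_prime)
    status = "OK" if prediction == action else "PREDICTION FAILS"
    print(f"Omega puts prime={omega_puts_prime}: predicted {prediction}, "
          f"EDT does {action} -> {status}")
```

Both branches print PREDICTION FAILS: whichever number Omega puts in the box, EDT does the opposite of what that number encodes as a prediction.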