orellanin

Hi! In the past few months I've been participating in the Leverage Research/EA discourse on Twitter. There is now a Twitter thread discussing your involvement as throwaway/anonymoose: https://twitter.com/KerryLVaughan/status/1585319237018681344 (with a subthread starting at https://twitter.com/ohabryka/status/1586084766020820992 that discusses anti-doxxing norms and links back to EA Forum comments).

One piece of information that's missing is why you used two throwaway accounts instead of one (and in particular, why you used one to reply to the other, as alleged by Kerry Vaughan in https://twitter.com/KerryLVaughan/status/1585319243985424384). Can you tell me about your reasoning behind that decision?

(If it matters: I am not affiliated with any Leverage-adjacent org, and I am not a throwaway account for a different EA Forum user.)

Sorry, I'm having difficulty parsing the second paragraph here. Who's "he", and who's "we"?

Oof, sorry for the delay!

Yes, it looks like that's it. I didn't realize that once you hardcode all the odd bits as some list L, the hypothesis "all even bits are 0, and the odd bits are L and then all 1s" isn't actually much simpler than the hypothesis "the even bits are length(L) 0s and then all 1s, and the odd bits are L and then all 1s".
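
To spell that out, here's a toy Python sketch of the two hypotheses as generator programs (L and its specific bits are placeholders, not anything from the discussion):

```python
# Toy sketch of the two hypotheses as generator programs. Both must
# hardcode L, so both description lengths are dominated by len(L) bits.

L = [1, 0, 1, 1, 0, 1, 0, 0]  # placeholder for the hardcoded odd bits

def hypothesis_a(n):
    """All even bits are 0; odd bits follow L, then all 1s."""
    if n % 2 == 0:
        return 0
    k = n // 2  # position within the odd-bit subsequence
    return L[k] if k < len(L) else 1

def hypothesis_b(n):
    """Even bits: len(L) 0s, then all 1s; odd bits: L, then all 1s."""
    k = n // 2
    if n % 2 == 0:
        return 0 if k < len(L) else 1
    return L[k] if k < len(L) else 1
```

Since L is hardcoded anyway, hypothesis_b gets the cutoff len(L) for free, so the two programs differ by only a constant number of bits of logic.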

With this confusion out of the way, I'll try to dig deeper into the sequences and then report back what infra-Bayesianism does about this...

I'm not sure what exactly you mean by "fails" here, but I'm pretty sure the Solomonoff prior should be fine at predicting the even bits (in the sense that once you reveal a large number of bits of the sequence, it is overwhelmingly likely that the Solomonoff prior will assign a very high probability that the next even bit is a zero).

Am I simply wrong about how the Solomonoff prior works, or do I just have a lower standard for "success" or "failure" here?
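
One standard way to make "fine at predicting" precise, assuming the sequence really is drawn from some computable measure $\mu$ (e.g. even bits always 0, odd bits fair coin flips), is Solomonoff's error bound for the universal semimeasure $M$, as I remember it from Li & Vitányi:

$$\sum_{n=1}^{\infty} \mathbb{E}_{\mu}\left[\left(M(x_n = 0 \mid x_{<n}) - \mu(x_n = 0 \mid x_{<n})\right)^2\right] \le \frac{\ln 2}{2} K(\mu)$$

The total expected squared prediction error is finite, so $M$'s predictions converge to $\mu$'s, and on even positions $\mu(x_n = 0 \mid x_{<n}) = 1$. Whether anything like this survives when the odd bits are adversarial or uncomputable is, I take it, exactly the point of contention.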

Confusion about what Solomonoff priors can’t do:

  • “Even bits are all zero, odd bits are random”: The Turing machine that writes zero to all even bits and writes some hardcoded string to all odd bits is simpler than the Turing machine that writes one long hardcoded string, so it seems to me that the Solomonoff prior should learn that the even bits are all zero.
    • The discussion there seemed to bleed into "what if the string of odd bits is uncomputable", which I think of as a separate source of confusion, so I'm still not sure exactly what intuition this example is supposed to pump.
  • “Uncomputable priors”: The simplest uncomputable prior I can think of is “the nth bit is 1 iff the nth Turing machine halts”. But the Turing machine that runs the nth Turing machine for 10^10 steps, writes 1 if it halts, and otherwise writes 0 unless n is in some hardcoded list is reasonably simple, so it seems to me that the Solomonoff prior should learn this kind of thing to a reasonable degree (see the sketch after this list).
    • This works for finitely many bits, but eventually the Solomonoff prior won't be able to stay confident about what the next bit is. To me it's not obvious how we could do better than that, though, given that the problem is inherently computationally expensive.
  • Priors like “Omega predicts my action”: I have no idea what a Solomonoff prior does here, but I also have no idea what infra-Bayesianism does. Specifically, I'm not sure whether there's some specific way that infra-Bayesianism learns this hypothesis (and whether it can infer it from observations, or whether you have to listen to Omega telling you that they predict your action).
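
Here's the kind of bounded approximation I have in mind for the "uncomputable priors" bullet, as a toy Python sketch (the machine enumeration and its halting behavior are made-up stand-ins, not a real universal machine):

```python
STEP_BUDGET = 10_000  # stand-in for the 10^10 steps mentioned above

def halts_within(i, budget=STEP_BUDGET):
    """Toy stand-in for 'simulate the i-th Turing machine for `budget`
    steps'. Illustrative dynamics only: machine i halts at step i**2
    if i is even, and loops forever if i is odd."""
    if i % 2 == 1:
        return False
    return i ** 2 <= budget

# Hardcoded exceptions: machines known to halt, but only after the
# budget. Each entry adds roughly log2(i) bits of description length.
EXCEPTIONS = {200, 314}  # hypothetical indices

def predicted_bit(i):
    """Approximation of the i-th bit of the halting sequence: 1 iff
    machine i halts within the budget or is a hardcoded exception."""
    return 1 if (i in EXCEPTIONS or halts_within(i)) else 0
```

The sub-bullet about only working for finitely many bits, in these terms: every late-halting machine has to be appended to EXCEPTIONS, so the description length of the approximating program keeps growing, and past some point the prior can no longer be confident in the next bit.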