The way Jaynes says it, it looks like it is meant to be a more general property than something that applies only in the sense of "If two humans are chasing the same thing there is a limited amount of".
In Pearl's "The Book of Why", he mentions that in 2008 a gene was found that has this causal effect (it influences both the risk of smoking and the risk of lung cancer). Of course the effect is much smaller than the direct effect of smoking on cancer; it's just a funny fact.
I'm not reading further because it's long and I can't tell what it is about from the first paragraphs.
> If someone gives you a concrete point-estimate probability of revival, their estimate is automatically untrustworthy.
This goes strongly against probabilistic forecasting. It seems like a wrong principle to me.
> "guess a password on 1st try"
In my life, I have tried to guess a password O(10) times. I succeeded on the first try in two cases. That would suggest this is more feasible than you think.
Here there are two selection effects working against my argument:
However, selection also plays in favor of the hypothetical AI: maybe you are confident you picked your password in a way that makes it unpredictable from public information, but other people are not like that. Overall, on the question "Could it happen at least once that an important password was chosen in a way that made it predictable to an ASI, even assuming the ASI is truly constrained in a box?", I don't feel confident either way right now.
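As a rough sanity check on those numbers (assuming exactly 2 first-try successes in 10 attempts, a uniform prior, and deliberately ignoring both selection effects), Laplace's rule of succession gives:

```python
# Laplace's rule of succession: posterior mean of a success
# probability under a uniform Beta(1, 1) prior.
# The counts below are illustrative, taken from the comment above.
successes, trials = 2, 10
posterior_mean = (successes + 1) / (trials + 2)
print(posterior_mean)  # 0.25
```

Even with heavy discounting for the selection effects, that is far from astronomically small.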
> The reactions I see to his public statements indicate that he is creating polarization.
I had the opposite impression, from this video and in general: that Yudkowsky is good at avoiding polarizing statements, while still not compromising on saying what he actually thinks. Compare him with Hinton, who throws around clearly politically coded statements.
Do you have a control group from which to infer that he's polarizing? I suspect you are looking at a confounded effect.
> If developing interpretable ASIs is beyond us, we might need to strive towards making them extremely difficult to interpret, even for themselves.
Intuitively, I think that if developing interpretable ASI is beyond us, then developing provably-obscure-to-ASI ASI is beyond us too.
I think that simulating a specific slice of spacetime is more Kolmogorov-complex than simulating the entire universe.
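A rough way to formalize the intuition (treating $K$ as Kolmogorov complexity; this is a heuristic sketch, not a theorem):

```latex
% A program for the slice can run the universe's program and then
% extract the region at address $a$, so
K(\text{slice}) \le K(\text{universe}) + K(a) + O(1).
% If the address $a$ is generic (incompressible), the shortest
% description of the slice still has to pay for locating it, so
% heuristically
K(\text{slice}) \approx K(\text{universe}) + K(a) > K(\text{universe}).
```

In words: the laws of the whole universe can be short, but pointing at one particular slice costs extra bits.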
See https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer