I can't help but notice that Transparent Newcomb seems flawed: namely, it seems impossible for the predictor to be very accurate, even if it is capable of perfectly simulating your brain.
Someone who doesn't care about the money and only wants to spite the predictor could precommit to the following strategy:
If I see that the big box is empty, I'll take one box. If I see that the big box is full, I'll take both boxes.
Then the predictor has a 0% chance of being correct, which is far from "very accurate". (Of course, some intervention could force you to choose against your will, but the thought experiment loses its point if you can't enforce your own decisions.)
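The lack of a fixed point can be sketched in a toy model (the function names `agent_choice` and `predictor_is_correct` are just illustrative; I'm assuming the usual convention that filling the big box encodes a prediction of one-boxing):

```python
def agent_choice(big_box_full: bool) -> str:
    """Spite strategy: one-box iff the big box is visibly empty."""
    return "one-box" if not big_box_full else "two-box"

def predictor_is_correct(big_box_full: bool) -> bool:
    # Filling the big box encodes a prediction of one-boxing;
    # leaving it empty encodes a prediction of two-boxing.
    prediction = "one-box" if big_box_full else "two-box"
    return prediction == agent_choice(big_box_full)

# Neither filling nor emptying the box makes the prediction come true:
accuracy = sum(predictor_is_correct(b) for b in (True, False)) / 2
print("predictor accuracy:", accuracy)  # → 0.0
```

Whichever state the predictor puts the box in, the strategy reacts to that state by doing the opposite of what the state predicts, so there is no consistent choice for the predictor.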
Anyway, this is just poking holes in Transparent Newcomb, and it's probably unrelated to the reflexive inconsistency and preference-changing mentioned in the post; I suspect some other thought experiment could arrive at the same conclusions. But I'm curious whether anyone has raised this apparent paradox in Transparent Newcomb before, and whether there's an agreed-upon "solution" to it.