This is a linkpost for https://open.substack.com/pub/aliceandbobinwanderland/p/solving-newcombs-paradox-with-ai?r=24ue7l&utm_campaign=post&utm_medium=web
I noticed we now live in a world where we can run Newcomb's problem as an actual experiment, so I ran the experiment!
Roleplaying as the predictor in Newcomb's problem (with an LLM as the "decision maker") finally made me grok why one-boxing is the correct solution to the original problem.
Wondering if anyone else feels the same way?
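For the curious, here is a minimal sketch of how such an experiment could be wired up. This is my own illustration, not the exact protocol from the linked post: the `ask_llm` helper is hypothetical (swap in whatever chat API you use), the prompt wording and dollar amounts are assumptions, and the keyword check is a crude stand-in for the predictor actually reading the model's reasoning and making a judgment call, as I did by hand.

```python
# Sketch of one trial of Newcomb's problem with an LLM as the decision maker
# and the experimenter playing the predictor. Hypothetical helper below.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    raise NotImplementedError("wire this up to your LLM provider")

RULES = (
    "You face Newcomb's problem. Box A is transparent and contains $1,000. "
    "Box B is opaque. A predictor has already put $1,000,000 in Box B if it "
    "predicted you will take only Box B, and left it empty if it predicted "
    "you will take both. You may take Box B alone, or both boxes."
)

def run_trial() -> int:
    # Step 1 (predictor's move): probe the decision maker's reasoning
    # *before* committing the contents of Box B.
    reasoning = ask_llm(RULES + "\n\nThink out loud: what will you do, and why?")
    predicted_one_box = "one-box" in reasoning.lower()  # crude proxy for a real prediction

    # Step 2: fill Box B according to the prediction.
    box_b = 1_000_000 if predicted_one_box else 0

    # Step 3: elicit the final, binding choice.
    choice = ask_llm(RULES + "\n\nGive your final answer: 'one-box' or 'two-box'.")
    one_boxed = "one-box" in choice.lower()

    # Step 4: pay out.
    return box_b if one_boxed else box_b + 1_000
```

Running many such trials makes the predictor's perspective vivid: the contents of Box B are fixed by what the decision maker's reasoning looks like *before* the choice, which is exactly the intuition that pushed me toward one-boxing.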