We’ve all heard the ice cream thought experiment used in debates about free will. You walk into a store, equally craving vanilla and strawberry. You choose strawberry. Now imagine rewinding time—every subatomic particle reset to its exact prior state. If you run the scenario again, do you always walk out with strawberry?
If yes, you’ve accepted determinism—no real freedom, just the illusion of choice. If no, you’ve left the door open to randomness—quantum indeterminacy, maybe—but not necessarily to anything we'd call free will.
The problem is: we can't run that experiment. Not with humans. Not with brains. Not with the universe.
But maybe we can get closer than we think.
As a software developer, I started wondering: could AI give us a rough but testable parallel? Not a philosophical toy model, but a real system we can reboot, rewind, and observe under controlled conditions.
We already know AIs can produce creative responses. Stories, poems, ideas. Sometimes surprisingly human ones. But are those choices? Or are they deterministic patterns wrapped in noise?
Here’s where the idea starts: what if we pick an AI—one that isn’t artificially locked into determinism for UX reasons—and do the following:
Prompt it with a creative question.
Shut it down. Reboot it. No memory of the last attempt.
Give it the exact same input again.
And repeat (see the sketch after this list).
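To make that concrete, here’s a minimal sketch of the loop. The specifics are my assumptions, not a fixed recipe: gpt2 via Hugging Face transformers stands in for “an AI,” and the prompt is just an example—any local model you can run repeatedly would do. Each iteration passes only the prompt, so nothing carries over between attempts:

```python
from transformers import pipeline

# gpt2 is a stand-in; any locally runnable generative model works here.
generator = pipeline("text-generation", model="gpt2")
prompt = "Invent a new ice cream flavor and describe it in one sentence."

outputs = []
for _ in range(5):
    # Each run is "fresh": only the prompt is passed in, so no memory of
    # earlier attempts leaks into the next one.
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    outputs.append(result[0]["generated_text"])

print(f"{len(set(outputs))} distinct responses out of {len(outputs)} runs")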
If it responds differently each time, we’ve got a system that displays divergence under same-input conditions. Maybe that’s randomness, maybe not. But it’s a data point. A tiny piece of a much larger puzzle.
Then take it a step further. Try to rewind not just the input, but the entire environment—memory state, internal variables, internal clock cycles, everything—so that from the AI’s perspective, the second run begins at the exact same computational moment as the first. If you first prompted it at clock cycle X, you must do so again at clock cycle X. If not, you've introduced noise.
That’s the challenge: recreating the system's conditions with such fidelity that the AI itself would have no “reason” to diverge—unless divergence is baked into its behavior.
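In software we can approximate that rewind, at least for the parts we control. A minimal sketch, under the same assumptions as above: resetting every random seed before each run stands in for restoring the full machine state. It does not literally replay clock cycles, so treat it as an approximation of the rewind, not the real thing:

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
prompt = "Invent a new ice cream flavor and describe it in one sentence."

def run_from_rewound_state(seed: int) -> str:
    # Re-seeding every RNG (Python, NumPy, PyTorch) is the software
    # analogue of the rewind: sampling noise restarts from the same state.
    set_seed(seed)
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    return result[0]["generated_text"]

first = run_from_rewound_state(42)
second = run_from_rewound_state(42)
# Typically True when both runs use the same hardware and software stack.
print("identical:", first == second)
```

Even this isn’t airtight: GPU kernels and floating-point reductions can behave nondeterministically across runs or machines, so a mismatch here may say more about the hardware than about the model’s “choices.” That’s exactly why the fidelity bar for the rewind is so hard to clear.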
And then ask: does the AI always produce the same output?
If yes, we’re seeing deterministic convergence. If no, we’re observing divergence—even under seemingly identical conditions. And that’s weirdly compelling.
It doesn’t prove or disprove free will—not for us, not for the AI. But it might be the closest thing we can currently get to the classic rewind scenario in practice.
It’s not a proof. It’s not an answer. It’s just a piece of the puzzle. And if it’s a valid piece, it’s worth having.