Newcomb's Paradox at high stakes, and what happens when Box B contains revelation instead of money
I think the standard formulation of Newcomb's Paradox hides two problems that the decision-theory literature has largely declined to examine. First, the predictor's motivation for administering the test is stipulated rather than explained. The problem is formally underdetermined about why this being exists and why it keeps running this experiment. Second, when you inflate the value of the transparent box to generate a genuine temptation to defect, the credence threshold required to sustain one-boxing rises toward absolute certainty, a level that finite inductive inference cannot warrant. My claim is that a biblical reframing resolves both problems by changing the predictor's motive and the nature of the reward, in ways the secular version cannot replicate. I'll first put pressure on the standard setup, then extract what any successful rewrite would need, and only then introduce John 7. This is a decision-theory post that uses John 7 as a worked example, not a sermon.
Epistemic status: Cross between decision theory, philosophical theology, and constructive interpretation. Argued here: standard Newcomb has a motivation gap; it hits an epistemic ceiling at high stakes ("every one-boxer has a price"); John 7 provides a structurally tight candidate mapping once the utilities are re-specified. Suggested but not fully established: revelation is a more coherent divine utility than cash; consecration is a sharper operational condition than a vague "religious temperament." Sequel territory: Genesis 2 and the two-tree architecture; eschatological conditions under which C = 1 becomes defensible. I am not assuming the Bible has special authority. I am arguing that its model supplies structural features the secular version lacks.
The puzzle everyone knows
A being of extraordinary predictive accuracy (Omega) places two boxes before you. Box A is transparent and contains $1,000. Box B is opaque and contains either $1,000,000 or nothing. Omega has already predicted whether you will take both boxes or only Box B and has loaded the boxes accordingly. If it predicted one-boxing, Box B contains the million. If it predicted two-boxing, Box B is empty. You choose.
The Dominance Principle says take both: the contents are fixed, so you get $1,000 more regardless. The Expected Utility Principle says take one: the one-box strategy produces dramatically higher expected returns given the predictor's track record. Smart people have split on this for decades, and each camp tends to regard the other as confused.
I don't want to relitigate which decision theory wins. I want to ask whether the setup itself stays coherent once you let the stakes climb and ask why Omega is running this test at all.
The hidden problem with Omega
Standard presentations of Newcomb's Paradox treat Omega's behavior as fully specified by stipulation: Omega predicts, Omega loads boxes, done. Omega's motives are exogenous to the problem. Cargile (1975) exemplifies this posture: he asserts that the predictor's nature and purpose are irrelevant to the formal decision structure. LessWrong discussions generally adopt the same stance, treating Omega as a being whose behavior is defined entirely by stipulation, with no explanatory requirement placed on its motivations.
I think this leaves the predictor-agent relationship formally underdescribed. A test without an intelligible motive for the tester is a test whose structure remains underdetermined, however precisely the payoffs are specified. And this underdescription infects what it even means to trust Omega at substantial stakes. Trust in what? For whose ends? If Omega already knows what you'll do, the test generates no new information for Omega. If Omega is indifferent to your welfare, the test is a parlor trick. If Omega has a purpose, then that purpose constrains what counts as a coherent reward, and the standard formulation never addresses this. In other words: what does Omega want, and why?
Two prior theistic engagements got partway. Horne (1983) argued that religiously inclined respondents gravitate toward one-boxing because they recognize in the predictor's pattern of rewards an offer of cooperative relationship. In his view they're responding to a person, not optimizing against a mechanism. That's a genuine insight. But Horne left the boxes filled with cash, which means even after identifying the predictor with God, the divine-human relationship remains fundamentally transactional. Craig (1987) identified the predictor with God and showed that divine foreknowledge vindicates one-boxing, but he too kept the reward monetary. Both got halfway: they changed the identity of the predictor without changing the structure of the test.
I am not claiming that standard Newcomb collapses because Omega lacks a backstory. I am claiming there is a live structural underspecification that matters more as we raise the stakes and ask what could justify stable one-boxing at the limit.
Every one-boxer has a price
Let A be the visible amount in Box A, B the amount in Box B if filled, and C your credence that Omega predicts correctly. The expected utility equations for the standard Newcomb problem are:
EU(two-box) = C·A + (1−C)·(A+B)
EU(one-box) = C·B + (1−C)·0
Setting these equal and solving for C gives the closed-form threshold C = (A + B) / (2B). At the standard values (A = $1,000, B = $1,000,000) this yields a threshold credence of C = 0.5005. The one-box strategy is justified at a confidence barely above a coin flip. This is why the standard formulation feels tractable. The math is easy. One-boxing looks solid.
Now inflate Box A.
When Box A contains $500,000, the threshold rises to C = 0.75. The agent must forgo a guaranteed, life-altering sum. This would be the difference between financial precarity and security, between debt and ownership. And the agent is expected to one-box on the strength of 75% confidence in a being they know only through finite observation? The marginal utility of the first $500,000 is enormous. The marginal utility of the next $500,000 is categorically smaller.
Push Box A to $900,000 and the threshold rises to C = 0.95. Push it to $999,000 and the required credence reaches C = 0.9995. The agent must believe the predictor is correct 99.95% of the time to justify leaving $999,000 on the table.
At the limit, as the value of Box A approaches the value of Box B, the required credence approaches C = 1.
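To make these thresholds easy to check, here is a minimal sketch in Python (my own illustration, not from any prior formulation; the function name threshold_credence is invented) that computes the break-even credence from the two expected-utility equations above:

```python
def threshold_credence(a: float, b: float) -> float:
    """Break-even credence where EU(one-box) = EU(two-box):
    C*B = C*A + (1 - C)*(A + B)  =>  C = (A + B) / (2 * B)."""
    return (a + b) / (2 * b)

B = 1_000_000
for A in (1_000, 500_000, 900_000, 999_000):
    print(f"Box A = ${A:>7,}: required credence C = {threshold_credence(A, B):.4f}")
# Box A = $  1,000: required credence C = 0.5005
# Box A = $500,000: required credence C = 0.7500
# Box A = $900,000: required credence C = 0.9500
# Box A = $999,000: required credence C = 0.9995
```

The closed form makes the asymptote visible: the threshold is linear in A and reaches 1 exactly when A = B.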
This reveals something important about the stability of the one-boxing disposition under secular conditions. It is not enough to observe that one-boxing is rational at C = 0.5005. The question is whether the disposition holds as the stakes increase. And under the secular formulation, it can always be defeated. Every one-boxer can be converted into a two-boxer by sufficiently inflating Box A. The disposition that appeared stable at 50.05% collapses as the guaranteed payoff rises.
The secular one-boxer and the two-boxer are not different kinds of agents. They are the same agent at different price points.
This is not about greed
I want to be precise about what the previous section does and does not show. The argument is not "people get greedy at high stakes." That would be a psychological observation, and an uninteresting one. The argument is epistemic.
On what basis could a secular agent hold a credence of 0.9995 in an impersonal predictor? The predictor in the standard formulation is a supercomputer, an alien intelligence, or a being defined entirely by a track record. The agent's confidence is necessarily derived from finite statistical induction: they have observed N previous trials, all correctly predicted, and from this they infer future accuracy. But inductive inference from a finite sample, however large, cannot rationally ground a credence of 99.95% when the cost of being wrong is the forfeiture of a guaranteed, life-altering sum. Track records establish probability. They do not establish the kind of certainty the stakes demand.
This is the familiar Humean limitation reappearing inside Newcomb's Paradox as you push Box A upward. More datapoints reduce the residual uncertainty but do not eliminate it. The epistemic ceiling is not cured by a larger sample. It is structural. The instability of high-stakes secular one-boxing is a problem about what finite induction can warrant, not a quirk of human psychology.
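To put a number on the ceiling, here is a sketch under one deliberately simple model that I am supplying (Laplace's rule of succession with a uniform prior; the post itself does not commit to any particular model of induction). Under it, N flawless trials warrant a credence of (N + 1) / (N + 2) in the next prediction:

```python
def laplace_credence(n: int) -> float:
    """Posterior that the next prediction is correct after n flawless
    trials, assuming a uniform prior over the predictor's accuracy."""
    return (n + 1) / (n + 2)

def trials_needed(target: float) -> int:
    """Smallest n such that laplace_credence(n) >= target."""
    n = 0
    while laplace_credence(n) < target:
        n += 1
    return n

for c in (0.5005, 0.75, 0.95, 0.9995):
    print(f"C = {c}: {trials_needed(c):>5,} flawless trials needed")
# C = 0.5005:     1 flawless trials needed
# C = 0.75:       2 flawless trials needed
# C = 0.95:      18 flawless trials needed
# C = 0.9995: 1,998 flawless trials needed
```

The exact counts are artifacts of the toy prior; the structural point is that, whatever the model, the warrant for C = 0.9995 routes through inductive assumptions (exchangeability, stable accuracy) that the high-stakes choice forces the agent to lean on.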
The next question is what sort of predictor-agent relationship could, even in principle, justify that kind of credence.
What kind of Omega could survive high stakes?
If the motivation gap and the epistemic ceiling are real problems, they jointly define a design brief for any version of Newcomb's Paradox that could support stable one-boxing at arbitrarily high stakes. Such a version would need:
A predictor whose motive for running the test is intelligible to the agent, not stipulated away but part of the problem's structure.
Utilities in the boxes that fit that motive, not arbitrary cash amounts but rewards coherent with what the predictor is trying to accomplish.
A principled account of why Box B is empty for two-boxers, not arbitrary punishment but a structural consequence of the agent's disposition.
A basis for the agent's trust that is categorically different from statistical induction about an impersonal mechanism.
Notice what this implies. If the predictor is personal, and especially if it is divine, then its purpose is plausibly relational or pedagogical, and such a being is poorly modeled as offering larger piles of money to trusting agents. A better candidate reward would be an epistemic good: something the predictor can offer that is constitutively connected to the kind of relationship it is trying to establish.
I want to propose a specific candidate and then show that it maps onto an existing textual model with surprising precision.
Rewriting the boxes: autonomy versus revelation
Before introducing the worked example, let me define the rewritten utilities in operational language.
Box A = intellectual autonomy. The subjective utility of resolving a question on one's own terms and being satisfied with that result. This includes the satisfaction of being right, the social credibility of having a defensible position, and the epistemic comfort of certainty arrived at through one's own reasoning. This is a genuine good. People value it, and they should. The capacity to reason independently is not a defect.
Box B = revelation. Knowledge not produced by the agent's own deductive route, but received from the predictor under a different epistemic condition. This is not "information" in the usual sense. Rather, it is knowledge that requires a specific prior orientation of the agent's will in order to be received as what it is, rather than co-opted as raw material for the agent's own projects.
In other words, these two goods do not always flow together. If I insist that every question must be settled by my own analysis before I act, I may keep my sense of control, but I forgo the revelation that can only be received from the Predictor. I remain in the realm of what I can see, weigh, and manage. Revelation belongs to another realm: it does not arrive as the prize at the end of my reasoning; it comes as a gift from the Predictor. The parallel holds in the standard formulation, where the million dollars also comes only from Omega; the agent cannot generate it by reasoning harder. In the version I'm proposing, the agent cannot generate this specific revelation either. This is not a claim about irrationality. It is a claim about epistemic trade-offs.
John 7 as a worked Newcomb case
The text that maps onto this structure with the most precision is the narrative of John 7.
At the Feast of Booths in Jerusalem, Jesus teaches publicly in the temple courts. The crowd divides. Some think he may be the Messiah. Then a faction produces what they regard as a decisive refutation: "How can the Messiah come from Galilee? Does not Scripture say that the Messiah will come from Bethlehem?" (John 7:41–42). This is a textually grounded, logically valid argument from the prophet Micah. They have read their Bible carefully. Their conclusion follows correctly from their premise. The only problem is that the premise is factually incomplete: Jesus was born in Bethlehem. (For present purposes I am treating the Johannine narrative at face value as a model; the argument does not depend on resolving historical-critical disputes.) But they don't know that. And their confidence is not diminished by their ignorance. They walk away satisfied with their certainty.
In the terms of our reframed Newcomb setup, the crowd took both boxes.
Box A: the satisfaction of having a logically valid scriptural argument that settles the question on their own terms.
Box B: the revelation that Jesus is in fact the Messiah. This box was empty, not because God decided to punish their curiosity, but because the content of Box B is constitutively incompatible with the disposition that reaches for Box A. It is only ever available to those whose prior orientation makes them capable of receiving the revelation.
The decision rule that formalizes this structure appears earlier in the same chapter. John 7:17: "If anyone resolves to do His will, he will know concerning the teaching, whether it is of God or whether I speak from Myself."
This verse establishes a strict epistemic sequence:
The resolution: a prior, unconditional commitment of the will to do God's will before any specific test presents itself, and before the agent knows what God's will requires in this particular situation.
The revelation: the subsequent granting of divine revelation, the discernment of whether the teaching is from God.
Commitment precedes knowledge. The resolution of the will is the condition under which knowledge is granted, not the result of knowledge already possessed. This is the precise structure of the one-box strategy: the disposition must be settled before the boxes appear.
The theological term for this prior commitment of the will is consecration: the unconditional dedication of one's will to God's purposes before knowing what those purposes will require. This is what Horne's (1983) vague "religious temperament" was reaching for but never specified. Consecration is not a feeling or a general spiritual inclination. It is a specific structural precommitment. If you like, consecration is a kind of precommitment operator applied to the will rather than to a policy over monetary payoffs. But the difference from standard decision-theoretic precommitment is significant, and it is worth making explicit.
Consecration versus instrumental precommitment
The distinction from Functional Decision Theory matters.
The FDT agent commits to a one-boxing policy because that policy maximizes expected utility across the class of problems where the agent's disposition is detected by the predictor. The commitment is instrumental: it is adopted because it wins. If an FDT agent were convinced that one-boxing no longer maximized expected utility across the relevant class of problems, they would revise the policy. The disposition is a tool in service of the agent's existing preference structure.
The consecrated agent commits to a disposition for a different reason: not because it maximizes outcomes, but because the agent has resolved that the predictor's will is the governing good, prior to and independent of any calculation of outcomes. The commitment is relational rather than instrumental. It is directed toward a person, not toward a payoff matrix.
This is why the consecrated agent's disposition does not collapse when Box A is inflated. The agent is not weighing the visible contents of Box A against the probability of Box B. The agent has already determined that Box A, however attractive its contents, does not contain the kind of good for which Box B can be sacrificed. The inflation of Box A changes the temptation. It does not change the structure of the commitment.
The FDT agent and the consecrated agent may both one-box at standard stakes. The difference is visible only when Box A is inflated to the point where the FDT agent's expected utility calculation flips. At that threshold, the FDT agent rationally two-boxes. The consecrated agent does not, because the basis of the commitment was never the calculation. This is the deeper sense in which "every one-boxer has a price" diagnoses a feature of secular, outcome-driven dispositions: once the numbers cross the right threshold, instrumental one-boxing rationally flips.
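To make the contrast concrete, here is a toy sketch (my own illustrative model, not a serious implementation of FDT; the function names are invented): the instrumental policy is a function of the payoffs and the credence, while the consecrated policy ignores those inputs entirely.

```python
def instrumental_policy(a: float, b: float, c: float) -> str:
    """One-box iff it maximizes expected utility; flips once the
    visible amount a exceeds the break-even point (2c - 1) * b."""
    eu_one = c * b
    eu_two = c * a + (1 - c) * (a + b)
    return "one-box" if eu_one >= eu_two else "two-box"

def consecrated_policy(a: float, b: float, c: float) -> str:
    """Disposition settled before the boxes appear; payoffs unused."""
    return "one-box"

C = 0.95  # high but finite inductive credence
for A in (1_000, 900_000, 999_000):
    print(f"A = ${A:>7,}: instrumental -> {instrumental_policy(A, 1_000_000, C)}, "
          f"consecrated -> {consecrated_policy(A, 1_000_000, C)}")
# A = $  1,000: instrumental -> one-box, consecrated -> one-box
# A = $900,000: instrumental -> one-box, consecrated -> one-box  (break-even)
# A = $999,000: instrumental -> two-box, consecrated -> one-box
```

The point of the sketch is only that the two dispositions diverge exactly where Box A crosses the instrumental agent's break-even line, and that the consecrated policy has no such line.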
How the reframing hits both original problems
To connect back explicitly to the structural problems identified in the first half of this post:
The motivation gap closes. Once Box B is revelation conditioned on consecration, the predictor's test has an intelligible purpose. The test exists because revelation is, by its nature, available only to a specific kind of agent, one whose will has been unconditionally oriented toward the predictor before the test arrives. The predictor is not running a parlor trick. The predictor is selecting for a disposition that makes reception possible. And the biblical model is iterative: the two-boxer of John 7 can become the one-boxer of 1 Corinthians 15:7. This is why I interpret the test as pedagogical rather than merely classificatory. The purpose is not permanent sorting but progressive transformation.
The epistemic ceiling is addressed in principle. When the basis for trust shifts from statistical induction about an impersonal mechanism to relational knowledge of the predictor's character, the credence asymptote changes category. I am not arguing here that C = 1 is achievable. That is a larger claim involving eschatological considerations I will develop separately. But the framework at least permits a categorically different epistemic ground for high credence: not "I have observed N correct predictions" but "I know this person and what they are like." Whether relational knowledge can warrant C = 0.9995 is an open question, but it is at least the right kind of warrant for the job, which statistical induction is not.
The inflation resistance holds. Because the good in Box B is non-fungible, revelation cannot be priced against the contents of Box A, whatever they are. Thus raising the visible value of Box A does not destabilize the consecrated agent's disposition in the way it destabilizes the secular one-boxer's. The secular agent's price point shifts because the same currency is in both boxes. The consecrated agent's commitment is to a different kind of good entirely.
I am not claiming this is the only way to meet the design brief from earlier. I am claiming it is at least one fully specified example that satisfies all four requirements.
What this post does and does not prove
I want to be explicit about levels of confidence, because this argument is part of a larger project and LessWrong readers deserve to know where the borders are.
Argued here: The standard secular formulation of Newcomb's Paradox has a motivation gap that is papered over by stipulation. Inflating Box A creates an epistemic ceiling that exposes secular one-boxing as fragile at high stakes. The John 7 narrative maps onto a one-boxing structure with more precision than one might expect. John 7 specifies a concrete epistemic sequence, a concrete failure mode, and a built-in reason for Box B's emptiness.
Suggested but not fully established: Revelation is a more coherent utility than cash in a divine version of the paradox. Consecration, as defined from John 7:17, is a sharper and more operational specification of the one-boxing disposition than Horne's "religious temperament."
Sequel territory: The conditions under which C = 1 becomes rationally defensible are a separate topic, as is the claim that Genesis 2 and its two-tree architecture present the primordial version of this same choice structure. I suspect this goes deeper, but that is not something I'm ready to argue at this time.
What would change my mind about the core argument?
A coherent account of why a secular agent should hold C > 0.999 in an impersonal predictor on inductive grounds alone, or a demonstration that the motivation gap is genuinely irrelevant to the decision structure in a way I haven't considered. Alternatively, a non-theistic utility that could play the same structural role as revelation. It should meet the design brief of “autonomy versus revelation” without requiring a personal predictor. This would significantly weaken my claim that the theological move is necessary rather than merely interesting.
Strongest objections
"Omega's motive is irrelevant. Newcomb is about decision structure, not teleology." This is the strongest version of the objection and I take it seriously. The counterargument is that the stability of the one-boxing disposition depends on the agent's epistemic relationship to the predictor, and that relationship depends on facts about what kind of being the predictor is. My claim is not that every decision problem must specify motives, but that once you ask for stable high-stakes one-boxing, the missing motive and the epistemic ceiling become the same problem. You can stipulate away the predictor's motive, but if you do, you also stipulate away the epistemic ground that would sustain one-boxing at high stakes. If someone can show me a version of Newcomb where the predictor's motive is genuinely irrelevant to the agent's rational credence at C > 0.999, I would need to substantially revise this argument.
"You're redefining the utilities until your theology wins." Partially fair. What I'm doing is proposing a redefinition of the utilities and arguing that the redefinition resolves structural problems the original formulation can't. Whether you find this persuasive depends on whether you think the motivation gap and epistemic ceiling are real problems or just features of the thought experiment that don't need solving. I think they're real. But I acknowledge that the theological redefinition is doing load-bearing work, and you should be suspicious of that. The question I'd put back is: if you think the motivation gap doesn't matter, can you explain why a predictor with no intelligible purpose should command near-perfect credence? If someone can show a non-theological utility that meets the same constraints, that would count as progress, not defeat.
"John 7 is an illustration, not a formal model." My response is that the mapping is more precise than "mere illustration." John 7 doesn't just vaguely resemble Newcomb's Paradox, it specifies the following:
A concrete epistemic sequence: will-commitment preceding knowledge.
A concrete failure mode: the Bethlehem argument as Box A.
A concrete structural reason for Box B's emptiness: incompatibility between the grasping disposition and the nature of what's offered.
Whether that's "formal" enough is partly a question about what formality requires. But it's not free association. I would be interested to see someone identify exactly where the mapping breaks down. If you think it fails, I'd be grateful for a pointer to the earliest step where you think the structure diverges.
"This is only persuasive if you already grant a Christian framework." I don't think so, though I understand why it looks that way. The motivation-gap and epistemic-ceiling arguments are entirely secular. They work on the standard formulation without any theology. The John 7 reframing is offered as a candidate resolution, not as a proof that Christianity is true. The question is whether the biblical model supplies structural features that the secular model lacks: a motivated predictor, a specific disposition, a non-fungible reward. You can evaluate that question without accepting the Bible's authority.
"The Genesis extension is smuggled in without argument." Correct that it is gestured at without full argument, that's why it is listed as sequel territory. If the core arguments here don't hold up, the Genesis extension is moot anyway. I mention it only to be honest about the larger project, not to claim credit for work I haven't done yet.
Cruxes and feedback request
The primary crux is whether Newcomb's setup can remain fully intelligible while treating Omega's motive as irrelevant. If it can, then the motivation gap is a non-problem and my argument for a teleological rewrite weakens considerably. If it can't, then specifying Omega's motive is a structural requirement, and some teleological rewrite is needed, theological or otherwise. My further claim is that the John 7 model is one unusually precise way to supply that teleology.
The secondary crux is whether the "every one-boxer has a price" argument demonstrates a genuine epistemic ceiling or merely a psychological instability. If it's only psychology then a sufficiently rational agent could sustain one-boxing at C = 0.9995 purely on inductive grounds. If so, the motivation for the theological move weakens, because the secular setup could in principle support robust one-boxing without changing Box B. I think the argument is epistemic, because the problem of induction is a problem about warrant, not about feelings. But I want to hear the strongest case that it isn't.
I'd most value feedback on these specific questions:
Does the motivation-gap argument fail in a way I'm not seeing?
Is there a flaw in the "every one-boxer has a price" derivation or its interpretation?
Is John 7 doing genuine structural work, or is it interchangeable flavor text that could be swapped for any narrative?
Where exactly does the theology get smuggled into the formalism, if you think it does?
Can you exhibit a secular Omega and utility specification that meets the same design brief?
I suspect the deeper structure here scales back to Genesis 2 and the two-tree architecture, but that is a separate argument for a separate post. For now, I'm offering this as a candidate model to be stress-tested, not a finished theory.