This is a linkpost for https://irreverentadverbs.substack.com/p/how-much-does-it-cost-to-ask-claude
Imagine you had an interesting question[1], something juicy enough that you’d want an AI to really chew on it before answering. You open Claude, but before you even type anything, you’re already making decisions: Opus or Sonnet? Extended thinking on or off? Each choice implies a tradeoff between intelligence and cost. Intelligence is hard to quantify, but cost is straightforward.
So how much are you actually saving by switching to Sonnet? And what does that “extended thinking” toggle cost you in practice?
I ran a small experiment to find out.
Method
Claude users get a rolling usage allowance measured in five-hour sessions. The settings page at claude.ai/settings/usage displays a “current session” tracking bar as a percentage. This works as a basic instrument for measuring relative inference cost across configurations.

The procedure was straightforward. Before each query, I recorded the current session percentage:
Then I launched the query, waited for the full response, and recorded the new percentage:
The delta between the two readings is the cost of that single exchange, expressed as a fraction of one session’s total budget.
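The measurement can be sketched as a small helper. This is a hypothetical illustration, not code from the experiment; `query_cost` and its dollar conversion (which relies on the $1-per-session assumption from footnote [2]) are mine.

```python
def query_cost(before_pct: float, after_pct: float,
               session_budget_usd: float = 1.0) -> float:
    """Cost of one exchange, in dollars, from two usage-bar readings.

    session_budget_usd is an assumption (see footnote [2]); Anthropic
    does not publish the dollar value of one session's budget.
    """
    delta = after_pct - before_pct          # percentage points consumed
    return (delta / 100.0) * session_budget_usd

# Example: a reading of 12% before and 14% after is a 2-point delta,
# i.e. two cents under the $1-per-session assumption.
print(query_cost(12, 14))
```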
All readings were noted contemporaneously in a Notion page. To check repeatability, I ran queries in both a standard Chat window and a Cowork instance.
The prompt, held constant across all six trials, was the following:
This gives the model room to flex: probability estimation, calibration language, speculative reasoning, hedging, tool use.
Results
Artifact Link
The range spans from 2% (Sonnet without Extended Thinking) up to 7% (Opus with Extended Thinking). Assuming one full session’s budget corresponds to roughly a dollar of inference cost[2], this means a single interesting question costs anywhere from about two cents on the frugal end to seven cents on the expensive end. Split the difference and call it five cents on average.
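The conversion from percentage deltas to cents is just arithmetic, sketched below. Only the two endpoint deltas (2% and 7%) come from the measurements; the $1-per-session figure is the assumption from footnote [2], not a published price.

```python
SESSION_USD = 1.00  # assumed dollar budget of one five-hour session

# Endpoints of the measured range, in session-percentage points:
deltas_pct = {
    "Sonnet, no extended thinking (low end)": 2,
    "Opus, extended thinking (high end)": 7,
}

for config, pct in deltas_pct.items():
    cents = pct / 100 * SESSION_USD * 100  # percentage points -> cents
    print(f"{config}: ~{cents:.0f}¢ per query")
```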
Extended thinking is the primary cost driver, and its effect is more pronounced with Opus than with Sonnet. Opus with extended thinking consumed 3x to 3.5x the budget of the same model without it. Sonnet’s extended-thinking overhead was more modest, adding roughly 1x to 1.5x on top of its baseline. Without extended thinking, Opus and Sonnet were indistinguishable at this level of measurement precision.
Conclusion
Compare the following two answers:
The two cent result (Sonnet)
https://claude.ai/share/59931ef2-9e4b-4939-a30b-bf1bb5e78d02
Good. I have enough to work with. Let me build this out carefully.
This is a genuinely interesting problem that requires separating the baseline actuarial case from the exotic-scenario adjustments and then weighting them by probability. I'll walk through each layer.
Baseline Actuarial Case
You need to reach January 1, 2100, which is about 74 years from now. That means surviving to age 107.
Male life expectancy at birth in the U.S. was 74.8 years as of 2022. NCBI That's the birth-cohort figure, but you've already survived to 33, which eliminates most of the early-death risk. The relevant number is conditional life expectancy given survival to 33. From SSA period tables, a 33-year-old American male has a remaining expected lifespan of roughly 45-46 additional years, putting the median death around age 78-79. Reaching 107 sits roughly 3.5 standard deviations above the current mean for your cohort (using a rough SD of ~15 years for the lifespan distribution).
In a static-mortality world, surviving to 107 has a probability of well under 1%. The current oldest-verified human ever (Jeanne Calment) died at 122, and the longest verified human lifespan is around 122 years Wikipedia, so 107 is biologically plausible but deeply tail-weighted. A conservative estimate under static mortality: roughly 0.5-1.5%, with the wide uncertainty band coming from the fat tail of the lifespan distribution and the fact that your individual characteristics (unknown to me) shift this considerably.
But static mortality is the wrong frame for this question.
The Secular Trend Adjustment
Mortality rates have been declining for over a century. In the 20th century, life expectancy increased roughly three months for every year lived New Zapiens, a trend that, if it continued linearly, would meaningfully shift the distribution for someone your age. The SSA's own cohort projections (which incorporate expected future mortality improvements, not just current rates) put male life expectancy for your birth cohort several years higher than the period table suggests.
Even under conservative continuation of historical trends with no exotic breakthroughs, the tail probability of reaching 107 improves from the baseline. Call this the "secular drift" scenario: mortality improvements continue at roughly historical rates through 2100. This pushes the probability upward to perhaps 2-4%, reflecting that you'd be dying in a world with substantially better medicine than today, even without any transformative event.
The Exotic Scenarios
Here's where the analysis gets genuinely uncertain, and where you need to reason about scenarios rather than distributions.
Scenario A: Longevity Escape Velocity (LEV) achieved before you age out (~2035-2055)
De Grey predicts a 50% chance humanity reaches longevity escape velocity within the next 12 to 15 years. CEO Today Kurzweil predicts this tipping point could happen by 2029. Biohackingnews George Church answered "I wouldn't be surprised if 2050 would be a point" when asked about reaching LEV. Wikipedia Mainstream gerontologists are much more skeptical. Most mainstream gerontologists raise an eyebrow and say "Come back in the 22nd century — biology is messy." University of Edinburgh
The critical variable for you is not just whether LEV gets achieved, but whether it gets achieved while you're still young enough to benefit. At 33 now, you'll be 50 in 2043 and 67 in 2060. If LEV arrives by 2045, you're 52 and in reasonable shape to benefit. If it arrives in 2075, you might already be in steep decline or dead.
As of early 2026, no trials have yet demonstrated sustained escape velocity-like gains, and regulatory hurdles persist. Grokipedia Mice live longer under certain interventions, but translation to humans has yet to succeed. Funding and research speed are increasing, yet definitive evidence is missing. ASPRIVA
The honest prior on LEV arriving in time for you to benefit (say, by 2055) is somewhere between 5% and 20%, depending on how much weight you put on de Grey-style optimism vs. mainstream gerontology. Call it 10% as a central estimate, with a 2x uncertainty band in either direction.
If LEV arrives, your probability of reaching 2100 goes from a floor of ~1% to something approaching certainty for the non-accident portion of your lifespan. The residual mortality under LEV would come from accidents, violence, and non-aging diseases. Life insurance actuaries studying similar scenarios estimate "background mortality" of roughly 0.3-0.7% per year from non-aging causes under modern conditions, which compounds to a non-trivial probability of death over 74 years even in a world without biological aging. At 0.5% annual background mortality, your probability of surviving 74 accident-free years is about 69%.
So: P(LEV in time) × P(survive to 2100 | LEV) ≈ 0.10 × 0.70 = 7% from this scenario alone.
Scenario B: AI-driven medical acceleration without full LEV
Even without achieving escape velocity, AI may compress drug discovery timelines dramatically. The protein folding problem was intractable for decades until suddenly solved by AlphaFold as one of the very early fruits of current AI progress, and the massive complexity and immense data sets of biology are well suited to being understood. NextBigFuture A scenario where AI adds 10-20 years to your effective lifespan without hitting LEV is plausible and perhaps more likely than full LEV. This would push your effective death distribution out by 1-1.5 standard deviations, more than doubling the tail probability of reaching 107. Probability of this scenario: ~25-35%. Conditional probability of reaching 2100 given this scenario: perhaps 5-10% (you'd still need to be in the right part of the tail). Contribution: roughly 1.5-3%.
Scenario C: Civilizational disruption
The flip side is scenarios that reduce your probability below baseline: great-power war with nuclear exchange, engineered pandemic, climate destabilization severe enough to collapse healthcare infrastructure, or an AI transition that goes poorly. These are not negligible risks over a 74-year horizon. A rough aggregation of catastrophic risk over 74 years from credible sources puts something like a 5-15% chance of civilization-scale disruption bad enough to materially reduce your survival odds. Call the mortality penalty from these scenarios large enough to subtract 1-2% from the total.
Scenario D: Whole-brain emulation or other radical technology
I'll acknowledge this category exists and assign it a small weight (~2-5% probability of relevant deployment in your lifetime), but the conditional probability of "you specifically benefit" under this scenario depends so much on implementation details and your own choices that it's not worth quantifying precisely. Treat it as a small positive tail.
Synthesized Estimate
| Scenario | Weight | P(reach 2100 \| scenario) | Contribution |
|---|---|---|---|
| Static mortality / slow progress | 30% | 1.5% | 0.45% |
| Secular trend continuation | 30% | 3.5% | 1.05% |
| AI-accelerated medicine (no LEV) | 20% | 7% | 1.40% |
| LEV achieved in time | 10% | 70% | 7.00% |
| Civilizational disruption | 10% | 0.3% | 0.03% |
| **Total** | | | **~10%** |
My central estimate is roughly 10%, with a plausible range of 4-20%. That range is wide because the distribution of futures is genuinely bimodal: either something transformative happens in aging science in the next 20-30 years, or the baseline actuarial case dominates and the probability is under 3%.
Key Sensitivities
The estimate is most sensitive to two variables:
Bottom line: I'd put 10% as the honest central estimate, with genuine uncertainty spanning 4% to 20%. The expected value is dominated by the LEV tail rather than the actuarial base case. The single most consequential thing that will determine whether you see the 22nd century is whether aging science gets transformatively better before your 60s, not whether you make it to your 90s by conventional means.
The seven cent result (Opus Extended Thinking)
https://claude.ai/share/108e27e3-a29e-4477-83e2-f1c2434c4c5e
This is a genuinely interesting problem because it sits at the intersection of well-understood actuarial science and deeply uncertain technological forecasting. You need to reach approximately age 107 (born ~1993, targeting January 1, 2100). Let me build this up from the baseline and layer in the speculative scenarios.
The Actuarial Baseline
Only 0.4% of US females and 0.09% of US males survive to age 105, and reaching 110 is about 150 times rarer for females and 333 times rarer for males than reaching 100. Boston University Medical Campus These are figures derived from historical cohorts, though, and you'd benefit from being born decades later with better baseline healthcare. For the 1900 birth cohort, 0.51% of males and 3.2% of females lived to 100-104, while only 0.03% of males and 0.25% of females reached 105-109. PubMed Central
For your 1993 birth cohort, period life tables would give you a remaining life expectancy of roughly 44-47 years from age 33, putting the median death age around 77-80 for US males. Reaching 107 is deep into the tail of the distribution. Under a frozen-mortality assumption (no further medical progress at all), the probability for a 33-year-old US male reaching 107 is on the order of 0.1% to 0.3%.
Cohort tables, which account for expected incremental mortality improvements over your lifetime, roughly double or triple that, landing you somewhere around 0.3% to 1% with ordinary medical progress continuing its historical pace of roughly 1.5-2 years of life expectancy gain per decade.
The Scenario Decomposition
Here's where it gets interesting. I'll assign rough probability weights to distinct futures and estimate your conditional survival to 107 in each:
Scenario A: Business as usual (weight: ~35%). Medical progress continues at its incremental historical pace. Cancer survival improves, cardiovascular treatments get better, but no fundamental shift in the biology of aging. Your probability of reaching 107 in this world: roughly 0.5-1.5%.
Scenario B: Meaningful but limited aging interventions (weight: ~30%). Senolytics, rapamycin analogs, GLP-1 combinations, or similar therapies prove to add 5-15 healthy years in humans by the 2040s-2050s. A 2025 study showed rapamycin paired with the MEK inhibitor trametinib extended median mouse lifespan by 27-29%, outperforming single agents through synergistic inhibition of growth signaling. Grokipedia The field in 2025 is shifting from preclinical experiments toward clinical evaluations, with several human longevity trials underway. Healthspan But translating mouse results to humans consistently disappoints, and regulatory hurdles are steep. Your probability of reaching 107 here: roughly 5-15%.
Scenario C: Transformative aging breakthroughs / longevity escape velocity achieved (weight: ~15%). At Radfest 2025, de Grey estimated a 50% chance that by the 2030s we might enter the era of longevity escape velocity, though Kennedy and Kaeberlein were far more cautious, pointing out that animal studies have yet to surpass the known effects of caloric restriction or rapamycin. ASPRIVA In this scenario, partial epigenetic reprogramming, comprehensive damage repair, or some currently unknown intervention achieves genuine biological age reversal in humans by the 2040s-2060s. If this happens while you're in your 50s or 60s, the conditional probability of reaching 107 becomes very high: 50-80%.
Scenario D: AI-accelerated radical transformation (weight: ~10%). Superintelligent or near-superintelligent AI compresses the entire aging-research timeline by orders of magnitude, enabling molecular-level precision medicine and essentially solving aging as an engineering problem. This is the Kurzweil scenario on steroids. If it happens in time for you (say before you're 70), your conditional survival to 107 is 70-90%. But the probability of this scenario materializing on that timeline, and being broadly accessible to someone in your economic position, is itself quite low.
Scenario E: Civilizational disruption (weight: ~10%). Major wars, pandemics, climate-driven collapse, or other catastrophes that reduce life expectancy. Your probability of reaching 107 drops to near zero: ~0.05%.
The Weighted Estimate
Combining these:
P(107) ≈ (0.35 × 0.01) + (0.30 × 0.10) + (0.15 × 0.65) + (0.10 × 0.80) + (0.10 × 0.0005)
P(107) ≈ 0.0035 + 0.030 + 0.0975 + 0.08 + 0.00005
P(107) ≈ 21%
Quantifying the Uncertainty
My point estimate is roughly 15-25%, with a central value around 20%. But the confidence interval is enormously wide because the variance is dominated by the binary question of whether transformative aging interventions arrive in time.
If I had to give a 90% credible interval: 3% to 45%.
The distribution is heavily right-skewed and bimodal in an important sense. In most futures where aging research doesn't pan out, you almost certainly die before 107. In the minority of futures where it does, you probably cruise past 107 comfortably. There's relatively little probability mass in the "just barely made it" zone.
Key Sensitivities
The factors that most move the needle for you specifically:
Your current health and genetics are the single largest individual-level variable. The estimate above assumes roughly average health for a 33-year-old American male. If you have strong family longevity, no chronic conditions, healthy weight, and regular exercise, shift the baseline scenarios upward meaningfully. If you have significant risk factors, shift them down.
When breakthroughs arrive matters enormously. A transformative therapy available in 2045 (when you're 52) is worth far more to you than one available in 2075 (when you're 82 and may already have accumulated irreversible damage). The value function is steeply convex with respect to timing.
Access and distribution is underrated as a variable. Early longevity therapies will likely be expensive and unevenly distributed. Your economic trajectory over the next 30-40 years determines whether you're in the first wave or the fifth wave of adoption.
The catastrophe risk is real and probably underweighted in most longevity-optimist analyses. A 74-year horizon is long enough for multiple potential civilizational disruptions, and those scenarios are strongly correlated with each other in ways that compress the tail.
The honest bottom line: you're probably in the range of a 1-in-5 to 1-in-6 shot, with the vast majority of the probability mass coming from speculative technological scenarios rather than actuarial extrapolation. The base rate is brutal. The optionality from being 33 right now, during what might be the critical window for aging research, is what makes it a real question and not a flat "no."
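Before comparing the two answers, it’s worth confirming that their arithmetic holds up. The weights and conditional probabilities below are copied from the quoted responses; the script only reproduces the multiplication.

```python
# Opus (extended thinking): scenario (weight, conditional P) pairs
opus = [(0.35, 0.01), (0.30, 0.10), (0.15, 0.65), (0.10, 0.80), (0.10, 0.0005)]
opus_total = sum(w * p for w, p in opus)
print(f"Opus estimate: {opus_total:.1%}")    # ~21%, matching the quoted result

# Sonnet: scenario weights and conditionals from its summary table
sonnet = [(0.30, 0.015), (0.30, 0.035), (0.20, 0.07), (0.10, 0.70), (0.10, 0.003)]
sonnet_total = sum(w * p for w, p in sonnet)
print(f"Sonnet estimate: {sonnet_total:.1%}")  # ~10%, matching the table
```

Both expansions check out, so neither answer loses points on arithmetic.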
They’re…both fine?
The seven-cent response is decent, and the two-cent response is also decent. Both gave reasonable, well-structured answers to the longevity question. The expensive answer maybe included more nuance and more thorough scenario analysis, but it wasn’t 3.5 times better than the cheap answer.
And yet, the incentive structure of Claude Pro’s pricing pushes in the opposite direction. Subscribers pay a flat monthly fee for a rolling usage budget. Any usage below the session cap is functionally free at the margin. There is zero incremental cost to choosing Opus with Extended Thinking over plain Sonnet, as long as you stay under the ceiling of roughly twenty queries per five-hour period. The rational behavior for a cost-conscious subscriber under that limit is therefore to always use the most expensive configuration available. You are leaving quality on the table if you use Sonnet, and the money you “save” by doing so evaporates unused at the end of your session window.
This is a strange equilibrium. For many users, the system incentivizes maximum consumption per query, which is the exact opposite of what you’d want if you were optimizing inference infrastructure costs at scale.
Limitations and Next Steps
This experiment has obvious constraints. The session percentage bar is a blunt instrument; direct token-cost measurement via the API would yield far more precise data. The sample size is six queries with one prompt across two models, which is enough to sketch the landscape but nowhere near enough for statistical confidence. The Cowork and Chat deltas for the same configuration didn’t always agree, suggesting either meaningful variance in response length or some overhead difference between interfaces that deserves investigation.
More interesting open questions sit downstream. This was all done on Claude Pro, which is the default tier for a large share of users. For power users willing to go further: how cheap of a model can you actually tolerate before the quality gap becomes noticeable? Is the degradation curve smooth or is there a cliff? And for anyone considering self-hosting open-weight models, what does the cost picture look like when the only expense is energy and hardware amortization? Like $5 Ubers, the two-cent query might turn out to be a VC-fueled luxury that we look back on wistfully in the 22nd century[3].
[1] Not strictly required.

[2] I asked the models to estimate the cost of their token usage; their estimates were consistent at roughly a few cents each, so this seems directionally correct.

[3] I give it a 15% chance.