Agent Economics: a BOTEC on feasibility
Edit (7th of February): I made an updated version of this, after Toby Ord's comment. You can find it here.

Summary: I built a simple back-of-the-envelope model of AI agent economics that combines Ord's half-life analysis of agent reliability with real inference costs. The core idea: if agent cost per successful outcome scales exponentially with task length while human cost scales linearly, the result is a sharp viability boundary that cost reductions alone cannot meaningfully shift. The only parameter that matters much is the agent's half-life (its reliability horizon), and that is precisely the thing that depends on the continual-learning breakthrough that some place 5-20 years away (and that I think is essential for AGI-level agents). I think this has underappreciated implications for the $2T+ AI infrastructure investment thesis.

The setup

Toby Ord's "Half-Life" analysis (2025) demonstrated that AI agent success rates on tasks decay exponentially with task length, a pattern analogous to radioactive decay. If an agent completes a 1-hour task with 50% probability, it completes a 2-hour task with roughly 25% probability and a 4-hour task with about 6%. There is a constant per-step failure probability, and because longer tasks chain more steps, success decays exponentially.

METR's 2025 data showed that the 50% time horizon for the best agents was roughly 2.5-5 hours (model-dependent) and had been doubling every ~7 months. The International AI Safety Report 2026, published this week, uses the same data (at the 80% success threshold, which is more conservative) and projects multi-day task completion by 2030 if the trend continues.

What I haven't seen anyone do is work through the economic implications of this exponential decay structure. So here is a simple model.

The model

Five parameters:

1. Cost per agent step ($): average cost of one model call, including growing context windows. Ranges from ~$0.02 (cheap model, short context)
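To make the setup's decay pattern concrete, here is a minimal sketch of the half-life model of reliability. The function name and the 1-hour half-life are mine, chosen so the output reproduces the worked numbers from the setup above.

```python
# Half-life model of agent reliability (after Ord, 2025): a constant
# per-step failure rate means success probability on a task of length T
# decays as p(T) = 0.5 ** (T / H), where H is the half-life -- the task
# length the agent completes 50% of the time.

def success_probability(task_hours: float, half_life_hours: float) -> float:
    """P(success) on a task of the given length, under the half-life model."""
    return 0.5 ** (task_hours / half_life_hours)

# Reproduces the figures in the text for an illustrative 1-hour half-life:
for t in (1, 2, 4):
    print(f"{t}h task: {success_probability(t, half_life_hours=1.0):.1%}")
# 1h task: 50.0%
# 2h task: 25.0%
# 4h task: 6.2%
```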
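And here is a minimal sketch of the cost comparison the model formalizes. Only the ~$0.02 cost per step and the 2.5-hour half-life come from figures quoted above; the human hourly rate, steps per hour, and the scan granularity are illustrative assumptions of mine, not the post's calibrated parameters.

```python
# Expected agent cost per *successful* outcome: you pay for every failed
# attempt, so the cost of one attempt gets divided by P(success). That
# makes agent cost exponential in task length while human cost is linear.

def agent_cost_per_success(task_hours, half_life_hours,
                           cost_per_step=0.02, steps_per_hour=60):
    """Expected agent spend per completed task, retries included."""
    p_success = 0.5 ** (task_hours / half_life_hours)
    attempt_cost = cost_per_step * steps_per_hour * task_hours
    return attempt_cost / p_success  # expected attempts = 1 / p_success

def human_cost(task_hours, hourly_rate=75.0):
    """Human cost scales linearly with task length (assumed rate)."""
    return hourly_rate * task_hours

def break_even_hours(half_life_hours, **kwargs):
    """First task length (0.1h steps) at which the human becomes cheaper."""
    for tenths in range(1, 1000):
        t = tenths / 10
        if agent_cost_per_success(t, half_life_hours, **kwargs) > human_cost(t):
            return t
    return None

print(break_even_hours(half_life_hours=2.5))                      # ~15.0h baseline
print(break_even_hours(half_life_hours=2.5, cost_per_step=0.002)) # ~23.3h with 10x cheaper steps
print(break_even_hours(half_life_hours=5.0))                      # ~29.9h with 2x the half-life
```

Under these assumptions the boundary even has a closed form: with agent attempt cost c per hour and human cost w per hour, break-even occurs at T* = H * log2(w/c). A 10x cut in c therefore extends T* by only H * log2(10), about 3.3 half-lives, whereas doubling H doubles T* outright, which is the summary's point that the half-life is the parameter that matters.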