Most predictive work on AI focuses either on model capabilities themselves or on their effects on society at large. We have timelines for benchmark performance, scaling curves, and macro-level estimates of labor impact. What we largely lack are personalized forecasts that translate those trends into implications for an individual role.
At the same time, many conversations about AI and work stall at a familiar level of abstraction: some jobs will disappear, others will change, productivity will increase, and things will “shift.” These conversations do not answer the question that actually matters to individuals: when does AI become capable enough to meaningfully threaten my role, given the specific tasks I do and the organization I work in?
You raise some good points here, and I think they’re worth integrating into a project I’ve been working on (https://dontloseyourjob.com/method/).
- I forecast job displacement risk using a hazard framing: the model separates technical feasibility (when AI can reliably complete enough of your task buckets) from actual job loss (an implementation lag plus a compression hazard). Your slope-vs-intercept idea suggests the step change isn’t “is AI smart enough” but “can AI run the workflow end-to-end with minimal supervision.” The discontinuity I should be watching for is the point where long-horizon agency improves enough that the AI stops needing constant scaffolding and quality control, and displacement risk stops rising smoothly. (A toy sketch of this decomposition follows below.)
- I think this means I ...
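To make that concrete, here’s a minimal sketch of the decomposition in Python. Everything in it is illustrative: the logistic feasibility curve, the implementation lag, the compression term, the agency-threshold jump, and all parameter values are hypothetical stand-ins, not the actual model behind the site.

```python
import math

def feasibility(year, midpoint=2030.0, slope=0.8):
    """Illustrative logistic curve: probability that AI can reliably
    complete enough of a role's task buckets by `year`. The midpoint
    and slope are made-up placeholders, not fitted values."""
    return 1.0 / (1.0 + math.exp(-slope * (year - midpoint)))

def displacement_hazard(year, lag=3.0, compression=0.02,
                        agency_year=2032, jump=0.25):
    """Per-year hazard of actual job loss: feasibility shifted back by an
    implementation lag, plus a small baseline compression hazard
    (headcount shrinking before full automation). The `agency_year`
    step is the discontinuity discussed above: once long-horizon agency
    clears the threshold, the hazard jumps rather than rising smoothly."""
    hazard = compression + feasibility(year - lag)
    if year >= agency_year:
        hazard += jump
    return min(1.0, hazard)

def survival_curve(start=2025, end=2040):
    """Discrete-time survival: P(role still exists) at the end of each year."""
    surviving, curve = 1.0, {}
    for year in range(start, end + 1):
        surviving *= 1.0 - displacement_hazard(year)
        curve[year] = surviving
    return curve

if __name__ == "__main__":
    for year, p in survival_curve().items():
        print(f"{year}: P(role survives) = {p:.2f}")
```

The interesting output is the shape of the survival curve: with the step term included, the curve kinks at the agency threshold instead of declining smoothly, which is exactly the signature I’d want the forecast to surface.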