Last week, someone at an ad agency told me that they don’t contract storyboard artists anymore. AI image generators can spit out a “good enough” reference image from the same prompts the agency would give to the contractors. The AI isn’t as good as a skilled human in this instance, but the marginal cost of lightly monitoring that function themselves is near zero, so the agency chose to replace the contractors in the workflow.
Most knowledge work roles face a more gradual path. There’s substantial friction between “AI can do some tasks” and realized job displacement. This friction buys time, and if you understand what creates it, you can use that time to maximize your economic agency.
I’ve spent this year building a tool that predicts job displacement timing: it simulates AI capability growth and how it collides with the unique friction at your organization and role. Before I ship the full site + guide later this month, I want to illustrate what I see as the strongest drivers of AI-related job displacement, and the most durable shields against a reduction-in-force (whether officially attributed to AI or not).
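If you’re curious what a simulation like that can look like in miniature, here is a toy sketch (not the actual tool): it compounds an assumed capability growth rate against an assumed, slowly eroding friction threshold. Every parameter name and value below is an illustrative assumption, not a real estimate.

```python
# Toy sketch only: project when exponentially growing AI capability overtakes
# the combined "friction" protecting a role. All parameters are assumptions.

def months_until_displacement(
    capability: float = 0.2,       # assumed share of your tasks AI can own today
    doubling_months: float = 7.0,  # assumed capability doubling time
    friction: float = 0.9,         # assumed share of tasks that must be automatable first
    friction_decay: float = 0.005, # assumed monthly erosion of organizational friction
) -> int:
    month = 0
    while capability < friction and month < 240:
        capability = min(1.0, capability * 2 ** (1 / doubling_months))
        friction = max(0.0, friction - friction_decay)
        month += 1
    return month

print(months_until_displacement())  # prints an illustrative month count
```

Even in this crude form, the point is visible: the outcome depends as much on how fast the friction erodes as on how fast the capability grows.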
These barriers are a broad simplification, and we won’t talk about them exactly in order, but they’re all interconnected and will collapse at different rates for different roles.
Chart is a mockup for illustrative purposes and not representative of a true job loss model
Complete displacement is simple: your entire function gets automated away (the storyboard artist case). You or your company provide a commodified output or service, and once AI passes the “good enough” threshold at a low marginal cost, your service is no longer required. This hits vendors and single-output contractors first; if you’re paid to produce one specific thing, you can’t pivot to “I’ll use AI to do more stuff.”
Gradual displacement is more common, but the more complex the role, the harder its timing is to predict. Most jobs aren’t built around a single task or service; they exist as a portfolio of interconnected responsibilities, each with varying automation potential. AI will chip away at the routine stuff first, and your team’s output will increase with the same headcount (amplification).
When we consider how good the models already are at short tasks, it’s natural to wonder what happens when our whole team becomes amplified by an increasingly powerful tool. Gradual displacement is about what AI can do (which we’ll get to later), but just as importantly, about what happens to the constraint that created your job in the first place.
Think about why your role exists. Your company hired you because there was more work than existing employees could handle, or because you had skills they lacked. Every job is born from a constraint: a gap between what the organization wants to produce and what it can currently manage. As AI begins to both displace some tasks and amplify your total output, that constraint shrinks, and an interplay opens up between you, your manager, and your coworkers. They are undergoing the same transformation, both increasing their output and reducing the share of their tasks that can’t be automated, and some of that freed capacity will be reassigned.
Thus, gradual displacement is a function of four factors:
Both kinds of displacement are really the same function, just seen at different levels. This barrier is only eliminated when a well-deployed system of AI can feasibly automate enough of your tasks and your coworkers or clients can take on the remaining share. At that point, the original constraint that justified your job has disappeared.
Even without AI, this logic has always governed work. A role exists only as long as it produces more value than it costs and fills a capacity gap that others cannot easily absorb. When that gap closes because the firm is cutting costs, demand falls, or coworkers become more capable of doing the work themselves, the risk of displacement increases. This is the capacity distribution barrier mentioned earlier, but it’s worth noting that removing it doesn’t always trigger immediate job loss, because of the other forces at play.
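As a minimal sketch of that condition, assuming we can put rough numbers on it: a role stays justified only while the work it covers can’t be fully split between automation and coworkers, and while its value still exceeds its cost. The variable names and numbers below are illustrative assumptions, not part of any real model.

```python
# Minimal formalization of the condition above, with made-up inputs.

def role_still_justified(
    automatable_share: float,  # fraction of the role's tasks AI can reliably own
    absorbable_share: float,   # fraction coworkers/clients could take on themselves
    value: float,              # value the role produces
    cost: float,               # fully loaded cost of the role
) -> bool:
    capacity_gap_remains = (automatable_share + absorbable_share) < 1.0
    return capacity_gap_remains and value > cost

# Illustrative check: 60% automatable, coworkers can absorb another 30%,
# and the role still produces more value than it costs.
print(role_still_justified(0.6, 0.3, value=1.5, cost=1.0))  # True: a 10% gap remains
```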
Chart is a simplification; actual displacement can take many forms
The length of software engineering tasks that AI models can complete has been found to double roughly every seven months, but this trendline is not certain to hold long-term, and when you raise the required success rate, the length of tasks models can handle drops. A model might have a 50% success rate on a software task that takes a human 120 minutes, but that horizon shrinks to about 25 minutes if you require an 80% success rate.
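To make that arithmetic concrete, here is a toy projection that simply extends both horizons forward, assuming the roughly-seven-month doubling trend holds and the 50%-versus-80% gap stays proportional; neither assumption is guaranteed.

```python
# Illustrative arithmetic only: extend both reliability horizons forward
# under an assumed constant doubling time.

def projected_horizon_minutes(today_minutes: float, months_ahead: float,
                              doubling_months: float = 7.0) -> float:
    return today_minutes * 2 ** (months_ahead / doubling_months)

for months in (0, 7, 14, 21):
    at_50 = projected_horizon_minutes(120, months)  # 50% success-rate horizon
    at_80 = projected_horizon_minutes(25, months)   # 80% success-rate horizon
    print(f"+{months:2d} months: ~{at_50:5.0f} min at 50%, ~{at_80:4.0f} min at 80%")
```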
So while you may see models crush benchmarks every few months, remember that there are several sub-barriers between their capabilities and the reliable transfer of task ownership at your job:
Because of these barriers and the capability trendline, we can infer that your most explicit, patterned tasks will be automated first, while interpretive, tacit, and domain-specific tasks will take longer. AI will not be a single model that is magically as good as you one day; it’s going to look more like distributed systems of agents slowly encroaching on your list of assigned tasks, rather than a personified robot clamoring for your title. These agents will all be generalists until they are fine-tuned, and the last 30% of your tasks may be much harder to automate than the first 70%, especially as your responsibilities evolve.
Feasibility lives at the level of tasks, not your “job”, so you should think of your current workflows in the same way. How easily can your work be broken into objectively defined tasks? How much do you rely on structured workflows to complete tasks, and how often do you rely on implicit knowledge or human relationships? Do people seek your assistance with tasks, or can they answer questions themselves with digital tools? How sensitive are your outputs to wrong information?
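One rough way to self-assess is to score a task along those dimensions. This is only a sketch: the dimensions mirror the questions above, but the 0-to-1 scores and the weighting are arbitrary assumptions, not calibrated measures.

```python
# Crude per-task rubric. Scores and weighting are arbitrary illustrations.

task_profile = {
    "objectively_defined":   0.8,  # how cleanly the task can be specified
    "structured_workflow":   0.7,  # how much it follows a repeatable process
    "tacit_or_relational":   0.3,  # reliance on implicit knowledge/relationships (protective)
    "self_serve_with_tools": 0.6,  # could others answer this themselves digitally?
    "error_tolerance":       0.5,  # how forgiving the output is of wrong information
}

# Sum the exposure dimensions, subtract the protective one, and scale.
# Higher score = more feasible to hand this task to an AI system.
feasibility = (
    task_profile["objectively_defined"]
    + task_profile["structured_workflow"]
    + task_profile["self_serve_with_tools"]
    + task_profile["error_tolerance"]
    - task_profile["tacit_or_relational"]
) / 4

print(f"Rough automation-feasibility score: {feasibility:.2f}")
```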
Making Your Work “Legible”
There’s a concept from James C. Scott’s Seeing Like a State about “scientific forestry”: 18th-century European foresters tried to optimize timber yield by making forests “legible” like other agricultural crops. They transformed chaotic, biodiverse ecosystems into neat rows of single-species trees: measurable, predictable, and manageable. They were also ecological disasters that collapsed within a generation.
Your company may try to do the same thing with human work.
If your company runs into the feasibility constraints noted in the previous section when it tries to automate your role, they’ll need to digitize and capture the context and tacit knowledge that makes you, you. They’ll attempt to boil down your expertise and experience into “legible” data for AI to learn from. You’ll see large initiatives like these (some are already starting):
None of this will be framed as “helping AI learn how to replace you”, and for a while, it will serve to amplify you. But what’s actually happening is that all of it serves as training data for models. Your company may be able to develop integrated environments that fill in the missing gaps for long-term workflows and tacit knowledge. These self-learning systems will use reinforcement learning (RL) to shadow your daily tasks and reverse-engineer the path from your project inputs to your outputs.
Over time, the system will learn how to do more of your tasks, and it will thrive on the availability of the data in your role, team, and organization. For a more detailed explanation of why you should protect this data, see this essay.
If everything is well connected and deployed, you’ll find that your chatbot may know much more about your role than you’ve told it, or agentic systems may start suggesting ideas suspiciously similar to your own. The models will not need to understand your relationships in any human sense; they will only need to recognize patterns of actions that lead to a good outcome. They will continue to observe, imitate, and anticipate your actions.
Your task structures will compress, and the amplification and compression dynamics we mentioned earlier will open up. Again, all the barriers are interconnected.
The good news is that implementation of any technology (and especially AI) is hard, slow, and messy. Even if models are capable and your company builds a perfect data infrastructure, there are still humans in the loop with their own incentives, and those incentives may not all match the C-suite’s. There are two kinds of friction:
Naturally, deployment will take time, and this will benefit you if you’re worried about displacement. But your company will see others in the industry scrambling to implement AI, and it will have to balance the risk of political blowback against the efficiency gains the labs are promising. It will likely start by adopting models unevenly: more aggressively in low-risk, high-volume domains, and more slowly in politically sensitive ones.
These barriers do not stop the underlying trend, but they will shape the order of operations. Groups with less internal political capital may see more aggressive experiments, and workers who do not control key relationships or hold institutional memory will be easier to replace.
If these barriers are the forces acting on your job, and this worries you, you can ask yourself: what can you do from your side of the equation?
You cannot control model capabilities, and you probably can’t dictate your entire company’s AI strategy. What you can control is the mix of work you do, the way you position yourself inside your systems, your own curiosity and agency, and the level of value you create relative to anyone else who might sit in your seat.
I think the answer is being two kinds of “indispensable” simultaneously. Being a “keeper” is valuable (see Paul Graham’s essay about this): dedicating time to building deep domain knowledge, human relationships, and institutional memory. But there’s also value in being a little integrative and using AI to increase your own agency. Skill and taste in a deep domain may atrophy if you let everything you do become automated, but even if our future results in a full automation of work, we still have a long road of human agency ahead, both in our jobs and outside of them. What we lose in deep technical experience in one domain may free up capacity to tackle several.
If it interests you (it interested me!), you can read about the philosophical problems of human welfare and safety regarding AI. It helps to become educated on AI and the policy tools we need to build to serve a human-centered future. Here is a great place to start.
Thanks to Saheb Gulati for reviewing a draft of this post, and thanks to everyone else for reading! Please be on the lookout for the full tool and guide I am shipping later this month.