Mind: Brain Replacement isn't Brain Augmentation.
History, and much of the 96% of non-human work you cite (however exactly you define that number), consisted mainly of all sorts of brain augmentation, i.e. brain extension beyond our arms and mouths, using horses, ploughs, speakers, worksheets, and machines of every kind.
Advanced AI, in contrast, is more and more sidelining that hitherto essential, monopolistic piece: the human brain.
And so, whatever the past, there is a structural break happening right now.
And so you, and the many others who ignore that one simple phrase I suggest remembering (Brain Replacement isn't Brain Augmentation), risk waking up baffled in the not-so-distant future. That, at least, would seem the natural course of things to expect, absent doom. Then again, the future is weird and who knows anyway. Maybe it's so weird that one way or another you'll still be right; I just really wouldn't bet on it the way you seem to.
Think about the theory of the firm: a firm is the largest portion of the economy that is better run as an authoritarian central-planning regime than by using market prices to orient and organize production.
What we have seen with informatics over the past few decades is exactly this: bigger is getting better. The small-cap factor hasn't worked for years now. JPMorgan Chase & Co. is the world's largest bank by market value, and it's outperforming the industry. Amazon coordinates more than 1.5 million full-time employees worldwide.
As AGI accelerates this trend, there's no reason to imagine we won't see further consolidation. Yeah, sure, some people like Pepsi and other people like Coca-Cola. But there likely won't be 2,000 different soda brands, each needing individual human oversight.
If you can organize more of production through central planning, via informatics and AGI, I don't know that there will be much work left for humans to do.
And obviously, people on LW are überbulls on ASI. The view is that it'll get millions of times smarter than humans, however you define that.
TL;DR: As we deploy AI, the total amount of work being done will increase, and the % done by humans will fall. This does not require a decline in human employment. This is consistent with historical trends.
Sometimes, I hear economists make this argument about transformative AI:
I think transformative AI will increase GDP. However, I don’t think this necessitates a decline in human employment.
Anthropic CEO Dario Amodei imagines advanced AI as a “country of geniuses in a datacenter”. If such a country spontaneously sprang up tomorrow, I don’t think it would reduce human employment. Investors might want to re-allocate capital towards the country, but the country would require some inputs that it’s unable to self-supply.1
I think human and AI inputs could be complementary to each other — possibly because we legislate them to be so / require human oversight — like scenario-appropriate versions of the human drivers who currently sit in Tesla’s Robotaxis, watching the road without touching the controls.
~4 billion humans and ~100 billion non-human worker-equivalents currently work (a back-of-the-envelope calculation, or BOTEC). A ‘worker-equivalent’ here means ‘the amount of work one average human worker in 1700 could perform in a year.’ From 1900 to 2020, human labor input grew by ~2.5×, while total economic work grew by ~16×, meaning most of the additional work was done by machines. On this BOTEC, only ~4% of work is done by humans today.2
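To make the arithmetic explicit, here is a minimal sketch of the BOTEC in Python. Every input is one of the round numbers above; none is independently sourced, and the 1900 figure is just a consistency check implied by the growth factors.

```python
# Minimal reconstruction of the post's BOTEC. All inputs are the post's
# own round numbers; none are independently sourced.

HUMAN_WORKERS = 4e9    # ~4 billion humans working today
NONHUMAN_WE = 100e9    # ~100 billion non-human worker-equivalents today

human_share_today = HUMAN_WORKERS / (HUMAN_WORKERS + NONHUMAN_WE)
print(f"Human share of work today: {human_share_today:.1%}")  # ~3.8%, i.e. ~4%

# Consistency check against the cited 1900-2020 growth factors:
# share_2020 = share_1900 * (human growth / total growth)
HUMAN_GROWTH = 2.5     # human labor input, 1900 -> 2020
TOTAL_GROWTH = 16      # total economic work, 1900 -> 2020
implied_share_1900 = human_share_today * TOTAL_GROWTH / HUMAN_GROWTH
print(f"Implied human share in 1900: {implied_share_1900:.0%}")  # ~25%
```

The two headline numbers are mutually consistent: a ~4% human share today, combined with those growth factors, implies humans did roughly a quarter of all work in 1900.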
Some economists seem to implicitly assume that the amount of work done in the future will be the same as the amount of work done today. In Korinek and Suh’s ‘Scenarios for the Transition to AGI’:
In this model, task measure is fixed, and we start out with humans doing every task.
But we could productively deploy more labor than we currently have. In reality, task measure is not fixed, and we are not capped at the ~4 billion human jobs (and ~100 billion non-human jobs) being done today.
We could have (in effect) 1 trillion workers, 0.4% of whom are humans in management/oversight/monitoring roles, with no hit to human employment.
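As a quick check on that percentage (the 1-trillion figure is the hypothetical headcount above, not a forecast):

```python
# ~4 billion human workers out of 1 trillion effective workers overall
human_share = 4e9 / 1e12
print(f"Human share of a 1-trillion-worker economy: {human_share:.1%}")  # 0.4%
```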
The problem of AI alignment can count on at least 4 billion humans available to provide monitoring. Even if we don’t end up needing to stay in the loop for safety reasons, we may still stay in the loop to improve management/prioritization, or to stabilize the economic transition.
The total amount of work being done will increase, and the % done by humans will fall. This does not require a decline in human employment. This is consistent with historical trends.
1
Humans would provide maintenance the AIs can’t self-provide, supply direction, check decisions the AI systems are uncertain about, monitor activations, and bear accountability for decisions that AI agents make on their behalf.
2
The exact number will vary depending on which year you set as the baseline and how you run the BOTEC; this is compatible with the broader point.