I do think that this is an under-discussed aspect of the intelligence explosion. I might even argue that, rather than the intelligence explosion simply accelerating the industrial explosion, the intelligence explosion would be contingent on a large, rapid expansion in compute and energy production, something that would only be possible with an economic shift like this.
I do wonder about the presentation of the individual stages. I agree with them in concept, but I think there's a disconnect between their names and their intended characteristics. Like, yes, nanotechnology would be the logical end-goal of stage three, but only the end-goal, and only based on the technology we understand now. I think it might be clearer to communicate the stages by naming them after the main vector of improvement throughout the entire stage, i.e. 'optimization of labor' for stage one, 'automation of labor' for stage two, and 'miniaturization' for stage three.
That being said, I also want to push back on the theory of stage one. The three points of increase you speculate are ~2x from more productive workers, ~2x from more laborers due to mass occupational shifts, and ~3x from organizational optimization, which compound to roughly 2 × 2 × 3 ≈ 12, or ~10x altogether. While I do think ~10x is fairly reasonable, I don't think it would necessarily arrive as a 'one-time gain'; it seems more plausible that adopting and adapting to these changes would take time, and that productivity, rather than following the sharp step-like curve you currently show, would look more exponential, leading into the more pronounced later stages.
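To make the shape difference concrete, here's a minimal sketch of what I mean (the adoption timescale and logistic curve are purely illustrative assumptions on my part, not numbers from the post), comparing a one-time ~10x step against the same gains phased in gradually over several years:

```python
import numpy as np

# Illustrative only: compare a one-time ~10x productivity step with gradual
# adoption of the same gains (2x workers * 2x occupational shift * 3x
# organizational optimization ~= 12x, "call it ~10x").
YEARS = np.linspace(0, 10, 101)   # 0 to 10 years in 0.1-year steps
FULL_GAIN = 2 * 2 * 3             # ~12x once every gain is fully adopted

def step_gain(t, at_year=1.0):
    """One-time jump: the full multiplier lands all at once."""
    return np.where(t >= at_year, FULL_GAIN, 1.0)

def gradual_gain(t, midpoint=4.0, speed=1.0):
    """Logistic adoption: the same multiplier phased in over years,
    which reads as a smooth, roughly exponential early ramp."""
    adoption = 1.0 / (1.0 + np.exp(-speed * (t - midpoint)))
    return FULL_GAIN ** adoption  # interpolate multiplicatively from ~1x to ~12x

for year in (0, 2, 4, 6, 8, 10):
    i = year * 10  # index into the 0.1-year grid
    print(f"year {year:2d}: step {step_gain(YEARS)[i]:5.1f}x | "
          f"gradual {gradual_gain(YEARS)[i]:5.1f}x")
```

Both curves end up in the same place; the difference is only whether the climb looks like a cliff or a ramp feeding into the later stages.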
That’s fair; I think I might’ve just gotten the wrong impression from the graph. Personally, I wouldn’t expect a hard cap, since the IE would naturally push technology levels upward rather than holding them fixed. However, I do agree that, either way, there will eventually come a point where a generalized machine laborer is more efficient and more productive than a human laborer.
I just read your ‘Three Types’ essay, and I thought it was also really good! Particularly interesting to me was the idea that, as the IE cascades towards the full-stack IE, power becomes progressively less centralized. I’ve been working on a model to anticipate the geopolitical and social impacts of AI development (please check it out!), and I hadn’t previously considered how the IE itself could have centralizing or decentralizing vectors.
Great work! I’ll definitely be keeping an eye out for more stuff from you guys.