I think you may also want to consider the dependency consequences of APIs and integrations.
Once you've integrated a piece of software into another piece of software, you generally want it to stay the same and stay available. So you're somewhat invested in compute going toward maintenance of existing capabilities rather than new ones.
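To make the dependency concrete, here's a minimal sketch of a typical integration. The endpoint URL, model identifier, and response schema are all hypothetical placeholders, not any real provider's API; the point is only that the calling code bakes in assumptions about a specific version's behavior, which is what creates the pull toward stability.

```python
# Hypothetical sketch: an integration that pins a specific model version.
import requests

PINNED_MODEL = "example-model-2024-06-01"        # hypothetical version identifier
API_URL = "https://api.example.com/v1/complete"  # hypothetical endpoint

def summarize_ticket(ticket_text: str, api_key: str) -> str:
    """Summarize a support ticket using the pinned model version.

    If the provider retires or silently changes this version, every consumer
    of this function has to re-validate prompts, parsing, and downstream
    assumptions -- which is why integrators tend to prefer stable, long-lived
    capabilities over frequent upgrades.
    """
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": PINNED_MODEL,
            "prompt": f"Summarize this support ticket in one sentence:\n{ticket_text}",
            "max_tokens": 64,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # The response schema is another assumption the integration depends on.
    return resp.json()["text"].strip()
```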
A second note is that overhang consumption on less powerful models doesn't necessarily transfer well to more powerful ones. The techniques become redundant with the newer model's native capabilities, and the skill you've built remains most useful for getting more out of less powerful models.
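As a purely illustrative sketch of what I mean by a technique that doesn't transfer: manually decomposing a task so a weaker model can handle it. A stronger model often does that decomposition on its own, so the scaffold becomes redundant rather than carrying over. `call_model` below is just a placeholder for whatever completion function you use.

```python
# Illustrative only: a scaffold tuned for a less capable model vs. a direct call.
def answer_with_scaffold(question: str, call_model) -> str:
    """Multi-step decomposition scaffold for a weaker model."""
    plan = call_model(f"List the sub-questions needed to answer: {question}")
    partials = [
        call_model(f"Answer briefly: {sub}")
        for sub in plan.splitlines()
        if sub.strip()
    ]
    return call_model(
        "Combine these partial answers into one final answer:\n" + "\n".join(partials)
    )

def answer_directly(question: str, call_model) -> str:
    """With a more capable model, a single direct call often suffices."""
    return call_model(question)
```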
Can't control the hype, but from a software development perspective, developing integrations and consuming overhangs will likely push demand toward more stable capabilities and more gradual transitions over time.
From my perspective, trying to actually extract value from what we have, with time and money already invested, will if anything slow the pace of base capabilities. And understanding what we have already wrought and how it can be utilized is important in its own right.
I'm wondering whether there is consensus on the net value/detriment of these two AI activities: building integrations on top of existing models, and consuming capability overhangs.
I am not sure whether there is a neat theoretical answer to these questions. It might be an empirical question of where the equilibrium falls. Still, I'd love to read some thoughts on this, and especially pointers to prior art, because a lot of engineers and researchers are being thrown at both integrations and overhangs. If either of these is net-harmful, it's important to be able to steer people away from it.