Has anyone thought about Kremer/Jones-like economic growth models (where larger populations generate more ideas, leading to superexponential growth), but where some ideas are bad? I think there’s an interesting, loose analogy between these growth models and a model of the "tug of war" between passengers and drivers in cancer. In the absence of deleterious mutations, the tumor in this model grows superexponentially. The fact that fixation of a driver makes the whole population grow better is a bit like the non-rival nature of ideas. But the growth models seem to have no analog of the deleterious passengers: bad ideas that might still fix, stochastically, and reduce the technology prefactor "A".
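To be concrete about the mechanism (this is my paraphrase of the Kremer 1993 setup, so treat the exact functional forms as approximate): output is

$$Y = A\,p^{\alpha}\,T^{1-\alpha},$$

Malthusian adjustment pins per-capita income at subsistence, $Y/p = \bar{y}$, so with land $T$ fixed the population tracks technology, $p \propto A^{1/(1-\alpha)}$; and ideas arrive in proportion to population, $\dot{A}/A = g\,p$. Combining these gives $\dot{A}/A \propto A^{1/(1-\alpha)}$, i.e. faster-than-exponential (hyperbolic) growth. The "bad ideas" I'm imagining would add a stochastic loss term, something like $\dot{A} = g\,p\,A - \lambda(p)\,\delta\,A$, where $\lambda(p)$ is the rate at which bad ideas fix and $\delta$ is the damage each one does to $A$; the question is when the loss term can dominate.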
Such a model might then exhibit a "critical population size" (analogous to the critical lesion size in the cancer model) below which there is techno-cultural decline (ancient Tasmania?). And is there a social analog of "mutational meltdown"? In population genetics, if mutations arrive too quickly, beneficial and deleterious mutations get trapped in the same lineages (clonal interference) and cannot be selected on independently. Perhaps cultural/technological change that comes too rapidly leads to memeplexes mixing good and bad ideas which, being linked, cannot be independently selected for or against…
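To make the analogy concrete, here is a toy individual-based simulation (not the actual McFarland et al. tug-of-war model: the parameters are invented and a crude crowding term stands in for their density-dependent death rate), meant only to illustrate the qualitative point that below some size, deleterious "passengers" drift to fixation faster than beneficial "drivers" can sweep and the population dwindles, while above it the drivers win and the population grows:

```python
import math
import random

# Illustrative parameters only (invented, not fit to anything):
S_D, S_P = 0.10, 0.02    # fitness benefit per driver, cost per passenger
MU_D, MU_P = 1e-4, 0.1   # driver / passenger mutations per birth
GENERATIONS = 400

def poisson(lam):
    """Knuth's simple Poisson sampler (fine for small lambda)."""
    target, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= target:
            return k
        k += 1

def simulate(k0):
    """One population starting at size k0; returns its final size."""
    pop = [(0, 0)] * k0                     # individual = (drivers, passengers)
    for _ in range(GENERATIONS):
        n = len(pop)
        if n == 0:                          # extinct
            break
        crowding = math.exp(1.0 - n / k0)   # density-dependent reproduction
        offspring = []
        for d, p in pop:
            w = (1 + S_D) ** d * (1 + S_P) ** (-p) * crowding
            for _ in range(poisson(w)):     # Poisson number of offspring
                offspring.append((d + (random.random() < MU_D),
                                  p + poisson(MU_P)))
        pop = offspring
    return len(pop)

if __name__ == "__main__":
    for k0 in (50, 200, 800):
        finals = [simulate(k0) for _ in range(3)]
        print(f"initial size {k0:4d} -> final sizes after "
              f"{GENERATIONS} generations: {finals}")
```

In typical runs the smallest populations tend to shrink (or die out) as passengers accumulate, while the largest end above their starting size once a few drivers have fixed; where the crossover sits obviously depends entirely on the made-up parameters.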
It seems like the growth models already take much of that into account, the same way that they do crime or war: if new technologies create new crime (which of course they often do), then that simply offsets the benefits of those technologies slightly, and it is the net benefit which shows up in the long-term growth, rather than some 'pure' benefit free of any drawbacks. And likewise for technologies as a whole: if you're inventing some unknown grab-bag of technologies each time-period, then what gets measured, and what drives growth in the next time-period, is the net of all the good ideas offset slightly by the bad ones. It would be like measuring the growth of an actual tumor: whatever growth you observe, well, that must be the net growth after the defectors inside the tumor have done their worst, by definition.
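Put formally (my notation, not anything from the growth literature): if bad ideas fix in proportion to the same idea-production process, the law of motion just becomes

$$\dot{A} = (g_{\text{good}} - g_{\text{bad}})\,p\,A = g_{\text{net}}\,p\,A,$$

which is the original Kremer equation with a smaller constant; the hyperbolic, superexponential behavior is untouched, and only its timescale stretches. To get a qualitatively different regime, the bad-idea term has to scale differently from the good-idea term: with $A$ itself, with the variance rather than the mean, or with some threshold.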
So you'd have to invoke some sort of non-constancy or non-proportionality: "yes, the bad ideas are only an offset, up until some threshold like 'inventing nuclear bombs'" (Bostrom's 'black balls'). But then your results seem dangerously circular: if you assume some fat-tailed payoff from the bad ideas past a certain threshold, or one that grows with time, you are building in conclusions like "we should halt all technological progress forever".