This question is material to us, as we're building an impact certificate market (a major component in retroactive public goods funding). If the answer is yes, we might actually want to abort, or — more likely — I'd want to put a lot of work into helping to shore up mechanisms for making it sensitive to long-term negative externalities.
Another phrasing: Are there any dependencies for AGI that private/academic AI/AGI projects are failing to coordinate to produce, but that near-future foundations for developing free software would produce?
I first arrived at this question with my economist hat on, and the answer was "of course there would be", because knowledge and software infrastructure are non-excludable goods (useful to many, but not profitable to release). But then my collaborators suggested that I take the economist hat off and remember what's actually happening in reality, where, oh yeah, it genuinely seems like all of the open-source code, software infrastructure, and knowledge required for AI is being produced and freely released by private actors, in which case our promoting public goods markets couldn't make things worse. (Sub-question: why is that happening?)
But it's possible that that's not actually happening; it could be a streetlight effect: maybe I've only come to think that all of the progress is being publicly released because I don't see all of the stuff that isn't! Maybe there are a lot of coordination problems going on in the background that are holding back progress; maybe OpenAI and DeepMind, the algorithmic traders, DJI, and defense researchers are all doing a lot of huge stuff that isn't being shared and fitted together, but a lot of it would end up in the public cauldron if an impact cert market existed. I wouldn't know! Can we rule it out?
It would be really great to hear on this from anyone working on AI, AGI, or alignment. When you're working in an engineering field, you know what the missing pieces are, you know where people are failing to coordinate, and you probably already know whether there's a lot of crucial work that no individual player has an incentive to do.
Keep your economist hat on! For-profit companies release useful open source all the time, for self-interested reasons: it helps them recruit and retain engineers, standardizes the ecosystem around their own tools, commoditizes the complements to their core products, and lets outside contributors share the maintenance burden.
This is sufficient incentive that, in the case of ML tools, volunteers just don't have the resources to keep up with corporate projects. They still exist, but e.g. MyGrad is not PyTorch. For a deeper treatment, I'd suggest reading Working in Public (Nadia Eghbal) for a contemporary picture of how open-source development works, then maybe The Cathedral and the Bazaar (Eric Raymond) for the historical/founding-myth view.
I'd generally expect impact-motivated open source foundations to avoid competing directly with big tech, and instead to build out under-resourced parts of the ecosystem, like testing and verification. Regardless of the specifics here, to the extent that they work, impact certificates invoke the unilateralist's curse, so you really do need to consider negative externalities.
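To make that concrete, here's a toy Monte Carlo sketch of the unilateralist's curse (my illustration, not anything from the market design itself): each of N independent funders noisily estimates the value of an action that is in fact net-negative, and the action happens if any one of them concludes it's positive. The function name and all the numbers below are illustrative assumptions.

```python
# Toy Monte Carlo sketch of the unilateralist's curse: a "release" happens if
# ANY one of n independent funders mis-estimates a net-negative action as
# positive.  All parameters here are illustrative assumptions, not data.
import random

def p_unilateral_release(n_funders: int, true_value: float = -1.0,
                         noise: float = 2.0, trials: int = 100_000) -> float:
    """Probability that at least one funder's noisy estimate is positive."""
    hits = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise) for _ in range(n_funders))
        if any(est > 0 for est in estimates):  # one optimist is enough to act
            hits += 1
    return hits / trials

for n in (1, 5, 20):
    print(f"{n:>2} funders -> P(bad release) ~ {p_unilateral_release(n):.2f}")
# More independent deciders means action is increasingly driven by whoever
# overestimates the most, which is why negative externalities need pricing in.
```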
I entirely agree that private contributions to open source are far below the socially optimal level of public goods funding - I'd just expect that the first few billion dollars would be best spent on producing neglected goods like language-level improvements, testing, debugging, and verification tooling, where most of the value is not captured by the producer. The state of the art in these areas is mostly set by individuals or small teams, and it would be easy to massively scale it up given funding.
(disclosure: I got annoyed enough by this that I've tried to commercialize HypoFuzz, specifically ...)
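For a concrete flavor of the kind of under-funded tooling I mean, here's a minimal property-based test using Hypothesis, the open-source library that HypoFuzz builds on. The run-length encode/decode functions are a made-up example; only the `@given`/`strategies` API is real Hypothesis.

```python
# Minimal property-based test with Hypothesis.  The run_length_* functions are
# a hypothetical system under test; the property checked is that decoding
# inverts encoding for every string Hypothesis generates.
from hypothesis import given, strategies as st

def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated characters into (char, count) pairs."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * count for ch, count in pairs)

@given(st.text())  # Hypothesis generates diverse strings and shrinks failures
def test_decode_inverts_encode(s: str) -> None:
    assert run_length_decode(run_length_encode(s)) == s

# Runs under pytest, or directly: test_decode_inverts_encode()
```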