I think the common view on this is that commoditization would foster fierce competition, creating an environment where companies are optimized to respond to market pressure and so cut corners on safety and other public goods, whereas a monopoly would have more slack to care about those things.

That seems right to me, and probably the bulk of the consequence weight goes there when thinking about monopolizing AI vs commoditizing it.

But other considerations I had were:

  • if AI is commoditized, then there might be less AI R&D
  • if AI is commoditized, then the economic surplus might go to consumers instead of shareholders (hence preventing massive economic inequality)

Maybe all the answers are in "Strategic Implications of Openness in AI Development" and I should reread the paper (it's been a while).

Motivation for asking: I might have some vague proto-ideas on how to commoditize it.


Would be pretty interested in your ideas about how to commoditize AI.


I'm not clear what you mean here by "AI". Full AGI, or just AI like the present (various specialized systems, with usually, but not always, subhuman capabilities)? If the former, it's hard to speculate.

The latter, however, is already being commoditized, and I include it as an example. FANG practically compete to see who can give away more source code, datasets, and research, and open-source everything, to the extent that when someone releases a new reinforcement learning framework or library I don't even bother announcing it on /r/reinforcementlearning unless there's something special about it. FANG is not threatened by open-sourcing everything because all the benefits come from the integration into their ecosystem: running on the ASICs built into your smartphone, with access to all the data in your account and the private data in the FANG datacenters, which power your upfront payments and then advertising eyeballs later. Someone who invents a better image classifier cannot in any meaningful way threaten Google's Android browser advertising revenues, but does help make Android image apps somewhat better and increases Google revenue indirectly, and so on. Thus, Google can release not just new research but the actual models themselves, like EfficientNet, and it's no problem. You can provide a free competitor, but Google provides something which is "better than free".

As far as 'tool AI' or 'AI services' go, this seems to be the overall theme. Most tasks themselves are not too valuable. The question is what can you build on top of or around it, and what does it unlock?
