On one hand, it would capture part of the negative externality in the market pricing. And the money could be used to finance AI safety research.

On the other hand, AI companies might come to see their responsibility as limited to paying that tax, and nothing more.

It might also be hard to implement and enforce. What counts as AI? Which part of a company's profit was made because of the AI component?

What other considerations are there? What do you think of this idea? How desirable and feasible is it? Has anyone written on this?

Motivations for asking:

  • Get people to think about this
  • Improve my understanding of economics

There's a saying that when you pitch to investors you say you're doing AI, when you hire programmers you tell them they'll do machine learning, and when they actually work for you, they do linear regression.

Practically, such a tax would add a lot of regulatory complexity to the sector and discourage innovation, pushing people to stick to safe linear regression that can be defended as not being AI.

It seems like it would create a lot of deadweight loss, and unless your intent is to slow down tech development, I don't think it's sensible.
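The deadweight-loss point can be made concrete with the textbook linear supply-and-demand model. This is only an illustrative sketch with made-up parameters, not an estimate for the AI sector; the key takeaway is that the loss grows with the square of the tax rate.

```python
# Hedged sketch: deadweight loss of a per-unit tax under linear
# supply and demand. Parameters are illustrative, not empirical.

def deadweight_loss(a, b, c, d, t):
    """Demand: P = a - b*Q; supply: P = c + d*Q; per-unit tax t.
    The loss is the triangle between the untaxed and taxed quantities."""
    q_free = (a - c) / (b + d)        # equilibrium quantity without the tax
    q_taxed = (a - c - t) / (b + d)   # quantity after the tax wedge
    return 0.5 * t * (q_free - q_taxed)  # simplifies to t**2 / (2 * (b + d))

# Doubling the tax quadruples the deadweight loss:
print(deadweight_loss(100, 1, 20, 1, 10))  # 25.0
print(deadweight_loss(100, 1, 20, 1, 20))  # 100.0
```

The quadratic growth is why a narrowly targeted but high tax tends to distort behavior more than a broad, low one.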