Epistemic status: Hare-brained scheme

Suppose the state of Arkansas decided to levy a $1000 fine on anyone caught attempting to push the state of AI forward. Further, suppose Arkansas decreed that this fine would be paid directly by those found guilty to the people who turned them in - a 'fine bounty', if you will.

$1000 is not a lot of money in the grand scheme of things. However, like most people in Arkansas, I don't have $1000 to spare, so I would simply not risk working on advancing AI. Nor would I join an organization involved in such activities, lest I be caught in the crossfire: the more coworkers I had, the more likely it would be that one bad apple would turn everyone in at once to collect a $1000 × n reward. (Turning themselves in would just mean paying the $1000 to themselves, which makes 'getting away with' it a lot easier.)
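The defection arithmetic above can be made explicit with a toy calculation. All numbers here are illustrative, following the post's flat $1000 fine; the function name and the modeling choice (an informer who turns in their whole team, themselves included) are my own framing, not Hanson's.

```python
FINE = 1000  # flat fine (and matching bounty) per person found guilty

def informer_payoff(team_size: int) -> int:
    """Net payoff to one insider who turns in the entire team,
    themselves included. They collect a bounty on every member,
    but their own fine is paid to themselves, so it washes out."""
    bounties_collected = FINE * team_size  # includes the bounty on themselves
    own_fine_paid = FINE                   # paid back to themselves
    return bounties_collected - own_fine_paid

# The incentive to defect grows linearly with team size:
for n in (1, 2, 5, 10):
    print(f"team of {n}: defector nets ${informer_payoff(n)}")
```

A solo researcher nets nothing by self-reporting, but every additional coworker adds another $1000 to the pot - which is exactly why larger organizations are more fragile under this scheme.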

This is the essential thrust of Robin Hanson's Privately Enforced & Punished Crime; in a follow-up post he calls this the fine-insured bounty (FIB) crime law system. He spells out the edge cases of the system in general much more cogently there, so I will focus on a few details which I think make this system unusually well-suited to the problem of deterring technological development, such as AI development.

  1. It scales. It is very hard to develop new AI capabilities without someone suspecting what you're up to. Even if they cannot prove it, the insurance mechanism in the fine-insured bounty system means that even a low-fidelity suspicion carries considerable economic value. Much like someone with a history of running red lights may pay a higher car insurance premium, someone with a history of buying GPUs may pay a higher AI-bounty insurance premium. (Someone running a virus which fingerprints GPU activity at scale likely has a valuable source of information to sell to insurers and prospective bounty hunters alike.)
  2. It chills. To use a leaky pipeline argument: few people are smart enough to advance AI capabilities in the first place; of those, few possess the conscientiousness to build out such a project by themselves; and of those, the world is already their oyster, and almost all of them have found better things to do with their time than labor in solitude. The chilling effect of knowing this will be your fate so long as you pursue it should not be underestimated. Power is much more about what people elect not to do than about what people do.
  3. It thrills. The median voter makes less money than the mean voter; much of the enthusiasm around redistributive taxation policies can be seen as flowing from this fact. Putting power in the hands of ordinary people to find, catch, and make a living off of the very people who have, for decades, been making far more money than them doing what appears to be easier and certainly less immediately dangerous work is something many people would be interested in. The hunted becomes the hunter -- who wouldn't want that?
hunter x hunter

Jokes aside, it's also worth noting that $1000 is really, really low, and there's no particular reason for that number. The higher the better, if you truly think this is a world-ending catastrophe; high enough bounties could even lead to people bringing outsiders into the FIB-supporting state in order to extract the bounty from them. Just because you live in Massachusetts doesn't mean you're safe on your winter trip to get away from the slush.
