Suppose we succeed and bring AI to a screeching halt.
Then what? What direction do we want to go? Can we actually stop AI from advancing at all? For how long? What are we going to do with whatever extra time we have to make the future a safer place if/when we resume? How will we decide when to resume? What sort of future are we ultimately aiming for?
There are a lot of questions like these that people sometimes want answered before even considering stopping AI. I don’t think we need to answer them before trying to stop it.
I have an analogy: Suppose your house is burning down. You probably want to put out the fire before thinking about other things, like whether you’ll keep living there or how to prevent the next fire. The basic order of operations is:
Put out the fire.
Everything else.
OK, I can do a bit better than that:
Put out the fire.
Check that the fire is actually out and not still smoldering somewhere.
Assess the damage, both from the fire itself and from your efforts to put it out.
Understand why the fire started and what preventative measures should now be taken. Do you need more fire extinguishers or fire alarms? Should you have a policy of setting a timer when you leave something cooking on the stove?
Decide whether or not to turn on the burner again.
I think we can take a similar attitude toward stopping AI. At least, I think that attitude should be acceptable, and it’s something most people could get behind. When I think about rallying people to stop AI, it’s about finding common ground; the other parts of the picture might be a lot more contentious. For instance, people might see very different roles for AI in society.
So I basically want to punt the question of what to do after we stop to… after we stop! This is something everyone should get a say in, and I think it will take a while to reach the baseline level of AI literacy needed to have that conversation.
That being said, I do have some thoughts about what should come during an indefinite pause…
We should have some sort of reckoning where we deal with the broader situation that got us to the point of almost eliminating our species.
We should aim to establish processes that will govern the pace and direction of AI progress. We should not be making decisions about how, and how fast, to develop and deploy AI based on competitive pressure, but on the collective interest.
More broadly, we should improve collective decision-making and collective sense-making; I view these two problems as at the core of the AI race.
Finally, we should consider a new “bill of rights” for the information age. We have a backlog of challenging problems around privacy, accountability, and basic human dignity that have arisen from technologies that predate AI; many of these are or will be made worse by AI. A few quick ideas for this are:
The right to talk to a person when interacting with a large company or organization.
The right to appeal important decisions being made about you to a human.
The right to not create an account when one is not necessary.
The right to avoid interacting with manipulative technology. Like advertisements, AI systems can be trained specifically to influence people in particular directions.
Prohibitions on impersonating people with AI and protections for likenesses.
Data ownership: When an AI company uses your data, you get compensated; you can also opt out and deny usage of it altogether.
In conclusion: the people I talk to are usually focused on what technical research progress we could make during an AI pause, but personally, I’m more focused on how we can use this time to institute helpful social reforms. Overall, I’m not particularly concerned with answering questions about what happens after a pause, unless that uncertainty stops the pause from happening in the first place. I think we can sort things out later. It’s great to have a plan, but we shouldn’t let not having one stop us. The house is on fire!