"Lord, give me chastity and continence, but not yet"
-- St. Augustine
It seems to me that the current rate of progress in AI is largely fine when measured in absolute increments of capability. We are not afraid of next month's progress; in fact, we as a society are mostly enthusiastic about it. The benefits so far have clearly outweighed the downsides. Some downsides have started to appear: chatbots, especially GPT-4o, can induce psychosis in some vulnerable people. But that seems manageable; maybe it could be fixed just by publishing a statement like "Mitigating the risk of psychosis from the free version of ChatGPT should be OpenAI's priority alongside other societal-scale risks such as helping create biological and chemical weapons" and promoting it widely online (especially on Twitter).
Current AI progress seems largely fine.
But it is exponential.
One day we might think, "okay, this is getting too fast". It seems prudent to me to move from the "exponential growth" paradigm to a "linear growth" paradigm before then. That's why I propose the following governance idea:
Create a "global central bank for AI" tasked with keeping AI-driven automation of the global economy on a linear growth path. The target could be to automate, e.g., 0.4–0.6 percentage points of the global economy with AI every year. How could this "global central bank for AI" limit AI's growth? It would need to be researched, but I have in mind something like this (from least severe to most severe):
The above is just a draft. I welcome feedback and, if you judge the idea to be worthwhile, I'm looking for collaborators to develop it further. My goal is for this idea either to be destroyed by the truth or to be developed into a governance report & recommendation. You can contact me privately at zaborpoczta(at)gmail(dot)com.
Q: Why economic indicators and not, e.g., benchmarks?
A: Economic impacts seem like the ultimate measure: the hardest to game and the most objective. Even AI sceptics such as Robin Hanson would accept them. Also, everyone understands what unemployment is.
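To make the targeting idea a bit more concrete, here is a toy sketch in Python. Everything in it is my own illustrative assumption rather than part of the proposal: the specific indicator (the AI-automated share of global GDP), the exact 0.4–0.6 percentage-point band, and the three "stances". It only shows how a "central bank for AI" might read an economic indicator against a target, not how anything would be enforced.

```python
# Toy sketch, not the proposal itself: the indicator (AI-automated share of
# global GDP), the exact band, and the three "stances" are all my assumptions.

def policy_stance(share_last_year: float, share_this_year: float,
                  target_band_pp: tuple[float, float] = (0.4, 0.6)) -> str:
    """Compare one year's automation growth (in percentage points) to the target band."""
    growth_pp = share_this_year - share_last_year
    low, high = target_band_pp
    if growth_pp > high:
        return "tighten"   # automation grew faster than the band allows
    if growth_pp < low:
        return "ease"      # automation grew slower than intended
    return "hold"          # within the target band


# Example: the AI share of the global economy went from 2.0% to 2.9% in one year.
print(policy_stance(2.0, 2.9))   # -> "tighten" (0.9 pp exceeds the 0.6 pp ceiling)
```

The point of the analogy is only that, like an inflation-targeting central bank, the institution would react to a measured economic quantity rather than to benchmark scores or compute counts directly.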
Q: Why move from exponential to linear growth instead of just stopping?
A: Many (most?) people don't want to stop now. Even I don't want to stop completely now, and my P(doom) is in the double digits. Also, how do we solve alignment after shutting down all the GPUs? Invest in improving humans via synthetic biology until we can create people with IQs more than 15 points above John von Neumann's, then take out our pens and paper, discover the True Nature of Intelligence, and program a safe seed AI in <75 megabytes of Flare code? If that's realistic, then sure... My proposal could then serve as a meaningful step toward implementing an "off-switch" of the kind proposed by MIRI. Either way, it seems like an improvement over the status quo: we could limit the chances of a rapid intelligence explosion by limiting hardware, and we could gain valuable time for alignment if we go from, let's say, a global economy that is 2% AI to one that is 50% AI over 80 years (80 years of 0.6-percentage-point increments). Maybe just in time to prevent declining fertility from causing a new "dark ages" period?
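For what it's worth, here is the arithmetic behind that 80-year figure, plus an exponential path for contrast. The 2-year doubling time in the second part is purely my illustrative assumption, not a claim about actual AI growth rates.

```python
# Checking the numbers above. The linear path follows the proposed 0.6 pp/year
# cap; the exponential path's 2-year doubling time is an assumption picked
# purely for contrast.

start_share = 2.0    # today's AI share of the global economy, in %, per the text
step_pp = 0.6        # percentage points of additional automation allowed per year
years = 80

linear_final = start_share + step_pp * years
print(f"Linear: {linear_final:.0f}% of the economy after {years} years")   # -> 50%

share, year = start_share, 0
while share < 50.0:
    share *= 2 ** 0.5    # doubling every 2 years = multiplying by sqrt(2) each year
    year += 1
print(f"Exponential (assumed 2-year doubling): passes 50% after {year} years")  # -> 10
```

Under those assumptions, the capped path takes 80 years to reach the same share of the economy that an uncapped doubling path would reach in about a decade, which is the extra alignment time the proposal is trying to buy.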