[cross-posted from EAF]
Agreed that extreme power concentration is an important problem, and this is a solid writeup.
Regarding ways to reduce risk: my favorite solution (really a stopgap) to extreme power concentration is to ban ASI [until we know how to use it well], an option notably absent from the article's list. I wrote more about my views here, and about how I wish people would stop ignoring this option. It's bad that the 80K article did not consider what is, IMO, the best idea.
I found surprisingly little analysis of how epistemic interference could happen concretely, how big a deal it is, or how we could stop it. I hope I just failed to find all the great existing work on this; I think this might be representative of prior art on this topic.
On a more serious note, strongly agreed that this is an important and apparently neglected problem.
I recently wrote 80k’s new problem profile on extreme power concentration (with a lot of help from others - see the acknowledgements at the bottom).
It’s meant to be a systematic introduction to the risk of AI-enabled power concentration, where AI enables a small group of humans to amass huge amounts of unchecked power over everyone else. It’s primarily aimed at people who are new to the topic, but I think it’s also one of the only write-ups there are on this overall risk,[1] so it might be interesting to others, too.
Briefly, the piece argues that:
That’s my best shot at summarising the risk of extreme power concentration at the moment. I’ve tried to be balanced and not too opinionated, but I expect many people will disagree with the way I’ve framed it. Partly this is because people haven’t been thinking seriously about extreme power concentration for very long, and there isn’t yet a consensus way of thinking about it. To give a flavour of some of the different views on power concentration:
So you shouldn’t read the problem profile as an authoritative, consensus view on power concentration - it’s more of a waymarker: my best attempt at an interim overview of a risk that I hope we will soon understand much more clearly.
Some salient things about extreme power concentration that I wish we understood better:
(For more musings on power concentration, you can listen to this podcast, where Nora Ammann and I discuss our different takes on the topic.)
If you have thoughts on any of those things, please comment with them! And if you want to contribute to this area, consider:
Thanks to Nora Ammann, Adam Bales, Owen Cotton-Barratt, Tom Davidson, David Duvenaud, Holden Karnofsky, Arden Koehler, Daniel Kokotajlo, and Liam Patell for a mixture of comments, discussion, disagreement, and moral support.
I think AI-enabled coups, gradual disempowerment, and the intelligence curse are the best pieces of work on power concentration so far, but each analyses only a subset of the scenario space. I’m sure my problem profile does, too - but it at least tries to cover all of the ground in those papers, albeit at a very high level. ↩︎
A few different complaints about the distinction that I’ve heard: ↩︎
(This is just an opportunistic breakdown based on the papers I like. I’d be surprised if it’s actually the best way to carve up the space, so probably there’s a better version of this question.) ↩︎
This is a form run by Forethought, but we’re in touch with other researchers in the power concentration space and intend to forward people on where relevant. We’re not promising to get back to everyone, but in some cases we might be able to help with funding, mentorship or other kinds of support. ↩︎