I'm going to ask even though I didn't read it, because this isn't addressed in the overview and it's my primary reason for asking: if we solve alignment, do we die anyway?
What's the scheme for stopping AGIs from working outside of this system? It seems like any scheme of this type only holds if it can prevent every private entity from obtaining and running enough GPUs to advance their AGI to ASI. And that amount goes down as algorithms and research improve.
Robotics adequate to bootstrap hidden fabrication is the other route; while that will take a little longer, it still seems to put a sharp time limit on all plans of this type.
I plugged your middle paragraph into the provided AI chatbot, since that's its point. Here's the response:
Currently, no technical or governance scheme can reliably guarantee that all private entities are prevented from developing or running AGI outside official oversight. Even strong international agreements or hardware controls can be circumvented by determined actors, especially as required compute drops with research progress. Without ubiquitous surveillance or global control over compute, models, and researchers, a determined group could realistically “go rogue,” meaning any system that depends on absolute prevention is vulnerable to secret efforts that might reach AGI/ASI first, potentially unleashing unaligned or unsafe systems beyond collective control.
Sounds kinda sycophantic, e.g. you'd only need global control over one of the three (compute, models, or researchers), not all of them.
Foresight Institute has launched a tech tree for secure multipolar AI. It maps potential technical paths toward that goal and is designed to help researchers, funders, and policymakers navigate the field, identify key bottlenecks, and coordinate efforts more effectively. The tech tree covers five critical goals – building a cooperative AI ecosystem, privacy-preserving AI collaboration, secure and robust AI, transparent and verifiable AI, and aligned AI agents – and breaks each down into technical capabilities, challenges, and potential solutions.
A tech tree maps a field’s trajectory and highlights key bottlenecks, enabling better coordination across research, funding, and talent. It reveals the how: critical milestones, open problems, major contributors, and leverage points for progress.
The Secure Multipolar AI Tech Tree gives an overview of the technical landscape for building secure AI systems that are capable of cooperating safely with humans and other AI agents. The goal of the tech tree is to clarify the steps needed for robust AI collaboration – serving both as an entry point for newcomers and a strategic tool for funders and experts navigating the space.
The tech tree covers five goals critical for robust AI collaboration:

- Building a cooperative AI ecosystem
- Privacy-preserving AI collaboration
- Secure and robust AI
- Transparent and verifiable AI
- Aligned AI agents
In the tech tree, each goal branches into specific technical capabilities – the concrete technologies needed to make progress toward that objective. For instance, privacy-preserving AI collaboration includes capabilities like secure multi-party computation and federated learning protocols. These capabilities then connect to current open challenges (the specific problems blocking progress) and to potential solutions to explore. The tree also identifies who's working on each piece – which research labs, companies, and projects are tackling specific subgoals – making it easier to track progress and spot gaps, and offering a detailed map of the field’s current state and future trajectories.
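As a concrete illustration of one such capability, here is a minimal sketch of additive secret sharing, a basic building block of secure multi-party computation: several parties jointly compute a sum without any of them revealing its private input. This is a toy example for intuition only, not code from the tech tree; the modulus and party count are arbitrary choices.

```python
# Toy additive secret sharing: each party splits its private value into
# random shares that sum to the value modulo a large prime. No single
# share reveals anything about the value; combining all shares reveals
# only the total.
import secrets

PRIME = 2**61 - 1  # field modulus (arbitrary large prime for this demo)


def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares over the field."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % PRIME
    return shares + [last]


def secure_sum(private_values: list[int]) -> int:
    """Each party shares its value; party i collects the i-th share from
    every party and sums locally, so only the grand total is revealed."""
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    partial_sums = [sum(all_shares[p][i] for p in range(n)) % PRIME
                    for i in range(n)]
    return sum(partial_sums) % PRIME


if __name__ == "__main__":
    # Three parties compute the sum of their inputs without revealing them.
    print(secure_sum([12, 30, 7]))  # -> 49
```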
The tech tree includes a video explaining how it works and how you can use it. It also has an AI chatbot to help users explore the aspects most relevant to them.
E.g. the overview of the tech tree at large shows black nodes containing technological capabilities, orange nodes containing challenges, and green nodes containing solutions.
E.g. if you pick the “privacy-preserving AI collaboration” branch but don’t want to explore each node on the branch yourself, the AI chatbot will walk you through it node by node (a minimal sketch of such a branch walk appears after these examples).
E.g. after discovering OpenMined as an actor in the secure AI space, you can ask the bot to tell you more about the non-profit’s projects.
E.g. say you become interested in the challenge “multi-agent learning can devolve into selfish or oscillatory behavior instead of converging to stable cooperation” (the top orange challenge node); you can use the bot to explore solution paths (a toy simulation of this failure mode follows below).
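To make the “node by node” branch walk from the example above concrete, here is a minimal sketch of how such a branch might be represented and traversed in code. The node kinds and the goal/capability labels come from this post; the challenge and solution labels, and the schema itself, are hypothetical stand-ins rather than the tree’s actual data model.

```python
# Hypothetical node structure for one tech-tree branch. The node kinds
# (goal / capability / challenge / solution) follow the post; the
# challenge and solution labels below are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Node:
    label: str
    kind: str  # "goal", "capability", "challenge", or "solution"
    children: list["Node"] = field(default_factory=list)


def walk(node: Node, depth: int = 0) -> None:
    """Visit a branch node by node, as the chatbot tour does."""
    print("  " * depth + f"[{node.kind}] {node.label}")
    for child in node.children:
        walk(child, depth + 1)


privacy_branch = Node(
    "Privacy-preserving AI collaboration", "goal",
    [Node("Secure multi-party computation", "capability",
          [Node("MPC overhead limits model scale", "challenge",  # hypothetical
                [Node("Hardware acceleration for MPC", "solution")])]),  # hypothetical
     Node("Federated learning protocols", "capability")],
)

walk(privacy_branch)
```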
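And to see why that challenge is real, here is a toy simulation, with illustrative payoffs and hyperparameters of my own choosing rather than anything from the tree: two independent epsilon-greedy learners play an iterated prisoner’s dilemma, each updating only on its own reward, and play collapses to mutual defection instead of stable cooperation.

```python
# Two independent epsilon-greedy learners in an iterated prisoner's
# dilemma. Each learns selfishly from its own payoff, so defection
# dominates and cooperation never stabilizes.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ACTIONS = ["C", "D"]
ALPHA, EPSILON = 0.1, 0.1

q = [{"C": 0.0, "D": 0.0}, {"C": 0.0, "D": 0.0}]  # one value table per agent


def choose(q_table: dict) -> str:
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[a])


for step in range(20000):
    a0, a1 = choose(q[0]), choose(q[1])
    r0, r1 = PAYOFF[(a0, a1)]
    # Each agent updates only on its own reward: purely selfish learning.
    q[0][a0] += ALPHA * (r0 - q[0][a0])
    q[1][a1] += ALPHA * (r1 - q[1][a1])

print(q)  # "D" ends up valued highest for both agents: mutual defection.
```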
Explore the tech tree here. We have also hosted a seminar presenting it in depth.
Acknowledgements
We would like to thank the Future of Life Institute for funding this final iteration of the tech tree, Linda Petrini for building it, and Martin Karlsson for providing the tree’s Coordination Network infrastructure.