OpenAI has announced the approach it intends to use to ensure humans stay in control of AIs smarter than they are:
> Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.
>
> To align the first automated alignment researcher, we will need to 1) develop a scalable training method, 2) validate the resulting model, and 3) stress test our entire alignment pipeline:
When Jan Leike (OpenAI's head of alignment) appeared on the AXRP podcast, the host asked how they plan to align the automated alignment researcher. Jan didn't appear to understand the question (which was the first one that had occurred to me). That doesn't inspire confidence.
So the plan is to add layers of human and dubiously-aligned-human-level-AI intervention in an effort to discover how to keep AI aligned. That is to say, "If we throw enough additional complexity at it, the systems that we already don't understand won't hurt us!"
Like the man said, "the bureaucratic mentality is the only constant in the universe".
This plan, as currently worded, has me somewhat concerned.
I think that using AI to solve alignment should be possible, but to me this relies on it not being agentic (i.e., not making decisions based on satisfying preferences about the future of the world).
But hey, maybe the safety stuff works and avoids the researcher being agentic.
Even then, for the ultimate aligned AI to wind up "aligned", it does have to care about the future at least indirectly, via human preferences about the future. An aligned AI doesn't have to care about the future directly (i.e., for any reason other than human preferences about it), and in my opinion, to be *really* aligned, it cannot. But if it is designed without an understanding of this, so that people simply try to instill preferences that look good to humans and elicit actions that look good to humans in the short run, then it will end up with independent preferences about the future, and those will inevitably not match humans' preferences perfectly.