This is a linkpost for https://control-inversion.ai/
Here is a nice essay from FLI's Anthony.
It takes the form of a nicely designed website (and is also available as a more conventional PDF).
It presents a reasonable operationalisation of what "control" means. Using that operationalisation, it argues that we are on track to lose control of our AI systems, regardless of how aligned they may appear to be.
I like it because it shows how "alignment" can often be a red herring. Most people do not want to take the gamble of letting uncontrollable systems decide our fate, regardless of how aligned anyone claims those systems are.
Cheers!