I think that's right. Outer alignment is ensuring the objective we specify actually captures what humans want. Inner alignment is getting the mesa-optimizers (agents) that training produces aligned with that specified objective.
Advice: Humidifiers. We need them now, and everywhere that people gather in temperate climates. There's a reason why the common cold, influenza, and indeed SARS all die down as summer approaches in seasonal climates: keeping relative humidity above 40% is one of the most effective ways to control airborne viruses.
Influenza season has ended every spring (https://journals.plos.org/plospathogens/article/file?type=printable&id=10.1371/journal.ppat.1003194) since long before PCR tests, masks, or alcohol sprays. Relative humidity under 30%, which we regularly encounter in heated buildings during winter, rarely occurs in nature. It degrades our immune defenses and extends how long airborne viruses remain infectious.
"The present study allowed us to assess viral infectivity under various levels of relative humidity and showed that one hour after coughing, ∼5 times more virus remains infectious at 7–23% relative humidity (RH) than at ≥43% RH." https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3583861/
"Low ambient humidity impairs barrier function and innate resistance against influenza infection" https://www.pnas.org/content/pnas/116/22/10905.full.pdf
We need to get the news out. I wrote a Medium article on it: https://medium.com/@crissmanloomis/the-end-of-the-covid-19-outbreak-d578092282c8
I see. So the agent issue I raised above is a sub-problem of inner alignment overall.
In particular, I was addressing deceptively aligned mesa-optimizers, as discussed here: https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers