Long-horizon agency / strategic competence approximately does not exist among humans, even the smartest ones. With very few exceptions, billionaires spend or give away their money haphazardly, philosophers don't bother to think about the long-term implications of AI for philosophy production (positive or negative), and Terence Tao spends his time wireheading on abstract math instead of doing anything remotely resembling instrumental convergence. Contrary to my youthful expectations (formed upon reading Vernor Vinge), there are no university departments filled with super-geniuses charting a path for humanity to safely navigate the Singularity.
Aside from this, humans have a bunch of other safety problems: being bad at philosophy, being easy to manipulate, having strange and unstable values, and tending to ignore risks they create (because acknowledging those risks would be bad for one's status). So if you try to improve people's agency, you likely just end up producing people like the founders of FTX and OpenAI.
What about getting help from AI? Well, AIs seem to suffer from many of the same safety problems, only in more severe forms. E.g., current AI capabilities are even more skewed toward short-horizon, easily verifiable tasks like math and coding. AIs seem even more prone to reward gaming, are even worse at doing philosophy, are liable to have even more alien values, etc.
Both AI and human safety problems seem to have this interlocking nature: there are a bunch of different safety problems, and solving some but not all of them at the same time can make the overall situation worse. (For example, solving AI intent alignment allows humanity to do more damage to itself with AI help, if AI doesn't also provide competent strategic and philosophical assistance; but increasing AI strategic competence risks allowing a misaligned AI to take over more easily.) Recognizing and navigating this feature demands a high level of strategic competence, which is just what we don't have.
I've been supportive of AI p