This is a generally hard problem. You may find this old post of mine helpful: https://www.neelnanda.io/blog/44-agency
There seems to be a disconnect between the idea of agency you and these other articles are pursuing and what your specific goals are. "Agency" can mean a lot of different things to different people, but the version the LW community seems to coalesce around is something like "recognizing when irrational factors like social norms and emotional influences are stopping you from pursuing your goals as effectively as possible, and changing your behavior so that you are no longer restrained by those factors". If that's what you mean, the article you cited, "Seven ways...", is probably as close as you're going to get; there's no magic bullet here. I would suggest doing post-mortem reviews of your day-to-day activities, identifying moments where, in hindsight, you let an irrational factor get in your way and should have behaved more "agently", and using those as teaching moments going forward.
Another type of "agency" that's equally valid and fits the definition well, but doesn't seem to be what you're talking about, is agency in the sense of having the knowledge and skills to understand and interact with the systems around you in ways most people don't bother to. Orienting Towards Wizard Power is an article that does a great job on this, and if you were just asking for concrete suggestions to be more agenty in a vacuum, I would recommend it; but it doesn't seem very relevant to AI Safety, which is what you seem to be focused on.
And that's why I think this article is something of a contradiction. You're resting on the assumption that "everyone being more agenty" is what the AI Safety movement needs, and I don't think that's true. We already have established paths for people trying to devote themselves to the cause as effectively as possible: study AI and join the research effort, Earn to Give and devote yourself to making money to donate to the cause, or focus on the social/political/marketing side and tackle the hard problem of convincing the public that AI Safety is a pressing issue. Building more agency as an individual will help you somewhat in all of these pursuits, just as in any others, but I don't see why you have identified it as the main thing holding you and others back.
LessWrong as a community idolizes agency. However, the content I have seen is full of generic exhortations to agency and short on concrete, implementable strategies for building it. The CFAR Handbook, and the Hammertime sequence based on an older version of it, are among the best resources on the subject I have found, but even they seem too focused on feelings and self-assessment and not enough on carrying out a concrete plan to massively improve your agency on a short timeframe. As such, I thought it would be good to put out a call for resources on building agency.
I expect this post to serve two purposes. First, it will be a compilation of the best resources I have found on building agency. Second, I'll use it to explain which ones have worked best for me, where they fall short, and how they might be improved.
Resources I have seen
Nicholas Kross posted a question titled "Ways to be more agenty?" which had some good answers, but I can't find the link, and in any case it proposed little that was new. It did, however, point me to some of the resources I include below.
The anonymous post "Seven ways to be unstoppably agentic" is generally close to what I want here, but its advice is very broad and not immediately actionable. In addition, the author has since cautioned against trusting the post too much.
Neel Nanda wrote a post on becoming a person who actually does things. It focuses on forming a favorable self-identity, which I see as important for bringing your System 1 in line with your System 2, but the challenge remains: the advice is very broad and not especially actionable.
I also liked Dwarkesh Patel's post "Examples of barbell strategies" (original link broken, archive here), but that's more a collection of heterodox advice than a comprehensive program for building agency.
Parts of the book Inadequate Equilibria also fall into this group, but they too come too close to the "abstract praise of agency" trap. The Craft and the Community sequence is similar, and similarly disappointing in its lack of a clear roadmap.
An ideal resource
Ideally, the resource I'm imagining would be some sort of high-intensity program with a clear connection back to real-world performance on a range of tasks that require high agency. It seems to me that CFAR has come close, but has historically had trouble with the real-world feedback issue. They've been moving in that direction, but I'm unclear on exactly what their plans are.
For instance, a program for forecasting would likely use calibration training or exercises similar to those on Clearer Thinking (see the sketch below). However, I'm not sure what the analogous tests for agency would be, and they seem hard to build, because a key part of high agency is recognizing and taking advantage of opportunities that are non-obvious to other people.
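To make "calibration training" concrete: the core exercise is comparing the probabilities you state with how often the predicted events actually happen. Here is a minimal sketch in Python of the scoring involved; the forecast data and the ten-bucket grouping are invented for illustration, not taken from Clearer Thinking or any particular program.

```python
# A minimal sketch of calibration scoring: forecasts are (stated
# probability, outcome) pairs, where outcome is 1 if the event
# happened and 0 if it didn't. Data below is invented for illustration.
from collections import defaultdict

def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes;
    lower is better, and always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def calibration_table(forecasts, n_buckets=10):
    """Group forecasts into probability buckets and compare the average
    stated probability with the observed frequency in each bucket."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[min(int(p * n_buckets), n_buckets - 1)].append((p, outcome))
    for b in sorted(buckets):
        entries = buckets[b]
        mean_p = sum(p for p, _ in entries) / len(entries)
        freq = sum(o for _, o in entries) / len(entries)
        print(f"stated ~{mean_p:.0%}, observed {freq:.0%} (n={len(entries)})")

# Hypothetical week of predictions:
history = [(0.9, 1), (0.9, 1), (0.8, 0), (0.7, 1), (0.3, 0), (0.2, 1)]
print(f"Brier score: {brier_score(history):.3f}")
calibration_table(history)
```

For a well-calibrated forecaster, the stated and observed columns converge as the sample grows. The analogous table for agency is exactly what I don't know how to construct.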
As an example of successful leadership training, something close to how military officers are trained would probably do well here. However, I'm not sure that "agency" in the broad sense is trainable the way military leadership is, and again I'm unsure how to measure it.
Alternative plan: building systems that do not require agency
It is distinctly possible that agency cannot be trained, and should instead be treated as a trait that is predetermined and rare. Even if that is true, it is not a case for business as usual. Instead, the focus should be on building structures within AI safety such that virtually no one has to be an agenty person to contribute.
An obvious approach, with equally obvious advantages and disadvantages, would be expanding normal academic pipelines: set up undergraduate courses and majors, PhD programs and fellowships, and faculty positions in AI alignment, so as to grow the field while piggybacking on academia's existing administrative infrastructure. The key objection I see to this model is that timelines may be too short for it to succeed in time, but there are similar options in that case.
My question
All right, I admit that so far I've fallen into the same trap I've accused existing work of. But now I'd like to go a bit past that and ask: what should I personally do? I feel I have to move fast on this, because there are sudden ebbs and flows in how capable I am of solving this, how much time I have, how worried I am about not solving it, and so on. And with world problems being what they are, this isn't something I can wait idly for someone else to solve. In particular, if AI timelines are as short as some people say, then I would want an extremely high level of commitment to high-impact AI safety work, either by massively increasing my agency and doing this on my own, or by maintaining my current level of agency and volunteering for someone else to direct me in supporting their projects.