Helping inform important players. Many important players will want advice from people who have legibly thought a lot about AGI and AGI safety. Would they think to call you first when they have a question?
Who (else) is working on building infrastructure to help individuals get the right advice at the right point in time (vs. the current status quo of individuals trying to consult their direct networks)? We're doing our best to make headway here (third-opinion.org) and would be very interested in getting in touch with people who are thinking seriously about this and want to help build infrastructure for this purpose.
Sure:
So for one, all of the companies listed above already claim to have internal whistleblowing channels, including retaliation protections. You can find sources for those claims on the campaign page if you're interested.
Legal provisions also already exist in the U.S. today that prevent companies from retaliating against individuals who raise violations of the law - although they do not require a dedicated channel. Senator Grassley's proposed AI Whistleblower Protection Act would more concretely require AI companies to set up internal channels. We would indeed be happy to see more detail in this legislation around guarantees, e.g. what 'independence' of a system should look like, as well as transparency requirements (our Levels 1 and 2).
The issue even with mandating internal channels broadly is that a) many risks we might care about are currently still not covered under any law, and b) just claiming to have a policy/system doesn't guarantee workers (at all) that there won't be retaliation - even if legal protections exist.
A) This means raising, e.g., violations of an RSP would not be protected - unless the company explicitly includes them in a whistleblowing policy that translates into individually enforceable rights for the people covered by it (i.e. an individual can take the company to court over alleged retaliation for making a report).
B) If reports, for example, go straight to the executive board or to managers involved in the misconduct, who then intimidate the individuals speaking up so that they never make use of their rights -> very bad, but sadly not uncommon. Likewise, if reports go to HR but HR does not understand or know how to handle them -> not good, but not uncommon. Unfortunately there are also still many cases (in general, not AI-specific) where retaliation is more subtle or baked into the system - e.g. a team member is 'moved around' instead of the individuals responsible for the misconduct. We will only know this if we get to Level 2.
The EU, for example, mandates internal channels - yet retaliation still happens, both because systems may be poorly set up (Level 1) and because they are not really 'lived' (Level 2).
So how do we get companies to improve protections in the status quo? We believe transparency about what current systems are and how they perform is the catalyst for improving them. And if companies do not create that transparency (without a compelling reason), then that also tells us something about how reliable those systems are. As a note: this is not a 'silver bullet' solution, but it should be a strong improvement on the margin.
So how do we convince companies? This is from less to more 'pressure':
We of course also have thoughts on what an ideal whistleblowing policy (Level 1) and public reporting on outcomes/effectiveness/improvements (Level 2) should look like - if you'd like an indication, take a look at the rating framework FLI used in the whistleblowing section of their AI Safety Index (Appendix, Indicator "Whistleblowing Policy Quality Analysis"): https://futureoflife.org/wp-content/uploads/2025/07/FLI-AI-Safety-Index-Report-Summer-2025.pdf
We'll publish a proper framework for evaluating systems in due time - but for now we want to keep our focus on transparency.