Dear LW community,
One week ago we launched Third Opinion (Zvi's coverage of us from today, our X.com thread, website), a service that enables AI professionals to receive anonymous, expert guidance on concerning developments they observe in their work. This post outlines what we're doing and why we think it matters.
Thank you to all the individuals (including many from this community) who have supported us over the past months and contributed to getting this off the ground. We hope it will be a valuable resource and contribution to the ecosystem.
AI professionals working at the development frontier may encounter situations that they feel raise (serious) concerns.
This matters because they are on the 'front lines' and in a position to spot concerning developments earlier than anyone else.
Unfortunately, today they face significant barriers to evaluating whether these concerns are actually well-founded - i.e. concerning at the object level.
Through conversations with over 50 individuals in the field over the past year or so, we found this has the potential to create a persistent information gap - the opposite of what Dean Ball called a 'high-quality information environment'.
This negatively affects…
The concept for Third Opinion was developed together with a former frontier lab insider. Third Opinion provides:
Our service is free. We help frame inquiries to determine whether specific thresholds of concern have been crossed, while protecting sensitive information.
Find details on our process here and our FAQ here.
At this point you may ask yourself, "Ah, so this is a whistleblowing service?" Not quite. Third Opinion is aimed solely at helping concerned individuals assess whether their concerns are well-founded. To be explicit: we do not support the sharing of confidential information with third parties via this offering. We are of course always open to discussing the whistleblower support organization landscape if that interests you - throughout our research phase we naturally came across other capable organizations doing important work in this space.
*We are not above the law - circumstances are imaginable in which we could be forced, under judicial pressure, to share conversation data that has not yet been deleted. This is why we ask you not to submit confidential or personal information. If you are concerned about this, please reach out to us directly and we will walk you through the scenarios and the likelihood of this risk materializing.
We are starting with a focus on frontier AI development. We will expand our coverage over the coming months.
If you are considering submitting a question, please apply the following checklist to it. You should be able to answer 'yes' to everything below:
Note: If you are uncertain whether your question fits our scope - submit it. You can stop the process at any time, and we'll let you know if the question falls outside our purview. We will delete all data relating to your question upon your request or, at the latest, after 21 days of inactivity.
We list example (!) categories and ways to frame your question here. A selection of categories:
- Technical safety concerns in development
- Deployment risks and safeguards
- Organizational practices affecting safety
- Governance and oversight issues
- Potential misuse scenarios
Given the sensitive nature of this topic, we are always looking to upgrade our security and are very open to feedback and ideas for improvement - ideally sent via email (see the bottom of this post).
Question submission is possible via the Tor network (tool link) through a self-hosted instance of the most up-to-date version of GlobaLeaks - an open-source solution used by newsrooms and non-profits globally for handling information confidentially and anonymously. GlobaLeaks is regularly pen-tested and audited.
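For readers who want to sanity-check their setup before visiting the portal: the sketch below is not part of Third Opinion's tooling, just a minimal example of confirming that your local Tor client is actually routing your traffic, using the Tor Project's public check service. It assumes a Tor daemon listening on the default SOCKS port 127.0.0.1:9050 (Tor Browser uses 9150) and `requests` installed with SOCKS support (`pip install requests[socks]`). For actual submissions, simply use Tor Browser.

```python
import requests

# socks5h = resolve DNS through Tor as well, so lookups don't leak locally.
TOR_PROXY = "socks5h://127.0.0.1:9050"  # use 9150 if going through Tor Browser


def is_using_tor() -> bool:
    """Ask the Tor Project's check service whether this connection exits via Tor."""
    resp = requests.get(
        "https://check.torproject.org/api/ip",
        proxies={"http": TOR_PROXY, "https": TOR_PROXY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("IsTor", False)


if __name__ == "__main__":
    print("Routing through Tor:", is_using_tor())
```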
We've implemented several security measures to protect user anonymity and data:
Third Opinion is an initiative of OAISIS, an independent non-profit focused on supporting responsible AI development. Our core team includes:
- Karl Koch (Co-founder)
- Maximilian Nebl (Co-founder)
- A network of advisors from AI safety, law, and journalism
Third Opinion is our first offering - more will follow. We partner with Whistleblower Netzwerk e.V., Germany's largest and most experienced non-profit dedicated to supporting courageous individuals.
If you work in AI development and have concerns you'd like evaluated:
2. Review our question policy
3. Submit your query through our tool on Tor
- Social: X.com/Bluesky/Threads
Questions or suggestions are very welcome in the comments or via hello@oais.is for more sensitive matters - find our PGP key at the bottom of this page.
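If you want to encrypt a sensitive email to us with that key, here is a minimal sketch using GnuPG from Python. It assumes you have saved the public key published at the bottom of the page to a local file (the filename `oaisis_pubkey.asc` is hypothetical) and have the GnuPG binary plus the `python-gnupg` package installed (`pip install python-gnupg`); plain `gpg` on the command line works just as well.

```python
import gnupg

gpg = gnupg.GPG()  # uses the gpg binary found on your PATH

# Import the recipient's public key (hypothetical filename for the key from the page).
with open("oaisis_pubkey.asc", "r", encoding="utf-8") as f:
    import_result = gpg.import_keys(f.read())

message = "Hello - I have a question about your process."
encrypted = gpg.encrypt(
    message,
    import_result.fingerprints,  # encrypt to the imported key
    always_trust=True,           # skip the web-of-trust check for this sketch
)

assert encrypted.ok, encrypted.status
print(str(encrypted))  # ASCII-armored ciphertext, ready to paste into an email
```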