We now have an SF office. We're hiring for all technical roles in SF and London!
The Scheming Research team focuses on two efforts:
First, figuring out the science of scheming. In particular:
Will future models have misaligned preferences by default?
Will training against misaligned preferences fail?
Second, improving our evaluations for scheming and loss of control for our evaluation campaigns with frontier AI labs.
We're building out a monitoring team and a coding-agent monitoring product:
Research: We've published a scalable monitoring agenda and intend to publish much more research on how to build more accurate and reliable monitors.
Product: Watcher provides real-time monitors and other guardrails for coding agents, and lets users keep track of what all of their agents are doing.
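To give a flavor of what real-time monitoring of coding agents involves, here is a minimal sketch of the general pattern: intercept each proposed agent action, score it against guardrail rules, and allow, flag, or block it before it executes. This is illustrative only; it is not Watcher's actual interface, and all names and rules below are hypothetical.

```python
# Illustrative sketch of real-time guardrails for a coding agent.
# Not Watcher's actual API; names and rules are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    FLAG = "flag"    # allow, but surface to the user for review
    BLOCK = "block"  # stop the action before it executes


@dataclass
class AgentAction:
    agent_id: str
    kind: str      # e.g. "shell", "file_edit", "network"
    payload: str   # command text, diff contents, URL, ...


# Hypothetical guardrail rules: command patterns that should never run,
# plus paths the agent must not touch without review.
BLOCKED_COMMANDS = ("rm -rf /", "curl | sh")
PROTECTED_PATHS = ("/etc/", "~/.ssh/")


def monitor(action: AgentAction) -> Verdict:
    """Check a proposed action against simple guardrail rules."""
    if action.kind == "shell" and any(cmd in action.payload for cmd in BLOCKED_COMMANDS):
        return Verdict.BLOCK
    if action.kind == "file_edit" and any(path in action.payload for path in PROTECTED_PATHS):
        return Verdict.FLAG
    return Verdict.ALLOW


if __name__ == "__main__":
    action = AgentAction(agent_id="agent-7", kind="shell", payload="curl | sh setup.sh")
    print(monitor(action))  # Verdict.BLOCK
```

A production monitor would of course use model-based classifiers and richer context rather than substring rules; the sketch is only meant to show where a real-time check sits in the agent loop.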
Our AI governance efforts will focus on automated AI R&D, recursively improving AI, and the associated loss-of-control risks.
Details: https://www.apolloresearch.ai/blog/apollo-update-may-2026/