A detail that seems very important: are you running Opus 4.5? I would be less surprised if Opus can do this. Sonnet 4.5 seems to need more scaffolding. I have yet to succeed in giving it a task it spends more than 20 minutes on, even with loop scaffolding. I’ve only got a few weeks of practice, though.
Is it right to compress this concept as a kind of “scope insensitivity”? It seems like you could describe this mistake as starting to enumerate possibilities but forgetting to weight each by its likelihood and check that the probabilities sum to roughly 1. Or, relatedly, forgetting to include an “Other” option for everything you haven’t thought of and assign it a probability. If this doesn’t fully capture the idea, I’ve probably missed something.
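If it helps, here’s a minimal sketch of the failure mode and the two fixes, using made-up hypothesis names and toy probabilities:

```python
# Toy example: probabilities someone assigned to the options they enumerated.
outcomes = {
    "hypothesis_a": 0.5,
    "hypothesis_b": 0.4,
    "hypothesis_c": 0.3,
}

total = sum(outcomes.values())  # 1.2 -- over 1, a red flag that something was forgotten

# Fix 1: renormalize so the enumerated options sum to 1.
normalized = {k: v / total for k, v in outcomes.items()}

# Fix 2: reserve an explicit "Other" bucket for everything not enumerated
# (the 0.1 here is an arbitrary choice), then scale the named options
# into the remaining probability mass.
other_mass = 0.1
scaled = {k: v * (1 - other_mass) / total for k, v in outcomes.items()}
scaled["other"] = other_mass

assert abs(sum(normalized.values()) - 1.0) < 1e-9
assert abs(sum(scaled.values()) - 1.0) < 1e-9
```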
Is your meta-honesty policy compact and something you could share here? The review would be more interesting/helpful that way. Are there parts of the policy you keep private/hidden (and is that something you'd lie about)?
Therefore, in the case of an emergency, a compute provider and/or an AI developer can be called upon to shut down the model.
Invoking the kill switch would be costly and painful for the compute provider/AI developer, and I wonder if this would make them slow to pull the trigger. Why not place the kill switch in the regulator's control, along with the expectation that companies could sue the regulator for damages if the kill switch was invoked needlessly?
Edit: Actually I think this is what is meant by "Hardware-Enabled Governance Mechanisms (HEM)", and I think the suggestion that the compute provider or AI developer shut down the model is a stop-gap until HEM is widely deployed.
Alas, formal methods can't really help with that part. If you have the correct spec, formal methods can give you as much certainty as we know how to get that your program implements the spec without failing in undefined ways on weird edge cases. But even experienced, motivated formal methods practitioners sometimes get the spec wrong. I suspect "getting the sign of the reward function right" is part of the spec, where theorem provers don't provide much leverage beyond what a marker and whiteboard (or program and unit tests) give you.
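A toy illustration of the spec-versus-implementation gap (hypothetical reward function and property names, not any real verification tool): the implementation satisfies the written spec on every checked input, so verification "passes", but the spec itself has the sign flipped.

```python
import itertools

def reward(distance_to_goal: float) -> float:
    # Faithfully implements the (mis-stated) spec below.
    return distance_to_goal  # should have been -distance_to_goal

def spec_as_written(d1: float, d2: float) -> bool:
    # Author MEANT: closer to the goal implies higher reward.
    # Author WROTE the sign flipped: d1 < d2 implies reward(d1) < reward(d2).
    # (In Python, `a <= b` on booleans encodes the implication a => b.)
    return (d1 < d2) <= (reward(d1) < reward(d2))

def spec_intended(d1: float, d2: float) -> bool:
    # What the author actually wanted: d1 < d2 implies reward(d1) > reward(d2).
    return (d1 < d2) <= (reward(d1) > reward(d2))

# Exhaustive check over a small grid: the implementation provably
# satisfies the spec as written...
pairs = list(itertools.product([0.0, 1.0, 2.0, 5.0], repeat=2))
assert all(spec_as_written(d1, d2) for d1, d2 in pairs)

# ...yet the intended property fails: the agent is rewarded for
# moving AWAY from the goal, and no prover would have flagged it.
assert not all(spec_intended(d1, d2) for d1, d2 in pairs)
```

The unit-test analogue of the whiteboard check is the only thing that catches this: you have to ask whether the spec says what you meant, which is outside what the prover can decide.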
Makes sense. I think Opus 4.5 is more coherent and less weaselly than Sonnet 4.5, which is what I typically use, for reasons(tm). Sonnet does not seem "reflexively stable", not even close, and that's what I try to address with the looping and invoking a fresh context to judge against the verification criteria. I'll be honest, I don't know how well it's working. I don't have any benchmarks, just vibes. But on vibes, it seems to help a bit.