At the Omnicide Machine Manufacturing Corporation, we work tirelessly to ensure an omnicide-free future. That’s why we’re excited to announce our Responsible Increase Policy (RIP)—our internal protocol for managing any risks that arise as we create increasingly omnicidal machines.

Inspired by the risk-management framework used in gain-of-function virology research, our RIP defines a series of Omnicidal Ability Levels (OALs), reflecting the precautions we plan to take as we release increasingly dangerous features over time.

The basic idea of the RIP is simple: each time we ship an update which makes our product more lethal, we will pause our efforts for some amount of time, and then revise our policies to be in some sense more “cautious.” For example, our RIP contains the following firm commitments:

  • We aspire to take actions which are broadly good, rather than broadly bad;
  • We hope to refrain from releasing any fairly omnicidal systems, until first implementing “certain safeguards”;
  • And we intend to refrain from creating any systems which we’re quite sure would kill everyone.

That said, we want to acknowledge that even this cautious approach has drawbacks. For example, if our prevention measures are too weak, we risk catastrophe—potentially leading to extreme, knee-jerk regulatory responses, like banning omnicide machines altogether. On the other hand, if our precautions are too conservative, we risk ending up in a situation where someone who isn’t us builds one first.

This is a tricky needle to thread. History is rife with examples of countries deciding to heavily restrict, or even outright ban, technologies which they perceive as incredibly dangerous. So we have designed our RIP to tread lightly, and to exemplify a “minimum viable” safety policy—a small, well-scoped set of tests that labs can feasibly comply with, and that places the least possible restrictions on frontier existential risks.

The Sweet Lesson: Reasoning is Futile

As an omnicide creation and prevention research company, we think it’s important to seriously prepare for worlds in which our product ends up getting built. But the central insight of the modern era of gigantic machines—the so-called “Sweet Lesson”—is that it’s possible to build incredibly powerful machines without first developing a deep theoretical understanding of how they work.

Indeed, we currently see ourselves as operating under conditions of near-maximal uncertainty. Time and time again, it has proven futile to try to predict the effects of our actions in advance—new capabilities and failure modes often emerge suddenly and unexpectedly, and we understand little about why.

As such, we endeavor to maintain an attitude of radical epistemic humility. In particular, we assume a uniform prior over the difficulty of survival.
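As a minimal sketch of what this prior amounts to (assuming, purely for illustration, that the difficulty of survival is scored on a 0-to-1 scale—a scale we invent here, not one defined in the RIP):

```python
import random

def sample_survival_difficulty():
    """Radical epistemic humility: every difficulty level in [0, 1]
    is treated as equally likely (a uniform prior)."""
    return random.uniform(0.0, 1.0)

# Under a uniform prior, every decile gets the same 10% of probability
# mass -- including the decile where survival is all but impossible.
samples = [sample_survival_difficulty() for _ in range(100_000)]
hardest_decile = sum(d > 0.9 for d in samples) / len(samples)
print(f"P(difficulty > 0.9) ≈ {hardest_decile:.2f}")
```

Note that under this prior, “survival is nearly impossible” is exactly as likely as “survival is trivially easy”—which is, of course, why we find it so reassuring.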

For now, this degree of wholesale, fundamental uncertainty seems inescapable. But in the long run, we do hope to add information to our world-model—and thanks to our Gain of Omnicide research team, we may soon have it.

Gain of Omnicide

Our Gain of Omnicide research effort aims to generate this information by directly developing omnicidal capacity, in order to then learn how we could have done so safely. Moreover, our core research bet at OMMC is that doing this sort of empirical safety research effectively requires access to frontier omnicide machines.

In our view, the space of possible threats from gigantic omnicide machines is simply too vast to be traversed from the armchair alone. That’s why our motto is “Show Don’t Tell”—we believe that to prevent the danger associated with these machines, we must first create that danger, since only then can we develop techniques to mitigate it.

But this plan only works if our prototypes stay merely fairly omnicidal, since if we overshoot and create a quite omnicidal machine, all will perish. We see this as the central tension of our approach—while it is crucial to create some degree of omnicidal inclination, we must also avoid creating full-blown omnicidal intent. 

Naively, it might seem like achieving this precise balance would be hard, given how suddenly and unexpectedly new machine capabilities seem to emerge. But the stringent standards of our RIP give us the confidence we need to stay just shy of omnicide—and we’ve already begun gathering the empirical evidence required to validate these standards.

Today, we’re proud to announce that we’ve begun working together with the Wuhan Institute of Virology, whose staff have direct, first-hand experience underestimating the degree to which their research was strongly lethal. We expect their boots-on-the-ground expertise in creating frontier pathogens will be invaluable for us as we continue to refine and improve our RIP.

5 comments

I'm highly skeptical that it's even possible to create omnicidal machines. Can you point empirically to a single omnicidal machine that's been created? What specifically would an OAL-4 machine look like? Whatever it is, just don't do that. To the extent you do develop anything OAL-4, we should be fine so long as certain safeguards are in place and you encourage others not to develop the same machines. Godspeed.

You know, making the omnicide machine really is the fastest way to understand how you could have done it safely.

It is funny, but it also showed up on April 2nd in Europe and anywhere farther east...

Reads like a “ha ha only serious” to me, anyway.
