H/T Aella.
A company that made machine-learning software for drug discovery, on hearing about the security concerns around these sorts of models, asked "huh, I wonder how effective it would be?"--and within 6 hours discovered not only one of the most potent known chemical warfare agents, but also a large number of candidates that the model thought were even more deadly.
This is basically a real-world example of the claim that "it just works to flip the sign of the utility function and turn a 'friend' into an 'enemy'". It was slightly more complicated here: they had two targets that they jointly optimized for in the drug discovery process (toxicity and bioactivity), and only the toxicity target was flipped. [This makes sense--you'd still want your chemical warfare agents to be bioactive.] It also required a little domain knowledge: they had to specify which sort of bioactivity to look for, and picked one that would point toward this specific agent.
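The sign-flip point can be made concrete with a toy sketch. This is not the company's actual model or scoring function--`joint_score`, the score ranges, and the weighting are all made up for illustration--but it shows how the same optimizer serves both purposes when one term's sign changes:

```python
# Toy sketch (hypothetical scores, not the real system): a joint objective
# for generative drug discovery. In normal use we reward predicted
# bioactivity and *penalize* predicted toxicity; flipping only the
# toxicity sign turns the same search into a toxin-seeking one.

def joint_score(bioactivity: float, toxicity: float, seek_toxic: bool = False) -> float:
    """Combine two predicted properties into one objective.

    bioactivity, toxicity: assumed model-predicted scores in [0, 1].
    seek_toxic: if True, flip the sign on the toxicity term only.
    """
    tox_sign = 1.0 if seek_toxic else -1.0
    return bioactivity + tox_sign * toxicity

# Drug-discovery mode: high bioactivity, low toxicity ranks best.
assert joint_score(0.9, 0.1) > joint_score(0.9, 0.8)
# Flipped mode: the same candidate pool now ranks deadly compounds first.
assert joint_score(0.9, 0.8, seek_toxic=True) > joint_score(0.9, 0.1, seek_toxic=True)
```

Note that the bioactivity term keeps its sign in both modes, matching the setup described above: only the toxicity target is inverted.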
You know, it might be fun to take something like this and point it at food, just to see how many of the outputs are already in our diets.
I would expect you'd need to put in more work to produce a model that's useful for analyzing problems with food.
This model likely goes for highly reactive substances, whereas the problematic interactions with food are more likely cases where something in the food reacts with a few specific human proteins. I would expect you'd need to actually model those protein interactions to catch that.