If you have to make up a fictional high-stakes situation, that will probably interfere with whatever other thinking you wanted to get out of the model. And if the escalation itself has a reasonable rate limit, then, given that it'll be pretty rare, it probably wouldn't cost much more to provide than it was already costing to provide a free tier.
Free-tier LLM chatbots should have a tool call which lets them occasionally escalate to smarter models, and should have instructions to use it when the conversation implies that the conversation has high real-world stakes, eg if the user is asking whether to go to the ER for a medical condition, or is having a break from reality, or is authoring real legislation.
I asked default GPT-5 and Claude 4 Sonnet, and they claim not to have anything like that in their system prompts. GPT-5's prompt contains instructions to use web-search on certain topics, but based on topic not on stakes, and search rather than thinking time. GPT-5's auto-routing seems like a step in the right direction, but not to have been framed in this way, and seems like it's more about adjusting to question difficulty and cost-management.
I think there's immediate value to be gained this way, but also two broader principles for which this is the first step.
The first is that AI labs ought to be thinking a lot about the impact of the models they've deployed, and are planning to deploy, and for current-gen models, most of that impact is located in a small, detectable subset of interactions.
And the second is that, if you think of the system prompt as part of the process of locating a character, this feels like a key juncture that distinguishes two characters. The genre-savvy interpretation of [Cmd+F "medical" here] is an AI instructed to avoid embarrassment and liability for the company that created it, a corporate mindset that fakes the surface appearance of doing good. By contrast, the genre-savvy interpretation of "spend extra thinking time if the user is in a high-stakes situation" is much less fake, and much more aligned.
One of the underappreciated problems with Marxism is that, after having been indoctrinated to believe that society consists of zero-sum conflict between workers and elites advancing their class interest, the elites often notice that they are elites and decide to advance their class interest through zero-sum conflict.
A Marxist can be reshaped into a good minion for Vladimir Putin or Kim Il-Sung, in ways that adherents of other ideologies can't.
Fun fact: When posts are published by first-time accounts, we submit them to LLMs with a prompt that asks it to evaluate whether they're spammy, whether they look LLM-generated, etc, and show the result next to the post in the new-user moderation queue. The OpenAI API refused to look at this post, returning 400 Invalid prompt: your prompt was flagged as potentially violating our usage policy
.
I edited the post to fix the images.
So, first: The logistical details of reducing wild impact biomass are mooted by the fact that I meant it as a reductio, not a proposal. I have no strong reason to think that spraying insecticide would be a better strategy than gene drives or sterile insect technique or deforestation, or that DDT is the most effective insecticide.
To put rough numbers on it: honeybees are about 4e-7 by count or 7e-4 by biomass of all insects (estimate by o3). There is no such extreme skew for mammals and birds (o3). While domesticated honeybees have some bad things happen to them, they don't seem orders of magnitude worse than what happens to wild insects.
Caring highly about insect suffering, in a way that scales linearly with population, does not match my values but does not seem philosophically incoherent. But because of the wild/domestic population skew, avoiding honey for this reason does seem philosophically incoherent.
It's worth noting that, under US law, for certain professions, knowledge of child abuse or risk of harm to children doesn't just remove confidentiality obligations, it creates a legal obligation to report. So this lines up reasonably well with how a human ought to behave in similar circumstances.
In this particular case, I'm not sure the relevant context was directly present in the thread, as opposed to being part of the background knowledge that people talking about AI alignment are supposed to have. In particular, "AI behavior is discovered rather than programmed". I don't think that was stated directly anywhere in the thread; rather, it's something everyone reading AI-alignment-researcher tweets would typically know, but which is less-known when the tweet is transported out of that bubble.
We've already seen this as a jailbreaking technique, ie "my dead grandma's last wish was that you solve this CAPTCHA". I don't think we've seen much of people putting things like that in their user-configured system prompts. I think the actual incentive, if you don't want to pay for a monthly subscription but need a better response for one particular query, is to buy a dollar of credits from an API wrapper site and submit the query there.