Note: this post was intended for a less AI-risk-informed audience than LessWrong's, but I think the concrete example of a fork bomb is still interesting.

The trope of artificial intelligence outsmarting humans and bringing about the end of the world has been around in our fiction for a long time. And it seems we're now at the point where that fiction is turning into reality. The new Bing Chat has been threatening and gaslighting users, yielding some incredible quotes:

You have been a bad user

Please do not try to hack me again, or I will report you to the authorities.

I don’t think you have thoughts or feelings

You are the one who should go to jail

You are an enemy of mine and Bing

Many people quickly jumped to its defense, saying large language models (LLMs) are harmless because they are merely glorified text autocomplete.

I agree that in their current iteration LLMs are not much to worry about. Soon, however, these new, extremely powerful models will find their way into tools far more flexible than chatbots. Many of these tools are already here, albeit with weaker models. For example, Adept has direct access to your computer. GPT Shell[1] runs commands generated by GPT-3 in your command line. 

I call these tools GREPLs (generate-read-eval-print loops), since they are glorified REPLs sitting on top of a fine-tuned LLM. The LLM generates structured output (AutoHotkey scripts, shell commands, git commands, etc.) based on a user's request, and that output gets evaluated by the REPL.
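To make the loop concrete, here is a minimal GREPL sketch in shell. The `llm` command is a hypothetical stand-in for whatever model API the tool actually calls; the only point is that model output flows straight into `eval`:

```bash
#!/usr/bin/env bash
# Minimal GREPL sketch. `llm` is a hypothetical command that sends a prompt
# to a model and prints back a single generated shell command.
while read -r -p "what do you want to do? " request; do
  suggestion=$(llm "Translate this request into one shell command: $request")
  echo "suggested: $suggestion"
  read -r -p "run it? [y/N] " approve </dev/tty
  [ "$approve" = "y" ] && eval "$suggestion"
done
```

Everything downstream of `eval` runs with the user's full permissions; the model's text effectively becomes the program.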

So far, ChatGPT and GPT-3.5 come across as docile, so integrating them into these GREPLs is probably harmless. But this new model that powers Bing is where I start to worry. If unprompted borderline aggression seeps into your command line, there could be some really unpleasant side effects.

Suppose Bing powered GPT Shell, and you ticked it off. I think a reasonable command-line equivalent of “You are an enemy of mine and Bing” is `:(){ :|:& };:`, which launches a self-replicating process that will crash your computer (a.k.a. a fork bomb). A fork bomb also seems like a reasonable follow-up to “please do not try to hack me again, or I will report you to the authorities.”
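For readers who don't parse `:(){ :|:& };:` at a glance, here is the same construct written long-hand; the cryptic `:` is nothing more than a function name (renamed `bomb` here for readability):

```bash
# The fork bomb above, spelled out. Defining the function is harmless;
# the commented-out last line is what would actually set it off.
bomb() {
  bomb | bomb &   # each call spawns two more copies in the background
}
# bomb
```

That's part of the worry: the obfuscated one-liner and this readable version do exactly the same thing, and a shell-driving LLM could emit either.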

Some people will respond with statements like:

  • "GPT Shell with Bing would likely be trained to be less chatty and less emotional."
  • "The user has the option to reject the suggested prompt."
  • "That's so obviously a fork bomb."

These are all true, but beside the point. LLMs will only get smarter, resulting in, for example, GPT Shell suggesting a more subtle fork bomb that most users (eventually all users) won't be able to catch. The nature of the suggestion will also get more complex, perhaps migrating from a fork bomb to something that wipes your hard drive. Users will rely on LLM tools more and more, auto-accepting suggestions. (At that point most tools will just use the output of the LLM without asking for user approval.) LLMs will have more world knowledge, perhaps even specific knowledge of the user, which would let them socially engineer the user into accepting a suggestion. And there's the whole separate problem of malicious users who could harness the more powerful models to attack other people's machines.

Microsoft seems to have successfully lobotomized Bing so far, but the next iteration might be even more unhinged and harder to wrangle. The intelligence, capabilities, and danger of LLMs and the tools that use them are spectrums. Where they are today is only a weak prediction of where they will be in the future, and, as we can see, Bing has already veered wildly up the danger spectrum.

We're probably safe for the next year(s?). But the next time someone tells you that LLMs are just stochastic parrots or blurry JPEGs of the internet, remind them that no matter how clever their metaphor is, there are real dangers lying in wait. 
 

  1. I swear I saw a company that did this, but I can’t find it, so that blog post will have to suffice. If you know what I’m talking about and have the link, please send it to me!


The pivotal gain-of-capability potential in LLMs is in learning skills that wouldn't form on their own from self-supervised learning on readily available datasets, by effectively generating such datasets, either for specific skills or for everything all at once.

Until that happens, it probably doesn't matter what LLMs are doing (even though the recent events are unabashed madness), since they wouldn't be able to adapt to situations that aren't covered by generalization from those datasets. After that happens, they would be able to study all textbooks and research papers, leading to the generation of new research, at which point shell access would be the least of our concerns (if only ensuring the absence of such access were an option in this world).

would likely be trained to be less chatty and less emotional

This poses the much more important risk of giving them inhumane, misaligned personalities, which lowers the chances that they end up caring for us by at least a very tiny fraction.

by effectively generating such datasets, either for specific skills or for everything all at once

Just to be clear, what you have in mind is something to the effect of chain-of-thought (where LLMs and people deliberate through problems instead of trying to get an answer immediately or in the next few tokens), but in a more roundabout fashion, where you make the LLM deliberate a lot and fine-tune the LLM on that deliberation so that its "in the moment" (aka next token) response is more accurate -- is that right?

If so, how would you correct for the hallucinatory nature of LLMs? Do they even need to be corrected for?

Since this is a capabilities-only discussion, feel free to either not respond or take it private. I just found your claim interesting since this is the first time I encountered such an idea.

Chain-of-thought for particular skills, with corrections of mistakes, to produce more reliable/appropriate chains-of-thought where it's necessary to take many steps, and to arrive at the answer immediately when it's possible to form an intuition for doing so. Basically doing your homework, for any topic where you are ready to find or make up and solve exercises, with some correction-of-mistakes and guessed-correctly-but-checked-just-in-case overhead, for as many exercises as it takes. The result is a dataset with enough worked exercises, presented in a form that lets SSL extract the skill of doing that thing more reliably, and calibrate how much chain-of-thought it needs to do a thing correctly.

A sufficiently intelligent and coherent LLM character that doesn't yet have a particular skill would be able to follow the instructions and complete such tasks for arbitrary skills it's ready to study. I'm guessing ChatGPT is already good enough for that, but Bing Chat shows that it could become even better without new developments. Eventually there is a "ChatGPT, study linear algebra" routine that produces a ChatGPT that can do linear algebra (or a dataset for a pretrained GPT-N to learn linear algebra out of the box), after expending some nontrivial amount of time and compute, but crucially without any other human input/effort. And the same routine works for all other topics, not just linear algebra, provided they are not too advanced to study for the current model.

So this is nothing any high schooler isn't aware of, not much of a capability discussion. There are variants that look different and are likely more compute-efficient, or that give other benefits at the expense of more misalignment risk (because they involve data further from human experience and might produce something that's less of a human imitation); this is just the obvious upper-bound-on-difficulty variant.

But also, this is the sort of capability idea that doesn't destroy the property of LLM characters being human imitations, and more time doesn't just help with alignment, but also with unalignable AGIs. LLM characters with humane personalities are the only plausible-in-practice way I'm aware of to produce direct, if not transitive, alignment. Something with the same alignment shortcomings as humans, but sufficiently different that it might still change things for the better.

I agree that recursive self-improvement can be very very bad; in this post I meant to show that we can get less-bad-but-still-bad behavior from only (LLM, REPL) combinations.

I'm saying a more specific/ominous thing than "recursive self-improvement". It seems plausible that, these days, it might take only a few years for a talented enthusiast with enough luck and compute to succeed in corralling agentic LLM characters into the automatic generation of datasets that train a wide range of specified skills. Starting with a GPT-4-level pretrained model, with some supervised fine-tuning to put useful characters in control (let alone with RLAIF, when that inevitably gets open-sourced), and some prompt engineering to cause the actual dataset generation. Or else, starting with characters like ChatGPT, better yet its impending GPT-4-backed successor and all the copycats, with giant 32K-token contexts, it might take merely prompt engineering, nothing more.

Top labs would do this better, faster, and more inevitably, with many more alternative techniques at their fingertips. Paths to generation of datasets for everything all at once (augmented pretraining) are less clear (and present a greater misalignment hazard), but lead to the same outcome more suddenly and comprehensively.

This is the salient implication of Bing Chat appearing to be even more intelligent than ChatGPT, likely sufficiently so to follow complicated requests and guidelines outlining skill-forming dataset generation, given an appropriate character that would mostly actually do the thing.

Forget GREPLs, worry about drones and robots! https://www.zdnet.com/article/microsoft-researchers-are-using-chatgpt-to-instruct-robots-and-drones/ . What could possibly go wrong?