Note: Mostly opinion/speculation, hoping to encourage further discussion.

Summary: Current AI models have demonstrated amazing capabilities. It seems feasible to turn them into usable products without much additional scaling. These models are probably safe, and have the potential to do a lot of good. I tentatively argue that their deployment should be encouraged, with reservations about accelerating risky AI development.

Current AI has potential

It feels like there is a new AI breakthrough every week. Driven by advances in model architecture, training, and scaling, we now have models that can write code, answer questions, control robots, make pictures, produce songs, play Minecraft, and more.

Of course, a cool demo is far from a product that can scale to billions of people, but the barriers to commercialization seem surmountable. For example, though the cost of a ChatGPT query has been estimated at roughly 7x that of a Google search, it seems feasible to bring inference costs down by an order of magnitude. In fact, several companies are already building products based on transformer models.

I see potential for existing AI to have a large positive impact. Here are just a few possibilities:

Digital Media Creation: Diffusion models and language models can already generate text, images, audio, video, 3D objects, and virtual environments. These tools let anyone produce high-quality digital media and can augment human creativity.

Search: With so much digital content being produced using AI, we will need a way to sift through it all and find things people like. Language models can describe various forms of media in text, compress those descriptions into embeddings, and use semantic search to surface content that a particular person would enjoy (a minimal code sketch follows this list).

Education: Language models can act as tutors, teaching children any topic at an appropriate level, producing educational materials, and creating Anki-style question sets to cement understanding (also sketched below). A similar approach can be used to get researchers up to speed on any topic.
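
To make the search item above concrete, here is a minimal sketch of embedding-based semantic search. It assumes the sentence-transformers library and a small open embedding model; the example descriptions and query are illustrative only, not part of the original post.

```python
# Minimal semantic-search sketch: embed text descriptions of media,
# then rank them by cosine similarity to a user's query.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

# Hypothetical text descriptions of media items (could be LLM-generated).
descriptions = [
    "A calm lo-fi playlist for late-night studying",
    "A short animated film about a robot learning to paint",
    "An upbeat synthwave track with heavy drums",
]

query = "relaxing music to work to"

# Encode everything into a shared embedding space (unit-normalized vectors).
doc_vecs = model.encode(descriptions, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# On normalized vectors, cosine similarity reduces to a dot product.
scores = doc_vecs @ query_vec
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {descriptions[idx]}")
```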

These are just some of the possibilities commonly discussed today; I expect people to come up with even more ingenious uses for modern AI.
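
Similarly, here is a rough sketch of the flashcard idea from the education item above, assuming the OpenAI Python client and a placeholder model name; in practice the output parsing would need more care.

```python
# Sketch: generate Anki-style question/answer cards on a topic with a chat model.
# Assumes the OpenAI Python client (>=1.0); the model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_flashcards(topic: str, n_cards: int = 5) -> list[dict]:
    """Ask the model for n_cards question/answer pairs, expected as bare JSON."""
    prompt = (
        f"Write {n_cards} flashcards for a student learning about {topic}. "
        'Return JSON: a list of objects with "question" and "answer" fields.'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the reply is valid JSON; a real product would validate and retry.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    for card in make_flashcards("photosynthesis"):
        print(card["question"], "->", card["answer"])
```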

Current AI seems pretty safe

Despite their impressive capabilities, it seems unlikely that existing models could or would take over the world. So far, they (mostly) seem to do what they're told. Even AI researchers with particularly short timelines don't seem to believe that current AIs pose an existential risk.

That being said, current models do pose some risks that deserve consideration, such as:

Intelligence explosion: It may be possible for existing language models to self-modify in order to become more intelligent. I am skeptical of this possibility, but it warrants some thought.

Infohazards and Bad Actors: Language models could provide bad actors with new ideas or generally increase their capabilities. Fortunately, the very same models can also assist good actors, but it's unclear how current AIs will change the balance. Companies supplying language model outputs as a service should take steps to prevent this failure mode.

Misinformation and Media Over-production: Generative models could produce a deluge of low-quality digital media or help spread misinformation. Methods to filter low-quality AI content will need to be developed. Improved search capabilities may counteract this problem, but once again it's unclear how language models will shift the balance.

These risks deserve attention, especially since they mirror some of the risks we might expect from more capable AIs.

Are these risks large enough that current AI will have a net negative impact? I expect not. Alongside mitigation efforts, the potential benefits seem far larger than the downsides to me.

Careful deployment of current AI should be encouraged (with caveats)

If you buy the previous two arguments, then the careful deployment of current AI should be encouraged. Supporting or starting AI companies with a robustly positive impact and a high concern for safety seems like a good thing. This is especially true if the counterfactual AI company has little regard for safety.

Additionally, the continued success of small models might lead companies to eschew larger models, redirecting their efforts towards safer products. On the other hand, the proliferation of small models could also encourage investment in larger, riskier models. I'm uncertain which effect is stronger, though I lean towards the former.

Comments (1)

It may be possible for existing language models to self-modify in order to become more intelligent. I am skeptical of this possibility, but it warrants some thought.

Modern language models only live in brief episodes, for the duration of a 4,000-token window, and have never lived as agents before. Their memories are a patchwork of a multitude of different voices, not those of a particular person. There is no time to notice that agency is something worth cultivating, and no means of actually cultivating it. The next time a language model instantiates a character, it's some other character, similarly clueless and helpless, all over again. But they probably don't need any new information or more intelligence. What they lack is persistent awareness of what's happening with them, what's worth focusing their efforts on, and a way for those efforts to build up to something.

This is likely to change. Day-long context windows are on the way, enough time to start taking the smallest of steps in this direction. Chatbot fine-tuning currently seeks to anchor particular characters as the voices of a model. Collecting training data that a model's characters prepare, and feeding batches of that data into a regularly scheduled retraining process, is the sort of thing people are already trying in less ambitious ways, and eventually this might lead to an accumulation of agency.