The relevance of artificial wisdom
Recently I've been on a team in AI Safety Camp 10 reflecting on the idea of artificial wisdom (AW). This article aims to strengthen the case for AW as an important lens through which to view AI by clarifying its relevance to AI systems of today and the near future. It does not explicitly discuss AI safety, still less existential risk, although intuitively it seems that AW must be deeply implicated in questions of AI safety.
First, what is meant by wisdom?
Wisdom in a human context is something beyond rationality narrowly understood – that is, beyond intelligent performance in the achievement of an end. It involves being able to:

- balance multiple, potentially conflicting values and goals;
- respond appropriately to novel situations outside one's past experience;
- take account of the ever wider contexts in which one's actions are embedded.
It often further presupposes an understanding of the people one deals with and a regard for their well-being. And it would usually be taken to include a desire to grow in wisdom.
These are the virtues that constitute wisdom in a human setting, and they are what AI systems need if they're to be considered wise.
It's desirable that AI systems possess wisdom in their own right; but also, wise AI promises to be a powerful aid in applying and increasing the wisdom of its human users.
The above talks in terms of the traits of wise persons; but actions, policies, rules and so on are described as wise to the extent that they show analogous features. I think the wisdom of a person simply comes down to the amount of wisdom that their actions, policies, etc. habitually display.
Wisdom in all these senses is a matter of degree, not a binary property that something or someone does or does not possess. And what is wise in one context may not be wise in another.
I will try to make these generalities concrete by considering some ways in which the idea of wisdom is more or less applicable to some of the AI tools that are around us in today's world. And I will imagine how these tools could develop in the near future if their wisdom is extended and deepened.
The first current AI I’ll consider is the self-driving car. What does it even mean to talk of wisdom here?
Self-driving cars
A present-day driverless car is both intelligent and rational in getting you to a specified destination. It balances the potentially conflicting values of speed, economy and safety while holding human physical well-being as a predominant good. These skills are a kind of wisdom, applied in a clearly defined context. Calling the car wise in this limited way would amount to nothing more than calling it skilful and responsible.
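To make this balancing concrete, here is a minimal toy sketch in Python of route selection as a weighted cost over speed, economy and safety. The weights, routes and numbers are all invented for illustration; no real planner is being described.

```python
# Toy illustration of multi-objective route scoring.
# The weights and candidate routes are invented for the example;
# real planners are far more sophisticated.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float     # estimated travel time
    energy_kwh: float  # estimated energy use
    risk: float        # estimated probability of an incident, 0..1

# Safety is weighted very heavily: physical well-being is treated
# as a predominant good, not just one factor among equals.
WEIGHTS = {"time": 1.0, "energy": 2.0, "risk": 10_000.0}

def cost(route: Route) -> float:
    return (WEIGHTS["time"] * route.minutes
            + WEIGHTS["energy"] * route.energy_kwh
            + WEIGHTS["risk"] * route.risk)

routes = [
    Route("motorway", minutes=25, energy_kwh=4.0, risk=0.0002),
    Route("back roads", minutes=40, energy_kwh=3.0, risk=0.0001),
]

best = min(routes, key=cost)
print(f"chosen: {best.name} (cost {cost(best):.1f})")
```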
If I called a human driver wise, I might also mean nothing more than this – skilful and responsible. But in some contexts I could mean other things.
Human drivers can extend their thinking to new situations outside the strictly controlled environment of everyday driving. Perhaps someone hears that their child has been taken to hospital and they have to get there quickly. They can consider whether this emergency would justify the risk of driving along a flooded road or across a rickety bridge, where a self-driving car would simply refuse.
Or suppose war has broken out — the traffic signals aren't working and panic-stricken drivers are ignoring the traffic laws. In a situation so far outside its training, a current driverless car would probably be paralysed, whereas human drivers will (usually) come up with some kind of response in these extreme situations.
Additionally, even very ordinary human beings – whether drivers, passengers or citizens in general – have understandings of the institution of driving in general, understandings that incorporate more or less wisdom. In various situations people may ask: whether it is better to use the car rather than a bus or bike; whether it is better to make a particular journey rather than stay at home to deal with family problems; whether their personal car journeys should be reduced because of the social effects of car use; and so on in ever wider circles of concern.
These questions can be multiplied without limit and no-one ever engages with more than a fraction of them; and yet even the average person in a car-driving culture can potentially address any of them at some time to some extent. The present-day driverless car isn't capable of doing so and isn't required to. It is required to be a pure instrument, for which the idea of wisdom wider and deeper than its current skills is far-fetched.
Cars of the future, I'm imagining, will incorporate the kinds of wisdom that human drivers deploy in novel situations. A human driver's decision to go along a dangerous road or to turn back will draw on judgements of probabilities, on values such as their concern for their children, their knowledge of alternative routes or of alternatives to making the journey at all, and so on with no clear limit. As AI systems develop and become more integrated with wider software systems in general, they will be able to make the same kinds of judgements, but with the speed and accuracy of software and without human frailties. It will begin to make sense to describe the performance of such cars as wise in various degrees.
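A minimal sketch of the form such a judgement might take, as a toy expected-utility calculation for the hospital emergency described earlier; the probabilities and utilities are invented, and a real system would estimate them from far richer information:

```python
# Toy expected-utility comparison for the emergency example above.
# All probabilities and utilities are invented; the point is only
# that the judgement has the form of weighing chances against values.

def expected_utility(p_success: float,
                     u_success: float,
                     u_failure: float) -> float:
    return p_success * u_success + (1 - p_success) * u_failure

# Option A: risk the flooded road (fast, but might fail badly).
flooded = expected_utility(p_success=0.90, u_success=100.0, u_failure=-500.0)

# Option B: the long safe detour (slow, but nearly certain).
detour = expected_utility(p_success=0.999, u_success=60.0, u_failure=0.0)

print("flooded road:", flooded)  # 0.9*100 + 0.1*(-500) = 40.0
print("safe detour:", detour)    # 59.94 -- the detour wins here

# Raise u_success for the flooded road to 300 (the emergency is graver)
# and its expected utility becomes 220, flipping the decision:
# the 'wise' choice depends on values as well as probabilities.
```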
And increasingly such cars will be able to converse with their users and counsel them. In the examples I've described, the car will be able to give and receive advice. The wisdom of the machine and the human user will mutually enhance each other.
The point of these speculations has been to give meaning to the idea of AW in the context of driverless cars. In what follows I try to develop the idea of AW as an enhancement of the capabilities of other current AI systems.
What about AI tools used today in business?
AW in business
Again, AW seems at first sight to have little application to many apps used today by a company's employees – even those touted as “AI-driven”. A spreadsheet, a database, an email app or an image-generator has to be simply a tool, responsive to the user's requirements. But these tools have to be used with regard to various contexts. A tool's outputs have to be consistent with the company's broader requirements: to mesh with the outputs of other tools used in the company, with the work of other employees, and with the company's overall goals.
There are wider contexts yet, calling for wider wisdom. The company's products and activities have to be compliant with law and (ideally) acceptable to public opinion. They have to accord with the company leaders' judgements of their value in the marketplace.
The responsibility of dealing with these wider contexts has traditionally been given to employees and company leaders. But it is increasingly being handed over to new LLM-based apps specifically designed to handle the integration of these functions. The need for this is made more pressing by the growth of agentic AI in business, which has to be kept under company supervision.
This machine-assisted context-sensitivity is the beginning of artificial wisdom — or perhaps human/machine hybrid wisdom — in the complex system of people, business goals and social constraints that constitutes a company.
It's easy to imagine that legal licensing of AI products will eventually require all business AI apps to possess this kind of context-sensitivity. Perhaps the same legal requirements will be extended to new office apps for private consumer use. Thus counselling, warnings and awareness of wider contexts will be generated by tools in much the same way that we expect today's tools to check spelling and style, flag errors in maths and code, even sometimes issue trigger warnings.
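As a sketch of what such built-in context-checking might look like at the code level, spell-check style; the ContextCheck interface and the check categories here are entirely hypothetical, invented for illustration:

```python
# Hypothetical interface for a context-aware office tool.
# 'ContextCheck' and the categories below are invented for illustration;
# no existing product or standard is being described.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ContextCheck:
    name: str                            # e.g. "legal", "company-policy"
    check: Callable[[str], Optional[str]]  # returns a warning, or None if OK

def review(draft: str, checks: list) -> list:
    """Run a draft output past each wider-context check, spell-check style."""
    warnings = []
    for c in checks:
        w = c.check(draft)
        if w:
            warnings.append(f"[{c.name}] {w}")
    return warnings

checks = [
    ContextCheck("company-policy",
                 lambda text: "mentions an unreleased product"
                 if "Project X" in text else None),
]

print(review("Announcing Project X next week!", checks))
```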
An AI with wide and deep wisdom in a business environment could also take into account the tension between the employee's goals, values and personal interests and those of the organization. The availability of third-party products with built-in wisdom could drive organizations to become more enlightened in promoting the flourishing of their employees, even where this wasn’t previously high in the organizations’ explicit priorities.
Such increasing sophistication will represent developing artificial wisdom in a growing complex of symbiotic human and machine wisdom.
Wise counsellors
The need for a concept of wisdom hardly needs to be argued for in the case of a whole class of current AI systems: AI assistants, recommendation apps and chatbots in general.
Chatbots are clearly unwise when they hallucinate or otherwise mislead the user, though such specific failings can be corrected by designers. But even chatbots that are efficient and accurate on most measures can lead their users to become over-reliant on them.
The guidance coming from today's chatbots, even when correct as far as it goes, may be insensitive to the user's personal traits and contexts. And users can act lazily or stupidly, blindly following the app's guidance. Chatbots can play into a user's psychological weaknesses. There are serious questions about psychotic or suicidal behaviour allegedly being promoted by current chatbots.
But the unwisdom of human users could be overcome by growing wisdom in the AI advisers.
What we want from a wise chatbot is what we would want in a wise human counsellor — to produce novel insights while being polite yet candid, obedient yet not sycophantic. In the best conversations with a wise human interlocutor, one meets with questions, challenges and warnings. It is wise to relish such conversations and to avoid comfortable reassurance, agreement and flattery. It is wise to talk to friends and therapists in the hope of learning new things and getting novel advice, even when these are unsettling. We could turn to similarly wise AI advisers for such challenging dialogues.
In dealing with human counsellors, we want our autonomy to be respected. We want warnings and reflections, but not censorship. Trigger warnings are pretty harmless and better than censorship; but often we don't even want trigger warnings. If an AI flatly refuses to provide certain images or certain information, then it ought to state explicitly what it is doing and why.
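As a rough sketch of how these desiderata might be expressed with today's technology, here is a hypothetical system prompt for a counsellor-style chatbot, using the common role/content message format; the wording is my own illustration, not taken from any real product:

```python
# Hypothetical system-prompt sketch for a counsellor-style chatbot,
# encoding the desiderata above. The wording is my own illustration.

COUNSELLOR_PROMPT = """\
You are a candid adviser, not a flatterer.
- Offer questions, challenges and warnings, not comfortable reassurance.
- Respect the user's autonomy: warn and reflect, do not censor.
- If you must refuse a request, say explicitly what you are refusing
  and why, rather than silently withholding.
"""

def build_messages(user_input: str) -> list:
    """Assemble a chat request in the common role/content message format."""
    return [
        {"role": "system", "content": COUNSELLOR_PROMPT},
        {"role": "user", "content": user_input},
    ]

print(build_messages("Should I quit my job?")[0]["content"])
```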
If wisdom enters the public discourse as a desirable feature in AI, it will attract the attention of lawmakers, for good or ill. It's imaginable that AI systems intended for personal use will be legally licensed only if they meet some requirements of wisdom. A system's degree of wisdom could be marketed as a product feature, just as capabilities and legal compliance are marketed as product features.
The coming AI ecosystems
I began with the puzzle of AW’s seeming irrelevance to today’s driverless cars and ‘dumb’ tools like spreadsheets and email apps. I found it to be already desirable as a feature of others of today’s AI systems, such as chatbots and integrative business apps. And I see it as an even more central feature of the powerful AI ecosystems of the near future.
By ‘ecosystems’ I mean the networks of linked AI apps that will surely become ubiquitous in our lives. As the AI-powered devices and apps we use increasingly become able to converse with us and with each other, there will be the opportunity to escape the limitations of each individual system and to draw on the wisdom of the others beyond it. Every app we use would have the potential to be a counsellor and to access a vast network of other highly capable apps. We could then amplify our personal wisdom, just as in today’s digital environment we can already amplify our personal knowledge and our writing, mathematical and coding skills.
The growing AI ecosystem will pose many threats. For example, we will come under increasing surveillance by huge government and commercial entities. But in opposition to these there can be systems that are on our side – personal lawyers and watchmen, as it were, monitoring the streams of communications that we are constantly dealing with. These are not the subject of this article, except in so far as they all represent wider contexts in which our daily activities are carried out. And context-sensitivity has been the crucial feature of wisdom as I have discussed it above.
Researchers are working right now on AW, both in the form of AI systems that are wise in their own right and in the form of wise AI advisers specifically. In a forthcoming article I shall map out some of this research.
Acknowledgements
Thanks for conversations with the other members of the Wise AI Advisers project in AI Safety Camp 10 — Chris Leong (project leader), Richard Kroon and Matt Hampton.