This is my attempt at an intuitive explanation of the term "ontology". This article is not going to say anything new, only provide a (maybe) different viewpoint on known concepts.

There are tons of definitions for "ontology". In my experience, those definitions do not help in understanding the concept - one I heard at university is "Ontology is the explicit specification of conceptualization". Instead of giving definitions, I'm going to give an example of two AI agents with different ontologies.

AI agent Susan

Susan is an AI agent that can add integers. Susan is asked to play a game against another AI agent: the players take turns selecting an integer from 1 to 9 and remembering it (each integer can be selected only once). The first player to hold three integers that sum to 15 wins. If all integers are taken and neither player holds three integers summing to 15, the game is a draw.

1 2 3 4 5 6 7 8 9

Here is an example play:

agent0 selects 2
agent1 selects 8
agent0 selects 6 (it has: 2, 6)
agent1 selects 7 (it has: 8, 7)
agent0 selects 4 (it has: 2, 4, 6)
agent1 selects 5 (it has: 8, 7, 5)
agent0 selects 9 (win because 9 + 4 + 2 = 15)
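
To make Susan's side concrete, here is a minimal Python sketch of the check she performs after each move: does any triple of her integers sum to 15? The function name and structure are my own illustration, not part of the game description.

from itertools import combinations

def susan_has_won(selected, target=15):
    # Susan wins if any three of her selected integers sum to the target.
    return any(sum(triple) == target for triple in combinations(selected, 3))

# Replaying the example from agent0's point of view:
print(susan_has_won([2, 6]))        # False
print(susan_has_won([2, 6, 4]))     # False
print(susan_has_won([2, 6, 4, 9]))  # True, because 2 + 4 + 9 = 15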

AI agent Greg

Greg is an AI agent that can find patterns in 2D grids. Greg is asked to play a game of tic-tac-toe against another AI agent: the players take turns - one places "x" and the other "o" in an empty cell of a 3x3 grid. The first player to complete a line of three (horizontal, vertical or diagonal) wins. If no player completes a line, the game is a draw. You're probably familiar with the game, but here is an example play (empty cells are shown as dots):

   
agent0 places x in the center:
. . .
. x .
. . .

agent1 places o in the top middle:
. o .
. x .
. . .

agent0 places x in the top left:
x o .
. x .
. . .

agent1 places o in the bottom right:
x o .
. x .
. . o

agent0 places x in the middle left:
x o .
x x .
. . o

agent1 places o in the middle right:
x o .
x x o
. . o

agent0 places x in the bottom left and wins with the left column:
x o .
x x o
x . o
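
Greg's check can be sketched in the same style. The grid representation and the function below are again my own illustration; the final position is the one from the play above.

# The grid is a dict mapping (row, col) to "x" or "o"; missing keys are empty cells.
LINES = (
    [[(r, 0), (r, 1), (r, 2)] for r in range(3)]            # horizontal
    + [[(0, c), (1, c), (2, c)] for c in range(3)]           # vertical
    + [[(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)]]   # diagonal
)

def greg_has_won(grid, mark):
    # Greg wins if some line is completely filled with his mark.
    return any(all(grid.get(cell) == mark for cell in line) for line in LINES)

# The final position of the example play: x holds the whole left column.
final = {(0, 0): "x", (0, 1): "o",
         (1, 0): "x", (1, 1): "x", (1, 2): "o",
         (2, 0): "x", (2, 2): "o"}
print(greg_has_won(final, "x"))  # True
print(greg_has_won(final, "o"))  # False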

Susan and Greg playing against each other

At first glance it may appear that the two agents have very different capabilities, but in fact the two games are isomorphic. The isomorphism can be represented by a common language. If both agents speak this language, they can play against each other. Consider a language with just 9 words: a b c d e f g h i. Each word corresponds to a possible move.

For Susan, each word has the meaning of selecting the following integer:

a b c d e f g h i
1 3 5 7 9 2 4 6 8

For Greg, each word has the meaning of selecting the following grid cell:

f d h
e c a
g b i

Now, each action and observation can be represented by a word. If Susan selects 2, she would communicate that to Greg with the word "f". For Greg, this would correspond to the top left cell of the grid.
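
The two tables can be written down explicitly, and we can check that they really describe the same game: a triple of words forms a line on Greg's grid exactly when the corresponding integers sum to 15 for Susan (the grid of integers is the classic 3x3 magic square). The dictionaries below just transcribe the tables above; the check itself is my own sketch.

from itertools import combinations

# Susan's meaning of each word: an integer.
SUSAN = {"a": 1, "b": 3, "c": 5, "d": 7, "e": 9,
         "f": 2, "g": 4, "h": 6, "i": 8}

# Greg's meaning of each word: a (row, col) cell of the 3x3 grid.
GREG = {"f": (0, 0), "d": (0, 1), "h": (0, 2),
        "e": (1, 0), "c": (1, 1), "a": (1, 2),
        "g": (2, 0), "b": (2, 1), "i": (2, 2)}

# Every line of the grid, as a set of cells.
LINES = ([{(r, c) for c in range(3)} for r in range(3)]
         + [{(r, c) for r in range(3)} for c in range(3)]
         + [{(0, 0), (1, 1), (2, 2)}, {(0, 2), (1, 1), (2, 0)}])

# For every possible triple of words, "is a line for Greg" and
# "sums to 15 for Susan" must agree.
for triple in combinations("abcdefghi", 3):
    is_line = {GREG[w] for w in triple} in LINES
    sums_to_15 = sum(SUSAN[w] for w in triple) == 15
    assert is_line == sums_to_15
print("every winning line corresponds to a triple summing to 15, and vice versa")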

The ontologies of Susan and Greg

Even though the computations Susan and Greg are doing are very different, they can play against each other. We say that the two agents model the environment with different ontologies. Both agents assign some meaning to the words of the common language - for Susan "f" is the word for the number 2; for Greg, "f" is the word for the top left cell of the grid. When Susan plays the game, she is summing integers and trying to find three that sum to 15. When Greg plays the game, he is placing x and o at cells to get three in a line. Each agent uses an algorithm operating on top of the agent's ontology. The exact details of the algorithms are not important. The important part is that those algorithms are quite different and only happen to be "compatible" with respect to the rules of the game.

If Susan had to play a game where the goal sum is 16 instead of 15, she could do it quite naturally, given her ontology. Susan could answer questions such as "is b even?". Greg's ontology is not adapted for such scenarios.

If Greg had to play a game where diagonal lines don't count, he could do it quite naturally, given his ontology. Greg could answer questions such as "are c and a adjacent?". Susan's ontology is not adapted for such scenarios.

The two ontologies generalize in very different ways.
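
To make the asymmetry concrete, here is how each question looks in the ontology where it is natural (the dictionaries are the same illustrative ones as in the previous sketch):

SUSAN = {"a": 1, "b": 3, "c": 5, "d": 7, "e": 9,
         "f": 2, "g": 4, "h": 6, "i": 8}
GREG = {"f": (0, 0), "d": (0, 1), "h": (0, 2),
        "e": (1, 0), "c": (1, 1), "a": (1, 2),
        "g": (2, 0), "b": (2, 1), "i": (2, 2)}

# "Is b even?" is immediate for Susan; Greg's cells have no notion of parity.
print(SUSAN["b"] % 2 == 0)  # False: for Susan, b means 3

# "Are c and a adjacent?" is immediate for Greg; Susan's integers have no
# notion of adjacency.
(r1, c1), (r2, c2) = GREG["c"], GREG["a"]
print(max(abs(r1 - r2), abs(c1 - c2)) == 1)  # True: the center cell touches the middle-right cell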

Learning AIs

In machine learning, we train an AI using something like gradient descent to minimize a loss function. The loss function is defined in terms of the language the AI "speaks" (its input and output). We usually don't know how this language is processed internally. If we use two different training algorithms to produce agents playing tic-tac-toe, we could imagine one agent ending up with Susan's ontology and another with Greg's. Situations like this do happen in practice. One of the goals of interpretability is to identify the ontology of AIs produced by gradient descent.

When we design agents to work in the real world, the limitations of our understanding become important. In tic-tac-toe, we decided what the rules are, but in the universe we are only observers trying to discover the rules. A scientific theory is in part an ontology we use to think about the world. Throughout history, our understanding of the universe has improved and shown that some theories are inaccurate. For example, Newtonian mechanics was superseded by relativity. If we create an AI based on Newtonian mechanics and one day it discovers relativity, this will require a shift in the ontology of the AI. In relativity, concepts like "gravitational force" or "simultaneity" are not always meaningful. This raises an interesting question about human values - how can we tell an AI what to value if what we value turns out to be ill-defined?

An article from several years ago showed that a neural network for image recognition can easily be misled into believing that an image is of something else. A famous example takes a photo of a panda and adds a small amount of noise, imperceptible to a human; the neural network then believes the resulting image is of a gibbon. This shows that concepts which are quite distinct in a human's ontology (panda vs gibbon) may actually be very close in the ontology of a neural network trained by gradient descent. What we would like to understand is what ontology we can expect an AI to learn - this is the question of natural abstraction.
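
Such perturbations are typically found by following the gradient of the network's loss with respect to the input pixels, so that a tiny change to every pixel adds up to a large change in the output. The sketch below illustrates the mechanism on a toy linear model rather than a real image network; the "panda"/"gibbon" labels, the model and all numbers are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 1000  # number of "pixels"

# A toy stand-in for a trained classifier: a linear score w @ x,
# positive score means "panda", negative score means "gibbon".
w = rng.normal(size=n)
x = rng.normal(size=n)  # the clean "image", pixel values around 1 in magnitude

clean_score = w @ x
clean_class = "panda" if clean_score > 0 else "gibbon"

# Gradient-sign-style perturbation: nudge every pixel by epsilon in the
# direction that pushes the score towards the other class. epsilon is chosen
# just large enough to cross the decision boundary, and it ends up tiny
# compared to the typical pixel value.
epsilon = (abs(clean_score) + 1.0) / np.abs(w).sum()
x_adv = x - np.sign(clean_score) * epsilon * np.sign(w)

adv_class = "panda" if w @ x_adv > 0 else "gibbon"

print(f"per-pixel change: {epsilon:.4f}")  # much smaller than the pixel scale of about 1.0
print(f"clean image:      {clean_class}")
print(f"perturbed image:  {adv_class}")    # the opposite class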
 
