Translation note: There is no English equivalent to the Sanskrit words शून्यता śūnyatā and रूप rūpa. By convention, शून्यता is translated "emptiness" and रूप is translated "form". I follow this convention. My use of the words "emptiness" and "form" in this post has little to do with the English words "emptiness" and "form"; they are placeholders for the Sanskrit.

Consider a cat. From the perspective of fundamental physics, the cat is a collection of particles no more special than any other collection of particles. There is no clear line between "cat" and "non-cat". Everything is quantum fields. The "cat" is a representation created by the human mind. It is a trick of human perspective. From the perspective of an omniscient unbiased observer, the cat is just a scoop of water in a limitless ocean.

Cats are real.

The perspective "cats are real" is called "form". The perspective "cats are an arbitrary ontology with no well-defined meaning amongst the fundamental laws of the universe" is called "emptiness". There is no conflict between form and emptiness, just as there is no conflict between quantum mechanics and classical mechanics. They are different ways of interpreting the same thing at different scales.

Classical mechanics can be more practical than quantum mechanics even though quantum mechanics is more fundamental than classical mechanics. Similarly, emptiness is more fundamental than form yet form is a more useful model of the world than emptiness. Emptiness and form are neither equally true nor equally practical.

Maps ≠ Form & Emptiness ≠ Territory

You could say "form" roughly corresponds to "maps" and "emptiness" roughly corresponds to "territory". That would constitute a better translation from the original Sanskrit than "form" and "emptiness". But the form-emptiness dichotomy draws its line in a slightly different place than the map-territory dichotomy.

The map-territory dichotomy draws the line between reality and one's simplified models of reality. In this way, the map-territory dichotomy is a materialist perspective.

The form-emptiness dichotomy is an informatic perspective. If there is no difference between a map and a territory then, mathematically, the map and the territory are isomorphic representations of the same group.


"Emptiness" describes a shared quality between the reductionist nature of objective reality and the raw sensory data coming into a mind. In both cases, our Bayesian priors bucket high-dimensional data into into an ontology called "form".

In other words, form is a byproduct of subjectivity. All ontologies dissolve under the scrutiny of theoretical physics.

The duality between emptiness and form is fundamental to general intelligence.

Discreteness and Differentiability

Big data is easy. The hard problem of general intelligence concerns small data. Small data is all about transfer learning. Transfer learning is all about ontologies.

An intelligent system with hard-coded ontologies is conceptually unadaptable and therefore not a general intelligence. A general intelligence's ontologies must emerge from its input data. But ontologies are discrete, and the only way to navigate high-dimensional input data is via gradient descent, which requires a continuous representation. Can a representation be both continuous and discrete?

In theory, no. In practice, yes.
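One way the "yes in practice" can play out: gradient descent over a smooth function can drive that function toward effectively discrete behavior. Below is a minimal sketch, not from the post itself; the single-weight logistic model, toy data, learning rate, and step count are all my own assumptions for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy task: classify the sign of x. The classes are perfectly separable,
# so gradient descent keeps sharpening the sigmoid indefinitely.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

w = 0.1   # single weight; the model is p(y=1 | x) = sigmoid(w * x)
lr = 0.5  # learning rate

for step in range(2000):
    grad = 0.0
    for x, y in data:
        p = sigmoid(w * x)
        grad += (p - y) * x  # d(binary cross-entropy)/dw for one example
    w -= lr * grad / len(data)

# w has grown large: the smooth, differentiable sigmoid now behaves
# like a step function on the data it was trained on.
print(sigmoid(w * -1.0))  # close to 0
print(sigmoid(w * 1.0))   # close to 1
```

The representation stays continuous throughout (every update uses a nonzero gradient), yet the trained behavior is, for practical purposes, a discrete decision.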

Consider the sigmoid function in the multilayer perceptron.

[Figure: the sigmoid function at medium zoom]

If we zoom in on this function we can see it is continuously differentiable.

[Figure: the sigmoid function zoomed in]

But when we zoom out it appears as a discrete step function.

[Figure: the sigmoid function zoomed out]

The sigmoid function illustrates the scale-dependence of emptiness and form. When we zoom in we see continuity (emptiness), which is a prerequisite for gradient descent. When we zoom out, we see a discrete system (form), which is necessary for the emergence of ontologies. Emptiness and form work together to produce emergent ontologies.
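The scale dependence described above can be checked numerically. A small sketch, assuming the standard logistic form of the sigmoid; the particular sample points are my own choice:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Zoomed in: over a narrow window the curve is smooth and nearly linear,
# with a nonzero derivative sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)).
zoom_in = [round(sigmoid(z), 3) for z in (-0.2, -0.1, 0.0, 0.1, 0.2)]

# Zoomed out: over a wide window the same function saturates,
# becoming indistinguishable from a discrete step at 3 decimal places.
zoom_out = [round(sigmoid(z), 3) for z in (-20, -10, 0, 10, 20)]

print(zoom_in)   # gradual values near 0.5
print(zoom_out)  # [0.0, 0.0, 0.5, 1.0, 1.0]
```

The same function reads as continuous (emptiness) or discrete (form) depending only on the scale at which it is sampled.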

