Ontological Crisis

Ontological crisis is a term coined to describe the crisis an agent, human or not, goes through when its model of reality (its ontology) changes. When considering artificial agents, this ontology is what the agent's utility function is defined over, so the utility function needs to be adapted and redefined according to the agent's new knowledge of the world.

In the human context, a clear example of an ontological crisis is a believer’s loss of faith in God. Their motivations and goals, coming from a very specific view of life, suddenly become obsolete and perhaps even nonsensical in the face of this new configuration. The person then experiences a deep crisis and goes through the psychological task of reconstructing their set of preferences according to the new world view.

When dealing with artificial agents, we, as their creators, are directly interested in their goals. That is, as Peter de Blanc puts it, when we create something we want it to be useful. As such, we have to define the artificial agent’s ontology, but since a fixed ontology severely limits its usefulness, we also have to think about adaptability. In his 2011 paper, de Blanc proposes a method to map old ontologies into new ones, adapting the agent’s utility functions and avoiding a crisis.
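
One way to picture such a mapping is sketched below. This is a simplified illustration of the general idea, not de Blanc's actual construction: the new utility of a state in the new ontology is taken to be the expected old utility under a probabilistic "bridge" relating new states to old ones. All names here (`u_old`, `bridge`, the "diamond"/"coal" states) are hypothetical examples.

```python
# Illustrative sketch: re-deriving a utility function after an ontology change.
# The new utility of a state is the expected old utility under a probabilistic
# bridge map from new states to old states. The states and probabilities below
# are made-up examples, not taken from de Blanc's paper.

# Utility function defined over the agent's old ontology.
u_old = {
    "diamond": 1.0,
    "coal": 0.0,
}

# Bridge map: for each state of the new ontology, a probability
# distribution over states of the old ontology.
bridge = {
    "carbon_lattice_A": {"diamond": 0.9, "coal": 0.1},
    "carbon_lattice_B": {"diamond": 0.2, "coal": 0.8},
}

def reinterpret_utility(u_old, bridge):
    """Return a utility function over new states as expected old utility."""
    return {
        new_state: sum(p * u_old[old_state] for old_state, p in dist.items())
        for new_state, dist in bridge.items()
    }

u_new = reinterpret_utility(u_old, bridge)
print(u_new)  # {'carbon_lattice_A': 0.9, 'carbon_lattice_B': 0.2}
```

The hard part, of course, is obtaining a sensible bridge map in the first place; the sketch only shows how preferences carry over once such a correspondence between ontologies is available.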

This crisis, in the context of an AGI, could in the worst case pose an existential risk if old preferences and goals continue to be used. Another possibility is that the AGI loses all ability to comprehend the world, and would pose no threat at all. If an AGI reevaluates its preferences after its ontological crisis, for example in the way mentioned above, very unfriendly behaviors could arise. Depending on the extent of its reevaluations, the AGI's changes may be detected and safely fixed. On the other hand, ontology changes could go undetected until they go wrong, which shows why it is in our interest to deeply explore ontological adaptation methods when designing AI.
