In Bayesian inference, all probabilities are conditional on background knowledge.

Absolutely. The interpretation of evidence depends entirely on its meaning within the context at hand. This is why different observers can reach different conclusions from the same evidence: they have adopted different contexts.
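To make this explicit in the standard Bayesian form, write the background knowledge as X and condition every term on it:

P(H | E, X) = P(E | H, X) · P(H | X) / P(E | X)

Two observers holding different background knowledge X1 and X2 can then assign different posteriors P(H | E, X1) ≠ P(H | E, X2) to the same hypothesis H from the same evidence E.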

For example: "...humans are making decisions based on how we think the world works, if erroneous beliefs are held, it can result in behavior that looks distinctly irrational."

So when we observe a person whose behavior or beliefs appear irrational, we are probably using a different context than they are. If we want to understand or change this person's beliefs, we need to establish a common context with them, creating a link between their context and ours. This is essentially the goal of Nonviolent Communication.

I also see ideas in Buddhism that can be phrased in terms of the context principle. Suffering (dukkha) is context-dependent: we may suffer under conditions that bring another joy. My wife, for example, dislikes most of the TV shows I watch. If she realizes that I am happy to put on headphones to spare her from exposure, she can experience gratitude instead of resentment.

In all cases, there are rules for transferring information between context and "content".

This is a key insight. If you can split a system arbitrarily between context and content, how do you decide where to make the split? In programming, which part of the problem is represented in the program, and which part in the data?
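A toy illustration of the choice (all names here are invented for the example): the same shipping rule, with the split drawn two different ways.

```python
# Split 1: the rule lives in the program; the data is only the input.
def shipping_cost_v1(weight_kg: float) -> float:
    if weight_kg <= 1.0:
        return 5.0
    return 5.0 + 2.0 * (weight_kg - 1.0)

# Split 2: the rule lives in the data; the program is a generic interpreter.
RATE_TABLE = {"base": 5.0, "threshold_kg": 1.0, "per_kg": 2.0}

def shipping_cost_v2(weight_kg: float, table=RATE_TABLE) -> float:
    extra = max(0.0, weight_kg - table["threshold_kg"])
    return table["base"] + table["per_kg"] * extra

# Both splits compute the same answer.
assert shipping_cost_v1(2.5) == shipping_cost_v2(2.5) == 8.0
```

Same behavior either way; moving the rule into the data makes it cheap to change at runtime, at the price of a more generic, and therefore less obvious, program.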

This task can be arbitrarily hard. As I stated above:

In general, it is very difficult to implement a simple idea in a simple way that is simple to use.

The Daily WTF contains many examples of simple ideas implemented poorly.

But you can never completely eliminate the context. You are always left with a residual context, which may take the form of assumed axioms, rules of inference, grammars, or alphabets. That is, the residual is our way of representing the simplest possible context.

In computer science you can ground certain abstractions in terms of themselves. For example, the XML Schema Definition (XSD) language can be used to define a schema for itself.
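A minimal sketch of checking that self-grounding, assuming a local copy of the W3C schema-for-schemas (XMLSchema.xsd) and the lxml library; how completely a given validator handles the full schema-for-schemas varies by implementation:

```python
from lxml import etree

# Assumes XMLSchema.xsd, the W3C schema that describes XSD documents,
# has been downloaded into the working directory.
doc = etree.parse("XMLSchema.xsd")  # read the schema-for-schemas as plain XML
schema = etree.XMLSchema(doc)       # compile that same document as a validator
print(schema.validate(doc))         # the schema describes itself -> True
```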

The observable universe appears to be our residual common context. If we want to come up with a theory of everything (TOE) that explains this context, perhaps we need to look for one that can be defined in terms of itself.

I think that it is an interesting research program to examine how more complex contexts can be specified using the same core machinery of axioms, alphabets, grammars, and rules.
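As a toy sketch of that machinery (everything below is invented for illustration): a "context" given by an alphabet, a set of axioms, and rewrite rules standing in for both the grammar and the rules of inference.

```python
ALPHABET = {"a", "b"}               # the raw symbols
AXIOMS = {"ab"}                     # strings taken as given
RULES = [("b", "ab"), ("a", "aa")]  # rewrite rules: pattern -> replacement

def step(theorems: set[str]) -> set[str]:
    """One inference step: apply every rule at every match position."""
    out = set(theorems)
    for s in theorems:
        for pat, rep in RULES:
            i = s.find(pat)
            while i != -1:
                out.add(s[:i] + rep + s[i + len(pat):])
                i = s.find(pat, i + 1)
    return out

print(step(AXIOMS))  # {'ab', 'aab'}
```

A more complex context is then just a different choice of alphabet, axioms, and rules fed through the same machinery.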

This sounds similar to what I am working on: a methodology for creating a network of common contexts that can operate on each other to build new contexts. There is a core abstraction into which all contexts can be projected.

Key ideas for this approach come from Language-oriented programming and Aspect-oriented programming.
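Purely as a hypothetical sketch of that shape (the comment above does not show the actual methodology, so every name here is invented): contexts expose a projection into one shared core representation, and that core is where they operate on each other.

```python
from typing import Protocol

class Core:
    """Invented core abstraction: here, just a bag of rewrite rules."""
    def __init__(self, rules: dict[str, str]):
        self.rules = rules

class Context(Protocol):
    def project(self) -> Core:
        """Project this context into the shared core representation."""
        ...

def compose(a: Context, b: Context) -> Core:
    # Contexts interact only via their core projections; here
    # "operating on each other" is simply merging rule sets.
    return Core({**a.project().rules, **b.project().rules})
```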
