Followup to: Overcoming the Loebian obstacle using evidence logic

In the previous post I proposed a probabilistic system of reasoning for overcoming the Loebian obstacle. For a consistent theory, it seems natural to expect such a system to yield a coherent probability assignment in the sense of Christiano et al. This means that

a. provably true sentences are assigned probability 1

b. provably false sentences are assigned probability 0

c. the following identity holds for any two sentences φ, ψ:

[1] P(φ) = P(φ and ψ) + P(φ and not-ψ)

In the previous formalism, conditions a & b hold but condition c is violated (at least I don't see any reason it should hold).

In this post I attempt to achieve the following:

- Solve the problem above.
- Generalize the system to allow for logical uncertainty induced by bounded computing resources. Note that although the original system is already probabilistic, it is not uncertain in the sense of assigning an indefinite probability to, say, the zillionth digit of pi. In the new formalism, the extent of uncertainty is controlled by a parameter playing the role of temperature in a Maxwell-Boltzmann distribution.

# Construction

Define a *probability field* to be a function p : {sentences} -> [0, 1] satisfying the following conditions:

- If φ is a tautology *in propositional calculus* (e.g. φ = ψ or not-ψ) then p(φ) = 1
- For all φ: p(not-φ) = 1 - p(φ)
- For all φ, ψ: p(φ) = p(φ and ψ) + p(φ and not-ψ)
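On a finite propositional fragment these conditions can be sanity-checked mechanically. In the following Python sketch (all names hypothetical), a probability field on two atoms is induced by a distribution over truth assignments, which automatically satisfies all three conditions:

```python
from itertools import product

# A probability field on sentences built from two atoms A, B. A sentence is
# represented by its truth function on assignments; p is induced by a
# distribution mu over the four truth assignments.
mu = {assign: w for assign, w in zip(product([False, True], repeat=2),
                                     [0.1, 0.2, 0.3, 0.4])}

def p(phi):
    """Probability of sentence phi (a truth function on assignments)."""
    return sum(w for assign, w in mu.items() if phi(assign))

A = lambda s: s[0]
B = lambda s: s[1]
NOT = lambda phi: (lambda s: not phi(s))
AND = lambda phi, psi: (lambda s: phi(s) and psi(s))
OR = lambda phi, psi: (lambda s: phi(s) or psi(s))

# Tautology condition: p(B or not-B) = 1
assert abs(p(OR(B, NOT(B))) - 1.0) < 1e-12
# Negation condition: p(not-A) = 1 - p(A)
assert abs(p(NOT(A)) - (1 - p(A))) < 1e-12
# Condition c: p(A) = p(A and B) + p(A and not-B)
assert abs(p(A) - (p(AND(A, B)) + p(AND(A, NOT(B))))) < 1e-12
print("all probability-field conditions hold")
```

Any p arising from a distribution over truth assignments passes these checks; the point of the construction below is to pick out one such field via an energy.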

Define the *energy* of a probability field p to be

E(p) := Sum_{φ} Sum_{v} 2^{-l(v)} E_{φ,v}(p(φ))

Here the **v** are pieces of evidence as defined in the previous post, the E_{φ,v} are their associated energy functions and l(**v**) is the length of (the encoding of) **v**. We assume that the encoding of **v** contains the encoding of the sentence φ for which it is evidence, and that E_{φ,v}(p(φ)) := 0 for all φ except the relevant one. Note that the associated energy functions are constructed in the same way as in the previous post; however, they are *not* the same, because of the self-referential nature of the construction: it refers to the final probability assignment.
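To make the definition concrete, here is a toy Python evaluation of E(p). The quadratic penalties are illustrative stand-ins for the evidence-derived energy functions of the previous post, and the evidence list is hypothetical:

```python
# Toy evaluation of E(p) = Sum_{phi} Sum_{v} 2^{-l(v)} E_{phi,v}(p(phi)).
# The real E_{phi,v} come from the previous post's evidence construction;
# here a quadratic penalty pulling p(phi) toward a target t stands in.
evidence = [
    # (sentence it is evidence for, encoding length l(v), target it pulls toward)
    ("phi1", 5, 1.0),   # short proof-like evidence for phi1
    ("phi1", 9, 0.0),   # longer counter-evidence for phi1
    ("phi2", 7, 1.0),
]

def energy(p):
    """Total energy of a probability field p, given as a dict sentence -> [0,1]."""
    return sum(2.0 ** (-l) * (p[phi] - t) ** 2 for phi, l, t in evidence)

p = {"phi1": 0.9, "phi2": 0.8}
print(energy(p))
```

Note how the 2^{-l(v)} weighting makes short pieces of evidence dominate: the length-9 counter-evidence contributes far less per unit of penalty than the length-5 evidence.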

The final probability assignment is defined to be

P(φ) = Integral_{p} [e^{-E(p)/T} p(φ)] / Integral_{p} e^{-E(p)/T}

Here T >= 0 is a parameter representing the magnitude of logical uncertainty. The integral is infinite-dimensional, so it is not obviously well-defined. However, I suspect it can be defined by truncating to a finite set of statements and taking a limit with respect to this set. In the limit T -> 0, the expression should correspond to computing the centroid of the set of minima of E (a set which is convex because E is convex).
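As a sanity check on the T -> 0 behaviour, here is a minimal numerical sketch: truncate the field to a single value x = p(φ), use an arbitrary convex stand-in for E (not the evidence-derived energy) with its minimum at 0.7, and evaluate the Gibbs average by quadrature. As T decreases, the average concentrates on the minimizer:

```python
import math

# One-sentence truncation of P(phi) = Int e^{-E(x)/T} x dx / Int e^{-E(x)/T} dx
# over x = p(phi) in [0,1], via midpoint quadrature. E is a convex stand-in
# with minimum at 0.7, purely for illustration.
E = lambda x: (x - 0.7) ** 2

def gibbs_average(T, n=20000):
    xs = [(i + 0.5) / n for i in range(n)]
    ws = [math.exp(-E(x) / T) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

for T in (1.0, 0.1, 0.001):
    print(T, round(gibbs_average(T), 4))
```

At high T the average is smeared toward the middle of [0,1]; by T = 0.001 it sits essentially at the minimizer 0.7, matching the claimed T -> 0 limit.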

# Remarks

- Obviously this construction is merely a sketch, and work is required to show that:
  - The infinite-dimensional integrals are well-defined
  - The resulting probability assignment is coherent for consistent theories and T = 0
  - The system overcomes the Loebian obstacle for tiling agents in some formal sense

- For practical application to AI we'd like an efficient way to evaluate these probabilities. Since the probabilities take the form of statistical-physics expectation values, it is suggestive to use similarly inspired Monte Carlo algorithms.
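For instance, a Metropolis-style sampler targeting e^{-E(p)/T} on a finite truncation might look like the following sketch (the two-sentence energy here is an arbitrary convex stand-in, not the evidence-derived one):

```python
import math, random

# Metropolis estimate of P(phi_0) = <p(phi_0)> under the Gibbs measure
# e^{-E(p)/T}, truncated to two sentences. E is an illustrative convex
# stand-in whose minimum is at p = (0.9, 0.2).
random.seed(0)

def E(p):
    return (p[0] - 0.9) ** 2 + 0.5 * (p[1] - 0.2) ** 2

def metropolis(T, steps=200000, step=0.1):
    p = [0.5, 0.5]          # initial probability field
    total = 0.0
    for _ in range(steps):
        j = random.randrange(2)      # pick one coordinate to perturb
        q = list(p)
        q[j] += random.uniform(-step, step)
        # Reject proposals leaving [0,1]; otherwise the standard Metropolis rule.
        if 0.0 <= q[j] <= 1.0 and \
           random.random() < math.exp(min(0.0, (E(p) - E(q)) / T)):
            p = q
        total += p[0]
    return total / steps

print(metropolis(T=0.01))   # should land near the minimizer value 0.9
```

At small T the chain concentrates near the minimum of E, so the estimate approaches the T -> 0 answer; at larger T it averages over a broader region of probability fields, reflecting greater logical uncertainty.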