AISC team report: Soft-optimization, Bayes and Goodhart


This is a report on our work in AISC Virtual 2023. For AISC 2023, our team looked into the foundations of soft optimization. Our goal at the beginning was to investigate variations of the original quantilizer algorithm, in particular by following intuitions that uncertainty about goals can motivate soft optimization. We ended up spending most of the time discussing the foundations and philosophy of agents, and exploring toy examples of Goodhart’s curse.

Our discussions centered on the form of knowledge about the utility function that an agent must have, such that expected utility maximization isn’t the correct procedure (from the designer's perspective). With well-calibrated beliefs about the true utility function, it’s always optimal to do Expected Utility maximization. However, there are situations where an agent is very sensitive to prior specification, and getting this wrong can have a large impact on the true utility achieved by the agent. Several other unrelated threads were pursued, such as an algorithm with high probability of above-threshold utility, and the relationship between SGD and soft optimization.

## The Bayesian view on Goodhart's law

Goodhart's Law states that maximizing an approximation of a true utility function, which we call the proxy utility function, leads to outcomes that are much less valuable than predicted by the proxy utility function. There is a Bayesian perspective from which it doesn't look like Goodhart's law will be a problem at all, namely if the agent has correct beliefs about the quality of the approximation. An ideal rational agent would represent the information they have about the world with a probability distribution that they arrived at by Bayesian reasoning. We assume that they represent their goals as a utility function over world states or as a belief distribution over such utility functions. Thus, they select actions by maximizing the expected utility over goals and world models. If the agent was able to perfectly capture all available information about goals into an "ideal"^{[1]} belief distribution, maximizing the utility in expectation over this distribution would be optimal.^{[2]} It would avoid Goodhart’s law as much as possible given the agent's knowledge by anticipating any overestimation of the true utility function by the proxy utility function. In this section, we try to formalize this intuition.

Sadly, Bayesian reasoning is intractable in practice, both because specifying the prior is difficult and because updating the distribution is computationally hard. This framing is explained in more detail in Does Bayes Beat Goodhart?.^{[3]} Beliefs about values are difficult to deal with, because you may not have a reliable way to update these beliefs after deployment. In this situation the agent's behavior is heavily dependent on the way we specify the priors.

## Setup and notation

We generally consider situations where an agent can choose a single action a∈A from some space A. For that choice, we are interested in knowledge about the (true) utility function v:A→R, a↦v(a) that we want the agent to maximize. The situation might include observed variables obs and hidden variables h. The agent models the true utility function, the observations and the hidden variables respectively by the random variables V, Obs and H. We denote the ideal prior belief distribution, without prior misspecification or compute limitation, by ϕ(V,Obs,H). We denote the belief distribution that the agent estimated by ˜ϕ(V,Obs,H).^{[4]}

Hidden variables are used in some of our toy examples to provide better intuition about the problem. The observed variables Obs could for example be a dataset, such as the human preference data used to train the reward model in RLHF. But in this post the observed variable is always a proxy utility function, Obs=U. We write E:=U−V for the "error" random variable that indicates the error between the proxy utility and true utility functions.

## Goodhart's law

Goodhart’s law can be formalized in different ways; here we focus on “regressional Goodhart”. We will model the situation where the difference between the true and proxy utilities for a given action a is given by an error term E(a):

U(a)=V(a)+E(a)

We will now assume that the proxy utility function is an unbiased estimator of the true utility function, i.e. Eϕ(U|V=v)U=v, or equivalently that the error function is the zero function in expectation, i.e. Eϕ(E|V=v)E=0.

In expectation over ϕ(V) and evaluated at any action a∈A, this yields

$$\underbrace{\mathbb{E}_{\phi(V,U)}\, U(a)}_{\text{estimate of } V(a)} = \mathbb{E}_{\phi(V)}\, V(a) = \mathbb{E}_{\phi(V,U)}\, V(a).$$

However, this equality is not true if the fixed action a is replaced with the action A⋆ that optimizes U, that is A⋆=argmaxa∈AU(a). Note here that A⋆ is itself now a random variable.

Now, we find that

$$\underbrace{\mathbb{E}_{\phi(V)\phi(U|V)}\, U(A^\star)}_{\text{estimate of } V(A^\star)} \;\geq\; \mathbb{E}_{\phi(V)\phi(U|V)}\, V(A^\star)$$

and the inequality is strict under mild assumptions,^{[5]} e.g. in all of our toy examples. Intuitively, A⋆ selects for positive error E=U−V, so that E(A⋆)=U(A⋆)−V(A⋆) is positive in expectation.

So, the expected value Eϕ(V)ϕ(U|V)V(A⋆) of the action selected by proxy maximization is lower than the proxy prediction Eϕ(V)ϕ(U|V)U(A⋆), even though we assumed that U was an unbiased estimator. This is one way of stating the regressional variant of Goodhart's law.
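This effect is easy to reproduce numerically. The following sketch (ours, assuming standard-normal true values and errors) shows the proxy value of the proxy-optimal action overestimating its realized true value by roughly a factor of two:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_trials = 1000, 2000

v = rng.normal(size=(n_trials, n_actions))   # true utilities V(a)
e = rng.normal(size=(n_trials, n_actions))   # unbiased error E(a)
u = v + e                                    # proxy utilities U(a)

best = np.argmax(u, axis=1)                  # A* maximizes the proxy
rows = np.arange(n_trials)
print(u[rows, best].mean())   # proxy's estimate of V(A*): large
print(v[rows, best].mean())   # realized V(A*): roughly half as large
```

With equal unit variances the posterior mean is E[V|U=u]=u/2, so the realized value of the selected action is about half of what the proxy promises.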

## How Bayes could beat Goodhart

From a Bayesian perspective, the situation after observing the proxy utility function U is best described by the conditional distribution ϕ(V|U=u). We will therefore look at the conditional expected true utility and write it as f(u,a)=Eϕ(V|U=u)V(a).

The Bayes-optimal action for belief ϕ is A⋆Bayes(u)=argmaxa∈Af(u,a). While A⋆ maximizes the proxy utility U, A⋆Bayes maximizes the conditional expected true utility f(U,a).

Just as is the case for U, this maximization objective has the same expectation over ϕ(V,U) as the true utility function V. We can see this for any action a∈A as follows:

$$\underbrace{\mathbb{E}_{\phi(V,U)}\, f(U,a)}_{\text{estimate of } V(a)} = \mathbb{E}_{\phi(V,U)}\, \mathbb{E}_{\phi(V'|U)}\, V'(a) = \mathbb{E}_{\phi(U)}\, \mathbb{E}_{\phi(V'|U)}\, V'(a) = \mathbb{E}_{\phi(V,U)}\, V(a).$$

But, in contrast to U, this equation also holds for the optimal action A⋆Bayes:

$$\underbrace{\mathbb{E}_{\phi(V,U)}\, f(U, A^\star_{\mathrm{Bayes}})}_{\text{estimate of } V(A^\star_{\mathrm{Bayes}})} = \mathbb{E}_{\phi(V,U)}\, \mathbb{E}_{\phi(V'|U)}\, V'(A^\star_{\mathrm{Bayes}}) = \mathbb{E}_{\phi(V,U)}\, V(A^\star_{\mathrm{Bayes}}).$$

Intuitively, this is due to the fact that the distribution ϕ(V|U) used in the Bayesian maximization objective, which we used as approximation of V, is the same as the distribution from which V is indeed sampled (at least according to the agent's best knowledge).

In this sense, "Bayes beats Goodhart". The Bayesian maximization objective Eϕ(V|Obs)V does not have the problem that by maximizing it, it ceases to be a good approximation of the true utility function V. As a consequence, the Bayesian maximization objective also doesn't suffer from Goodhart's law.

As a consequence of stochasticity in the belief distribution ϕ(V|U), the selected action might still turn out to perform poorly according to V, but it was the best the agent could have selected with its knowledge.
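A sketch (ours) of how conditioning helps in a simple heteroscedastic case: if each action's error scale σ is known to the agent, the Gaussian posterior mean E[V|U=u,σ]=u/(1+σ²) shrinks noisy proxies, and maximizing it achieves noticeably higher true value than maximizing the raw proxy:

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions, n_trials = 1000, 2000

sigma = rng.exponential(1.0, size=n_actions)   # known per-action error scale
v = rng.normal(size=(n_trials, n_actions))
u = v + sigma * rng.normal(size=(n_trials, n_actions))

# Posterior mean under the correct Gaussian model: E[V | U=u, sigma] = u / (1 + sigma^2)
f = u / (1 + sigma**2)

rows = np.arange(n_trials)
naive = np.argmax(u, axis=1)   # maximize the proxy directly
bayes = np.argmax(f, axis=1)   # maximize the conditional expected true utility
print(v[rows, naive].mean())   # value achieved by proxy maximization
print(v[rows, bayes].mean())   # value achieved by the Bayes-optimal action: higher
```

The proxy maximizer is drawn to high-variance actions whose large proxy values are mostly noise; the Bayesian objective discounts exactly those actions.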

## Why Bayes can't beat Goodhart

However, we think that in practice an agent would not be able to access the ideal belief distribution ϕ. This is firstly because specifying priors is notoriously difficult, and because optimal actions usually require pushing away from regions of action-space where we (the designers) have a lot of data about the true value, so an agent is always heavily reliant on prior information about the reliability of the proxy. And secondly, computational limitations might mean that the distribution ϕ(V|U) can't be computed exactly.

See Abram Demski’s post, Does Bayes Beat Goodhart?, for a more detailed discussion. We found that this topic was difficult to think about, and very philosophical.

Our best guess is that the prior has to capture all the meta-knowledge that is known to the designer, for instance about boundaries in action-space or outcome-space where there is a possibility of breaking the relationship between your proxy value and true value. One extreme example of such knowledge is deontological rules like "don't kill people". We are confused about exactly what the desiderata are for such a "correct" prior specification.

We explored examples related to this below, but don't feel like we have resolved our confusion completely yet. In the next sections we look at different types of concrete examples both with and without "objectively correct" beliefs.

## What if the agent can access the ideal belief distribution?

In this section, we consider the scenario of an agent that is able to represent its beliefs about the true values and the errors of the proxy variable with an ideal belief distribution. We worked through several examples, some more general and some more specific, to get a better intuition about when and why regressional Goodhart can cause the most damage, and how the Bayesian approach fares.

The recent post When is Goodhart catastrophic? by Drake Thomas and Thomas Kwa contains an in-depth analysis of a similar scenario with emphasis on how errors are distributed in heavy-tailed vs light-tailed distributions.

## The scenario

For various actions, we observe the proxy values u(a). Our goal is to use this information to choose the action with the highest expected value of V(a). We will now assume that we know the prior distribution of the true values and the distribution of the error term, and that they are independent for each action.

By Bayes’ theorem, we can then calculate the distribution of V(a) given an observed value u, using P[U(a)=u∣V(a)=v]=P[E(a)=u−v]:

$$P[V(a)=v \mid U(a)=u] = \frac{P[V(a)=v]\, P[E(a)=u-v]}{\int dv'\, P[V(a)=v']\, P[E(a)=u-v']}.$$

Given this distribution, it is now possible to calculate the expected value E[V(a)∣U(a)=u] and use this value to compare policies instead of the naive method of optimizing the proxy U(a). In other words, the rational choice of the action is

$$a^\ast = \operatorname*{arg\,max}_{a\in A}\, \mathbb{E}[V(a)\mid U(a)=u(a)].$$
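For one-dimensional prior and error distributions, this conditional expectation can be computed by simple grid integration. The helper below (our sketch; names are ours) is the pattern used in the examples that follow:

```python
import numpy as np

def posterior_mean_v(u, prior_pdf, error_pdf, grid):
    # E[V | U = u] via Bayes' theorem on a grid:
    # p(v | u) is proportional to prior(v) * p_error(u - v).
    w = prior_pdf(grid) * error_pdf(u - grid)
    return np.sum(grid * w) / np.sum(w)

grid = np.linspace(-10, 10, 4001)
std_normal = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Standard normal prior and error: the posterior mean shrinks u by half.
print(posterior_mean_v(3.0, std_normal, std_normal, grid))  # ≈ 1.5
```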

## Example: Generalized normal distributions

Here, we assume that both the true values and the error term are distributed according to the generalized normal distribution with standard deviation σ=1. The shape parameter β of these distributions determines their "tailedness"; distributions with a smaller shape parameter have heavier tails. We vary the shape parameters of both the error and the true-value distribution and calculate the expected true value conditional on the proxy value, E[V(a)∣U(a)=u(a)], using numerical integration. The plots show the expected conditional values as a function of u, with the identity function shown for comparison as a dashed line.

Only in the case where βV>βE and βE≤1 do we observe that the expected-value graph does not go to infinity for large values of the proxy value. That is, only in these cases do we observe Goodhart's curse. This seems to be in agreement with the result proven in Catastrophic Regressional Goodhart: Appendix.

## Example: Normally distributed error term with randomly drawn standard deviations

We wanted a concrete example where the error distribution has a free parameter that controls the variance, and that is hidden upon observing the proxy value; intuitively, this should create a situation where maximizing the proxy selects for high-variance errors and thereby reduces the correlation between proxy and true value. In this example, we assume the error is normally distributed with mean 0, but for each action the standard deviation itself is randomly drawn from an exponential distribution with rate parameter λ=0.3. For concreteness, the error values can be sampled using the following Python code:
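A minimal sampling sketch matching this description (our reconstruction; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_error(n):
    # Each action's error standard deviation is drawn from an exponential
    # distribution with rate 0.3 (numpy parameterizes by scale = 1/rate).
    sigma = rng.exponential(scale=1 / 0.3, size=n)
    return rng.normal(loc=0.0, scale=sigma)
```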

The resulting distribution of the error term can be calculated as

$$P[E(a)=x] = \int_0^\infty P[E(a)=x \mid \sigma(a)=s]\, P[\sigma(a)=s]\, ds = \frac{0.3}{\sqrt{2\pi}} \int_0^\infty \frac{1}{s}\exp\!\left[-\frac{1}{2}\left(\frac{x}{s}\right)^2 - 0.3\, s\right] ds$$

We will now look at two cases for the distribution of the true values.

In the first case, we assume that the prior values are uniformly distributed in the interval [-0.5, 0.5]. Here, the graph of the expected true values looks like this:

In the second case, we assume a standard Gaussian distribution (i.e. with mean μ=0 and standard deviation σ=1). The graph of the expected true values looks very similar:

In both cases, it is clearly not optimal to choose the policies with the largest proxy values. Instead, values quite close to u=0 have much higher expected value and the expected value decays towards 0 (the mean of the prior distribution) for very high values.

## Example: Laplace-distributed error term with randomly drawn scale parameters

In this example, we assume the error is Laplace-distributed with mean 0 and scale parameter 0.9, but the error is “stretched” by multiplying it with a random value from an exponential distribution (with rate parameter 1) taken to the power of 6. This was done in an attempt to make the Goodhart effect much stronger. In other words, the error values can be sampled using the following Python code:
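A minimal sampling sketch matching this description (our reconstruction; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_error(n):
    # Per-action "stretch" factor: Exponential(rate 1) raised to the 6th power.
    sigma = rng.exponential(scale=1.0, size=n) ** 6
    # Laplace error with location 0 and scale 0.9 * sigma.
    return rng.laplace(loc=0.0, scale=0.9 * sigma, size=n)
```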

The resulting distribution can be calculated similarly to the previous example.^{[6]} We again calculated the resulting graph of the expected true values for the case of a uniform and a Gaussian distribution of the true values (both with the same parameters as before).

## Example: Goodhart's rollercoaster

A fun phenomenon occurs when the distribution of the error term has multiple modes. Here we combined three heavy-tailed distributions and generated the error values using the code:
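The exact snippet and its parameters are not reproduced here, so the following is a hypothetical stand-in: a three-component mixture with invented modes and Student-t tails, meant only to illustrate the multi-modal construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_error(n):
    # Choose one of three heavy-tailed components per action,
    # each centered at a different mode (modes invented for illustration).
    modes = np.array([-3.0, 0.0, 3.0])
    component = rng.integers(0, 3, size=n)
    # Student-t with 2 degrees of freedom gives heavy tails.
    return modes[component] + rng.standard_t(df=2, size=n)
```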

Combined with uniformly-distributed true values, this results in an expected true value graph like this:

This illustrates that if there are different sources of errors, each with different mean, regressional Goodhart can occur not only in the extreme values, but also in intermediate values in the domain of the proxy function.

## What if the agent can't access the ideal belief distribution?

As discussed, we think that action selection by EU maximization with the ideal belief, a⋆Bayes=argmaxa′∈AEϕ(V|Obs=obs)V(a′), is the best thing to do if feasible, but it is infeasible in realistic situations due to the unavailability of ϕ. This raises the question of what action-selection procedure (algorithm) to employ instead. We search for algorithms whose performance (as evaluated by the true value functions for a set of representative situations) is as close as possible to the optimal performance reached by EU maximization with the ideal belief. In other words: we search for good approximations of action selection by EU maximization with the ideal belief.

## The naive algorithm and a second-level Goodhart's law

A salient candidate for the approximation of action selection by EU maximization with the ideal belief ϕ is what we call the naive algorithm or naive EU maximization: action selection by EU maximization with an approximate/estimated belief ˜ϕ:

$$a^\star_{\widetilde{\mathrm{Bayes}}} = \operatorname*{arg\,max}_{a'\in A}\, \mathbb{E}_{\tilde\phi(V|Obs=obs)}\, V(a')$$

Curiously, and as observed in Does Bayes Beat Goodhart?, this brings us back into a familiar situation: We found that, in order to optimize V when just knowing ϕ, the correct objective to maximize is Eϕ(V|Obs)V. However, we noticed that the agent can't typically access this objective and proposed it might maximize E˜ϕ(V|Obs)V instead. Just as in the original situation involving V and U, the agent ends up maximizing a proxy of the true objective. This is the recipe for Goodhart's law. Regardless, we hope that Bayesian reasoning would still reduce the extent of Goodhart's law in the sense that a solution that avoids it needs to "bridge less of a gap".

With Obs=U, this situation arises in the context of trying to beat Goodhart's law by Bayesian reasoning, so in some sense on a second level. We could of course attempt to also capture this situation by Bayesian modelling of a meta distribution over ϕ and ˜ϕ, but that would just give us a third-level Goodhart's law. Generally, for any meta level of Bayesian reasoning, there is a Goodhart's law one level above. While we hope that Bayesian reasoning on the first level alleviates Goodhart's law to some degree, our intuition is that further levels of Bayesian reasoning are not helpful. An attempt at an argument for this intuition: ϕ already captures all the agent's prior knowledge, and ˜ϕ already approximates ϕ to the agent's best ability. So if, by replacing ˜ϕ with an expected value of ϕ over the (approximate, not ideal) meta distribution, the agent captures its knowledge better, then it must have done a poor job of computing ˜ϕ in the first place.

In the next section, we will describe a toy example that we will use to test the robustness of the naive approach to Goodhart's law.

## A simulated toy example with difference between ideal and estimated belief distribution

In this example we will analyze how an agent can utilize the true belief distribution ϕ(V|U) to choose actions to avoid Goodhart's law, and whether we need this.

If we knew the true belief distribution ϕ(V|U), we could label actions as safe or unsafe depending on how correlated V and U are according to ϕ. In the figure below we show an extreme example where the true values of “safe actions” are distributed with low variance around the true-value mean, while those of unsafe actions are distributed with high variance around it.

The red curve is a “regression curve” which represents the expected conditional true utility for each proxy utility. This expectation is calculated locally across bins of the proxy utility for visualization purposes.

We can observe that unsafe actions can have two unintended consequences. First, obviously the proxy utilities of unsafe actions are useless as they do not correlate significantly with true utilities. However, the second consequence is that unsafe actions can introduce extremal Goodhart, as the expected true utility (regression curve) decreases for extremal values of the proxy utility.

By knowing ϕ(V|U), we can avoid taking actions for which proxy utilities do not correlate with true utility. Since we do not have knowledge about ϕ(V|U), we must rely on an estimated belief distribution ˜ϕ(V|U).

If we assume ˜ϕ(V|U) is distributed in the same way as ϕ(V|U), we can observe that, while ˜ϕ and ϕ correlate for safe actions, there is still a Goodhart effect due to the unsafe actions. This example illustrates that approximating ϕ(V|U) by a prior belief ˜ϕ(V|U) can still introduce Goodhart. This observation made us consider the use of soft optimizers to deal with imperfect information about the prior beliefs.
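A compact simulation (ours) of this setup: an "ideal" belief that knows each action's error scale avoids the unsafe actions, while a pooled "estimated" belief that applies the same shrinkage everywhere still falls for them:

```python
import numpy as np

rng = np.random.default_rng(5)
n_actions, n_trials = 2000, 500
unsafe = rng.random(n_actions) < 0.2
sigma = np.where(unsafe, 5.0, 0.1)          # per-action error scale

v = rng.normal(size=(n_trials, n_actions))
u = v + sigma * rng.normal(size=(n_trials, n_actions))

ideal = u / (1 + sigma**2)                  # belief that knows which actions are unsafe
naive = u / (1 + np.mean(sigma**2))         # pooled belief: same shrinkage everywhere

rows = np.arange(n_trials)
print(v[rows, np.argmax(ideal, axis=1)].mean())   # high: picks safe actions
print(v[rows, np.argmax(naive, axis=1)].mean())   # low: picks lucky unsafe actions
```

With uniform shrinkage the naive objective still ranks actions by raw proxy value, so it keeps selecting unsafe actions whose large proxies are mostly noise.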

## Conclusion

We investigated concrete examples to build intuitions about Goodhart's law and found a Bayesian perspective to be helpful for that. We concluded that Goodhart's law can indeed be beaten by Bayesian reasoning, but that belief misspecification can reintroduce Goodhart's law. As future work, we are still working on empirical evaluations of soft optimization as an alternative to naive Bayesian expected utility maximization.

## Other Results

## Stochastic gradient descent as a soft optimizer

Although soft optimizers seem somewhat artificial at first, a very common soft optimizer is in fact the simplest of optimizers: standard stochastic gradient descent with a constant step size. The reason is exactly that stochastic gradient descent is stochastic, so we can think of it as a sequence of random variables that converges to a stationary distribution around the minimum.

To make this more clear, consider a loss function l(θ)=E[L(θ)], with L(θ) being a random loss function whose distribution depends on the data. Then, assume we are doing SGD with a step size β, starting from some θ0∼π(θ0). Letting L1(θ),…,LB(θ) be i.i.d. minibatch samples of L(θ), the SGD update forms a Markov chain defined as

$$\theta_{t+1} = \theta_t - \frac{\beta}{B}\sum_{b=1}^{B} \nabla L_b(\theta_t)$$

The following is a standard result, which we briefly recap here. We can analyze this chain by approximating the loss near the minimum as quadratic, l(θ) ≈ ½θᵀHθ, and modeling the per-sample gradient noise as Gaussian with covariance Σ, which yields a continuous-time limit.

The SDE limit becomes $d\theta_t = -\beta H \theta_t\, dt + \beta B^{-1/2} \Sigma^{1/2}\, dW_t$, which we recognize as an Ornstein–Uhlenbeck (OU) process, whose stationary distribution is $p(\theta) \propto \exp\!\left(-\tfrac{1}{2}\theta^T A^{-1} \theta\right)$, where A is the solution of the equation $AH + HA = \beta B^{-1}\Sigma$. We can solve this equation for A by letting $H = Q\,\mathrm{diag}(\lambda)\,Q^T$ be the eigendecomposition of H, defining $\Sigma_Q := Q^T \Sigma Q$, and letting^{[7]}

$$A = \frac{\beta}{B}\, Q \Gamma Q^T, \qquad \Gamma_{i,j} := \frac{(\Sigma_Q)_{i,j}}{\lambda_i + \lambda_j}.$$

We then conclude that SGD with a constant time step is a soft optimizer that, at convergence, approximately follows a Gaussian distribution whose covariance is proportional to the step size β and to the innate covariance Σ of L, and inversely proportional to B and the eigenvalues of H.
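As a one-dimensional sanity check (our sketch), simulating constant-step SGD on the quadratic loss l(θ)=hθ²/2 reproduces the predicted stationary variance βΣ/(2hB):

```python
import numpy as np

rng = np.random.default_rng(2)
h, beta, B, noise_var = 1.0, 0.1, 4, 1.0
steps = 100_000

theta, samples = 0.0, []
for t in range(steps):
    # Minibatch gradient of l(theta) = h * theta**2 / 2, plus sampling noise.
    grad = h * theta + rng.normal(scale=np.sqrt(noise_var), size=B).mean()
    theta -= beta * grad
    if t > steps // 2:                  # discard burn-in
        samples.append(theta)

print(np.var(samples))                  # empirical stationary variance
print(beta * noise_var / (2 * h * B))   # small-step theory: 0.0125
```

The empirical variance slightly exceeds the small-β formula (the exact discrete-time value is βΣ/(Bh(2−βh))), but the proportionalities in β, Σ, h and B are exactly as stated above.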

An interesting phenomenon arising in high-dimensional probability is that, under usual conditions, the distribution gets highly concentrated in a shell around the mode, called the typical set. In the case of the distribution p(θ), we can find this typical set by noticing that, if Θ∼p(θ), then Θ=A^{1/2}Z with Z∼N(0,I), so Y := Θ^T A^{-1} Θ ∼ χ²(d), with χ²(d) being the chi-squared distribution with d degrees of freedom and d the dimension of the parameter space. Therefore we find that E[Y]=d and, using Chebyshev's inequality, P(|Y−d|>td) < 2d^{-1}t^{-2}, so the typical set becomes the ellipsoid θ^T Q Γ^{-1} Q^T θ = βd/B, and the relative distance to that set shrinks as d grows.^{[8]}

This implies that, in very high dimensions, constant-step SGD is not only soft-optimizing the minimum: it in fact rarely gets very near the minimum, and just hovers around the typical set. Of course, SGD with a constant step size is rarely used in real situations; instead, we use adaptations such as Adam with a schedule for varying the step size. Although the analysis in these cases is somewhat harder, we can conjecture that the general idea still holds, and we still have the same proportionality results. If true, this shows that, when thinking about soft optimizers, we should always remember that soft optimizers are "out there", already being used all the time.
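A quick numerical illustration (ours) of this concentration: writing Θ=A^{1/2}Z with Z∼N(0,I), the quantity Y=ΘᵀA⁻¹Θ=‖Z‖² concentrates at d with relative fluctuations of order √(2/d):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 1000
z = rng.normal(size=(2000, d))     # Theta = A^{1/2} Z, so Theta^T A^{-1} Theta = ||Z||^2
y = np.sum(z**2, axis=1)           # chi-squared with d degrees of freedom

print(y.mean() / d)                # ≈ 1: Y concentrates at d
print(y.std() / d)                 # ≈ sqrt(2/d) ≈ 0.045: a narrow relative shell
```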

## High probability soft optimizer alternative formulation and proof

Another soft optimizer, one that explicitly tries to be a soft optimizer (unlike constant-step SGD), is the quantilizer, as proposed by Jessica Taylor. Here, we give an alternative formulation of the quantilizer, one that we believe is more intuitive and slightly more general. To maintain continuity with the previous section, we use a minimization framing, consistent with the SGD discussion and in contrast to the maximization framing used in the original paper.

Let l:Θ→R be a function^{[9]} to be soft-minimized on the state space Θ. We can, for any t∈R, consider the sublevel set A_{t,l}⊆Θ given by A_{t,l} := {θ∈Θ : l(θ)≤t}. Now, assume a prior probability measure μ on Θ. With it, we can define a function g_{μ,l}:R→[0,1] given by g_{μ,l}(t) = μ(A_{t,l}). This function is non-decreasing in t and tends to 1 as t→∞.

Therefore, we can define a function h_{μ,l}:[0,1]→R, working as a "generalized inverse" of g_{μ,l} (h_{μ,l} = g_{μ,l}^{-1} if g_{μ,l} is invertible), by h_{μ,l}(q) = sup{t∈R : g_{μ,l}(t) ≤ q}. With it, we can define the q-quantilizer set Θ_q by letting t_q = h_{μ,l}(q) and Θ_q := A_{t_q,l}. Of course, this defines a quantilizer function σ_{μ,l}:[0,1]→2^Θ by σ_{μ,l}(q) = Θ_q. Moreover, we can define the quantilizer distribution function ρ_{μ,l}:[0,1]→P(Θ) by restricting μ to Θ_q and renormalizing, such that ρ_{μ,l}(q) = μ(·∩Θ_q)/μ(Θ_q) =: μ_{q,l}.

How is this a reformulation? In the original work, the desideratum used is "choosing an action from the set {a∈A : E[U(W(a))]≥t}". In our language (under a suitable change to a minimization framing), this is exactly one of the sublevel sets, and the reordering of the action set and choosing a random variable from [1−q,1] is exactly sampling one of the q-best actions based on the prior μ, that is, sampling from μ_{q,l}. We find the explicit reordering of the action space awkward, lacking geometric intuition, and hard to generalize to continuous spaces. Instead, we argue that the above reformulation is more intuitive while having the same properties as the original formulation.
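An empirical version of this construction (our sketch; names are ours): estimate the threshold h_{μ,l}(q) by the q-th quantile of losses over prior samples, then sample uniformly from the retained sublevel set:

```python
import numpy as np

def quantilize(loss, prior_samples, q, rng):
    # Empirical q-quantilizer: keep the fraction q of prior samples with
    # the lowest loss (the sublevel set), then sample one of them.
    values = loss(prior_samples)
    threshold = np.quantile(values, q)        # empirical h_{mu,l}(q)
    kept = prior_samples[values <= threshold]
    return kept[rng.integers(len(kept))]

rng = np.random.default_rng(4)
samples = rng.normal(size=10_000)             # draws from the prior mu
action = quantilize(lambda x: (x - 1.0) ** 2, samples, q=0.05, rng=rng)
print(action)  # a draw near the minimum at 1.0, but not the exact argmin
```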

We can gain some geometric intuition on quantilizers in R^d by assuming again that, around the minimum, we have l(θ) ≈ θ^T H θ. Then our quantilizer sets become (approximately) the ellipsoids A_{t,l} ≈ {θ^T H θ ≤ t}. For large d, we again have concentration phenomena in quantilizers. In fact, letting V(A_{t,l}) be the d-dimensional volume of A_{t,l}, we find, for some δ<t, that V(A_{t−δ,l})/V(A_{t,l}) = (1−δ/t)^{d/2}. So, if μ is approximately uniform around the minimum, then for large d and small q, almost all of the measure of μ_{q,l} is concentrated around the set {θ^T H θ = t_q}. In this case, we conclude that the quantilizer becomes an almost exact quantilizer, sampling from the exact q-best values of l(θ). Of course, unlike in the case of SGD, it is not clear that the action set A in the original framing should be modeled as a subset of R^d with all its geometric structure, so whether this result is relevant depends on what is being modeled.

^{^} By "ideal" we mean that the agent has arrived at the distribution by Bayesian updates from a well-specified prior, and it includes all the explicit and implicit knowledge available to the agent. From the subjective perspective of the agent, the variables appear (according to some notion that is not very clear to us) to be sampled from this distribution. In the simplest case, the world literally samples from this distribution, e.g. by rolling a die, and the agent has knowledge about the probabilities involved in that sampling process. We are still philosophically confused about what exactly characterizes this ideal distribution and whether there is really only one such distribution in any situation.

^{^}With respect to the true value function, which is the value function the designer intends for the agent.

^{^}“One possible bayes-beats-goodhart argument is: "Once we quantify our uncertainty with a probability distribution over possible utility functions, the best we can possibly do is to choose whatever maximizes expected value. Anything else is decision-theoretically sub-optimal." Do you think that the true utility function is really sampled from the given distribution, in some objective sense? And the probability distribution also quantifies all the things which can count as evidence? If so, fine. Alternatively, do you think the probability distribution really codifies your precise subjective uncertainty? Ok, sure, that would also justify the argument.”

^{^}Note that we regard ϕ and ˜ϕ as denoting probability measures, joint, conditional and marginal distributions, probability density functions or probability mass functions as convenient.

^{^} To show this, we first show $\mathbb{E}_{\phi(U|V=v)} U(A^\star) \geq \mathbb{E}_{\phi(U|V=v)} v(A^\star)$ and then take the expectation over ϕ(V). Define $\hat a := \operatorname*{arg\,max}_{a\in A} v(a)$. With that, we have

$$\mathbb{E}_{\phi(U|V=v)} U(A^\star) \geq \mathbb{E}_{\phi(U|V=v)} U(\hat a) = \mathbb{E}_{\phi(U|V=v)} v(\hat a) \geq \mathbb{E}_{\phi(U|V=v)} v(A^\star).$$

The strict version of this inequality clearly holds if $\phi(U(A^\star) = U(\hat a)) < 1$ or $\phi(v(A^\star) = v(\hat a)) < 1$.

^{^}The exponential distribution has pdf (for x≥0):

f(x)=λe−λx

When values drawn from this distribution are transformed by g(x)=x6, the pdf of the transformed variable σ is given by:

$$P(\sigma = y) = f(g^{-1}(y))\,\frac{d}{dy}\, g^{-1}(y) = \frac{\lambda}{6} \exp\!\left(-\lambda y^{1/6}\right) y^{-5/6}.$$

Our noise function is Laplace-distributed with location parameter μ=0 and scale parameter b=0.9σ and therefore has the pdf (given σ)

$$P(\epsilon = x \mid \sigma) = \frac{1}{2b} \exp\!\left(-\frac{|x-\mu|}{b}\right) = \frac{1}{2\cdot 0.9\,\sigma} \exp\!\left(-\frac{|x|}{0.9\,\sigma}\right).$$

Therefore, if we integrate out σ, we have

$$\begin{aligned}
P(\epsilon = x) &= \int_0^\infty d\sigma\, P(\epsilon = x \mid \sigma)\, P(\sigma) \\
&= \int_0^\infty d\sigma\, \frac{1}{2\cdot 0.9\,\sigma} \exp\!\left(-\frac{|x|}{0.9\,\sigma}\right) \frac{\lambda}{6} \exp\!\left(-\lambda \sigma^{1/6}\right) \sigma^{-5/6} \\
&= \frac{\lambda}{10.8} \int_0^\infty d\sigma\, \exp\!\left(-\frac{|x|}{0.9\,\sigma} - \lambda \sigma^{1/6}\right) \sigma^{-11/6} \\
&= \frac{\lambda}{9} \int_0^\infty dr\, \exp\!\left(-\frac{|x|\, r^{6/5}}{0.9} - \lambda r^{-1/5}\right) \quad \text{by substituting } r(\sigma) = \sigma^{-5/6},
\end{aligned}$$

where the last substitution is useful to simplify numerical integration.

^{^} One can show that the general equation PX+XP=Y, for P positive-definite, can be solved using the eigendecomposition $P = Q\Lambda Q^T$, $\Lambda = \mathrm{diag}(\lambda)$:

$$\begin{aligned}
PX + XP = Y &\implies Q^T(PX+XP)Q = Q^T Y Q \\
&\implies Q^T P Q\, Q^T X Q + Q^T X Q\, Q^T P Q = Q^T Y Q \\
&\implies \Lambda X' + X' \Lambda = Y', \quad X' := Q^T X Q,\ Y' := Q^T Y Q \\
&\implies \lambda_i X'_{i,j} + \lambda_j X'_{i,j} = Y'_{i,j} \implies X'_{i,j} = Y'_{i,j}\,(\lambda_i + \lambda_j)^{-1}.
\end{aligned}$$

^{^}Much sharper exponential bounds can be derived, see here.

^{^}We assume everything to be well-behaved enough (measurable functions and measurable sets) so we don't need to do measure theory here.