Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Iteration Fixed Point Exercises

22nd Nov 2018



#3:

f(x) = √(x²+1) on ℝ shortens all distances but is strictly monotonic (and has no fixed point, since f(x) > x everywhere).

#6: (the "show that if" condition follows from the property, the question is likely misstated)

The iteration is so long that it must visit an element twice. We can't have a cycle in the order so the repetition must be immediate.

Thanks, I actually wanted to get rid of the earlier condition that f(x) ≤ x or f(x) ≥ x for all x, and I did that.

Answer to question 1.

Let x_{n+1} = f(x_n) for arbitrary x_0. Call d(x_0, x_1) = D. Then by induction d(x_n, x_{n+1}) ≤ q^n⋅D, so for m > n, d(x_n, x_m) ≤ Σ_{i=n}^{m−1} q^i⋅D ≤ q^n⋅D/(1−q) (power series simplification).

Therefore d(x_n, x_m) → 0, ie {x_n} is a Cauchy sequence. However (X,d) is said to be complete, which by definition means any Cauchy sequence is convergent. So x_n → x* and d(x_n, x*) ≤ q^n⋅D/(1−q). So {x_n} converges exponentially quickly.
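As a quick numerical check of this geometric bound, here is a sketch; the particular map f(x) = cos x on [0, 1] is my own example of a contraction (with q = sin 1 ≈ 0.84), not one from the thread:

```python
import math

# f(x) = cos(x) maps [0, 1] into itself, and |f'(x)| = |sin x| <= sin(1) < 1
# there, so f is a contraction with q = sin(1).
f = math.cos
q = math.sin(1.0)

x0 = 1.0
D = abs(f(x0) - x0)  # D = d(x_0, x_1)

xs = [x0]
for _ in range(100):
    xs.append(f(xs[-1]))
x_star = xs[-1]  # numerically converged fixed point (cos x* = x*)

# Check the bound d(x_n, x*) <= q^n * D / (1 - q) from the proof above.
for n in range(30):
    assert abs(xs[n] - x_star) <= q ** n * D / (1 - q) + 1e-12
print("fixed point ~", x_star)
```

The limit is the unique solution of cos x = x (≈ 0.7391), and the assertion confirms the exponential envelope from the proof.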

Answer to question 2.

From part 1, x_n → x*. As f is continuous (any contraction is continuous), f(x*) = lim f(x_n) = lim x_{n+1} = x*. So x* is a fixed point. Suppose x and y are both fixed points of a contraction map. Then d(x,y) = d(f(x),f(y)) ≤ q⋅d(x,y) with q < 1, so (1−q)⋅d(x,y) ≤ 0, therefore d(x,y) = 0, so x = y. Thus f has a unique fixed point.

Answer to question 3.

(ℝ, |x−y|) is a complete metric space. It's the real line with normal distance. Let f(x) = √(x²+1). Then f is a (weak) contraction map because f is differentiable and has the property |f′(x)| = |x|/√(x²+1) < 1. However no fixed point exists as f(x) > x for all x. This works because the sequence generated from repeated applications of f will tend to infinity, despite successive terms becoming ever closer.
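The divergence-with-shrinking-steps behaviour is easy to watch numerically. A sketch, using f(x) = √(x²+1) as the weak contraction (the specific function is my choice of standard example):

```python
import math

# |f'(x)| = |x| / sqrt(x^2 + 1) < 1 everywhere, so f shortens all distances,
# yet f(x) > x for every x, so there is no fixed point.
def f(x):
    return math.sqrt(x * x + 1.0)

x, gaps = 0.0, []
for _ in range(10_000):
    nxt = f(x)
    gaps.append(nxt - x)  # how far this application moved the point
    x = nxt

assert all(b < a for a, b in zip(gaps, gaps[1:]))  # steps strictly shrink...
print("...but the orbit still escapes:", x)        # f^n(0) = sqrt(n) exactly
```

Since x_{k+1}² = x_k² + 1, the orbit from 0 is exactly √n after n steps: the step sizes go to 0 while the orbit goes to infinity.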

For Q2, I believe you aren't done:

You have established that there is at most one fixed point, but not that a fixed point exists.

For question 2

you haven't proven f is continuous

For question 3 you say

"f is a contraction map because f is differentiable and ..."

I would think proving this is part of what is asked for.

#6:

Assume WLOG x ≤ f(x). Then by monotonicity, we have x ≤ f(x) ≤ f²(x) ≤ … ≤ f^{|P|}(x). If this chain were all strictly increasing, then we would have |P|+1 distinct elements. Thus there must be some k such that f^k(x) = f^{k+1}(x). By induction, f^n(x) = f^k(x) for all n ≥ k.

#7:

Assume x ≤ f(x) and construct a chain f^β(x) similarly to (6), indexed by the ordinals β ≤ α (taking least upper bounds at limit ordinals). If all inequalities were strict, we would have an injection from α to L, contradicting |α| > |L|.

#8:

Let F be the set of fixed points. Any subset S of F must have a least upper bound x in L. If x is a fixed point, done. Otherwise, consider f^α(x) for |α| > |L|, which must be a fixed point by (7). (Note f(x) ≥ x: for every q in S, x ≥ q implies f(x) ≥ f(q) = q, so f(x) is an upper bound of S and hence f(x) ≥ x.) For any q in S, we have f^α(x) ≥ f^α(q) = q. Thus f^α(x) is an upper bound of S in F. To see that it is the least upper bound, assume we have some other upper bound b of S in F. Then b ≥ x, so b = f^α(b) ≥ f^α(x).

To get the lower bound, note that we can flip the inequalities in L and still have a complete lattice.

#9:

P(A) clearly forms a complete lattice where the upper bound of any set of subsets is their union, and the lower bound is the intersection.

To see that injections are monotonic, assume S ⊆ T and f is an injection. For any function, S ⊆ T implies f(S) ⊆ f(T). If S ⊊ T and f(S) = f(T), that implies f(t) ∈ f(S) for some t ∈ T − S, which is impossible since f is injective. Thus f is (strictly) monotonic.

Now h = g∘f is an injection A → A. Let C = A − g(B) be the set of all points not in the image of g, and let A′ = ⋃_{n≥0} h^n(C). Note that A′ = C ∪ h(A′), and C ∩ h(A′) = ∅, since no element of C is in the image of g. Then set B′ = f(A′). On one hand, every element of A not contained in A′ is in g(B) by construction (it is not in C), and its g-preimage cannot lie in B′ = f(A′) (else the element would be in h(A′) ⊆ A′), so A − A′ ⊆ g(B − B′). On the other, clearly g(B − B′) ∩ A′ = ∅ (such an element is not in C, and not in h(A′) = g(f(A′)) by injectivity of g), so g(B − B′) ⊆ A − A′. QED.

#10:

We form two bijections using the sets from (9), one between A' and B', the other between A - A' and B - B'.

Any injection is a bijection between its domain and image. Since B′ = f(A′) and f is an injection, f restricted to A′ is a bijection between A′ and B′, where we can assign each element b ∈ B′ to the a ∈ A′ such that f(a) = b. Similarly, g restricted to B − B′ is a bijection between B − B′ and A − A′. Combining them, we get a bijection on the full sets.
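The construction from (9) and (10) can be exercised on a concrete pair of injections. A sketch (f(n) = 2n and g(n) = n + 1 on the naturals are my own example, not from the thread; here A′ comes out as {0, 1, 3, 7, 15, …}):

```python
# Injections f: A -> B and g: B -> A, with A = B = the naturals.
f = lambda a: 2 * a          # injection A -> B
g = lambda b: b + 1          # injection B -> A; its image misses only 0
g_inv = lambda a: a - 1

LIMIT = 10_000

# A' = union of (g o f)^n(C), where C = A - g(B) = {0}.
A_prime, frontier = set(), {0}
while frontier:
    A_prime |= frontier
    frontier = {g(f(a)) for a in frontier if g(f(a)) < LIMIT} - A_prime

# The Cantor-Schroeder-Bernstein bijection: f on A', g^{-1} elsewhere.
def bij(a):
    return f(a) if a in A_prime else g_inv(a)

images = [bij(a) for a in range(LIMIT)]
assert len(set(images)) == len(images)        # injective on this window
assert set(images) >= set(range(LIMIT // 2))  # hits every small element of B
print("A' starts with:", sorted(A_prime)[:6])
```

The two assertions check exactly the claim above: f on A′ and g⁻¹ on A − A′ glue into a bijection (verified here on a finite window of ℕ).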

Q1

We wish to show that the terms of x_n = f^n(x_0) form a Cauchy sequence, which suffices to demonstrate they converge in a complete space. Take x_m and x_n, and WLOG m < n. Then we know from the definition of contraction that d(x_m, x_n) ≤ q^m⋅d(x_0, x_{n−m}) ≤ q^m⋅(d(x_0, x_1) + d(x_1, x_2) + …) ≤ q^m⋅d(x_0, x_1)/(1−q). This converges to 0 as m increases, so the sequence is Cauchy.

It's easy to see that this makes the rate of convergence between terms of the Cauchy sequence exponentially quick. Intuitively that seems like it ought to make the sequence converge to its limit with the same speed, but I don't think that can be made rigorous without more steps.

Q2

Take a sequence x_n = f^n(x_0). This converges to some x*. Suppose x* was not a fixed point. Then choose an ε < d(x*, f(x*))/2. A sequence which converges to a limit has, for every ε, some N such that d(x_n, x*) < ε for all n ≥ N. Then we know that d(x_N, x*) < ε but d(f(x_N), f(x*)) = d(x_{N+1}, f(x*)) ≥ d(x*, f(x*)) − d(x_{N+1}, x*) > 2ε − ε = ε > d(x_N, x*), contradicting the contraction condition. So there is at least one fixed point, x*.

Suppose there are two fixed points, f(x) = x and f(y) = y, for distinct x and y. If so, d(f(x), f(y)) = d(x, y), which again contradicts the contraction condition. So there is at most one fixed point.

Q3

Take as the space [1, ∞), with the usual metric. Define f(x) = x + 1/x. This is a weak contraction (toward infinity) and has no fixed points within this space.

Ex 6:

If at any point f^{n+1}(x) = f^n(x), then we're done. So assume that we get a strict increase each time up to f^{|P|−1}(x). Since there are only |P| elements in the entire poset, and f is monotone, f^{|P|}(x) has to equal f^{|P|−1}(x).

Ex 7:

For a limit ordinal α, define f^α(x) as the least upper bound of f^β(x) for all β < α. If |α| > |L|, then the set of ordinals β ≤ α is a set of size |α| that maps into a set of size |L| by taking the value f^β(x) of the element. Since there are no injections between these sets, there must be two ordinals β₁ < β₂ such that f^{β₁}(x) = f^{β₂}(x). Since f is monotone, that implies that f^β(x) = f^{β₁}(x) for every ordinal β₁ ≤ β ≤ β₂; taking β = β₁ + 1 shows f^{β₁}(x) is a fixed point. Since f^α(x) then also equals f^{β₁}(x), this proves the exercise.

Ex 8:

Starting from x with f(x) ≥ x, we can create a fixed point via iteration by taking f^α(x) for |α| > |L|, iterating α times as demonstrated in Ex 7. Call this fixed point x*. Suppose there was a fixed point y such that y ≥ x but not y ≥ x*. Then at some point in the iteration f^β(x) ≤ y, but f^{β+1}(x) ≤ y fails, which breaks the monotonicity of f (since f^β(x) ≤ y gives f^{β+1}(x) ≤ f(y) = y) unless y ≥ x*. So x* generated this way is always the smallest fixed point greater than x.

Say we have a set of fixed points S. Then let x be the least upper bound of S in L, and generate a fixed point x* from x. So x* will be greater than each element of S since x* ≥ x and f is monotone, and x* is the smallest such fixed point as shown in the above paragraph. So the poset of fixed points is semi-complete with upper bounds.

Now take our fixed points S again. Now let y be the greatest lower bound of S in L, and generate a fixed point y* from y (iterating downward, since f(y) ≤ y). Since y ≤ s for each s in S and f is monotonic, f^β(y) ≤ f^β(s) = s, and so y* is a lower bound of S. It has to be the greatest such bound: any fixed point b that is a lower bound of S satisfies b ≤ y, because y itself is already the greatest such bound in our poset, and since the iteration is monotonic, b = f^α(b) ≤ f^α(y) = y*.

Thus the lattice of fixed points has all least upper bounds and all greatest lower bounds, and is thus complete!
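This lattice structure can be checked exhaustively on a small concrete example. A sketch on the power-set lattice of {0, 1, 2} with a monotone map of my own choosing (not from the comment):

```python
from itertools import chain, combinations

U = {0, 1, 2}
L = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))]

# Monotone map on (P(U), subset-order): keep 0 and 1, add 2 whenever 0 is in.
def F(S):
    return (S & {0, 1}) | ({2} if 0 in S else set())

assert all(F(S) <= F(T) for S in L for T in L if S <= T)  # monotonicity

fixed = [S for S in L if F(S) == S]

# Knaster-Tarski: every subset of the fixed points has a least upper bound
# *within* the fixed points (greatest lower bounds would be checked dually).
for r in range(len(fixed) + 1):
    for members in combinations(fixed, r):
        uppers = [S for S in fixed if all(m <= S for m in members)]
        least = min(uppers, key=len)
        assert all(least <= S for S in uppers)
print("fixed points:", [sorted(S) for S in fixed])
```

Here the fixed points are ∅, {1}, {0,2}, {0,1,2}: a four-element diamond, which is itself a complete lattice even though it is not a sublattice interval of P(U).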

Ex1

Let x ∈ X, let c = d(x, f(x)), and let x_n = f^n(x). Then d(x_n, x_{n+1}) ≤ q^n⋅c. For each ε > 0, we find an N such that q^N⋅c/(1−q) < ε; then for m > n ≥ N, d(x_n, x_m) ≤ q^n⋅c + … + q^{m−1}⋅c ≤ q^n⋅c/(1−q) < ε. This proves that (x_n) is a Cauchy sequence, which (because X is complete) means it converges to some point x*.

Furthermore, given a position n, we have

d(x_n, x*) ≤ Σ_{k≥n} d(x_k, x_{k+1}) ≤ q^n⋅c/(1−q), which is the required exponential bound.

Ex2

Given any sequence (f^n(x)) in X, it converges to some point x*, and it's easy to see that x* is a fixed point of f. Let y be another fixed point of f. Then, d(x*, y) = d(f(x*), f(y)) ≤ q⋅d(x*, y), hence d(x*, y) = 0. (Otherwise, this contradicts the fact that f is a contraction.)

Ex3

Choose X as ℕ with the metric d(n, m) = 1 + 1/(n+m) for n ≠ m. Then X is complete because, given any Cauchy sequence (x_k), it's easy to prove that there is an N such that all but finitely many x_k are equal to x_N. However, the map given by n ↦ n + 1 has no fixed points since it moves each point by at least 1. (And it's straightforward to verify that it is a weak contraction: for n ≠ m, d(n+1, m+1) = 1 + 1/(n+m+2) < 1 + 1/(n+m) = d(n, m).)

#1

- d(x_n, x_{n+1}) ≤ q^n⋅d(x_0, x_1) — can show by induction.

Therefore, d(x_n, x_m) ≤ (q^n/(1−q))⋅d(x_0, x_1) for m > n, so (x_n) is a Cauchy sequence, and since (X, d) is complete, it must have a limit x* in X. Suppose m → ∞. Then d(x_n, x*) ≤ (q^n/(1−q))⋅d(x_0, x_1), therefore the convergence is exponentially quick.

#2

Suppose x_n → y. Let's show that y is a fixed point. Indeed, for any n, x_{n+1} = f(x_n), and if we take the limit in both sides (f is continuous, being a contraction), we get y = f(y).

Let's show uniqueness: suppose x and y are fixed points, then d(x, y) = d(f(x), f(y)) ≤ q⋅d(x, y), therefore d(x,y) = 0.

#3

X = [1, +∞), f(x) = x + 1/x.

#4

Suppose f(x) = h(x) + ε‖x‖², where h is some convex function and ε > 0. Take x, y ∈ ℝⁿ. Since h is convex on the segment [x, y], its directional derivative along y − x is nondecreasing. Its directional derivative is a projection of the gradient of h on the [x, y] line. Therefore, we have ⟨∇h(y) − ∇h(x), y − x⟩ ≥ 0, or ⟨∇f(y) − ∇f(x), y − x⟩ ≥ 2ε‖y − x‖². Hence, writing g(x) = x − η∇f(x) and using the bound ‖∇²f‖ ≤ M (so that ‖∇f(y) − ∇f(x)‖ ≤ M‖y − x‖),

‖g(y) − g(x)‖² = ‖y − x‖² − 2η⟨∇f(y) − ∇f(x), y − x⟩ + η²‖∇f(y) − ∇f(x)‖² ≤ (1 − 4ηε + η²M²)⋅‖y − x‖², and the factor is a constant strictly below 1 whenever 0 < η < 4ε/M².

Therefore, g is a contraction mapping, and from problem 1 it follows that the gradient descent converges exponentially quickly.
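Numerically, this is easy to watch on a quadratic. A sketch (the function, its curvature bounds m = 1 and M = 10, and the step size are my own example choices, not from the comment):

```python
import math

# f(x, y) = 0.5 * (x^2 + 10 * y^2): strongly convex, curvature between 1 and 10.
def grad(v):
    x, y = v
    return (x, 10.0 * y)

eta = 0.1  # step size below 2 / M = 0.2

def g(v):  # one gradient-descent step: v - eta * grad f(v)
    gx, gy = grad(v)
    return (v[0] - eta * gx, v[1] - eta * gy)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

q = max(abs(1 - eta * 1.0), abs(1 - eta * 10.0))  # contraction factor, 0.9

v, w = (5.0, -3.0), (-2.0, 4.0)
for _ in range(100):
    # any two orbits approach each other at rate q per step
    assert dist(g(v), g(w)) <= q * dist(v, w) + 1e-12
    v, w = g(v), g(w)
print("both orbits near the minimizer (0, 0):", v, w)
```

The assertion is the contraction property itself; by the Banach theorem both orbits then converge exponentially to the unique fixed point, the minimizer.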

#5

Suppose A is an NxN positive matrix whose columns each sum to 1, and e is its minimal entry. (Then Ne ≤ 1.) Then we can write A = eJ + (1 − Ne)Q, where J is a matrix whose entries are all 1, and Q is a matrix whose entries are all nonnegative and the sum of each column is 1 (because the sum of each column is 1 in A and Ne in eJ). Suppose x and y are probability distributions, i.e. N-dimensional vectors with nonnegative entries whose sum is 1. Then

Ax − Ay = eJ(x − y) + (1 − Ne)Q(x − y).

Denote z = x − y, u = max(z, 0), v = max(−z, 0) (pointwise max/min), so z = u − v. Then Jz = 0, since the entries of z sum to 1 − 1 = 0, and ‖Qz‖₁ ≤ ‖Qu‖₁ + ‖Qv‖₁ = ‖u‖₁ + ‖v‖₁ = ‖z‖₁ (nonnegative vectors, unit column sums),

so ‖Ax − Ay‖₁ = (1 − Ne)‖Qz‖₁ ≤ (1 − Ne)‖x − y‖₁. The space of all probability distributions with the metric induced by the ℓ₁-norm is a compact subset of ℝᴺ, so it is a complete metric space; therefore, the sequence Aⁿ(s) converges to a unique fixed point.
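A numeric sketch of the same contraction; the particular 3×3 column-stochastic matrix and the starting distributions are my own example:

```python
# A positive column-stochastic 3x3 matrix (every column sums to 1).
A = [[0.5, 0.2, 0.3],
     [0.3, 0.5, 0.3],
     [0.2, 0.3, 0.4]]
N = 3
e = min(min(row) for row in A)   # minimal entry, here 0.2
q = 1 - N * e                    # claimed l1 contraction factor, here 0.4

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(N)) for i in range(N)]

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

x, y = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]   # two starting distributions
for _ in range(60):
    # the l1 distance shrinks by at least the factor q = 1 - Ne each step
    assert l1(apply(A, x), apply(A, y)) <= q * l1(x, y) + 1e-12
    x, y = apply(A, x), apply(A, y)

# both orbits have reached the unique stationary distribution
assert l1(x, apply(A, x)) < 1e-12
print("stationary distribution ~", x)
```

Starting from any distribution, the iterates collapse onto the same stationary vector, exactly as the Banach argument predicts.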

#6

Let us assume x ≤ f(x) (the proof for f(x) ≤ x is the same). Then, from monotonicity of f, x ≤ f(x) ≤ f²(x) ≤ … is an ascending chain. This sequence cannot have more than |P| distinct elements, so an element of this sequence is going to repeat: f^n(x) = f^m(x) for some n < m. Then all the inequalities in f^n(x) ≤ f^{n+1}(x) ≤ … ≤ f^m(x) must be equalities, so f^n(x) = f^{n+1}(x), i.e. f^n(x) is a fixed point.
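For a concrete miniature of this stabilization argument, here is a sketch on the totally ordered poset {0, …, 7} with a monotone map of my own choosing:

```python
# Finite poset P = {0, ..., 7} (a total order is a poset), with the
# monotone map f(x) = min(x + 2, 7), which satisfies f(x) >= x.
P = range(8)
f = lambda x: min(x + 2, 7)

assert all(f(x) <= f(y) for x in P for y in P if x <= y)  # monotone
assert all(f(x) >= x for x in P)                          # ascending case

for x0 in P:
    x = x0
    for _ in range(len(P)):   # iterate |P| times
        x = f(x)
    assert f(x) == x          # f^{|P|}(x0) is a fixed point
print("every ascending chain stabilizes at the fixed point 7")
```

Each orbit is an ascending chain in a set of 8 elements, so it must repeat, and the first repetition is already a fixed point.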

(The second half made me realize how much more comfortable I am with abstract exercises than with regular Analysis à la Ex4.)

Ex6

If x ≤ f(x), then all f^n(x) are comparable to each other: we have

x ≤ f(x), f(x) ≤ f²(x), f²(x) ≤ f³(x), and so on. Furthermore, if n is such that f^n(x) ≠ f^{n+1}(x), then f^m(x) ≠ f^{m+1}(x) for all m ≤ n as well (verify by looking at the contrapositive). Consequently (set n = |P|), if f^{|P|}(x) were not a fixed point of f, then x < f(x) < … < f^{|P|+1}(x), and hence P would have |P| + 1 elements.

If f(x) ≤ x, we get x ≥ f(x) ≥ f²(x), and so on, leading to the same argument.

Ex7

Wlog, assume x ≤ f(x). Set f⁰(x) = x. Given any non-limit ordinal α, we find a predecessor β and set f^α(x) = f(f^β(x)). Given any limit ordinal α, we set f^α(x) = sup{f^β(x) : β < α} (the least upper bound exists because L is a complete lattice).

Suppose this doesn't define f^α(x) for all ordinals α. Then, there is some smallest ordinal α such that f^α(x) is not defined. This immediately yields a contradiction (regardless of whether α is a limit ordinal or not).

We want this construction to have the properties that f^α(x) ≥ x and that β ≤ α implies f^β(x) ≤ f^α(x). Thus, let β ≤ α and let α be an ordinal. If α has a predecessor, the check for both properties is easy. If not, then f^α(x) = sup{f^γ(x) : γ < α} and f^β(x) = sup{f^γ(x) : γ < β} (when β is also a limit ordinal; otherwise f^β(x) is itself an element of the second set). Then, for the first property, note that the upper-bound of a one-element set is just the element itself. For the second, note that each element in the first set is smaller than some element in the second set, so f^α(x) is an upper-bound for the first set, which implies that f^β(x) ≤ f^α(x) since f^β(x) is the lowest upper-bound.

Now, given an ordinal α, our construction defines a function β ↦ f^β(x) from {β : β ≤ α} to L. If f^α(x) is not a fixed point, then the chain doesn't become stationary at any earlier point either (to verify, take a smallest β such that f^β(x) = f^{β+1}(x); the chain is then stationary from β onward, so f^α(x) = f^β(x) would be a fixed point, a contradiction), and hence the function is injective, proving that |α| ≤ |L|. (This is the generalized version of the argument from Ex6.)

Ex8

Let f : L → L be monotonic and let F be the set of fixed points of f. Then F inherits the partial order from L; what needs doing is to verify the least upper-bound property (greatest lower bounds are dual). So let S ⊆ F. Then, S has a least upper-bound u in L.

Let α be some ordinal with |α| > |L|. From the previous exercise, we know that f^α(u) is a fixed point (note u ≤ f(u): for each s ∈ S, s = f(s) ≤ f(u), so f(u) is an upper bound of S, and u is the least one). Choose the smallest β such that f^β(u) is a fixed point. Then, f^β(u) ≥ u and u ≥ s for every s ∈ S, hence f^β(u) is an upper-bound of S.

It remains to show that it is the least upper-bound. Thus, let b be another upper-bound of S in F. Then, u ≤ b in L, hence f^β(u) ≤ f^β(b) = b (apply Ex7).

Ex9

A least upper-bound is obtained via ∪ on all sets, and the greatest lower-bound via ∩. (Easy checks.) Given any function f, we trivially have S ⊆ T ⟹ f(S) ⊆ f(T); injectivity is not needed.

We define

- A₀ = A and A_{n+1} = A − g(B − f(A_n))
- A′ = ∩_n A_n (i.e., greatest lower bound of the A_n's)

We need to verify that A − A′ = g(B − f(A′)); then A′ and B′ = f(A′) are the desired sets.

- "⊆": Let x ∈ A − A′. Then, there exists some smallest n such that x ∉ A_{n+1}. (The case n = 0 is possible and included.) We have x ∉ A_{n+1} = A − g(B − f(A_n)), hence x ∈ g(B − f(A_n)). Then, f(A_n) ⊇ f(A′) and thus B − f(A_n) ⊆ B − f(A′), hence x ∈ g(B − f(A′)).
- "⊇": Let x ∈ g(B − f(A′)), say x = g(b) with b ∉ f(A′). Then, there exists some smallest n such that b ∉ f(A_n) (here we use f(A′) = f(∩_n A_n) = ∩_n f(A_n), which holds since f is injective). In this case, we must have x ∈ g(B − f(A_n)), so we know that x ∉ A_{n+1}. It follows that x was lost at this step, i.e., x ∈ A − A′.

Ex10

Let A′ be the set constructed in Ex9, and B′ = f(A′). Then, we can define a bijection h : A → B via

h(x) = f(x) for x ∈ A′, and h(x) = g⁻¹(x) for x ∈ A − A′ (well-defined since A − A′ = g(B − B′) and g is injective; the two parts are bijections onto B′ and B − B′, respectively).
Ex5 (this is super ugly but I don't think it's worth polishing and it does work. All important ideas are in the first third of the proof, the rest just inelegantly resolves the details.)

We define our metric space as (D, d) where D ⊆ ℝᴺ is the set of probability distributions, and d(x, y) = ‖x − y‖₁. Let x, y ∈ D and let z = x − y, with pointwise positive part z₊ and negative part z₋ (so z = z₊ − z₋); then d(Ax, Ay) can be computed as

‖Az‖₁ = ‖Az₊ − Az₋‖₁ ≤ ‖Az₊‖₁ + ‖Az₋‖₁ = ‖z₊‖₁ + ‖z₋‖₁ = ‖z‖₁,

where the last step holds because multiplying a vector with the state-transition matrix leaves the sum of entries unchanged. (Reasonably easy to verify using that each column of A sums up to 1.)

If x ≠ y, then z has at least one negative and one positive entry, and the inequality above can be made strict. In that case, since x and y both sum to 1, the entries of z sum to 0, so ‖z₊‖₁ = ‖z₋‖₁ = ‖z‖₁/2; in particular, z₊ ≠ 0 ≠ z₋. Note that, when two nonnegative numbers a, b are subtracted, then |a − b| = a + b − 2⋅min(a, b), and thus coordinate i of Az loses 2⋅min((Az₊)ᵢ, (Az₋)ᵢ) relative to (Az₊)ᵢ + (Az₋)ᵢ. Therefore, the amount that gets canceled out is at least

2⋅Σᵢ min((Az₊)ᵢ, (Az₋)ᵢ).

Let e be the smallest entry in A; then every entry of Az₊ is at least e⋅‖z₊‖₁ = e⋅‖z‖₁/2 (and likewise for Az₋), so we can lower-bound the above as 2⋅N⋅e⋅‖z‖₁/2 = Ne⋅‖z‖₁.

Thus, the relative decrease is at least Ne: ‖Ax − Ay‖₁ ≤ (1 − Ne)⋅‖x − y‖₁, so A is a contraction on (D, d) with q = 1 − Ne < 1. Since D is a closed subset of ℝᴺ and hence complete, the Banach fixed point theorem gives the unique stationary distribution.

This is the third of three sets of fixed point exercises. The first post in this sequence is here, giving context.

Note: Questions 1-5 form a coherent sequence and questions 6-10 form a separate coherent sequence. You can jump between the sequences.

Let (X,d) be a complete metric space. A function f:X→X is called a contraction if there exists a q<1 such that for all x,y∈X, d(f(x),f(y))≤q⋅d(x,y). Show that if f is a contraction, then for any x0, the sequence {xn=fn(x0)} converges. Show further that it converges exponentially quickly (i.e. the distance between the nth term and the limit point is bounded above by c⋅an for some a<1).

(Banach contraction mapping theorem) Show that if (X,d) is a complete metric space and f is a contraction, then f has a unique fixed point.

If we only require that d(f(x),f(y))<d(x,y) for all x≠y, then we say f is a weak contraction. Find a complete metric space (X,d) and a weak contraction f:X→X with no fixed points.

A function f:Rn→R is convex if f(tx+(1−t)y)≤tf(x)+(1−t)f(y), for all t∈[0,1] and x,y∈Rn. A function f is strongly convex if you can subtract a positive paraboloid from it and it is still convex. (i.e. f is strongly convex if x↦f(x)−ε||x||2 is convex for some ε>0.) Let f be a strongly convex smooth function from Rn to R, and suppose that the magnitude of the second derivative ∥∇2f∥ is bounded. Show that there exists an ε>0 such that the function g:Rn→Rn given by x↦x−ε(∇f)(x) is a contraction. Conclude that gradient descent with a sufficiently small constant step size converges exponentially quickly on a strongly convex smooth function.

A finite stationary Markov chain is a finite set S of states, along with a probabilistic rule A:S→ΔS for transitioning between the states, where ΔS represents the space of probability distributions on S. Note that the transition rule has no memory, and depends only on the previous state. If for any pair of states s,t∈S, the probability of passing from s to t in one step is positive, then the Markov chain (S,A) is ergodic. Given an ergodic finite stationary Markov chain, use the Banach contraction mapping theorem to show that there is a unique distribution over states which is fixed under application of the transition rule. Show that, starting from any state s, the limit distribution limn→∞An(s) exists and is equal to the stationary distribution.

A function f from a partially ordered set to another partially ordered set is called monotonic if x≤y implies that f(x)≤f(y). Given a partially ordered set (P,≤) with finitely many elements, and a monotonic function from P to itself, show that if f(x)≥x or f(x)≤x, then fn(x) is a fixed point of f for all n>|P|.

A complete lattice (L,≤) is a partially ordered set in which each subset of elements has a least upper bound and greatest lower bound. Under the same hypotheses as the previous exercise, extend the notion of fn(x) for natural numbers n to fα(x) for ordinals α, and show that fα(x) is a fixed point of f for all x∈L with f(x)≤x or f(x)≥x and all |α|>|L| (|A|≤|B| means there is an injection from A to B, and |A|>|B| means there is no such injection).

(Knaster-Tarski fixed point theorem) Show that the set of fixed points of a monotonic function on a complete lattice themselves form a complete lattice. (Note that since the empty set is always a subset, a complete lattice must be nonempty.)

Show that for any set A, (P(A),⊆) forms a complete lattice, and that any injective function from A to B defines a monotonic function from (P(A),⊆) to (P(B),⊆). Given injections f:A→B and g:B→A, construct a subset A′ of A and a subset B′ of B such that B′=f(A′) and A−A′=g(B−B′).

(Cantor–Schröder–Bernstein theorem) Given sets A and B, show that if |A|≤|B| and |A|≥|B|, then |A|=|B|. (|A|≤|B| means there is an injection from A to B, and |A|=|B| means there is a bijection)

Please use the spoilers feature - the symbol '>' followed by '!' followed by space - in your comments to hide all solutions, partial solutions, and other discussions of the math. The comments will be moderated strictly to hide spoilers!

I recommend putting all the object level points in spoilers and including metadata outside of the spoilers, like so: "I think I've solved problem #5, here's my solution <spoilers>" or "I'd like help with problem #3, here's what I understand <spoilers>" so that people can choose what to read.

Tomorrow's AI Alignment Forum Sequences post will be "Approval-directed agents: overview" by Paul Christiano in the sequence Iterated Amplification.

The next post in this sequence will be released on Saturday 24th November, and will be 'Fixed Point Discussion'.