What was initially counterintuitive is that even though $\lim_{n \to \infty} \frac{1}{n} = 0$, the series doesn't converge.
This becomes much less counterintuitive if you instead ask: How would you construct a sequence with divergent series?
Obviously, take a divergent series, e.g. $\sum_{n=1}^\infty 1$, and then split the $n$th term into $n$ copies of $\frac{1}{n}$.
Technically the Peano axioms don't define addition and multiplication; they only provide a framework for analyzing them.
The sentence "Using the axiom of foundation [...]" seems to me to be poorly worded; the subsequent clause does not follow from the axiom of foundation, but rather contradicts it (enabling one to complete a proof by contradiction).
In your upper bounded monotonic sequences proof you are using $L$ to mean two different things. Also, the existence of $K$ in the second sentence doesn't follow, because $x_k$ could be less than $L - \epsilon$ for all $k$. It looks like there is a step missing in the argument somewhere. Also, I don't see where you have used the completeness axiom, and you certainly can't prove the result without using completeness.
Typo: should be
The sentence about Lipschitz continuity should probably be worded so as to make clear that this is the definition of Lipschitz continuity, and not just a property of it.
There is a much easier example of a function that is uniformly continuous but not Lipschitz continuous: $\sqrt{x}$.
The formula $(f^{-1})'(x) = \frac{1}{f'(x)}$ is wrong, it should be $(f^{-1})'(x) = \frac{1}{f'(f^{-1}(x))}$.
The assumption that $f$ and $g$ are differentiable on $[a,b]$ is too strong for L'Hôpital's rule, you don't want to assume that they are differentiable at the limit point $a$. Many applications involve the derivative at $a$ being either zero or infinity. Also, I am not sure what you are trying to say about "the limit does not exist".
Technically the Peano axioms don't define addition and multiplication; they only provide a framework for analyzing them.
I know. What I wrote does not particularly imply that they define addition and multiplication: "allowing us to define addition and multiplication".
In your upper bounded monotonic sequences proof you are using $L$ to mean two different things. Also, the existence of $K$ in the second sentence doesn't follow, because $x_k$ could be less than $L - \epsilon$ for all $k$.
Good catch - I stayed up pretty late writing the review, and I suppose I should watch out for that next time! However, you can still repair the proof by setting the upper bound to that $L - \epsilon$ and repeating up to $N$ times, since it can only escape upwards (or refine its upper bound downwards) a finite number of times.
It looks like there is a step missing in the argument somewhere. Also, I don't see where you have used the completeness axiom, and you certainly can't prove the result without using completeness.
And why is that? The proof I provided (with the typo fixed and the step filled in) seems sufficient?
There is a much easier example of a function that is uniformly continuous but not Lipschitz continuous.
I found it less interesting.
The assumption that $f$ and $g$ are differentiable on $[a,b]$ is too strong for L'Hôpital's rule
I'd like to say that I literally took the definition from the book here, but that's good to know!
I am not sure what you are trying to say about "the limit does not exist".
It's a pop culture reference, related to the GIF I had just used. A little bit of trivia.
The Peano axioms aren't what allows you to define addition and multiplication, though. For example, the induction axiom bears no relation to the definitions of addition and multiplication. (Some of the Peano axioms follow directly from the definitions of addition and multiplication, but really it is the axioms that follow from the definitions and not the other way around.)
You now appear to be proving that for all $\epsilon$, there exists $L$ such that the sequence is eventually $\epsilon$-close to $L$. Whereas to prove that the sequence converges, you would need to show that there exists an $L$ such that for all $\epsilon$, the sequence is eventually $\epsilon$-close to $L$.
Regarding "the limit does not exist", perhaps I am still not getting the joke, but what I meant was that I don't think the rule "if $f$ and $g$ are not differentiable on $(a,b]$, then perhaps the limit does not exist" is the rule that you would use to determine that the limit does not exist in the example from the movie, since $f$ and $g$ are in fact differentiable on $(a,b]$ in that example.
The Peano axioms aren't what allows you to define addition and multiplication, though. For example, the induction axiom bears no relation to the definitions of addition and multiplication. (Some of the Peano axioms follow directly from the definitions of addition and multiplication, but really it is the axioms that follow from the definitions and not the other way around.)
I don't really view what I wrote as saying "you can't formalize addition and multiplication without (all of) these specific axioms". All I was saying is that these axioms provide a framework by which addition and multiplication can be coherently defined, and with which we can prove properties. Sure, the induction axiom is not related, but I don't see how you can make the same case for the axioms defining the successor function for the natural numbers. That is, in this book, we actually used these axioms to define addition and multiplication. Maybe I'm still missing your point!
You now appear to be proving that for all $\epsilon$, there exists $L$ such that the sequence is eventually $\epsilon$-close to $L$. Whereas to prove that the sequence converges, you would need to show that there exists an $L$ such that for all $\epsilon$, the sequence is eventually $\epsilon$-close to $L$.
But I fixed $L$ before doing anything with $\epsilon$?
Regarding "the limit does not exist"...
I see how that's confusing! I was saying that L'Hôpital's rule is what you would use - not the sentence you had in mind. I'll clear that up.
Regarding Peano arithmetic, my point is that S(a)+b := S(a+b) (for example) is not the same thing as S(a)+b = S(a+b). The former is a definition, whereas the latter is a statement (which can be used as an axiom). In order to make sense of the definition you need to understand the concept of recursion, whereas you can make sense of the statement just by thinking of addition as "some binary operation that we don't know what it is, but this is a fact we know about it". (Let me reiterate that this is a rather technical distinction; I don't think it makes too much difference either way, but I thought it was worth pointing out.)
So, your proof is actually kind of confusingly worded and I am not sure exactly what you mean by "repeating the above argument $N$ times". It looks like you want to keep the same $\epsilon$ but use a different $L$ each time. But you can't do that if you fix $L$ before introducing $\epsilon$.
Another way to look at it is to consider the same proof but with all variables assumed to be rational. In that case the conclusion of the theorem would be false (because there are bounded monotonic sequences of rationals that don't converge to any rational) and so you should try to figure out which step of your proof would be wrong in the new context. Ordinarily, it should be where you use the completeness axiom, because the completeness axiom doesn't hold for the rationals.
Regarding Peano arithmetic, my point is that S(a)+b := S(a+b) (for example) is not the same thing as S(a)+b = S(a+b). The former is a definition, whereas the latter is a statement (which can be used as an axiom). In order to make sense of the definition you need to understand the concept of recursion, whereas you can make sense of the statement just by thinking of addition as "some binary operation that we don't know what it is, but this is a fact we know about it". (Let me reiterate that this is a rather technical distinction; I don't think it makes too much difference either way, but I thought it was worth pointing out.)
Especially in math, it's great to have concepts or diction which I think "clearly have the same shade of connotation" (but actually don't) brought to my attention. I understand the difference you're pointing to here, and I'll be more cognizant of it in the future. Precision of thought and speech is important, so thanks!
So, your proof is actually kind of confusingly worded...
That's a good point, and one I realized soon after leaving my computer - $L$ doesn't stay fixed in my argument. I think that one should still be able to show that all these different $L$'s have to be the same, right? That is (this is just an outline), take any distinct $\epsilon_1, \epsilon_2$; then suppose for distinct $L_1, L_2$, the sequence is eventually $\epsilon_1$-close to $L_1$ and $\epsilon_2$-close to $L_2$. Then we have that $|L_1 - L_2| \leq \epsilon_1 + \epsilon_2$. Since $\epsilon_1, \epsilon_2$ are arbitrary, we can expand this argument to an arbitrary number of arbitrarily small $\epsilon$'s to show that all the $L$'s must be equal.
My intuition is just so strong on this one, and I want to see whether I can extend my proof to cover it, or whether I actually need to just change approaches.
So, the way that I understand your argument, $L$ ranges over the values $L, L + \epsilon, L + 2\epsilon, \ldots, L + N\epsilon$. (Of course there is a circular dependency here but as far as I can tell this was what you were thinking.) Of course, all of these possible values of $L$ are clearly distinct. I'm not really sure what your point is about what happens if you assume that a sequence is eventually close to two different numbers. I think part of the issue is that your proof is a proof by contradiction, but in a proof by contradiction you are only allowed to introduce the contradiction hypothesis once, not an arbitrary number of times.
The fundamental question that any proof of this theorem should answer is: given a bounded monotonic sequence, how do we find the $L$ that it converges to? Your proof doesn't appear to address this question at all, which makes me think you need a completely different idea.
Note: to address your "strong intuition" perhaps I should say that of course if you have two real numbers $L_1$ and $L_2$ such that for all $\epsilon > 0$, there is a number that is $\epsilon$-close to $L_1$ and $\epsilon$-close to $L_2$, then $L_1 = L_2$. But I don't think this fact has the significance you want it to have in the broader argument.
Note: real analysis is not on the MIRI reading list (although I think it should be).
Foreword
As a young boy, mathematics captivated me.
In elementary school, I'd happily while away entire weekends working through the next grade's math book. I was impatient.
In middle school, I'd lazily estimate angles of incidence that would result if I shot lasers from my eyes, tracing their trajectories within the classroom and out down the hallway. I was restless.
In high school, I'd daydream about what would happen to integrals as I twisted functions in my mind. I was curious.
And now, I get to see how it's all put together. Imagine being fascinated by some thing, continually glimpsing beautiful new facets and sampling exotic flavors, yet being resigned to not truly pursuing this passion. After all, I chose to earn a computer science degree.
Wait.
Analysis I
As in Linear Algebra Done Right, I completed every single exercise in the book - this time, without looking up any solutions (although I did occasionally ask questions on Discord). Instead, I came back to problems if I couldn't solve them after half an hour of effort.
A sampling of my proofs can be found here.
1: Introduction
2: The Natural Numbers
In which the Peano axioms are introduced, allowing us to define addition and multiplication on the natural numbers {0,1,2,…}.
3: Set Theory
In which functions and Cartesian products are defined, among other concepts.
Recursive Nesting
How can you apply the axiom of foundation if sets are nested in each other? That is, how can the axiom of foundation "reach into" sets like A={B,…} and B={A,…}?
Show that if A and B are two sets, then either A∉B or B∉A (or both).
Proof. Suppose A∈B and B∈A. By the axiom of pairing, we know that there exists a set S={A,B}. We see that there does not exist an S′∈S such that S′∩S=∅. That is, if we choose A, one of its elements is B, which is also an element of S - this violates the axiom of foundation. The same reasoning applies if we choose B. Then ¬(A∈B∧B∈A), so either A or B (or both) is not an element of the other.
4: Integers and Rationals
In which the titular sets are constructed, allowing the exploration of absolute value, exponentiation, and the incompleteness (colloquially, the "gaps") of the rationals.
Readers newer to mathematics may find it interesting that even though there are (countably) infinitely many rational numbers between any two distinct rationals, the rationals still contain gaps.
5: The Real Numbers
In which Cauchy sequences allow us to formally construct the reals.1
6: Limits of Sequences
In which we meet convergence and its lovely limit laws, extend the reals to cover infinities, experience the delightfully counterintuitive limsup and liminf, and complete our definition of real exponentiation.
Upper-Bounded Monotonic Sequence Convergence
I tried to come up with a clever title here - I really did. Apparently even my punmaking abilities are bounded.
Suppose you have a monotonically increasing sequence $(x_n)_{n=0}^\infty$ with an upper bound $M \in \mathbb{R}$. Then the sequence converges; this also applies to lower-bounded monotonically decreasing sequences.
Weird, right? I mean, even though the sequence monotonically increases and there's an upper bound, there are still uncountably infinitely many "places" the sequence can "choose" to go. So what gives?
Proof. Let $L \in [x_0, M]$ and $\epsilon > 0$. Suppose that the sequence is not eventually $\epsilon$-close to $L$. Let $K \in \mathbb{N}$ be such that for all $k \geq K$, either $L + \epsilon < x_k \leq M$ or $x_k < L - \epsilon \leq M$; we know that $K$ exists because the sequence is monotone increasing. By the Archimedean principle, there exists some $N \in \mathbb{N}$ such that $N\epsilon > M - x_0$.
Since the sequence is monotone increasing, by repeating the above argument $N$ times in the first case, we have that $M < L + N\epsilon < x_k \leq M$, which is contradictory. By repeating the argument $N$ times in the second case, we have $x_k < L - N\epsilon < x_0$, which contradicts the fact that the sequence is monotone increasing. Then for any $\epsilon > 0$, the sequence must be eventually $\epsilon$-close to some $L \in [x_0, M]$. Intuitively, for any given $\epsilon > 0$, the sequence can only "escape" a limit a finite number of times before it runs out of room and has to be $\epsilon$-close.
Next, we show that the $L_\epsilon$'s form a Cauchy sequence. Let $\epsilon_3 > 0$, and set $\epsilon_1, \epsilon_2$ such that $0 < \epsilon_1, \epsilon_2 \leq \frac{\epsilon_3}{2}$. $(x_n)$ is eventually $\epsilon_1$-close to $L_{\epsilon_1}$, so there exists a $K_1 \in \mathbb{N}$ such that for all $k \geq K_1$ we have $|x_k - L_{\epsilon_1}| < \epsilon_1$. A similar argument holds for $\epsilon_2$. Set $K = \max(K_1, K_2) + 1$; now $|L_{\epsilon_1} - L_{\epsilon_2}| \leq |x_K - L_{\epsilon_1}| + |x_K - L_{\epsilon_2}| < \epsilon_1 + \epsilon_2 \leq \epsilon_3$. But $\epsilon_3$ is arbitrary, so the sequence $(L_{1/n})_{n=1}^\infty$ is Cauchy.
As the real numbers are complete, this sequence converges to some $L_\infty \in \mathbb{R}$. Since the main sequence is eventually $\frac{1}{n}$-close to $L_{1/n}$, and $L_{1/n}$ converges to $L_\infty$, by the triangle inequality we have that the main sequence converges to $L_\infty$.
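For comparison, here is a sketch of the standard textbook argument, which uses completeness to produce the limit directly. Let

$$L = \sup\,\{x_n : n \in \mathbb{N}\},$$

which exists by the least upper bound property because $M$ bounds the set. Given any $\epsilon > 0$, $L - \epsilon$ is not an upper bound, so $x_N > L - \epsilon$ for some $N$; by monotonicity, $L - \epsilon < x_n \leq L$ for all $n \geq N$, so the sequence is eventually $\epsilon$-close to $L$.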
7: Series
In which we finally reach concepts from pre-calculus.
Condensation
The Cauchy condensation test says that for a sequence $(a_n)_{n=1}^\infty$ where $a_n \geq 0$ and $a_{n+1} \leq a_n$ for all $n \geq 1$, the series $\sum_{n=1}^\infty a_n$ converges iff the series $\sum_{k=0}^\infty 2^k a_{2^k}$ converges. Using this result, we have that the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$ diverges; the partial sums $\sum_{n=1}^{x} \frac{1}{n}$ are given below.
What was initially counterintuitive is that even though $\lim_{n \to \infty} \frac{1}{n} = 0$, the series doesn't converge. The best intuition I've come up with is that the harmonic series doesn't "deplete" its domain quickly enough, so you can get arbitrarily large partial sums.
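To make the condensation concrete, with $a_n = \frac{1}{n}$ we have $a_{2^k} = \frac{1}{2^k}$, so

$$\sum_{k=0}^{\infty} 2^k a_{2^k} = \sum_{k=0}^{\infty} 2^k \cdot \frac{1}{2^k} = \sum_{k=0}^{\infty} 1,$$

which diverges, and hence so does the harmonic series. The same computation is the intuition behind the test: each block $\frac{1}{2^k + 1} + \cdots + \frac{1}{2^{k+1}}$ consists of $2^k$ terms each at least $\frac{1}{2^{k+1}}$, so every block contributes at least $\frac{1}{2}$.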
If you want proofs, here are twenty!
8: Infinite Sets
In which uncountable sets2, the axiom of choice, and ordered sets brighten our lives.
9: Continuous Functions on R
In which continuity, the maximum principle, and the intermediate value theorem make their debut.
Lipschitz Continuity ⇎ Uniform Continuity
If a function $f: X \to \mathbb{R}$ (with $X \subseteq \mathbb{R}$) is Lipschitz continuous with Lipschitz constant $M$, then by definition we have that for every $x, y \in X$,

$$|f(x) - f(y)| \leq M|x - y|.$$
The definition of uniform continuity is that for every $\epsilon > 0$, there exists a $\delta > 0$ such that for all $x, y \in X$,

$$|x - y| < \delta \implies |f(x) - f(y)| < \epsilon.$$
Lipschitz continuity implies uniform continuity (do you see why?), but the converse is not true. I mean, what kind of twisted person would come up with this kind of function?
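A gentler example of the same phenomenon (the one a commenter points out) is $f(x) = \sqrt{x}$ on $[0, 1]$. For $x, y \geq 0$,

$$|\sqrt{x} - \sqrt{y}| \leq \sqrt{|x - y|},$$

so $\delta = \epsilon^2$ witnesses uniform continuity; yet the difference quotient $\frac{\sqrt{x} - \sqrt{0}}{x - 0} = \frac{1}{\sqrt{x}}$ is unbounded as $x \to 0^+$, so no Lipschitz constant $M$ can work.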
10: Differentiation of Functions
In which the basic rules of differential calculus are proven.
You know, I actually thought that I wouldn't have too much to explain in this post - the book went very smoothly up to this point. On the upside, we get to spend even more time together!
Differential Intuitions
Let me simply direct you to this excellent StackExchange answer.
pǝɹᴉʌɐʇᴉʌǝs
We can understand $(f^{-1})'(x) = \frac{1}{f'(f^{-1}(x))}$ by simply thinking about $\frac{\text{run}}{\text{rise}} = \frac{1}{\text{rise}/\text{run}}$, which makes sense for the derivative of the inverse!
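As a quick sanity check, take $f(x) = x^2$ on $(0, \infty)$, so that $f^{-1}(x) = \sqrt{x}$:

$$(f^{-1})'(x) = \frac{1}{f'(f^{-1}(x))} = \frac{1}{2\sqrt{x}},$$

which matches differentiating $\sqrt{x}$ directly - whereas the tempting-but-wrong $\frac{1}{f'(x)} = \frac{1}{2x}$ does not.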
L'Hôpital's Rule
Consider $f, g: [a, b] \to \mathbb{R}$ differentiable on $(a, b]$ (for real numbers $a < b$). Then if $f(a) = g(a) = 0$, $g'(x) \neq 0$ for $x \in (a, b]$, and $\lim_{x \to a;\, x \in (a, b]} \frac{f'(x)}{g'(x)} = L \in \mathbb{R}$, we have that $g(x) \neq 0$ for $x \in (a, b]$ and $\lim_{x \to a;\, x \in (a, b]} \frac{f(x)}{g(x)} = L$.
As a neat exercise, let's see how this rule breaks if we violate preconditions:
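For instance, suppose we drop the requirement that $f(a) = g(a) = 0$: take $f(x) = 1$ and $g(x) = x$ on $[0, 1]$, with $a = 0$. Then

$$\lim_{x \to 0;\, x \in (0, 1]} \frac{f'(x)}{g'(x)} = \lim_{x \to 0;\, x \in (0, 1]} \frac{0}{1} = 0, \quad \text{but} \quad \lim_{x \to 0;\, x \in (0, 1]} \frac{f(x)}{g(x)} = \lim_{x \to 0;\, x \in (0, 1]} \frac{1}{x} = +\infty,$$

so the conclusion fails outright.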
11: The Riemann Integral
In which partitions and piecewise constant functions help us define the Riemann integral, which later leads to the Riemann-Stieltjes integral and the fundamental theorems of calculus.
Having taken care of the exposition, we arrive at the Rivendell of real analysis, preparing ourselves for the arduous journey to Mt. Lebesgue.
Pointless Integration
There is zero area under a point (or even under infinitely many points, such as $\mathbb{N}$) due to how we define length, which in turn allows us to build from the integrals of piecewise constant functions up to the real deal.
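To sketch why, consider $f = \mathbf{1}_{\mathbb{N}}$ on $[0, b]$. Only the finitely many naturals $0, 1, \ldots, \lfloor b \rfloor$ lie in $[0, b]$, so enclosing each in an interval of width $\frac{\epsilon}{\lfloor b \rfloor + 1}$ yields a partition whose upper Riemann sum is at most $\epsilon$, while every lower Riemann sum is $0$. Hence

$$\int_{[0, b]} \mathbf{1}_{\mathbb{N}} = 0.$$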
Infinite Partitions?
The upper and lower Riemann integrals can be defined as the infimum and supremum of the upper and lower Riemann sums, respectively. It is important to note that even though, for many functions (such as $f(x) = x$), further refinement of the partition always gets you closer to the extremum, the result is not an "infinite" partition (which is not defined according to this construction).
Consider the curried function $g_f: \mathcal{P} \to \mathbb{R}$ (where $\mathcal{P}$ is the set of partitions of the domain), which takes a partition and computes its corresponding Riemann sum with respect to the predefined function $f$. Then clearly this function is monotonic with respect to refinement of the partition; the extremum is not necessarily achieved by any given partition in the refinement sequence, but is rather the tightest bound on what you can get with any partition.
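For example, take $f(x) = x$ on $[0, 1]$ and the uniform partition into $n$ intervals. The lower and upper Riemann sums are

$$\sum_{k=0}^{n-1} \frac{k}{n} \cdot \frac{1}{n} = \frac{n-1}{2n} \quad \text{and} \quad \sum_{k=1}^{n} \frac{k}{n} \cdot \frac{1}{n} = \frac{n+1}{2n},$$

which squeeze toward $\frac{1}{2}$ as the partition refines, yet no finite partition ever attains it: the integral is the supremum/infimum itself, not the value of $g_f$ at some "limiting" partition.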
Riemann-Stieltjes Confusion
The book doesn't lay it out cleanly, so I will: the Riemann-Stieltjes integral allows us to use custom length functions to weight different parts of the domain differently. I recommend working through a simple case like $\alpha(x) = x^2$ in your head: $\int_1^3 x \, d\alpha$ (how do the piecewise constant Riemann-Stieltjes integrals of majorizing and minorizing functions change as you iteratively refine the coarsest partition possible?).
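To check your answer: when $\alpha$ is continuously differentiable, the Riemann-Stieltjes integral reduces to an ordinary Riemann integral via $d\alpha = \alpha'(x)\,dx$, so here

$$\int_1^3 x \, d\alpha = \int_1^3 x \cdot 2x \, dx = \left[\frac{2}{3}x^3\right]_1^3 = \frac{52}{3}.$$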
This is particularly useful in defining expectation in statistics: for a random variable $X$ with cumulative distribution function $F$, $\mathbb{E}[X] = \int_{-\infty}^{\infty} x \, dF(x)$.
The Riemann integral is recovered as the special case where α(x)=x.
Final Verdict
Terence Tao is both an incredible mathematician and writer, and it shows. There simply weren't many things which confused me, and that says more about his writing than it does about me. The exercises are appropriate and hew closely to each chapter's content; often, the reader proves key results.
My only complaint is that results are frequently referred to as "from Proposition 3.6.4 and Theorem 3.6.12" many chapters later, forcing the reader to infer the referents or backtrack all of the way. In a sense, this may be helpful - you don't want to backtrack a billion pages, so you try to fill in the blanks. In another sense, it's annoying.
In my opinion, Analysis I belongs near the very beginning of the research guide. It's a wonderful introduction to proof-based mathematics, with a helpful appendix clearing up concepts I had previously only picked up via osmosis. Additionally, I met with a deep learning professor at my university, asking them "if I want to be able to understand and potentially make progress on some of the fundamental issues in machine learning, do I need to know real analysis?". Their answer: a definitive yes.
Forwards
Next up is Analysis II, which works from metric spaces all the way up to the Lebesgue integral. Additionally, I'm going to start working through Sutton and Barto's Reinforcement Learning: An Introduction to increase the rigor with which I think about RL and in preparation for my work this summer.
I also think I need to run through some applied calculus to refresh my reflexes (so I don't have to rederive every identity that I can't remember).
Tips
Marginal Attention
In the last pages of CFAR's participant handbook is an entry on marginal attention. Essentially, each bit of extra attention contributes more than the last. If you're totally dialed in on a task and get slightly distracted, that is far more disastrous than getting slightly more distracted while your attention is already somewhat unfocused.
I often simply left my phone at home and accompanied my Kindle to an empty classroom. This worked wonderfully; I suspect it more than doubled my hourly learning efficiency.
Proving Myself
Just over three months ago, I wrote:
And now, I think that my proofs are getting pretty good. When confronting a problem, I feel a certain mental lightness - a readiness to use my full repertoire of knowledge, to strike true down those paths likely to bring the proof to completion. Although my capacities remain faint shadows of what I hope to achieve within the coming year, my progress has been substantial.
Talk is cheap, and you probably don't feel like navigating to the selection of proofs I provided, so let me share with you my favorite proof from this book.
The following problem was admittedly confusing at first, but I had an overwhelmingly strong sense that the statement had to be true. I thought again and again about why; once that came to me, I wrote it all at once, and beamed.
Local Extrema are Stationary: let $a < b$ be real numbers, and let $f: (a, b) \to \mathbb{R}$ be a function. If $x_0 \in (a, b)$, $f$ is differentiable at $x_0$, and $f$ attains either a local maximum or local minimum at $x_0$, then $f'(x_0) = 0$.
Proof. Suppose $f(x_0)$ is a local maximum; thus, there is some $\delta > 0$ such that $f(x_0) \geq f(x)$ for all $x \in (x_0 - \delta, x_0 + \delta) \cap (a, b)$. Let $L = f'(x_0)$; we know that $L$ exists and is a real number since $f$ is differentiable at $x_0$. By the trichotomy of real numbers, $L$ is either negative, positive, or zero.
Suppose that $L$ is negative - then consider $\lim_{x \to x_0^-;\, x \in (a, x_0)} \frac{f(x) - f(x_0)}{x - x_0}$ (this is permissible as the left and right limits of a convergent limit are equal); we have a sequence $(x_n)_{n=1}^\infty$ with each $x_n < x_0$ which converges to $x_0$, so every term of $(x_n - x_0)_{n=1}^\infty$ is negative.
If $(f(x_n) - f(x_0))_{n=1}^\infty$ has no positive terms, then each term in $\left(\frac{f(x_n) - f(x_0)}{x_n - x_0}\right)_{n=1}^\infty$ must be nonnegative, so $L \geq 0$, contradicting our assumption that $L < 0$. Then by the properties of convergent limits, there must be infinitely many $x_n$ such that $f(x_n) - f(x_0)$ is positive. These satisfy $f(x_n) > f(x_0)$, and since $x_n \to x_0$, infinitely many of them lie within $\delta$ of $x_0$, contradicting the fact that $f(x_0)$ is a local maximum on $(a, b)$. Then $L$ cannot be negative.
A similar proof holds for L>0, so L=0.
To handle a local minimum $f(x_0)$, define $g(x) := -f(x)$ and apply the above result to the local maximum $g(x_0)$.
If you are interested in working with me or others on the task of learning MIRI-relevant math, if you have a burning desire to knock the alignment problem down a peg - I would be more than happy to work with you. Messaging me may also net you an invitation to the MIRIx Discord server.
1 Constructing the reals from first principles was profoundly enjoyable. Working through this book gave me a sense of certitude when dealing with math. Numbers are no longer simply familiar friends following familiar rules, but rather objects I know how to construct. It feels great.
2 Gather round, gather round - for it is on this blessed morn/day/evening that I recount my youthful dalliances with uncountable infinities!
In my first term of college, I read Gödel, Escher, Bach as part of a wonderful tutorial class. I came across sizes of infinities, and, like many, my mind absolutely refused. However, I wanted to understand what was really going on so intensely that I spent many hours working through the intuitions on my own (not knowing much of anything about formal mathematics). This period of discovery was one of my favorite experiences of my undergraduate career; I can't tell you how many nights I just sat under the stars, thinking. Eventually, I came to the conclusion that of course there are multiple sizes of infinities.
Now, maybe it would have been faster to just learn the math behind diagonalization or some other method of proof, but I think there was tremendous value in learning to fall in love with the process - to commit yourself fully to the joy of discovery and thought.
I can certainly tell you that I wouldn't have made it so far so quickly down the research list if this journey didn't feel like one of the most beautiful things I've ever done.