Aug 24, 2018


It's been too long - a month and a half since my last review, and about three months since *Analysis I*. I've been immersed in my work for CHAI, but reality doesn't grade on a curve, and I want more mathematical firepower.

On the other hand, I've been cooking up something really special, so watch this space!

*Metric spaces; completeness and compactness.*

It sucks, and I hate it.

*Generalized continuity, and how it interacts with the considerations introduced in the previous chapter. Also, a terrible introduction to topology.*

There's a lot I wanted to say here about topology, but I don't think my understanding is good enough to break things down - I'll have to read an actual book on the subject.

*Pointwise and uniform convergence, the Weierstrass M-test, and uniform approximation by polynomials.*

Suppose we have some sequence of functions $f_n : [0,1] \to \mathbb{R}$, $f_n(x) := x^n$, which converge pointwise to the 1-indicator function $f$ (*i.e.*, $f(1) = 1$ and $f(x) = 0$ otherwise). Clearly, each $f_n$ is (infinitely) differentiable; however, the limiting function isn't even continuous at $x = 1$, let alone differentiable there! Basically, pointwise convergence isn't at all strong enough to stop the limit from "snapping" the continuity of its constituent functions.
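A quick numerical sketch of this snapping, assuming the standard witness $f_n(x) = x^n$ on $[0,1]$ (the grid sizes are arbitrary choices):

```python
import numpy as np

def f_n(x, n):
    """n-th element of the sequence f_n(x) = x^n on [0, 1]."""
    return x ** n

xs = np.array([0.0, 0.5, 0.9, 0.99, 1.0])
for n in [1, 10, 100, 10_000]:
    print(n, f_n(xs, n))  # values at x < 1 crush toward 0; f_n(1) stays 1

# The pointwise limit is the 1-indicator function: discontinuous at x = 1.
limit = np.where(xs == 1.0, 1.0, 0.0)

# The convergence is NOT uniform: sup over [0, 1) of |x^n - 0| stays near 1
# for every n, since x^n can be pushed arbitrarily close to 1.
grid = np.linspace(0.0, 1.0, 10_001)[:-1]      # [0, 1)
print(np.max(np.abs(f_n(grid, 100) - 0.0)))    # close to 1, not to 0
```

The last line is the whole story: uniform convergence would force that supremum to 0, and it plainly does not go.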

As in previous posts, I mark my progression by sharing a result derived without outside help.

*Already proven:* $\int_{-1}^{1} (1 - x^2)^N \, dx \geq \frac{1}{\sqrt{N}}$.
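This bound can be spot-checked numerically (a sketch; the Riemann-sum resolution and the sampled values of $N$ are arbitrary choices):

```python
import numpy as np

# Spot-check: the integral of (1 - x^2)^N over [-1, 1] is at least 1/sqrt(N).
xs = np.linspace(-1.0, 1.0, 200_001)
dx = xs[1] - xs[0]
for N in [1, 2, 5, 10, 100, 1000]:
    integral = np.sum((1 - xs**2) ** N) * dx
    print(N, integral, 1 / np.sqrt(N))
    assert integral >= 1 / np.sqrt(N)
```

(The true value behaves like $\sqrt{\pi/N}$ for large $N$, so there is comfortable slack above $1/\sqrt{N}$.)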

*Definition.* Let $\varepsilon > 0$ and $0 < \delta < 1$. A function $f : \mathbb{R} \to \mathbb{R}$ is said to be an *$(\varepsilon, \delta)$-approximation to the identity* if it obeys the following three properties:

- $f$ is compactly supported on $[-1, 1]$.
- $f$ is continuous, and $\int_{-\infty}^{\infty} f = 1$.
- $|f(x)| \leq \varepsilon$ for all $\delta \leq |x| \leq 1$.

*Lemma:* For every $\varepsilon > 0$ and $0 < \delta < 1$, there exists an $(\varepsilon, \delta)$-approximation to the identity which is a polynomial on $[-1, 1]$.

*Proof of Exercise 14.8.2(c).* Suppose $N \geq 1$; define $f(x) := c(1 - x^2)^N$ for $x \in [-1, 1]$ and $f(x) := 0$ otherwise. Clearly, $f$ is compactly supported on $[-1, 1]$ and is continuous. We want to find $c, N$ such that the second and third properties are satisfied. Since $(1 - x^2)^N$ is non-negative on $[-1, 1]$, $c$ must be positive, as $f$ must integrate to $1$. Therefore, $f$ is non-negative.

We want to show that $|f(x)| \leq \varepsilon$ for all $\delta \leq |x| \leq 1$. Since $f$ is non-negative, we may simplify to $c(1 - x^2)^N \leq \varepsilon$. Since the left-hand side is strictly monotone increasing on $[-1, 0]$ and strictly monotone decreasing on $[0, 1]$, we substitute $x = \delta$ without loss of generality. As $(1 - \delta^2)^N > 0$, we may take the reciprocal and multiply by $\varepsilon$, arriving at $c \leq \varepsilon(1 - \delta^2)^{-N}$.

We want $\int_{-\infty}^{\infty} f = 1$; as $f$ is compactly supported on $[-1, 1]$, this is equivalent to $\int_{-1}^{1} f = 1$. Using basic properties of the Riemann integral, we have $c \int_{-1}^{1} (1 - x^2)^N \, dx = 1$. Substituting in for $c$,

$$\varepsilon(1 - \delta^2)^{-N} \geq \sqrt{N} \geq \left(\int_{-1}^{1} (1 - x^2)^N \, dx\right)^{-1} = c,$$

with the second inequality already having been proven earlier. Note that although the first inequality is not always true, we can make it so: since $\delta$ is fixed and $(1 - \delta^2)^{-1} > 1$, the left-hand side approaches $\infty$ more quickly than $\sqrt{N}$ does. Therefore, we can make $N$ as large as necessary; isolating $N$,

$$\varepsilon(1 - \delta^2)^{-N} \geq \sqrt{N}$$
$$\varepsilon \geq \sqrt{N}(1 - \delta^2)^N,$$

the second line being a consequence of $(1 - \delta^2)^N > 0$. Then set $N$ to be any natural number such that this inequality is satisfied. Finally, we set $c := \left(\int_{-1}^{1} (1 - x^2)^N \, dx\right)^{-1}$. By construction, these values of $c, N$ satisfy the second and third properties. □
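The construction can be checked numerically (a sketch; the grid resolution and the sample $\varepsilon, \delta$ are arbitrary choices):

```python
import numpy as np

def approx_to_identity(eps, delta, grid=200_001):
    """Follow the proof: pick N with sqrt(N) * (1 - delta^2)^N <= eps,
    then pick c so that f(x) = c * (1 - x^2)^N integrates to 1."""
    N = 1
    while np.sqrt(N) * (1 - delta**2) ** N > eps:
        N += 1
    xs = np.linspace(-1.0, 1.0, grid)
    dx = xs[1] - xs[0]
    c = 1.0 / (np.sum((1 - xs**2) ** N) * dx)
    def f(x):
        return np.where(np.abs(x) <= 1, c * (1 - x**2) ** N, 0.0)
    return f, c, N

eps, delta = 0.05, 0.5
f, c, N = approx_to_identity(eps, delta)
xs = np.linspace(-1.0, 1.0, 200_001)
dx = xs[1] - xs[0]
print("N =", N, "c =", c)
print("integral ≈", np.sum(f(xs)) * dx)   # second property: ≈ 1
tail = xs[np.abs(xs) >= delta]
print("max on tail:", np.max(f(tail)))    # third property: <= eps
```

The first property (compact support on $[-1,1]$) holds by the `np.where`; the other two are exactly what the loop over $N$ and the normalizing $c$ buy us.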

Those looking for an excellent explanation of convolutions, look no further!

*Theorem.* Suppose $f : \mathbb{R} \to \mathbb{R}$ is continuous and compactly supported on $[a, b]$. Then for every $\varepsilon > 0$, there exists a polynomial $P$ such that $|P(x) - f(x)| \leq \varepsilon$ for all $x \in [a, b]$.

In other words, any continuous, real-valued function on a finite interval can be approximated with arbitrary precision by polynomials.
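One concrete way to watch this happen is via Bernstein polynomials on $[0,1]$ — a different constructive route than the chapter's convolution argument, used here only because it fits in a few lines (the target function and degrees are arbitrary choices):

```python
import math
import numpy as np

def bernstein(f, n):
    """Degree-n Bernstein polynomial of f on [0, 1]:
    B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)."""
    coeffs = [f(k / n) * math.comb(n, k) for k in range(n + 1)]
    def B(x):
        return sum(c * x**k * (1 - x) ** (n - k) for k, c in enumerate(coeffs))
    return B

target = lambda x: abs(x - 0.5)     # continuous, but not differentiable at 1/2
xs = np.linspace(0.0, 1.0, 1001)
errors = {}
for n in [10, 40, 160]:
    B = bernstein(target, n)
    errors[n] = float(np.max(np.abs(B(xs) - target(xs))))
    print(n, errors[n])             # sup-norm error shrinks as the degree grows
```

Even at the kink, where the target has no derivative, cranking up the degree grinds the uniform error down.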

*Why I'm talking about this.* On one hand, this result makes sense, especially after taking machine learning and seeing how polynomials can be contorted into basically whatever shape you want.

On the other hand, I find this theorem intensely beautiful. Tao's proof was slowly constructed, much to the reader's benefit. I remember the very moment the proof sketch came to me, newly-installed gears whirring happily.

*Real analytic functions, Abel's theorem, exp and log, complex numbers, and trigonometric functions.*

Cached thought from my CS undergrad: exponential functions always end up growing more quickly than polynomials, no matter the degree. Now, I finally have the gears to see why:

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$

has *all* the degrees, so no polynomial (of necessarily finite degree) could ever hope to compete! This also suggests why $\frac{d}{dx} e^x = e^x$.
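A quick numeric illustration (the degree $d = 10$ is an arbitrary choice):

```python
import math

# Any fixed-degree monomial eventually loses to e^x: the ratio x^d / e^x
# tends to 0, since the series for e^x contains the term x^(d+1)/(d+1)!,
# which alone outgrows x^d.
d = 10
for x in [10.0, 50.0, 100.0, 500.0]:
    print(x, x**d / math.exp(x))   # large at first, then collapses toward 0
```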

You can multiply a number by itself some number of times.

[*nods*]

You can multiply a number by itself a negative number of times.

[Sure.]

You can multiply a number by itself an irrational number of times.

[OK, I understand limits.]

You can multiply a number by itself an imaginary number of times.

[Out. Now.]

Seriously, this one's weird (rather, it *seems* weird, but how can "how the world is" be "weird"?).

Suppose we have some $z := a + bi$, where $a, b \in \mathbb{R}$. Then $e^z = e^a e^{bi}$, so "all" we need to figure out is how to take an imaginary exponent. Brian Slesinsky has us covered.
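Numerically, the decomposition $e^{a+bi} = e^a e^{bi}$ and Euler's formula $e^{bi} = \cos b + i \sin b$ check out (a sketch; the sample values are arbitrary):

```python
import cmath
import math

# e^{a + bi} = e^a * (cos b + i sin b): try a = 1, b = pi.
z = complex(1.0, math.pi)
lhs = cmath.exp(z)
rhs = math.exp(1.0) * complex(math.cos(math.pi), math.sin(math.pi))
print(lhs, rhs)    # both ≈ -e

# An imaginary exponent is a pure rotation: |e^{bi}| = 1 for every real b.
for b in [0.0, 1.0, math.pi / 2, 10.0]:
    print(b, abs(cmath.exp(1j * b)))
```

So "multiplying a number by itself an imaginary number of times" never changes the magnitude — it just spins you around the unit circle.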

*Years before becoming involved with the rationalist community, Nate asks this question, and Qiaochu answers.*

*This isn't a coincidence, because nothing is ever a coincidence.*

*Or maybe it is a coincidence, because Qiaochu answered every question on StackExchange.*

*Periodic functions, trigonometric polynomials, periodic convolutions, and the Fourier theorem.*

*A beautiful unification of linear algebra and calculus: linear maps as derivatives of multivariate functions, partial and directional derivatives, Clairaut's theorem, contractions and fixed points, and the inverse and implicit function theorems.*

If you have a set of points in $\mathbb{R}^2$, when do you know if it's secretly a function $f : \mathbb{R} \to \mathbb{R}$? For explicitly plotted functions, we can just use the geometric "vertical line test" to figure this out, but that's a bit harder when you only have an algebraic definition. Also, sometimes we can implicitly define a function locally by restricting its domain (even if no explicit form exists for the whole set).
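A concrete sketch with the unit circle (the sample points are arbitrary choices):

```python
import numpy as np

# The unit circle {(x, y) : x^2 + y^2 - 1 = 0} fails the vertical line test
# globally: the vertical line x = 0.5 hits it at two different heights.
f = lambda x, y: x**2 + y**2 - 1.0
top, bottom = np.sqrt(0.75), -np.sqrt(0.75)
print(f(0.5, top), f(0.5, bottom))   # both ≈ 0: two points share one x

# But near (0.5, +sqrt(0.75)), where df/dy = 2y != 0, the circle locally
# IS a graph: y = g(x) = sqrt(1 - x^2) on a small interval around 0.5.
g = lambda x: np.sqrt(1.0 - x**2)
xs = np.linspace(0.4, 0.6, 5)
residual = np.max(np.abs(f(xs, g(xs))))
print(residual)                      # ≈ 0: the graph sits inside the zero set
```

The nonvanishing partial derivative in $y$ is exactly the hypothesis the implicit function theorem below will demand.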

*Theorem.* Let $E$ be an open subset of $\mathbb{R}^n$, let $f : E \to \mathbb{R}$ be continuously differentiable, and let $y = (y_1, \ldots, y_n)$ be a point in $E$ such that $f(y) = 0$ and $\frac{\partial f}{\partial x_n}(y) \neq 0$. Then there exists an open $U \subseteq \mathbb{R}^{n-1}$ containing $(y_1, \ldots, y_{n-1})$, an open $V \subseteq E$ containing $y$, and a function $g : U \to \mathbb{R}$ such that