## Foreword

Linear algebra, my old flame - how I missed you. At my undergraduate institution, linear algebra was my introduction to proof-based mathematics. There are people who shake hands, and there are people who *shake hands*. You know the type - you grasp their hand, and they clamp down and pull you in, agitating so wildly you fear for the structural integrity of your joints. My first experience with proofs was an encounter of the latter variety.

I received my first homework grade, and I was *not* pleased with my performance. I promptly went to the library and vowed not to leave until I learned how to write proofs adequately. The hours passed, and (thankfully for my stomach) I got it. I didn't let up all semester. Immediately before the final exam, I was doing pushups in the hallway, high-fiving my friends, and watching the 'Michael Jordan Top 50 All Time Plays' video while visualizing myself doing that to the test. Do that to the test I did indeed.

This time around, the appropriately-acronymized LADR is the first step on my journey to attain a professional-grade mathematical skillset.

## Tight Feedback Loops

In a (possibly maniacal) effort to ensure both mastery of the material and the maturation of my proof skillset, I did nearly[1] every one of the 561 exercises provided. I skipped problems only when I was confident I wouldn't learn anything, or when they required calculus I didn't remember (and the payoff didn't seem worth the time spent relearning it now in a shallow manner, as opposed to thoroughly learning more calculus later). If I could sketch a solid proof in my head, I wouldn't write anything down. Even in that case, I checked my answers using this site (additional solutions may be found here, although be warned that not all of them are correct).

I also sometimes elected to give myself small hints after being stuck on a problem for a while; the idea was to keep things at the difficulty sweet spot. Specifically, I'd spend 10-20 minutes working on a problem by myself; if I wasn't getting anywhere, I'd find a hint and then backpropagate the correct mental motion instead of what I had been trying to do. I think that focusing on where you were going wrong, what insight you *should* have had, and in what direction you *should* have looked is more efficient than just reading solutions.

Over time, I needed fewer hints, even as problem difficulty increased.

My approach was in part motivated by the findings of Rohrer and Pashler:

Surprisingly little is known about how long-term retention is most efficiently achieved... Our results suggest that a single session devoted to the study of some material should continue long enough to ensure that mastery is achieved but that immediate further study of the same material is an inefficient use of time.

The point isn't to struggle *per se* - it's to improve and to *win*.

## Linear Algebra Done Right

This book has been previously reviewed by Nate Soares; as such, I'll spend time focusing on the concepts I found most difficult. Note that his review was for the second edition, while mine is for the third.

True to my vow in the last post, I have greatly improved my proof-babble; a sampling of my proofs can be found here.

If you zip through a page in less than an hour, you are probably going too fast.

Try me.

## 1: Vector Spaces

In which the author reviews complex numbers, vector spaces, and subspaces.

I kept having trouble parsing

For f, g ∈ F^S, the sum f + g ∈ F^S is defined by (f+g)(x) = f(x) + g(x) for all x ∈ S.

because my brain was insisting there was a type error in the function addition. I then had the stunning (and overdue) realization that my mental buckets for "set-theoretic functions" and "mathematical functions in general" should be merged.

That is, if you define

f : X → Y = {(x, f(x)) : x ∈ X}
g : X → Y = {(x, g(x)) : x ∈ X},

then (f+g) : X → Y simply has the definition {(x, f(x) + g(x)) : x ∈ X}. There isn't any "online computation"; the sum function simply has a different Platonic lookup table.
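The lookup-table picture is easy to make literal in code. A toy sketch (the set S and the functions f, g are my own made-up examples), representing functions on a finite set as dicts:

```python
# Functions on a finite set S, represented literally as lookup tables.
# S, f, and g are made-up examples for illustration.
S = {0, 1, 2}

f = {x: x + 1 for x in S}   # f(x) = x + 1
g = {x: 2 * x for x in S}   # g(x) = 2x

# (f+g)(x) = f(x) + g(x): a brand-new table, not "online computation".
f_plus_g = {x: f[x] + g[x] for x in S}

print(sorted(f_plus_g.items()))  # [(0, 1), (1, 4), (2, 7)]
```

The sum is just another table, built once up front, mirroring the set-theoretic definition above.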

## 2: Finite-Dimensional Vector Spaces

In which the author covers topics spanning linear independence, bases, and dimension.

## 3: Linear Maps

In which the author guides us through the fertile territory of linear maps, introducing null spaces, matrices, isomorphisms, product and quotient spaces, and dual bases.

So far our attention has focused on vector spaces. No one gets excited about vector spaces.

## Matrix Redpilling

The author built up to matrix multiplication by repeatedly insinuating that linear maps are secretly just matrix multiplications, teaching you to see the true fabric of the territory you've been exploring. Very well done.

Look no further than here and here for an intuitive understanding of matrix multiplication.
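To make the "linear maps are secretly matrix multiplications" point concrete, here's a quick check (with small matrices I made up) that composing maps agrees with multiplying their matrices, i.e. M(S∘T) = M(S)M(T):

```python
def matmul(A, B):
    """Textbook matrix product: (AB)[i][j] = sum_k A[i][k] * B[k][j]."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply(A, v):
    """Apply the linear map with matrix A to the vector v."""
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

S = [[1, 2], [3, 4]]   # arbitrary example maps on R^2
T = [[0, 1], [1, 0]]
v = [5, 7]

# Applying S after T is the same as applying the single matrix product ST:
assert apply(matmul(S, T), v) == apply(S, apply(T, v))
```

The definition of the matrix product is exactly what forces this equality to hold for every v.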

## Dual Maps

This StackExchange post both articulates and answers my initial confusion.

If T ∈ L(V,W), then the dual map of T is the linear map T′ ∈ L(W′,V′) defined by T′(φ) = φ ∘ T for φ ∈ W′.

The double dual space of V, denoted V′′, is defined to be the dual space of V′. In other words, V′′=(V′)′. Define Λ:V→V′′ by (Λv)(φ)=φ(v) for v∈V and φ∈V′.

## Grueling Dualing

Stay with me, this is dualble.

So Λ takes some v ∈ V and returns the curried function Λv ∈ V′′. Λv, being in V′′, takes some φ ∈ V′ and returns some a ∈ F. In other words, Λv ∈ V′′ lets you evaluate the space of evaluation functions (V′) with respect to the *fixed* v ∈ V. That's it!
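The currying here is the same currying you'd write in code. A sketch with V = R² and F = R (the functionals phi1 and phi2 are arbitrary choices of mine):

```python
# V = R^2 over F = R. A functional phi in V' is just a function v -> R.
# Lam : V -> V'' sends v to the "evaluate-at-v" map on functionals.

def Lam(v):
    # Lam(v) lives in V'': it eats a functional phi and returns phi(v).
    return lambda phi: phi(v)

# Two example functionals in V' (coordinate-style maps; my choice):
phi1 = lambda v: v[0]
phi2 = lambda v: v[0] + 2 * v[1]

v = (3.0, 4.0)
Lam_v = Lam(v)          # the curried function in V''

print(Lam_v(phi1))      # 3.0  -- evaluates phi1 at the fixed v
print(Lam_v(phi2))      # 11.0
```

Λv never does anything to v itself; it just hands the fixed v to whatever evaluation function you feed it.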

## 4: Polynomials

In which the author demystifies the quadratic formula, sharing with the reader those reagents used in its incantation.

Remarkably, mathematicians have proved that no formula exists for the zeros of polynomials of degree 5 or higher. But computers and calculators can use clever numerical methods to find good approximations to the zeros of any polynomial, even when exact zeros cannot be found.

For example, no one will ever be able to give an exact formula for a zero of the polynomial p defined by p(x)=x5−5x4−6x3+17x2+4x−7.

...

There are two cats where I live. Sometimes, I watch them meander around; it's fascinating to think how they go about their lives totally oblivious to the true nature of the world around them. The above unsolvability result surprised me so much that I have begun to suspect that I too am a clueless cat (until I learn complex analysis; you'll excuse me for having a few other textbooks to study first).
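The approximation half of the quoted claim is easy to demonstrate: bisection pins down a zero of the quoted quintic to high precision (the interval (0, 1) comes from a sign check, p(0) < 0 < p(1)):

```python
def p(x):
    """The quintic from the quote: no exact formula for its zeros."""
    return x**5 - 5*x**4 - 6*x**3 + 17*x**2 + 4*x - 7

# p(0) = -7 < 0 and p(1) = 4 > 0, so a zero lies in (0, 1).
lo, hi = 0.0, 1.0
for _ in range(60):               # each step halves the interval
    mid = (lo + hi) / 2
    if p(lo) * p(mid) <= 0:       # sign change in [lo, mid]
        hi = mid
    else:
        lo = mid

root = (lo + hi) / 2
print(abs(p(root)) < 1e-9)        # True: easy to approximate,
                                  # impossible to express exactly
```

Sixty halvings shrink the bracketing interval below 1e-18, which is the sense in which "clever numerical methods" sidestep the no-formula result entirely.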

Edit: daozaich writes about why this isn't as surprising as it seems.

## 5: Eigenvalues, Eigenvectors, and Invariant Subspaces

In which the author uses the prefix 'eigen-' so much that it stops sounding like a word.

## Revisiting Material

Before starting this book, I watched 3Blue1Brown's video on eigenvectors and came out with a vague "understanding". Rewatching it after reading Ch. 5.A, the geometric intuitions behind eigenvectors didn't seem like mere ways-to-remember an exotic math concept; they felt like a manifestation of how the world works. I *knew* what I was seeing from the hundreds of proofs I'd done up to that point.

Imagine being blind yet knowing the minute details of each object in your room; one day, a miracle treatment restores your eyesight in full. Imagine then seeing your room for the "first time".

## Diagonalizability

Intuitively, the diagonalizability of some operator T∈L(V) on a finite-dimensional vector space V means you can partition (more precisely, express as a direct sum) V by the eigenspaces E(λi,T).

Another way to look at it is that diagonalization is the mutation of the basis vectors of V so that each column of M(T) is one-hot[2]; you then rearrange the columns (by relabeling the basis vectors) so that M(T) is diagonal.
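As a concrete (made-up) example: the operator with standard-basis matrix [[2, 1], [1, 2]] has eigenvectors (1, 1) and (1, -1) with eigenvalues 3 and 1; rewriting it in the eigenbasis produces a diagonal matrix. A check in plain Python:

```python
def matmul(A, B):
    """2x2 matrix product, enough for this example."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 2]]                # M(T) in the standard basis (my example)
P = [[1, 1], [1, -1]]               # columns = eigenvectors (1,1) and (1,-1)
P_inv = [[0.5, 0.5], [0.5, -0.5]]   # inverse of P, computed by hand

# Change of basis: in the eigenbasis, M(T) becomes diagonal,
# with the eigenvalues 3 and 1 on the diagonal.
D = matmul(P_inv, matmul(A, P))
print(D)   # [[3.0, 0.0], [0.0, 1.0]]
```

Each column of D is one-hot exactly because each basis vector of the new basis is sent to a multiple of itself.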

## Unclear Exercise

On page 156, you'll be asked to verify that a matrix is diagonalizable with respect to a provided nonstandard basis. The phrasing of the exercise makes it seem trivial, but the book doesn't specify how to do this until Ch. 10. Furthermore, it isn't core conceptual material. Skip.

## 6: Inner Product Spaces

In which the author introduces inner products, orthonormal bases, the Cauchy-Schwarz inequality, and a neat solution to minimization problems using orthogonal complements.

## 7: Operators on Inner Product Spaces

In which the author lays out adjoint, self-adjoint, normal, and isometric operators, proves the (a) Spectral theorem, and blows my mind with the Polar and Singular Value Decompositions.

## Adjoints

Consider the linear functional φ∈L(W,F) given by ⟨Tv,w⟩ for fixed v∈V; this is then a linear functional on W for the chosen Tv. The adjoint T∗ produces the corresponding linear functional in L(V,F); given fixed w∈W, we now map to some linear functional on V such that ⟨Tv,w⟩=⟨v,T∗w⟩. The left-hand side is a linear functional on
