I'm curious how well you are doing in terms of retaining all the math you have learned. Can you still prove all or most of the theorems in the books you worked through, or do all or most of the exercises in them? How much of it still feels fresh in mind vs something much vaguer that you can only recall in broad strokes? Do you have a reviewing system in place, and if so, what does it look like?
I don't think about my self-study as "know a bunch of math things". Rather, it's a) continually improve at mathematical reasoning, and b) accumulate a bunch of problem-solving strategies and ways-of-looking-at-the-world. I can watch the insights compound: for example, from computability theory I know that for a decidable set of YES answers, there exists a polynomial with roots on the base-ten encodings of those YES inputs. Then, when I actually need to do stuff with polynomials for my research, I can see how the difference of returns from two policies can be represented by a polynomial in the agent's discount rate, which is a nice result (lemma 32). Insights build on insights.
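One way to see why a polynomial in the discount rate shows up (a sketch under a finite-horizon assumption; the lemma's actual setting may differ):

```latex
R(\pi, \gamma) \;=\; \sum_{t=0}^{H} \gamma^{t}\, r^{\pi}_{t}
\qquad\Longrightarrow\qquad
R(\pi_1, \gamma) - R(\pi_2, \gamma)
  \;=\; \sum_{t=0}^{H} \gamma^{t}\,\bigl(r^{\pi_1}_{t} - r^{\pi_2}_{t}\bigr),
```

a polynomial in \(\gamma\) of degree at most \(H\), so questions about which policy is better reduce to questions about the sign of a single polynomial.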
So, I'm not trying to memorize everything. Leafing through Linear Algebra Done Right, I don't remember much about what self-adjointness means, or Jordan normal form, or whatever. However, I don't think that really matters. If I need to use the extraneous stuff, I know it exists, and could just pick it back up.
I am, however, able to regenerate fundamental things I actually use / run into in later studies. I have a mental habit of making myself regenerate random claims in every proof I consider. If we're relying on the commutativity of addition on the reals, I reflexively supply a proof of that property. I came up with: use the Cauchy sequence limit notion of reals, then rely on the commutativity of rationals under addition for each member of the sequence.
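Written out, that regeneration exercise is a one-line derivation (a sketch assuming the Cauchy-sequence construction of \(\mathbb{R}\)):

```latex
x = [(a_n)],\quad y = [(b_n)],\quad a_n, b_n \in \mathbb{Q}
\quad\Longrightarrow\quad
x + y = [(a_n + b_n)] = [(b_n + a_n)] = y + x,
```

where the middle equality applies commutativity of addition on \(\mathbb{Q}\) termwise, and the outer equalities are just the definition of addition on equivalence classes of Cauchy sequences.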
It's not like I've perfectly retained everything I need. I can tell I'm a little rusty on some things. However, I never do conscious reviewing (beyond building on past insights through further study, using the insights in my professional research, and rederiving things periodically). I have empirical feedback that this works pretty well. My PhD qualifier exam involved a matrix analysis question; without having even taken the class, I was able to get the right answer by reasoning using the skills and knowledge I got from self-study.
ETA: FWIW, when I talk with math undergrads at my university about areas we've both studied / solve problems with them, my impression is that my comprehension is often better.
For those reading this thread in the future, Alex has now adopted a more structured approach to reviewing the math he has learned.
I think I would have done well to think more carefully about the benefits of implementing a review system, when you posted this comment. It may have been true that I was getting by with my current study setup, but also accumulating "math recall debt" over time.
I think I underestimated the importance of knowing lots of math facts. For example, I initially replied:
So, I'm not trying to memorize everything. Leafing through Linear Algebra Done Right, I don't remember much about what self-adjointness means, or Jordan normal form, or whatever. However, I don't think that really matters. If I need to use the extraneous stuff, I know it exists, and could just pick it back up.
This underestimates the cost of picking it back up: relearning a fact means relearning each of its forgotten dependencies, so the cost scales with how many dependencies I let decay, versus the much smaller cost of just maintaining those dependencies over time.
For example, when I'm studying quantum mechanics, it seems crazy to say "who cares about self-adjointness", but not that crazy to say "who cares about self-adjointness" if I'm just doing reinforcement learning theory.
I gestured at another useful habit, but one which only works on skills I use regularly:
I am, however, able to regenerate fundamental things I actually use / run into in later studies. I have a mental habit of making myself regenerate random claims in every proof I consider. If we're relying on the commutativity of addition on the reals, I reflexively supply a proof of that property. I came up with: use the Cauchy sequence limit notion of reals, then rely on the commutativity of rationals under addition for each member of the sequence.
I couldn't use this technique to help myself remember what a correlated equilibrium is, because I don't use concepts which build on 'correlated equilibria' very frequently.
I understood correlated equilibria at one point, but I don't recall anymore, and that makes me sad. Now I'll go back and put that chain of insights into Anki; if I'd been doing this from the beginning, I wouldn't have to do that.
You can avoid psychological annoyances throughout the year (tickets, unanticipated fees, etc.) and counteract the budget-planning fallacy by, at the beginning of each year, allocating money to goal-specific subaccounts.
What's the budget-planning fallacy?
Just a cheeky way I decided to refer to the planning fallacy, but for allocating money instead of time.
I found this book through the CFAR reading list. Some content was previously posted on my shortform feed.
Foreword
The more broadly I read and learn, the more I bump into implicit self-conceptions and self-boundaries. I was slightly averse to learning from a manager-oriented textbook because I'm not a manager, but also because I... didn't see myself as the kind of person who could learn about a "business"-y context? I also didn't see myself as the kind of person who could read and do math, which now seems ridiculous.
Although the read was fast and easy and often familiar, I unearthed a few gems.
Judgment in Managerial Decision Making
A tip of dubious ethicality for lazy min-maxers:
Unified explanation of biases
The authors group biases as stemming from the Big Three of availability, representativeness, and confirmation. The model I took away relies on a mechanism somewhat similar to attention in neural networks: due to how the brain performs time-limited search, more salient/recent memories get prioritized for recall.
The availability heuristic goes wrong when our saliency-weighted perceptions of the frequency of events are a biased estimator of the real frequency, when we happen to be extrapolating from a very small sample size, or when our memory structure makes recalling some kinds of things harder (e.g. words starting with 'a' versus words whose third letter is 'a'). Concepts get inappropriately activated in our minds, and we therefore reason incorrectly. Attention also explains anchoring: things related to your anchor are more salient, so you bring them to mind more readily.
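The "biased estimator" claim can be illustrated with a toy simulation (the event names and salience weights are invented for illustration): two event types are equally frequent, but estimating frequency from whatever a salience-weighted, time-limited memory search turns up overcounts the vivid type.

```python
import random

random.seed(0)

# Equal true frequencies, but "vivid" events are easier to recall.
events = ["plane crash"] * 50 + ["car crash"] * 50
salience = {"plane crash": 5.0, "car crash": 1.0}  # made-up weights

# Time-limited search: sample a handful of memories, weighted by salience.
weights = [salience[e] for e in events]
recalled = random.choices(events, weights=weights, k=20)

# Frequency as judged from what came to mind vs. the actual frequency.
estimate = recalled.count("plane crash") / len(recalled)
true_freq = events.count("plane crash") / len(events)
print(f"true frequency: {true_freq:.2f}, availability estimate: {estimate:.2f}")
```

With a 5:1 salience ratio, the recall-based estimate will tend to sit far above the true 0.50, which is the availability heuristic misfiring as a frequency estimator.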
The representativeness heuristic can be understood as highly salient concept-activations inappropriately dominating our reasoning. We then ignore e.g. base rates, sample size, statistical phenomena (regression to the mean), and the conjunctive burden of propositions. Consider how neural network activations could explain the following:
The case for confirmation bias seems to be a little more involved. We had evolutionary pressure to win arguments, which might mean our cognitive search aims to find supportive arguments and avoid even subconsciously signalling that we are aware of the existence of counterarguments. This means that those supportive arguments feel salient, and we (perhaps by "design") get to feel unbiased - we aren't consciously discarding evidence, we're just following our normal search/reasoning process! This is what our search algorithm feels like from the inside.
Making heads and tails of probabilistic reasoning
In Subjective Probability: A Judgment of Representativeness, Kahneman and Tversky hypothesize that
which lines up with the above attention/activation model. Anyways, participants judged that the sequence of birth sexes GBGBBG is more likely than BGBBBB (obviously, they have equal probability). K&T chalk this up to the first sequence seeming more "representative" of a "random" process. If you're considering whether the set of all sequences which "look like" the former is more likely than the set of sequences resembling the latter, then this answer could be correct.
However, checking the original paper, this was controlled for; K&T emphasized that the exact order of births was as described. They go on:
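Both readings are easy to check directly: every exact length-6 sequence has probability (1/2)^6, while the "set of sequences that look like it" reading compares how many sequences share each sequence's sex counts.

```python
from itertools import product
from math import comb

# Any *exact* birth-sex sequence of length 6 has the same probability.
p_exact = 0.5 ** 6
print(f"P(any exact sequence) = {p_exact}")  # 0.015625 for both sequences

# The representativeness intuition answers a different question:
# how many length-6 sequences share each sequence's boy/girl counts?
seqs = list(product("BG", repeat=6))
like_GBGBBG = sum(1 for s in seqs if s.count("B") == 3)  # 3 boys, 3 girls
like_BGBBBB = sum(1 for s in seqs if s.count("B") == 5)  # 5 boys, 1 girl
print(like_GBGBBG, like_BGBBBB)  # 20 vs 6, i.e. comb(6,3) vs comb(6,5)
```

So if participants were (incorrectly) answering the question about equivalence classes of sequences, 20-to-6 would favor GBGBBG; for the exact sequences K&T actually asked about, the probabilities are identical.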
Share your unique information in groups
Groups are good because they pool knowledge and expertise. However, studies show that by default, shared knowledge is much more likely to be discussed than unshared knowledge, which can significantly worsen decision-making. The authors give the example of groups evaluating a student council candidate who initially looked unfavorable: when all members knew a crucial piece of positive information about the candidate, groups were likely to favor them; but when only one member knew it privately, the information usually went unshared, and the candidate was passed over.
Open subaccounts for savings
You can avoid psychological annoyances throughout the year (tickets, unanticipated fees, etc.) and counteract the budget-planning fallacy by, at the beginning of each year, allocating money to goal-specific subaccounts. Then, you can forget about it during the year, and (perhaps) donate the remainder to an effective charity.
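The mechanics are simple enough to sketch (bucket names and dollar amounts here are invented for illustration): fund each bucket once at the start of the year, charge annoyances against the matching bucket as they arrive, and whatever survives the year is the donation candidate.

```python
# Hypothetical subaccount balances, funded once at the start of the year.
subaccounts = {"tickets": 300.0, "fees": 200.0, "repairs": 500.0}

def draw(bucket: str, amount: float) -> None:
    """Charge an unanticipated cost against its pre-funded bucket."""
    subaccounts[bucket] -= amount

# The year's annoyances hit the buckets, not your peace of mind.
draw("tickets", 120.0)
draw("fees", 45.50)

remainder = sum(subaccounts.values())  # candidate year-end donation
print(f"remainder to donate: ${remainder:.2f}")
```

The point of pre-allocating is psychological: the money was already "spent" in January, so a parking ticket in June draws down a bucket rather than triggering a fresh loss.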
Be more risk-neutral
Biological explanation for hedonic treadmill?
Negotiation tips
Chapters 9 and 10 contain a wealth of (seemingly) good negotiation advice. Being a good negotiator and mediator seems like an important generalist life skill.
There were a lot more helpful takeaways, and I plan on rereading Ch. 9 before conducting any important negotiations.
Forwards
This book was a little slow at times, both because of excessive preamble/signposting, and my already being familiar with much of the literature. Still, I'm glad I read it.
Hello again
It's been a long while since my last review. After injuring myself last summer, I wasn't able to type reliably until early this summer. This derailed the positive feedback loop I had around "learn math" -> "write about what I learned" -> "savor karma". Protect your feedback loops.
I run into fewer basic confusions than when I was just starting at math, so I generally have less to talk about. This means I'll be changing the style of any upcoming reviews, instead focusing on deeply explaining the things I found coolest.
Since January, I've read Visual Group Theory, Understanding Machine Learning, Computational Complexity: A Conceptual Perspective, Introduction to the Theory of Computation, An Illustrated Theory of Numbers, most of Tadellis' Game Theory, the beginning of Multiagent Systems, parts of several graph theory textbooks, and I'm going through Munkres' Topology right now. I've gotten through the first fifth of the first Feynman lectures, which has given me an unbelievable amount of mileage for generally reasoning about physics.
My "plan" is to keep learning math until the low graduate level (I still need to at least do complex analysis, topology, field / ring theory, ODEs/PDEs, and something to shore up my atrocious trig skills, and probably more)[1], and then branch off into physics + a "softer" science (anything from microecon to psychology).
New year, new decade
In the new year, I'm going to focus hard on raising the level of my cognitive game.
Reading the Sequences qualitatively levelled me up, and I want to do that again. My thought processes are still insufficiently transparent: I need to flag motivated reasoning more often. I still fall prey to the planning fallacy (but somewhat less than two years ago). I don't notice my confusion nearly as often as I should.
Not noticing confusion often has a cost measured in hours (or more). Let me give you an example. Last night, I went to speak with Sen. Amy Klobuchar about effective altruism. It was my understanding that the event would be a meet-and-greet. I planned to query her interest in e.g. setting up a granting agency disbursing funds based on scientific evidence of high impact, with the details to be worked out in conjunction with relevant professionals in EA and the government.
While I was waiting for her to arrive, I noticed that people were writing on paper and handing it to other people. I rounded this off as commit-to-caucus cards, which, had I actually thought about it, makes no sense – you keep your commit-to-caucus card. They were, in fact, submitting written questions, some of which Sen. Klobuchar would later answer. If I had just noticed this, I could have written a question and then left, saving myself two hours.
The list of things I've noticed I failed to notice in the last month is surprisingly long. I don't think I'm bad at this in a relative sense – just in an absolute sense.
This new year, I'm going to become a less oblivious, less stupid, and less wrong person.
I also still want to learn Bayes nets, category theory, get a much deeper understanding of probability theory, provability logic, and decision theory. ↩︎