Long-time lurker (c. 2013), recent poster. I also write on the EA Forum.
I currently work in policy research, which feels very different from my intrinsic aesthetic inclination, in a way that I think Tanner Greer captures well in The Silicon Valley Canon: On the Paideia of the American Tech Elite:
I often draw a distinction between the political elites of Washington DC and the industrial elites of Silicon Valley with a joke: in San Francisco reading books, and talking about what you have read, is a matter of high prestige. Not so in Washington DC. In Washington people never read books—they just write them.
To write a book, of course, one must read a good few. But the distinction I drive at is quite real. In Washington, the man of ideas is a wonk. The wonk is not a generalist. The ideal wonk knows more about his or her chosen topic than you ever will. She can comment on every line of a select arms limitation treaty, recite all Chinese human rights violations that occurred in the year 2023, or explain to you the exact implications of the new residential clean energy tax credit—but never all at once. ...
Washington intellectuals are masters of small mountains. Some of their peaks are more difficult to summit than others. Many smaller slopes are nonetheless jagged and foreboding; climbing these is a mark of true intellectual achievement. But whether the way is smoothly paved or roughly made, the destinations are the same: small heights, little occupied. Those who reach these heights can rest secure. Out of humanity’s many billions there are only a handful of individuals who know their chosen domain as well as they do. They have mastered their mountain: they know its every crag, they have walked its every gully. But it is a small mountain. At its summit their field of view is limited to the narrow range of their own expertise.
In Washington that is no insult: both legislators and regulators call on the man of deep but narrow learning. Yet I trust you now see why a city full of such men has so little love for books. One must read many books, laws, and reports to fully master one’s small mountain, but these are books, laws, and reports that the men of other mountains do not care about. One is strongly encouraged to write books (or reports, which are simply books made less sexy by having an “executive summary” tacked up front) but again, the books one writes will be read only by the elect few climbing your mountain.
The social function of such a book is entirely unrelated to its erudition, elegance, or analytical clarity. It is only partially related to the actual ideas or policy recommendations inside it. In this world of small mountains, books and reports are a sort of proof, a sign of achievement that can be seen by climbers of other peaks. An author has mastered her mountain. The wonk thirsts for authority: once she has written a book, other wonks will give it to her.
While I don't work in Washington, this description rings true to my experience, and I find it aesthetically undesirable. Greer contrasts this with the Silicon Valley aesthetic, which is far more like the communities I'm familiar with:
The technologists of Silicon Valley do not believe in authority. They merrily ignore credentials, discount expertise, and rebel against everything settled and staid. There is a charming arrogance to their attitude. This arrogance is not entirely unfounded. The heroes of this industry are men who understood in their youth that some pillar of the global economy might be completely overturned by an emerging technology. These industries were helmed by men with decades of experience; they spent millions—in some cases, billions—of dollars on strategic planning and market analysis. They employed thousands of economists and business strategists, all with impeccable credentials. Arrayed against these forces were a gaggle of nerds not yet thirty. They were armed with nothing but some seed funding, insight, and an indomitable urge to conquer.
And so they conquered.
This is the story the old men of the Valley tell; it is the dream that the young men of the Valley strive for. For our purposes it shapes the mindset of Silicon Valley in two powerful ways. The first is a distrust of established expertise. The technologist knows he is smart—and in terms of raw intelligence, he is in fact often smarter than any random small-mountain subject expert he might encounter. But intelligence is only one of the two altars worshiped in Silicon Valley. The other is action. The founders of the Valley invariably think of themselves as men of action: they code, they build, they disrupt, they invent, they conquer. This is a culture where insight, intelligence, and knowledge are treasured—but treasured as tools of action, not goods in and of themselves.
This silicon union of intellect and action creates a culture fond of big ideas. The expectation that anyone sufficiently intelligent can grasp, and perhaps master, any conceivable subject incentivizes technologists to become conversant in as many subjects as possible. The technologist is thus attracted to general, sweeping ideas with application across many fields. To a remarkable extent conversations at San Francisco dinner parties morph into passionate discussions of philosophy, literature, psychology, and natural science. If the Washington intellectual aims for authority and expertise, the Silicon Valley intellectual seeks novel or counter-intuitive insights. He claims to judge ideas on their utility; in practice I find he cares mostly for how interesting an idea seems at first glance. He likes concepts that force him to puzzle and ponder.
This is fertile soil for the dabbler, the heretic, and the philosopher from first principles. It is also a good breeding ground for books. Not for writing books—being men of action, most Silicon Valley sorts do not have time to write books. But they make time to read books—or barring that, time to read the number of book reviews or podcast interviews needed to fool other people into thinking they have read a book (As an aside: I suspect this accounts somewhat for the popularity of this blog among the technologists. I am an able dealer in second-hand ideas).
Out of curiosity, I asked Claude 3.5 Sonnet to create a checklist-style version of The Way to "serve as a daily reminder and also guide to practical daily action and thinking", with the understanding that (quoting Zvi) "The Way that can be specified is not The Way". Seems decent. (All bullet lists are meant to be checkboxes, except the last list of bullets.)
The Way: A Living Checklist
Note: The Way that can be specified is not The Way. This is an incomplete approximation, meant to guide rather than constrain.
Core Principles
Truth-Seeking
- Have I written down my actual beliefs clearly and publicly?
- Am I ready to be proven wrong and update accordingly?
- Have I avoided fooling myself, especially about things I want to be true?
- Am I reasoning things out explicitly, step by step?
- Have I shown my work so others can check my reasoning?
Action & Impact
- Am I actually Doing The Thing, rather than just talking about it?
- Have I found ways to create concrete improvements today, rather than waiting for perfect solutions?
- Am I focusing on real outcomes rather than appearances or process?
- Do I have meaningful skin in the game?
- Am I using my comparative advantage effectively?
Decision Making
- Have I considered the actual price/tradeoffs involved?
- Am I making decisions under uncertainty rather than waiting for perfect information?
- Have I avoided false dichotomies and found the nuanced path?
- Am I being appropriately careful with irreversible decisions?
- Have I maintained enough slack in my systems and decisions?
Learning & Growth
- Am I willing to look stupid to become less wrong?
- Have I learned from my mistakes and updated my models?
- Am I experimenting and iterating to find better approaches?
- Have I sought out worthy opponents who can challenge my thinking?
- Am I building deep understanding rather than surface knowledge?
Character & Conduct
- Have I been honest, even when it's costly?
- Am I following through on my commitments?
- Have I avoided needless cruelty or control?
- Am I using power and influence responsibly?
- Have I maintained my integrity while pursuing my goals?
Balance & Wisdom
- Have I found room for joy and fun without compromising effectiveness?
- Am I building lasting value rather than chasing short-term gains?
- Have I avoided both reckless abandon and paralyzing caution?
- Am I considering both practical utility and deeper principles?
- Have I remained adaptable as circumstances change?
Remember
- The Way is hard
- The Way is not for everyone
- The Way changes as reality changes
- Violence is not The Way
- The perfect need not be the enemy of the good
- Having skin in the game focuses the mind
- Mundane utility matters
- The Way includes both effectiveness and joy
This checklist is intentionally incomplete. The Way that matters is the one you find through doing the work.
Just reread Scott Aaronson's We Are the God of the Gaps (a little poem) from 2022:
When the machines outperform us on every goal for which performance can be quantified,
When the machines outpredict us on all events whose probabilities are meaningful,
When they not only prove better theorems and build better bridges, but write better Shakespeare than Shakespeare and better Beatles than the Beatles,
All that will be left to us is the ill-defined and unquantifiable,
The interstices of Knightian uncertainty in the world,
The utility functions that no one has yet written down,
The arbitrary invention of new genres, new goals, new games,
None of which will be any “better” than what the machines could invent, but will be ours,
And which we can call “better,” since we won’t have told the machines the standards beforehand.
We can be totally unfair to the machines that way.
And for all that the machines will have over us,
We’ll still have this over them:
That we can’t be copied, backed up, reset, run again and again on the same data—
All the tragic limits of wet meat brains and sodium-ion channels buffeted by microscopic chaos,
Which we’ll strategically redefine as our last strengths.
On one task, I assure you, you’ll beat the machines forever:
That of calculating what you, in particular, would do or say.
There, even if deep networks someday boast 95% accuracy, you’ll have 100%.
But if the “insights” on which you pride yourself are impersonal, generalizable,
Then fear obsolescence as would a nineteenth-century coachman or seamstress.
From earliest childhood, those of us born good at math and such told ourselves a lie:
That while the tall, the beautiful, the strong, the socially adept might beat us in the external world of appearances,
Nevertheless, we beat them in the inner sanctum of truth, where it counts.
Turns out that anyplace you can beat or be beaten wasn’t the inner sanctum at all, but just another antechamber,
And the rising tide of the learning machines will flood them all,
Poker to poetry, physics to programming, painting to plumbing, which first and which last merely a technical puzzle,
One whose answers upturn and mock all our hierarchies.
And when the flood is over, the machines will outrank us in all the ways we can be ranked,
Leaving only the ways we can’t be.
Feels poignant.
Philosophy Bear's response to Scott is worth reading too.
While Dyson's birds-and-frogs archetypes of mathematicians are oft-mentioned, David Mumford's tribes of mathematicians are underappreciated, and I find myself pointing to them often in discussions that devolve into "my preferred kind of math research is better than yours"-type aesthetic arguments:
... the subjective nature and attendant excitement during mathematical activity, including a sense of its beauty, varies greatly from mathematician to mathematician... I think one can make a case for dividing mathematicians into several tribes depending on what most strongly drives them into their esoteric world. I like to call these tribes explorers, alchemists, wrestlers and detectives. Of course, many mathematicians move between tribes and some results are not cleanly the property of one tribe.
- Explorers are people who ask -- are there objects with such and such properties and if so, how many? They feel they are discovering what lies in some distant mathematical continent and, by dint of pure thought, shining a light and reporting back what lies out there. The most beautiful things for them are the wholly new objects that they discover (the phrase 'bright shiny objects' has been in vogue recently) and these are especially sought by a sub-tribe that I call Gem Collectors. Explorers have another sub-tribe that I call Mappers who want to describe these new continents by making some sort of map as opposed to a simple list of 'sehenswürdigkeiten'.
- Alchemists, on the other hand, are those whose greatest excitement comes from finding connections between two areas of math that no one had previously seen as having anything to do with each other. This is like pouring the contents of one flask into another and -- something amazing occurs, like an explosion!
- Wrestlers are those who are focussed on relative sizes and strengths of this or that object. They thrive not on equalities between numbers but on inequalities, what quantity can be estimated or bounded by what other quantity, and on asymptotic estimates of size or rate of growth. This tribe consists chiefly of analysts, who deal in integrals that measure the size of functions, but people in every field get drawn in.
- Finally Detectives are those who doggedly pursue the most difficult, deep questions, seeking clues here and there, sure there is a trail somewhere, often searching for years or decades. These too have a sub-tribe that I call Strip Miners: these mathematicians are convinced that underneath the visible superficial layer, there is a whole hidden layer and that the superficial layer must be stripped off to solve the problem. The hidden layer is typically more abstract, not unlike the 'deep structure' pursued by syntactical linguists. Another sub-tribe are the Baptizers, people who name something new, making explicit a key object that has often been implicit earlier but whose significance is clearly seen only when it is formally defined and given a name.
Mumford's examples of each, both results and mathematicians:
Some miscellaneous humorous quotes:
When I was teaching algebraic geometry at Harvard, we used to think of the NYU Courant Institute analysts as the macho guys on the scene, all wrestlers. I have heard that conversely they used the phrase 'French pastry' to describe the abstract approach that had leapt the Atlantic from Paris to Harvard.
Besides the Courant crowd, Shing-Tung Yau is the most amazing wrestler I have talked to. At one time, he showed me a quick derivation of inequalities I had sweated blood over and has told me that mastering this skill was one of the big steps in his graduate education. It's crucial to realize that outside pure math, inequalities are central in economics, computer science, statistics, game theory, and operations research. Perhaps the obsession with equalities is an aberration unique to pure math while most of the real world runs on inequalities.
In many ways [the Detective approach to mathematical research exemplified by e.g. Andrew Wiles] is the public's standard idea of what a mathematician does: seek clues, pursue a trail, often hitting dead ends, all in pursuit of a proof of the big theorem. But I think it's more correct to say this is one way of doing math, one style. Many are leery of getting trapped in a quest that they may never fulfill.
I kind of envy that you figured this out yourself — I learned the parallelepiped hypervolume interpretation of the determinant from browsing forums (probably this MSE question's responses). Also, please do write that blog article.
And if I keep doing that, hypothetically speaking, some of those discoveries might even be original.
Yeah, I hope you will! I'm reminded of what Scott Aaronson said recently:
When I was a kid, I too started by rediscovering things (like the integral for the length of a curve) that were centuries old, then rediscovering things (like an efficient algorithm for isotonic regression) that were decades old, then rediscovering things (like BQP⊆PP) that were about a year old … until I finally started discovering things (like the collision lower bound) that were zero years old. This is the way.
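As an aside, the parallelepiped interpretation mentioned above is easy to check numerically: for a 3×3 matrix, the determinant equals the scalar triple product of its column vectors, i.e. the signed volume of the parallelepiped they span. A minimal sketch with numpy (matrix values chosen arbitrarily):

```python
import numpy as np

# det([a b c]) equals a . (b x c), the signed volume of the
# parallelepiped spanned by the column vectors a, b, c.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [2.0, 0.0, 1.0]])

a, b, c = A[:, 0], A[:, 1], A[:, 2]
triple_product = np.dot(a, np.cross(b, c))  # signed volume
det = np.linalg.det(A)

assert np.isclose(det, triple_product)
print(det)  # 13.0 for this matrix
```

The same identity generalizes: in n dimensions, |det| is the n-dimensional hypervolume of the parallelepiped spanned by the columns.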
What were you outputting over a million words in a week for?
And given that there are 7 x 16 x 60 = 6,720 minutes in a week of 16-hour days, you'd need to output 150 wpm at minimum over the entire duration to hit a million words, which doesn't seem humanly possible. How did you do it?
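Spelling out that back-of-the-envelope calculation as a runnable check:

```python
# Minutes available in a week of 16-hour days, and the sustained
# words-per-minute needed to produce a million words in that time.
minutes_in_week = 7 * 16 * 60             # 6720 minutes
required_wpm = 1_000_000 / minutes_in_week
print(minutes_in_week, round(required_wpm))  # 6720 149
```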
I suspect you've probably seen Scott's Varieties Of Argumentative Experience, so this is mostly meant for others. He says of Graham's hierarchy:
Graham’s hierarchy is useful for its intended purpose, but it isn’t really a hierarchy of disagreements. It’s a hierarchy of types of response, within a disagreement. Sometimes things are refutations of other people’s points, but the points should never have been made at all, and refuting them doesn’t help. Sometimes it’s unclear how the argument even connects to the sorts of things that in principle could be proven or refuted.
If we were to classify disagreements themselves – talk about what people are doing when they’re even having an argument – I think it would look something like this:
Most people are either meta-debating – debating whether some parties in the debate are violating norms – or they’re just shaming, trying to push one side of the debate outside the bounds of respectability.
If you can get past that level, you end up discussing facts (blue column on the left) and/or philosophizing about how the argument has to fit together before one side is “right” or “wrong” (red column on the right). Either of these can be anywhere from throwing out a one-line claim and adding “Checkmate, atheists” at the end of it, to cooperating with the other person to try to figure out exactly what considerations are relevant and which sources best resolve them.
If you can get past that level, you run into really high-level disagreements about overall moral systems, or which goods are more valuable than others, or what “freedom” means, or stuff like that. These are basically unresolvable with anything less than a lifetime of philosophical work, but they usually allow mutual understanding and respect.
Scott's take on the relative futility of resolving high-level generators of disagreement (which seems to be beyond Level 7? Not sure) within reasonable timeframes is kind of depressing.
A bit more on the high-level generators:
High-level generators of disagreement are what remains when everyone understands exactly what’s being argued, and agrees on what all the evidence says, but have vague and hard-to-define reasons for disagreeing anyway. In retrospect, these are probably why the disagreement arose in the first place, with a lot of the more specific points being downstream of them and kind of made-up justifications. These are almost impossible to resolve even in principle.
“I feel like a populace that owns guns is free and has some level of control over its own destiny, but that if they take away our guns we’re pretty much just subjects and have to hope the government treats us well.”
“Yes, there are some arguments for why this war might be just, and how it might liberate people who are suffering terribly. But I feel like we always hear this kind of thing and it never pans out. And every time we declare war, that reinforces a culture where things can be solved by force. I think we need to take an unconditional stance against aggressive war, always and forever.”
“Even though I can’t tell you how this regulation would go wrong, in past experience a lot of well-intentioned regulations have ended up backfiring horribly. I just think we should have a bias against solving all problems by regulating them.”
“Capital punishment might decrease crime, but I draw the line at intentionally killing people. I don’t want to live in a society that does that, no matter what its reasons.”
Some of these involve what social signal an action might send; for example, even a just war might have the subtle effect of legitimizing war in people’s minds. Others involve cases where we expect our information to be biased or our analysis to be inaccurate; for example, if past regulations that seemed good have gone wrong, we might expect the next one to go wrong even if we can’t think of arguments against it. Others involve differences in very vague and long-term predictions, like whether it’s reasonable to worry about the government descending into tyranny or anarchy. Others involve fundamentally different moral systems, like if it’s okay to kill someone for a greater good. And the most frustrating involve chaotic and uncomputable situations that have to be solved by metis or phronesis or similar-sounding Greek words, where different people’s Greek words give them different opinions.
You can always try debating these points further. But these sorts of high-level generators are usually formed from hundreds of different cases and can’t easily be simplified or disproven. Maybe the best you can do is share the situations that led to you having the generators you do. Sometimes good art can help.
The high-level generators of disagreement can sound a lot like really bad and stupid arguments from previous levels. “We just have fundamentally different values” can sound a lot like “You’re just an evil person”. “I’ve got a heuristic here based on a lot of other cases I’ve seen” can sound a lot like “I prefer anecdotal evidence to facts”. And “I don’t think we can trust explicit reasoning in an area as fraught as this” can sound a lot like “I hate logic and am going to do whatever my biases say”. If there’s a difference, I think it comes from having gone through all the previous steps – having confirmed that the other person knows as much as you, that you might be intellectual equals who are both equally concerned about doing the moral thing – and realizing that both of you alike are controlled by high-level generators. High-level generators aren’t biases in the sense of mistakes. They’re the strategies everyone uses to guide themselves in uncertain situations.
This doesn’t mean everyone is equally right and okay. You’ve reached this level when you agree that the situation is complicated enough that a reasonable person with reasonable high-level generators could disagree with you. If 100% of the evidence supports your side, and there’s no reasonable way that any set of sane heuristics or caveats could make someone disagree, then (unless you’re missing something) your opponent might just be an idiot.
Are we already at the point where AI, or some app, can summarize podcasts accurately and extract key takeaways with relatively technical interviewees like Dylan, so we don't need 5 hours (or even 2.5h at 2x)?