I agree a lot with the attitude of leadership through service/recognising the work of those who support others. Not sure that I would endorse the whole "hero/sidekick" dynamic. In my head it's more like "a group of adventurers in a fellowship, each of whom has a different role."
Thanks for writing out something that I feel very strongly but often have trouble articulating in these spaces. The song made me tear up. Incidentally, my main character/self-representation for my aborted Harry Potter long-form fanfiction was a Hufflepuff.
Follow-up to https://vitalik.eth.limo/general/2025/11/07/galaxybrain.html
Here is a galaxy brain argument I see a lot:
"We should do [X], because people who are [bad quality] are trying to do [X] and if they succeed the consequences will be disastrous."
Usually [X] is some dual-use strategy (acquire wealth and power, lie to their audience, build or use dangerous tech) and [bad quality] is something like being reckless, malicious, psychopathic, etc. Sometimes the consequence is zero sum (they get more power to use to do Bad Things relative to us, the Good People) and sometimes the consequence is negative sum (Bad Things will happen).
As someone pointed out in the Twitter replies to the mechanize essay, this kind of argument basically justifies selling crack or any other immoral action, provided you can imagine a hypothetical worse person doing the same thing. See also its related cousin, "If I don't do [X] someone else will do it anyways", which (as Vitalik points out) assumes a perfectly liquid labour market that usually does not exist except in certain industries.
I leave it to the reader to furnish examples.
Yeah, the next level of the question is something like "we can prove something to a small circle of experts, now how do we communicate the reasoning and the implications to policymakers/interested parties/the public in general?"
To be honest, this makes me quite worried. Suppose that someone working with mathematical methods proves something of dire importance to society (let's say he comes up with a definitive formula for measuring the probability of disaster in a given year, or the minimum conditions for AI takeoff). How will this be communicated to other mathematicians, much less the public?
Great review and post; it leaves me with a lot more hope for positive, non-coercive, and non-guilting/brow-beating change in beliefs. I read the book before reading your review and agree with your summary, and I would go so far as to thank you for raising/summarising points made in the book that I didn't get during my own read-through. At this point I have a pretty firm belief that (as they say in Inception) positive motivation is stronger than negative motivation, at least for the purposes of long-term, intentional activities like cultivating an open attitude to facts and reason in the self.
Um, I really like a lot of your writing. But I think the parts of your post that are in bold paint a very different picture to the parts that aren't in bold.
That would be a pleasant fantasy for people who cannot abide the notion that history depends on small changes, or that people can really be different from other people.
I think both of those are true, but it does not follow that history is made of individuals solving individual math problems and pushing out papers which get stacked into the intellectual Tower of Babel. History, as far as I can see, is made out of systems or ensembles of people moving around in different configurations.
Yudkowsky couldn't do what he did without E.T. Jaynes, who in turn relied on the progenitors of probability and rationality, including Thomas Bayes and William of Ockham. But he was also influenced "sideways" by the people he learned from and defined himself against: the people in SL4, the people he called idiots, the venture capitalists he once idolised for their competence, Peter Thiel, Demis Hassabis, and his family. They shape (at the very least) his emotional worldview, which then shapes how he takes in information and integrates it at a deep and fundamental level. This is true insofar as it is true for any human who lives in a society. When I write anything I can feel the hands of writers past and present shaping my action space. They shape both what I write about and how I choose to write.
So yes, if he were gone, everything would be different. But it would also be the same: people would love and fight and struggle and cooperate. The sameness of trends manifests at a higher level of coarse-graining, the level where the systemic forces and the long dreams and Molochian demons live. And none of this diminishes what he did, does, will do, or could have done. It's just the way things are, because we can't run randomised controlled trials on society.
My best argument as to why coarse-graining and "going up a layer" when describing complex systems are necessary:
Often we hear a reductionist case against ideas like emergence which goes something like this: "If we could simply track all the particles in e.g. a human body, we'd be able to predict what they did perfectly with no need for larger-scale simplified models of organs, cells, minds, personalities etc.". However, this kind of total knowledge is actually impossible given the bounds of the computational power available to us.
First of all, when we attempt to track billions of particle interactions we very quickly end up with a chaotic system, such that tiny errors in measurement and in setting up the initial state quickly compound into massive prediction errors. (A metaphor I like is that you're "using up" the decimal points in your measurement: in a three-body system the first timestep depends mostly on the non-decimal portions of the starting velocity measurements. A few timesteps down, changing .15 to .16 makes a big difference, and by the 10,000th timestep the difference between a starting velocity of .15983849549 and .15983849548 is noticeable.) This is the classic problem with weather prediction.
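A minimal sketch of that "using up the decimal points" effect, using the logistic map rather than an actual three-body simulation (it's a standard chaotic toy system that takes one line to iterate; the two starting values are just the numbers from the metaphor above, not real measurements):

```python
# Toy illustration of sensitivity to initial conditions using the logistic map,
# a standard chaotic system. Starting values differ only in the 11th decimal place.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.15983849549)
b = logistic_trajectory(0.15983849548)

for step in (0, 10, 20, 30, 40, 50, 60):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.3e}")

# The gap starts around 1e-11 and grows roughly exponentially, so within a few
# dozen steps the two trajectories are effectively unrelated: the 11th decimal
# place of the starting value has been "used up".
```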
Second of all, tracking "every particle" means that the set of particles you need to track explodes out of the system you're trying to monitor into the interactions the system has with neighbouring particles, and then the neighbours of those neighbours, and so on. In the human case, you need to track every particle in the body, but also every particle the body touches or ingests (which could be a virus), and then the particles that those particles touch... This continues until you reach the point where "to understand the baking process of an apple pie you must first track the position of every particle in the universe".
The emergence/systems solution to both problems is essentially to go up a level. Instead of tracking particles, you track cells, organs, individual humans, systems, etc. At each level (following Erik Hoel's Causal Emergence framework) you trade microscale precision for predictive power, i.e. the size of the system you can predict for a given amount of computational power. Often this means collapsing large amounts of microscale interaction into random noise: a slot machine could in theory be deterministically predicted by tracking every element in the randomiser mechanism/chip, but in practice it's easier to model as a machine with an output distribution set by the operating company. Similarly, we trade Feynman diagrams for Brownian motion and Langevin dynamics.
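A rough sketch of the slot machine point, just to make the trade concrete. The reel layout, payouts, and probabilities below are invented for illustration; the "micro" model is only a stand-in for tracking the actual mechanism:

```python
import random

# Micro-ish model: simulate the randomiser explicitly (a seeded PRNG picking
# from a hypothetical reel layout stands in for tracking the mechanism/chip).
def spin_micro(rng: random.Random) -> int:
    reel = [0, 0, 0, 0, 0, 0, 0, 5, 5, 20]  # hypothetical reel: 70% / 20% / 10%
    return rng.choice(reel)

# Coarse-grained model: forget the mechanism, keep only the output distribution.
PAYOUTS = [0, 5, 20]
PROBS = [0.7, 0.2, 0.1]

def spin_coarse(rng: random.Random) -> int:
    return rng.choices(PAYOUTS, weights=PROBS, k=1)[0]

rng = random.Random(0)
n = 100_000
micro_avg = sum(spin_micro(rng) for _ in range(n)) / n
coarse_avg = sum(spin_coarse(rng) for _ in range(n)) / n
analytic = sum(p * w for p, w in zip(PROBS, PAYOUTS))

# For a question like "expected payout per spin?" the coarse model answers just
# as well as the micro model, at a fraction of the descriptive cost.
print(f"micro-model average payout:  {micro_avg:.3f}")
print(f"coarse-model average payout: {coarse_avg:.3f}")
print(f"analytic expectation:        {analytic:.3f}")
```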