I loved the original Universal Paperclips. Your game is cool! I particularly liked the small realisation that I had been unlocking each of A, G, and I separately and boosting them as ways to "level up" (can't figure out how to do spoilers, sorry).
then you can say
than you can say (I normally don't do typo comments, but in this case the typo inverts the argument that the post is making, which caused me some confusion)
I like this post! The evidence was presented well in a non-judgemental manner (neither bashing respondents nor pollsters), and it changed my mind by driving home my previously understood-but-not-intuitive knowledge of how outliers can shape means.
I think standard advice like the compliment sandwich (the formula "I liked X, I'm not sure about Y, but Z was really well done") is meant to counteract this a bit. You can also do stuff like "I really enjoyed this aspect of the idea/resonated with [...], but can I get more information about [...]".
Even if they had almost destroyed the world, the story would still not properly be about their guilt or their regret; it would be about almost destroying the world.
It is possible to not be the story's subject and still be the protagonist of one strand of it. After all, that's the only truth most people know for ~certain. It's also possible to not dramatize yourself as the Epicentre of the Immanent World-Tragedy (Woe is me! Woe is me!) and still feel like crap in a way that needs some form of processing/growth to learn to live with. Similarly, you can be well-balanced and feel some form of hope without then making yourself the Epicentre of the Redemption of the World.
I guess what I'm trying to say is that you can feel things very strongly even without distorting your world-model to make it all about your feelings (most of the time, at least).
I agree a lot with the attitude of leadership through service/recognising the work of those who support others. Not sure that I would endorse the whole "hero/sidekick" dynamic, though. In my head it's more like "a group of adventurers in a fellowship, each of whom has a different role."
Thanks for writing out something that I feel very strongly but often have trouble articulating in these spaces. The song made me tear up. Incidentally, my main character/self-representation for my aborted long-form Harry Potter fanfiction was a Hufflepuff.
Follow-up to https://vitalik.eth.limo/general/2025/11/07/galaxybrain.html
Here is a galaxy brain argument I see a lot:
"We should do [X], because people who are [bad quality] are trying to do [X] and if they succeed the consequences will be disastrous."
Usually [X] is some dual-use strategy (acquire wealth and power, lie to their audience, build or use dangerous tech) and [bad quality] is something like being reckless, malicious, psychopathic, etc. Sometimes the consequence is zero-sum (they get more power to do Bad Things relative to us, the Good People) and sometimes it is negative-sum (Bad Things will happen).
As someone pointed out in the Twitter replies to the Mechanize essay, this kind of argument basically justifies selling crack or any other immoral action, provided you can imagine a hypothetical worse person doing the same thing. See also its related cousin, "If I don't do [X] someone else will do it anyway", which (as Vitalik points out) assumes a perfectly liquid labour market that usually does not exist except in certain industries.
I leave it to the reader to furnish examples.
Yeah, the next level of the question is something like "we can prove something to a small circle of experts; now how do we communicate the reasoning and the implications to policymakers/interested parties/the public in general?"
My best argument as to why coarse-graining and "going up a layer" when describing complex systems are necessary:
Often we hear a reductionist case against ideas like emergence, which goes something like this: "If we could simply track all the particles in e.g. a human body, we'd be able to predict what they did perfectly, with no need for larger-scale simplified models of organs, cells, minds, personalities etc.". However, this kind of total knowledge is actually impossible given the bounds of the computational power available to us.
First of all, when we attempt to track billions of particle interactions we very quickly end up with a chaotic system, where tiny errors in measuring and setting up initial states compound into massive prediction errors. A metaphor I like is that you're "using up" the decimal points in your measurement: in a three-body system the first timestep depends mostly on the non-decimal portions of the starting velocity measurements, a few timesteps later changing .15 to .16 makes a big difference, and by the 10,000th timestep the difference between a starting velocity of .15983849549 and .15983849548 is noticeable. This is the classic problem with weather prediction.
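To make the "using up decimal points" metaphor concrete, here's a minimal sketch using the logistic map as a stand-in for a chaotic system (a real three-body integrator would be much longer, but shows the same effect): two trajectories whose starting values differ in the 11th decimal place blow up to order-one differences within about 40 steps.

```python
# Two trajectories of the logistic map (a standard toy chaotic system),
# started 1e-11 apart, diverge to order-one differences within ~40 steps.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.15983849549, 0.15983849548  # differ in the 11th decimal place
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:3d}: |difference| = {abs(a - b):.2e}")
```

Each step roughly doubles the gap, so every extra decimal place of measurement precision buys you only a handful of additional timesteps of prediction.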
Second of all, tracking "every particle" means that the scope of what you need to track explodes out of the system you're trying to monitor into the interactions the system has with neighbouring particles, and then the neighbours of neighbours, and so on. In the human case, you need to track every particle in the body, but also every particle the body touches or ingests (which could be a virus), and then the particles that those particles touch... This continues until you reach the point where "to understand the baking process of an apple pie you must first track the position of every particle in the universe".
The emergence/systems solution to both problems is essentially to go up a level. Instead of tracking particles, you track cells, organs, individual humans, systems, etc. At each level (following Erik Hoel's Causal Emergence framework) you trade microscale precision for predictive power, i.e. the size of the system you can predict for a given amount of computational power. Often this means collapsing large numbers of microscale interactions into random noise: a slot machine could in theory be deterministically predicted by tracking every element in the randomiser mechanism/chip, but in practice it's easier to model as a machine with an output distribution set by the operating company. Similarly, we trade Feynman diagrams for Brownian motion and Langevin dynamics.
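As a toy version of the slot-machine example (payout numbers invented for illustration, and pull_lever is my made-up name, not any real API), here's what the macroscale model looks like in code: we throw away the randomiser's internal state entirely and keep only an output distribution, which is enough to predict the quantity anyone actually cares about.

```python
import random

# Macroscale model: collapse the machine's internal randomiser mechanism
# into a single output distribution. The payout table is invented for
# illustration; a real operator would set these odds deliberately.
PAYOUTS = [(0, 0.90), (2, 0.08), (10, 0.019), (100, 0.001)]

def pull_lever():
    """One play, modelled as a draw from the payout distribution;
    no tracking of the chip's microscale state required."""
    outcomes, weights = zip(*PAYOUTS)
    return random.choices(outcomes, weights=weights)[0]

# The coarse-grained model answers the macro-level question
# (expected payout per play) at trivial computational cost.
expected_payout = sum(p * w for p, w in PAYOUTS)
print(f"Expected payout per $1 play: ${expected_payout:.2f}")
print(f"Five sample pulls: {[pull_lever() for _ in range(5)]}")
```

The microscale model would reproduce every individual pull but cost vastly more to build and run; the macroscale one gives up per-pull prediction and in exchange answers the question we actually have.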