Feedback welcome: www.admonymous.co/mo-putera
Long-time lurker (c. 2013), recent poster. I also write on the EA Forum.
Just learned about the Templeton World Charity Foundation (TWCF), which is unusual in that one of their 7 core funding areas is, explicitly, 'genius':
Genius
TWCF supports work to identify and cultivate rare cognitive geniuses whose work can bring benefits to human civilization.
In this context, geniuses are not simply those who are classified as such by psychometric tests. Rather, they are those who: (1) generate significant mathematical, scientific, technological, and spiritual discoveries and inventions that benefit humanity or have the potential to transform human civilization, and (2) show exceptional cognitive ability, especially at an early age.
Eligible projects may include research on the benefits of various attributes of geniuses to humanity, biographical studies of individual geniuses, comparisons of groups of geniuses with various levels of cognitive abilities, and projects that facilitate the spread of creative insights, discoveries, and original ideas of geniuses. Projects may also investigate genetic factors contributing to genius, and the cultural and nurturing factors that engender geniuses who contribute to such cognitive virtues as diligence, constructive thinking, and noble purposes. Ineligible projects include physical, musical, or artistic geniuses; spelling bees; geniuses with spectacular memory; and scholarships for geniuses.
Among the 613 projects they've funded so far, 7 grants come up if you search for 'genius', all between 2013 and 2018, so I'm not sure why they've stopped funding this area since. Some of the largest grants:
Yeah the "pi like you" was a reference to that passage.
Yeah, this was the source of much personal consternation when I left my operations-heavy career path in industry to explore research roles, as much as I found the latter more intrinsically exciting.
It's also always in the back of my mind w.r.t. the alignment-related work I'm most excited by, even though part of why I'm excited about it is how relatively empirically grounded it is.
Scott Aaronson made a very simple next-keypress predictor that beats almost all naive human strategies. Check out the GitHub repo here, or play against it yourself here (using Collisteru’s implementation).
I can't resist sharing this quote from Scott's blog post. I loved it the first time I read it all those years ago in Lecture 18 of his legendary Quantum Computing Since Democritus series; the whole lecture (really the whole series) is just a fun romp:
In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either "f" or "d" and would predict which key they were going to push next. It's actually very easy to write a program that will make the right prediction about 70% of the time. Most people don't really know how to type randomly. They'll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model. Even a very crude one will do well. I couldn't even beat my own program, knowing exactly how it worked. I challenged people to try this and the program was getting between 70% and 80% prediction rates. Then, we found one student that the program predicted exactly 50% of the time. We asked him what his secret was and he responded that he "just used his free will."
I wonder if he'd just memorised the first couple dozen digits of something like Chaitin's constant or e or pi like you or whatever, and just started somewhere in the middle of his memorised substring; that's what I'd have done.
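The quoted program is easy to reconstruct in spirit: keep counts of which key followed each recent context of keystrokes, and predict the majority continuation. This is a sketch of that idea, not Aaronson's actual code (his implementation and Collisteru's port are at the links above); the class name and context length are my own choices.

```python
import random
from collections import defaultdict


class KeyPredictor:
    """Predict the next 'f'/'d' keypress from a crude probabilistic model:
    for each recent context of up to k keys, count which key followed it,
    and predict the majority continuation of the longest informative context."""

    def __init__(self, k=4):
        self.k = k
        # (context_length, context_string) -> counts of the key that followed
        self.counts = defaultdict(lambda: {"f": 0, "d": 0})
        self.history = ""

    def predict(self):
        # Try the longest context first, falling back to shorter ones.
        for length in range(min(self.k, len(self.history)), 0, -1):
            ctx = self.history[-length:]
            c = self.counts[(length, ctx)]
            if c["f"] != c["d"]:
                return "f" if c["f"] > c["d"] else "d"
        return random.choice("fd")  # no signal yet: guess

    def update(self, key):
        # Record, for every context length, which key actually followed.
        for length in range(1, min(self.k, len(self.history)) + 1):
            self.counts[(length, self.history[-length:])][key] += 1
        self.history += key
```

Against a typist who alternates too often (the failure mode Scott mentions), even this crude model comfortably beats 50%: if you switch keys 80% of the time, the order-1 context alone predicts you correctly about 80% of the time.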
Mildly funny analogy by John Cutler, niche audience, illustrating a failure mode that feels personally salient to me. Here's how it begins:
Imagine if a restaurant behaved like your average product team. The kitchen is packed. Everyone is moving. Every station is busy. Prep lists are long. Meetings are constant. There is always something to do. Chopping, rearranging, documenting, planning, replating.
But plates rarely reach customers. When they do, they’re late. Or wrong. Or cold. Or oddly disconnected from what the diners said they wanted. Yet the kitchen isn’t “failing,” exactly. It never looks like a crisis. No one storms out. No one flips a table. Diners don’t riot. They just lower their expectations and stop coming back.
Inside the kitchen, though, the staff feels productive. Everyone is exhausted. Everyone is “at capacity.” Everyone can point to a dozen tasks they completed. They can even argue those tasks were important. And in isolation, many of them were.
But restaurants are not judged by how busy the kitchen is. They are judged by how consistently they deliver great food, on time, to the people who ordered it. Product development is strange because this feedback loop is muted. There is no instant revolt. A team can be unbelievably heroically busy without producing much that actually moves the needle.
That’s the trap: in software, effort is easy to generate, activity is easy to justify, and impact is surprisingly easy to avoid.
(much more at the link)
This comment ended up more interesting than I expected when I started reading it. Did you write up your progress journaling anywhere online?
I've mentioned this elsewhere — I first learned about effective altruism circa 2014 via A Modest Proposal, Scott's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I was young and impressionable when I encountered it, so I've never stopped feeling the weight of the frame of EA as duty/obligation, although its weight has lightened considerably since. I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic) since I followed a similar life arc:
I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].
There are other, more personally beneficial frames that arguably (persuasively, IMO) lead to much more long-run impact because they're sustainable. Steven Byrnes' response to a different comment seems pertinent, as does Holden Karnofsky's advice:
I think the difference between “not mattering,” “doing some good” and “doing enormous good” comes down to how you choose the job, how good at it you are, and how good your judgment is (including what risks you’re most focused on and how you model them). Going “all in” on a particular objective seems bad on these fronts: it poses risks to open-mindedness, to mental health and to good decision-making (I am speaking from observations here, not just theory).
That is, I think it’s a bad idea to try to be 100% emotionally bought into the full stakes of the most important century - I think the stakes are just too high for that to make sense for any human being.
Instead, I think the best way to handle “the fate of humanity is at stake” is probably to find a nice job and work about as hard as you’d work at another job, rather than trying to make heroic efforts to work extra hard. (I criticized heroic efforts in general here.)
I think this basic formula (working in some job that is a good fit, while having some amount of balance in your life) is what’s behind a lot of the most important positive events in history to date, and presents possibly historically large opportunities today.
That said, if you asked me to list the activities I find most joyful, I'm not sure EA-related ones would make the top five.
Eric Drexler's recent post on how concepts often "round to false" as they shed complexity and gain memetic fitness discusses a case study personal to him, that of atomically precise mass fabrication, which seems to describe a textbook 'cowpox of doubt' dynamic:
The history of the concept of atomically precise mass fabrication shows how rounding-to-false can derail an entire field of inquiry and block understanding of critical prospects.
The original proposal, developed through the 1980s and 1990s, explored prospects for using nanoscale machinery to guide chemical reactions by constraining molecular motions.[6] From a physics perspective, this isn’t exotic: Enzymes guide substrate molecules and provide favorable molecular environments to cause specific reactions; in molecular manufacturing, synthetic molecular machines would guide strongly reactive molecules to cause specific reactions. In both cases, combining specific molecules in precise ways results in atomically-precise products, and all the microscopic details are familiar.
However, in the popular press (see, for example, Scientific American[7]) building atomically precise structures became “building atom by atom”, which became “nanobots with fingers that grab and place individual atoms”, stacking them like LEGO blocks. Despite technically specific pushback (see Scientific American again[8]), the rounded version became the overwhelmingly dominant narrative.
The rounded version is impossible, chemically absurd. Atoms that form strong bonds can’t be “picked up” and “put down” — bonding follows chemical rules that aren’t like anything familiar at larger scales. Molecules have size, shape, and rigidity, but their atoms bond through electron sharing and charge distributions, not mechanical attachment.[9] Confusing constrained chemistry with fingers stacking atoms creates a cartoon that chemists rightly reject.[10]
A committee convened by the US National Academy of Sciences reviewed the actual technical analysis in 2006, finding that “The technical arguments make use of accepted scientific knowledge” and constitute a “theoretical analysis demonstrating the possibility of a class of as-yet unrealizable devices.”[11] The committee compared the work to early theoretical studies of rocket propulsion for spaceflight. Yet to this day, the perceived scope of technological possibilities has been shaped, not by physical analysis of potential manufacturing systems,[12] but by rejection of a cartoon, a mythos of swarming nanobots.[13] The episode inflicted reputational damage that facts have not repaired. But let’s change the subject. Look! A deepfake cat video!
I asked a bunch of LLMs with web search to try to name the classic mistake you're alluding to:
To be honest, these just aren't very good; they usually do better at naming half-legible vibes.