The Landauer limit constrains irreversible computing, not computing in general.
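For scale, the bound itself is easy to compute: erasing one bit at temperature T dissipates at least k_B·T·ln 2, and only irreversible (bit-erasing) operations have to pay it. A quick sketch, using the exact SI value of Boltzmann's constant and assuming room temperature:

```python
import math

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum energy to erase one bit: k_B * T * ln(2)."""
    k_boltzmann = 1.380649e-23  # J/K (exact in the 2019 SI)
    return k_boltzmann * temperature_kelvin * math.log(2)

# At room temperature (300 K), erasing a bit costs at least ~2.9e-21 J.
# Reversible operations are not subject to this bound.
print(landauer_limit_joules(300.0))
```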
Here's the argument I'd give for this kind of bottleneck. I haven't studied evolutionary genetics; maybe I'm thinking about it all wrong.
In the steady state, an average individual has n children over their lifetime, and just one of those n makes it to the next generation (crediting a child 1/2 to each parent). This gives log2(n) bits of error-correcting signal for pruning deleterious mutations. If the genome length, times the functional bits per base pair, times the per-base mutation rate exceeds that log2(n), then you're losing functionality with every generation.
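As a toy calculation of that threshold (all the numbers below are illustrative placeholders, not measurements of any real organism):

```python
import math

def selection_deficit(genome_bp, functional_bits_per_bp,
                      mutation_rate_per_bp, children_per_parent):
    """Functional bits lost to mutation per generation, minus the
    error-correcting signal from selection (log2 of offspring number).
    Positive => functionality erodes each generation."""
    bits_hit_by_mutation = genome_bp * functional_bits_per_bp * mutation_rate_per_bp
    selection_bits = math.log2(children_per_parent)
    return bits_hit_by_mutation - selection_bits

# Illustrative only: 3e9 bp genome, 0.1 functional bits per bp,
# 1e-8 mutations per bp per generation, 4 children per parent.
deficit = selection_deficit(3e9, 0.1, 1e-8, 4)
print(deficit)  # 3 - 2 = 1 bit/generation lost in this toy case
```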
One way for a beneficial new mutation to get out of this bind is to reduce the mutation rate. Another is to refactor the same functionality into fewer bits, freeing up bits for something new. But generically, a fitness advantage doesn't seem to change the underlying problem: the signal from purifying selection is shared by the whole genome.
An allegedly effective manual spaced-repetition system: flashcards in a shoebox with dividers. You take cards from the divider at one end and redistribute them according to how well you recall each one. I haven't tried this, but maybe I will, since notecards have some advantages over a computer at a desk or a phone.
(It turns out I was trying to remember the Leitner system, which is slightly different.)
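A minimal sketch of the box logic, assuming the common Leitner rule (correct recall promotes a card one box, a miss sends it back to the first box); the box count and the review schedule are arbitrary choices here:

```python
class LeitnerBox:
    """Cards move up a box on correct recall, back to box 0 on a miss.
    Higher boxes are reviewed less often (schedule left to the caller)."""

    def __init__(self, num_boxes=5):
        self.boxes = [[] for _ in range(num_boxes)]

    def add(self, card):
        self.boxes[0].append(card)

    def review(self, box_index, recall_fn):
        """Review every card in one box; recall_fn(card) -> bool."""
        cards, self.boxes[box_index] = self.boxes[box_index], []
        for card in cards:
            if recall_fn(card):
                dest = min(box_index + 1, len(self.boxes) - 1)
            else:
                dest = 0
            self.boxes[dest].append(card)

deck = LeitnerBox()
deck.add("what is the Landauer limit?")
deck.review(0, lambda card: True)    # recalled: card moves to box 1
print([len(b) for b in deck.boxes])  # [0, 1, 0, 0, 0]
```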
Radical Abundance is worth reading. It argues that current work is going on under other names like biomolecular engineering; that the biggest holdup is a lack of systems engineering focused on achieving strategic capabilities (such as better molecular machines for molecular manufacturing); and that we ought to be preparing for those developments. It's written in a much less exciting style than Drexler's first book.
Small correction: Law's Order is by David Friedman, the middle generation. It's an excellent book.
I had a similar reaction to the sequences. Some books that influenced me the most as a teen in the 80s: the Feynman Lectures and Drexler's Engines of Creation. Feynman modeled scientific rationality, thinking for yourself, clarity about what you don't know or aren't explaining, being willing to tackle problems... it resists a summary. Drexler had many of the same virtues, plus thinking carefully and boldly about future technology and what we might need to do in advance to steer to an acceptable outcome. (It's worth adding that a lot of people seem to have misread it as gung-ho promotion of the wonders of Tomorrowland that we could all look forward to by now, more in the style of Kurzweil. One sad consequence: Drexler seems to have become a much more guarded writer.)
Hofstadter influenced me too, and Egan and Szabo.
I'm not a physicist, but if I wanted to fuse metallic hydrogen I'd think about a really direct approach: shooting two deuterium/tritium bullets at each other at 1.5% of c (enough to overcome a Coulomb barrier of 0.1 MeV, according to Wikipedia). The most questionable part I can see is that a nucleus from one bullet could be expected to pass thousands of nuclei from the other before hitting one, and I would worry about losing too much energy to bremsstrahlung in those encounters.
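Checking the arithmetic behind that 1.5% of c (nonrelativistic, since v << c; the deuteron rest mass is a standard value, and the 0.1 MeV barrier is the figure quoted above):

```python
DEUTERON_MASS_MEV = 1875.6  # deuteron rest mass in MeV/c^2

def kinetic_energy_mev(beta, mass_mev=DEUTERON_MASS_MEV):
    """Nonrelativistic kinetic energy (1/2) m v^2, in MeV, for v = beta*c.
    Working in MeV/c^2 lets the factors of c cancel."""
    return 0.5 * mass_mev * beta**2

# Each bullet at 1.5% of c:
ke_each = kinetic_energy_mev(0.015)
print(ke_each)  # ~0.21 MeV per deuteron, already above the 0.1 MeV barrier
# Head-on, the two beams together bring ~0.42 MeV into the collision.
```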
I also reviewed some of his prototype code for a combinatorial prediction market around 10 years ago. I agree that these are promising ideas and I liked this post a lot.
Agreed. I had [this recent paper](https://ieeexplore.ieee.org/abstract/document/9325353) in mind when I raised the question.