Superstable proteins: A team from Nanjing University just created a protein that's 5x more resistant to unfolding than normal proteins and can withstand temperatures of 150°C. The upshot from some analysis on X seems to be:
So why is this relevant? It's basically the first step towards nanotech. Because standard proteins aren't strong enough to manipulate molecular fragments, Drexler's original guess at the path to nanotech was: regular proteins assemble crosslinked proteins, which assemble hybrid nanotech, which assembles full nanotech. At each stage the energy scale increases, and the systems become increasingly capable.
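For a rough sense of the energy ladder here (my own back-of-the-envelope numbers, not from the X threads): room-temperature thermal noise, the hydrogen bonds that hold ordinary folded proteins together, and covalent crosslinks sit at successively higher energy scales:

$$k_B T \approx 0.026\ \text{eV} \;\ll\; E_{\text{H-bond}} \approx 0.1\text{--}0.3\ \text{eV} \;\ll\; E_{\text{covalent}} \approx 3\text{--}4\ \text{eV}$$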
It's plausible to me that within 15-30 years, much more advanced enzymes in the vein of these superstable proteins will build stronger proteins, which will build hybrid systems, which will build molecular assemblers, until we have real-life Poke Balls that can print animals in 15 seconds.
Do we have a two-sentence summary of what the EAs could have done better, with hindsight? Overall it was a pretty catastrophic outcome to lose all their power on the board AND become the scapegoat, and I'm not sure what lesson to draw. Inasmuch as Ilya/Mira were less reliable allies than the board thought, maybe we should conclude that they misjudged the situation.
I was also there, and my take is that there was actually fairly little specific, technical discussion about the economics and politics of what happens post-AGI. This is mostly because it isn't anyone's job to think about these questions, and only somewhat because they're inherently hard questions. Not really sure what I would change.
In fact, the "plan" seems to be roughly the exact reverse of my preference ordering for what it should be.
Is this because it's basically a Pareto frontier trading off political capital needed against probability of causing doom, or is there some other reason?
This is well-written, enjoyable to read, and not too long, but I wish the author had called it something more intuitive like "Bootstrap Problems". Outside (and maybe even inside) the tiny Dwarf Fortress community, no one will know what an Anvil Shortage is, and it's not really a Sazen because people can understand the concept without having read this post. Overall I give it +1.
I'm giving this +1 review point despite not having been excited about it back in 2024. Last year, I and many others were in a frame where alignment plausibly needed a brilliant new idea. But since then, I've realized that execution and iteration on ideas we already have are highly valuable. Just look at how much has been done with probes and steering!
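(To gesture at what I mean by "probes", here's a minimal sketch. Everything in it is a stand-in: the activations are random and the dimensions are illustrative assumptions, not any real model or dataset.)

```python
# Minimal sketch of a linear probe (stand-in data; nothing here comes from a
# real model -- the "activations" below are random and purely illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are residual-stream activations (n_examples x d_model)
# collected while a model reads statements labeled true/false.
d_model = 64
acts = rng.normal(size=(200, d_model))
labels = rng.integers(0, 2, size=200)

# A linear probe is just a logistic regression on activations: if it separates
# the labels well, the model represents the concept (roughly) linearly.
probe = LogisticRegression(max_iter=1000).fit(acts, labels)
print("train accuracy:", probe.score(acts, labels))

# "Steering" then reuses such a direction, adding it to the residual stream at
# inference time to push the model's behavior along the concept axis.
steering_direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
```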
Ideas like this didn't match my mental picture of the "solution to alignment", and I still don't think it's in my top 5 directions, but with how fast AI safety has been growing, we can assign 10 researchers to each of 20 "neglected approaches" like this one, so it deserves +1 point.
The post has an empirical result that's sufficient to concretize the idea and show it has some validity, which is necessary. Adam Jones has written a critique. However, the only paper on this so far didn't make it into a main conference and has only 3 citations, so the impact isn't large (yet).
The thesis has been basically right over the last 18 months, and still holds. I think the only way one could have outperformed this investment approach would have been taking concentrated positions in AI stocks. Now, the case for options might be even stronger given the possibility that we're in an AI bubble: you're protected on the downside, and options are still fairly cheap (VIX is 17 as I write this).
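To make "cheap" concrete, here's a quick Black-Scholes sanity check (my own illustration, not from the original post; the strike, rate, and one-year maturity are assumptions, and I'm treating VIX as a rough proxy for implied vol):

```python
# Back-of-the-envelope: Black-Scholes price of an at-the-money one-year call,
# to show how implied volatility drives option cost. Parameters are assumptions.
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(spot: float, strike: float, vol: float, t: float = 1.0, r: float = 0.04) -> float:
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (r + vol**2 / 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    N = NormalDist().cdf
    return spot * N(d1) - strike * exp(-r * t) * N(d2)

# At IV ~17% (roughly where VIX sits), upside exposure on $100 of index
# costs about $9; in a 30%-vol regime the same call would cost about $14.
print(round(bs_call(100, 100, 0.17), 2))  # ≈ 8.8
print(round(bs_call(100, 100, 0.30), 2))  # ≈ 13.8
```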
With recent events like Nvidia's political influence and the AI super PAC, it's also looking more likely that we're heading to a capitalistic future where post-singularity wealth matters for influence. I take this seriously enough that I've been curtailing consumption now to hopefully buy a small galaxy later, or whatever it ends up buying.
This seems to hold up a year later, and I've referenced it several times, including citing it in Measuring AI Ability to Complete Long Tasks. The report's note that power availability would be a limiting factor also preceded the 2025 boom in AI-relevant energy stocks. Overall it deserves +1 point.
Every time I think of doing research in a field that's too crowded, this post puts a faint image in my head of an obnoxious guy banging a drum at the front of a marching band. This is a real issue to keep in mind, both in AI safety research and elsewhere. The number of LW posts that I remember at all two years later is pretty small, so this clearly deserves at least +1 review point.
The section on status made me pay more attention to my desire for status itself, but that's probably just me.
Thanks, this is good context. So they didn't even simulate whether it would remain biologically functional? That makes it seem less impressive.