Nice post! My main takeaway is "incentives are optimization pressures". I may have had that thought before but this tied it nicely in a bow.
Some editing suggestions/nitpicks:
The bullet point that starts with "As evidence for #3" ends with a hanging "How".
Quite recently, a lot of ideas have sort of snapped together into a coherent mindset.
I would put "for me" at the end of this. As written, it kind of reads like you're about to describe how a scientific field recently had a breakthrough.
I don't think I'm following what "Skin in the game" refers to. I know the idiom, as in "they don't have any skin in the game," but the rest of that bullet point didn't click into place for me.
We definitely optimize for something, otherwise evolution wouldn't let us be here
I think this might be confusing "being an optimizer" with "being optimized". We're definitely optimized, otherwise evolution wouldn't let us be here, but it's entirely possible for an evolutionary process to produce non-optimizers! (This feels related to the content of Risks from Learned Optimization.)
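Not load-bearing for the point, but here's a toy Python sketch of the distinction (my own illustration, not anything from the post or the paper): the search loop below is an optimizer, while the frozen lookup table it outputs is merely optimized.

```python
import random

# A toy "evolutionary" search. This loop is an optimizer: it searches over
# candidate lookup tables for one that scores well on a fixed task.
def evolve_table(generations=200, seed=0):
    rng = random.Random(seed)
    target = [0, 1, 1, 0]  # arbitrary task: match this bit pattern
    best = [rng.randint(0, 1) for _ in range(4)]
    best_score = sum(b == t for b, t in zip(best, target))
    for _ in range(generations):
        # mutate: flip each bit with probability 0.1
        child = [bit ^ (rng.random() < 0.1) for bit in best]
        score = sum(c == t for c, t in zip(child, target))
        if score >= best_score:
            best, best_score = child, score
    return best

# The search's output is just a frozen lookup table: heavily optimized,
# but it does no searching of its own. It is optimized without being an optimizer.
policy = evolve_table()
print(policy)
```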
capabilities/alignment
Might be worth explicitly saying "AI capabilities/AI alignment" for readers who aren't closely following AI alignment jargon.
Optimization processes are themselves "things that work", which means they have to be created by other optimization processes.
If you're thinking about all the optimization processes on earth, then this is basically true, but I don't think it's a fundamental fact about optimization processes. As you point out, natural selection got started from that one lucky replicator. But any place with a source of negentropy can turn into an optimization process.
Thanks! Edits made accordingly. Two notes on the stuff you mentioned that isn't just my embarrassing lack of proofreading:
Context: Quite recently, a lot of ideas have sort of snapped together into a coherent mindset for me. Ideas I was familiar with, but whose importance I didn't intuitively understand. I'm going to try and document that mindset real quick, in a way I hope will be useful to others.
Five Bullet Points
Main Implications
The biggest takeaway is: look for optimization processes. If you want to use a piece of the world (as a tool, as an ally, as evidence, as an authority to defer to, etc.), it is important to understand which functions it actually has. In general, the functions a thing is "supposed to have" can come wildly apart from what it's actually optimized to do. If you can't find a mechanism that forces a particular thing to have a particular useful property, it probably doesn't have it. Examples:
The obvious first step when looking for optimization processes: learn to recognize optimization processes. This is the key to what Yudkowsky calls an adequacy argument, which is what I've been broadly calling "hey, does this thing work the way I want it to?"
Musings