Arguments of the form "group A is like bad group B in ways x, y, and z" seem weak. When such an argument has merit, it's because x, y, or z is bad in itself, and then you can reduce it to "group A has property x, which is bad", which is a better way of saying it.
These examples are about paternalism, which is a property of Leninists, AI researchers, global health charities, governments, strategy consultants, civil engineers, and your mom (I checked). My preference is that paternalism should require some very strong justification, especially when it's about overriding the preferences of others, as opposed to helping them get what they want in a way they don't understand. I agree that this situation looks more like the bad kind of paternalism.
These are the drives listed in the "Universal Drives" section:
Each of them has utility in the singleplayer and multiplayer games we play in our lives. There are degrees of freedom in how they're implemented, but they stabilize cooperation, which has value. I don't think the word "arbitrary" is specific enough to be a crux here, but I agree OP seems to be imagining too much convergence. Potāto potăto.
For each drive above we can ask "does an AI need a recognizable version of that value to saturate the environments we're likely to get soon?" I think the answer is pretty much no for each?
We have an overhang currently, where humans have some deontological-ish tendencies to cooperate even where it's not locally optimal. We're exploitable. This works well when we're the only players in the game, but collapses when flexible, selfish, fast replicators are introduced. I was surprised to see "Integrating humans" as the final section of the talk. I think we're dead in these worlds, and all of the interesting cooperation happens after the bad players are outcompeted.
Also, hi Charlie! We met a few months ago. I am the guy who asked "how's that going?" when you mentioned you're working on value learning, and you suggested reading your goodhart series. I've been reading your stuff.
"The modern world isn't really designed to let polymaths thrive" means something, but society is not necessarily designed at all. Reality arbitrates what works and what doesn't, even if locally it feels like someone's in charge. The success rate of curiosity is related to whether the low-hanging fruit has been picked, the symmetries have been noticed, and the ideas have been spread. This doesn't have much to do with anyone's design.
My experience as a generalist has been searching around in the dark, against the local economic gradient, deriving things for myself, then finding my "new" ideas written up over and over in the history books. Humbling! There's no guarantee that process will yield huge economic returns before you die. What if parents know this, and want something else for their kids? Or at least want their kids to be able to step off the treadmill to the upper middle class after they know what they're giving up?
Think about what matters, try to solve problems, get better, repeat. It feels like work, because it is. One guaranteed benefit - the process of figuring things out for yourself means you're quicker at solving all the little puzzles that pop up in your life. That has enormous value. If you're willing to bet you've found an improvement in the world, send it.
FYI - scanning what I wrote, it looks pessimistic, but what I'm really pointing at is that it helps if you love the game.
I really like this! Thanks for writing it. In particular I like this as a description of curiosity:
"Ideas moved through my head as freely as the wind. I shuffled through subjects the way some people move through rooms. Everything felt connected. I didn't understand why I was supposed to stay in one place."
I think there's a common script where people:
1. Follow their many interests
2. Notice they're falling behind specialists in legible achievement
3. Try to specialize (now permanently behind the specialists)
4. Are legibly mediocre
Public figures are judged on their outlier statements rather than their average statements. They also lack control over the context in which the public learns about them. If 80% of statements from Alice on team A look good in hindsight, and 20% look bad:
1. Team B will distribute an attention-span-length list of bad things said by Alice.
2. Team A will distribute an attention-span-length list of good things said by Alice.
3. Viewers will set bad priors based on this biased sampling, and they're not going to listen to Alice enough to overcome this prior (a toy sketch after this list makes the numbers concrete).
4. Team A must expend some energy to fix these priors or reduce association with Alice.
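To put a number on point 3, here's a toy Bayesian sketch. This is my own illustration, not from the post: the 80/20 split is from above, and the ten-clip list size and Beta(1, 1) prior are made-up assumptions. A viewer who anchors on Team B's curated list starts far below Alice's true rate, and has to sit through many unbiased statements before their estimate recovers, which is exactly the attention they're unlikely to give.

```python
# Toy model (illustrative numbers only): a viewer forms a belief about
# Alice entirely from Team B's curated list, then we count how many
# unbiased statements they'd need to hear before the estimate recovers.
import random

random.seed(0)

TRUE_GOOD_RATE = 0.8      # 80% of Alice's statements look good in hindsight
CURATED_BAD_CLIPS = 10    # assumed attention-span-length list, all bad

# Beta(1, 1) prior updated on the curated clips only.
good, bad = 1, 1 + CURATED_BAD_CLIPS
print(f"posterior mean after the curated list: {good / (good + bad):.2f}")  # ~0.08

# How many unbiased statements until the posterior mean climbs past 0.5?
watched = 0
while good / (good + bad) < 0.5:
    watched += 1
    if random.random() < TRUE_GOOD_RATE:
        good += 1
    else:
        bad += 1
print(f"unbiased statements needed to get back above 0.5: {watched}")
```

Even in this generous setup, where every later statement the viewer hears is sampled fairly, it takes a dozen-plus exposures to undo ten curated clips.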
Still, I agree that there is something like a moral responsibility to self-immolate in a blinding flash of honesty. This selects for amoral people in public positions, but we do often see people with resources to spare spend some of those resources on speaking their mind.
Gold diggers seem like a decent female analog of male pickup artists. Both are intentional optimizers.
This is what came to mind for me:
"But once [protein structure prediction] is solved [-ish], you'd be able to effectively go through dozens of [de novo proteins] per day for, say, $1000 [each], while previously, each one would've taken six months and $50,000."
Hands
McDonald's of course!
Where precisely would you go in India? I can recommend some restaurants, but depending on where you start, you may not consider them worth the drive.
It's an interesting comparison in a descriptive sense. To me, the framing does encourage more general pattern-matching. Given the similarity, what follows? How should it change our beliefs or actions?
Do you think rationalism is comparable? The discourse on PauseAI and populism tends to center on the public's inability to come to the "right" conclusion, even when the public's preferences against AI development are strong and clear. There are a few utopian visions, a vanguardist self-conception, and a techno-optimist teleology.