"The modern world isn't really designed to let polymaths thrive" means something, but society is not necessarily designed at all. Reality arbitrates what works and what doesn't, even if locally it feels like someone's in charge. The success rate of curiosity is related to whether the low-hanging fruit has been picked, the symmetries have been noticed, and the ideas have been spread. This doesn't have much to do with anyone's design.
My experience as a generalist has been searching around in the dark, against the local economic gradient, deriving things for myself, then finding my "new" ideas written up over and over in the history books. Humbling! There's no guarantee that process will yield huge economic returns before you die. What if parents know this, and want something else for their kids? Or at least want their kids to be able to step off the treadmill to the upper middle class after they know what they're giving up?
Think about what matters, try to solve problems, get better, repeat. It feels like work, because it is. One guaranteed benefit - the process of figuring things out for yourself means you're quicker at solving all the little puzzles that pop up in your life. That has enormous value. If you're willing to bet you've found an improvement in the world, send it.
FYI - Scanning what I wrote, it looks pessimistic, but what I'm really pointing at is that it helps if you love the game.
I really like this! Thanks for writing it. In particular I like this as a description of curiosity:
Ideas moved through my head as freely as the wind. I shuffled through subjects the way some people move through rooms. Everything felt connected. I didn’t understand why I was supposed to stay in one place.
I think there's a common script where people:
1. Follow their many interests
2. Notice they're falling behind specialists in legible achievement
3. Try to specialize (now permanently behind the specialists)
4. Are legibly mediocre
Public figures are judged on outlier statements rather than average statements. They also lack control over the context in which the public learns about them. If 80% of statements from Alice on team A look good in hindsight, and 20% look bad:
1. Team B will distribute an attention-span-length list of bad things said by Alice.
2. Team A will distribute an attention-span-length list of good things said by Alice.
3. Viewers will set bad priors based on this biased sampling, and they're not going to listen to Alice enough to overcome this prior (the sketch after this list illustrates the effect).
4. Team A must expend some energy to fix these priors or reduce association with Alice.
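Here's a minimal sketch of that sampling effect, assuming the viewer updates a simple Beta-Bernoulli model and treats the curated list as if it were a fair sample; the list length and follow-up counts are made-up illustrative numbers:

```python
import random

random.seed(0)

TRUE_GOOD_RATE = 0.8    # Alice's actual rate of good statements (the 80% above)
CURATED_BAD = 10        # attention-span-length list from Team B, all bad by construction
FOLLOW_UP = 20          # unbiased statements the viewer might later sample

# Viewer tracks "Alice says good things" with Beta(1, 1) pseudo-counts,
# updating as if everything they see were a fair sample of her statements.
good, bad = 1, 1

# Step 1: consume Team B's curated list (only bad statements).
bad += CURATED_BAD
print(f"estimate after curated list:  {good / (good + bad):.2f}")   # ~0.08

# Step 2: even unbiased follow-up exposure washes the anchor out slowly.
for _ in range(FOLLOW_UP):
    if random.random() < TRUE_GOOD_RATE:
        good += 1
    else:
        bad += 1
print(f"estimate after {FOLLOW_UP} fair samples: {good / (good + bad):.2f}")  # still well below
print(f"true rate:                    {TRUE_GOOD_RATE}")
```

The point is just that a curated, non-random list anchors the estimate far below Alice's actual rate, and attention-span budgets mean the correction in step 2 often never happens.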
Still, I agree that there is something like a moral responsibility to self-immolate in a blinding flash of honesty. This dynamic selects for amoral people in public positions, but we do often see people with resources to spare spend some of those resources on speaking their mind.
Gold diggers seem like a decent female analog of male pickup artists. Both are intentional optimizers.
This is what came to mind for me:
"But once [protein structure prediction] is solved [-ish], you'd be able to effectively go through dozens of [de novo proteins] per day for, say, $1000 [each], while previously, each one would've taken six months and $50,000."
Hands
McDonald's of course!
Where precisely would you go in India? I can recommend some restaurants, but depending on where you start, you may not consider them worth the drive.
I have not seen much written about the incentives around strategic throttling of public AI capabilities. Links would be appreciated! I've seen speculation and assumptions woven into other conversations, but haven't found a focused discussion on this specifically.
If knowledge work can be substantially automated, will this capability be shown to the public? My current expectation is no.
I think it's >90% likely that various national security folks are in touch with the heads of AI companies, 80% likely they can exert significant control over model releases via implicit or explicit incentives, and 60% likely that they would prevent or substantially delay companies from announcing the automation of big chunks of knowledge work. I expect a tacit understanding that if models which destabilize society beyond some threshold are released, the toys will be taken away. Perhaps government doesn't need to be involved, and the incentives support self-censorship to avoid regulation.
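For what it's worth, if those three estimates are read as a chain (each conditional on the previous one holding), the implied probability of the full "capabilities quietly withheld" scenario is roughly 0.9 × 0.8 × 0.6 ≈ 0.43; if they're meant as independent unconditional claims the arithmetic differs, so treat 0.43 as a rough reading rather than a precise figure.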
This predicts public model performance that lingers at "almost incredibly valuable," whether or not there is a technical barrier there, while internal capabilities advance as fast as they can. Even if this is not happening now, this mechanism seems relevant to the future.
A Google employee might object by saying "I had lunch with Steve yesterday, he is the world's leading AI researcher, and he's working on public-facing models. He's a terrible liar (we play poker on Tuesdays), and he showed me his laptop". This would be good evidence that the frontier is visible, at least to those who play poker with Steve.
There might be some hints of an artificial barrier in eval performances or scaling metrics, but it seems things are getting more opaque.
Also, I am new, and I've really been enjoying reading the discussions here!
Regarding the drives listed in the "Universal Drives" section:
Each of them has utility in the singleplayer and multiplayer games we play in our lives. There are degrees of freedom in how they're implemented, but they stabilize cooperation, which has value. I don't think the word "arbitrary" is specific enough to be a crux here, but I agree OP seems to be imagining too much convergence. Potāto potăto.
For each drive above we can ask "does an AI need a recognizable version of that value to saturate the environments we're likely to get soon?" I think the answer is pretty much no for each.
We have an overhang currently, where humans have some deontological-ish tendencies to cooperate even where it's not locally optimal. We're exploitable. This works well when we're the only players in the game, but collapses when flexible, selfish, fast replicators are introduced. I was surprised to see "Integrating humans" as the final section of the talk. I think we're dead in these worlds, and all of the interesting cooperation happens after the bad players are outcompeted.
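As a toy illustration of that exploitability claim (my own sketch, not anything from the talk): standard replicator dynamics with a prisoner's-dilemma payoff, where unconditional cooperators are perfectly stable alone but collapse once a small fraction of defectors is introduced. The payoff numbers and step sizes are arbitrary illustrative choices.

```python
# Prisoner's-dilemma payoffs with the usual ordering T > R > P > S.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def step(x_coop, dt=0.01):
    """One Euler step of replicator dynamics; x_coop is the cooperator share."""
    x_def = 1.0 - x_coop
    f_coop = R * x_coop + S * x_def        # expected payoff of a cooperator
    f_def = T * x_coop + P * x_def         # expected payoff of a defector
    f_avg = x_coop * f_coop + x_def * f_def
    return x_coop + dt * x_coop * (f_coop - f_avg)

def run(x_coop, steps=5000):
    for _ in range(steps):
        x_coop = step(x_coop)
    return x_coop

print(f"cooperators only:        {run(1.00):.3f}")  # stays at 1.000
print(f"after 1% defector entry: {run(0.99):.3f}")  # collapses toward 0.000
```

The same structure shows why the "only players in the game" regime is deceptively comfortable: nothing in the first run punishes the cooperate-by-default policy until the second population shows up.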
Also, hi Charlie! We met a few months ago. I am the guy who asked "how's that going?" when you mentioned you're working on value learning, and you suggested reading your Goodhart series. I've been reading your stuff.