One of the nice things about realizing that you're Douglas Adams' puddle is that the nature of cosmic reality doesn't have much to do with what you have for lunch. I think if you frame things more concretely, your questions will be more answerable.
Don't you want things? What do you want?
Why you want those things has to do with your particular circumstances, but you probably believed that before, too. And if you start thinking about what you want, as opposed to what you should want, you may find that it's not a simple and unsatisfying thing, but that there's a lot of depth there. And I suspect you'll also find that... (read more)
It's an interesting comparison in a descriptive sense. To me the framing does encourage more general pattern-matching. Given the similarity, what follows? How should it change our beliefs or actions?
Do you think rationalism is comparable? The discourse on PauseAI and populism tends to center on the public's inability to come to the "right" conclusion, even when the public's preferences against AI development are strong and clear. There are a few utopian visions, a vanguardist self-conception, and a techno-optimist teleology.
Arguments of the form "group A is like bad group B in ways x, y, and z" seem weak. When the argument has merit, it's because x, y, or z is bad in itself, and then you can reduce it to "group A has property x, which is bad", which is a better way of saying it.
These examples are about paternalism, which is a property of Leninists, AI researchers, global health charities, governments, strategy consultants, civil engineers, and your mom (I checked). My preference is that paternalism should require very strong justification, especially when it overrides the preferences of others rather than helping them get what they want in a way they don't understand. I agree that this situation looks more like the bad kind of paternalism.
These are the drives listed in the "Universal Drives" section:
Each of them has utility in the singleplayer and multiplayer games we play in our lives. There are degrees of freedom in how they're implemented, but they stabilize cooperation, which has value. I don't think the word "arbitrary" is specific enough to be a crux here, but I agree the OP seems to be imagining too much convergence. Potāto potăto.
For each drive above we can ask: "does an AI need a recognizable version of that drive to saturate the environments we're likely to get soon?" I think the answer is pretty much no for each?
We have an overhang currently, where... (read more)
"The modern world isn't really designed to let polymaths thrive" means something, but society is not necessarily designed at all. Reality arbitrates what works and what doesn't, even if locally it feels like someone's in charge. The success rate of curiosity is related to whether the low-hanging fruit has been picked, the symmetries have been noticed, and the ideas have been spread. This doesn't have much to do with anyone's design.
My experience as a generalist has been searching around in the dark, against the local economic gradient, deriving things for myself, then finding my "new" ideas written up over and over in the history books. Humbling! There's no guarantee that process will... (read more)
I really like this! Thanks for writing it. In particular I like this as a description of curiosity:
Ideas moved through my head as freely as the wind. I shuffled through subjects the way some people move through rooms. Everything felt connected. I didn’t understand why I was supposed to stay in one place.
I think there's a common script where people:
1. Follow their many interests
2. Notice they're falling behind specialists in legible achievement
3. Try to specialize (now permanently behind the specialists)
4. Are legibly mediocre
Public figures are judged on outlier statements rather than average statements. They also lack control over the context in which the public learns about them. If 80% of statements from Alice on team A look good in hindsight, and 20% look bad (a toy numerical sketch follows the list):
1. Team B will distribute an attention-span-length list of bad things said by Alice.
2. Team A will distribute an attention-span-length list of good things said by Alice.
3. Viewers will set bad priors based on this biased sampling, and they're not going to listen to Alice enough to overcome this prior.
4. Team A must expend some energy to fix these priors or reduce association with Alice.
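A toy numerical sketch of that dynamic (the Beta prior, the ten-item curated list, and the sample sizes are my assumptions, not anyone's data): the viewer's estimate of Alice's hit rate starts near zero after the curated list, and only slowly climbs back toward the true 80% under the unbiased sampling they mostly won't do.

```python
import random

random.seed(0)

TRUE_GOOD_RATE = 0.8   # 80% of Alice's statements look good in hindsight
CURATED_BAD = 10       # team B's attention-span-length list of bad statements

def posterior_good_rate(good, bad, prior_good=1, prior_bad=1):
    """Beta-Bernoulli posterior mean for P(a statement looks good)."""
    return (good + prior_good) / (good + bad + prior_good + prior_bad)

# Step 3: the viewer's first exposure is the curated list of only-bad statements.
good_seen, bad_seen = 0, CURATED_BAD
print(f"after curated list only: {posterior_good_rate(good_seen, bad_seen):.2f}")

# Recovering the true rate requires unbiased sampling the viewer rarely does.
for n in (10, 50, 200):
    g, b = good_seen, bad_seen
    for _ in range(n):
        if random.random() < TRUE_GOOD_RATE:
            g += 1
        else:
            b += 1
    print(f"after {n:>3} more unbiased samples: {posterior_good_rate(g, b):.2f}")
```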
Still, I agree that there is something like a moral responsibility to self-immolate in a blinding flash of honesty. Honoring it selects for amoral people in public positions, since the honest ones burn their capital and exit, but we do often see people with resources to spare spend some of those resources on speaking their mind.
Gold diggers seem like a decent female analog of male pickup artists. Both are intentional optimizers.
This is what came to mind for me:
"But once [protein structure prediction] is solved [-ish], you'd be able to effectively go through dozens of [de novo proteins] per day for, say, $1000 [each], while previously, each one would've taken six months and $50,000."
Noticing that a description of a system from the intentional stance is isomorphic to a description from the physical stance, when you have perfect information, can help with this feeling. Otherwise, in one stance you feel like you have something magical (in a good way), and in the other you have "just" what's actually there. It's both. It's one thing (you), described in two ways. It's not that one is right and one is wrong; think of it as a unification of two frameworks, or a mapping between the two sides of the dualism you were used to previously. You're still a cool guy who likes jazz or whatever in... (read more)
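A toy version of that isomorphism (the thermostat example is mine, not the comment's): the same system described once in physical terms and once in intentional terms, with the two descriptions agreeing on every input.

```python
# Physical stance: a state-transition rule over measurable quantities.
def physical_step(temp: float, setpoint: float) -> str:
    return "heat_on" if temp < setpoint else "heat_off"

# Intentional stance: beliefs and desires that rationalize the same behavior.
def intentional_step(temp: float, setpoint: float) -> str:
    believes_too_cold = temp < setpoint  # attribute a "belief"
    wants_heat = believes_too_cold       # attribute a "desire"
    return "heat_on" if wants_heat else "heat_off"

# With perfect information the two descriptions agree on every state:
# one system, two ways of describing it.
assert all(
    physical_step(t, 20.0) == intentional_step(t, 20.0)
    for t in range(0, 41)
)
```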