This observation should make us notice confusion about whether AI safety recruiting pipelines are actually doing the right type of thing.
In particular, the key problem here is that people are acting on a kind of top-down partly-social motivation (towards doing stuff that the AI safety community approves of)—a motivation which then behaves coercively towards their other motivations. But as per this dialogue, such a system is pretty fragile.
A healthier approach is to prioritize cultivating traits that are robustly good—e.g. virtue, emotional health, and fundamental knowledge. I expect that people with such traits will typically benefit the world even if they're missing crucial high-level considerations like the ones described above.
For example, an "AI capabilities" researcher from a decade ago who cared much more about fundamental knowledge than about citations might well have invented mechanistic interpretability without any thought of safety or alignment. Similarly, an AI capabilities researcher at OpenAI who was sufficiently high-integrity might have whistleblown on the non-disparagement agreements even if they didn't have any "safety-aligned" motivations.
Also, AI safety researchers who have those traits won't have an attitude of "What?! Ok, fine" or "WTF! Alright you win" towards people who convince them that they're failing to achieve their goals, but rather an attitude more like "thanks for helping me". (To be clear, I'm not encouraging people to directly try to adopt a "thanks for helping me" mentality, since that's liable to create suppressed resentment, but it's still a pointer to a kind of mentality that's possible for people with sufficiently little internal conflict.) And in the ideal case, they will notice that there's something broken about their process for choosing what to work on, and rethink that in a more fundamental way (which may well lead them to conclusions similar to mine above).
In general, yes. But in this case the thing I wanted an example of was "a very distracting example", and the US left-right divide is a central example of a very distracting example.
Some agreements and disagreements:
You should probably link some posts, it's hard to discuss this so abstractly. And popular rationalist thinkers should be able to handle their posts being called mediocre (especially highly-upvoted ones).
I think there are other dynamics that are probably as important as 'renouncing antisocial desires' — in particular, something like 'blocks to perceiving aspects of vanilla sex/sexuality' (which can contribute to a desire for kink as a nearest-unblocked strategy).
This seems insightful and important!
Fixed, ty!
Good question. I learned from my last curriculum (the AGI safety fundamentals one) that I should make my curricula harder than I instinctively want to. So I included a bunch of readings that I personally took a long time to appreciate as much as I do now (e.g. Hoffman on the debtor's revolt, Yudkowsky on local validity, Sotala on beliefs as emotional strategies, Moses on The Germans in week 1). Overall I think there's at least one reading per week that would reward very deep thought. Also I'm very near (and plausibly literally on) the global Pareto frontier in how much I appreciate all of MAGA-type politics, rationalist-type analysis, and hippie-type discussion of trauma, embodied emotions, etc. I've tried to include enough of all of these in there that very few people will consistently think "okay, I get it".
Having said that, people kept recommending that I include books, and I kept telling them I couldn't because I only want to give people 20k words max of main readings per week. Given a word budget, it seems like people will learn more from reading many short essays than from a few books. But maybe that's an artifact of how I personally think (basically, I like to start as broad as possible and then triangulate my way down to specific truths), whereas other people might get more out of going deeper into fewer topics.
I do think that there's not enough depth to be really persuasive to people who go in strongly disagreeing with me on some/all of these topics. My hope is that I can at least convey that there's some shape of coherent worldview here, which people will find valuable to engage with even if they don't buy it wholesale.
The threats of losing one’s job or getting evicted are not actually very scary when you’re in healthy labor and property markets. And we’ve produced so much technological abundance over the last century that our labor and property markets should be flourishing. So insofar as those things are still scary for people today, the deeper explanation lies in why our labor and property markets aren’t very healthy, which comes back to our inability to build and our overregulation.
But also: yes, there’s a bunch of stuff in this curriculum about exploitation by elites. Somehow there’s a strange pattern, though, where a lot of the elite exploitation is extremely negative-sum: e.g. so, so much money is burned in the US healthcare system, not even transferred to elites (e.g. there are many ways in which being a doctor is miserable which you would expect a healthy system to get rid of). So I focused on paradigm examples of negative-sum problems in the intro to highlight that there’s definitely something very Pareto-suboptimal going on here.
My version of your synthesis is something like the following:
This is closer; I'd just add that I don't think activism is too different from other high-stakes domains, and I discuss it mainly because people seem to take activists more at face value than other entities. For example, I expect that law firms often pessimize their stated values (of e.g. respect for the law) but this surprises people less. More generally, when you experience a lot of internal conflict, every domain is an adversarial domain (against parts of yourself).
I think there's a connection between the following:
- storing passwords in plain text|encrypting passwords on a secure part of the disk|salting and hashing passwords
- naive reasoning|studying fallacies and biases|learning to recognise a robust world model
- utilitarianism|deontology|virtue ethics
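For concreteness, here's a minimal sketch of the last item in the first row, salting and hashing, assuming Python's standard hashlib; the helper names are just illustrative, not from any particular codebase:

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store instead of the password itself."""
    salt = os.urandom(16)  # unique per user, so identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```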
I think you lost the italics somewhere. Some comments on these analogies:
I'm taking the dialogue seriously but not literally. I don't think the actual phrases are anywhere near realistic. But the emotional tenor you capture, of people doing safety-related work that they were told was very important and then feeling frustrated by arguments that it might actually be bad, seems pretty real. Mostly I think people in B's position stop dialoguing with people in A's position, though, because it's hard for them to continue while B resents A (especially because A often resents B too).
Some examples that feel like B-A pairs to me include: people interested in "ML safety" vs people interested in agent foundations (especially back around 2018-2022); people who support Anthropic vs people who don't; OpenPhil vs Habryka; and "mainstream" rationalists vs Vassar, Taylor, etc.