Sometimes when I bring up the subject of reprogenetics, people get uncomfortable. "So you want to do eugenics?", "This is going to lead to inequality.", "Parents are going to pressure their kids.". Each of these statements does point at a legitimate concern. But also, the person is uncomfortable, and they don't necessarily engage with counterpoints. And even if they acknowledge that their stated concern doesn't make sense, they'll still be uncomfortable—until they think of another concern to state.
This behavior is ambiguous—I don't know what underlies it in any given case. E.g. it could be that they're intent on pushing against reprogenetics regardless of the merits of the arguments they state, or it could be that they have good and true intuitions that they haven't yet explicitized. And in any case, argument and explanation are usually best. Still, I often get the impression that, fundamentally, what's actually happening in their mind is like this:
To be really clear: In many situations, doing 1–4 is straightforwardly CORRECT behavior. If there's some morally important question that you haven't thought about, but that your society apparently makes a strong judgement about, then usually you should follow that judgement until you think about it much more. In some cases 5 and 6 are at least empathizable, or even correct if there's a sufficiently repressive regime.
That said, this behavior supports a false consensus.
I wish that when someone asked me in, say, 2016, "Why are you working on decision theory?", I hadn't said "Well I think that a better understanding of decision theory would tell us what sort of agents are possible and then we can understand reflective stability and this will explain the design space of agents which will allow us to figure out what levers we have to set the values of the AI and...". Instead I wish I had said "Mainly because Yudkowsky has been working on that and it seems interesting and I know math.". (Then I could launch into that other explanation if I wanted to, as it is also true and useful.)
Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI, had several "founder effects" on the group of people working to decrease X-risk. It sort of seems like one of those founder effects was to overinvest in technical research and underinvest in "social victory", i.e. convincing everyone to not build AGI. Whose fault was that? I think it was a distributed correlated failure, caused by deference. What should we have done instead?
One example of something we could have done differently would have been to be more open to the full spectrum of avenues, even if we personally didn't feel like working on them / wouldn't have been good at working on them / didn't know how to evaluate whether they would work / were intuitively skeptical of them being doable. Another example would be to make it clearer when we are deferring to Yudkowsky or to "the community". We don't have to stop deferring to avoid this correlated failure. We just have to say that we're deferring. That way, people keep hearing "I think X, mainly because Yudkowsky thinks X", and then they can react to "Yudkowsky thinks X" rather than "everyone thinks X" (and can check whether Yudkowsky actually believes X).
Currently, most X-risk reduction resources are directed by a presumption that AGI is coming in less than a decade. I think this "consensus" is somewhat overconfident, and also somewhat unreal (i.e. it's less of a consensus than it seems). That's a very usual state of affairs, so I don't want to be too melodramatic about it, but it still has concrete bad effects. I wish people would say "I don't have additional clearly-expressible reasons to think AGI is coming very soon, that I'll defend in a debate, beyond that it seems like everyone else thinks that.". I also wish people would say "I'm actually mainly thinking that AGI is coming soon because thought leaders Alice and Bob say so.", if that's the case. Then I could critique Alice's and/or Bob's stated position, rather than taking potshots at an amorphous, unaccountable ooze.
There's a menagerie of questions we bump into in our lives. What food is safe to eat? Who should you vote for? What shape is the Earth? What effect would tariffs have on the economy? How easy is it to unify quantum mechanics and relativity? Was so-and-so generally honorable in zer private dealings? Which car rental service is good? How did https://wordpress.com/ come to be so good?? (Inkhaven brought to you by WordPress ❤️ .) What happened 50 years ago in Iran? What's happening right now in any place other than right where you are? Is genetic engineering moral? Will these socks wear out after 3 months? Should you get this vaccine? What's a reasonable price for a bike? Where should you hike? What's really going on at OpenAI? What is it dangerous to react sodium with? Is it legal to park here? When is it time to protest the government?
You can become an expert on almost any small set of these questions, such that you don't really need to defer very much to anyone else's testimony about them. But you can't become a simultaneous expert on most of the questions that you care about.
So, you have to defer to other people about many or most important questions. There are too many questions, and many important questions are complex and too hard to figure out on your own. Also, you can get by pretty well by deferring: a lot of other people have thought about those questions a lot, and often they can correctly tell you what's important to know.
But deference has several deep and important dangers.
If I'm not going to figure something out myself, how do I gracefully decay from a pure, individually witnessed understanding of the world (which was a fiction anyway), to a patchwork of half-understood pictures of the world copied imprecisely from a bunch of other people? How do we defer in a way that doesn't destroy our group epistemics, doesn't abdicate our proper responsibilities, properly informs others, coordinates on important norms and plans, and so on? How do we carve out a space for individual perspective-having without knocking out a bunch of load-bearing pillars of our ethics? How do we defer gracefully?