I feel like trying to explain it properly would veer more into speculating-about-his-psychology than I really want to. But it doesn't seem totally inexplicable to me, and I'd imagine that an explanation might look something like:
Eric doesn't think it's his comparative advantage to answer these questions; he also sometimes experiences people raising them as distracting from the core messages he is trying to convey.
(To be clear, I'm not claiming that this is what is happening; I'm just trying to explain why it doesn't feel in-principle inexplicable to me.)
I think it's easy to locally adopt bits of Greenish perspective when one can see how they would be instrumentally useful.
The claim I'm making is that it's often a good idea to adopt bits of Greenish perspective even when you can't see how they would be instrumentally useful -- because a reasonable chunk of the time they will be instrumentally useful and you just can't see it yet.
I don't think that requires adopting Green's justifications as terminal, but it does require you to adopt some generator-of-Greenish-perspective that isn't just "Blue led me to a Greenish conclusion in this particular case".
I feel like you're rejecting the instrumental value of Green based on a particular story you've invented about why Green might be instrumentally valuable.
But ... IDK, it seems to me like a lot of the value of Green relates to being a boundedly rational actor, in a world with other minds. When I envision a world with a bunch of actors who appreciate Green and try to stay connected to that in their actions, I think they're less likely to fuck it up than a world with the same actors who otherwise disregard Green. I think they're more likely to respect Chesterton's fences (and not cause unilateralist's curse type catastrophes), and they're more likely to act in ways which provide good interfaces for other people and make it easy for others to have justified trust in what's happening.
If I instead imagine a single actor in an otherwise mindless universe, I have much less of a feeling that things will go better if they appreciate Green.
Indeed, to go a little meta and a little speculative: it kind of feels like you're making an epistemic error in this comment which I could round off as "too much Deep Atheism / lack of Green": I don't see respect for the possibility that other minds, in liking Green, might be latching onto something deeper than what you're perceiving it as; instead you seem to assume that the hypothesis you thought of is the only one worth considering.
This makes a lot of sense! I do find that the way my brain wants to fit your experience into my conception of wholesomeness is to say that you were perhaps not attending enough to the part of the whole that was your own internal experience and needs?
Oh nice, I kind of vibe with "meditation on a theme" as a description of what this post is doing and failing to do.
Overall I'm really happy with this post.
It crystallized a bunch of thoughts I'd had for a while before this, and has been useful as a conceptual building block that's fed into my general thinking about the situation with AI, and the value of accelerating tools to improve epistemics and coordination. I often find myself wanting to link people to it.
Possible weaknesses:
This was written a few months after Situational Awareness. I felt like there was kind of a missing mood in x-risk discourse around that piece, and this was an attempt to convey both the mood and something of the generators of the mood.
Since then, the mood has shifted to something that feels healthier to me. 80,000 Hours has a problem profile on extreme power concentration. At this point I mostly wouldn't link back to this post (preferring to link e.g. to more substantive research), although I might if I just really wanted to convey the mood to someone. I'm not really sure whether my article had any counterfactual responsibility for the research people have done in the interim.
I'm happy with this post. I think it captures something meta-level that is important for orienting towards doing a good job of all sorts of work, and I occasionally want to point people to it.
Most of the thoughts probably aren't super original, but for something this important I'm surprised that there isn't much more explicit discussion -- it often seems to be talked about at the level of a few sentences and regarded as a matter of taste, or something. For people who aspire to do valuable work, I'd guess it's generally worth spending a few hours a year explicitly thinking about the tradeoffs here and how to navigate them in particular situations -- and that makes it worth having at least a bit of scaffolding or general thinking about the topic.
I like this post and am glad that we wrote it.
Despite that, I feel keenly aware that it asks a lot more questions than it answers. I don't think I've got massively further in the intervening year towards having good answers to those questions. Where this thinking seems most helpful to me is as a background model that helps avoid confused assumptions when thinking about the future of AI. I do think it has impacted the way I think about AI risk, but I haven't managed to articulate that well yet (maybe in 2026 ...).
I take this to be pretty strong evidence that this is not a good article for people to start with when reading Drexler! (FWIW I valued reading it, but I'm now realising that the value I got was largely in understanding a bit better how the sweep of Eric's ideas connects, and perhaps that wouldn't have been available to me if I hadn't had the background context.)