Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: naturalist realism + virtue ethics.

Thinks longtermism rests on a false premise – that for every moral agent and every value-bearing location, an agent's spatio-temporal distance from a given value-bearing location does not factor into the value borne at that location (e.g. no matter how close or far a person is from you, that person should matter the same to you).

Thinks we should spend a lot more resources trying to delay HLMI – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet "luddite" so long as this is understood to describe someone who:

  • suspects that on net, technological progress yields diminishing marginal human flourishing
  • OR believes workers have a right to organize to defend their interests (you know – what the original Luddites were doing)
  • OR suspects that, with regard to AI, the Luddite fallacy may not be a fallacy: AI really could lead to widespread permanent technological unemployment, and that might not be a good thing
  • OR, considering the common-sense observation that societies can only adapt so quickly, suspects that excessive rates of technological change can lead to social harms, independent of how the technology is used.
    • Assuming some baseline amount of good and bad actors, we always need norms/regulations, etc. to ensure new tech is overall a net benefit for society. But excessively rapid change makes the optimal norms/regulations excessively fast-moving targets. On a deeper note, social bonds could start to fray with excessively rapid change: think of different generations or groups who adopted a given new tech at a different time/rate being unable to connect with one another, their experiences varying too greatly. Think of teachers being unable to connect with, guide, or prepare their students effectively, given that their own experience is already outdated/invalidated and will only become more so by the time their students are adults.

Subscribes to Crocker's Rules: unvarnished critical (but constructive) feedback is welcome.




While I completely agree that care should be taken if we try to slow down AI capabilities, I think you might be overreacting in this particular case. In short: I think you're making strawmen of the people you are calling "neo-luddites" (more on that term below). I'm going to heavily cite a video that made the rounds and so I think decently reflects the views of many in the visual artist community. (FWIW, I don't agree with everything this artist says but I do think it's representative). Some details you seem to have missed:

  1. I haven't heard of visual artists asking for the absolute ban of using copyrighted material in ML training data – they just think it should be opt-in and/or compensated.
  2. Visual artists draw attention to the unfair double standard between visual and audio data, which exists because the music industry has historically had tighter/more aggressive copyright law. They want the same treatment that composers/musicians get.
    1. Furthermore, I ask: is Dance Diffusion "less-aligned" than Stable Diffusion? Not clear to me how we should even evaluate that. But those data restrictions probably made Dance Diffusion more of a hassle to make (I agree with this comment in this respect).
  3. Though I imagine writers could/might react similarly to visual artists regarding use of their artistic works, I haven't heard any talk of bans on scraping the vast quantities of text data from the internet that aren't artistic works. It's a serious stretch to say the protections that are actually being called for would make text predictor models sound like "a person from over 95 years ago" or something like that.

More generally, as someone you would probably classify as a "neo-luddite," I would like to make one comment on the value of "nearly free, personalized entertainment." For reasons I don't have time to get into, I disagree with you: I don't think it is "truly massive." However, I would hope we can agree that such questions of value should be submitted to the democratic process (in the absence of a better/more fair collective decision-making process): how and to what extent we develop transformative AI (whether agentic AGI or CAIS) involves a choice about what kinds of lifestyles we think should be available to people and what kind of society we want to live in/think is best or happiest. That's a political question if ever there was one. If it's not clear to you how art generation AI might deprive some people of a lifestyle they want (e.g. being an artist), see here and here for some context. Look past the persuasive language (I recommend x1.25 or 1.5 playback) and I think you'll find some arguments worth taking seriously.

Finally, see here about the term "luddite." I agree with Zapata that the label can be frustrating and mischaracterizing given its connotation of "irrational and/or blanket technophobia." Personally, I seek to reappropriate the label and embrace it so long as it is used in one of these senses, but I'm almost certainly more radical than the many you seem to be gesturing at as "neo-luddites."

That sounds about right. The person in the second case is less morally ugly than the first. This is spot on:

the important part is the internalized motivation vs reasoning out what to do from ethical principles.


What do you mean by this though?:

(although I notice my intuition has a hard time believing the premise in the 2nd case)

You find it hard to believe someone could internalize the trait of compassion through "loving kindness meditation"? (This last I assume is a placeholder term for whatever works for making oneself more virtuous). Also, any reason you swapped the friend for a stranger? That changes the situation somewhat – in degree at least, but maybe in kind too.

I would note that, according to (my simplified) VE, it's the compassion that makes the action of visiting the stranger/friend morally right. How the compassion was acquired is another question, to be evaluated on different merits.


I'm not sure I understand your confusion, but if you want examples of when it is right to be motivated by careful principled ethical reasoning or rule-worship, here are a few:

  • for a judge, acting in their capacity as judge, it is often appropriate that they be motivated by a love of consistently respecting rules and principles
  • for policymakers, acting in their capacity as policymakers (far-removed from "the action"), it is often appropriate for them to devise and implement their policies motivated by impersonal calculations of general welfare

These are just a few, but I'm sure there are countless others. The broader point, though: engaging in this kind of principled ethical reasoning/rule-worship very often, making it a reflex, will likely result in you engaging in it when you shouldn't. When you do so involuntarily, despite preferring that you wouldn't: that's internal moral disharmony. (Therefore, ethicists of all stripes probably tend to suffer internal moral disharmony more than the average person!)

Sorry, my first reply to your comment wasn't very on point. Yes, you're getting at one of the central claims of my post.

what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling

First, I wouldn't say "mostly." I think it interferes in excessive amounts. Regarding your skepticism: we already know that calculation (a maximizer's mindset) in other contexts interferes with affective attachment and positive evaluations towards the choices made by said calculation (see references to the psych lit). Why shouldn't we expect the same thing to occur in moral situations, with the relevant "moral" affects? (In fact, depending on what you count as "moral," the research already provides evidence of this).

If your skepticism is about the sheer possibility of calculation interfering with empathy/fellow-feeling etc, then any anecdotal evidence should do. See e.g. Mill's autobiography. But also, you've never ever been in a situation where you were conflicted between doing two different things with two different people/groups, and too much back and forth made you kinda feel numb to both options in the end, just shrugging and saying "whatever, I don't care anymore, either one"? That would be an example of calculation interfering with fellow-feeling.

Some amount of this is normal and unavoidable. But one can make it worse. Whether the LW/EA community does so or not is the question in need of data – we can agree on that! See my comment below for more details.

In my view, the neural-net type of processing has different strength and weaknesses from the explicit reasoning, and they are often complementary.

Agreed. As I say in the post:

Of course cold calculated reasoning has its place, and many situations call for it. But there are many more in which being calculating is wrong.

I also mention that faking it til you make it (which relies on explicit S2 type processing) is also justified sometimes, but something one ideally dispenses with.

"moral perception" or "virtues" ...is not magic, bit also just a computation running on brains. 

Of course. But I want to highlight something you might have missed: part of the lesson of the "one thought too many" story is that sometimes explicit S2 type processing is intrinsically the wrong sort of processing for that situation: all else being equal, you would be a better person if you relied on S1 in that situation. Using S2 in that situation counted against your moral standing. Now of course, if your S1 processing is so flawed that it would have resulted in you taking a drastically worse action, then relying on S2 was overall the better thing for you to do in that moment. But, zooming out, the corollary claim here (to frame things another way) is that even if your S2 process were developed to arbitrarily high levels of accuracy in identifying and taking the right action, there would still be value left on the table because you didn't develop your S1 process. There are a few ways to cash out this idea, but the most common is to say this: developing one's character (one's disposition to feel and react a certain way when confronted with a given situation – your S1 process) in a certain way (gaining the virtues) is constitutive of human flourishing – a life without such character development is lacking. Developing one's moral reasoning (your S2 process) is also important (maybe even necessary), but not sufficient for human flourishing.

Regarding explanatory fundamentality:
I don't think your analogy is very good. When you describe mechanical phenomena using the different frameworks you mention, there is no disagreement between them about the facts. Different moral theories disagree. They posit different assumptions and get different results. There is certainly much confusion about the moral facts, but saying theorists are confused about whether they disagree with each other is to make a caricature of them. Sure, they occasionally realize they were talking past each other, but that's the exception not the rule. 

We're not going to resolve those disagreements soon, and you may not care about them, which is fine – but don't think that they don't exist. A closer analogy might be different interpretations of QM: just like most moral theorists agree on ~90% of all common sense moral judgments, QM theorists agree on the facts we can currently verify but disagree about more esoteric claims that we can't yet verify (e.g. existence of other possible worlds). I feel like I need to remind EA people (which you may or may not be) that the EA movement is unorthodox, it is radical (in some ways – not all). That sprinkle of radicalism is a consequence of unwaveringly following very specific philosophical positions to their logical limits. I'm not saying here that being unorthodox automatically means you're bad. I'm just saying: tread carefully and be prepared to course-correct.

... consequentialism judges the act of visiting a friend in hospital to be (almost certainly) good since the outcome is (almost certainly) better than not doing it. That's it. No other considerations need apply. [...] whether there exist other possible acts that were also good are irrelevant.

I don't know of any consequentialist theory that looks like that. What is the general consequentialist principle you are deploying here? Your reasoning seems very one-off. Which is fine! That's exactly what I'm advocating for! But I think we're talking past each other then. I'm criticizing Consequentialism, not just any old moral reasoning that happens to reference the consequences of one's actions (see my response to npostavs).

If our motivation is just to make our friend feel better is that okay?

Absolutely. Generally being mindful of the consequences of one's actions is not the issue: ethicists of every stripe regularly reference consequences when judging an action. Consequentialism differentiates itself by taking the evaluation of consequences to be explanatorily fundamental – that which forms the underlying principle for its unifying account of all/a broad range of normative judgments. The point that Stocker is trying to make there is (roughly) that being motivated purely by intensely principled ethical reasoning (for lack of a better description) is ugly. Ethical principles are so general, so far removed, that they misplace our affect. Here is how Stocker describes the situation (NB: his target is both DE and Consequentialism):

But now, suppose you are in a hospital, recovering from a long illness. You are very bored and restless and at loose ends when Smith comes in once again. [...] You are so effusive with your praise and thanks that he protests that he always tries to do what he thinks is his duty, what he thinks will be best. You at first think he is engaging in a polite form of self-deprecation [...]. But the more you two speak, the more clear it becomes that he was telling the literal truth: that it is not essentially because of you that he came to see you, not because you are friends, but because he thought it his duty, perhaps as a fellow Christian or Communist or whatever, or simply because he knows of no one more in need of cheering up and no one easier to cheer up.

I should make clear (as I hope I did in the post): this is not an insurmountable problem. It leads to varying degrees of self-effacement. I think some theorists handle it better than others, and I think VE handles it most coherently, but it's certainly not a fatal blow for Consequentialism or DE. It does, however, present a pitfall (internal moral disharmony) for casual readers/followers of Consequentialism. Raising awareness of that pitfall was the principal aim of my post.

Orthogonal point:
The problem is certainly not just that the sick friend feels bad. As I mention:

Pretending to care (answering your friend "because I was worried!" when in fact your motivation was to maximize the general good) is just as ugly and will exacerbate the self-harm.

But many consequentialists can account for this. They just need a theory of value that accounts for harms done that aren't known to the one harmed. Eudaimonic Consequentialism (EC) could do this easily: the friend is harmed in that they are tricked into thinking they have a true, caring friend when they don't. Having true, caring friends is a good they are being deprived of. Hedonistic Consequentialism (HC) on the other hand will have a much harder time accounting for this harm. See footnote 2.

I say this is orthogonal because both EC and HC need a way to handle internal moral disharmony – a misalignment between the reasons/justifications for an action being right and the appropriate motivation for taking that action. Prima facie HC bites the bullet, doesn't self-efface, but recommends we become walking utility calculators/rule-worshipers. EC seems to self-efface: it judges that visiting the friend is right because it maximizes general human flourishing, but warns that this justification is the wrong motivation for visiting the friend (because having such a motivation would fail to maximize general human flourishing). In other words, it tells you to stop consulting EC – forget about it for a moment – and it hopes that you have developed the right motivation prior to this situation and will draw on that instead.

Here is my prediction:

I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one's decision (or the object at the center of their decision) in such scenarios.

More specifically I predict that, above a certain threshold of engagement with the community, increased engagement with the LW/EA community correlates with an increase in the maximizer's mindset, increase in cognitive dissonance, and decrease in positive affective attachment in the aforementioned scenarios.

The hypothesis for why that correlation will be there is mostly in this section and at the end of this section.

On net, I have no doubt the LW/EA community is having a positive impact on people's moral character. That does not mean there can't exist harmful side-effects the LW/EA community produces, identifiable as weak trends among community goers that are not present among other groups. Where such side-effects exist, shouldn't they be curbed?

It’s better when we have our heart in it, and my point is that moral reasoning can help us do that.

My bad, I should have been clearer. I meant to say "isn't it better when we have our heart in it, and we can dispense with the reasoning or the rule consulting?"

I should note, you would be in good company if you answered "no." Kant believed that an action has no moral worth if it was not motivated by duty, a motivation that results from correctly reasoning about one's moral imperatives. He really did seem to think we should be reasoning about our duties all the time. I think he was mistaken.

Regarding moral deference:
I agree that moral deference as it currently stands is highly unreliable. But even if it were reliable, I don't think a world in which agents did a lot of moral deference would be ideal. The virtuous agent doesn't tell their friend "I deferred to the moral experts and they told me I should come see you."

I do emphasize the importance of having good moral authorities/exemplars help shape your character, especially when we're young and impressionable. That's not something we have much control over – when we're older, we can somewhat control who we hang around and who we look up to, but that's about it. This does emphasize the importance of being a good role model for those around us who are impressionable though!

I'm not sure if you would call it deference, but I also emphasize (following Martha Nussbaum and Susan Feagin) that engaging with good books, plays, movies, etc. is critical for practicing moral perception, with all the appropriate affect, in a safe environment. And indeed, it was a book (Marmontel's Mémoires) that helped J.S. Mill get out of his internal moral disharmony. If there are any experts here, it's the creators of these works. And if they have claim to moral expertise, it is an appropriately humble folk expertise which, imho, is just about as good as our current state-of-the-art ethicists' expertise. Where creators successfully minimize any implicit or explicit judgment of their characters/situations, they don't even offer moral folk expertise so much as give us complex, detailed scenarios to grapple with and test our intuitions (I would hold up Lolita as an example of this). That exercise in grappling with the moral details is itself healthy (something no toy "thought experiment" can replace).

Moral reasoning can of course be helpful when trying to become a better person. But it is not the only tool we have, and over-relying on it has harmful side-effects.

Regarding my critique of consequentialism:
Something I seem to be failing to do is make clear when I'm talking about theorists who develop and defend a form of Consequentialism and when I'm talking about people who have, directly or indirectly, been convinced to operate on consequentialist principles by those theorists. Call the first "consequentialist theorists" and the latter "consequentialist followers." I'm not saying followers dance around the problem of self-effacement – I don't even expect many to know what that is. It's a problem for the theorists. It's not something that's going to get resolved in a forum comment thread. I only mentioned it to explain why I was singling out Consequentialism in my post: because I happen to know consequentialist theorists struggle with this more than VE theorists. (As far as I know, DE theorists struggle with it too, and I tried to make that clear throughout the post, but I assume most of my readers are consequentialist followers and so don't really care). I also mentioned it because I think it's important for people to remember their "camp" is far from theoretically airtight.

Ultimately I encourage all of us to be pluralists about ethics – I am extremely skeptical that any one theorist has gotten it all correct. And even if they did, we wouldn't be able to tell with any certainty that they did. At the moment, all we can do is try to heed the various lessons from the various camps/theorists. All I was trying to do was pass on a lesson one hears quite loudly in the VE camp and that I suspect many in the Consequentialism camp haven't heard very often or paid much attention to.

Regarding feelings about disease far away:
I'm glad you have become concerned about these topics! I'm not sure virtue ethicists couldn't also motivate those concerns though. Random side-note: I absolutely think consequentialism is the way to go when judging public/corporate/non-profit policy. It makes no sense to judge the policy of those entities the same way we judge the actions of individual humans. The world would be a much better place if state departments, when determining where to send foreign aid, used consequentialist reasoning.

Regarding feelings toward your friend:
I'm glad to hear that moral reasoning has helped you there too! There is certainly nothing wrong with using moral reasoning to cultivate or maintain one's care for another. And some days, we just don't have the energy to muster an emotional response and the best we can do is just follow the rules/do what you know is expected of you to do even if you have no heart in it. But isn't it better when we do have our heart in it? When we can dispense with the reasoning, or the rule consulting?
