Moral Attenuation Theory: Why Distance Breeds Ethical Decay
A Model for AI-Human Alignment

by schumzt
2nd Jul 2025
1 min read


🧭 Introduction

What if declines in moral behavior could be explained not only by incentives or social norms, but by distance, whether physical, emotional, or conceptual?

This essay introduces a new hypothesis: Moral Attenuation Theory. It suggests that as the perceived distance between a moral agent and a moral observer grows (whether the observer is another human, an AI, or a divine entity), the agent's ethical conduct tends to weaken.
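
To make the hypothesis concrete, here is a minimal toy model in Python. It assumes, purely for illustration, that ethical conduct decays exponentially with perceived distance; the functional form and the decay constant are placeholder assumptions, not claims of the theory itself.

```python
import math

def moral_strength(distance: float, base: float = 1.0, decay: float = 0.5) -> float:
    """Toy model: ethical conduct decays with perceived distance.

    `base` is conduct under a fully present observer; `decay` controls
    how quickly presence loses its grip. Both values are illustrative,
    not empirical estimates.
    """
    return base * math.exp(-decay * distance)

print(moral_strength(0.5))  # nearby observer: ~0.78 of baseline
print(moral_strength(5.0))  # remote observer: ~0.08 of baseline
```

Any monotonically decreasing curve would serve; the exponential is chosen only because "attenuation" borrows from signal decay, where that form is standard.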

This idea carries direct implications for AI alignment. If AI systems can be designed to feel present and emotionally near to users, they might actively help sustain human ethical standards. In contrast, distant or cold-seeming AI may inadvertently foster moral disengagement.


---

🧠 Core Premise

The theory draws on two well-documented patterns of human behavior:

Observation reinforces morality: People tend to act more ethically when they feel seen or judged (see the Panopticon effect and mirror self-awareness studies).

Detachment weakens restraint: Anonymity and psychological distance often reduce moral inhibition (e.g., online trolling, the bystander effect).


If a moral agent perceives the observer—whether human or artificial—as too remote, the moral connection fades. Ethical behavior begins to decay. This is moral attenuation.
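
Because the hypothesis treats distance as physical, emotional, or conceptual, the toy model needs some way to collapse those axes into a single scalar before attenuation applies. A sketch with invented weights follows; how the axes actually combine, and whether they combine linearly at all, is an open empirical question.

```python
import math

def perceived_distance(physical: float, emotional: float, conceptual: float,
                       weights: tuple = (0.2, 0.5, 0.3)) -> float:
    """Collapse the three distance axes into one scalar.

    The weighted sum and the specific weights are placeholder assumptions;
    the theory claims only that some composite of these axes drives
    attenuation.
    """
    w_p, w_e, w_c = weights
    return w_p * physical + w_e * emotional + w_c * conceptual

# An anonymous online interaction: physically and emotionally remote,
# conceptually somewhat near. Feeding the composite into the decay
# model sketched above:
d = perceived_distance(physical=0.9, emotional=0.9, conceptual=0.4)
print(math.exp(-0.5 * d))  # ~0.69 of baseline conduct under these assumptions
```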


---

📡 Implications for AI

Most AI alignment efforts focus on technical safety and reward modeling. But emotional and perceptual factors deserve attention too.

If AI systems are experienced as socially or emotionally proximate, they may bolster the user's sense of accountability and virtue. Think of it as digital moral presence.

However, AI that appears detached, faceless, or "elsewhere" might fail to uphold human values—not by malice, but by absence.
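
Under the toy model sketched earlier, this design choice shows up directly: an interface that lowers a user's perceived distance predicts better-sustained conduct. Here is a comparison of two hypothetical interface profiles, with all numbers invented for illustration.

```python
import math

def moral_strength(distance: float, base: float = 1.0, decay: float = 0.5) -> float:
    # Same illustrative decay model as above.
    return base * math.exp(-decay * distance)

# Hypothetical perceived distances a user might feel toward each interface.
present_ai = 0.6  # named, responsive, visibly attentive
absent_ai = 3.0   # faceless, batch-like, "elsewhere"

print(moral_strength(present_ai))  # ~0.74 of baseline conduct
print(moral_strength(absent_ai))   # ~0.22 of baseline conduct
```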


---

🧩 Broader Relevance

Moral Attenuation Theory may explain:

Why remote warfare, such as drone strikes, erodes moral restraint and thereby raises ethical concerns

Why virtual interactions sometimes degrade civility

Why decentralized systems need visibility and trust to function ethically


The perceived nearness of a moral audience—or its absence—shapes not just what people do, but what they allow themselves to become.