High-level actions don’t screen off intent

by AnnaSalamon
11th Sep 2025

8 comments, sorted by top scoring

Mateusz Bagiński

More generally, it's not just the effect that matters (globally/holistically/long-range-ish-ly speaking), but also whatever is ~causally upstream from the effect, because that upstream structure determines what's gonna happen when the context changes those upstream things.

testingthewaters

Guessing the intention of others can also help predict future actions. "Alice who was court ordered to donate bed nets and doesn't want to go to prison" vs. "Alice the EA who has read MacAskill cover to cover" have very different projected patterns of future behaviour, especially if the context changes (e.g. the court order is completed). Notably, that doesn't stop them from both clicking the same buy button on Amazon today.

TsviBT

Conversely, contexts where high-level actions do screen off intent are hard to construct and fragile, and sometimes valuable, therefore precious. For example, you can have a tournament arbiter who enforces the rules completely and accurately. Ze might show disdain or admiration or whatnot towards some players, but if the rules are well-constructed and ze enforces them, in principle the screening-off could be near complete. (Or not, e.g. if the non-verbals nudge players to not bring up rules objections.) On the other hand, if there's even a small hole in the rules or probability of discretion in enforcement (as there usually is), the screening off is destroyed.

TsviBT

If you can make "golems" that transparently, mechanically enforce the rules, you get even more screening off. E.g. a computer-programmed game.
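
A minimal sketch of the golem idea, with hypothetical names not drawn from the comment: a rule-checker for a programmed game whose only inputs are the board and the proposed move, so nothing about who is moving, or why, can affect the ruling.

```python
# Hypothetical illustration of a rule "golem": the validator sees only the
# board state and the proposed move. There is no parameter through which the
# player's identity or intent could enter, so the ruling screens them off.

def legal_move(board: list[list[str]], row: int, col: int) -> bool:
    """Return True iff the target square exists and is currently empty."""
    in_bounds = 0 <= row < len(board) and 0 <= col < len(board[0])
    return in_bounds and board[row][col] == ""

board = [
    ["X", "", ""],
    ["", "O", ""],
    ["", "", ""],
]

# Same verdict whoever asks, and whatever their reason for choosing the move.
print(legal_move(board, 0, 1))  # True
print(legal_move(board, 1, 1))  # False: square already taken
```

As the parent comment notes, any hole in the rules or discretion in enforcement reopens the channel through which intent can matter.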

Chris Lakin

Mindreading is ubiquitous

azergante

High-level actions don’t screen off intent, consequences do.

Matt Goldenberg

Only if your ethics are purely utilitarian. 

Richard_Kennaway

All screens are leaky.

One might think “actions screen off intent”: if Alice donates $1k to bed nets, it doesn’t matter if she does it because she cares about people or because she wants to show off to her friends or whyever; the bed nets are provided either way. 

I think this is in the main not true (although it can point people toward a helpful kind of “get over yourself and take an interest in the outside world,” and although it is more plausible in the case of donations-from-a-distance than in most cases).

Human actions have micro-details that are too fine-grained for us to consciously notice or choose, and that are filled in by our low-level processes: if I apologize to someone because I’m sorry and hope they’re okay, vs. because I’d like them to stop going on about their annoying unfair complaints, many small aspects of my wording and facial expression will likely differ, in ways that are hard for me to track. I may think of both actions as “I apologized politely,” while my intent nevertheless causes predictable differences in impact.

Even in the donations-from-a-distance case, there is some of this: the organization Alice donates to may try to discern Alice’s motives, and may tailor its future actions to try to appeal to Alice and others like her, in ways that have predictably different effects depending on, e.g., whether Alice mostly wants to know/care/help or mostly wants to reinforce her current beliefs.

(This is a simple point, but I often wish to reference it, so I’m writing it up.)