For readers who need the opposite advice: I don't think the things people get hangry about are random, just disproportionate. If you're someone who suppresses negative emotions or is too conflict-averse or lives in freeze response, notice what kinds of things you get upset about while hangry: there's a good chance they bother you under normal circumstances too, and you're just not aware of it.
Similar to how standard advice is don't grocery shop while hungry, but I wouldn't buy enough otherwise.
You should probably eat before doing anything about hangry thoughts though.
Unless you've observed that you tend to unendorsedly let things slide once you're fed. In that case, better do something about the problem while you're hangry.
Linking to the Reverse All Advice post is itself a way to label an action collaborative. Without it, I risk coming off as thinking the original author made a mistake or should have explicitly addressed my point.
This rhymes with how one treats feature recommendations from users. It is typically the case that a user advising you to make a change does indeed have a problem when using your product that they're trying to solve, and you should figure out what that problem is, but their account of how to solve it (what 'improvement' to make) is usually worth throwing out the window.
This is also very, very true in UX design (and all similar fields such as print design, etc.).
Edit: This is why “I didn’t like X” or “X seems ugly” or “I have a hard time reading X” is extremely valuable feedback, and any designer is always happy to hear it. On the other hand, “X is designed wrong because [criticism of specific design decision]” is basically worthless feedback, and almost never helps in any way.
Edit 2: Note that the above is the opposite of what people’s intuitions tell them constitutes valuable feedback. Non-designers often think that “I didn’t like it” or “it’s ugly” is unhelpful, and they try to be more helpful by making specific criticisms (like “the text is justified; it shouldn’t be”). Coming from a layperson, this attempt to be helpful is actually the diametric opposite of an improvement, turning useful feedback into useless advice.
Edit 3: The most useful feedback is the kind that tells me what specific problem you are experiencing. The subjective nature of the feedback is important!
(And, really, the rest of the comments on the post “Incorrect hypotheses point to correct observations”, as well as the post itself. Highly relevant!)
I don't think this stance is as rare as you think. My partner (who doesn't care for rationalism in general and has never met a rationalist other than (I guess) me) regularly says things like "[general wrath] oh wait my period is starting, that's probably why I'm raging, nevermind" and "have you considered that you're only being depressive about [side project] because [main job] is going badly?".
I will admit that selecting on "people who are in a relationship with me" is a pretty strong filter. Overall I'm hopeful for this social tech to become more common.
(In fact, now that I think about it, were the "you're being hysterical, dear" comments of old actually sometimes a version of this, as opposed to being, as is often now assumed, abhorrent levels of sexism?)
Agreed, I don't think it's actually that rare. The rare part is the common knowledge and normalization, which makes it so much easier to raise as a hypothesis in the heat of the moment.
The rare part is the common knowledge and normalization
Trying to suggest that someone else's bad mood might be caused by their period would be considered by most people horribly sexist. So you can only hope that they might notice it themselves... or very gently and non-specifically point towards the general idea of hangriness and hope that they can connect the dots...
And this is more likely to work if the concept is a frequently used common knowledge.
My stance towards emotions is to treat them as abstract "sensory organs" – because that's what they are, in a fairly real sense. Much like the inputs coming from the standard sensory organs, you can't always blindly trust the data coming from them. Something which looks like a cat at a glance may not be a cat, and a context in which anger seems justified may not actually be a context in which anger is justified. So it's a useful input to take into account, but you also have to have a model of those sensory organs' flaws and the perceptual illusions they're prone to.
(Staring at a bright lamp for a while and then looking away would overlay a visual artefact onto your vision that doesn't correspond to anything in reality, and if someone shines a narrow flashlight in your eye, you might end up under the impression someone threw a flashbang into the room. Similarly, the "emotional" sensory organs can end up reporting completely inaccurate information in response to some stimuli.)
Another frame is to treat emotions as heuristics – again, because that's largely what they are. And much like any other rules of thumb, they're sometimes inapplicable or produce incorrect results, so one must build a model of how and when they work, and be careful about trusting them.
The "semantic claims" frame in this post is also very useful, though, and indeed makes some statements about emotions easier to express than in the sensory-organs or heuristics frames. Kudos!
Another example of this pattern that's entered mainstream awareness is tilt. When I'm playing chess and get tilted, I might think things like "all my opponents are cheating," "I'm terrible at this game and therefore stupid," or "I know I'm going to win this time, how could I not win against such a low-rated opponent." But if I take a step back, notice that I'm tilted, and ask myself what information I'm getting from the feeling of being tilted, I notice that it's telling me to take a break until I can stop obsessing over the result of the previous game.
Tilt is common, but also easy to fix once you notice the pattern of what it's telling you and start taking breaks when you experience it. The word "tilt" is another instance of a hangriness-type stance that's caught on because of its strong practical benefits--having access to the word "tilt" makes it easier to notice.
This strikes a chord with me. Another maybe similar concept that I use internally is "fried". Don't know if others have it too, or if it has a different name. The idea is that when I'm drawing, or making music, or writing text, there comes a point where my mind is "fried". It's a subtle feeling but I've learned to catch it. After that point, continuing working on the same thing is counterproductive, it leads to circles and making the thing worse. So it's best to stop quickly and switch to something else. Then, if my mind didn't spend too long in the "fried" state, recovery can be quite quick and I can go back to the thing later in the day.
The most likely etymology I'm aware of is via pinball, where a pinball machine would disable its controls and drain the ball if it detected that something was applying too much unexpected physical force to the machine. Anecdotally, the generalization of that to the “losing one's ability to make controlled plays after winding up in an unusual, agitating situation” sort of meaning later made its way into poker, which is how at least one LW-popular personality is definitely familiar with it. From Zvi's “Book Review: On The Edge: The Gamblers”:
Phil’s entire strategy is based on using a pattern designed to provoke the opponent into playing exploitatively or drive them into various forms of tilt. And then, through decades of experience, to know all the different ways players respond to that, and what to do about each one. Its seemingly obvious weaknesses and patterns are a feature.
and:
...Sports betting took more emotional bandwidth than I expected. I don’t think I went on tilt or became addicted to sports betting at any point—but sometimes it’s hard to know. (3793)
I most definitely did go on tilt at various times. The good news was I (mostly?) responded to that by taking a break, rather than...
Likewise, emotions have semantics; they claim things. Anger might claim to me that it was stupid or inconsiderate for someone to text me repeatedly while I’m trying to work. Excitement might claim to me that an upcoming show will be really fun. Longing might claim to young me “if only I could leave school in the middle of the day to go get ice cream, I wouldn’t feel so trapped”. Satisfaction might claim to me that my code right now is working properly, it’s doing what I wanted.
I think it's clearer to say your emotions make you claim various potentially irrational things. This is one reason rationalists become particularly scared of their emotions, even though the behaviors your emotions induce might often be adaptive. (After all, they evolved for a reason.)
Emotions can motivate irrational behavior as well as irrational claims, so even people who aren't as truth-inclined often feel the need to resist their own emotions as well, as in anger management. However, emotions are particularly good at causing you to say untrue things, hence their status as distinguished enemies of rationality.
(Edit: Or maybe our standards for truthful claims are just much higher than our default standards for rational behavior?)
There are two important things missing here:
1: You mainly advocated for solving projected negativity. Positive emotions "lie" as well, and they can cause the opposite of hangriness. If you were logically consistent and indeed wished to maximize correct information, then you'd seek to destroy excessively positive emotions as well. And I don't want to call you dishonest, but I don't think that most rationalists would destroy a state of agape or happiness just because it's "wrong". Furthermore, positive emotions have utility, even if they're wrong. This community does not seem to realize this yet, but only some ignorance and some delusion is harmful.
2: Emotions and experiences aren't one-way, but two-way. Your emotions will tell you something about the world, but what you're told about the world will affect your emotions. This leads to feedback loops. Things like being hangry just make this feedback loop more likely to go in a negative direction. Any valence in your body affects your experience of reality. If your body feels really good, then your experiences will all tend towards being pleasant. The reason I bring this up is that you're trying to solve an equation which depends on...
I entirely agree that “positive emotions ‘lie’ as well”; but I think that—often, likely usually, perhaps not “always” (though I’d have to give it some thought to be sure)—such false positive emotions are indeed dangerous and harmful, and ought to be “destroyed” (i.e., corrected).
For example, love for someone who does not deserve your love, treats you poorly, even abuses you—this is harmful, and you would be much better off to recognize the falsity of that emotion, and to bring it in line with reality. Misplaced affection, misplaced nostalgia, “rose-colored glasses”—these too are examples of “false positive emotions”. Satisfaction at a job well done, when in fact the job has been done poorly; pride, when in fact you’ve acted shamefully; anticipation of success, when in fact failure is nearly guaranteed (and the action to be taken is entirely optional); and many other examples… such positive emotions are irrational, in the literal sense of failing to systematically track reality; and they absolutely should be seen as mistakes to be corrected.
I believe that when reality and theory are in conflict, reality is the winner, even when it appears irrational. If religion wasn't a net positive, it wouldn't manifest in basically every culture to ever exist.
Are you aware that transposons are a thing? Also prions?
Memetics is similar enough to biology in this regard that, even just on priors, we should expect the existence of purely parasitic memes, beliefs which propagate without being long-term net positive for the hosts (i.e. humans). And on examination of details, that sure does seem to be the case for an awful lot of memes, especially the ideological variety.
I've seen hangriness-style advice circulating on twitter (via Zvi, so perhaps in the rationalist milieu) and tiktok (not in the rationalist milieu afaict).
If you feel like you hate everyone, eat
If you feel like everyone hates you, sleep
If you feel like you hate yourself, shower
If you feel like everyone hates everyone, go outside
If you feel overwhelmed by your thoughts, write them down
If you feel lost and alone, call a friend
If you feel stuck in the past, plan for the future
...
My rules of thumb:
When doing introspection on where your emotions come from, I think it’s important to have some sort of radical self-acceptance/courage. As a human, you might dig deep and discover parts of yourself that you might not endorse. (For example, I don’t want to X because it means I’ll lose social status.)
I think this is also another instance where some sort of high decoupling comes in handy. You want to be able to discover the truth of why you’re feeling a certain way, decoupled from judgement of like “oh man if I’m the type of person to feel X/want Y deep down, that means I’m a bad person.”
Great post!
Telling the truth is a skill that parts can get better at. Part of the skill lies with the part itself, and part is on the listener's side: not doing any gaslighting or weird avoidant stuff back at the part.
If the show turns out to be, say, the annual panto at the Palladium, then the claim was very conclusively true.
It would make sense that you would like a show put on at the LW theaters.
Promoted to curated: While I am a bit uncertain how to feel about the claims of how widespread or different the attitudes described here are among rationalists compared to the rest of the population, the explanation of the underlying emotional attitude seems very valuable to me. Indeed, I am probably somewhat of an outlier in the degree to which I find it important to have this attitude present in my social environment, and I particularly appreciate a writeup of it.
Claim that the stance in question is fairly canonical or standard for rationalists-as-a-group, modulo disclaimers about rationalists never agreeing on anything.
I don't think this claim is correct. I have not noticed this being particularly common among rationalists relative to other similar populations, nor normative.
I think it's probably unusually common among postrationalists, but those are a very different culture from rationalists, grounded primarily in not sharing any of the main assumptions common to rationalists.
Usually I also take emotions as a channel to surface unconscious preferences (either situational or longer term), which helps with making that preference conscious as well as evaluated, and thus helps with rational decisions.
It's standard for parents to have these sorts of models about their kids' emotions, e.g. "She's cranky because she didn't get her nap."
I’ve trained myself not to give too much weight to the thoughts that come bundled with certain emotions, usually because those thoughts are stupid or unhelpful, whereas I suspect the emotion itself might not be. A friend of mine (who’s a clinical psychologist) often reminds me that there’s a difference between intellectualising an emotion and actually sitting with it with the goal of feeling it fully and seeing what it has to offer. I still find that hard to do. I get why people who intellectualise their emotions (myself included) might end up going down th...
This was one of the first LessWrong posts I've made it all the way through, and I appreciated the journey you took your thoughts through. I like the underlying idea that we can extend deeper social grace when we have a) common terms and b) ready mental models for why someone is "acting out" that don't beg the question of removing them. That hangry thoughts are ephemeral and easy to resolve lends itself to tolerance. I think that's what some commenters are latching onto when they describe this as more commonly held: extending grace for extraneous personal circumstance...
This makes emotions subservient to rationality, but I think a lot of the people who complain about the rationalist approach to emotions instead see rationality as a system to generate compromises between emotions. From the latter perspective, the rationalist approach only really works with infinitesimal emotions.
I'm in agreement with the spirit of your piece written here, but I think the claim that emotions make true/false claims is not true. I think it's more reasonable to talk in terms of intentionality and to stick to the term 'information'. That is, emotion is 'about' something. I am not merely angry, but my anger is directed at particular things. We also express information about our psychological states. We then construct propositions in relation to our emotions. When one says 'emotions are telling us something', I think this is best understood metaphorica...
I think this awareness will be really helpful only when talking to family, friends... Or to people who are already used to doing some self-analysis when hangry.
Even if "being hangry" becomes much more "normalized" , a random person will still take it as an attack if you tell him that he's maybe hangry (the same way other cognitive bias are well-known but telling someone that he's being misled by confirmation bias will not be received well). People can even start building a defence mechanism to reject the " hangry remark " every time they are confronted ...
You mention "I could be wrong" as being the major load-bearing part of the response to a hangry person, especially out of a sense of maintaining emotional stability; this demonstrates that epistemic charity and humility are not only intellectual virtues, but emotional and empathic ones as well. I find that this would be the trickiest part: the balance between making someone feel heard while also providing useful feedback about what might actually help them, trying to find the middle ground between appeasement and callous criticism.
I also take this stance, but with a different framework. I acknowledge that emotions & thoughts are very different biological technologies in action. One is likely extremely older than the other in terms of evolutionary development. Given that, from my perspective, my inner monologue is just not very good at translating — so my self-awareness needs to translate the translation. To me, an emotion is never false; it’s always a valid reaction in some way or another. I just know it’s silly to agree with the first interpretation my mind makes of it.
An...
Another way to put this is “emotions as sensations vs. emotions as propositional attitudes”. (Under this framing, the thesis of the post would be “emotions are always sensations, but should not always be interpreted as propositional attitudes, because propositional attitudes should not be unstable under short-term shifts in physiological circumstances—which emotions are”.)
Telling people in the moment that the things their emotions are telling them seem false, and perhaps their emotions convey some other information… is usually not the right move unless you’re very unusually good at making people feel seen while not not telling them they’re being an idiot.
you can just get good at this with practice
Emotions Make Claims, And Their Claims Can Be True Or False
I appreciate the general thrust of this piece, but I find this aspect concerning because it fails to acknowledge that emotions (or their analogues) are likely to have evolved long before linguistics or the capability to assert and evaluate claims.
From introspection it seems possible that emotions can be triggered by non-linguistic situations (giant spider jumps on my child -> anger), and also it is possible for emotions to not cause logical claims to form... (e.g. "why am I feeling this way?")
Th...
This seems to point to a strong suspicion of mine (to humbly avoid making bold claims too early) that emotions are fundamentally rooted in physiological sensations and impulses.
Hunger is an instinct and impulse to act towards satisfying an important daily basic need. Food.
A very similar observation might be, in the experience of many people, the link between lust and anger. There are few better ways to get someone to hate you than to get in the way of them and their object of sexual arousal. Which might be because, like hunger, it's also an instinct a...
My strong hunch is that this is true for almost any form of communication (internal and external) we receive: it conveys something we can extract value from if we are able to look past the surface (propositional content) of what we immediately infer.
And how difficult it is to remain open to the possibility that my first impression of a signal is "incorrect" (I got it wrong on my first attempt), given how frequently I have used my inference (first impression), and I am still alive (adaptive value of my past choice to not question my first impression)...
The ...
they’ll need to feel seen before anything else works. And it’s very hard to make such a person feel seen without at least somewhat endorsing whatever idiocy their emotions are claiming.
I think the way of making someone feel seen, without needing to endorse what their emotions are claiming, is to reflect / validate / normalize the emotions themselves, rather than their assumed causes.
"That sounds really frustrating, it makes sense that you're upset" (as a small random example) I imagine would make someone feel seen, without needing to endorse, or even...
"That sounds really frustrating, it makes sense that you're upset" pretty heavily endorses what the upset-ness is claiming. The central examples of hangriness, for instance, are cases where it does not make sense that the person is upset, because the things happening around them do not normally sound all that frustrating (relative to the strength of their upset-ness).
In fact I do now have the freedom to get ice cream in the middle of the day, and I generally do not feel trapped, so that’s an update toward my longing’s claim being true.
I love this article & the premise!!
Small note on this section: It seems to me that this is attributing causality to your freedom to get ice cream in the middle of the day. If there were something else causing you to not feel trapped -- for example, you enjoy your work more than you enjoyed school -- couldn't it still be that your longing's claim was false?
If you hated your...
“Pretend the emotion is a person or cute animal who can talk” is a pretty great trick.
Huh. Tried this on my social media cravings.
Couldn't visualize them as an animal, but managed <a stream of energy between me and my laptop screen>. Managed to make the stream talk in my mind.
This behaved like a "talking lens" laid over my perception. As if the craving itself was live-reacting to objects on my screen while I clicked and scrolled.
Informative via making the involved needs concrete.
Good post! This is definitely the approach I use for these things, and it's one of the most frequently-useful tools in my toolkit.
People have an annoying tendency to hear the word “rationalism” and think “Spock”, despite direct exhortation against that exact interpretation. But I don’t know of any source directly describing a stance toward emotions which rationalists-as-a-group typically do endorse. The goal of this post is to explain such a stance. It’s roughly the concept of hangriness, but generalized to other emotions.
That means this post is trying to do two things at once:
1. Claim that the stance in question is fairly canonical or standard for rationalists-as-a-group, modulo disclaimers about rationalists never agreeing on anything.
2. Explain the stance itself.
Many people will no doubt disagree that the stance I describe is roughly-canonical among rationalists, and that’s a valid thing to argue about in the comments, in proportion to how well you actually know many rationalists.
When we’re hangry, it feels like people around us are doing stupid, inconsiderate, or otherwise bad things. It feels like we’re justifiably angry about those things. But then we eat, and suddenly our previous anger doesn’t feel so justified any more.
When we’re hangry, our anger is importantly wrong, or false in some sense. The feelings are telling our brain that other people are doing stupid, inconsiderate, or otherwise egregious things. And later, on reflection, we will realize that our feelings were largely wrong about that; the feelings were not really justified by the supposed wrongdoings.
But the correct response is not to dismiss or ignore the feelings! Even if the feelings “tell us false things” in some sense, those feelings still result from an important unmet need: we need food! The correct response isn’t to ignore or dismiss the anger, the correct response is to realize that the anger is mostly caused by hunger, and to go eat.
The word “hangry” conveys this whole idea in two syllables. And crucially, the existence of “hangry” as a word normalizes the phenomenon - more on that later.
I consider the word “hangry” to be one of the main ways in which mainstream society has become more sane in the past ~10 years. In a single word, it perfectly captures the stance toward emotions which I want to describe. We just need to generalize hangriness to other emotions.
The stance itself involves three main pieces:
1. Emotions make claims, and those claims can be true or false.
2. Even when an emotion’s claims are false, the emotion still conveys useful information; it’s just not the thing the emotion claims.
3. Having a word for all of this (like “hangry”) normalizes it, which is where much of the social value comes from.
Words have semantics. If someone tells me “there’s a bathroom down the hall around the corner”, then when I walk down the hall and turn the corner, I expect to see a bathroom. A physical bathroom being in that physical spot is the main semantic claim of the words.
Likewise, emotions have semantics; they claim things. Anger might claim to me that it was stupid or inconsiderate for someone to text me repeatedly while I’m trying to work. Excitement might claim to me that an upcoming show will be really fun. Longing might claim to young me “if only I could leave school in the middle of the day to go get ice cream, I wouldn’t feel so trapped”. Satisfaction might claim to me that my code right now is working properly, it’s doing what I wanted.
As with words, those semantic claims can be true or false.
If someone claims to me that there’s a bathroom down the hall around the corner, and then I go down the hall and around the corner and there’s no bathroom, I update that their claim was probably false. (Even more so if it turns out there is no corner, or possibly even no hallway.) If I go down the hall and around the corner and find a bathroom, then the claim was true.
If my anger claims to me that it was stupid or inconsiderate for someone to text me repeatedly while I’m trying to work, but on reflection I realize that I didn’t indicate I was busy and can’t reasonably expect them to guess I was busy, I update that my anger’s claim was probably false. If on reflection I have told the person many times before that texts during work hours are costly to me, then I update that my anger’s claim was probably true.
If my excitement claims to me that an upcoming show will be really fun, and the show turns out to be boring, then the claim was false. If the show turns out to be, say, the annual panto at the Palladium, then the claim was very conclusively true.
If my longing claims to young me “if only I could leave school in the middle of the day to go get ice cream, I wouldn’t feel so trapped”, and upon growing older and having the freedom to go get ice cream in the middle of the day I still feel trapped, I update that my longing’s claim was probably false. In fact I do now have the freedom to get ice cream in the middle of the day, and I generally do not feel trapped, so that’s an update toward my longing’s claim being true.
If my satisfaction claims to me that my code right now is working properly, and it turns out that an LLM simply overwrote my test code to always pass, then my satisfaction’s claim is false. If it turns out that my code is indeed working properly, then my satisfaction’s claim is true.
In general, if you want to know what an emotion is claiming, just imagine that the emotion is a person or cute animal who can talk, and ask what they say to you.
Let’s say I feel angry, so I imagine that my anger is a character named Angie and I ask them what’s up. And Angie starts off on a rant about how this shitty software library has terrible documentation and the API just isn’t doing what I expected and I’ve been at this for three fucking hours and goddammit I’m just so tired of this shit.
So, ok, Angie claims to be angry about the shitty software library. Fair enough, most software libraries are in fact hot trash. But c’mon, Angie, usually we’re not this worked up about it. What’s really going on here? And Angie pauses for a moment and is like “Man, I am just so tired.”. Perhaps what is really needed is… a break? Perhaps a nap? Perhaps a snack or some salt (both of which often alleviate tiredness)?
In a case like this, my anger is making claims about the quality of a software library. And those claims are… probably somewhat exaggerated in salience, even if not entirely false. But even insofar as the claims themselves are false, they still convey useful information. The anger may be wrong about the quality of the software library, but it still contains useful information: I’m tired. As a rough general rule, strong emotions are strong because some part of me is trying to tell me something it thinks is important… just not necessarily the thing the emotion claims.
“Pretend the emotion is a person or cute animal who can talk” is a pretty great trick. Not just for checking what they say, but for checking what they don’t say. See, lots of people have good enough social instincts to ask “Is that what’s really bothering you?” when someone else is worked up, but it’s a harder skill to pose that question to oneself. Picturing the emotion as a person or animal triggers that external perspective, makes it easier to notice that maybe the emotion is bothered by something other than what it’s saying.
But you can also just ask yourself “What’s really generating this emotion? What can I actually guess from it, setting aside the claims the emotion makes?”.
... and once one starts down that path, very often the answer turns out to be "I'm scared of X and this emotion wants to protect me from X". Often X is social disapproval of some sort (ranging from a glare to outright ostracism), or something the person has been burned by in the past. And that's why so many rationalists end up down a rabbit hole of trauma processing, or relational practices meant to make people feel loved and supported, or other borderline woo-ish things. An awful lot of those woo-ish things are optimized to deal with exactly these sorts of emotion generators.
Arguably the best thing about the word “hangry” is that its existence normalizes hangriness.
20 years ago, if some definitely-hypothetical person suggested to their hypothetical romantic partner that perhaps the partner was not angry about the thing they were ranting about, but was instead grumpy from being hungry… yeah, uh, that would normally not go over well. The partner would feel like their completely-valid(-feeling) emotions were just being brushed off, with some nonsense about being hungry.
But with the word hangry, it’s a lot easier to say “You seem maybe hangry right now, how about you have something to eat and if you still feel this way after then we can talk about it”. That doesn’t always work; it might still feel like being brushed off if someone’s worked up enough and/or sufficiently terrible at understanding their emotions. (Also people might sometimes in fact try to brush off other people’s valid emotions by hypothesizing hangriness, but that’s a trick which only delays things for like 20 minutes if one responds with the obvious test of eating something.) But it works a lot better than trying to convey the same thing before hangriness was normalized as a concept.
Alas, there’s still no word which normalizes this kind of thing more generally.
Telling people in the moment that the things their emotions are telling them seem false, and perhaps their emotions convey some other information… is usually not the right move unless you’re very unusually good at making people feel seen while not not telling them they’re being an idiot. Because, yes, someone ranting due to hangriness is being an idiot, and no, directly telling them they’re being an idiot does not help. Either they need to have already bought into the idea that a very large chunk of most peoples’ emotions claim false things but nonetheless convey useful information, or they’ll need to feel seen before anything else works. And it’s very hard to make such a person feel seen without at least somewhat endorsing whatever idiocy their emotions are claiming.
Among other rationalists, I usually expect that people are on board with the Generalized Hangriness Stance, so it’s usually ok to say something like “Look, I think you feel like X, but I suspect your feeling is in fact coming from Y rather than X. And to be clear I could be wrong here, but I think this should at least be in our hypothesis space. We can look at A, B, C as relevant evidence, and maybe try D, and if that doesn’t work then I’ll update that the feeling probably is coming from Y after all.”. Where, to be clear, the explicit “I could be wrong here” is an extremely load-bearing part of what makes this all work. Another good wording I use frequently is “I’m not sure this is true, but here’s a model of what’s going on…” ideally peppered with frequent reminders that this is a model and I’m not asserting that it’s correct.
Point is, this Stance toward emotions isn’t just individually useful. Arguably most of its value is as social tech. When most people in a space are on board with the Generalized Hangriness Stance, it becomes possible-at-all to point out to people that maybe their emotions are claiming stupid things, without that necessarily coming across as an attack on the person (and triggering defensiveness). And it then also becomes possible to help someone figure out what information their emotions actually convey, and help them with what they actually need (like e.g. eating). Some skill is still required, but it’s much more tractable when there’s common knowledge that people are on board with the Generalized Hangriness Stance.