I believe I remember Ingram sharing this same story in a different context (maybe a talk he gave or a group discussion), but the context here is interesting; thanks for sharing!
I'm happy to offer my take on what he's saying here, but I will also note that I'm slightly more uncertain about what Ingram's views/claims are after reading this.
First, I notice that the context for the quote is him critiquing the traditional account of the four-path model for implying arhats must have attained some kind of emotional perfection. (This is what he's talking about when he says "a tradition whose models of awakening contain some of the worst myths.")
Regarding the mention of this teacher and their experience, I think Ingram is mostly being slightly sloppy with the word "suffering," in the manner against which I argue in this post. In the context of the criticism of emotional range models, he seems to be pointing out merely that this teacher, who he does claim is an arhat (as quoted), is still capable of experiencing some negative emotions. Another clue can be found earlier in the linked chapter:
> It is important to note that arahants who are said to have eliminated “conceit” (in limited emotional range terms) can appear to us as arrogant and conceited, as well as restless or worried, etc. That there is no fundamental suffering in them while this is occurring is an utterly separate issue. That said, conceit in the conventional sense and the rest of life can cause all sorts of conventional suffering for arahants, just as it can for everyone else.
It's pretty clear that Ingram is making a distinction between what he's calling "fundamental suffering" and "conventional suffering," which I believe corresponds neatly with what I'm simply calling "suffering" and "pain," respectively. If I were to clarify with Ingram personally, I could simply use Buddhist terms like vedana (hedonic tone/affect) and tanha (craving/aversion, the cause of suffering). I believe he's making the claim that negative/unpleasant vedana can still arise for arhats, but that they are free of tanha and therefore free of dukkha. To my understanding, this is not in conflict with the traditional account/models (the Buddha was said to have chronic back pain, iirc, but no one claims he suffered because of it). Neither does it conflict with my own experience: without tanha, pain/displeasure (physical and emotional) still happens sometimes, just without any associated suffering.
Ingram has also told a story about getting kidney stones after awakening. I would certainly believe that was quite a painful experience, but I would be very surprised if Daniel (claiming arhatship at that time) would say that tanha arose and caused dukkha. One could claim that arhatship is not identical with a complete elimination of tanha and a complete liberation from dukkha, but I don't think that claim is reasonable, nor do I think it's what he actually believes. I think his main critique of the traditional four-path model has to do with the 'emotional perfection' stuff, e.g. the idea that arhats supposedly cannot be sexually aroused.
Anatta is not something to be achieved; it’s a characteristic of phenomena that needs to be recognized if one has not yet done so. I certainly agree that AI systems should learn/be trained to recognize this, but it’s not something you “ensure LLMs instantiate.” What you want to instantiate is a system that recognizes anatta.
I assume that phenomenal consciousness is a sub-component of the mind.
I'm not sure what is meant by this; would you mind explaining?
Also, the in-post link to the appendix is broken; it's currently linking to a private draft.
It sounds to me like a problem of not reasoning according to Occam's razor and "overfitting" a model to the available data.
Ceteris paribus, H' isn't more "fishy" than any other hypothesis, but H' is a significantly more complex hypothesis than H or ¬H: instead of asserting H or ¬H outright, it asserts (A=>H) & (B=>¬H), where A and B are the respective circumstances of Alice's and Bob's studies, so it should have been commensurately de-weighted in the prior distribution according to its complexity. The fact that Alice's study supports H and Bob's contradicts it does, in fact, increase the weight given to H' in the posterior relative to its weight in the prior; it's just that H' is prima facie less likely, according to Occam.
Given all the evidence E, the posterior odds are P(H'|E)/P(H|E) = [P(E|H')·P(H')] / [P(E|H)·P(H)]. We know P(E|H') > P(E|H) (and P(E|H') > P(E|¬H)), since the results of Alice's and Bob's studies together are more likely given H', but P(H') < P(H) (and P(H') < P(¬H)) according to the complexity prior. Whether H' ends up more likely than H (or ¬H, respectively) comes down to whether the likelihood ratio P(E|H')/P(E|H) (or P(E|H')/P(E|¬H)) is larger or smaller than the inverted prior ratio P(H)/P(H') (or P(¬H)/P(H')).
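To make the tradeoff concrete, here's a minimal sketch in Python with made-up numbers (the priors and likelihoods below are purely illustrative assumptions, not anything from the actual scenario):

```python
# Toy Bayes-rule calculation, purely illustrative (all numbers made up).
# "H"  = the simple hypothesis, "~H" = its negation,
# "Hp" = the more complex conditional hypothesis (A => H) & (B => ~H).

priors = {"H": 0.45, "~H": 0.45, "Hp": 0.10}        # complexity prior de-weights Hp
likelihoods = {"H": 0.05, "~H": 0.05, "Hp": 0.60}   # P(E|h) for E = both studies' results

# Unnormalized posteriors: P(h|E) is proportional to P(E|h) * P(h)
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in posteriors.items():
    print(f"P({h}|E) = {p:.3f}")
# -> P(H|E) = 0.214, P(~H|E) = 0.214, P(Hp|E) = 0.571
# Hp overtakes H here because its likelihood ratio (0.60/0.05 = 12)
# exceeds the inverted prior ratio (0.45/0.10 = 4.5).
```

With these particular numbers the likelihood advantage beats the complexity penalty, so H' wins; shrink its likelihood ratio below the inverted prior ratio and H keeps the lead. The point is just that the two forces pull in opposite directions, and either can dominate.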
I think it ends up feeling fishy because the people formulating H' used more features (the circumstances of the experiments) in a more complex model to account for the already-observed data after the fact, so it ends up seeming like, in selecting H' as a hypothesis, they're according it more weight than it deserves under the complexity prior.
Why should I include any non-sentient systems in my moral circle? I haven't seen a case for that before.
To me, "indecision results from sub-agential disagreement" seems almost tautological, at least within the context of multi-agent models of mind, since if all the sub-agents were in agreement, there wouldn't be any indecision. So, the question I have is: how often are disagreeing sub-agents "internalized authority figures"? I think I agree with you in that the general answer is "relatively often," although I expect a fair amount of variance between individuals.
I'd guess it's a problem of translation; I'm pretty confident the original text in Pali would just say "dukkha" there.
The Wikipedia entry for dukkha says it's commonly translated as "pain," but I'm quite sure the referent of dukkha in experience is not pain, however common that mistranslation may be.
> Say I have a strong desire to eat pizza, but only a weak craving. I have a hard time imagining what that would be like.
I think this is likely in part due to “desire” connoting both craving and preferring. In the Buddhist context, “desire” is often used more like “craving,” but on the other hand, if I have a pizza for dinner, it seems reasonable to say I did so because I desired it (in the sense of having a preference for it), even if there was not any craving involved.
I think people tend to crave what they prefer until they’ve made progress on undoing the habit of craving/aversion, so it’s understandable that it can be hard for such a person to imagine having a strong preference without an associated craving. However, the difference becomes clearer if/when one experiences intentions and preferences in the absence of craving/aversion.
Perhaps it would be informative to examine your experience of preferring in instances other than e.g. eating, where I think there is more of a tendency to crave because “you need food to survive.” For example, if you’re writing and considering two ways of articulating something, you might find you have a strong preference for one way over another, but I imagine there might be less craving in the sense of “I must have it this way, not another.” Perhaps this isn’t the best possible example, but I think careful consideration will reveal the difference in experience between “desire” in the craving sense and “desire” in the preferring sense.
ETA: Another example I thought of is selecting a song to listen to if you're listening to music—you might want to listen to one song vs. others, but not necessarily have a strong craving for it.
> Does then craving (rather than desire) frustration, or aversion realization, constitute suffering?
No, because craving something results in suffering, even if you get that which you crave, and being averse to something results in suffering, even if you avoid that to which you’re averse.
> But still, it seems to make sense to say I have an aversion to pain because I suffer from it
I think it makes more sense to say there’s an aversion to pain because pain feels bad; since suffering is not a necessary consequence of pain, it doesn’t make sense to say that you’re averse to pain because it results in suffering. The causal chain is aversion->suffering, not the other way around.
> I'd be interested if you have any other ideas for underexplored / underappreciated cause areas / intervention groups that might be worth further investigation when reevaluated via this pain vs suffering distinction?
Unfortunately, I'm not aware of much already existing in the space that I can point you toward. I'd generally be quite interested in studies that better evaluate meditation's effects on directly reducing suffering, in terms of e.g. how difficult it is for how many people to reduce their suffering by how much, but the EA community doesn't currently seem very focused on this. I am still supportive of existing organizations with a direct focus on reducing suffering; I just wanted to make the point that such organizations would do well to recognize the distinction between suffering and pain, to ensure their efforts are actually aimed at suffering and not merely pain on the margin.
Did the studies specify the dosage, frequency, and duration of use in the long-term users they studied? I would not be surprised if they showed that taking MDMA e.g. once a week for months on end causes significant damage, but I would be much more surprised if there were significant long-term deleterious effects from reasonable doses spaced months or years apart.