LESSWRONG
jbkjr · Comments
jbkjr's Shortform
jbkjr · 7d

I often like to have Claude summarize longer LessWrong posts for me if I'm unsure whether I want to commit to reading the entire thing. Lately, however, I've noticed that probably 75+% of the time, it fails to fetch the page because of rate limits. Maybe LW would just be overloaded by fetches from AIs, so it must limit them? Is there any solution to this on my end besides e.g. saving the page as a PDF and uploading it manually?

Literature Review: Risks of MDMA
jbkjr · 4mo

Did the studies specify the dosage, frequency, and duration of use in the long-term users they studied? I would not be surprised if they show that taking MDMA e.g. once a week for months on end caused significant damage, but I would be much more surprised if there were significant long-term deleterious effects from reasonable doses spaced months or years apart.

Suffering Is Not Pain
jbkjr · 5mo

I think I was remembering Ingram sharing this same story in a different context (maybe a talk he gave or a group discussion), but the context here is interesting; thanks for sharing!

I'm happy to offer my take on what he's saying here, but I will also note that I'm slightly more uncertain about what Ingram's views/claims are after reading this.

First, I notice that the context for the quote is him critiquing the traditional account of the four-path model for implying arhats must have attained some kind of emotional perfection. (This is what he's talking about when he says "a tradition whose models of awakening contain some of the worst myths.")

In terms of the mention of this teacher and their experience, I mostly think that Ingram is being slightly sloppy with the use of the word "suffering," in the manner against which I argue in this post. In the context of the criticism of emotional range models, he seems to be pointing out merely that this teacher, who he does claim is an arhat (as quoted), is still capable of experiencing some negative emotions. Another clue can be found earlier in the linked chapter:

It is important to note that arahants who are said to have eliminated “conceit” (in limited emotional range terms) can appear to us as arrogant and conceited, as well as restless or worried, etc. That there is no fundamental suffering in them while this is occurring is an utterly separate issue. That said, conceit in the conventional sense and the rest of life can cause all sorts of conventional suffering for arahants, just as it can for everyone else.

It's pretty clear that Ingram is making a distinction between what he's calling "fundamental suffering" and "conventional suffering," which I believe corresponds neatly with what I'm simply calling "suffering" and "pain," respectively. If I were to clarify with Ingram personally, I could simply use Buddhist terms like vedana (hedonic tone/affect) and tanha (craving/aversion, the cause of suffering). I believe he's making the claim that negative/unpleasant vedana can still arise for arhats, but they are free of tanha, free of dukkha. To my understanding, this is not in conflict with the traditional account/models (the Buddha was said to have chronic back pain, iirc, but no one claims he suffered for it). Neither does it conflict with my own experience: without tanha, pain/displeasure (physical and emotional) still happens sometimes, just without any associated suffering.

Ingram has also told a story about getting kidney stones after awakening. I would certainly believe that was quite a painful experience, but I would be very surprised if Daniel (claiming arhatship at that time) would say that tanha arose and caused dukkha. One could try to claim that arhatship is not identical with a complete elimination of tanha and a complete liberation from dukkha, but I don't think that position is reasonable, nor do I think it's what he actually believes or claims. I think his main critique of the traditional four-path model has to do with the 'emotional perfection' stuff, e.g. the idea that arhats are supposedly incapable of being sexually aroused.

No-self as an alignment target
jbkjr · 6mo

Anatta is not something to be achieved; it’s a characteristic of phenomena that needs to be recognized if one has not yet. Certainly agree that AI systems should learn/be trained to recognize this, but it’s not something you “ensure LLMs instantiate.” What you want to instantiate is a system that recognizes anatta.

Is the mind a program?
jbkjr · 9mo

I assume that phenomenal consciousness is a sub-component of the mind.

I'm not sure what is meant by this; would you mind explaining?

Also, the in-post link to the appendix is broken; it's currently linking to a private draft.

MichaelDickens's Shortform
jbkjr · 1y

It sounds to me like a problem of not reasoning according to Occam's razor and "overfitting" a model to the available data.

Ceteris paribus, H' isn't more "fishy" than any other hypothesis, but H' is a significantly more complex hypothesis than H or ¬H: instead of asserting H or ¬H, it asserts (A=>H) & (B=>¬H), so it should have been commensurately de-weighted in the prior distribution according to its complexity. The fact that Alice's study supports H and Bob's contradicts it does, in fact, increase the weight given to H' in the posterior relative to its weight in the prior; it's just that H' is prima facie less likely, according to Occam.

Given all the evidence E, the posterior ratio is P(H'|E)/P(H|E) = [P(E|H') P(H')] / [P(E|H) P(H)]. We know P(E|H') > P(E|H) (and P(E|H') > P(E|¬H)), since the combined results of Alice's and Bob's studies are more likely given H', but P(H') < P(H) (and P(H') < P(¬H)) according to the complexity prior. Whether H' ends up more likely than H (or ¬H, respectively) comes down to whether the likelihood ratio P(E|H')/P(E|H) (or P(E|H')/P(E|¬H)) is larger or smaller than the inverse prior ratio P(H)/P(H') (or P(¬H)/P(H')).

I think it ends up feeling fishy because the people formulating H' used more features (the circumstances of the experiments) in a more complex model to fit the already-observed data after the fact, so in selecting H' as a hypothesis, they accord it more weight than it deserves under the complexity prior.
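To make the trade-off concrete, here is a minimal Python sketch. All the numbers (likelihoods and priors) are illustrative assumptions, not anything from the comment: H' fits the mixed evidence E better than H or ¬H, but its complexity-penalized prior can still leave it less probable in the posterior.

```python
# E = "Alice's study supports H; Bob's study contradicts it".
# All probabilities below are made-up, for illustration only.

# Likelihood of the mixed evidence under each hypothesis:
likelihood = {
    "H":      0.20,  # mixed results are somewhat surprising if H is simply true
    "not_H":  0.20,  # ...or simply false
    "H_cond": 0.90,  # H' = (A=>H) & (B=>not-H) predicts exactly this split
}

# Complexity-penalized prior: H' asserts a conjunction of two conditionals,
# so Occam's razor assigns it less prior mass than the simpler hypotheses.
prior = {"H": 0.46, "not_H": 0.46, "H_cond": 0.08}

# Posterior by Bayes' theorem, normalizing over the three hypotheses:
evidence = sum(likelihood[h] * prior[h] for h in prior)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}

# H' gains weight relative to its prior (0.08 -> ~0.281), yet it remains
# less probable than H (~0.359), because the likelihood ratio of 4.5 does
# not overcome the inverse prior ratio of 0.46/0.08 = 5.75.
print(posterior)

# The posterior odds equal the likelihood ratio times the prior odds:
odds = (likelihood["H_cond"] / likelihood["H"]) * (prior["H_cond"] / prior["H"])
assert abs(odds - posterior["H_cond"] / posterior["H"]) < 1e-12
```

With these particular numbers the evidence triples H''s weight, but not by enough; changing 0.90 to something higher, or penalizing H' less, flips the ordering, which is exactly the "larger or smaller than the inverse prior ratio" question.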

Shortform
jbkjr · 1y

Why should I include any non-sentient systems in my moral circle? I haven't seen a case for that before.

Indecision and internalized authority figures
jbkjr · 1y

To me, "indecision results from sub-agential disagreement" seems almost tautological, at least within the context of multi-agent models of mind, since if all the sub-agents were in agreement, there wouldn't be any indecision. So, the question I have is: how often are disagreeing sub-agents "internalized authority figures"? I think I agree with you in that the general answer is "relatively often," although I expect a fair amount of variance between individuals.

Suffering Is Not Pain
jbkjr · 1y

I'd guess it's a problem of translation; I'm pretty confident the original text in Pali would just say "dukkha" there.

The Wikipedia entry for dukkha says it's commonly translated as "pain," but I'm very sure the referent of dukkha in experience is not pain, even if it's mistranslated as such, however commonly.

Posts
jbkjr's Shortform (7d)
Suffering Is Not Pain (1y)
Question about Lewis' counterfactual theory of causation (1y)
Why I stopped working on AI safety (1y)
Integrating Three Models of (Human) Cognition (4y)
Grokking the Intentional Stance (4y)
Discussion: Objective Robustness and Inner Alignment Terminology (4y)
Empirical Observations of Objective Robustness Failures (4y)
Old post/writing on optimization daemons? (5y)
Mapping the Conceptual Territory in AI Existential Safety and Alignment (5y)