G Gordon Worley III's Shortform

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Some thoughts on Buddhist epistemology.

This risks being threatening, upsetting, and heretical within a certain point of view I commonly see expressed on LW, for reasons that will become clear if you keep reading. If that sounds like the kind of thing you don't want to read, this warning lets you make that choice without having to engage with the specifics. I don't think you will be missing out on anything if it gives you a tinge of "maybe I won't like reading this".

My mind produces a type error when people try to perform deep and precise epistemic analysis of the dharma. When they try to evaluate the truth of claims made by the dharma, this seems generally fine; but when they go deep enough to evaluate whether the dharma itself is based on something true, I get the type error.

I'm not sure what people attempting this turn up. My expectation is that their results look like noise if you aggregate over all such attempts, the reason being that the dharma is not founded on episteme.

As a quick reminder, there are at leas... (read more)

So when we talk about the dharma or justify our actions on it, it's worth noting that it is not really trying to provide consistent episteme. [...] Thus it's a strange inversion to ask the dharma for episteme-based proofs. It can't give them, nor does it try, because its episteme is not consistent and cannot be because it chooses completeness instead.

In my view, this seems like a clear failing. The fact that the dharma comes from a tradition where this has usually been the case is not an excuse for not trying to fix it.

Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with.

This is not just a question of "the dharma has to be able to justify itself"; it's also that leaving out the episteme component leaves the system impoverished, as noted e.g. here:

Recurrent training to attend to the sensate experience moment-by-moment can undermine the capacity to make meaning of experience. (The psychoanalyst Wilfred Bion d
... (read more)
G Gordon Worley III (1y): Hmm, I feel like there are multiple things going on here, but I think it hinges on this: Different traditions vary on how much to emphasize models and episteme. None of them completely ignore it, though; they only seek to keep it within its proper place. It's not that episteme is useless, only that it is not primary. You of course should include it because it's part of the world, and to deny it would lead to confusion and suffering. As you note with your first example especially, some people learn to turn off the discriminating mind rather than hold it as object, and they are worse for it because then they can't engage with it anymore. Turning it off is only something you could safely do if you really had become so enlightened that you had no shadow and would never accumulate any additional shadow, and even then it seems strange from where I stand to do that, although maybe it would make sense to me if I were in the position that it were a reasonable and safe option. So to me this reads like an objection to a position I didn't mean to take. I mean to say that episteme has a place and is useful, but it is not taken as primary to understanding; at some points Buddhist episteme will say contradictory things, and that's fine and expected, because dharma episteme is normally post hoc rather than ante hoc (though it is still expected to be rational right up until it is forced to hit a contradiction), and ante hoc is okay so long as it is then later verified via gnosis or techne.

unmediated-by-ontology knowledge of reality.

I think this is a confused concept, related to wrong-way-reduction.

G Gordon Worley III (1y): I've thought about this a bit, and I don't see a way through to the thinking that makes you suggest this: I don't see a reduction happening here, much less one moving towards bundling together confusion that only looks simpler. Can you say a bit more that might make your perspective on this clearer?
romeostevensit (1y): In particular, I think under this formulation knowledge and ontology largely refer to the same thing. Which is part of the reason I think this formulation is mistaken. Separately, I think 'reality' has too many moving parts to be useful for the role it's being used for here. [https://www.xenodochy.org/gs/multiordinal.html]
G Gordon Worley III (1y): Maybe, although there is a not very clear distinction I'm trying to make between knowledge and ontological knowledge; maybe it's not coming across. If it is, and you have some particular argument for why there isn't or can't be such a meaningful distinction, I'd be interested to hear it. As for my model of reality having too many moving parts, you're right: I'm not totally unconfused about everything yet, and that's the place the remaining confusion lives.
Chris_Leong (1y): I agree with Kaj Sotala and Viliam that episteme is underweighted in Buddhism, but thanks for explicating that worldview.
Viliam (1y): The "unmediated contact via the senses" can only give you sensory inputs. Everything else contains interpretation. That means you can only have "gnosis" about things like [red], [warm], etc. Including a lot of interesting stuff about your inner state, of course, but still fundamentally of the type [feeling this], [thinking that], and perhaps some usually-unknown-to-non-Buddhists [X-ing Y], etc. Poetically speaking, these are the "atoms of experience". (Some people would probably say "qualia".) But some interpretation needs to come to build molecules out of these atoms. Without interpretation, you could barely distinguish between a cat and a warm pillow... which IMHO is a bit insufficient for a supposedly supreme knowledge.
romeostevensit (10mo): It's even worse than that, 'raw' sensory inputs already have ontological commitments. Those priors inform all our interpretations pre-consciously. Agree that the efficiency of various representations in the context of coherent intents is a good lens.
Ouroborus (10mo): Could you clarify the distinction between techne and gnosis? Is it something like playing around with a hammer and seeing how it works?
G Gordon Worley III (10mo): It's not a very firm distinction, but techne is knowledge from doing, so I would consider playing with a hammer a way to develop techne. It certainly overlaps with the concept of gnosis, which is a bit more general and includes knowledge from direct experience that doesn't involve "doing", like the kind of knowledge you gain from observing. But the act of observing is a kind of thing you do, so as you can see it's fuzzy; generally I think of techne as that which involves your body moving.
hamnox (1y): I am glad for having read this, but can't formulate my thoughts super clearly. Just have this vague sense that you're using too many groundless words and not connecting to the few threads of gnosis(?) that other rationalists would have available.

If an organism is a thing that organizes, then a thing that optimizes is an optimism.

This is a short post to register my kudos to LWers for being consistently pretty good at helping each other find answers to questions, or at least make progress towards answers. I feel like I've used LW numerous times to make progress on work by saying "here's what I got, here's where I'm confused, what do you think?", whether that be through formal question posts or regular posts that are open ended. Some personal examples that come to mind: recent, older, another.

Praise to the LW community!

People often talk of unconditional love, but they implicitly mean unconditional love for or towards someone or something, like a child, parent, or spouse. But this kind of love is by definition conditional because it is love conditioned on the target being identified as a particular thing within the lover's ontology.

True unconditional love is without condition, and it cannot be directed because to direct is to condition and choose. Unconditional love is love of all, of everything and all of reality even when not understood as a thing.

Such love is rare, so it seems worth pursuing the arduous cultivation of it.

Dagon (7mo): "love" is poorly-defined enough that it always depends on context. Often, "unconditional love" _is_ expected to be conditional on identity, and really should be called "precommitment against abandonment" or "unconditional support". But neither of those signal the strength of the intent and safety conferred by the relationship very well. I _really_ like your expansion into non-identity, though. Love for the real state of the universe, and the simultaneous desire to pick better futures and acceptance of whichever future actually obtains is a mindset I strive for.
G Gordon Worley III (7mo): This is the hidden half of what got me thinking about this: my growing being with the world as it is rather than as I understand it.
Raemon (7mo): I have a blog post upcoming called ‘Unconditional Love Integration Test: Hitler’.

I think it's safe to say that many LW readers don't feel like spirituality is a big part of their life, yet many (probably most) people do experience a thing that goes by many names---the inner light, Buddha-nature, shunyata, God---and falls under the heading of "spirituality". If you're not sure what I'm talking about, it will seem as if I'm pointing to a common human experience you aren't having.

Only, I don't think you're not having it, you just don't realize you are having those experiences.

One way some people get in touch with this thing, which I like to think of as "the source" and "naturalness" and might describe as the silently illuminated wellspring, is with drugs, especially psychedelics but really any drug that gets you to either reduce activity of the default-mode network or at least notice its operation and stop identifying with it (dissociatives may function like this). In this light, I think of drug users as very spiritual people, only they are unfortunately doing it in a way that is often destructive to their bodies and causes headlessness (causes them to fail to perceive reality accurately and so may act ... (read more)

Only, I don't think you're not having it, you just don't realize you are having those experiences.

The mentality that lies behind a statement like that seems to me to be pretty dangerous. This is isomorphic to "I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest."

Sometimes that's *true.* Let's not forget that. Sometimes you *are* the most perceptive one in the room.

But I think it's a good and common standard to be skeptical of (and even hostile toward) such claims (because such claims routinely lead to unjustified and not-backed-by-reality dismissal and belittlement and marginalization of the "blind" by the "seer"), unless they come along with concrete justification:

  • Here are the observations that led me to claim that all people do in fact experience X, in direct contradiction of individuals claiming otherwise; here's why I think I'm correct to ignore/erase those people's experience.
  • Here are my causal explanations of why and how people would become blindspotted on X, so that it's not just a blanket assertion and so that peo
... (read more)
Ben Pace (1y): Yeah, I think there's a subtle distinction. While it's often correct to believe things that you have a hard time communicating explicitly (e.g. most of my actual world model at any given time), the claim that there's something definitely true but that in-principle I can't persuade you of and also can't explain to you, especially when used by a group of people to coordinate around resources, is often functioning as a coordination flag and not as a description of reality.
Raemon (1y): Just wanted to note that I am thinking about this exchange, hope to chime in at some point. I'm not sure whether I'm on the same page as Ben about it. May take a couple days to have time to respond in full.
Raemon (1y): Just a quick update: the mod team just chatted a bunch about this thread. There’s a few different things going on. It’ll probably be another day before a mod follows up here.
Ben Pace (1y) [Moderator Comment]:

[Mod note] I thought for a while about how shortform interacts with moderation here. When Ray initially wrote the shortform announcement post, he described the features, goals, and advice for using it, but didn’t mention moderation. Let me follow-up by saying: You’re welcome and encouraged to enforce whatever moderation guidelines you choose to set on shortform, using tools like comment removal, user bans, and such. As a reminder, see the FAQ section on moderation for instructions on how to use the mod tools. Do whatever you want to help you think your thoughts here in shortform and feel comfortable doing so.

Some background thoughts on this: In other places on the internet, being blocked locks you out of the communal conversation, but there are two factors that make it pretty different here. Firstly, banning someone from a post on LW means they can’t reply to the content they’re banned from, but it doesn’t hide your content from them or their content from you. And secondly, everyone here on LessWrong has a common frontpage where the main conversation happens - the shortform is a low-key place and a relatively unimportant part of the conversation. (You can be banned from posts on fr... (read more)

G Gordon Worley III (1y): Sure, this is shortform. I'm not trying very hard to make a complete argument to defend my thoughts, just putting them out there. There is no norm that I must always abide by everywhere to present the best (for some notion of best) version of my reasons for things I claim, least of all, I think, in this space as opposed to, say, in a frontpage post. Thus it feels to me a bit out of place to object in this way here, sort of like objecting that my fridge poetry is not very good or my shower singing is off key. Now, your point is well taken, but I also generally choose to simply not be willing to cross more than a small amount of inferential distance in my writing (mostly because I think slowly and it requires significant time and effort for me to chain back far enough to be clear to successively wider audiences), since I often think of it as leaving breadcrumbs for those who might be nearby rather than leading people a long way towards a conclusion. I trust people to think things through for themselves and agree with me or not as their reason dictates. Yes, this means I am often quite distanced from easily verifying the most complex models I have, but such seems to be the nature of complex models that I don't even have complete in my own mind yet, much less complete in a way that I could lay them out precisely such that they could be precisely verified point by point. This perhaps makes me frustratingly inscrutable about my most exciting claims to those with the least similar priors, but I view it as a tradeoff for aiming to better explain more of the world to myself and those much like me, at the expense of failing to make those models legible enough for those insufficiently similar to me to verify them. Maybe my circumstances will change enough that one day I'll make a much different tradeoff?
Duncan_Sabien (1y): This response missed my crux. What I'm objecting to isn't the shortform, but the fundamental presumptuousness inherent in declaring that you know better than everyone else what they're experiencing, *particularly* in the context of spirituality, where you self-describe as more advanced than most people. To take a group of people (LWers) who largely say "nah, that stuff you're on is sketchy and fake" and say "aha, actually, I secretly know that you're in my domain of expertise and don't even know it!" is a recipe for all sorts of bad stuff. Like, "not only am I *not* on some sketchy fake stuff, I'm actually superior to my naysayers by very virtue of the fact that they don't recognize what I'm pointing at! Their very objection is evidence that I see more clearly than they do!" I'm pouring a lot into your words, but the point isn't that your words carried all that so much as that they COULD carry all that, in a motte-and-bailey sort of way. The way you're saying stuff opens the door to abuse, both social and epistemic. My objection wasn't actually a call for you to give more explanation. It was me saying "cut it out," while at the same time acknowledging that one COULD, in principle, make the same claim in a justified fashion, if they cared to.
G Gordon Worley III (1y): Note: what follows responds literally to what you said. I'm suspicious enough that my interpretation is correct that I'll respond based on it, but I'm open to the possibility this was meant more metaphorically and I've misunderstood your intention. Ah, but that's not up to you, at least not here. You are welcome to dislike what I say, claim or argue that I am dangerous in some way, downvote me, flag my posts, etc. BUT it's not up to you to enforce a norm here, to the best of my knowledge, even if it's what you would like to do. Sorry if that is uncharacteristically harsh and direct of me, but if that was your motivation, I think it important to say I don't recognize you as having the authority to do that in this space, consider it a violation of my commenting guidelines, and will delete future comments that attempt to do the same.

Hey Gordon, let me see if I understand your model of this thread. I’ll write mine and can you tell me if it matches your understanding?

  • You write a post giving your rough understanding of a commonly discussed topic that many are confused by
  • Duncan objects to a framing sentence that he claims means “I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest." because it seems inappropriate and dangerous in this domain (spirituality)
  • You say “Dude, I’m just getting some quick thoughts off my chest, and it’s hard to explain everything”
  • Duncan says you aren’t responding to him properly - he does not believe this is a disagreement but a norm-violation
  • You say that Duncan is not welcome to prosecute norm violations on your wall unless they are norms that you support
G Gordon Worley III (1y): Yes, that matches my own reading of how the interaction progressed, modulo any misunderstanding I have of Duncan's intent.

*nods* Then I suppose I feel confused by your final response.

If I imagine writing a shortform post and someone said it was:

  • Very rude to another member of the community
  • Endorsing a study that failed to replicate
  • Lied about an experience of mine
  • Tried to unfairly change a narrative so that I was given more status

I would often be like “No, you’re wrong” or maybe “I actually stand by it and intended to be rude” or “Thanks, that’s fair, I’ll edit”. I can also imagine times where the commenter is needlessly aggressive and uncooperative where I’d just strong downvote and ignore.

But I’m confused by saying “you’re not allowed to tell me off for norm-violations on my shortform”. To apply that principle more concretely, it could say “you’re not allowed to tell me off for lying on my shortform”.

My actual model of you feels a bit confused by Duncan’s claim or something, and wants to fight back against being attacked for something you don’t see as problematic. Like, it feels presumptuous of Duncan to walk into your post and hold you to what feels mostly like high standards of explanation, and you want to (rightly) say that he’s not allowed to do that.

Does that all seem right?

G Gordon Worley III (1y): Yes. To add to this: what I'm most strongly reacting to is not what he says he's doing explicitly, which I'm fine with, but what further conversation suggests he is trying to do: to act as norm enforcer rather than as norm-enforcement recommender.
Duncan_Sabien (1y): I explicitly reject Gordon's assertions about my intentions as false, and ask (ASK, not demand) that he justify (i.e. offer cruxes) or withdraw them.
G Gordon Worley III (1y): I cannot adequately do that here because it relies on information you conveyed to me in a non-public conversation. I accept that you say that's not what you're doing, and I am happy to concede that your internal experience of yourself as you experience it tells you that you are doing what you are doing, but I now believe that my explanation better describes why you are doing what you are doing than the explanation you are able to generate to explain your own actions. The best I can offer is that I believe you have said things that are better explained by an intent to enforce norms rather than to argue for norms and imply that the general case should be applied in this specific case. I would say the main lines of evidence revolve around how I interpret your turns of phrase, how I read your tone (confrontational and defensive), what aspects of things I have said you have chosen to respond to, how you have directed the conversation, and my general model of human psychology with the specifics you are giving me filled in. Certainly I may be mistaken in this case, and I am reasoning off circumstantial evidence, which is not a great situation to be in, but you have pushed me hard enough here and elsewhere that it has made me feel it is necessary to act to serve the purpose of supporting the conversation norms I prefer in the places you have engaged me. I would actually really like this conversation to end because it is not serving anything I value, other than that I believe not responding would simply allow what I dislike to continue and be subtly accepted, and I am somewhat enjoying the opportunity to engage in ways I don't normally so I can benefit from the new experience.
Duncan_Sabien (1y): I note for the record that the above is strong evidence that Gordon was not just throwing an offhand turn of phrase in his original post; he does and will regularly decide that he knows better than other people what's going on in those other people's heads. The thing I was worried about, and attempting to shine a light on, was not in my imagination; it's a move that Gordon endorses, on reflection, and it's the sort of thing that, historically, made the broader culture take forever to recognize e.g. the existence of people without visual imagery, or the existence of episodics, or the existence of bisexuals, or any number of other human experiences that are marginalized by confident projection. I'm comfortable with just leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move." Personally, I find it unjustifiable and morally abhorrent. Gordon clearly does not. Maybe that's the crux.

[He] does and will regularly decide that he knows better than other people what's going on in those other people's heads. [...] Personally, I find it unjustifiable and morally abhorrent.

How can it be morally abhorrent? It's an epistemic issue. Factual errors often lead to bad consequences, but that doesn't make those errors moral errors. A moral error is an error about a moral fact, an assignment of value to situations, as opposed to a prediction of what's going on. And what someone thinks is a factual question, not a question of assigning value to an event.

Wei_Dai (1y): Things that are morally abhorrent are not necessarily moral errors. For example, I can find wildlife suffering morally abhorrent, but there are obviously no moral errors or any kind of errors being committed there. Given that the dictionary defines abhorrent as "inspiring disgust and loathing; repugnant", I think "I find X morally abhorrent" just means "my moral system considers X to be very wrong or to have very low value."
Vladimir_Nesov (1y): That's one way for my comment to be wrong, as in "Systematic recurrence of preventable epistemic errors is morally abhorrent." When I was writing the comment, I was thinking of another way it's wrong: given the morality vs. axiology distinction [https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/], and the distinction between belief and disclosure of that belief, it might well be the case that it's a useful moral principle to avoid declaring beliefs about what others think, especially when those others disagree with the declarations. In that case it's a violation of this principle, a moral wrong, to declare such beliefs. (A principle like this gets in the way of honesty, so promoting it is contentious and shouldn't be an implicit background assumption. And the distinction between belief and its declaration was not clearly made in the above discussion.)
Duncan_Sabien (1y): I find it morally abhorrent because, when not justified and made-cruxy (i.e. when done the only way I've ever seen Gordon do it), it's tantamount to trying to erase another person/another person's experience, and (as noted in my first objection) it often leads, in practice, to socially manipulative dismissiveness and marginalization that's not backed by reality.

So it's a moral principle under the belief vs. declaration distinction (as in this comment). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on entirely different level than a norm to avoid their declarations).

Personally I don't think the norm about declarations is, on net, a good thing, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important but should be covered by a more specialized norm that doesn't cause as much collateral damage.

Duncan_Sabien (1y): I'm not sure I'm exactly responding to what you want me to respond to, but: It seems to me that a declaration like "I think this is true of other people in spite of their claims to the contrary; I'm not even sure if I could justify why? But for right now, that's just the state of what's in my head" is not objectionable/doesn't trigger the alarm I was trying to raise. Because even though it fails to offer cruxes or detail, it at least signals that it's not A STATEMENT ABOUT THE TRUE STATE OF THE UNIVERSE, or something? Like, it's self-aware about being a belief that may or may not match reality? Which makes me re-evaluate my response to Gordon's OP and admit that I could have probably offered the word "think" something like 20% more charity, on the same grounds, though on net I still am glad that I spelled out the objection in public (like, the objection now seems to me to apply a little less, but not all the way down to "oops, the objection was fundamentally inappropriate").
Vladimir_Nesov (1y): (By "belief" I meant a belief that takes place in someone's head, and its existence is not necessarily communicated to anyone else. So an uttered statement "I think X" is a declaration of belief in X, not just a belief in X. A belief in X is just a fact about that person's mind, without an accompanying declaration. In this framing, the version of the norm about beliefs (as opposed to declarations) is the norm not to think certain thoughts, not a norm to avoid sharing the observations about the fact that you are thinking them.) I think a salient distinction between declarations of "I think X" and "it's true that X" is a bad thing, as described in this comment [https://www.lesswrong.com/posts/sSpu2EABtTTDmBZ6T/g-gordon-worley-iii-s-shortform?commentId=chqi4v8nYbp6kjkHC]. The distinction is that in the former case you might lack arguments for the belief. But if you don't endorse the belief, it's no longer a belief, and "I think X" is a bug in the mind that shouldn't be called "belief". If you do endorse it, then "I think X" does mean "X". It is plausibly a true statement about the state of the universe, you just don't know why; your mind inscrutably says that it is and you are inclined to believe it, pending further investigation. So the statement "I think this is true of other people in spite of their claims to the contrary" should mean approximately the same as "This is true of other people in spite of their claims to the contrary", and a meaningful distinction only appears with actual arguments about those statements, not with different placement of "I think".
G Gordon Worley III (1y): I forget if we've talked about this specifically before, but I rarely couch things in ways that make clear I'm talking about what I think rather than what is "true" unless I am pretty uncertain and want to make that really clear, or expect my audience to be hostile or primarily made up of essentialists. This is the result of having an epistemology where there is no direct access to reality, so I literally cannot say anything that is not a statement about my beliefs about reality, and saying "I think" or "I believe" all the time is redundant because I don't consider eternal notions of truth meaningful (even mathematical truth, because that truth is contingent on something like the meta-meta-physics of the world, and my knowledge of it is still mediated by perception, cf. certain aspects of Tegmark). I think of "truth" as more like "correct subjective predictions, as measured against (again, subjective) observation", so when I make claims about reality I'm always making what I think of as claims about my perception of reality, since I can say nothing else, and I don't worry about appearing to make claims to eternal, essential truth, since I so strongly believe such a thing doesn't exist that I need to be actively reminded that most of humanity thinks otherwise to some extent. Sort of like going so hard in one direction that it looks like I've gone in the other, because I've carved out everything that would have allowed someone to observe me having to navigate between what appear to others to be two different epistemic states where I only have one of them. This is perhaps a failure of communication, and I think I speak in ways in person that make this much clearer and then I neglect the aspects of tone not adequately carried in text alone (though others can be the judge of that, but I basically never get into discussions about this concern in person, even if I do get into meta discussions about other aspects of epistemology). FWIW, I think Eliezer has (or at least had) a simil

leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move."

Nesov scooped me on the obvious objection, but as long as we're creating common knowledge, can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what's going on in those other people's heads when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).

the existence of bisexuals

As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of "Don't decide that you know other people's minds better than they do" performs better or worse than other inference procedures.


  1. J. Michael Bailey, "What Is Sexual Orientation and Do Women Have One?", section titled "Sexual Arousal Patter

... (read more)
Duncan_Sabien (1y): Yes, as clearly noted in my original objection, there is absolutely a time and a place for this, and a way to do it right; I too share this tool when able and willing to justify it. It's only suspicious when people throw it out solely on the strength of their own dubious authority. My whole objection is that Gordon wasn't bothering to (I believe as a cover for not being able to).

as clearly noted in my original objection

Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters, and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)

there is absolutely a time and a place for this

Exactly—and lesswrong.com is the place for people to report on their models of reality, which includes their models of other people's minds as a special case.

Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness! But in my rationalist culture, while standing in the Citadel of Truth, we're not allowed to care whether a map is marginalizing or dismissive; we're only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the "rationalist" brand name, then my culture is at war with them.)

My whole objection is that Gordon wasn't bothering to

Great! Thank you for criticizing people who don'

... (read more)

criticizing people who don't justify their beliefs with adequate evidence and arguments

I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It's concise, and if handled carefully, it can be sufficient for communication. As evidence, it's a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn't yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that's only a small part of how human intelligence builds its map.

Duncan_Sabien (1y): Yeah, if I had the comment to rewrite (I prefer not to edit it at this point) I would say "My whole objection is that Gordon wasn't bothering to (and at this point in the exchange I have a hypothesis that it's reflective of not being able to, though that hypothesis comes from gut-level systems and is wrong-until-proven-right as opposed to, like, a confident prior)."
G Gordon Worley III (1y): So, having a little more space from all this now, I'll say that I'm hesitant to try to provide justifications because certain parts of the argument require explaining complex internal models of human minds that are a level more complex than I can explain even though I'm using them (I only seem to be able to interpret myself coherently one level of organization less than the maximum level of organization present in my mind), and because other parts of the argument require gnosis of certain insights that neither I nor, to the best of my knowledge, anyone else knows how to readily convey without hundreds to thousands of hours of meditation and one-on-one interactions (though I do know a few people who continue to hope that they may yet discover a way to make that kind of thing scalable, even though we haven't figured it out in 2500 years, maybe because we were missing something important that would let us do it). So it is true that I can't provide adequate episteme for my claim, and maybe that's what you're reacting to. I don't consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan). So given that, I can see why from your point of view it looks like I'm just making stuff up or worse, since I can't offer "justified belief" that you'd accept as "justified", and I'm not really much interested in this particular case in changing your mind, as I don't yet completely know myself how to generate that change in stance towards epistemology in others, even though I encountered evidence that led me to that conclusion myself.

There's a dynamic here that I think is somewhat important: socially recognized gnosis.

That is, contemporary American society views doctors as knowing things that laypeople don't know, and views physicists as knowing things that laypeople don't know, and so on. Suppose a doctor examines a person and says "ah, they have condition X," and Amy responds with "why do you say that?", and the doctor responds with "sorry, I don't think I can generate a short enough explanation that is understandable to you." It seems like the doctor's response to Amy is 'socially justified', in that the doctor won't really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There's an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by 'immediate public justification' or something similar.

But then there's a social question, of how to grant that status. One might imagine that we want astronomers to be able to do their ... (read more)

Vladimir_Nesov (1y): That's not the point! Zack is talking about beliefs, not their declaration, so it's (hopefully) not the case that there is "a time and a place" for certain beliefs (even when they are not announced), or that beliefs require ability and willingness to justify them (at least for some senses of "justify" and "belief").
Duncan_Sabien (1y): Oh, one last footnote: at no point did I consider the other conversation private, at no point did I request that it be kept private, and at no point did Gordon ask if he could reference it (to which I would have said "of course you can"). i.e. it's not out of respect for my preferences that that information is not being brought in this thread.
G Gordon Worley III (1y): Correct, it was made in a nonpublic but not private conversation, so you are not the only agent to consider, though admittedly the primary one other than myself in this context. I'm not opposed to discussing disclosure, but I'm also happy to let the matter drop at this point, since I feel I have adequately pushed back against the behavior I did not want to implicitly endorse via silence, which was my primary purpose in continuing these threads past the initial reply to your comment.
Duncan_Sabien (1y): There's a world of difference between someone saying "[I think it would be better if you] cut it out because I said so" and someone saying "[I think it would be better if you] cut it out because what you're doing is bad for reasons X, Y, and Z." I didn't bother to spell out that context because it was plainly evident in the posts prior. Clearly I don't have any authority beyond the ability to speak; speaking IS what I was doing, and all I was doing.
G Gordon Worley III (1y): I mostly disagree that better reasons matter in a relevant way here, especially since I am currently reading your intent not as one of informing me that you think there is a norm that should be enforced, but instead as a bid to enforce that norm. To me what's relevant is intended effect.

What's the difference?

Suppose I'm talking with a group of loose acquaintances, and one of them says (in full seriousness), "I'm not homophobic. It's not that I'm afraid of gays, I just think that they shouldn't exist."

It seems to me that it is appropriate for me to say, "Hey man, that's not ok to say." It might be that a number of other people in the conversation would back me up (or it might be that they defend the first guy), but there wasn't common knowledge of that fact beforehand.

In some sense, this is a bid to establish a new norm, by pushing the private opinions of a number of people into common knowledge. It also seems to me to be a virtuous thing to do in many situations.

(Noting that my response to the guy is not: "Hey, you can't do that, because I get to decide what people do around here." It's "You can't do that, because it's bad" and depending on the group to respond to that claim in one way or another.)




6Duncan_Sabien1y"Here are some things you're welcome to do, except if you do them I will label them as something else and disagree with them." Your claim that you had tentative conclusions that you were willing to update away from is starting to seem like lip service. Literally my first response to you centers around the phrase "I think it's a good and common standard to be skeptical of (and even hostile toward) such claims." That's me saying "I think there's a norm here that it's good to follow," along with detail and nuance à la here's when it's good not to follow it.
G Gordon Worley III (1y): This is a question of inferred intent, not what you literally said. I am generally hesitant to take much moderation action based on what I infer, but you have given me, in a nonpublic thread on Facebook, additional reason to believe my interpretation is correct. (If admins feel this means I should use a reign-of-terror moderation policy, I can switch to that.) Regardless, I consider this a warning of my local moderation policy only and don't plan to take action on this particular thread.
Ben Pace (1y): Er, I generally have FB blocked, but I have now just seen the thread on FB that Duncan made about you, and that does change how I read the dialogue (it makes Duncan’s comments feel more like they’re motivated by social coordination around you rather than around meditation/spirituality, which I’d previously assumed). (Just as an aside, I think it would’ve been clearer to me if you’d said “I feel like you’re trying to attack me personally for some reason and so it feels especially difficult to engage in good faith with this particular public accusation of norm-violation” or something like that.) I may make some small edit to my last comment up-thread a little after taking this into account, though I am still curious about your answer to the question as I initially stated it.
Duncan_Sabien (1y): I can have different agendas and follow different norms on different platforms. Just saying. If I were trying to do the exact same thing in this thread as I am in the FB thread, they would have the same words, instead of different words. (The original objection *does* contain the same words, but Gordon took the conversation in meaningfully different directions on the two different platforms.) I note that above, Gordon is engaging in *exactly* the same behavior that I was trying to shine a spotlight on (claiming to understand my intent better than I do myself/holding to his model that I intend X despite my direct claims to the contrary).

Outside observer takeaway: There's a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn't come with an obligation to follow up and defend things; it's meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.

(Some of the meta-level fighting seemed not-fine, but that's for another comment.)

Viliam (1y): Seems to me that modern life is full of distractions. As a smart person, you probably have work that requires thinking (not just moving your muscles in a repetitive way). In your free time there is the internet, with all the websites optimized for addictiveness. Plus all the other things you want to do (books to read, movies to see, friends to visit). Electricity can turn your late night into a day; you can take a book or a smartphone everywhere. So, unless we choose it consciously, there are no silent moments to get in contact with yourself... or whatever higher power you imagine there to be, talking to you. I wonder what the effect ratio is between meditation and simply taking a break and wondering about stuff. Maybe it's our productivity-focused thinking saying that meditating (doing some hard work in order to gain supernatural powers) is a worthy endeavor, while goofing off is a sin.
3G Gordon Worley III1y"Simply taking a break and wondering about stuff" is a decent way to get in touch with this thing I'm pointing at. The main downside to it is that it's slow, in that for it to produce effects similar to meditation probably requires an order of magnitude more time, and likely won't result in the calmest brain states where you can study your phenomenology clearly.
Xenotech (1y): Are there individuals willing to explicitly engage in comforting discussion regarding these things you've written about? Any willing to extend personal invitations? I would love to discuss spirituality with otherwise "rational" intelligent people. Please consider reaching out to me personally - it would be transformative: drawnalong@gmail.com

I have plans to write this up more fully as a longer post explaining the broader ideas with visuals, but I thought I would highlight one that is pretty interesting and try out the new shortform feature at the same time! As such, this is not optimized for readability, has no links, and I don't try to backup my claims. You've been warned!

Suppose you frequently found yourself identifying with and feeling like you were a homunculus controlling your body and mind: there's a real you buried inside, and it's in the driver's seat. Sometimes your mind and body do what "you" want, sometimes it doesn't and this is frustrating. Plenty of folks reify this in slightly different ways: rider and elephant, monkey and machine, prisoner in cave (or audience member in theater), and, to a certain extent, variations on the S1/S2 model. In fact, I would propose this is a kind of dual process theory of mind that has you identifying with one of the processes.

A few claims.

First, this is a kind of constant, low-level dissociation. It's not the kind of high-intensity dissociation we often think of when we use that term, but it's still a separation of sense of ... (read more)

tl;dr: read multiple things concurrently so you read them "slowly" over multiple days, weeks, months

When I was a kid, it took a long time to read a book. How could it not: I didn't know all the words, my attention span was shorter, I was more restless, I got lost and had to reread more often, I got bored more easily, and I simply read fewer words per minute. One of the effects of this is that when I read a book I got to live with it for weeks or months as I worked through it.

I think reading like that has advantages. By living with a book for... (read more)

Raemon (1y): Interesting idea, thanks. I think this also hints at other ways to approach this (i.e. maybe rather than interspersing books with other books, you could intersperse them with non-reading-things that still give you some chance to have ideas from multiple domains bumping into each other).

Explanations are liftings from one ontology to another.

Raemon (5mo): Seems true, although in some cases I feel like one of the ontologies is just an obviously bigger/better version of another one.
G Gordon Worley III (5mo): This actually fits the lifting metaphor (which is itself a metaphor)!

I get worried about things like this article that showed up on the Partnership on AI blog. Reading it, there's nothing I can really object to in the body of the post: it's mostly about narrow AI alignment and promotes a positive message of targeting things that benefit society rather than narrowly maximizing a simple metric. But it's titled "Aligning AI to Human Values means Picking the Right Metrics", and that implies to me a normative claim that reads in my head something like "to build aligned AI it is necessary and sufficient to p... (read more)

jonathanstray (9mo): Hi Gordon. Thanks for reading the post. I agree completely that the right metrics are nowhere near sufficient for aligned AI — further, I’d say that “right” and “aligned” have very complex meanings here. What I am trying to do with this post is shed some light on one key piece of the puzzle, the actual practice of incorporating metrics into real systems. I believe this is necessary, but don’t mean to suggest that it is sufficient or unproblematic. As I wrote in the post, “this sort of social engineering at scale has all the problems of large AI systems, plus all the problems of public policy interventions.” To me the issue is that large, influential optimizing systems already exist and seem unlikely to be abandoned. There may be good arguments that a particular system should not be used, but it’s hard for me to see an argument to avoid this category of technology as a whole. As I see it, the question is not so much “should we try to choose appropriate metrics?” but “do we care to quantitatively monitor and manage society-scale optimizing systems?” I believe there is an urgent need for this sort of work within industry. Having said all that, you may be right that the title of this post overpromises. I’d welcome your thoughts here.

I recently watched all 7 seasons of HBO's "Silicon Valley" and the final episode (or really the final 4 episodes leading up into the final one) did a really great job of hitting on some important ideas we talk about in AI safety.

Now, the show in earlier seasons has played with the idea of AI with things like an obvious parody of Ben Goertzel and Sophia, discussion of Roko's Basilisk, and of course AI that Goodharts. In fact, Goodharting is a pivotal plot point in how the show ends, along with a Petrov-esque ending where hard choices have to be made under u... (read more)

NB: There's something I feel sad about when I imagine what it's like to be others, so I'm going to ramble about it a bit in shortform because I'd like to say this and possibly say it confusingly rather than not say it at all. Maybe with some pruning this babble can be made to make sense.

There's a certain strain of thought and thinkers in the rationality community that make me feel sad when I think about what it must be like to be them: the "closed" individualists. This is as opposed to people who view personal identity as... (read more)

Dagon (1y): [upvoted for talking about something that's difficult to model and communicate about] Hmm. I believe (with fairly high confidence - it would take a big surprise to shift me) a combination of empty and closed. Moments of self-observed experience are standalone, and woven into a fabric of memories in a closed, un-sharable system that will (sooner than I prefer) physically degrade into non-experiencing components. I haven't found anyone who claims to be open AND is rational enough to convince me they're not just misstating what they actually experience. In fact, I'd love to hear someone talk about what it means to "want" something if you're experiencing all things simultaneously. I'm quite sympathetic to the argument that it is what it is, and there's no reason to be sad. But I'm also unsure whether or why my acceptance of closed-empty existence makes you sad. Presumably, if your consciousness includes me, you know I'm not particularly sad overall (I certainly experience pain and frustration, but also joy and optimistic anticipation, in a balance that seems acceptable).
G Gordon Worley III (1y): Because I know the joy of grokking the openness of the "individual" and see the closed approach creating inherent suffering (via wanting for the individual) that cannot be accepted because it seems to be part of the world.
Viliam (1y): I wonder how much the "great loneliness for creatures like us" is a necessary outcome of realizing that you are an individual, and how much it is a consequence of e.g. not having the kinds of friends you want to have, i.e. something that you wouldn't feel under the right circumstances. From my perspective, what I miss is people similar to me, living close to me. I can find like-minded people, but they live in different countries (I met them at LW meetups). Thus, I feel more lonely than I would feel if I lived in a different city. Similarly, being extraverted and/or having greater social skills could possibly help me find similar people in my proximity, maybe. Also, sometimes I meet people who seem like they could be what I miss in my life, but they are not interested in being friends with me. Again, this is probably a numbers game; if I could meet ten or a hundred times more people of that type, some of them could be interested in me. (In other words, I wonder whether this is not yet another case of "my personal problems, interpreted as a universal experience of humankind".) Yet another possible factor is the feeling of safety. The less safe I feel, the greater the desire of having allies, preferably perfect allies, preferably loyal clones of myself. Plus the fear of death. If, in some sense, there are copies of me out there, then, in some sense, I am immortal. If I am unique, then at my death something unique (and valuable, at least to me) will disappear from this universe, forever.
G Gordon Worley III (1y): My quick response is that all of these sources of loneliness can still be downstream of using closed individualism as an intuitive model. The more I am able to use the open model the more safe I feel in any situation and the more connected I feel to others no matter how similar or different they are to me. Put one way, every stranger is a cousin I haven't met yet, but just knowing on a deep level that the world is full of cousins is reassuring.

Strong and Weak Ontology

Ontology is how we make sense of the world. We make judgements about our observations and slice up the world into buckets we can drop our observations into.

However I've been thinking lately that the way we normally model ontology is insufficient. We tend to talk as if ontology is all one thing, one map of the territory. Maybe these can be very complex, multi-manifold maps that permit shifting perspectives, but one map all the same.

We see some hints at the breaking of this ontology of ontology as a single map by noticing the way... (read more)

So long as shortform is salient for me, might as well do another one on a novel (in that I've not heard/seen anyone express it before) idea I have about perceptual control theory, minimization of prediction error/confusion, free energy, and Buddhism that I was recently reminded of.

There is a notion within Mahayana Buddhism of the three poisons: ignorance, attachment (or, I think we could better term this here, attraction, for reasons that will become clear), and aversion. This is part of one model of where suffering arises from. Others express these n... (read more)
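To make the control-theoretic piece concrete, here is a toy sketch of the basic perceptual control theory loop mentioned above (my own illustration, not anything from the original comment or the Buddhist material): a system acts so as to shrink the gap between a reference signal and its current perception, which is one simple reading of "minimization of prediction error".

```python
# Toy perceptual control theory loop: act to reduce the error between a
# reference signal and the current perception. Names are illustrative,
# not drawn from any actual PCT library.

def control_step(perception: float, reference: float, gain: float = 0.2) -> float:
    """Return an action proportional to the error between reference and perception."""
    error = reference - perception  # the "prediction error" being minimized
    return gain * error

perception = 0.0
for _ in range(40):
    action = control_step(perception, reference=1.0)
    perception += action  # stand-in for the environment responding to the action

print(round(perception, 3))  # converges toward the reference (1.0)
```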

If CAIS is sufficient for AGI, then likely humans are CAIS-style general intelligences.

Matt Goldenberg (1y): What's the justification for this? Seems pretty symmetric to "If wheels are sufficient for getting around, then it's likely humans evolved to use wheels."
G Gordon Worley III (1y): Human brains look like they are made up of many parts with various levels and means of integration. So if it turns out that we could build something like AGI via CAIS, that is, that CAIS can be assembled in a way that results in general intelligence, then I think it's likely that human intelligence doesn't have anything special going on that would meaningfully differentiate it from the general notion of CAIS, other than being implemented in meat.

Personality quizzes are fake frameworks that help us understand ourselves.

What-character-from-show-X-are-you quizzes, astrology, and personality categorization instruments (think Big-5, Myers-Briggs, Magic the Gathering colors, etc.) are perennially popular. A good question to ask is: why do humans seem to like this stuff so much that even fairly skeptical folks tend to object not to categorization itself, but to the particular categorization of any given system?

My stab at an answer: humans are really confused about themselves, and are interested in thi... (read more)

Dagon (5mo): They help us understand others as well - even as fake frameworks, anything that fights against the typical mind fallacy [https://wiki.lesswrong.com/wiki/Typical_mind_fallacy] is useful. I'd argue these categorizations don't go far enough, and imply a smaller space of variation than is necessary for actual modeling of self or others, but a lot of casual observers benefit from just acknowledging that there IS variation.

As I work towards becoming less confused about what we mean when we talk about values, I find that it feels a lot like I'm working on a jigsaw puzzle where I don't know what the picture is. Also all the pieces have been scattered around the room and I have to find the pieces first, digging between couch cushions and looking under the rug and behind the bookcase, let alone figure out how they fit together or what they fit together to describe.

Yes, we have some pieces already and others think they know (infer, guess) what the picture is from those ... (read more)

Most of my most useful insights come not from realizing something new and knowing more, but from realizing something ignored and being certain of less.

After seeing another LW user (sorry, forgot who) mention this post in their commenting guidelines, I've decided to change my own commenting guidelines to the following, matching pretty closely the SSC commenting guidelines, which I forgot existed until just a couple of days ago:

Comments should be at least two of true, useful, and kind, i.e. you believe what you say, you think the world would be worse without this comment, and you think the comment will be positively received.

I like this because it's simple and it says what rather than how. My old gui... (read more)

http://www.overcomingbias.com/2019/12/automation-so-far-business-as-usual.html

I similarly suspect automation is not, thus far, happening in a dramatically different way than it has in the past. Maybe that will change in the future (I think it will), but it's not here yet.

So why so much concern about automation?

I suspect it's because of something they don't look at much in this study (based on the summary): displacement. People are likely being displaced from jobs into other jobs by automation, or the perception of automation, and some few of those exit the labor market ra... (read more)

This post suggests a feature idea for LessWrong to me:

https://www.lesswrong.com/posts/6Nuw7mLc6DjRY4mwa/the-national-defense-authorization-act-contains-ai

It would be pretty cool if, instead of a pile of comments ordered by votes or time of posting, it were possible to write a post with parts that could be commented on directly. So, for example, the comments for a particular section could live right in that section rather than down at the bottom. Could be an interesting way to deal with lots of comments on large, structured posts.
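
A minimal sketch of what the data model for section-anchored comments might look like, in TypeScript. Everything here is a hypothetical illustration added for concreteness, not LessWrong's actual schema:

```typescript
// Hypothetical data model for section-anchored comments
// (illustrative names only, not LessWrong's real schema).

interface Section {
  id: string;      // stable anchor, so comments survive edits elsewhere in the post
  heading: string;
  body: string;
}

interface Post {
  id: string;
  title: string;
  sections: Section[];
}

interface Comment {
  id: string;
  postId: string;
  sectionId: string | null; // null means an ordinary bottom-of-post comment
  author: string;
  body: string;
}

// Group comments by the section they attach to, so each section
// can render its own thread inline.
function commentsBySection(comments: Comment[]): Map<string | null, Comment[]> {
  const grouped = new Map<string | null, Comment[]>();
  for (const c of comments) {
    const bucket = grouped.get(c.sectionId) ?? [];
    bucket.push(c);
    grouped.set(c.sectionId, bucket);
  }
  return grouped;
}
```

The main design question would be keeping each section's id stable across edits, so comments stay attached to the right section even after the post is revised.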

2Pattern13dYou have reinvented Google Docs. A similar effect could be achieved by having a sequence which...all appears on one page. (With the comments.)
2Matt Goldenberg12dMedium also has this feature, and I think it improves the Medium discourse quite a bit.
2Pattern12dWhich feature?
2Matt Goldenberg12dCommenting on specific parts of articles and seeing those comments as you go through the article.

A few months ago I found a copy of Staying OK, the sequel to I'm OK—You're OK (the book that probably did the most to popularize transactional analysis), on the street near my home in Berkeley. Having previously read Games People Play, and not having thought about transactional analysis much since, I scooped it up. I've just gotten around to reading it.

My recollection of Games People Play is that it's the better book (based on what I've read of Staying OK so far). Also, transactional analysis is kind of in the water in ways... (read more)

Off-topic riff on "Humans are Embedded Agents Too"

One class of insights that come with Buddhist practice might be summarized as "determinism", as in, the universe does what it is going to do no matter what the illusory self predicts. Related to this is the larger Buddhist notion of "dependent origination", that everything (in the Hubble volume you find yourself in) is causally linked. This deep deterministic interdependence of the world is hard to appreciate from our subjective experience, because the creation of ontology crea... (read more)

ADHD Expansionism

I'm not sure I fully endorse this idea, hence short form, but it's rattling around inside my head and maybe we can talk about it?

I feel like there's a kind of ADHD (or ADD) expansionism happening, where people are identifying all kinds of things as symptoms of ADHD, especially subclinical ADHD.

On the one hand this seems good, in the sense that this kind of expansionism seems to actually be helping people: giving them permission to be the way they are via a diagnosis, and giving them strategies they can try to live their life bett... (read more)

2Dagon7dThere's not much agreement on what to call it when "normal" is harmful, but not so overwhelmingly common as to seem immutable. Agreed that thinking of it as a pathology doesn't quite cut it, but also "acceptable" seems wrong.

You're always doing your best

I like to say "you're always doing your best", especially as kind words to folks when they are feeling regret.

What do I mean by that, though? Certainly you can look back at what you did in any given situation and imagine having done something that would have had a better outcome.

What I mean is that, given all the conditions under which you took any action, you always did the best you could. After all, if you could have done something better given all the conditions, you would have.

The key is that all the conditions include the e... (read more)

1Troy Macedon11dBut it could've gone another way if you were slightly different in one area, like your willpower. But I do agree that regret is wrong because I don't view myself as anything other than the entity that caused the worse outcome. I'm not the entity that has more willpower. I'm the entity that made the mistake and is now experiencing the following consequences. To change the past is to commit Cartesian su*cide.

I feel like something is screwy with the kerning on LW over the past few weeks. Like I keep seeing sentences that look like they are missing a space between the period and the start of the next sentence, but when I check closely they are not. For whatever reason this doesn't seem to show in the editor, only in the displayed text.

I think I've only noticed this with comments and short form, but maybe it's happening other places? Anyway, wanted to see if others are experiencing this and raise a flag for the LW team that a change they made may be behaving in unexpected ways.

6Ben Pace6moIt is totally real and it's been this way for over two months. It's an issue with Chrome, and I'm kinda boggled that Chrome doesn't jump on these issues; it's a big deal for readability.
2habryka6moYep, it's a Chrome bug. It's kind of crazy.

Story stats are my favorite feature of Medium. Let me tell you why.

I write primarily to impact others. Although I sometimes choose to do very little work to make myself understandable to anyone who is more than a few inferential steps behind me, writing out on a far frontier of thought, my purpose nonetheless remains sharing my ideas with others. If it weren't for that, I wouldn't bother to write much at all, and certainly not in the same way as I do when writing for others. Thus I care a lot, instrumentally, about being able to assess if I a... (read more)

2Ruby1yVery quick thought: basically the reason we haven't done more in this direction, and might not, is how it might alter what gets written. It doesn't seem good if people were to start writing more heavily for engagement metrics. Also it's not clear to me that engagement metrics capture the true value of intellectual contributions.
5Raemon1y(Habryka has an old comment somewhere delving into this, which I couldn't find. But the basic gist was "the entire rest of the internet is optimizing directly for eyeballs, and it seemed good for LessWrong to be a place trying to have a different set of incentives")