Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This is a special post for quick takes by Gordon Seidoh Worley. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
149 comments

Some thoughts on Buddhist epistemology.

This risks being threatening, upsetting, and heretical from a certain point of view I commonly see expressed on LW, for reasons that will become clear if you keep reading. If that sounds like the kind of thing you don't want to read, this warning lets you make that choice without having to engage with the specifics. I don't think you'll be missing out on anything if it gives you a tinge of "maybe I won't like reading this".

My mind produces a type error when people try to perform deep and precise epistemic analysis of the dharma. That is, when they try to evaluate the truth of claims made by the dharma this seems generally fine, but when they go deep enough that they end up trying to evaluate whether the dharma itself is based on something true, I get the type error.

I'm not sure what people trying to do this turn up. My expectation is that their results look like noise if you aggregate over all such attempts, the reason being that the dharma is not founded on episteme.

As a quick reminder, there are at leas... (read more)

So when we talk about the dharma or justify our actions on it, it's worth noting that it is not really trying to provide consistent episteme. [...] Thus it's a strange inversion to ask the dharma for episteme-based proofs. It can't give them, nor does it try, because its episteme is not consistent and cannot be because it chooses completeness instead.

In my view, this seems like a clear failing. The fact that the dharma comes from a tradition where this has usually been the case is not an excuse for not trying to fix it.

Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with.

This is not just a question of "the dharma has to be able to justify itself"; it's also that leaving out the episteme component leaves the system impoverished, as noted e.g. here:

Recurrent training to attend to the sensate experience moment-by-moment can undermine the capacity to make meaning of experience. (The psychoanalyst Wilfred Bion d
... (read more)
Gordon Seidoh Worley · 5y · 7 points
Hmm, I feel like there are multiple things going on here, but I think it hinges on this: different traditions vary in how much to emphasize models and episteme. None of them completely ignores it, though; they only seek to keep it in its proper place. It's not that episteme is useless, only that it is not primary. You of course should include it because it's part of the world, and to deny it would lead to confusion and suffering. As you note with your first example especially, some people learn to turn off the discriminating mind rather than hold it as object, and they are worse for it because then they can't engage with it anymore. Turning it off is only something you could safely do if you really had become so enlightened that you had no shadow and would never accumulate any additional shadow, and even then it seems strange from where I stand to do that, although maybe it would make sense to me if I were in the position that it were a reasonable and safe option.

So to me this reads like an objection to a position I didn't mean to take. I mean to say: episteme has a place and is useful, but it is not taken as primary to understanding. At some points Buddhist episteme will say contradictory things; that's fine and expected, because dharma episteme is normally post hoc rather than ante hoc (though it is still expected to be rational right up until it is forced to hit a contradiction), and ante hoc is okay so long as it is then later verified via gnosis or techne.

>unmediated-by-ontology knowledge of reality.

I think this is a confused concept, related to wrong-way-reduction.

Gordon Seidoh Worley · 5y · 2 points
I've thought about this a bit and I don't see a way through to what you are thinking that makes you suggest this since I don't see a reduction happening here, much less one moving towards bundling together confusion that only looks simpler. Can you say a bit more that might make your perspective on this clearer?
romeostevensit · 5y · 4 points
In particular, I think under this formulation knowledge and ontology largely refer to the same thing. Which is part of the reason I think this formulation is mistaken. Separately, I think 'reality' has too many moving parts to be useful for the role it's being used for here.
Gordon Seidoh Worley · 5y · 2 points
Maybe, although there is a not very clear distinction I'm trying to make between knowledge and ontological knowledge. Maybe it's not coming across; if it is, and you have some particular argument for why there isn't or can't be such a meaningful distinction, I'd be interested to hear it. As for my model of reality having too many moving parts: you're right, I'm not totally unconfused about everything yet, and that's the place the remaining confusion lives.
Chris_Leong · 5y · 4 points
I agree with Kaj Sotala and Viliam that episteme is underweighted in Buddhism, but thanks for explicating that worldview.
Viliam · 5y · 4 points
The "unmediated contact via the senses" can only give you sense inputs. Everything else contains interpretation. That means you can only have "gnosis" about things like [red], [warm], etc. That includes a lot of interesting stuff about your inner state, of course, but still fundamentally of the type [feeling this], [thinking that], and perhaps some usually-unknown-to-non-Buddhists [X-ing Y], etc. Poetically speaking, these are the "atoms of experience". (Some people would probably say "qualia".) But some interpretation needs to come in to build molecules out of these atoms. Without interpretation, you could barely distinguish between a cat and a warm pillow... which IMHO is a bit insufficient for a supposedly supreme knowledge.
romeostevensit · 4y · 3 points
It's even worse than that: 'raw' sensory inputs already carry ontological commitments. Those priors inform all our interpretations pre-consciously. Agreed that the efficiency of various representations in the context of coherent intents is a good lens.
Ouroborus · 4y · 3 points
Could you clarify the distinction between techne and gnosis? Is it something like playing around with a hammer and seeing how it works?
Gordon Seidoh Worley · 4y · 2 points
It's not a very firm distinction, but techne is knowledge from doing, so I would consider playing with a hammer a way to develop techne. It certainly overlaps with the concept of gnosis, which is a bit more general and includes knowledge from direct experience that doesn't involve "doing", like the kind of knowledge you gain from observing. But the act of observing is a kind of thing you do, so as you can see it's fuzzy; generally, though, I think of techne as that which involves your body moving.
hamnox · 5y · 1 point
I am glad for having read this, but can't formulate my thoughts super clearly. Just have this vague sense that you're using too many groundless words and not connecting to the few threads of gnosis(?) that other rationalists would have available.

If an organism is a thing that organizes, then a thing that optimizes is an optimism.

I'm sad that postrationality/metarationality has, as a movement, started to collapse on itself in terms of doing the thing it started out doing.

What I have in mind is that initially, say 5+ years ago, postrationality was something of a banner for folks who were already in the rationalist or rationalist-adjacent community, saw some ways in which rationalists were failing at their own project, and tried to work on figuring out how to do those things.

Now, much like postmodernism before it, I see postrationality collapsing from a thing only for people who were already rationalists and wanted to go beyond its limitations of the time to a kind of prerationality that rejects instead of builds on the rationalist project.

This kind of dynamic is pretty common (cf. premodern, modern, and postmodern) but it still sucks. On the other hand, I guess the good side of it is that I see lots of signs that the rationality community is better integrating some of the early postrationalist insights such that it feels like there's less to push back against in the median rationalist viewpoint.

Yeah, it seems like postrationalists should somehow establish their rationalist pedigree before claiming the post- title. IIRC, Chapman endorsed this somewhere on twitter? But I can't find it now. Maybe it was a different postrat. Also it was years ago.

Viliam · 3y · 2 points
Are there any specific articles you could point out as good examples of this? I don't remember reading anything about "postrationality" for a year or so -- I actually kinda forgot they exist -- so I am curious what I missed.

I had a weird feeling from the beginning, when it seemed that Chapman -- a leader of a local religious group, if I understand it correctly -- became the key figure of "doing rationality better". On the other hand, it's not like Less Wrong avoided the religious woo completely. Seems like somehow it only became a minor topic here, and maybe a more central one among the postrationalists? (Perhaps because other competing topics, such as AI, were missing?)

Also, I suppose that defining yourself in opposition to something is not helpful to actually finding the "middle way". Which is why it was easier for rationalists to accept the good arguments made by postrationalists than the other way round.

This is a short post to register my kudos to LWers for being consistently pretty good at helping each other find answers to questions, or at least make progress towards answers. I feel like I've used LW numerous times to make progress on work by saying "here's what I got, here's where I'm confused, what do you think?", whether that be through formal question posts or regular posts that are open ended. Some personal examples that come to mind: recent, older, another.

Praise to the LW community!

I'm fairly pessimistic on our ability to build aligned AI. My take is roughly that it's theoretically impossible and at best we might build AI that is aligned well enough that we don't lose. I've not written one thing to really summarize this or prove it, though.

The source of my take comes from two facts:

  1. Goodharting is robust. That is, the mechanism of Goodharting seems impossible to overcome. Goodharting is just a fact of any control system.
  2. It's impossible to infer the inner experience (and thus values) of another being perfectly without making normative assumptions.

Stuart Armstrong has made a case for (2) with his no free lunch theorem. I've not seen anyone formally make the case for (1), though.

Is this something worth trying to prove? That Goodharting is unavoidable and at most we can try to contain its effects?

I'm many years out from doing math full time, so I'm not sure I could make a rigorous proof of it. This seems to be something people sometimes disagree on (arguing that Goodharting can be overcome), but I think most of those discussions don't get very precise about what that means.

This paper gives a mathematical model of when Goodharting will occur. To summarize: if

(1) a human has some collection x_1, …, x_n of things which she values,

(2) a robot has access to a proxy utility function which takes into account some strict subset of those things, and

(3) the robot can freely vary how much of each x_i there is in the world, subject only to resource constraints that make the x_i trade off against each other,

then when the robot optimizes for its proxy utility, it will minimize all x_i's which its proxy utility function doesn't take into account. If you impose a further condition which ensures that you can't get too much utility by only maximizing some strict subset of the x_i's (e.g. assuming diminishing marginal returns), then the optimum found by the robot will be suboptimal for the human's true utility function.
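A toy numerical sketch of the model above (my own illustrative utility functions and budget, not taken from the paper): the human values two attributes with diminishing returns, the robot's proxy sees only the first, and a fixed budget forces them to trade off.

```python
import math

BUDGET = 10.0  # resource constraint: a + b = BUDGET

def true_utility(a, b):
    # diminishing marginal returns in both valued attributes
    return math.sqrt(a) + math.sqrt(b)

def proxy_utility(a, b):
    # strict subset of what the human values: b is ignored
    return math.sqrt(a)

def best_allocation(utility, steps=1000):
    # brute-force search over splits of the budget
    grid = (BUDGET * i / steps for i in range(steps + 1))
    _, a = max((utility(a, BUDGET - a), a) for a in grid)
    return a

a_proxy = best_allocation(proxy_utility)  # robot's optimum
a_human = best_allocation(true_utility)   # human's optimum

print(a_proxy, true_utility(a_proxy, BUDGET - a_proxy))  # 10.0, ~3.16
print(a_human, true_utility(a_human, BUDGET - a_human))  # 5.0, ~4.47
```

The proxy optimum pushes everything into the attribute the proxy can see and zeroes out b, scoring about 3.16 on the true utility versus about 4.47 at the human optimum: the unaccounted-for attribute gets minimized, as the summary says.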

That said, I wasn't super-impressed by this paper -- the above is pretty obvious and the mathematical model doesn't elucidate anything, IMO.

Moreover, I think this model doesn't interact much with the skeptical take about whether Goodhart's Law implies doom in practice. Namely, here are some things I believe about the worl... (read more)

Gordon Seidoh Worley · 2y · 2 points
I actually don't think that model is general enough. Like, I think Goodharting is just a fact of control systems observing. Suppose we have a simple control system with output X and a governor G. G takes a measurement m(X) (an observation) of X. So long as m(X) is not error free (and I think we can agree that no real world system can be actually error free), then X=m(X)+ϵ for some error factor ϵ. Since G uses m(X) to regulate the system to change X, we now have error influencing the value of X.

Now applying the standard reasoning for Goodhart, in the limit of optimization pressure (i.e. G regulating the value of X for long enough), ϵ comes to dominate the value of X.

This is a bit handwavy, but I'm pretty sure it's true, which means that in theory any attempt to optimize for anything will, under enough optimization pressure, become dominated by error, whether that's human values or something else. The only interesting question is whether we can control the error enough, either through better measurement or less optimization pressure, that we get enough signal to be happy with the output.
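A minimal control-loop sketch of this setup (toy numbers of my own; the bias value and gain are arbitrary assumptions): the governor only ever sees m(X), so it regulates the measurement onto the target, and any systematic error in m is inherited by X itself.

```python
TARGET = 100.0   # where G tries to hold the output
BIAS = 7.0       # systematic measurement error (hypothetical value)
GAIN = 0.1       # proportional control gain

def measure(x):
    # m(X): the only view of X the governor has
    return x + BIAS

x = 0.0
for _ in range(1000):                 # sustained regulation = optimization pressure
    error_signal = TARGET - measure(x)
    x += GAIN * error_signal          # G regulates based on the measurement

print(round(measure(x), 3))  # 100.0: the proxy lands on target
print(round(x, 3))           # 93.0: the true output misses by exactly the bias
```

With zero bias the loop would land X on the target; the point illustrated is just that the optimization is applied to m(X), not to X, so whatever systematic ϵ the measurement carries shows up undiminished in the regulated output.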
Sam Marks · 2y · 1 point
Hmm, I'm not sure I understand -- it doesn't seem to me like noisy observations ought to pose a big problem to control systems in general. For example, suppose we want to minimize the number of mosquitos in the U.S., and we have access to noisy estimates of mosquito counts in each county. This may result in us allocating resources slightly inefficiently (e.g. overspending resources on counties that have fewer mosquitos than we think), but we'll still always be doing approximately the correct thing and mosquito counts will go down. In particular, I don't see a sense in which the error "comes to dominate" the thing we're optimizing.

One concern which does make sense to me (and I'm not sure if I'm steelmanning your point or just saying something completely different) is that under extreme optimization pressure, measurements might become decoupled from the thing they're supposed to measure. In the mosquito example, this would look like us bribing the surveyors to report artificially low mosquito counts instead of actually trying to affect real-world mosquito counts. If this is your primary concern regarding Goodhart's Law, then I agree the model above doesn't obviously capture it. I guess it's more precisely a model of proxy misspecification.
Gordon Seidoh Worley · 2y · 2 points
"Error" here is all sources of error, not just error in the measurement equipment. So bribing surveyors is a kind of error in my model.
RHollerith · 2y · 2 points
Can you explain where there is an error term in AlphaGo, or where an error term might appear in a hypothetical model similar to AlphaGo trained much longer with much more numerous parameters and computational resources?
Gordon Seidoh Worley · 2y · 8 points
AlphaGo is fairly constrained in what it's designed to optimize for, but it still has the standard failure mode of "things we forgot to encode". So for example AlphaGo could suffer the error of instrumental power grabbing in order to get better at winning Go, because we misspecified what we asked it to measure. This is a kind of failure introduced into the system by humans failing to make m(X) adequately evaluate X as we intended: we cared about winning Go games while also minimizing side effects, but maybe when we constructed m(X) we forgot about minimizing side effects.
RHollerith · 2y · 6 points
At least one person here disagrees with you on Goodharting. (I do.) You've written before on this site, if I recall correctly, that Eliezer's 2004 CEV proposal is unworkable because of Goodharting. I am granting myself the luxury of not bothering to look up your previous statement because you can contradict me if my recollection is incorrect.

I believe that the CEV proposal is probably achievable by humans if those humans had enough time and enough resources (money, talent, protection from meddling), and that if it is not achievable, it is because of reasons other than Goodhart's law. (Sadly, an unaligned superintelligence is much easier for humans living in 2022 to create than a CEV-aligned superintelligence is, so we are probably all going to die IMHO.)

Perhaps before discussing the CEV proposal we should discuss a simpler question, namely, whether you believe that Goodharting inevitably ruins the plans of any group setting out intentionally to create a superintelligent paperclip maximizer. Another simple goal we might discuss is a superintelligence (SI) whose goal is to shove as much matter as possible into a black hole, or an SI that "shuts itself off" within 3 months of its launch, where "shuts itself off" means stops trying to survive or to affect reality in any way.
RHollerith · 2y · 0 points
The reason Eliezer's 2004 "coherent extrapolated volition" (CEV) proposal is immune to Goodharting is probably that being immune to it was one of the main criteria for its creation. I.e., Eliezer came up with it through a process of looking for a design immune to Goodharting. It may very well be that all other published proposals for aligning super-intelligent AI are vulnerable to Goodharting.

Goodhart's law basically says that if we put too much optimization pressure on criterion X, then as a side effect the optimization process drives criteria Y and Z, which we also care about, higher or lower than we consider reasonable. But that doesn't apply when criterion X is "everything we value" or "the reflective equilibrium of everything we value".

The problem of course is that although the CEV plan is probably within human capabilities to implement (and IMHO Scott Garrabrant's work is probably a step forward), unaligned AI is probably significantly easier to implement, so will likely arrive first.

People often talk of unconditional love, but they implicitly mean unconditional love for or towards someone or something, like a child, parent, or spouse. But this kind of love is by definition conditional because it is love conditioned on the target being identified as a particular thing within the lover's ontology.

True unconditional love is without condition, and it cannot be directed because to direct is to condition and choose. Unconditional love is love of all, of everything and all of reality even when not understood as a thing.

Such love is rare, so it seems worth pursuing the arduous cultivation of it.

Dagon · 4y · 4 points
"love" is poorly-defined enough that it always depends on context. Often, "unconditional love" _is_ expected to be conditional on identity, and really should be called "precommitment against abandonment" or "unconditional support". But neither of those signal the strength of the intent and safety conferred by the relationship very well. I _really_ like your expansion into non-identity, though. Love for the real state of the universe, and the simultaneous desire to pick better futures and acceptance of whichever future actually obtains is a mindset I strive for.
Gordon Seidoh Worley · 4y · 4 points
This is the hidden half of what got me thinking about this: my growing being with the world as it is rather than as I understand it.
Raemon · 4y · 4 points
I have a blog post upcoming called ‘Unconditional Love Integration Test: Hitler’

I think it's safe to say that many LW readers don't feel like spirituality is a big part of their life, yet many (probably most) people do experience a thing that goes by many names---the inner light, Buddha-nature, shunyata, God---and falls under the heading of "spirituality". If you're not sure what I'm talking about, I'm pointing to a common human experience you aren't having.

Only, I don't think you're not having it, you just don't realize you are having those experiences.

One way some people get in touch with this thing, which I like to think of as "the source" and "naturalness" and might describe as the silently illuminated wellspring, is with drugs, especially psychedelics but really any drug that gets you to either reduce activity of the default-mode network or at least notice its operation and stop identifying with it (dissociatives may function like this). In this light, I think of drug users as very spiritual people, only they are unfortunately doing it in a way that is often destructive to their bodies and causes headlessness (causes them to fail to perceive reality accurately and so may act ... (read more)

Only, I don't think you're not having it, you just don't realize you are having those experiences.

The mentality that lies behind a statement like that seems to me to be pretty dangerous. This is isomorphic to "I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest."

Sometimes that's *true.* Let's not forget that. Sometimes you *are* the most perceptive one in the room.

But I think it's a good and common standard to be skeptical of (and even hostile toward) such claims (because such claims routinely lead to unjustified and not-backed-by-reality dismissal and belittlement and marginalization of the "blind" by the "seer"), unless they come along with concrete justification:

  • Here are the observations that led me to claim that all people do in fact experience X, in direct contradiction of individuals claiming otherwise; here's why I think I'm correct to ignore/erase those people's experience.
  • Here are my causal explanations of why and how people would become blindspotted on X, so that it's not just a blanket assertion and so that peo
... (read more)
Ben Pace · 5y · 8 points
Yeah, I think there's a subtle distinction. While it's often correct to believe things that you have a hard time communicating explicitly (e.g. most of my actual world model at any given time), the claim that there's something definitely true which in principle I can't persuade you of and also can't explain to you, especially when used by a group of people to coordinate around resources, is often functioning as a coordination flag and not as a description of reality.
Raemon · 5y · 6 points
Just wanted to note that I am thinking about this exchange, hope to chime in at some point. I'm not sure whether I'm on the same page as Ben about it. May take a couple days to have time to respond in full.
Raemon · 5y · 4 points
Just a quick update: the mod team just chatted a bunch about this thread. There’s a few different things going on. It’ll probably be another day before a mod follows up here.
Ben Pace · 5y · Moderator Comment · 100 points

[Mod note] I thought for a while about how shortform interacts with moderation here. When Ray initially wrote the shortform announcement post, he described the features, goals, and advice for using it, but didn’t mention moderation. Let me follow-up by saying: You’re welcome and encouraged to enforce whatever moderation guidelines you choose to set on shortform, using tools like comment removal, user bans, and such. As a reminder, see the FAQ section on moderation for instructions on how to use the mod tools. Do whatever you want to help you think your thoughts here in shortform and feel comfortable doing so.

Some background thoughts on this: In other places on the internet, being blocked locks you out of the communal conversation, but there are two factors that make it pretty different here. Firstly, banning someone from a post on LW means they can’t reply to the content they’re banned from, but it doesn’t hide your content from them or their content from you. And secondly, everyone here on LessWrong has a common frontpage where the main conversation happens - the shortform is a low-key place and a relatively unimportant part of the conversation. (You can be banned from posts on fr... (read more)

Gordon Seidoh Worley · 5y · 6 points
Sure, this is short form. I'm not trying very hard to make a complete argument to defend my thoughts, just putting them out there. There is no norm that I need always abide everywhere to present the best (for some notion of best) version of my reasons for things I claim, least of all, I think, in this space as opposed to, say, in a frontpage post. Thus it feels to me a bit out of place to object in this way here, sort of like objecting that my fridge poetry is not very good or my shower singing is off key.

Now, your point is well taken, but I also generally choose to simply not be willing to cross more than a small amount of inferential distance in my writing (mostly because I think slowly and it requires significant time and effort for me to chain back far enough to be clear to successively wider audiences), since I often think of it as leaving breadcrumbs for those who might be nearby rather than leading people a long way towards a conclusion. I trust people to think things through for themselves and agree with me or not as their reason dictates. Yes, this means I am often quite distanced from easily verifying the most complex models I have, but such seems to be the nature of complex models that I don't even have complete in my own mind yet, much less complete in a way that I would lay them out precisely such that they could be precisely verified point by point.

This perhaps makes me frustratingly inscrutable about my most exciting claims to those with the least similar priors, but I view it as a tradeoff for aiming to better explain more of the world to myself and those much like me at the expense of failing to make those models legible enough for those insufficiently similar to me to verify them. Maybe my circumstances will change enough that one day I'll make a much different tradeoff?
[DEACTIVATED] Duncan Sabien · 5y · 3 points
This response missed my crux. What I'm objecting to isn't the shortform, but the fundamental presumptuousness inherent in declaring that you know better than everyone else what they're experiencing, *particularly* in the context of spirituality, where you self-describe as more advanced than most people. To take a group of people (LWers) who largely say "nah, that stuff you're on is sketchy and fake" and say "aha, actually, I secretly know that you're in my domain of expertise and don't even know it!" is a recipe for all sorts of bad stuff. Like, "not only am I *not* on some sketchy fake stuff, I'm actually superior to my naysayers by very virtue of the fact that they don't recognize what I'm pointing at! Their very objection is evidence that I see more clearly than they do!" I'm pouring a lot into your words, but the point isn't that your words carried all that so much as that they COULD carry all that, in a motte-and-bailey sort of way. The way you're saying stuff opens the door to abuse, both social and epistemic. My objection wasn't actually a call for you to give more explanation. It was me saying "cut it out," while at the same time acknowledging that one COULD, in principle, make the same claim in a justified fashion, if they cared to.
Gordon Seidoh Worley · 5y · 4 points
Note: what follows responds literally to what you said. I'm suspicious enough that my interpretation is correct that I'll respond based on it, but I'm open to the possibility this was meant more metaphorically and I've misunderstood your intention. Ah, but that's not up to you, at least not here. You are welcome to dislike what I say, claim or argue that I am dangerous in some way, downvote me, flag my posts, etc. BUT it's not up to you to enforce a norm here to the best of my knowledge, even if it's what you would like to do. Sorry if that is uncharacteristically harsh and direct of me, but if that was your motivation, I think it important to say I don't recognize you as having the authority to do that in this space, consider it a violation of my commenting guidelines, and will delete future comments that attempt to do the same.

Hey Gordon, let me see if I understand your model of this thread. I’ll write mine and can you tell me if it matches your understanding?

  • You write a post giving your rough understanding of a commonly discussed topic that many are confused by
  • Duncan objects to a framing sentence that he claims means “I know better than other people what's going on in those other people's heads; I am smarter/wiser/more observant/more honest." because it seems inappropriate and dangerous in this domain (spirituality)
  • You say “Dude, I’m just getting some quick thoughts off my chest, and it’s hard to explain everything”
  • Duncan says you aren’t responding to him properly - he does not believe this is a disagreement but a norm-violation
  • You say that Duncan is not welcome to prosecute norm violations on your wall unless they are norms that you support
Gordon Seidoh Worley · 5y · 4 points
Yes, that matches my own reading of how the interaction progressed, caveat any misunderstanding I have of Duncan's intent.

nods Then I suppose I feel confused by your final response.

If I imagine writing a shortform post and someone said it was:

  • Very rude to another member of the community
  • Endorsing a study that failed to replicate
  • Lied about an experience of mine
  • Tried to unfairly change a narrative so that I was given more status

I would often be like “No, you’re wrong” or maybe “I actually stand by it and intended to be rude” or “Thanks, that’s fair, I’ll edit”. I can also imagine times where the commenter is needlessly aggressive and uncooperative where I’d just strong downvote and ignore.

But I’m confused by saying “you’re not allowed to tell me off for norm-violations on my shortform”. To apply that principle more concretely, it could say “you’re not allowed to tell me off for lying on my shortform”.

My actual model of you feels a bit confused by Duncan’s claim or something, and wants to fight back against being attacked for something you don’t see as problematic. Like, it feels presumptuous of Duncan to walk into your post and hold you to what feels mostly like high standards of explanation, and you want to (rightly) say that he’s not allowed to do that.

Does that all seem right?

Gordon Seidoh Worley · 5y · 1 point
Yes. To add to this: what I'm most strongly reacting to is not what he says he's doing explicitly, which I'm fine with, but what further conversation suggests he is trying to do: act as norm enforcer rather than as norm-enforcement recommender.
[DEACTIVATED] Duncan Sabien · 5y · 4 points
I explicitly reject Gordon's assertions about my intentions as false, and ask (ASK, not demand) that he justify (i.e. offer cruxes) or withdraw them.
Gordon Seidoh Worley · 5y · 3 points
I cannot adequately do that here because it relies on information you conveyed to me in a non-public conversation. I accept that you say that's not what you're doing, and I am happy to concede that your internal experience of yourself as you experience it tells you that you are doing what you are doing, but I now believe that my explanation better describes why you are doing what you are doing than the explanation you are able to generate to explain your own actions.

The best I can maybe offer is that I believe you have said things that are better explained by an intent to enforce norms rather than argue for norms and imply that general case should be applied in this specific case. I would say the main lines of evidence revolve around how I interpret your turns of phrase, how I read your tone (confrontational and defensive), what aspects of things I have said you have chosen to respond to, how you have directed the conversation, and my general model of human psychology with the specifics you are giving me filled in.

Certainly I may be mistaken in this case and I am reasoning off circumstantial evidence which is not a great situation to be in, but you have pushed me hard enough here and elsewhere that it has made me feel it is necessary to act to serve the purpose of supporting the conversation norms I prefer in the places you have engaged me. I would actually really like this conversation to end because it is not serving anything I value, other than that I believe not responding would simply allow what I dislike to continue and be subtly accepted, and I am somewhat enjoying the opportunity to engage in ways I don't normally so I can benefit from the new experience.
4 · [DEACTIVATED] Duncan Sabien · 5y
I note for the record that the above is strong evidence that Gordon was not just throwing an offhand turn of phrase in his original post; he does and will regularly decide that he knows better than other people what's going on in those other people's heads. The thing I was worried about, and attempting to shine a light on, was not in my imagination; it's a move that Gordon endorses, on reflection, and it's the sort of thing that, historically, made the broader culture take forever to recognize e.g. the existence of people without visual imagery, or the existence of episodics, or the existence of bisexuals, or any number of other human experiences that are marginalized by confident projection.

I'm comfortable with just leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move." Personally, I find it unjustifiable and morally abhorrent. Gordon clearly does not. Maybe that's the crux.

[He] does and will regularly decide that he knows better than other people what's going on in those other people's heads. [...] Personally, I find it unjustifiable and morally abhorrent.

How can it be morally abhorrent? It's an epistemic issue. Factual errors often lead to bad consequences, but that doesn't make those errors moral errors. A moral error is an error about a moral fact, an assignment of value to situations, as opposed to a prediction of what's going on. And what someone thinks is a factual question, not a question of assigning value to an event.

9 · Wei Dai · 5y
Things that are morally abhorrent are not necessarily moral errors. For example, I can find wildlife suffering morally abhorrent, but there are obviously no moral errors or any kind of errors being committed there. Given that the dictionary defines abhorrent as "inspiring disgust and loathing; repugnant", I think "I find X morally abhorrent" just means "my moral system considers X to be very wrong or to have very low value."
8 · Vladimir_Nesov · 5y
That's one way for my comment to be wrong, as in "Systematic recurrence of preventable epistemic errors is morally abhorrent." When I was writing the comment, I was thinking of another way it's wrong: given morality vs. axiology distinction, and distinction between belief and disclosure of that belief, it might well be the case that it's a useful moral principle to avoid declaring beliefs about what others think, especially when those others disagree with the declarations. In that case it's a violation of this principle, a moral wrong, to declare such beliefs. (A principle like this gets in the way of honesty, so promoting it is contentious and shouldn't be an implicit background assumption. And the distinction between belief and its declaration was not clearly made in the above discussion.)
5 · [DEACTIVATED] Duncan Sabien · 5y
I find it morally abhorrent because, when not justified and made-cruxy (i.e. when done the only way I've ever seen Gordon do it), it's tantamount to trying to erase another person/another person's experience, and (as noted in my first objection) it often leads, in practice, to socially manipulative dismissiveness and marginalization that's not backed by reality.

So it's a moral principle under the belief vs. declaration distinction (as in this comment). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on entirely different level than a norm to avoid their declarations).

Personally I don't think the norm about declarations is on the net a good thing, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important but should be covered by a more specialized norm that doesn't cause as much collateral damage.

7 · [DEACTIVATED] Duncan Sabien · 5y
I'm not sure I'm exactly responding to what you want me to respond to, but: It seems to me that a declaration like "I think this is true of other people in spite of their claims to the contrary; I'm not even sure if I could justify why? But for right now, that's just the state of what's in my head" is not objectionable/doesn't trigger the alarm I was trying to raise. Because even though it fails to offer cruxes or detail, it at least signals that it's not A STATEMENT ABOUT THE TRUE STATE OF THE UNIVERSE, or something? Like, it's self-aware about being a belief that may or may not match reality? Which makes me re-evaluate my response to Gordon's OP and admit that I could have probably offered the word "think" something like 20% more charity, on the same grounds, though on net I still am glad that I spelled out the objection in public (like, the objection now seems to me to apply a little less, but not all the way down to "oops, the objection was fundamentally inappropriate").
8 · Vladimir_Nesov · 5y
(By "belief" I meant a belief that takes place in someone's head, and its existence is not necessarily communicated to anyone else. So an uttered statement "I think X" is a declaration of belief in X, not just a belief in X. A belief in X is just a fact about that person's mind, without an accompanying declaration. In this framing, the version of the norm about beliefs (as opposed to declarations) is the norm not to think certain thoughts, not a norm to avoid sharing the observations about the fact that you are thinking them.)

I think a salient distinction between declarations of "I think X" and "it's true that X" is a bad thing, as described in this comment. The distinction is that in the former case you might lack arguments for the belief. But if you don't endorse the belief, it's no longer a belief, and "I think X" is a bug in the mind that shouldn't be called "belief". If you do endorse it, then "I think X" does mean "X". It is plausibly a true statement about the state of the universe, you just don't know why; your mind inscrutably says that it is and you are inclined to believe it, pending further investigation.

So the statement "I think this is true of other people in spite of their claims to the contrary" should mean approximately the same as "This is true of other people in spite of their claims to the contrary", and a meaningful distinction only appears with actual arguments about those statements, not with different placement of "I think".
7 · Gordon Seidoh Worley · 5y
I forget if we've talked about this specifically before, but I rarely couch things in ways that make clear I'm talking about what I think rather than what is "true" unless I am pretty uncertain and want to make that really clear or expect my audience to be hostile or primarily made up of essentialists. This is the result of having an epistemology where there is no direct access to reality, so I literally cannot say anything that is not a statement about my beliefs about reality, so saying "I think" or "I believe" all the time is redundant because I don't consider eternal notions of truth meaningful (even mathematical truth, because that truth is contingent on something like the meta-meta-physics of the world and my knowledge of it is still mediated by perception, cf. certain aspects of Tegmark).

I think of "truth" as more like "correct subjective predictions, as measured against (again, subjective) observation", so when I make claims about reality I'm always making what I think of as claims about my perception of reality since I can say nothing else, and don't worry about appearing to make claims to eternal, essential truth since I so strongly believe such a thing doesn't exist that I need to be actively reminded that most of humanity thinks otherwise to some extent. Sort of like going so hard in one direction that it looks like I've gone in the other, because I've carved out everything that would have allowed someone to observe me having to navigate between what appear to others to be two different epistemic states where I only have one of them.

This is perhaps a failure of communication, and I think I speak in ways in person that make this much clearer and then I neglect the aspects of tone not adequately carried in text alone (though others can be the judge of that, but I basically never get into discussions about this concern in person, even if I do get into meta discussions about other aspects of epistemology). FWIW, I think Eliezer has (or at least had) a simil

leaving the conversation at "he, I, and LessWrong as a community are all on the same page about the fact that Gordon endorses making this mental move."

Nesov scooped me on the obvious objection, but as long as we're creating common knowledge, can I get in on this? I would like you and Less Wrong as a community to be on the same page about the fact that I, Zack M. Davis, endorse making the mental move of deciding that I know better than other people what's going on in those other people's heads when and only when it is in fact the case that I know better than those other people what's going on in their heads (in accordance with the Litany of Tarski).

the existence of bisexuals

As it happens, bisexual arousal patterns in men are surprisingly hard to reproduce in the lab![1] This is a (small, highly inconclusive) example of the kind of observation that one might use to decide whether or not we live in a world in which the cognitive algorithm of "Don't decide that you know other people's minds better than they do" performs better or worse than other inference procedures.


  1. J. Michael Bailey, "What Is Sexual Orientation and Do Women Have One?", section titled "Sexual Arousal Patter

... (read more)
4 · [DEACTIVATED] Duncan Sabien · 5y
Yes, as clearly noted in my original objection, there is absolutely a time and a place for this, and a way to do it right; I too share this tool when able and willing to justify it. It's only suspicious when people throw it out solely on the strength of their own dubious authority. My whole objection is that Gordon wasn't bothering to (I believe as a cover for not being able to).

as clearly noted in my original objection

Acknowledged. (It felt important to react to the great-grandparent as a show of moral resistance to appeal-to-inner-privacy conversation halters, and it was only after posting the comment that I remembered that you had acknowledged the point earlier in the thread, which, in retrospect, I should have at least acknowledged even if the great-grandparent still seemed worth criticizing.)

there is absolutely a time and a place for this

Exactly—and lesswrong.com is the place for people to report on their models of reality, which includes their models of other people's minds as a special case.

Other places in Society are right to worry about erasure, marginalization, and socially manipulative dismissiveness! But in my rationalist culture, while standing in the Citadel of Truth, we're not allowed to care whether a map is marginalizing or dismissive; we're only allowed to care about whether the map reflects the territory. (And if there are other cultures competing for control of the "rationalist" brand name, then my culture is at war with them.)

My whole objection is that Gordon wasn't bothering to

Great! Thank you for criticizing people who don'

... (read more)

criticizing people who don't justify their beliefs with adequate evidence and arguments

I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It's concise, and if handled carefully, it can be sufficient for communication. As evidence, it's a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn't yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that's only a small part of how human intelligence builds its map.

2 · [DEACTIVATED] Duncan Sabien · 5y
Yeah, if I had the comment to rewrite (I prefer not to edit it at this point) I would say "My whole objection is that Gordon wasn't bothering to (and at this point in the exchange I have a hypothesis that it's reflective of not being able to, though that hypothesis comes from gut-level systems and is wrong-until-proven-right as opposed to, like, a confident prior)."
7 · Gordon Seidoh Worley · 5y
So, having a little more space from all this now, I'll say that I'm hesitant to try to provide justifications because certain parts of the argument require explaining complex internal models of human minds that are a level more complex than I can explain even though I'm using them (I only seem to be able to interpret myself coherently one level of organization less than the maximum level of organization present in my mind) and because other parts of the argument require gnosis of certain insights that I (and to the best of my knowledge, no one) knows how to readily convey without hundreds to thousands of hours of meditation and one-on-one interactions (though I do know a few people who continue to hope that they may yet discover a way to make that kind of thing scalable even though we haven't figured it out in 2500 years, maybe because we were missing something important to let us do it).

So it is true that I can't provide adequate episteme of my claim, and maybe that's what you're reacting to. I don't consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan). So given that, I can see why from your point of view it looks like I'm just making stuff up or worse since I can't offer "justified belief" that you'd accept as "justified", and I'm not really much interested in this particular case in changing your mind as I don't yet completely know myself how to generate that change in stance towards epistemology in others even though I encountered evidence that lead me to that conclusion myself.

There's a dynamic here that I think is somewhat important: socially recognized gnosis.

That is, contemporary American society views doctors as knowing things that laypeople don't know, and views physicists as knowing things that laypeople don't know, and so on. Suppose a doctor examines a person and says "ah, they have condition X," and Amy responds with "why do you say that?", and the doctor responds with "sorry, I don't think I can generate a short enough explanation that is understandable to you." It seems like the doctor's response to Amy is 'socially justified', in that the doctor won't really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There's an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by 'immediate public justification' or something similar.

But then there's a social question, of how to grant that status. One might imagine that we want astronomers to be able to do their ... (read more)

6 · Vladimir_Nesov · 5y
That's not the point! Zack is talking about beliefs, not their declaration, so it's (hopefully) not the case that there is "a time and a place" for certain beliefs (even when they are not announced), or that beliefs require ability and willingness to justify them (at least for some senses of "justify" and "belief").
3 · [DEACTIVATED] Duncan Sabien · 5y
Oh, one last footnote: at no point did I consider the other conversation private, at no point did I request that it be kept private, and at no point did Gordon ask if he could reference it (to which I would have said "of course you can"). i.e. it's not out of respect for my preferences that that information is not being brought in this thread.
2 · Gordon Seidoh Worley · 5y
Correct, it was made in a nonpublic but not private conversation, so you are not the only agent to consider, though admittedly the primary one other than myself in this context. I'm not opposed to discussing disclosure, but I'm also happy to let the matter drop at this point since I feel I have adequately pushed back against the behavior I did not want to implicitly endorse via silence since that was my primary purpose in continuing these threads past the initial reply to your comment.
3 · [DEACTIVATED] Duncan Sabien · 5y
There's a world of difference between someone saying "[I think it would be better if you] cut it out because I said so" and someone saying "[I think it would be better if you] cut it out because what you're doing is bad for reasons X, Y, and Z." I didn't bother to spell out that context because it was plainly evident in the posts prior. Clearly I don't have any authority beyond the ability to speak; speaking IS what I was doing, and all I was doing.
4 · Gordon Seidoh Worley · 5y
I mostly disagree that better reasons matter in a relevant way here, especially since I am currently reading your intent as not one of informing me that you think there is a norm that should be enforced, but instead as a bid to enforce that norm. To me what's relevant is intended effect.

What's the difference?

Suppose I'm talking with a group of loose acquaintances, and one of them says (in full seriousness), "I'm not homophobic. It's not that I'm afraid of gays, I just think that they shouldn't exist."

It seems to me that it is appropriate for me to say, "Hey man, that's not ok to say." It might be that a number of other people in the conversation would back me up (or it might be that they defend the first guy), but there wasn't common knowledge of that fact beforehand.

In some sense, this is a bid to establish a new norm, by pushing the private opinions of a number of people into common knowledge. It also seems to me to be a virtuous thing to do in many situations.

(Noting that my response to the guy is not: "Hey, you can't do that, because I get to decide what people do around here." It's "You can't do that, because it's bad" and depending on the group to respond to that claim in one way or another.)




7 · [DEACTIVATED] Duncan Sabien · 5y
"Here are some things you're welcome to do, except if you do them I will label them as something else and disagree with them." Your claim that you had tentative conclusions that you were willing to update away from is starting to seem like lip service. Literally my first response to you centers around the phrase "I think it's a good and common standard to be skeptical of (and even hostile toward) such claims." That's me saying "I think there's a norm here that it's good to follow," along with detail and nuance à la here's when it's good not to follow it.
4 · Gordon Seidoh Worley · 5y
This is a question of inferred intent, not what you literally said. I am generally hesitant to take much moderation action based on what I infer, but you have given me additional reason to believe my interpretation is correct in a nonpublic thread on Facebook. (If admins feel this means I should use a Reign of Terror moderation policy I can switch to that.) Regardless, I consider this a warning of my local moderation policy only and don't plan to take action on this particular thread.
4 · Ben Pace · 5y
Er, I generally have FB blocked, but I have now just seen the thread on FB that Duncan made about you, and that does change how I read the dialogue (it makes Duncan’s comments feel more like they’re motivated by social coordination around you rather than around meditation/spirituality, which I’d previously assumed).

(Just as an aside, I think it would’ve been clearer to me if you’d said “I feel like you’re trying to attack me personally for some reason and so it feels especially difficult to engage in good faith with this particular public accusation of norm-violation” or something like that.)

I may make some small edit to my last comment up-thread a little after taking this into account, though I am still curious about your answer to the question as I initially stated it.
2 · [DEACTIVATED] Duncan Sabien · 5y
I can have different agendas and follow different norms on different platforms. Just saying. If I were trying to do the exact same thing in this thread as I am in the FB thread, they would have the same words, instead of different words. (The original objection *does* contain the same words, but Gordon took the conversation in meaningfully different directions on the two different platforms.)

I note that above, Gordon is engaging in *exactly* the same behavior that I was trying to shine a spotlight on (claiming to understand my intent better than I do myself/holding to his model that I intend X despite my direct claims to the contrary).

Outside observer takeaway: There's a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn't come with an obligation to follow up and defend things; it's meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.

(Some of the meta-level fighting seemed not-fine, but that's for another comment.)

5 · Viliam · 5y
Seems to me that modern life is full of distractions. As a smart person, you probably have work that requires thinking (not just moving your muscles in a repetitive way). In your free time there is the internet, with all the websites optimized for addictiveness. Plus all the other things you want to do (books to read, movies to see, friends to visit). Electricity can turn your late night into a day; you can take a book or a smartphone everywhere. So, unless we choose it consciously, there are no silent moments to get in contact with yourself... or whatever higher power you imagine there to be, talking to you.

I wonder what the effect ratio is between meditation and simply taking a break and wondering about stuff. Maybe it's our productivity-focused thinking saying that meditating (doing some hard work in order to gain supernatural powers) is a worthy endeavor, while goofing off is a sin.
3 · Gordon Seidoh Worley · 5y
"Simply taking a break and wondering about stuff" is a decent way to get in touch with this thing I'm pointing at. The main downside to it is that it's slow, in that for it to produce effects similar to meditation probably requires an order of magnitude more time, and likely won't result in the calmest brain states where you can study your phenomenology clearly.
1 · Xenotech · 5y
Are there individuals willing to explicitly engage in comforting discussion regarding these things you've written about? Any willing to extend personal invitations? I would love to discuss spirituality with otherwise "rational" intelligent people. Please consider reaching out to me personally - it would be transformative: drawnalong@gmail.com

I have plans to write this up more fully as a longer post explaining the broader ideas with visuals, but I thought I would highlight one that is pretty interesting and try out the new shortform feature at the same time! As such, this is not optimized for readability, has no links, and I don't try to backup my claims. You've been warned!

Suppose you frequently found yourself identifying with and feeling like you were a homunculus controlling your body and mind: there's a real you buried inside, and it's in the driver's seat. Sometimes your mind and body do what "you" want, sometimes it doesn't and this is frustrating. Plenty of folks reify this in slightly different ways: rider and elephant, monkey and machine, prisoner in cave (or audience member in theater), and, to a certain extent, variations on the S1/S2 model. In fact, I would propose this is a kind of dual process theory of mind that has you identifying with one of the processes.

A few claims.

First, this is a kind of constant, low-level dissociation. It's not the kind of high-intensity dissociation we often think of when we use that term, but it's still a separation of sense of ... (read more)

More surprised than perhaps I should be that people take up tags right away after creating them. I created the IFS tag just a few days ago after noticing it didn't exist but wanted to link it and I added the first ~5 posts that came up if I searched for "internal family systems". It now has quite a few more posts tagged with it that I didn't add. Super cool to see the system working in real time!

One of the fun things about the current Good Heart Token week is that it's giving me cover to try less hard to write posts. I'm writing a bunch, and I have plausible deniability if any of them end up not being that good—I was Goodharting. Don't hate the player, hate the game.

I'm not sure how many of these posts will stand the test of time, but I think there's something valuable about throwing a bunch of stuff at the wall and seeing what sticks. I'm not normally going to invest in that sort of strategy; I just don't have time for it. But for one week it's f... (read more)

tl;dr: read multiple things concurrently so you read them "slowly" over multiple days, weeks, months

When I was a kid, it took a long time to read a book. How could it not: I didn't know all the words, my attention span was shorter, I was more restless, I got lost and had to reread more often, I got bored more easily, and I simply read fewer words per minute. One of the effects of this is that when I read a book I got to live with it for weeks or months as I worked through it.

I think reading like that has advantages. By living with a book for... (read more)

4 · Raemon · 4y
Interesting idea, thanks. I think this also hints at other ways to approach this (i.e. maybe rather than interspersing books with other books, you could intersperse them with non-reading things that still give you some chance to have ideas from multiple domains bumping into each other)

Explanations are liftings from one ontology to another.

2 · Raemon · 4y
Seems true, although in some cases I feel like one of the ontologies is just an obviously bigger/better version of another one. 
3 · Gordon Seidoh Worley · 4y
This actually fits the lifting metaphor (which is itself a metaphor)!

I get worried about things like this article that showed up on the Partnership on AI blog. Reading it there's nothing I can really object to in the body of the post: it's mostly about narrow AI alignment and promotes a positive message of targeting things that benefit society rather than narrowly maximizing a simple metric. But it's titled "Aligning AI to Human Values means Picking the Right Metrics" and that implies to me a normative claim that reads in my head something like "to build aligned AI it is necessary and sufficient to p... (read more)

3 · jonathanstray · 4y
Hi Gordon. Thanks for reading the post. I agree completely that the right metrics are nowhere near sufficient for aligned AI — further I’d say that “right” and “aligned” have very complex meanings here. What I am trying to do with this post is shed some light on one key piece of the puzzle, the actual practice of incorporating metrics into real systems. I believe this is necessary, but don’t mean to suggest that this is sufficient or unproblematic. As I wrote in the post, “this sort of social engineering at scale has all the problems of large AI systems, plus all the problems of public policy interventions.”

To me the issue is that large, influential optimizing systems already exist and seem unlikely to be abandoned. There may be good arguments that a particular system should not be used, but it’s hard for me to see an argument to avoid this category of technology as a whole. As I see it, the question is not so much “should we try to choose appropriate metrics?” but “do we care to quantitatively monitor and manage society-scale optimizing systems?” I believe there is an urgent need for this sort of work within industry.

Having said all that, you may be right that the title of this post overpromises. I’d welcome your thoughts here.

Sometimes people at work say to me "wow, you write so clearly; how do you do it?" and I think "given the nonsense I'm normally trying to explain on LW, it's hardly a surprise I've developed the skill well enough that when it's something as 'simple' as explaining how to respond to a page or planning a technical project that I can write clearly; you should come see what it looks like when I'm struggling at the edge of what I understand!".

It seems like humans need an outgroup.

My evidence is not super strong, but I notice a few things:

  • There's less political tension and infighting when there's a clear enemy. Think about wartime.
  • There's a whole political theory about creating ingroup cohesion based on defining the ingroup against the outgroup. This is how a number of nation-states and religions were congealed.
  • Lots of political infighting has ramped up over the last 30+ years. This period has also been a long period of peace with no threat of major power wars. Theory: people constructed an outg
... (read more)
7 · Dagon · 2y
This is the basic intuition behind the "war on X" framing of political topics. Making Drugs, or Cancer, or whatever, the "outgroup" triggers that sense of us-vs-them. But it doesn't work that well, because human brains are more complicated than that, and are highly tuned to the mix of competition and cooperation with other humans, not non-agentic things.

One of the first things people do in their conception of members of outgroups is to forget or deny their humanity. This step fails for things that already aren't human, and I suspect will derail that path to cohesion.
6 · Viliam · 2y
Humans are so fucked up. "We need an enemy that we can believe is inhuman, so we can unite to fight it." "Okay, what about Death? That's a logical choice considering that it is already trying to kill you..." "Nah, too inhuman."
4 · ChristianKl · 2y
War framing leads to centralization of power. It allows those on the top to weaken their political enemies, and that in turn results in fewer open conflicts. This has advantages but also comes with its problems, as dissenting perspectives about how to address problems get pushed out.
3 · Yitz · 2y
This is why I strongly believe a Hollywood-style alien or Terminator-AI attack would do incredible things for uniting humanity. Unfortunately, AGI irl is unlikely to present in such a way that would make it an easy thing to outgroup…

I recently watched all 7 seasons of HBO's "Silicon Valley" and the final episode (or really the final 4 episodes leading up into the final one) did a really great job of hitting on some important ideas we talk about in AI safety.

Now, the show in earlier seasons has played with the idea of AI with things like an obvious parody of Ben Goertzel and Sophia, discussion of Roko's Basilisk, and of course AI that Goodharts. In fact, Goodharting is a pivotal plot point in how the show ends, along with a Petrov-esque ending where hard choices have to be made under u... (read more)

NB: There's something I feel sad about when I imagine what it's like to be others, so I'm going to ramble about it a bit in shortform because I'd like to say this and possibly say it confusingly rather than not say it at all. Maybe with some pruning this babble can be made to make sense.

There's a certain strain of thought and thinkers in the rationality community that make me feel sad when I think about what it must be like to be them: the "closed" individualists. This is as opposed to people who view personal identity as... (read more)

4 · Dagon · 4y
[upvoted for talking about something that's difficult to model and communicate about]

Hmm. I believe (with fairly high confidence - it would take a big surprise to shift me) a combination of empty and closed. Moments of self-observed experience are standalone, and woven into a fabric of memories in a closed, un-sharable system that will (sooner than I prefer) physically degrade into non-experiencing components.

I haven't found anyone who claims to be open AND is rational enough to convince me they're not just misstating what they actually experience. In fact, I'd love to hear someone talk about what it means to "want" something if you're experiencing all things simultaneously.

I'm quite sympathetic to the argument that it is what it is, and there's no reason to be sad. But I'm also unsure whether or why my acceptance of closed-empty existence makes you sad. Presumably, if your consciousness includes me, you know I'm not particularly sad overall (I certainly experience pain and frustration, but also joy and optimistic anticipation, in a balance that seems acceptable).
Gordon Seidoh Worley (4y):
Because I know the joy of grokking the openness of the "individual" and see the closed approach creating inherent suffering (via wanting for the individual) that cannot be accepted because it seems to be part of the world.
Viliam (4y):
I wonder how much the "great loneliness for creatures like us" is a necessary outcome of realizing that you are an individual, and how much it is a consequence of e.g. not having the kinds of friends you want to have, i.e. something that you wouldn't feel under the right circumstances.

From my perspective, what I miss is people similar to me, living close to me. I can find like-minded people, but they live in different countries (I met them on LW meetups). Thus, I feel more lonely than I would feel if I lived in a different city. Similarly, being extraverted and/or having greater social skills could possibly help me find similar people in my proximity, maybe. Also, sometimes I meet people who seem like they could be what I miss in my life, but they are not interested in being friends with me. Again, this is probably a numbers game; if I could meet ten or hundred times more people of that type, some of them could be interested in me. (In other words, I wonder whether this is not yet another case of "my personal problems, interpreted as a universal experience of the humankind".)

Yet another possible factor is the feeling of safety. The less safe I feel, the greater the desire of having allies, preferably perfect allies, preferably loyal clones of myself. Plus the fear of death. If, in some sense, there are copies of me out there, then, in some sense, I am immortal. If I am unique, then at my death something unique (and valuable, at least to me) will disappear from this universe, forever.
Gordon Seidoh Worley (4y):
My quick response is that all of these sources of loneliness can still be downstream of using closed individualism as an intuitive model. The more I am able to use the open model the more safe I feel in any situation and the more connected I feel to others no matter how similar or different they are to me. Put one way, every stranger is a cousin I haven't met yet, but just knowing on a deep level that the world is full of cousins is reassuring.

Strong and Weak Ontology

Ontology is how we make sense of the world. We make judgements about our observations and slice up the world into buckets we can drop our observations into.

However I've been thinking lately that the way we normally model ontology is insufficient. We tend to talk as if ontology is all one thing, one map of the territory. Maybe these can be very complex, multi-manifold maps that permit shifting perspectives, but one map all the same.

We see some hints at the breaking of this ontology of ontology as a single map by noticing the way... (read more)

So long as shortform is salient for me, might as well do another one on a novel (in that I've not heard/seen anyone express it before) idea I have about perceptual control theory, minimization of prediction error/confusion, free energy, and Buddhism that I was recently reminded of.

There is a notion within Mahayana Buddhism of the three poisons: ignorance, attachment (or, I think we could better term this here, attraction, for reasons that will become clear), and aversion. This is part of one model of where suffering arises from. Others express these n... (read more)

Small boring, personal update:

I've decided to update my name here and various places online.

I started going by "G Gordon Worley III" when I wrote my first academic paper and discovered there would be significant name collision if I just went by "Gordon Worley". Since "G Gordon Worley III" is, in fact, one version of my full legal name and is, as best as I can tell, globally unique, it seemed a reasonable choice.

A couple years ago I took Zen precepts and received a Dharma name: "Sincere Way." In the Sino-Japanese used for Dharma names, "誠道", or "Seidoh" ... (read more)

In a world that is truly and completely post-scarcity there would be no need for making tradeoffs.

Normally when we think about a post-scarcity future we think in terms of physical resources like minerals and food and real estate because for many people these are the limiting resources.

But the world is wealthy enough that some people already have access to this kind of post-scarcity. That is, they have enough money that they are not effectively limited in access to physical resources. If they need food, shelter, clothing, materiel, etc. they can get it in s... (read more)

Viliam (2y):
There will always be a way to ruin post-scarcity if humanity reproduces exponentially, unless some new laws of physics are discovered that would allow unlimited exponential growth. Or maybe future legislation will make reproduction the only remaining scarce thing. As people currently get richer, they have fewer babies on average, but the reason is that we live in (from a historical perspective) unprecedented luxury that we now take for granted, and need to give up a part of it when taking care of kids. Post-scarcity robotic nannies could easily revert this trend.

I wonder what it is like to be super rich. I can easily imagine burning lots of money on things that my current self would consider reasonable. First, I could somewhat trade money for time, by paying people to do stuff that I want to get done but isn't inherently enjoyable and would take too much time to do myself. Second, I could move to more ambitious projects that are currently clearly out of my reach, so I usually do not even think much about them. Third, there are global projects like solving poverty or curing malaria that even Bill Gates cannot handle alone.

Yeah, immortality would be nice; it would remove a lot of pressure from... almost everything. I wonder whether humans will invent some way to ruin this, too. For example, imagine a culture that you want to be a part of, that updates in some way frequently (changes its norms; evolves new jargon), so you need to spend a lot of time every day keeping up with it; and if you fall off the wagon once, it will be very difficult to join again. Maybe to avoid low status, you will need to spend a lot of time doing some stupid things that you do not enjoy, but it will be a kind of multiplayer prisoner's dilemma. Some kind of trap, where people get punished for (a) refusing to sacrifice to Moloch, and (b) interacting with those who get punished; and even if many of your friends would agree that the system is stupid, they would not be ready to get socially shunned

If I want to continue to rack up Good Heart Tokens I now have to make legit contributions, not just make a bid to feed me lots of karma because I'm going to donate it.

So, what would be an interesting post you'd enjoy reading from me? It'll have to be something I can easily put together without doing a lot of research.

I unfortunately don't have a backlog of things to polish up and put out because I've been working on a book, and although I have draft chapters none of them is quite ready to go out. I might be able to get one of them out the door before GHT g... (read more)

One of the nice things in my work is I can just point to when I think something human is getting in the way. Like, sometimes someone says an idea is a bad idea. If I dig in, sometimes there's a human reason they say that: they don't actually think it's a bad idea, they just don't think they will like doing the work to make the idea real, or something similar. Those are different things, though, and it's important to have a conversation to sort that out so that we can move forward on two topics: is the idea good, and why don't you want to be involved with it?

Bu... (read more)

TLW (2y):
An issue with sharing human stories is the juxtaposition between:

1. Many people are/must be anonymous online.
2. Sharing human stories is often self-doxing.
Matt Goldenberg (2y):
Do you have a human story about why sharing stories is self-doxxing? I imagine most stories can be told in a way that doesn't doxx, especially if you change some details that are irrelevant to the crux.
TLW (2y):
Some stories aren't. That being said, many stories are. I would give examples from my own experience on this site, but they are, uh, self-doxing.

Most of the issues arise either a) when the crucial details are themselves the details that you have to hide ("How can you be an expert on X given that there's about a half-dozen people that know X?" is a classic, for instance), or b) the story in isolation doesn't leak enough bits of information to self-dox, but when combined with other already-told (and hence irrevocable) stories is enough. (Remember, you only need ~33 bits of information to uniquely identify an individual[1]. That's tiny.)

[1] Although of course this can be more difficult in practice.
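TLW's ~33-bit figure is straightforward to sanity-check. A back-of-the-envelope sketch in Python (the population constant and the "~6 experts" example are rough assumptions for illustration, not figures from the thread):

```python
import math

WORLD_POPULATION = 8_000_000_000  # rough order-of-magnitude assumption

# Bits needed to single out one person among everyone alive:
bits_to_identify = math.log2(WORLD_POPULATION)
print(f"{bits_to_identify:.1f} bits")  # ~32.9, i.e. the "~33 bits" above

# Each revealed attribute shrinks your anonymity set. For example,
# being "one of ~6 people who know X" leaks most of it by itself:
def bits_leaked(anonymity_set_size: int) -> float:
    return math.log2(WORLD_POPULATION / anonymity_set_size)

print(f"{bits_leaked(6):.1f} bits")  # ~30.3
```

Leaks from independent stories add up in roughly this way, which is why several individually safe anecdotes can jointly dox.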

Hogwarts Houses as Religions

Okay, this is just a fun nonsense idea I thought up. Please don't read too much into it, I'm just riffing. Sorry if I've mischaracterized a religion or Hogwarts house!

What religion typifies each Hogwarts house?

I'll start with Hufflepuff, which I think is aligned with Buddhism: treat everyone the same, and if you want salvation the only option is to do multiple lifetimes' worth of work.

Next is Ravenclaw, which looks a lot like Judaism: there's a system to the world, you gotta follow the rules, and also let's debate and res... (read more)

If CAIS is sufficient for AGI, then likely humans are CAIS-style general intelligences.

Matt Goldenberg (5y):
What's the justification for this? Seems pretty symmetric to "If wheels are sufficient for getting around, then it's likely humans evolved to use wheels."
Gordon Seidoh Worley (5y):
Human brains look like they are made up of many parts with various levels and means of integration. So if it turns out to be the case that we could build something like AGI via CAIS, that is, that CAIS can be assembled in a way that results in general intelligence, then I think it's likely that human intelligence doesn't have anything special going on that would meaningfully differentiate it from the general notion of CAIS other than being implemented in meat.

Robert Moses and AI Alignment

It's useful to have some examples in mind of what it looks like when an intelligent agent isn't aligned with the shared values of humanity. We have some extreme examples of this, like paperclip maximizers, and some less extreme but extreme-in-human-terms examples, like dictators such as Stalin, Mao, and Pol Pot, who killed millions in pursuit of their goals, but these feel like outliers that people can too easily argue are so extreme that no "reasonable" system would have these problems.

Okay, so let's t... (read more)

ChristianKl (3y):
I don't think Moses did useful things just because they brought him into power. From reading Caro's biography it seems to me that, especially at the beginning, Moses had good intentions.

When it comes to parks, parks also didn't just help some people but helped most people. When Moses caused a park to be built when the money would have been better spent on a new school, the issue isn't that fewer people profited from the park than would have profited from the school.

I think a key problem with Moses is that as his power grew, his workload also grew. Instead of delegating some of his power to people under him, he made decisions about projects where he had little time to invest in the project. Had he invested the time, he could likely have understood that mothers who want to go with small children to the park have a problem when they use a stroller and the entry of the park has stairs. Moses, however, cut himself off from being questioned, and as a result such issues didn't get addressed when planning new parks.

Other problems came from him doing things to keep up his power by making the system both intransparent and corrupt. While intransparency might come with an AGI, I would be more surprised if issues arise because the AGI cuts itself off from information flow or doesn't have enough time to manage its duties. The AGI can just spin up more instances.

Won't I get bored living forever?

I feel like this question comes up often as a kind of pushback against the idea of living an unbounded number of years, or even just a really, really long time, beyond the scale of human comprehension of what it would mean to live that many years.

I think most responses rely on intuition about our lives. If your life today seems full of similar days and you think you'd get bored, not living forever or at least taking long naps between periods of living seems appealing. Alternatively, if your life today seems full of new expe... (read more)

Viliam (3y):
When people barely live 100 years and we worry about them getting bored if they could live forever... that seems to me like finding a beggar who only has $100 net worth and is asking for some spare change, and explaining to him that giving him more money would be bad because eventually he would become a billionaire and everyone knows that power corrupts. Yeah, it has some philosophical merit, but is completely unrelated to the life as we know it.
Gunnar_Zarncke (3y):
For me, this looks like a very simplified treatment (I mean in an I-need-to-simplify-to-model-it way; I wanted to avoid the word 'academic'), while boredom, as you seem to use the word, is a very practical and complex emotion. I can't disagree with your model, but I don't think it captures what people feel is boring now or what would be boring in the future. I think a good counterpoint is the one by Yoav: you can just go to sleep until something new comes up, something that is not possible if your time is limited to begin with.
Dagon (3y):
When this argument is presented to me, there are two counterpoints I often use:

1. Simple induction. I wasn't bored enough to want to die yesterday, nor the day after that (today). Assuming that future days are roughly as similar as the past two, that degree of novelty is sufficient.

2. Options are not commitments. If I ever do want to die, I can do so. If it never happens, or doesn't happen for a thousand or a hundred thousand years, that's fine too.

For those who really want to engage on #2, I've had interesting conversations about akrasia-like self-disagreements where "I am bored and would prefer to have died" but "I have FOMO and will not willingly die". For this, there is a possibility of mechanism design, where the decision can be made rule-based: something like "after N years (say, 3/4 the median lifespan of your reference group), take a permanent poison, such that you must take an antidote every week/year. If you ever get bored/unhappy enough to not take the antidote, you die."

A tougher disagreement is the Malthusian one - old people are already too powerful, and it'll get far worse if they're healthy and active for centuries (let alone longer). Further, they take resources/opportunities from the young. The availability heuristic for this is vampires, not techno-utopia. I have yet to really find a good counterargument for this - it quite likely contains a fair grain of truth, at least for the current planetary and human governance limitations.
Yoav Ravid (3y):
Another option for #2 is to go into some kind of preservation instead of total death forever. Then you can write instructions on when to wake you up (if X person asks, if X event happens, in X years, etc.). You still miss out on some stuff, but not literally everything. The second benefit is that, for the people who stayed, it's not like you died and they'll never be able to interact with you again; you just took a really long vacation :)
Gunnar_Zarncke (3y):
If powerful old people go to sleep when they are bored, then they run the risk of being overtaken by younger, faster, and less risk-averse people. Maybe a good model is corporations: corporations are also immortal and can learn more and more, but they also have more to lose, and they seem to acquire knowledge that also slows them down. If there are changes in the environment, or innovations, they often cannot adapt fast enough and are quickly overtaken by younger players.

Personality quizzes are fake frameworks that help us understand ourselves.

What-character-from-show-X-are-you quizzes, astrology, and personality categorization instruments (think Big-5, Myers-Briggs, Magic the Gathering colors, etc.) are perennially popular. A good question to ask is why humans seem to like this stuff so much that even fairly skeptical folks tend to object not to categorization itself, but only that the categorization of some particular system is bad.

My stab at an answer: humans are really confused about themselves, and are interested in thi... (read more)

Dagon (4y):
They help us understand others as well - even as fake frameworks, anything that fights against https://wiki.lesswrong.com/wiki/Typical_mind_fallacy is useful. I'd argue these categorizations don't go far enough, and imply a smaller space of variation than is necessary for actual modeling of self or others, but a lot of casual observers benefit from just acknowledging that there IS variation.

As I work towards becoming less confused about what we mean when we talk about values, I find that it feels a lot like I'm working on a jigsaw puzzle where I don't know what the picture is. Also all the pieces have been scattered around the room and I have to find the pieces first, digging between couch cushions and looking under the rug and behind the bookcase, let alone figure out how they fit together or what they fit together to describe.

Yes, we have some pieces already and others think they know (infer, guess) what the picture is from those ... (read more)

Most of my most useful insights come not from realizing something new and knowing more, but from realizing something ignored and being certain of less.

After seeing another LW user (sorry, forgot who) mention this post in their commenting guidelines, I've decided to change my own commenting guidelines to the following, matching pretty close to the SSC commenting guidelines that I forgot existed until just a couple days ago:

Comments should be at least two of true, useful, and kind, i.e. you believe what you say, you think the world would be worse without this comment, and you think the comment will be positively received.

I like this because it's simple and it says what rather than how. My old gui... (read more)

http://www.overcomingbias.com/2019/12/automation-so-far-business-as-usual.html

I similarly suspect automation is not really happening in a dramatically different way thus far. Maybe that will change in the future (I think it will), but it's not here yet.

So why so much concern about automation?

I suspect because of something they don't look at in this study much (based on the summary): displacement. People are likely being displaced from jobs into other jobs by automation or the perception of automation, and a few of those exit the labor market ra... (read more)

I started showing symptoms and testing positive for COVID on Saturday. I'm now over nearly all the symptoms other than some pain in parts of my body and fatigue.

The curious question in my mind is, what's causing this pain and fatigue and what can be done about it?

My high-level, I'm-not-a-doctor theory is that there's something like generalized inflammation happening in my body, doing things makes it worse, and then my body sends out the signal to rest in order to get the inflammation back down. Once it's down I can do things for a while until it builds up ... (read more)

Maybe spreading cryptocurrency is secretly the best thing we can do short term to increase AI safety because it increases the cost of purchasing compute needed to build AI. Possibly offset, though, by the incentives to produce better processors for cryptocurrency mining that are also useful for building better AI.

Steven Byrnes (3y):
I'd say "more than offset". Increases chip makers' economies of scale and justifies higher R&D outlays...

This post suggests a feature idea for LessWrong to me:

https://www.lesswrong.com/posts/6Nuw7mLc6DjRY4mwa/the-national-defense-authorization-act-contains-ai

It would be pretty cool if, instead of a lot of comments whose order is determined by votes or time of posting, it were possible to write a post with parts that could be commented on directly. So, for example, the comments for a particular section could live straight in the section rather than down at the bottom. This could be an interesting way to deal with lots of comments on large, structured posts.
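The feature amounts to attaching comment threads to sections instead of to the whole post. A hypothetical sketch of the data model (all names invented; this is not LessWrong's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    body: str

@dataclass
class Section:
    heading: str
    body: str
    comments: list[Comment] = field(default_factory=list)

@dataclass
class Post:
    title: str
    sections: list[Section]

    def comment_on(self, heading: str, comment: Comment) -> None:
        # Attach the comment to the section it discusses, rather than
        # to one post-wide list ordered by votes or posting time.
        for section in self.sections:
            if section.heading == heading:
                section.comments.append(comment)
                return
        raise KeyError(heading)

post = Post("Example post", [Section("Summary", "..."), Section("Analysis", "...")])
post.comment_on("Analysis", Comment("reader1", "This part needs a source."))
```

Rendering would then interleave each section's comments with its body, which is roughly how inline commenting works in document editors.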

Pattern (3y):
You have reinvented Google Docs. A similar effect could be achieved by having a sequence which...all appears on one page. (With the comments.)
Matt Goldenberg (3y):
Medium also has this feature, and I think it improves the Medium discourse quite a bit.
Pattern (3y):
Which feature?
Matt Goldenberg (3y):
Commenting on specific parts of articles and seeing those comments as you go through the article.

A few months ago I found a copy of Staying OK, the sequel to I'm OK—You're OK (the book that probably did the most to popularize transactional analysis), on the street near my home in Berkeley. Since I had previously read Games People Play and had not thought about transactional analysis much since, I scooped it up. I've just gotten around to reading it.

My recollection of Games People Play is that it's the better book (based on what I've read of Staying OK so far). Also, transactional analysis is kind of in the water in ways... (read more)

Off-topic riff on "Humans are Embedded Agents Too"

One class of insights that come with Buddhist practice might be summarized as "determinism", as in, the universe does what it is going to do no matter what the illusory self predicts. Related to this is the larger Buddhist notion of "dependent origination", that everything (in the Hubble volume you find yourself in) is causally linked. This deep deterministic interdependence of the world is hard to appreciate from our subjective experience, because the creation of ontology crea... (read more)

I just noticed something odd. It's not that odd: the cognitive bias that powers it is well known. It's more odd that a company is leaving money on the table by not exploiting it.

I primarily fly United and book rental cars with Avis. United offers to let you buy refundable fares for a little more than the price of a normal ticket. Avis lets you pre-pay for your rental car to receive a discount. These are symmetrical situations presented with different framings because the default action is different in the two cases: on United the default is to have a non-... (read more)
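To make the symmetry concrete, here is a toy calculation (the prices are invented for illustration, not real fares):

```python
# Two offers with identical economics, framed against different defaults.
flexible_price = 220  # refundable fare / pay-at-counter rate
locked_price = 200    # non-refundable fare / pre-paid rate

# United's framing: the default is the locked fare, and flexibility
# is sold as a surcharge on top of it.
united_surcharge = flexible_price - locked_price

# Avis's framing: the default is the flexible rate, and locking in
# is sold as a discount off of it.
avis_discount = flexible_price - locked_price

# Same $20 gap either way; only the reference point (the default) differs.
print(united_surcharge, avis_discount)  # 20 20
```

Whether the $20 reads as a fee or a saving is purely a matter of which option is presented as the default.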

Martin Vlach (1y):
My guess is that the rental car market has less direct/local competition, while airlines are centralized on airport routes and many cheap-flight search engines (e.g. Kiwi.com) make this a favorable mindset. Is there a price comparison for car rentals?

Isolate the Long Term Future

Maybe this is worthy of a post, but I'll do a short version here to get it out.

  • In modern computer systems we often isolate things to increase reliability.
  • If one isolated system goes down, the others keep working.
  • Examples:
    • multiple data centers spread around the world
    • using multiple servers that all do the same thing running in those different data centers
    • replicating data between data centers
    • isolating customers within a single data center so if one goes down only the customers using that data center are affected
  • We can do the same k
... (read more)
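The bullet points describe standard cell-based redundancy; a toy version in code (names and behavior are illustrative sketches, not any real system):

```python
class Cell:
    """One isolated deployment (e.g. a data center) serving its own customers."""

    def __init__(self, name: str):
        self.name = name
        self.healthy = True
        self.data: dict[str, str] = {}

    def write(self, key: str, value: str) -> None:
        if not self.healthy:
            raise RuntimeError(f"cell {self.name} is down")
        self.data[key] = value

def replicate(cells: list, key: str, value: str) -> None:
    """Copy a write to every healthy cell (asynchronously, in a real system)."""
    for cell in cells:
        if cell.healthy:
            cell.write(key, value)

cells = [Cell("us-east"), Cell("eu-west"), Cell("ap-south")]
replicate(cells, "greeting", "hello")

cells[0].healthy = False  # us-east fails...
# ...but customers pinned to the other cells are unaffected:
assert cells[1].data["greeting"] == "hello"
assert cells[2].data["greeting"] == "hello"
```

The failure of one cell is contained: only its own customers are affected, which is the property the bullets above point at.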

Psychological Development and Age

One of the annoying things about developmental psychology is disentangling age-related from development-related effects.

For example, as people age they tend to get more settled or to have their lives more sorted out. I'm pointing at the thing where kids and teenagers and adults in their 20s tend to have a lot of uncertainty about what they are going to do with their lives, and that slowly decreases over time.

A simple explanation is that it's age related, or maybe more properly experience related. As a person lives more years,... (read more)

ADHD Expansionism

I'm not sure I fully endorse this idea, hence short form, but it's rattling around inside my head and maybe we can talk about it?

I feel like there's a kind of ADHD (or ADD) expansionism happening, where people are identifying all kinds of things as symptoms of ADHD, especially subclinical ADHD.

On the one hand this seems good in the sense that this kind of expansionism seems to actually be helping people, by giving them permission to be the way they are via a diagnosis and giving them strategies they can try to live their life bett... (read more)

Dagon (3y):
There's not much agreement on what to call it when "normal" is harmful, but not so overwhelmingly common as to seem immutable.  Agreed that thinking of it as a pathology doesn't quite cut it, but also "acceptable" seems wrong.

You're always doing your best

I like to say "you're always doing your best", especially as kind words to folks when they are feeling regret.

What do I mean by that, though? Certainly you can look back at what you did in any given situation and imagine having done something that would have had a better outcome.

What I mean is that, given all the conditions under which you take any action, you always did the best you could. After all, if you could have done something better given all the conditions, you would have.

The key is that all the conditions include the e... (read more)

I feel like something is screwy with the kerning on LW over the past few weeks. Like I keep seeing sentences that look like they are missing space between the period and the start of the next sentence but when I check closely they are not. For whatever reason this doesn't seem to show in the editor, only in the displayed text.

I think I've only noticed this with comments and short form, but maybe it's happening other places? Anyway, wanted to see if others are experiencing this and raise a flag for the LW team that a change they made may be behaving in unexpected ways.

Ben Pace (4y):
It is totally real and it's been this way over two months. It's an issue with Chrome, and I'm kinda boggled that Chrome doesn't jump on these issues, it's a big deal for readability.
habryka (4y):
Yep, it's a Chrome bug. It's kind of crazy.

Story stats are my favorite feature of Medium. Let me tell you why.

I write primarily to impact others. Although I sometimes choose to do very little work to make myself understandable to anyone who is more than a few inferential steps behind me and then write out on a far frontier of thought, nonetheless my purpose remains sharing my ideas with others. If it weren't for that, I wouldn't bother to write much at all, and certainly not in the same way as I do when writing for others. Thus I care instrumentally a lot about being able to assess if I a... (read more)

Ruby (4y):
Very quick thought: basically the reason we haven't done and might not do more in this direction is how it might alter what gets written. It doesn't seem good if people were to start writing more heavily for engagement metrics. It's also not clear to me that engagement metrics capture the true value that matters in intellectual contributions.
Raemon (4y):
(Habryka has an old comment somewhere delving into this, which I couldn't find. But the basic gist was "the entire rest of the internet is optimizing directly for eyeballs, and it seemed good for LessWrong to be a place trying to have a different set of incentives")