G Gordon Worley III

If you are going to read just one thing I wrote, read The Problem of the Criterion.

More AI-related stuff collected over at PAISRI

Sequences

Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

Seeking Truth Too Hard Can Keep You from Winning

You seem to have anticipated this response. The definition you begin with—truth as "accurate predictions about our experiences"—is fairly narrow. One could respond that what you identify here are the effects of truth (presumably? but maybe not necessarily), while truth is whatever knowledge enables us to make these predictions. In any case, it doesn't seem self-evident that truth is necessarily concerned with making predictions, and I wonder how much of the argument hinges upon this strict premise. How would it alter if etc.

Not much. You could choose some other definition of truth if you like. My goal was to use a deflationary definition of truth, both to avoid stumbling into philosophical minefields and because I'm not committed to metaphysical realism myself, so it would be dishonest of me to use a realist definition.

Relatedly, you say that when we seek truth, "we want to know things that tell us what we’ll find as we experience the world." Rather than primarily aiming to predict in advance what we'll find, might we instead aim to know the things that enable us to understand whatever we actually do find, regardless of whether we expected it (or whether it is as we predicted it would be)? Maybe this knowledge amounts to the same thing in the end. I don't know.

I'd say that amounts to the same thing. There are links in the post relevant to the case for this, about Bayesianism and the predictive processing model of the brain.

You refer to the thing outside of truth that grounds the quest for it as purpose. Would belief or faith be an acceptable substitute here?

Maybe. "Purpose" is here a stand-in term for a whole category of things, like what Heidegger called Sorge. I wrote a post about this topic, though it's not necessarily exhaustive. I could see certain notions of belief and faith fitting in here.

It would seem that [desire for] knowledge of truth already encompasses or takes into account the existence of non-truth-seeking agents and the knowledge requisite to accurately modeling them.

As I think I addressed a couple of points up: yes, and yet humans are implemented in a way that makes this insufficient.

Given your statement in the antepenultimate paragraph—"the reality is that you are not yourself actually a truth-seeking-agent, no matter how much you want it to be so"—this piece ultimately appears to be a reflection on self-knowledge. By encouraging the rigidly truth-obsessed dork to more accurately model non-truth-seeking agents, you are in fact encouraging him to more accurately model himself. So again, the desire for truth (as self-knowledge, or the truth about oneself) still guides the endeavor. (This was the best paragraph in the piece, I think.)

Seeking truth starts at home, so to speak. :-)

Seeking Truth Too Hard Can Keep You from Winning

In theory, yes. In practice it tends to be infeasible because of the effort required to deliberately think through how other people think in a way that accurately models them. Most people who succeed in modeling others well seem to do it via implicit models that work quickly.

I think the point is that people are systems too complex to model well if you try to do it in a deliberate, system-2 sort of way. Even if you eventually succeed in modeling them, you'll likely get your answer about what to do way too late to be useful. The limitations of our brains force us to do something else (heck, the limitations of physics seem to force this, since idealized Solomonoff inductors run into similar computability problems, cf. AIXI).

Why do you need the story?

This points at something I find very hard to work against: a desire to explain why things are the way they are rather than just accept that they are the way they are. Explanations are useful, but things will still be as they are even if I have no explanation for why. Yet when I find something in the world, there's a movement of mind that quickly follows observation and seeks to create a gears-level model of the world. On the one hand, such models are useful. On the other, a desire to explain in the absence of any information to build an explanation from is worse than useless: it's the path to confusion.

Integrating Three Models of (Human) Cognition

Thanks for this thorough summary. At this point the content has become spread over a book's worth of posts, so it's handy to have this high-level, if long, summary!

Why I am no longer driven

I advised no such thing, notice the /s at the end.

I guess I'm not cool enough to know what that means. Just looks like a typo to me. 🤷

If anecdotal evidence were the standard to be judged by, alternative medicine would be bloody miracle cures - plenty of patients swear it works. And in the absence of empirical data, it's your anecdotal evidence against mine. I had no intention of being charitable, as I think it's a complete snake-oil industry; Tai Lopez & Co. just made it ridiculously obvious in recent years imo. It doesn't even require practitioners to be consciously ill-intentioned.

You make the claim that "[m]otivational videos, speeches and self-help books are essentially modern forms of letters of indulgence", and seem to back it up by saying that there are folks whose experience of self-help is that it just makes you feel good and takes your money without offering anything in return. But this is just opinion and conjecture. The strongest evidence you offer is the example of "Tai Lopez & Co.", who I'm not familiar with, and who you say "made it ridiculously obvious in recent years [that it's complete snake-oil]".

Anecdotal evidence is not necessarily the standard to judge by, but anecdotal evidence is sufficient to suggest we cannot dismiss something out of hand. To your point about alternative medicine, that some people find things work means it's worthy of study, not that it can simply be dismissed. And sometimes what looks like alternative medicine, to take your point, turns out to be real medicine or just inefficient medicine (for example, people eating molds containing antibiotics or drinking tea made with witch hazel rather than taking aspirin).

It's fine to have your opinion that self-help and motivational videos are not helpful, but my claim is that you're not taking seriously the case for things like self-help that lots of people think work, including lots of people on this site. This lack of charity seems to be resulting in a failure to even consider evidence (which, to be fair, I'm not providing to you, but your position seems to reject even a willingness to consider the possibility that self-help might work, which means you seem to have already written the bottom line).

Why I am no longer driven

I think this is an uncharitable strawman of motivational and self-help materials.

Is there stuff out there that's trying to get you to buy something that doesn't really help? Yes. Is there also stuff out there that people find transforms their lives because it helps them have insights that unstick them from problems they couldn't unstick themselves from? Absolutely. Evidence: me and lots of people claiming this.

What you advise might work for some, but for others such forced action would actually make the situation worse! I know this has been the case for me at times: forcing myself to "grind" actually made the problem worse over time rather than better.

Worst Commonsense Concepts?

But some stuff is explicitly outside of science's purview, though not in the way you're talking about here. That is, some stuff is about, for example, personal experience, which science has limited tools for working with, since it has to strip away a lot of information in order to transform experience into something that works with scientific methods.

Compare how psychology sometimes can't say much of anything about things people actually experience because it doesn't have a way to turn experience into data.

Worst Commonsense Concepts?

Probably the most persistent and problem-causing is the common-sense way of treating things as having essences.

By this I mean that people tend to think of things like people, animals, organizations, places, etc. as having properties or characteristics, as if they had a little file inside them with various bits of metadata set that define their behavior. But this is definitely not how the world works! Properties like this are at best useful fictions or abstractions that allow simplified reasoning about complex systems, but they also lead to lots of mistakes, because most people don't seem to realize these are aggregations over complex interactions in the world rather than real things themselves.

You might say this is mistaking the map for the territory, but I think framing it this way makes it a little clearer just what is going on. People act as if things had essential properties, think that's how the world actually is, and as a result make mistakes when that model fails to correspond to what actually happens.

Speaking of Stag Hunts

You say this as if I do not do it, a lot, and get downvoted, a lot.  

Haha, true, I think you and I have occasionally gotten into it directly with regards to this.

I guess to that point I'm not sure the norms you want are actually the norms the community wants. I know for myself the norms I want don't always seem to be the ones the community wants, but I accept this because different people want different things, and I'm just gonna push for the world to be more how I'd like it. I guess that's what you're doing too, but in a way that feels more forceful, especially in that you sometimes advocate for stuff not belonging at all rather than being allowed but argued against.

This is likely some deep difference of opinion, but I see LW like a dojo, and you have to let people mess up, and it's more effective to let people see the correction in action rather than for it to go away because you push so hard that everyone is afraid to make mistakes. I get the vibe that you want LW to make corrections so hard that we'd see engagement drop below critical mass, like what happened with LW 1.0 by other means, but that might be misinterpreting what you want to see happen (although you're pretty clear about some of your ideas, so I'm not that uncertain about this).

Speaking of Stag Hunts

Reading this post I kinda feel like you are failing to take your own advice about gardening in a certain way.

Like, it feels good to call people out on their BS and get a conversation going, but I think there's some alternative version of these two posts that's not about building up a model and trying to convince people that something is wrong and something must be done about it, and is instead a version that just fights the fight at the object level against particular posts, comments, etc., as part of a long slog to shift the culture through direct action that others will see and emulate.

My belief here is that you can't really cause much effective, lasting change by just telling people they're doing something that sucks and is causing a problem. Rarely will they get excited about it and take up the fight. Instead you just have to fight it out, one weed at a time, until some corner of the garden is cleared and you have a small team of folks helping you with the gardening, then expand from there.

If LW is not that place and folks don't seem to be doing the work, then maybe LW is simply not the sort of thing that can be what you'd like it to be. Heck, maybe the thing you'd like to exist can't even exist for a bunch of reasons that aren't currently obvious (I'm not sure about this myself; haven't thought about it much).

My own experience was discovering several years ago that, yeah, rationalists as a community actually kinda suck at the basic skills of rationality in really obvious ways on a day-to-day basis. Like, forget big stuff that matters like AI alignment or community norms: they just suck at figuring out how not to constantly have untied shoelaces and other boring, minor drags on their ability to be effective agents. And that's because they're just human, with no real special powers; they just intellectually know a thing or two about how in theory they could be more effective, if only they could do it on a day-to-day basis and not have to resort to a bunch of cheap hacks that work but also cause constant self-harm by repeatedly giving oneself negative feedback in order to beat oneself into the desired shape.

Now of course not all rationalists; there are plenty of folks doing great stuff. But it's for this reason that I basically just see LW as one small part that can play a role in helping a person become more fully realized in the world. And part of that is arguably being a place where folks can just kinda suck at being rationalists, and others can notice this or not, or not notice it for a long time, or go off in anger. Maybe this seems kinda weird, but I hold this sense that LW can only be the thing it's capable of being, because that's all any group can do. For example, I don't expect my Zen community to help me be better at making calibrated bets; it's just not designed for that, and any attempt to shove it in probably wouldn't work. So maybe the thing is that the LW community just isn't designed to do the things you'd like it to do, and maybe it just can't do those things, and that's why it seems so impossible to get it to be otherwise.

Likely there is some community that could do those things, but it's hard to see how you could get there from the existing rationalist culture in a straight line. It seems more likely to me to require a project on the order of founding LW than on the order of, say, creating LW 2.0.