G Gordon Worley III

Director of Research at PAISRI

Sequences

Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

Neural Annealing: Toward a Neural Theory of Everything (crosspost)

On the other hand, this post for me exemplifies something I think LessWrong is really good at, which is creating a place where people can find an audience for bleeding-edge research that is not obviously off the rails. It's the kind of work you would otherwise only hear about because you work at a university and happen to attend a talk the researcher gave internally to solicit feedback.

As with many posts, the audience for this one may be small, but the same is true of many AI alignment posts, and I don't think we should hold it against this post in voting unless we also plan to vote against including most of the technical AI posts that were nominated.

What’s good about haikus?

A well-loved haiku from Basho:

In Kyoto,

hearing the cuckoo,

I long for Kyoto

G Gordon Worley III's Shortform

ADHD Expansionism

I'm not sure I fully endorse this idea, hence short form, but it's rattling around inside my head and maybe we can talk about it?

I feel like there's a kind of ADHD (or ADD) expansionism happening, where people are identifying all kinds of things as symptoms of ADHD, especially subclinical ADHD.

On the one hand this seems good, in the sense that this kind of expansionism seems to actually be helping people by giving them permission to be the way they are via a diagnosis and giving them strategies they can try to live their lives better.

On the other hand, I feel like it's terrible in terms of actually diagnosing ADHD. It might help to explain why I think that.

Much of what I see that I'm terming ADHD expansionism looks to me like taking normal human behavior that is ill-suited to the modern environment and then pathologizing it. As best I can tell, it's normal and adaptive for humans to exhibit various behaviors that get labeled as ADHD symptoms, like flitting between multiple activities, hyperfocusing on things the mind finds important but doesn't necessarily endorse as important (S1-important things, not S2-important), understimulation, overstimulation, and otherwise finding it hard to focus on any one thing.

All of that sounds like normal, adaptive forager behavior to me. Some of it became maladaptive during the farming era, though not especially so, and in the industrial era it has become less adaptive still.

Thus I think ADHD suffers from the same issue as codependency: if you start to describe the symptoms, you quickly realize 90% of humanity has this "problem". I think we're doing ourselves a disservice by considering it a pathology, because that framing fails to acknowledge that most of these mental habits are just what it's like to be a normal human, and that it's our conditions that are unusual and that we are struggling to function within.

I don't see this as cause to throw out modern industrial society, but rather that we need to think about ways to adapt our systems to better accommodate real humans rather than the idealized ones of high modernism.

On the ground level, yes, we may still need to do much to personally intervene against ADHD-like symptoms, just as we may need to intervene against our natural tendency toward codependency, but I think something is lost by even talking about it this way. Rather, we need to think of it as how we cope with being humans engaged in systems that ask us to behave in unusual ways, and see the systems as the broken things, not ourselves. It's not that everyone has ADHD or codependency; rather, it's that our systems pathologize normal behavior because they are confused about what is typical.

Fittingness: Rational success in concept formation

Oh, but then why have a special word for success relative to the purpose of rational inquiry? To my ear "fitness" seems like something general we could say about anything, as in its "fitness for X", like "fitness for rational inquiry" or "fitness for convincing others".

Fittingness: Rational success in concept formation

I agree, but to me this idea is already captured by the concept we variously call telos, purpose, care, or concern. That is, fitness as you described it seems to be a natural consequence of truth necessarily being in the service of some concern, thus making truth ready at hand. In this way fitness is something like the extent to which a category serves the purpose its discerner intended.

Science in a High-Dimensional World

This post gave me an idea about how you might approach magic in fiction while keeping it grounded in reality: something like magic users being people who learn to pick out the relevant variables from the noise and so can consistently nudge reality in ways that otherwise seem impossible.

Basically placebomancy from Unsong.

Condition-directedness

I generally think in terms of a planning horizon that doesn't extend out much past 18 months. Typically things change fast enough that planning that far out toward anything specific is not worthwhile, barring a few exceptions, and even the exceptions I can think of involve short-term actions that can't be well planned out far in advance.

Generally I'm just thinking about how to get through the next minute. There's some day-level and week-level planning along the lines of oh, the weather will be nice this day and bad that day, and I have more meetings this day than that day. It's also often necessary to make plans a few months in advance for things like vacations and conferences, and to think vaguely about what I might like to be working on next, but often there's just not enough information to do more than try to point myself in vaguely the right direction.

I think the biggest driver of this for me is wanting to be able to adjust to things that are uncertain now, including uncertainty I don't yet know about. Because I don't actually know what things will be like a minute, a day, a week, a month, or a year from now, and my ability to predict decreases in accuracy the further out I look, I find it's mostly a bad strategy to try too hard to plan or set specific goals.

So instead I mostly have to make decisions based on what I might call virtue, i.e. does this seem like the kind of thing I would like to do and have done? Will I still feel that way about it later? Will it push me vaguely in the direction of things I would like, or will it work against that?

I'm not sure the world is certain enough to do much more.

G Gordon Worley III's Shortform

You're always doing your best

I like to say "you're always doing your best", especially as kind words to folks when they are feeling regret.

What do I mean by that, though? Certainly you can look back at what you did in any given situation and imagine having done something that would have had a better outcome.

What I mean is that, given all the conditions under which you take any action, you always did the best you could. After all, if you could have done something better given all the conditions, you would have.

The key is that all the conditions include the entire history of the world up to the present moment, which necessarily includes your life history, the life history of others, the physical environment, your emotional state, how tired you were, how your brain works, etc. The trick is that when you condition your actions this fully there's no room left for any counterfactuals, for you could have done nothing else!

As you might guess, I'm proposing a deterministic outlook on the world. I won't really argue for it here, other than to say that if you look long and hard enough at free will, it dissolves into an after-the-fact illusion contingent on how your brain compresses reality and models yourself. This holds even in the face of quantum effects, since even if quantum effects result in random outcomes, you nonetheless only ever find yourself in a single history where some particular thing happened, regardless of how it happened.

The immediate corollary of all this is that you also are always doing your worst, only that doesn't land too well when someone feels regret.

I like this insight because, fully taken in, it dissolves regret. Not that you can't imagine having done better, propose things you might do differently in the future, and then try them to see what happens, perhaps actually doing better than you previously did. Rather, it dissolves regret because regret hinges on feeling as if a counterfactual could really have happened. Once you deeply believe that counterfactuals are not real, i.e. they are purely of the map and have no existence in the territory independent of the map, regret has no way to come into existence.

This doesn't mean you can't still feel related emotions like remorse, especially if you realize you were negligent and had a responsibility to have done better but didn't, but that's different from clinging to a desire to have done something different; remorse is owning that you did something less than what you were capable of under the circumstances and might reasonably be asked to make amends.

So next time you feel regret, try reminding yourself it couldn't have gone any other way.

You are Dissociating (probably)

Ah, I couldn't quite remember how I've seen the model described before. So rather than what I presented, I've seen it described as "it -> I -> me", as the development of the place that emotional responses come from, and this impacts identity formation.

Cryonics signup guide #1: Overview

Oh wow, interesting. Yeah, I get a guaranteed 3% return, so clearly they must be getting less than that now. I'm guessing a product like the one I have can only be backed by certain classes of investments, which are not yielding enough return to sustain it right now.