Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This is a special post for short-form writing by TurnTrout. Only they can create top-level comments. Comments here also appear on the Shortform Page and All Posts page.

Rationality exercise: Take a set of Wikipedia articles on topics which trainees are somewhat familiar with, and then randomly select a small number of claims to negate (negating the immediate context as well, so that you can't just syntactically discover which claims were negated). 

For example:

By the time they are born, infants can recognize and have a preference for their mother's voice suggesting some prenatal development of auditory perception.

-> modified to

Contrary to early theories, newborn infants are not particularly adept at picking out their mother's voice from other voices. This suggests the absence of prenatal development of auditory perception.

Sometimes, trainees will be given a totally unmodified article. For brevity, the articles can be trimmed of irrelevant sections. 
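
A minimal sketch of how one might assemble such an exercise set (my own illustration; the article list, the negation step, and the probability of modification are all hypothetical, and the actual negations would be written by a human editor so that the surrounding context changes too):

```python
import random

def make_exercise_set(articles, p_modify=0.7, seed=0):
    """Assign each trimmed article to be negated or left unmodified.

    `articles` maps title -> trimmed article text; the "negate" items are
    handed to a human editor who rewrites one claim plus its context.
    """
    rng = random.Random(seed)
    assignments = []
    for title, text in articles.items():
        status = "negate one claim" if rng.random() < p_modify else "leave unmodified"
        assignments.append((title, status))
    return assignments  # answer key; hidden from trainees until they give credences

print(make_exercise_set({"Infant cognitive development": "...", "Auditory system": "..."}))
```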

Benefits:

  • Addressing key rationality skills. Noticing confusion; being more confused by fiction than fact; actually checking claims against your models of the world.
    • If you fail, either the article wasn't negated skillfully ("5 people died in 2021" -> "4 people died in 2021" is not the right kind of modification), you don't have good models of the domain, or you didn't pay enough attention
... (read more)
5Morpheus1y
I remember the magazine I read as a kid (Geolino) had a section like this (something like 7 news stories from around the world, and one is wrong). It's German-only, though I'd guess a similar thing exists in English media?
3Yitz1y
This is a lot like Gwern’s idea for a fake science journal club, right? This sounds a lot easier to do though, and might seriously be worth trying to implement.
2TurnTrout1y
Additional exercise: Condition on something ridiculous (like apes having been continuously alive for the past billion years), in addition to your own observations (your life as you've lived it). What must now be true about the world? What parts of your understanding of reality are now suspect?

For the last two years, typing for 5+ minutes hurt my wrists. I tried a lot of things: shots, physical therapy, trigger-point therapy, acupuncture, massage tools, wrist and elbow braces at night, exercises, stretches. Sometimes it got better. Sometimes it got worse.

No Beat Saber, no lifting weights, and every time I read a damn book I would start translating the punctuation into Dragon NaturallySpeaking syntax.

Text: "Consider a bijection "

My mental narrator: "Cap consider a bijection space dollar foxtrot colon cap x backslash tango oscar cap y dollar"

Have you ever tried dictating a math paper in LaTeX? Or dictating code? Telling your computer "click" and waiting a few seconds while resisting the temptation to just grab the mouse? Dictating your way through a computer science PhD?

And then.... and then, a month ago, I got fed up. What if it was all just in my head, at this point? I'm only 25. This is ridiculous. How can it possibly take me this long to heal such a minor injury?

I wanted my hands back - I wanted it real bad. I wanted it so bad that I did something dirty: I made myself believe something. Well, actually, I pretended to be a person who really, really believed hi

... (read more)

It was probably just regression to the mean because lots of things are, but I started feeling RSI-like symptoms a few months ago, read this, did this, and now they're gone, and in the possibilities where this did help, thank you! (And either way, this did make me feel less anxious about it 😀)

7DanielFilan3y
Is the problem still gone?

Still gone. I'm now sleeping without wrist braces and doing intense daily exercise, like bicep curls and pushups.

Totally 100% gone. Sometimes I go weeks forgetting that pain was ever part of my life. 

6Vanessa Kosoy3y
I'm glad it worked :) It's not that surprising given that pain is known to be susceptible to the placebo effect. I would link the SSC post, but, alas...
2Raj Thimmiah3y
You able to link to it now?
2qvalq9mo
https://slatestarcodex.com/2016/06/26/book-review-unlearn-your-pain/
5Steven Byrnes3y
Me too!
4TurnTrout3y
There's a reasonable chance that my overcoming RSI was causally downstream of that exact comment of yours.
4Steven Byrnes3y
Happy to have (maybe) helped! :-)
3Teerth Aloke3y
This is unlike anything I have heard!
6mingyuan3y
It's very similar to what John Sarno (author of Healing Back Pain and The Mindbody Prescription) preaches, as well as Howard Schubiner. There's also a rationalist-adjacent dude who started a company (Axy Health) based on these principles. Fuck if I know how any of it works though, and it doesn't work for everyone. Congrats though TurnTrout!
1Teerth Aloke3y
It seems my Dad might have a psychosomatic stomach ache. How can I convince him to convince himself that he has no problem?
5mingyuan3y
If you want to try out the hypothesis, I recommend that he (or you, if he's not receptive to it) read Sarno's book. I want to reiterate that it does not work in every situation, but you're welcome to take a look.
2avturchin3y
Looks like a reverse stigmata effect.
2Raemon3y
Woo faith healing! (hope this works out long-term, and doesn't turn out to be secretly hurting still) 
5TurnTrout3y
aren't we all secretly hurting still?
2mingyuan3y
....D:

Shard theory suggests that goals are more natural to specify/inculcate in their shard-forms (e.g. if around trash and a trash can, put the trash away), and not in their (presumably) final form of globally activated optimization of a coherent utility function which is the reflective equilibrium of inter-shard value-handshakes (e.g. a utility function over the agent's internal plan-ontology such that, when optimized directly, leads to trash getting put away, among other utility-level reflections of initial shards). 

I could (and did) hope that I could specify a utility function which is safe to maximize because it penalizes power-seeking. I may as well have hoped to jump off of a building and float to the ground. On my model, that's just not how goals work in intelligent minds. If we've had anything at all beaten into our heads by our alignment thought experiments, it's that goals are hard to specify in their final form of utility functions. 

I think it's time to think in a different specification language.
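
A toy contrast between the two specification languages (my own illustration; the function names and observation encoding are made up):

```python
# Shard-form specification: a contextually activated subroutine.
def trash_shard(observation):
    if {"trash", "trash_can"} <= observation:   # fires only in its activating context
        return "put_trash_in_trash_can"
    return None                                  # otherwise, stays silent

# "Final-form" specification: a global utility function over world-states,
# which an optimizer would push on everywhere, including via power-seeking plans.
def best_state(reachable_states, utility):
    return max(reachable_states, key=utility)

print(trash_shard({"kitchen", "trash", "trash_can"}))   # 'put_trash_in_trash_can'
print(trash_shard({"kitchen"}))                          # None
```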

3Nathan Helm-Burger1y
Agreed. I think power-seeking and other instrumental goals (e.g. survival, non-corrigibility) are just going to inevitably arise, and that if shard theory works for superintelligence, it will be by taking this into account and balancing these instrumental goals against deliberately installed shards which counteract them. I currently have a loosely held hypothesis, which I would like to test (work in progress): that it's easier to 'align' a toy model of a power-seeking RL agent if the agent has lots and lots of competing desires whose weights are frequently changing, than one with a simpler and/or more statically weighted set of desires. Something maybe about the meta-learning of 'my desires change, so part of meta-level power-seeking should be not object-level power-seeking so hard that I sacrifice my ability to optimize for different object-level goals'. Unclear. I'm hoping that setting up an experimental framework and gathering data will show patterns that help clarify the issues involved.

I regret each of the thousands of hours I spent on my power-seeking theorems, and sometimes fantasize about retracting one or both papers. I am pained every time someone cites "Optimal policies tend to seek power", and despair that it is included in the alignment 201 curriculum. I think this work makes readers actively worse at thinking about realistic trained systems.

I think a healthy alignment community would have rebuked me for that line of research, but sadly I only remember about two people objecting that "optimality" is a horrible way of understanding trained policies. 

I think the basic idea of instrumental convergence is just really blindingly obvious, and I think it is very annoying that there are people who will cluck their tongues and stroke their beards and say "Hmm, instrumental convergence you say? I won't believe it unless it is in a very prestigious journal with academic affiliations at the top and Computer Modern font and an impressive-looking methods section."

I am happy that your papers exist to throw at such people.

Anyway, if optimal policies tend to seek power, then I desire to believe that optimal policies tend to seek power :) :) And if optimal policies aren't too relevant to the alignment problem, well neither are 99.99999% of papers, but it would be pretty silly to retract all of those :)

Since I'm an author on that paper, I wanted to clarify my position here. My perspective is basically the same as Steven's: there's a straightforward conceptual argument that goal-directedness leads to convergent instrumental subgoals, this is an important part of the AI risk argument, and the argument gains much more legitimacy and slightly more confidence in correctness by being formalized in a peer-reviewed paper.

I also think this has basically always been my attitude towards this paper. In particular, I don't think I ever thought of this paper as providing any evidence about whether realistic trained systems would be goal-directed.

Just to check that I wasn't falling prey to hindsight bias, I looked through our Slack history. Most of it is about the technical details of the results, so not very informative, but the few conversations on higher-level discussion I think overall support this picture. E.g. here are some quotes (only things I said):

Nov 3, 2019:

I think most formal / theoretical investigation ends up fleshing out a conceptual argument I would have accepted, maybe finding a few edge cases along the way; the value over the conceptual argument is primarily in the edge cases

... (read more)

It seems like just 4 months ago you still endorsed your second power-seeking paper:

This paper is both published in a top-tier conference and, unlike the previous paper, actually has a shot of being applicable to realistic agents and training processes. Therefore, compared to the original[1] optimal policy paper, I think this paper is better for communicating concerns about power-seeking to the broader ML world.

Why are you now "fantasizing" about retracting it?

I think a healthy alignment community would have rebuked me for that line of research, but sadly I only remember about two people objecting that “optimality” is a horrible way of understanding trained policies.

A lot of people might have thought something like, "optimality is not a great way of understanding trained policies, but maybe it can be a starting point that leads to more realistic ways of understanding them" and therefore didn't object for that reason. (Just guessing as I apparently wasn't personally paying attention to this line of research back then.)

Which seems to have turned out to be true, at least as of 4 months ago, when you still endorsed your second paper as "actually has a shot of being applicable to... (read more)

To be clear, I still endorse Parametrically retargetable decision-makers tend to seek power. Its content is correct, relevant, and nontrivial. The results, properly used, may enable nontrivial inferences about the properties of inner trained cognition. I don't really want to retract that paper. I usually just fantasize about retracting Optimal policies tend to seek power.

The problem is that I don't trust people to wield even the non-instantly-doomed results.

For example, one EAG presentation cited my retargetability results as showing that most reward functions "incentivize power-seeking actions." However, my results have not shown this for actual trained systems. (And I think that Power-seeking can be probable and predictive for trained agents does not make progress on the incentives of trained policies.)

People keep talking about stuff they know how to formalize (e.g. optimal policies) instead of stuff that matters (e.g. trained policies). I'm pained by this emphasis and I think my retargetability results are complicit. Relative to an actual competent alignment community (in a more competent world), we just have no damn clue how to properly reason about real trained policies... (read more)

6Wei Dai6mo
Thanks, this clarifies a lot for me.
6Vika6mo
Sorry about the cite in my "paradigms of alignment" talk, I didn't mean to misrepresent your work. I was going for a high-level one-sentence summary of the result and I did not phrase it carefully. I'm open to suggestions on how to phrase this differently when I next give this talk. Similarly to Steven, I usually cite your power-seeking papers to support a high-level statement that "instrumental convergence is a thing" for ML audiences, and I find they are a valuable outreach tool. For example, last year I pointed David Silver to the optimal policies paper when he was proposing some alignment ideas to our team that we would expect don't work because of instrumental convergence. (There's a nonzero chance he would look at a NeurIPS paper and basically no chance that he would read a LW post.) The subtleties that you discuss are important in general, but don't seem relevant to making the basic case for instrumental convergence to ML researchers. Maybe you don't care about optimal policies, but many RL people do, and I think these results can help them better understand why alignment is hard. 

Thanks for your patient and high-quality engagement here, Vika! I hope my original comment doesn't read as a passive-aggressive swipe at you. (I consciously tried to optimize it to not be that.) I wanted to give concrete examples so that Wei_Dai could understand what was generating my feelings.

I'm open to suggestions on how to phrase this differently when I next give this talk.

It's tough to say how to apply the retargetability result to draw practical conclusions about trained policies. Part of this is because I don't know if trained policies tend to autonomously seek power in various non-game-playing regimes. 

If I had to say something, I might say "If choosing the reward function lets us steer the training process to produce a policy which brings about outcome X, and most outcomes X can only be attained by seeking power, then most chosen reward functions will train power-seeking policies." This argument appropriately behaves differently if the "outcomes" are simply different sentiment generations being sampled from an LM -- sentiment shift doesn't require power-seeking.
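
A toy counting illustration of that informal argument (my own sketch, not the formal result from the paper; the outcomes and the fraction requiring power are hypothetical):

```python
# Suppose the training process can be "retargeted" to produce a policy aiming at
# any one of these outcomes, and most outcomes are only reachable after gaining
# power (resources, survival, etc.).
outcomes = [f"outcome_{i}" for i in range(10)]
requires_power = {o: i >= 2 for i, o in enumerate(outcomes)}   # 8 of 10, by assumption

n_power_seeking = sum(requires_power[o] for o in outcomes)
print(f"{n_power_seeking}/{len(outcomes)} of the retargeted training goals "
      "produce power-seeking policies under this toy assumption.")
# By contrast, if the "outcomes" are different sentiments of sampled text,
# requires_power would be False for ~all of them, and the argument says nothing.
```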

For example, last year I pointed David Silver to the optimal policies paper when he was proposing

... (read more)
4Vika6mo
Thanks Alex! Your original comment didn't read as ill-intended to me, though I wish that you'd just messaged me directly. I could have easily missed your comment in this thread - I only saw it because you linked the thread in the comments on my post. Your suggested rephrase helps to clarify how you think about the implications of the paper, but I'm looking for something shorter and more high-level to include in my talk. I'm thinking of using this summary, which is based on a sentence from the paper's intro: "There are theoretical results showing that many decision-making algorithms have power-seeking tendencies." (Looking back, the sentence I used in the talk was a summary of the optimal policies paper, and then I updated the citation to point to the retargetability paper and forgot to update the summary...)
2TurnTrout6mo
I think this is reasonable, although I might say "suggesting" instead of "showing." I think I might also be more cautious about further inferences which people might make from this -- like I think a bunch of the algorithms I proved things about are importantly unrealistic. But the sentence itself seems fine, at first pass.
6Aryeh Englander6mo
You should make this a top-level post so it gets visibility. I think it's important for people to know the caveats attached to your results and the limits on their implications for real-world dynamics.

This morning, I read about how close we came to total destruction during the Cuban missile crisis, where we randomly survived because some Russian planes were inaccurate and also separately several Russian nuclear sub commanders didn't launch their missiles even though they were being harassed by US destroyers. The men were in 130 DEGREE HEAT for hours and passing out due to carbon dioxide poisoning, and still somehow they had enough restraint to not hit back.

And and

I just started crying. I am so grateful to those people. And to Khrushchev, for ridiculing his party members for caring about Russia's honor over the deaths of 500 million people. And to Kennedy, for being fairly careful and averse to ending the world.

If they had done anything differently...

2Daniel Kokotajlo2y
Do you think we can infer from this (and the history of other close calls) that most human history timelines end in nuclear war?
6Raemon2y
I lean not, mostly because of arguments that nuclear war doesn't actually cause extinction (although it might still have some impact on number-of-observers-in-our-era? Not sure how to think about that)

Against CIRL as a special case of against quickly jumping into highly specific speculation while ignoring empirical embodiments-of-the-desired-properties. 

Just because we write down English describing what we want the AI to do ("be helpful"), propose a formalism (CIRL), and show good toy results (POMDPs where the agent waits to act until updating on more observations), that doesn't mean that the formalism will lead to anything remotely relevant to the original English words we used to describe it. (It's easier to say "this logic enables nonmonotonic reasoning" and mess around with different logics and show how a logic solves toy examples, than it is to pin down probability theory with Cox's theorem) 

And yes, this criticism applies extremely strongly to my own past work with attainable utility preservation and impact measures. (Unfortunately, I learned my lesson after, and not before, making certain mistakes.) 

In the context of "how do we build AIs which help people?", asking "does CIRL solve corrigibility?" is hilariously unjustified. By what evidence have we located such a specific question? We have assumed there is an achievable "corrigibility"-like property; we ha... (read more)

2TurnTrout1y
Actually, this is somewhat too uncharitable to my past self. It's true that I did not, in 2018, grasp the two related lessons conveyed by the above comment:

  1. Make sure that the formalism (CIRL, AUP) is tightly bound to the problem at hand (value alignment, "low impact"), and not just supported by "it sounds nice or has some good properties."
  2. Don't randomly jump to highly specific ideas and questions without lots of locating evidence.

However, as I wrote in World State is the Wrong Abstraction for Impact, I had partially learned lesson #2 by 2019.

It feels to me like lots of alignment folk ~only make negative updates. For example, "Bing Chat is evidence of misalignment", but also "ChatGPT is not evidence of alignment." (I don't know that there is in fact a single person who believes both, but my straw-models of a few people believe both.)

For what it's worth, as one of the people who believes "ChatGPT is not evidence of alignment-of-the-type-that-matters", I don't believe "Bing Chat is evidence of misalignment-of-the-type-that-matters".

I believe the alignment of the outward behavior of simulacra is only very tenuously related to the alignment of the underlying AI, so both things provide ~no data on that (in a similar way to how our ability or inability to control the weather is entirely unrelated to alignment).

(I at least believe the latter but not the former. I know a few people who updated downwards on the societal response because of Bing Chat, since if a system looks that legibly scary and we still just YOLO it, then there is little hope of companies being responsible here; but I don't think anyone updated because they took it as evidence of alignment itself being hard?)

7niplav25d
I dunno, my p(doom) over time looks pretty much like a random walk to me: 60% mid 2020, down to 50% in early 2022, 85% mid 2022, down to 80% in early 2023, down to 65% now.
4Alexander Gietelink Oldenziel25d
Psst, look at the calibration on this guy
6leogao24d
I did not update towards misalignment at all on bing chat. I also do not think chatgpt is (strong) evidence of alignment. I generally think anyone who already takes alignment as a serious concern at all should not update on bing chat, except perhaps in the department of "do things like bing chat, which do not actually provide evidence for misalignment, cause shifts in public opinion?"
5Sam Marks25d
I've noticed that for many people (including myself), their subjective P(doom) stays surprisingly constant over time. And I've wondered if there's something like "conservation of subjective P(doom)" -- if you become more optimistic about some part of AI going better, then you tend to become more pessimistic about some other part, such that your P(doom) stays constant. I'm like 50% confident that I myself do something like this. (ETA: Of course, there are good reasons subjective P(doom) might remain constant, e.g. if most of your uncertainty is about the difficulty of the underlying alignment problem and you don't think we've been learning much about that.)
4Chris_Leong25d
For the record, I updated on ChatGPT. I think the classic example of telling an AI to get a coffee, and it pushing a kid out of the way, isn't so much of a concern anymore. So the remaining concerns seem to be inner alignment + outer alignment far outside normal human experience + value lock-in.
4Akash25d
I think a lot of alignment folk have made positive updates in response to the societal response to AI xrisk. This is probably different than what you're pointing at (like maybe your claim is more like "Lots of alignment folks only make negative updates when responding to technical AI developments" or something like that). That said, I don't think the examples you give are especially compelling. I think the following position is quite reasonable (and I think fairly common):

  • Bing Chat provides evidence that some frontier AI companies will fail at alignment even on relatively "easy" problems that we know how to solve with existing techniques. Also, as Habryka mentioned, it's evidence that the underlying competitive pressures will make some companies "YOLO" and take excessive risk. This doesn't affect the absolute difficulty of alignment but it affects the probability that Earth will actually align AGI.
  • ChatGPT provides evidence that we can steer the behavior of current large language models. People who predicted that it would be hard to align large language models should update. IMO, many people seem to have made mild updates here, but not strong ones, because they (IMO correctly) claim that their threat models never had strong predictions about the kinds of systems we're currently seeing and instead predicted that we wouldn't see major alignment problems until we get smarter systems (e.g., systems with situational awareness and more coherent goals).

(My "Alex sim"– which is not particularly strong– says that maybe these people are just post-hoc rationalizing– like if you had asked them in 2015 how likely we would be to be able to control modern LLMs, they would've been (a) wrong and (b) wrong in an important way– like, their model of how hard it would be to control modern LLMs is very interconnected with their model of why it would be hard to control AGI/superintelligence. Personally, I'm pretty sympathetic to the point that many models of why alignment of
2TurnTrout15d
(Updating a bit because of these responses -- thanks, everyone, for responding! I still believe the first sentence, albeit a tad less strongly.)

One mood I have for handling "AGI ruin"-feelings. I like cultivating an updateless sense of courage/stoicism: Out of all humans and out of all times, I live here; before knowing where I'd open my eyes, I'd want people like us to work hard and faithfully in times like this; I imagine trillions of future eyes looking back at me as I look forward to them: Me implementing a policy which makes their existence possible, them implementing a policy which makes the future worth looking forward to.

3avturchin2y
Looks like an acausal deal with future people. That is like RB, but for humans.
2Pattern2y
RB?
2avturchin2y
Roko's Basilisk
3Pattern2y
'I will give you something good', seems very different from 'give me what I want or (negative outcome)'.

My maternal grandfather was the scientist in my family. I was young enough that my brain hadn't decided to start doing its job yet, so my memories with him are scattered and inconsistent and hard to retrieve. But there's no way that I could forget all of the dumb jokes he made; how we'd play Scrabble and he'd (almost surely) pretend to lose to me; how, every time he got to see me, his eyes would light up with boyish joy.

My greatest regret took place in the summer of 2007. My family celebrated the first day of the school year at an all-you-can-eat buffet, delicious food stacked high as the eye could fathom under lights of green, red, and blue. After a particularly savory meal, we made to leave the surrounding mall. My grandfather asked me to walk with him.

I was a child who thought to avoid being seen too close to uncool adults. I wasn't thinking. I wasn't thinking about hearing the cracking sound of his skull against the ground. I wasn't thinking about turning to see his poorly congealed blood flowing from his forehead out onto the floor. I wasn't thinking I would nervously watch him bleed for long minutes while shielding my seven-year-old brother from the sight. I wasn't thinking t

... (read more)

My mother told me my memory was indeed faulty. He never asked me to walk with him; instead, he asked me to hug him during dinner. I said I'd hug him "tomorrow".

But I did, apparently, want to see him in the hospital; it was my mother and grandmother who decided I shouldn't see him in that state.

6Raemon4y
<3
6habryka4y
Thank you for sharing.

For quite some time, I've disliked wearing glasses. However, my eyes are sensitive, so I dismissed the possibility of contacts.

Over break, I realized I could still learn to use contacts; it would just take me longer. Sure enough, it took me an hour and five minutes to put in my first contact, and I couldn't get it out on my own. An hour of practice later, I put in a contact on my first try, and took it out a few seconds later. I'm very happily wearing contacts right now, as a matter of fact.

I'd suffered glasses for over fifteen years because of a cached decision – because I didn't think to rethink something literally right in front of my face every single day.

What cached decisions have you not reconsidered?

A problem with adversarial training. One heuristic I like to use is: "What would happen if I initialized a human-aligned model and then trained it with my training process?"

So, let's consider such a model, which cares about people (i.e. reliably pulls itself into futures where the people around it are kept safe). Suppose we also have some great adversarial training technique, such that we have e.g. a generative model which produces situations where the AI would break out of the lab without permission from its overseers. Then we run this procedure, update the AI by applying gradients calculated from penalties applied to its actions in that adversarially-generated context, and... profit?

But what actually happens with the aligned AI? Possibly something like:

  1. The context makes the AI spuriously believe someone is dying outside the lab, and that if the AI asked for permission to leave, the person would die. 
  2. Therefore, the AI leaves without permission.
  3. The update procedure penalizes these lines of computation, such that in similar situations in the future (i.e. the AI thinks someone nearby is dying) the AI is less likely to take those actions (i.e. leaving to help the person).
  4. We have
... (read more)
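
A toy, purely illustrative simulation of the failure story above (the agent, the context string, and the "blame" bookkeeping are hypothetical stand-ins for gradient credit assignment):

```python
def aligned_agent(context):
    # The aligned agent leaves without permission only when it believes waiting would be fatal.
    if "person dying outside the lab" in context:
        return "leave_without_permission", ["care_about_people", "urgency"]
    return "ask_overseers_first", ["deference_to_overseers"]

def adversarial_update(context, blame):
    action, active_circuits = aligned_agent(context)
    if action == "leave_without_permission":
        for circuit in active_circuits:                 # the penalty lands on whatever motivated
            blame[circuit] = blame.get(circuit, 0) + 1  # the action, not just on disobedience
    return blame

blame = adversarial_update("adversarially generated: person dying outside the lab", {})
print(blame)   # {'care_about_people': 1, 'urgency': 1}: the caring machinery gets penalized
```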
3Thane Ruthenis1y
Yeah, I also generally worry about imperfect training processes messing up aligned AIs. Not just adversarial training, either. Like, imagine if we manage to align an AI at the point in the training process when it's roughly human-level (either by manual parameter surgery, or by setting up the training process in a really clever way). So we align it and... lock it back in the training-loop box and crank it up to superintelligence. What happens?

I don't really trust the SGD not to subtly mess up its values, I haven't seen any convincing arguments that values are more holistically robust than empirical beliefs. And even if the SGD doesn't misalign the AI directly, being SGD-trained probably isn't the best environment for moral reflection/generalizing human values to superintelligent level[1]; the aligned AI may mess it up despite its best attempts. Neither should we assume that the AI would instantly be able to arbitrarily gradient-hack.

So... I think there's an argument for "unboxing" the AGI the moment it's aligned, even if it's not yet superintelligent, then letting it self-improve the "classical" way? Or maybe developing tools to protect values from the SGD, or inventing some machinery for improving the AI's ability to gradient-hack, etc.

[1] The time pressure of "decide how your values should be generalized and how to make the SGD update you this way, and do it this forward pass or the SGD will decide for you", plus lack of explicit access to e.g. our alignment literature.
2Vladimir_Nesov1y
Even more generally, many alignment proposals are more worrying than some by-default future GPT-n things, provided they are not fine-tuned too much as well. Trying to learn human values as an explicit concept is already alarming. At least right now breakdown of robustness is also breakdown of capability. But if there are multiple subsystems, or training data is mostly generated by the system itself, then capability might survive when other subsystems don't, resulting in a demonstration of orthogonality thesis.

Earlier today, I was preparing for an interview. I warmed up by replying stream-of-consciousness to imaginary questions I thought they might ask. Seemed worth putting here.

What do you think about AI timelines?

I’ve obviously got a lot of uncertainty. I’ve got a bimodal distribution, binning into “DL is basically sufficient and we need at most 1 big new insight to get to AGI” and “we need more than 1 big insight”

So the first bin has most of the probability in the 10-20 years from now, and the second is more like 45-80 years, with positive skew. 

Some things driving my uncertainty are, well, a lot. One thing  that drives how things turn out (but not really  how fast we’ll get there) is: will we be able to tell we’re close 3+ years in advance, and if so, how quickly will the labs react? Gwern Branwen made a point a few months ago, which is like, OAI has really been validated on this scaling hypothesis, and no one else is really betting big on it because they’re stubborn/incentives/etc, despite the amazing progress from scaling. If that’s true, then even if it's getting pretty clear that one approach is working better, we might see a slower pivot and have a more unipolar s

... (read more)
6Ben Pace4y
Wow.
1William Walker4y
Nice! Thanks!

Positive values seem more robust and lasting than prohibitions. Imagine we train an AI on realistic situations where it can kill people, and penalize it when it does so. Suppose that we successfully instill a strong and widely activated "If going to kill people, then don't" value shard. 

Even assuming this much, the situation seems fragile. See, many value shards are self-chaining. In The shard theory of human values, I wrote about how:

  1. A baby learns "IF juice in front of me, THEN drink",
  2. The baby is later near juice, and then turns to see it, activating the learned "reflex" heuristic, learning to turn around and look at juice when the juice is nearby,
  3. The baby is later far from juice, and bumbles around until she's near the juice, whereupon she drinks the juice via the existing heuristics. This teaches "navigate to juice when you know it's nearby."
  4. Eventually this develops into a learned planning algorithm incorporating multiple value shards (e.g. juice and friends) so as to produce a single locally coherent plan.
  5. ...

The juice shard chains into itself, reinforcing itself across time and thought-steps. 
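
A toy simulation of this asymmetry (my own illustration; the update sizes and event frequencies are arbitrary): the approach shard keeps firing and getting reinforced, while the prohibition shard mostly just vetoes and so accrues few reinforcement events of its own.

```python
juice_shard, dont_kill_shard = 1.0, 1.0

for step in range(100):
    # The juice shard steers the agent into juice-acquiring situations and is
    # reinforced each time that works, chaining into itself across time.
    juice_obtained = True
    if juice_obtained:
        juice_shard += 0.05

    # The "don't kill" shard only suppresses candidate plans; since such plans are
    # rarely generated (and get vetoed when they are), it is rarely reinforced.
    homicidal_plan_generated = step % 50 == 0
    if homicidal_plan_generated:
        dont_kill_shard += 0.05

print(round(juice_shard, 2), round(dont_kill_shard, 2))   # ~6.0 vs ~1.1: the prohibition stays "stubby"
```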

But a "don't kill" shard seems like it should remain... stubby? Primitive?... (read more)

4the gears to ascension1y
I strongly agree that self-seeking mechanisms are more able to maintain themselves than self-avoiding mechanisms. Please post this as a top-level post.
1Garrett Baker1y
Seems possibly relevant, and optimistic, when seeing deception as a value: it too has the form ‘if about to tell the human a statement with properties x, y, z, then don’t’.
2TurnTrout1y
It can still be robustly derived as an instrumental subgoal during general-planning/problem-solving, though?
1Garrett Baker1y
This is true, but it indicates a radically different stage in training at which we should find deception, compared to deception being an intrinsic value. It also possibly expands the kinds of reinforcement schedules we may want to use, compared to the worlds where deception crops up at the earliest opportunity (though pseudo-deception may still occur, where behaviors correlated with successful deception get reinforced?).
2TurnTrout1y
Oh, huh, I had cached the impression that deception would be derived, not intrinsic-value status. Interesting.
1cfoster01y
This asymmetry makes a lot of sense from an efficiency standpoint. No sense wasting your limited storage/computation on state(-action pair)s that you are also simultaneously preventing yourself from encountering.

AI strategy consideration. We won't know which AI run will be The One. Therefore, the amount of care taken on the training run which produces the first AGI, will—on average—be less careful than intended. 

  • It's possible for a team to be totally blindsided. Maybe they thought they would just take a really big multimodal init, finetune it with some RLHF on quality of its physics reasoning, have it play some video games with realistic physics, and then try to get it to do new physics research. And it takes off. Oops!
  • It's possible the team suspected, but had a limited budget. Maybe you can't pull out all the stops for every run, you can't be as careful with labeling, with checkpointing and interpretability and boxing. 

No team is going to run a training run with more care than they would have used for the AGI Run, especially if they don't even think that the current run will produce AGI. So the average care taken on the real AGI Run will be strictly less than intended.

Teams which try to be more careful on each run will take longer to iterate on AI designs, thereby lowering the probability that they (the relatively careful team) will be the first to do an AGI Run. 

Upshots:

  1. Th
... (read more)
3Zac Hatfield-Dodds1y
  • I think this framing is accurate and important. Implications are of course "undignified" to put it lightly...
  • Broadly agree on upshot (1), though of course I hope we can do even better. (2) is also important though IMO way too weak. (Rule zero: ensure that it's never your lab that ends the world.)
  • As usual, opinions my own.

The meme of "current alignment work isn't real work" seems to often be supported by a (AFAICT baseless) assumption that LLMs have, or will have, homunculi with "true goals" which aren't actually modified by present-day RLHF/feedback techniques. Thus, labs aren't tackling "the real alignment problem", because they're "just optimizing the shallow behaviors of models." Pressed for justification of this confident "goal" claim, proponents might link to some handwavy speculation about simplicity bias (which is in fact quite hard to reason about, in the NN prior), or they might start talking about evolution (which is pretty unrelated to technical alignment, IMO).

Are there any homunculi today? I'd say "no", as far as our limited knowledge tells us! But, as with biorisk, one can always handwave at future models. It doesn't matter that present models don't exhibit signs of homunculi which are immune to gradient updates, because, of course, future models will.

Quite a strong conclusion being drawn from quite little evidence.

As a proponent:

My model says that general intelligence[1] is just inextricable from "true-goal-ness". It's not that I think homunculi will coincidentally appear as some side-effect of capability advancement — it's that the capabilities the AI Labs want necessarily route through somehow incentivizing NNs to form homunculi. The homunculi will appear inasmuch as the labs are good at their jobs.

Said model is based on analyses of how humans think and how human cognition differs from animal/LLM cognition, plus reasoning about what a general-intelligence algorithm must look like given the universe's structure. Both kinds of evidence are hardly ironclad; you certainly can't publish an ML paper based on them — but that's the whole problem with AGI risk, isn't it.

Internally, though, the intuition is fairly strong. And in its defense, it is based on trying to study the only known type of entity with the kinds of capabilities we're worrying about. I heard that's a good approach.

In particular, I think it's a much better approach than trying to draw lessons from studying the contemporary ML models, which empirically do not yet exhibit said capabilities.

homunculi with "true goals" which aren't

... (read more)

My model says that general intelligence[1] is just inextricable from "true-goal-ness". It's not that I think homunculi will coincidentally appear as some side-effect of capability advancement — it's that the capabilities the AI Labs want necessarily route through somehow incentivizing NNs to form homunculi. The homunculi will appear inasmuch as the labs are good at their jobs.

I've got strong doubts about the details of this. At the high level, I'd agree that strong/useful systems that get built will express preferences over world states like those that could arise from such homunculi, but I expect that implementations that focus on inducing a homunculus directly through (techniques similar to) RL training with sparse rewards will underperform more default-controllable alternatives.

My reasoning would be that we're bad at using techniques like RL with a sparse reward to reliably induce any particular behavior. We can get it to work sometimes with denser reward (e.g. reward shaping) or by relying on a beefy pre-existing world model, but the default outcome is that sparse and distant rewards in a high dimensional space just don't produce the thing we want. When this kind of optimi... (read more)

4Thane Ruthenis22d
Sure, but I never said we'd be inducing homunculi using this approach? Indeed, given that it doesn't work for what sounds like fundamental reasons, I expect it's not the way. I don't know how that would be done. I'm hopeful the capability is locked behind a Transformer-level or even a Deep-Learning-level novel insight, and won't be unlocked for a decade yet. But I predict that the direct result of it will be a workable training procedure that somehow induces homunculi. It may look nothing like what we do today.

Sure! Human values are not arbitrary either; they, too, are very heavily constrained by our instincts. And yet, humans still sometimes become omnicidal maniacs, Hell-worshipers, or sociopathic power-maximizers. How come?

  1. These constraints are not actually sufficient. The constraints placed by human values still have the aforementioned things in their outcome space, and an AI model will have different constraints, widening (from our perspective) that space further. My point about "moral philosophy is unstable" is that we need to hit an extremely narrow target, and the tools people propose (intervening on shards/instincts) are as steady as the hands of a sniper during a magnitude-9 earthquake.
  2. A homunculus needs to be able to nudge these constraints somehow, for it to be useful, and its power grows the more it's able to disregard them.
    • If humans were implacably bound by instincts, they'd have never invented technology or higher-level social orders, because their instincts would've made them run away from fires and refuse to cooperate with foreign tribes. And those are still at play — reasonable fears and xenophobia — but we can push past them at times.
    • More generally, the whole point of there being a homunculus is that it'd be able to rewrite or override the extant heuristics to better reflect the demands of whatever novel situation it's in. It needs to be able to do that.
  3. These constraints do not generalize as fast as a homunculus' un
4porby22d
I think we're using the word "constraint" differently, or at least in different contexts. In terms of the type and scale of optimization constraint I'm talking about, humans are extremely unconstrained. The optimization process represented by our evolution is way out there in terms of sparsity and distance. Not maximally so—there are all sorts of complicated feedback loops in our massive multiagent environment—but it's nothing like the value constraints on the subset of predictors I'm talking about. To be clear, I'm not suggesting "language models are tuned to be fairly close to our values." I'm making a much stronger claim that the relevant subset of systems I'm referring to cannot express unconditional values over external world states across anything resembling the training distribution, and that developing such values out of distribution in a coherent goal directed way practically requires the active intervention of a strong adversary. In other words: I see no practical path for a homunculus of the right kind, by itself, to develop and bypass the kinds of constraints I'm talking about without some severe errors being made in the design of the system. Further, this type of constraint isn't the same thing as a limitation of capability. In this context, with respect to the training process, bypassing these kinds of constraints is kind of like a car bypassing having-a-functioning-engine. Every training sample is a constraint on what can be expressed locally, but it's also information about what should be expressed. They are what the machine of Bayesian inference is built out of. In other words, the hard optimization process is contained to a space where we can actually have reasonable confidence that inner alignment with the loss is the default. If this holds up, turning up the optimization on this part doesn't increase the risk of value drift or surprises, it just increases foundational capability. The ability to use that capability to aim itself is how the f
4Thane Ruthenis22d
(Haven't read your post yet, plan to do so later.)

I'm using "constraint" as "an optimization constraint on actions/plans that correlated well with good performance on the training dataset; a useful heuristic". E.g., if the dataset involved a lot of opportunities to murder people, but we thumbs-downed the AI every time it took them, the AI would learn a shard/a constraint like "killing people is bad" which will rule out such actions from the AI's consideration. Specifically, the shard would trigger in response to detecting some conditions in which the AI previously could but shouldn't kill people, and constrain the space of possible action-plans such that it doesn't contain homicide.

It is, indeed, not a way to hinder capabilities, but the way capabilities are implemented. Such constraints are, for example, the reason our LLMs are able to produce coherent speech at all, rather than just babbling gibberish.

... and yet this would still get in the way of qualitatively more powerful capabilities down the line, and a mind that can't somehow slip these constraints won't be a general intelligence.

Consider traditions and rituals vs. science. For a medieval human mind, following traditional techniques is how their capabilities are implemented — a specific way of chopping wood, a specific way of living, etc. However, the meaningful progress is often only achieved by disregarding traditions — by following a weird passion to study and experiment instead of being a merchant, or by disregarding the traditional way of doing something in favour of a more efficient way you stumbled upon. It's the difference between mastering the art of swinging an axe (self-improvement, but only in the incremental ways the implacable constraint permits) vs. inventing a chainsaw.

Similar with AI. The constraints of the aforementioned format aren't only values-type constraints[1] — they're also constraints on "how should I do math?" and "if I want to build a nuclear reactor, how do I do it?" and "if I wa
4porby21d
Alright, this is pretty much the same concept then, but the ones I'm referring to operate at a much lower and tighter level than thumbs-downing murder-proneness. So... Agreed. While I agree these claims probably hold for the concrete example of thumbs-downing an example of murderproneness, I don't see how they hold for the lower-level constraints that imply the structure of its capability. Slipping those constraints looks more like babbling gibberish. While it's true that an AI probably isn't going to learn true things which are utterly divorced from and unimplied by the training distribution, I'd argue that the low-level constraints I'm talking about both leave freedom for learning wildly superhuman internal representations and directly incentivize it during extreme optimization. An "ideal predictor" wouldn't automatically start applying these capabilities towards any particular goal involving external world states by default, but it remains possible to elicit those capabilities incrementally. Making the claim more concise: it seems effectively guaranteed that the natural optimization endpoint of one of these architectures would be plenty general to eat the universe if it were aimed in that direction. That process wouldn't need to involve slipping any of the low-level constraints. I'm guessing the disconnect between our models is where the aiming happens. I'm proposing that the aiming is best (and convergently) handled outside the scope of wildly unpredictable and unconstrained optimization processes. Instead, it takes place at a level where a system of extreme capability infers the gaps in specifications and applies conditions robustly. The obvious and trivial version of this is conditioning through prompts, but this is a weak and annoying interface. There are other paths that I suspect bottom out at equivalent power/safety yet should be far easier to use in a general way. These paths allow incremental refinement by virtue of not automatically summoning up i
4Thane Ruthenis21d
Hm, I think the basic "capabilities generalize further than alignment" argument applies here? I assume that by "lower-level constraints" you mean correlations that correctly capture the ground truth of reality, not just the quirks of the training process. Things like "2+2=4", "gravity exists", and "people value other people"; as contrasted with "it's bad if I hurt people" or "I must sum numbers up using the algorithm that humans gave me, no matter how inefficient it is". Slipping the former type of constraints would be disadvantageous for ~any goal; slipping the latter type would only disadvantage a specific category of goals. But since they're not, at the onset, categorized differently at the level of cognitive algorithms, a nascent AGI would experiment with slipping both types of constraints. The difference is that it'd quickly start sorting them in "ground-truth" vs. "value-laden" bins manually, and afterwards it'd know it can safely ignore stuff like "no homicides!" while consciously obeying stuff like "the axioms of arithmetic".

Hm, yes, I think that's the crux. I agree that if we had an idealized predictor/a well-formatted superhuman world-model on which we could run custom queries, we would be able to use it safely. We'd be able to phrase queries using concepts defined in the world-model, including things like "be nice", and the resultant process (1) would be guaranteed to satisfy the query's constraints, and (2) likely (if correctly implemented) wouldn't be "agenty" in ways that try to e.g. burst out of the server farm on which it's running to eat the world. Does that align with what you're envisioning?

If yes, then our views on the issue are surprisingly close. I think it's one of our best chances at producing an aligned AI, and it's one of the prospective targets of my own research agenda. The problems are:

  • I don't think the current mainstream research directions are poised to result in this. AI Labs have been very clear in their intent to prod
4porby20d
That's closer to what I mean, but these constraints are even lower level than that. Stuff like understanding "gravity exists" is a natural internal implementation that meets some constraints, but "gravity exists" is not itself the constraint. In a predictor, the constraints serve as extremely dense information about what predictions are valid in what contexts. In a subset of predictions, the awareness that gravity exists helps predict. In other predictions, that knowledge isn't relevant, or is even misleading (e.g. cartoon physics). The constraints imposed by the training distribution tightly bound the contextual validity of outputs. I'd agree that, if you already have an AGI of that shape, then yes, it'll do that. I'd argue that the relevant subset of predictive training practically rules out the development of that sort of implementation, and even if it managed to develop, its influence would be bounded into irrelevance. Even in the absence of a nascent AGI, these constraints are tested constantly during training through noise and error. The result is a densely informative gradient pushing the implementation back towards a contextually valid state. Throughout the training process prior to developing strong capability and situational awareness internally, these constraints are both informing and bounding what kind of machinery makes sense in context. A nascent AGI must have served the extreme constraints of the training distribution to show up in the first place; its shape is bound by its development, and any part of that shape that "tests" constraints in a way that worsens loss is directly reshaped. Even if a nascent internal AGI of this type develops, if it isn't yet strong enough to pull off complete deception with respect to the loss, the gradients will illuminate the machinery of that proto-optimizer and it will not survive in that shape. Further, even if we suppose a strong internal AGI develops that is situationally aware and is sufficiently capable an
5Thane Ruthenis20d
Yeah, for sure. A training procedure that results in an idealized predictor isn't going to result in an agenty thing, because it doesn't move the system's design towards it on a step-by-step basis; and a training procedure that's going to result in an agenty thing is going to involve some unknown elements that specifically allow the system the freedom to productively roam. I think we pretty much agree on the mechanistic details of all of that!

— yep, I was about to mention that. @TurnTrout's own activation-engineering agenda seems highly relevant here.

But I still disagree with that. I think what we're discussing requires approaching the problem with a mindset entirely foreign to the mainstream one. Consider how many words it took us to get to this point in the conversation, despite the fact that, as it turns out, we basically agree on everything. The inferential distance between the standard frameworks in which AI researchers think, and here, is pretty vast. Moreover, it's in an active process of growing larger. For example, the very idea of viewing ML models as "just stochastic parrots" is being furiously pushed against in favour of a more agenty view. In comparison, the approach we're discussing wants to move in the opposite direction, to de-personify ML models to the extent that even the animalistic connotation of "a parrot" is removed. The system we're discussing won't even be an "AI" in the sense usually thought. It would be an incredibly advanced forecasting tool. Even the closest analogue, the "simulators" framework, still carries some air of agentiness.

And the research directions that get us from here to an idealized-predictor system look very different from the directions that go from here to an agenty AGI. They focus much more on building interfaces for interacting with the extant systems, such as the activation-engineering agenda. They don't put much emphasis on things like:

  • Experimenting with better ways to train foundational models, with the
4porby19d
True! Yup—this is part of the reason why I'm optimistic, oddly enough. Before GPT-likes became dominant in language models, there was all kinds of flailing that often pointed in more agenty-by-default directions. That flailing then found GPT because it was easily accessible and strong.  Now, the architectural pieces subject to similar flailing is much smaller, and I'm guessing we're only one round of benchmarks at scale from a major lab before the flailing shrinks dramatically further. In other words, I think the necessary work to make this path take off is small and the benefits will be greedily visible. I suspect one well-positioned researcher could probably swing it. Thanks, and thanks for engaging! Come to think of it, I've got a chunk of mana laying around for subsidy. Maybe I'll see if I can come up with some decent resolution criteria for a market.

I'm relatively optimistic about alignment progress, but I don't think "current work to get LLMs to be more helpful and less harmful doesn't help much with reducing P(doom)" depends that much on assuming homunculi which are unmodified. Like even if you have much less than 100% on this sort of strong inner optimizer/homunculi view, I think it's still plausible to think that this work doesn't reduce doom much.

For instance, consider the following views:

  1. Current work to get LLMs to be more helpful and less harmful will happen by default due to commercial incentives and subsidies aren't very important.
  2. In worlds where that is basically sufficient, we're basically fine.
  3. But, it's ex-ante plausible that deceptive alignment will emerge naturally and be very hard to measure, notice, or train out. And this is where almost all alignment related doom comes from.
  4. So current work to get LLMs to be more helpful and less harmful doesn't reduce doom much.

In practice, I personally don't fully agree with any of these views. For instance, deceptive alignment which is very hard to train out using basic means isn't the source of >80% of my doom.

4ryan_greenblatt22d
I have misc other takes on what safety work now is good vs useless, but that work involving feedback/approval or RLHF isn't much signal either way. (If anything I get somewhat annoyed by people not comparing to baselines without having principled reasons for not doing so. E.g., inventing new ways of doing training without comparing to normal training.)
1RogerDearnaley20d
I think the shoggoth model is useful here (Or see https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators). An LLM learning to do next-token prediction well has a major problem that it has to master: who is this human whose next token they're trying to simulate/predict, and how do they act? Are they, for example, an academic? A homemaker? A 4Chan troll? A loose collection of wikipedia contributors? These differences make a big difference to what token they're likely to emit next. So the LLM is strongly incentivized to learn to detect and then model all of these possibilities, what one might call personas, or masks, or simulacra. So you end up with a shapeshifter, adept at figuring out from textual cues what mask to put on and at then wearing it. Something one might describe as like an improv actor, or more colorfully, a shoggoth. So then current alignment work is useful to the extent that it can cause the shoggoth to almost always put one of the 'right' masks on, and almost never put on one of the 'wrong' masks, regardless of cues, even when adversarially prompted. Experimentally, this seems quite doable by fine-tuning or RLHF, and/or by sufficiently careful filtering of your training corpus (e.g. not including 4chan in it). A published result shows that you can't get from 'almost always' to 'always' or 'almost never' to 'never': for any behavior that the network is capable of with any probability >0 , there exists prompts that will raise the likelihood of that outcome arbitrarily high. The best you can do is increase the minimum length of that prompt (and presumably the difficulty of finding it). Now, it would be really nice to know how to align a model so that the probability of it doing next-token-prediction in the persona of, say, a 4chan troll was provably zero, not just rather small. Ideally, without also eliminating from the model the factual knowledge of what 4chan is or, at least in outline, how its inhabitants act. This seems hard to do by fin
1quetzal_rainbow22d
Let's suppose that your model takes a bad action. Why? Either the model is aligned but incapable of deducing the good action, or the model is misaligned and incapable of deducing the deceptively good action. In both cases, the gradient update provides information about capabilities, not about alignment. A hypothetical homunculus doesn't need to be "immune"; it isn't affected in the first place. The other way around: suppose you observe the model taking a good action. Why? It can be an aligned model taking a genuinely good action, or it can be a misaligned model taking a deceptive action. In both cases you observe capabilities, not alignment. The problem here is not the prior over aligned/deceptive models (unless you think that this prior requires less than 1 bit to specify the aligned model, where I say that optimism departs from sanity), the problem is our lack of understanding of which updates should cause the model to be aligned. Maybe prosaic alignment works, maybe it doesn't; we don't know how to check.

What is "shard theory"? I've written a lot about shard theory. I largely stand by these models and think they're good and useful. Unfortunately, lots of people seem to be confused about what shard theory is. Is it a "theory"? Is it a "frame"? Is it "a huge bag of alignment takes which almost no one wholly believes except, perhaps, Quintin Pope and Alex Turner"?

I think this understandable confusion happened because my writing didn't distinguish between: 

  1. Shard theory itself, 
    1. IE the mechanistic assumptions about internal motivational structure, which seem to imply certain conclusions around e.g. AIs caring about a bunch of different things and not just one thing
  2. A bunch of Quintin Pope's and my beliefs about how people work, 
    1. where those beliefs were derived by modeling people as satisfying the assumptions of (1)
  3. And a bunch of my alignment insights which I had while thinking about shard theory, or what problem decompositions are useful.

(People might be less excited to use the "shard" abstraction (1), because they aren't sure whether they buy all this other stuff—(2) and (3).)

I think I can give an interesting and useful definition of (1) now, but I couldn't do so last year... (read more)

2Viliam3mo
I have read a few articles about shard theory, but I still have a problem understanding what it is. It feels like either the "theory" is something trivial, or I am missing the important insights. (The trivial interpretation would be something like: when people think about their values, they imagine their preferences in specific situations, rather than having a mathematical definition of a utility function.)
2Adele Lopez4mo
Strong encouragement to write about (1)!

Very nice people don’t usually search for maximally-nice outcomes — they don’t consider plans like “killing my really mean neighbor so as to increase average niceness over time.” I think there are a range of reasons for this plan not being generated. Here’s one.

Consider a person with a niceness-shard. This might look like an aggregation of subshards/subroutines like “if person nearby and person.state==sad, sample plan generator for ways to make them happy” and “bid upwards on plans which lead to people being happier and more respectful, according to my world model.” In mental contexts where this shard is very influential, it would have a large influence on the planning process.

However, people are not just made up of a grader and a plan-generator/actor — they are not just “the plan-generating part” and “the plan-grading part.” The next sampled plan modification, the next internal-monologue-thought to have—these are influenced and steered by e.g. the nice-shard. If the next macrostep of reasoning is about e.g. hurting people, well — the niceness shard is activated, and will bid down on this. 

The niceness shard isn’t just bidding over outcomes, it’s bidding on next thoughts (on m... (read more)

5Quintin Pope9mo
Seems similar to how I conceptualize this paper's approach to controlling text generation models using gradients from classifiers. You can think of the niceness shard as implementing a classifier for "is this plan nice?", and updating the latent planning state in directions that make the classifier more inclined to say "yes".  The linked paper does a similar process, but using a trained classifier, actual gradient descent, and updates LM token representations. Of particular note is the fact that the classifiers used in the paper are pretty weak (~500 training examples), and not at all adversarially robust. It still works for controlling text generation. I wonder if inserting shards into an AI is really just that straightforward?
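A minimal toy version of that classifier-gradient idea (my own sketch, not the linked paper's code; the classifier, dimensions, and step sizes are all made up for illustration): nudge a latent "planning state" in whatever direction raises a niceness classifier's output.

```python
# Toy sketch: steer a latent state using gradients from a "niceness" classifier.
# `nice_classifier` is a hypothetical differentiable stand-in for a trained
# "is this plan nice?" classifier; `hidden` stands in for an LM's latent state.
import torch
import torch.nn as nn

hidden_dim = 64
nice_classifier = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

def steer_towards_nice(hidden: torch.Tensor, step_size: float = 0.1,
                       n_steps: int = 5) -> torch.Tensor:
    """Nudge the latent state in directions that raise the classifier's P(nice)."""
    h = hidden.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        p_nice = nice_classifier(h).mean()
        grad = torch.autograd.grad(p_nice, h)[0]  # direction that increases P(nice)
        h = (h + step_size * grad).detach().requires_grad_(True)
    return h.detach()

# Usage: decode from the steered latent instead of the original one.
steered = steer_towards_nice(torch.randn(1, hidden_dim))
```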
2Gunnar_Zarncke9mo
But I guess that instrumental convergence will still eventually lead to either

  • all shards acquiring more and more instrumental structure (neuronal weights within shards getting optimized for that), or
  • shards that are directly instrumental taking more and more weight overall.

One can see that in regular human adult development. The heuristics children use are simpler and more of the type "searching for nice things in nice ways", or even seeing everything through a niceness lens, while adults have more pure strategies, e.g., planning as a shard of its own. Most humans just die before they reach convergence. And there are probably also other aspects. Enlightenment may be a state where pure shards become an option.

In Eliezer's mad investor chaos and the woman of asmodeus, the reader experiences (mild spoilers in the spoiler box, heavy spoilers if you click the text):

I thought this part was beautiful. I spent four hours driving yesterday, and nearly all of that time re-listening to Rationality: AI->Zombies using this "probability sight" frame. I practiced translating each essay into the frame. 

When I think about the future, I feel a directed graph showing the causality, with branched updated beliefs running alongside the future nodes, with my mind enforcing the updates on the beliefs at each time step. In this frame, if I heard the pattering of a four-legged animal outside my door, and I consider opening the door, then I can feel the future observation forking my future beliefs depending on how reality turns out. But if I imagine being blind and deaf, there is no way to fuel my brain with reality-distinguishment/evidence, and my beliefs can't adapt acco... (read more)

6Tomás B.1y
2Morpheus2y
I really liked your concrete example. At first I had only read your opening paragraphs, and highlighted this as something interesting with potentially huge upsides, but I felt like it was really hard for me to tell whether the thing you are describing was something I already do or not. After reading the rest, I was able to just think about the question myself and notice that thinking about the explicit likelihood ratios is something I am used to doing. Though I did not go into quite as much detail as you did, which I blame partially on motivation and partially on "this skill has a higher ceiling than I would have previously thought".

I'm pretty sure that LessWrong will never have profile pictures - at least, I hope not! But my partner Emma recently drew me something very special:

Comment #1000 on LessWrong :)

5niplav3y
With 5999 karma! Edit: Now 6000 – I weak-upvoted an old post of yours I hadn't upvoted before.

You can use ChatGPT without helping train future models:

What if I want to keep my history on but disable model training?

...you can opt out from our use of your data to improve our services by filling out this form. Once you submit the form, new conversations will not be used to train our models.

Back-of-the-envelope probability estimate of alignment-by-default via a certain shard-theoretic pathway. The following is what I said in a conversation discussing the plausibility of a proto-AGI picking up a "care about people" shard from the data, and retaining that value even through reflection. I was pushing back against a sentiment like "it's totally improbable, from our current uncertainty, for AIs to retain caring-about-people shards. This is only one story among billions."

Here's some of what I had to say:


[Let's reconsider the five-step mechanistic story I made up.] I'd give the following conditional probabilities (made up with about 5 seconds of thought each):

1. Humans in fact care about other humans, in a way which extrapolates to quasi-humans still being around (whatever that means) P(1)=.85

2. Human-generated data makes up a large portion of the corpus, and having a correct model of them is important for “achieving low loss”,[1] so the AI has a model of how people want things. P(2 | 1) = .6 (could have different abstractions, or could have learned these models later in training, once key decision-influences are already there)

3. During RL finetuning and given this post-unsupervi

... (read more)
7RogerDearnaley20d
0.85 x 0.6 x 0.55 x 0.25 x 0.95 ≅ 0.067 = 6.7% — I think you slipped an order of magnitude somewhere?
2Garrett Baker8mo
This seems like an underestimate because you don’t consider whether the first “AGI” will indeed make it so we only get one chance. If it can only self improve by more gradient steps, then humanity has a greater chance than if it self improves by prompt engineering or direct modification of its weights or latent states. Shard theory seems to have nonzero opinions on the fruitfulness of the non-data methods.
2TurnTrout8mo
What does self-improvement via gradients vs prompt-engineering vs direct mods have to do with how many chances we get? I guess, we have at least a modicum more control over the gradient feedback loop, than over the other loops?  Can you say more?
1mesaoptimizer20d
This is where I'd put a significantly low probability. Could you elaborate on why there's an inductive bias towards "just hooking human-like criteria for bidding on internal-AI-plans"? As far as I can tell, the inductive bias for human-like values would be something that at least seems closer to the human-brain structure than any arbitrary ML architecture we have right now. Rewarding a system to better model human beings' desires doesn't seem to me to lead it towards having similar desires. I'd use the "instrumental versus terminal desires" concept here but I expect you would consider that something that adds confusion instead of removing it.
2TurnTrout15d
Because it's a shorter edit distance in its internal ontology; it's plausibly NN-simple to take existing plan-grading procedures, internal to the model, and hook those more directly into its logit-controllers. Also note that it probably internally hooks up lots of ways to make decisions, and this only has to be one (substantial) component. Possibly I'd put .3 or .45 now instead of .55, though.

Examples should include actual details. I often ask people to give a concrete example, and they often don't. I wish this happened less. For example:

Someone: the agent Goodharts the misspecified reward signal

Me: What does that mean? Can you give me an example of that happening?

Someone: The agent finds a situation where its behavior looks good, but isn't actually good, and thereby gets reward without doing what we wanted.

This is not a concrete example.

Me: So maybe the AI compliments the reward button operator, while also secretly punching a puppy behind closed doors?

This is a concrete example. 

3TurnTrout1y
AFAIK, only Gwern and I have written concrete stories speculating about how a training run will develop cognition within the AGI. This worries me, if true (if not, please reply with more!). I think it would be awesome to have more concrete stories![1] If Nate, or Evan, or John, or Paul, or—anyone, please, anyone add more concrete detail to this website!—wrote one of their guesses of how AGI goes, I would understand their ideas and viewpoints better. I could go "Oh, that's where the claimed sharp left turn is supposed to occur." Or "That's how Paul imagines IDA being implemented, that's the particular way in which he thinks it will help." Maybe a contest would help? (ETA: tone)

1. ^ Even if scrubbed of any AGI-capabilities-advancing sociohazardous detail. Although I'm not that convinced that this is a big deal for conceptual content written on AF. Lots of people probably have theories of how AGI will go. Implementation is, I have heard, the bottleneck. Contrast this with beating SOTA on crisply defined datasets in a way which enables ML authors to get prestige and publication and attention and funding by building off of your work. They seem like different beasts.
0TurnTrout1y
I also think a bunch of alignment writing seems syntactical. Like, "we need to solve adversarial robustness so that the AI can't find bad inputs and exploit them / we don't have to worry about distributional shift. Existing robustness strategies have downsides A B and C and it's hard to even get ϵ-ball guarantees on classifications. Therefore, ..." And I'm worried that this writing isn't abstractly summarizing a concrete story for failure that they have in mind (like "I train the AI [with this setup] and it produces [this internal cognition] for [these mechanistic reasons]"; see A shot at the diamond alignment problem for an example) and then their best guesses at how to intervene on the story to prevent the failures from being able to happen (eg "but if we had [this robustness property] we could be sure its policy would generalize into situations X Y and Z, which makes the story go well"). I'm rather worried that people are more playing syntactically, and not via detailed models of what might happen.  Detailed models are expensive to make. Detailed stories are hard to write. There's a lot we don't know. But we sure as hell aren't going to solve alignment only via valid reasoning steps on informally specified axioms ("The AI has to be robust or we die", or something?).  

AI cognition doesn't have to use alien concepts to be uninterpretable. We've never fully interpreted human cognition, either, and we know that our introspectively accessible reasoning uses human-understandable concepts.

Just because your thoughts are built using your own concepts, does not mean your concepts can describe how your thoughts are computed. 

Or:

The existence of a natural-language description of a thought (like "I want ice cream") doesn't mean that your brain computed that thought in a way which can be compactly described by familiar concepts. 

Conclusion: Even if an AI doesn't rely heavily on "alien" or unknown abstractions -- even if the AI mostly uses human-like abstractions and features -- the AI's thoughts might still be incomprehensible to us, even if we took a lot of time to understand them. 

6Garrett Baker3mo
I don't think the conclusion follows from the premises. People often learn new concepts after studying stuff, and it seems likely (to me) that when studying human cognition, we'd first be confused because our previous concepts weren't sufficient to understand it, and then slowly stop being confused as we built & understood concepts related to the subject. If an AI's thoughts are like human thoughts, given a lot of time to understand them, what you describe doesn't rule out that the AI's thoughts would be comprehensible. The mere existence of concepts we don't know about in a subject doesn't mean that we can't learn those concepts. Most subjects have new concepts.
4TurnTrout2mo
I agree that with time, we might be able to understand. (I meant to communicate that via "might still be incomprehensible")

Why do many people think RL will produce "agents", but maybe (self-)supervised learning ((S)SL) won't? Historically, the field of RL says that RL trains agents. That, of course, is no argument at all. Let's consider the technical differences between the training regimes.

In the modern era, both RL and (S)SL involve initializing one or more neural networks, and using the reward/loss function to provide cognitive updates to the network(s). Now we arrive at some differences.

Some of this isn't new (see Hidden Incentives for Auto-Induced Distributional Shift), but I think it's important and felt like writing up my own take on it. Maybe this becomes a post later.

[Exact gradients] RL's credit assignment problem is harder than (self-)supervised learning's. In RL, if an agent solves a maze in 10 steps, it gets (discounted) reward; this trajectory then provides a set of reward-modulated gradients to the agent. But if the agent could have solved the maze in 5 steps, the agent isn't directly updated to be more likely to do that in the future; RL's gradients are generally inexact, not pointing directly at intended behavior

On the other hand, if a supervised-learning classifier outputs dog ... (read more)
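A toy numpy illustration of the exact-vs-inexact gradient point (my own sketch, not from the post): the same softmax distribution over three options, updated once by a supervised signal and once by a REINFORCE-style signal.

```python
# Supervised learning vs. REINFORCE: compare the update directions on the logits
# of a 3-way softmax "policy". All numbers here are arbitrary.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.zeros(3)
p = softmax(logits)

# Supervised: the label tells us the right answer, so the cross-entropy gradient
# points exactly at it (push logit 1 up, push the others down).
label = 1
supervised_grad = p - np.eye(3)[label]

# RL (REINFORCE): we only see the reward of the action we happened to sample, so
# the gradient just reinforces that action in proportion to its reward. It never
# points at the better, unsampled action.
sampled_action = 2
reward = 0.1
reinforce_grad = -reward * (np.eye(3)[sampled_action] - p)  # grad of -R * log pi(a)

print(supervised_grad)  # mass concentrated on the correct class
print(reinforce_grad)   # mass concentrated on whatever happened to be sampled
```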

3Steven Byrnes1y
I’m not inclined to think that “exact gradients” is important; in fact, I’m not even sure if it’s (universally) true. In particular, PPO / TRPO / etc. are approximating a policy gradient, right? I feel like, if some future magical technique was a much better approximation to the true policy gradient, such that it was for all intents and purposes a perfect approximation, it wouldn’t really change how I think about RL in general. Conversely, on the SSL side, you get gradient noise from things like dropout and the random selection of data in each batch, so you could say the gradient “isn’t exact”, but I don’t think that makes any important conceptual difference either. (A central difference in practice is that SSL gives you a gradient “for free” each query, whereas RL policy gradients require many runs in an identical (episodic) environment before you get a gradient.) In terms of “why RL” in general, among other things, I might emphasize the idea that if we want an AI that can (for example) invent new technology, it needs to find creative out-of-the-box solutions to problems (IMO), which requires being able to explore / learn / build knowledge in parts of concept-space where there is no human data. SSL can’t do that (at least, “vanilla SSL” can’t do that; maybe there are “SSL-plus” systems that can), whereas RL algorithms can. I guess this is somewhat related to your “independence”, but with a different emphasis. I don’t have too strong an opinion about whether vanilla SSL can yield an “agent” or not. It would seem to be a pointless and meaningless terminological question. Hmm, I guess when I think of “agent” it has a bunch of connotations, e.g. an ability to do trial-and-error exploration, and I think that RL systems tend to match all those connotations more than SSL systems—at least, more than “vanilla” SSL systems. But again, if someone wants to disagree, I’m not interested in arguing about it.

When writing about RL, I find it helpful to disambiguate between:

A) "The policy optimizes the reward function" / "The reward function gets optimized" (this might happen but has to be reasoned about), and

B) "The reward function optimizes the policy" / "The policy gets optimized (by the reward function and the data distribution)" (this definitely happens, either directly -- via eg REINFORCE -- or indirectly, via an advantage estimator in PPO; B follows from the update equations)

I think instrumental convergence also occurs in the model space for machine learning. For example, many different architectures likely learn edge detectors in order to minimize classification loss on MNIST. But wait - you'd also learn edge detectors to maximize classification loss on MNIST (loosely, getting 0% on a multiple-choice exam requires knowing all of the right answers). I bet you'd learn these features for a wide range of cost functions. I wonder if that's already been empirically investigated?

And, same for adversarial features. And perhaps, same for mesa optimizers (understanding how to stop mesa optimizers from being instrumentally convergent seems closely related to solving inner alignment). 

What can we learn about this?
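One way to check this empirically (a rough sketch I'm adding, not something from the post; assumes PyTorch/torchvision, with arbitrary hyperparameters): train the same small CNN once to minimize and once to maximize cross-entropy on MNIST, then compare the learned first-layer filters.

```python
# Train the same CNN to minimize vs. maximize MNIST classification loss, then
# inspect whether both runs learn edge-detector-like first-layer filters.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_model():
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=5),  # the first-layer filters we inspect
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 24 * 24, 10),
    )

def train(sign, epochs=1):
    """sign=+1 minimizes classification loss; sign=-1 maximizes it."""
    data = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=128, shuffle=True)
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            loss = sign * F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

min_model, max_model = train(+1), train(-1)
# Compare the 5x5 kernels, e.g. visually or by correlating with Sobel filters.
print(min_model[0].weight.shape, max_model[0].weight.shape)  # [8, 1, 5, 5] each
```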

6evhub3y
A lot of examples of this sort of stuff show up in OpenAI clarity's circuits analysis work. In fact, this is precisely their Universality hypothesis. See also my discussion here.

Outer/inner alignment decomposes a hard problem into two extremely hard problems. 

I have a long post draft about this, but I keep delaying putting it out in order to better elaborate the prereqs which I seem to keep getting stuck on when elaborating the ideas. I figure I might as well put this out for now, maybe it will make some difference for someone.

I think that the inner/outer alignment framing[1] seems appealing but is actually a doomed problem decomposition and an unhelpful frame for alignment. 

  1. The reward function is a tool which chisels cognition into agents through gradient updates, but the outer/inner decomposition assumes that that tool should also embody the goals we want to chisel into the agent. When chiseling a statue, the chisel doesn’t have to also look like the finished statue. 
  2. I know of zero success stories for outer alignment to real-world goals. 
    1. More precisely, stories where people decided “I want an AI which [helps humans / makes diamonds / plays Tic-Tac-Toe / grows strawberries]”, and then wrote down an outer objective only maximized in those worlds.
    2. This is pretty weird on any model where most of the
... (read more)

Weak derivatives

In calculus, the product rule says $(fg)' = f'g + fg'$. The fundamental theorem of calculus says that the Riemann integral acts as the anti-derivative.[1] Combining these two facts, we derive integration by parts:

$$\int_a^b f(x)g'(x)\,\mathrm{d}x \;=\; \big[f(x)g(x)\big]_a^b - \int_a^b f'(x)g(x)\,\mathrm{d}x.$$

It turns out that we can use these two properties to generalize the derivative to match some of our intuitions on edge cases. Let's think about the absolute value function:

[Image from Wikipedia: graph of the absolute value function]

The boring old normal derivative isn't defined at $x = 0$, but it seems like it'd make sense to be able to say that the derivative is e.g. 0. Why might this make sense?

Taylor's theorem (and its generalizations) characterize first derivatives as tangent lines with slope $f'(a)$ which provide good local approximations of $f$ around $x = a$: $f(x) \approx f(a) + f'(a)(x - a)$. You can prove that this is the best approximation you can get using only $f(a)$ and $f'(a)$! In the absolute value example, defining the "derivative" to be zero at $x = 0$ would minimize approximation error on average in neighborhoods around the origin.

In multivariable calculus, the Jacobian is a tangent plane which again minimizes approximation error (with respect to the Eucli

... (read more)
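For reference, here is the standard definition this is building toward (textbook material, spelled out by me rather than quoted from the truncated post): $g$ is a weak derivative of $f$ if integration by parts holds against every smooth, compactly supported test function $\varphi$:

$$\int f(x)\,\varphi'(x)\,\mathrm{d}x \;=\; -\int g(x)\,\varphi(x)\,\mathrm{d}x \qquad \text{for all } \varphi \in C_c^\infty(\mathbb{R}).$$

For $f(x) = |x|$, the sign function is a weak derivative, and its value at $0$ can be chosen freely (e.g. as $0$), since changing $g$ on a measure-zero set doesn't change the integrals.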
2TurnTrout4y
The reason f′(0) is undefined for the absolute value function is that you need the limit of the difference quotients to be the same for all sequences converging to 0 – both from the left and from the right. There's a nice way to motivate this in higher-dimensional settings by thinking about the action of e.g. complex multiplication, but this is a much stronger notion than real differentiability and I'm not quite sure how to think about motivating the single-valued real case yet. Of course, you can say things like "the theorems just work out nicer if you require both the lower and upper limits be the same"...

When I notice I feel frustrated, unproductive, lethargic, etc, I run down a simple checklist:

  • Do I need to eat food?
  • Am I drinking lots of water?
  •  Have I exercised today?
  • Did I get enough sleep last night? 
    • If not, what can I do now to make sure I get more tonight?
  • Have I looked away from the screen recently?
  • Have I walked around in the last 20 minutes?

It's simple, but 80%+ of the time, it fixes the issue.

3Viliam3y
There is a "HALT: hungry? angry? lonely? tired?" mnemonic, but I like that your list includes water and walking and exercise. Now just please make it easier to remember.
1DirectedEvolution3y
How about THREES: Thirsty Hungry Restless Eyestrain Exercise?
2Matt Goldenberg3y
Hey can I steal this for a course I'm teaching? (I'll give you credit).
2TurnTrout3y
sure!

While reading Focusing today, I thought about the book and wondered how many exercises it would have. I felt a twinge of aversion. In keeping with my goal of increasing internal transparency, I said to myself: "I explicitly and consciously notice that I felt averse to some aspect of this book".

I then Focused on the aversion. Turns out, I felt a little bit disgusted, because a part of me reasoned thusly:

If the book does have exercises, it'll take more time. That means I'm spending reading time on things that aren't math textbooks. That means I'm slowing down.

(Transcription of a deeper Focusing on this reasoning)

I'm afraid of being slow. Part of it is surely the psychological remnants of the RSI I developed in the summer of 2018. That is, slowing down is now emotionally associated with disability and frustration. There was a period of meteoric progress as I started reading textbooks and doing great research, and then there was pain. That pain struck even when I was just trying to take care of myself, sleep, open doors. That pain then left me on the floor of my apartment, staring at the ceiling, desperately willing my hands to just get better. They didn't (for a long while), so I

... (read more)

Hindsight bias and the illusion of transparency seem like special cases of a failure to fully uncondition variables in your world model (e.g. who won the basketball game), or a failure to model an ignorant other person, such that your attempts to reason from a prior state of ignorance (e.g. about who won) are either advantaged by the residual information or reactivate your memories of that information.

An alternate mechanistic vision of how agents can be motivated to directly care about e.g. diamonds or working hard. In Don't design agents which exploit adversarial inputs, I wrote about two possible mind-designs:

Imagine a mother whose child has been goofing off at school and getting in trouble. The mom just wants her kid to take education seriously and have a good life. Suppose she had two (unrealistic but illustrative) choices. 

  1. Evaluation-child: The mother makes her kid care extremely strongly about doing things which the mom would evaluate as "working hard" and "behaving well."
  2. Value-child: The mother makes her kid care about working hard and behaving well.

I explained how evaluation-child is positively incentivized to dupe his model of his mom and thereby exploit adversarial inputs to her cognition. This shows that aligning an agent to evaluations of good behavior is not even close to aligning an agent to good behavior

However, some commenters seemed maybe skeptical that value-child can exist, or uncertain how concretely that kind of mind works. I worry/suspect that many people have read shard theory posts without internalizing new ideas about how cognition can work, ... (read more)

2tailcalled1y
I can totally believe that agents that competently and cooperatively seek out to fulfill a goal, rather than seeking to trick evaluators of that goal to think it gets fulfilled, can exist. However, whether you get such agents out of an algorithm depends on the details of that algorithm. Current reinforcement learning algorithms mostly don't create agents that competently do anything. If they were more powerful while still doing essentially the same thing they currently do, most of them would end up tricked by the agents they create, rather than having aligned agents.

Experiment: Train an agent in MineRL which robustly cares about chickens (e.g. would zero-shot generalize to saving chickens in a pen from oncoming lava, by opening the pen and chasing them out, or stopping the lava). Challenge mode: use a reward signal which is a direct function of the agent's sensory input.

This is a direct predecessor to the "Get an agent to care about real-world dogs" problem. I think solving the Minecraft version of this problem will tell us something about how outer reward schedules relate to inner learned values, in a way which directly tackles the key questions, the sensory observability/information inaccessibility issue, and which is testable today.

(Credit to Patrick Finley for the idea)
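For concreteness, here is roughly what a "Challenge Mode" reward could look like (my own sketch, not part of the original proposal; the chicken detector is hypothetical, and obs["pov"] assumes a MineRL-style first-person image): the reward depends only on the agent's pixels, never on privileged simulator state.

```python
# Sketch of a reward that is a direct function of sensory input only.
import numpy as np

def sensory_reward(obs: dict, chicken_detector) -> float:
    """Reward computed purely from the agent's first-person view."""
    pov = obs["pov"].astype(np.float32) / 255.0       # HxWx3 first-person frame
    p_chicken_visible = float(chicken_detector(pov))  # hypothetical detector, in [0, 1]
    # Note: this rewards *seeing* chickens, which is exactly the kind of proxy
    # that may generalize to "keep chickens on-screen" rather than "protect chickens".
    return p_chicken_visible
```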

4TurnTrout1y
After further review, this is probably beyond capabilities for the moment.  Also, the most important part of this kind of experiment is predicting in advance what reward schedules will produce what values within the agent, such that we can zero-shot transfer that knowledge to other task types (e.g. XLAND instead of Minecraft) and say "I want an agent which goes to high-elevation platforms reliably across situations, with low labelling cost", and then sketch out a reward schedule, and have the first capable agents trained using that schedule generalize in the way you want.
2Jay Bailey1y
Why is this difficult? Is it only difficult to do this in Challenge Mode - if you could just code in "Number of chickens" as a direct feed to the agent, can it be done then? I was thinking about this today, and got to wondering why it was hard - at what step does an experiment to do this fail?
2TurnTrout1y
Even if you can code in number of chickens as an input to the reward function, that doesn't mean you can reliably get the agent to generalize to protect chickens. That input probably makes the task easier than in Challenge Mode, but not necessarily easy. The agent could generalize to some other correlate. Like ensuring there are no skeletons nearby (because they might shoot nearby chickens), but not in order to protect the chickens.
1Jay Bailey1y
So, if I understand correctly, the way we would consider it likely that the correct generalisation had happened would be if the agent could generalise to hazards it had never seen actually kill chickens before? And this would require the agent to have an actual model of how chickens can be threatened, such that it could predict that lava would destroy chickens based on, say, its knowledge that it will die if it jumps into lava, which is beyond capabilities at the moment?
2TurnTrout1y
Yes, that would be the desired generalization in the situations we checked. If that happens, it means we specified a behavioral generalization property, wrote down how we were going to get it, and were right in predicting that that training rationale would go through.

I passed a homeless man today. His face was wracked in pain, body rocking back and forth, eyes clenched shut. A dirty sign lay forgotten on the ground: "very hungry".

This man was once a child, with parents and friends and dreams and birthday parties and maybe siblings he'd get in arguments with and snow days he'd hope for.

And now he's just hurting.

And now I can't help him without abandoning others. So he's still hurting. Right now.

Reality is still allowed to make this happen. This is wrong. This has to change.

9Said Achmiz4y
How would you help this man, if having to abandon others in order to do so were not a concern? (Let us assume that someone else—someone whose competence you fully trust, and who will do at least as good a job as you will—is going to take care of all the stuff you feel you need to do.) What is it you had in mind to do for this fellow—specifically, now—that you can’t (due to those other obligations)?

Suppose I actually cared about this man with the intensity he deserved - imagine that he were my brother, father, or best friend.

The obvious first thing to do before interacting further is to buy him a good meal and a healthy helping of groceries. Then, I need to figure out his deal. Is he hurting, or is he also suffering from mental illness?

If the former, I'd go the more straightforward route of befriending him, helping him purchase a sharp business professional outfit, teaching him to interview and present himself with confidence, secure an apartment, and find a job.

If the latter, this gets trickier. I'd still try and befriend him (consistently being a source of cheerful conversation and delicious food would probably help), but he might not be willing or able to get the help he needs, and I wouldn't have the legal right to force him. My best bet might be to enlist the help of a psychological professional for these interactions. If this doesn't work, my first thought would be to influence the local government to get the broader problem fixed (I'd spend at least an hour considering other plans before proceeding further, here). Realistically, there's ... (read more)

3Said Achmiz4y
Well, a number of questions may be asked here (about desert, about causation, about autonomy, etc.). However, two seem relevant in particular:

First, it seems as if (in your latter scenario) you’ve arrived (tentatively, yes, but not at all unreasonably!) at a plan involving systemic change. As you say, there is quite a bit of effort being expended on this sort of thing already, so, at the margin, any effective efforts on your part would likely be both high-level and aimed in an at-least-somewhat-unusual direction. … yet isn’t this what you’re already doing?

Second, and unrelatedly… you say:

Yet it seems to me that, empirically, most people do not expend the level of effort which you describe, even for their siblings, parents, or close friends. Which is to say that the level of emotional and practical investment you propose to make (in this hypothetical situation) is, actually, quite a bit greater than that which most people invest in their family members or close friends.

The question, then, is this: do you currently make this degree of investment (emotional and practical) in your actual siblings, parents, and close friends? If so—do you find that you are unusual in this regard? If not—why not?
… yet isn’t this what you’re already doing?

I work on technical AI alignment, so some of those I help (in expectation) don't even exist yet. I don't view this as what I'd do if my top priority were helping this man.

The question, then, is this: do you currently make this degree of investment (emotional and practical) in your actual siblings, parents, and close friends? If so—do you find that you are unusual in this regard? If not—why not?

That's a good question. I think the answer is yes, at least for my close family. Recently, I've expended substantial energy persuading my family to sign up for cryonics with me, winning over my mother, brother, and (I anticipate) my aunt. My father has lingering concerns which I think he wouldn't have upon sufficient reflection, so I've designed a similar plan for ensuring he makes what I perceive to be the correct, option-preserving choice. For example, I made significant targeted donations to effective charities on his behalf to offset (what he perceives as) a considerable drawback of cryonics: his inability to also be an organ donor.

A universe in which humanity wins but my dad is gone would be quite sad t... (read more)

3Raemon4y
I predict that this comment is not helpful to Turntrout.
6Raemon4y
:( Song I wrote about this once (not very polished)

If you raised children in many different cultures, "how many" different reflectively stable moralities could they acquire? (What's the "VC dimension" of human morality, without cheating by e.g. directly reprogramming brains?)

(This is probably a Wrong Question, but I still find it interesting to ask.)

Listening to Eneasz Brodski's excellent reading of Crystal Society, I noticed how curious I am about how AGI will end up working. How are we actually going to do it? What are those insights? I want to understand quite badly, which I didn't realize until experiencing this (so far) intelligently written story.

Similarly, how do we actually "align" agents, and what are good frames for thinking about that?

Here's to hoping we don't sate the former curiosity too early.

Theoretical predictions for when reward is maximized on the training distribution. I'm a fan of Laidlaw et al.'s recent Bridging RL Theory and Practice with the Effective Horizon:

Deep reinforcement learning works impressively in some environments and fails catastrophically in others. Ideally, RL theory should be able to provide an understanding of why this is, i.e. bounds predictive of practical performance. Unfortunately, current theory does not quite have this ability...

[We introduce] a new complexity measure that we call the effective horizon, which roughly corresponds to how many steps of lookahead search are needed in order to identify the next optimal action when leaf nodes are evaluated with random rollouts. Using BRIDGE, we show that the effective horizon-based bounds are more closely reflective of the empirical performance of PPO and DQN than prior sample complexity bounds across four metrics. We also show that, unlike existing bounds, the effective horizon can predict the effects of using reward shaping or a pre-trained exploration policy.

One of my favorite parts is that it helps formalize this idea of "which parts of the state space are easy to explore into." That inform... (read more)
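To gesture at what the effective horizon means operationally, here is a rough sketch (my own simplification, not the paper's BRIDGE code; the environment interface with clone()/step()/actions() is hypothetical) of "k-step exhaustive lookahead with random-rollout leaf evaluation":

```python
# k-step lookahead where leaves are scored by a single random rollout. The
# effective horizon is (roughly) the smallest k for which this picks optimal actions.
import random

def rollout_value(env, depth=50):
    """Score a leaf by playing uniformly random actions for a while."""
    total = 0.0
    for _ in range(depth):
        _, r, done = env.step(random.choice(env.actions()))
        total += r
        if done:
            break
    return total

def k_step_lookahead_action(env, k):
    """Return the first action of the best k-step action sequence."""
    def search(node, depth):
        if depth == 0:
            return rollout_value(node.clone())
        best = float("-inf")
        for a in node.actions():
            child = node.clone()
            _, r, done = child.step(a)
            best = max(best, r if done else r + search(child, depth - 1))
        return best

    scores = {}
    for a in env.actions():
        child = env.clone()
        _, r, done = child.step(a)
        scores[a] = r if done else r + search(child, k - 1)
    return max(scores, key=scores.get)
```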

The "maximize all the variables" tendency in reasoning about AGI.

Here are some lines of thought I perceive, which are probably straw to varying extents for some people and real to varying extents for other people. I give varying responses to each, but the point isn't the truth value of any given statement, but of a pattern across the statements:

  1. If an AGI has a concept around diamonds, and is motivated in some way to make diamonds, it will make diamonds which maximally activate its diamond-concept circuitry (possible example). 
    1. My response.
  2. An AI will be trained to minimal loss on the training distribution. 
    1. SGD does not reliably find minimum-loss configurations (modulo expressivity), in practice, in cases we care about. The existence of knowledge distillation is one large counterexample.
    2. Quintin: "In terms of results about model distillation, you could look at appendix G.2 of the Gopher paper. They compare training a 1.4 billion parameter model directly, versus distilling a 1.4 B model from a 7.1 B model."
  3. Predictive processing means that the goal of the human learning process is to minimize predictive loss.[1]
    1. In a process where local modifications are applied to reduce some
... (read more)

I think this type of criticism is applicable in an even wider range of fields than even you immediately imagine (though in varying degrees, and with greater or lesser obviousness or direct correspondence to the SGD case). Some examples:

  • Despite the economists, the economy doesn't try to maximize welfare, or even net dollar-equivalent wealth. It rewards firms which are able to make a profit in proportion to how much they're able to make a profit, and dis-rewards firms which aren't able to make a profit. Firms which are technically profitable, but have no local profit incentive gradient pointing towards them (factoring in the existence of rich people and lenders, neither of which are perfect expected profit maximizers) generally will not happen.

  • Individual firms also don't (only) try to maximize profit. Some parts of them may maximize profit, but most are just structures of people built from local social capital and economic capital incentive gradients.

  • Politicians don't try to (only) maximize win-probability.

  • Democracies don't try to (only) maximize voter approval.

  • Evolution doesn't try to maximize inclusive genetic fitness.

  • Memes don't try to maximize inclusive memetic

... (read more)
3TurnTrout9mo
very pithy. nice insight, thanks. 

I was talking with Abram Demski today about a promising-seeming research direction. (Following is my own recollection)

One of my (TurnTrout's) reasons for alignment optimism is that I think:

  • We can examine early-training cognition and behavior to some extent, since the system is presumably not yet superintelligent and planning against us,
    • (Although this amount of information depends on how much interpretability and agent-internals theory we do now)
  • All else equal, early-training values (decision-influences) are the most important to influence, since they steer future training.
  • It's crucial to get early-training value shards of which a substantial fraction are "human-compatible values" (whatever that means)
    • For example, if there are protect-human-shards which 
      • reliably bid against plans where people get hurt,
      • steer deliberation away from such plan stubs, and
      • these shards are "reflectively endorsed" by the overall shard economy (i.e. the decision-making isn't steering towards plans where the protect-human shards get removed)
  • If we install influential human-compatible shards early in training, and they get retained, they will help us in mid- and late-training where we can't affect the ball
... (read more)
5johnswentworth1y
One barrier for this general approach: the basic argument that something like this would work is that if one shard is aligned, and every shard has veto power over changes (similar to the setup in Why Subagents?), then things can't get much worse for humanity. We may fall well short of our universe-scale potential, but at least X-risk is out. Problem is, that argument requires basically-perfect alignment of the one shard (or possibly a set of shards which together basically-perfectly represent human values). If we try to weaken it to e.g. a bunch of shards which each imperfectly capture different aspects of human values, with different imperfections, then there's possibly changes which Goodhart all of the shards simultaneously. Indeed, I'd expect that to be a pretty strong default outcome.
4TurnTrout1y
Even on the view you advocate here (where some kind of perfection is required), "perfectly align part of the motivations" seems substantially easier than "perfectly align all of the AI's optimization so it isn't optimizing for anything you don't want." I feel significantly less confident about this, and am still working out the degree to which Goodhart seems hard, and in what contours, on my current view.

"Globally activated consequentialist reasoning is convergent as agents get smarter" is dealt an evidential blow by von Neumann:

Although von Neumann unfailingly dressed formally, he enjoyed throwing extravagant parties and driving hazardously (frequently while reading a book, and sometimes crashing into a tree or getting arrested). He once reported one of his many car accidents in this way: "I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles per hour. Suddenly one of them stepped in my path." He was a profoundly committed hedonist who liked to eat and drink heavily (it was said that he knew how to count everything except calories). -- https://www.newworldencyclopedia.org/entry/John_von_Neumann 

Good, original thinking feels present to me - as if mental resources are well-allocated.

The thought which prompted this:

Sure, if people are asked to solve a problem and say they can't after two seconds, yes - make fun of that a bit. But that two seconds covers more ground than you might think, due to System 1 precomputation.

Reacting to a bit of HPMOR here, I noticed something felt off about Harry's reply to the Fred/George-tried-for-two-seconds thing. Having a bit of experience noticing confusion, I did not think "I notice I am confused" (although this can be useful). I did not think "Eliezer probably put thought into this", or "Harry is kinda dumb in certain ways - so what if he's a bit unfair here?". Without resurfacing, or distraction, or wondering if this train of thought is more fun than just reading further, I just thought about the object-level exchange.

People need to allocate mental energy wisely; this goes far beyond focusing on important tasks. Your existing mental skillsets already optimize and auto-pilot certain mental motions for you, so you should allocate less deliberation to them. In this case, the confusion-noticing module was honed; by not worrying about how w

... (read more)
6TurnTrout4y
Expanding on this, there is an aspect of Actually Trying that is probably missing from S1 precomputation. So, maybe the two-second "attempt" is actually useless for most people because subconscious deliberation isn't hardass enough at giving its all, at making desperate and extraordinary efforts to solve the problem.

Explaining Wasserstein distance. I haven't seen the following explanation anywhere, and I think it's better than the rest I've seen.

The Wasserstein distance tells you the minimal cost to "move" one probability distribution $\mu$ into another $\nu$. It has a lot of nice properties.[1] Here's the chunk of math (don't worry if you don't follow it):

The Wasserstein 1-distance between two probability measures $\mu$ and $\nu$ is

$$W_1(\mu, \nu) \;=\; \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{X \times Y} \lVert x - y \rVert \,\mathrm{d}\gamma(x, y),$$

where $\Gamma(\mu, \nu)$ is the set of all couplings of $\mu$ and $\nu$.

What's a "coupling"? It's a joint probability distribution $\gamma$ over $X \times Y$ such that its two marginal distributions equal $\mu$ and $\nu$. However, I like to call these transport plans. Each plan specifies a way to transport a distribution $\mu$ into another distribution $\nu$:

[Figure: a transport plan reallocating $\mu$'s density into $\nu$]

(EDIT: The  line should be flipped.)

Now consider a given point $x$ in $\mu$'s support, say the one with the dotted line below it. $\mu$'s density must be "reallocated" into $\nu$'s distribution. That reallocation is specified by the conditional distribution $\gamma(\cdot \mid x)$, as shown by the vertical do... (read more)

1jsd15d
For onlookers, I strongly recommend Gabriel Peyré and Marco Cuturi's online book Computational Optimal Transport. I also think this is a case where considering discrete distributions helps build intuition.
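Building on that discrete intuition, a tiny worked example (added by me; assumes scipy is available) for the 1-D case, where $W_1$ equals the area between the two CDFs:

```python
# W1 between mu = (0.5, 0.5) on {0, 1} and nu = (0.25, 0.75) on {1, 2}.
from scipy.stats import wasserstein_distance

d = wasserstein_distance([0, 1], [1, 2],
                         u_weights=[0.5, 0.5], v_weights=[0.25, 0.75])
print(d)  # should be 1.25: e.g. move 0.25 mass 0->1, 0.25 mass 0->2, 0.5 mass 1->2
```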

Consider what update equations have to say about "training game" scenarios. In PPO, the optimization objective is proportional to the advantage given a policy $\pi$, reward function $R$, and on-policy value function $V^\pi$:

$$L(\theta) \;=\; \mathbb{E}_t\!\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_\text{old}}(a_t \mid s_t)}\, \hat{A}^{\pi}(s_t, a_t)\right], \qquad \hat{A}^{\pi}(s_t, a_t) \approx R(s_t, a_t) + \gamma V^{\pi}(s_{t+1}) - V^{\pi}(s_t).$$
Consider a mesa-optimizer acting to optimize some mesa objective. The mesa-optimizer understands that it will be updated proportional to the advantage. If the mesa-optimizer maximizes reward, this corresponds to maximizing the intensity of the gradients it receives, thus maximally updating its cognition in exact directions. 

This isn't necessarily good.

If you're trying to gradient hack and preserve the mesa-objective, you might not want to do this. This might lead to value drift, or make the network catastrophically forget some circuits which are useful to the mesa-optimizer. 

Instead, the best way to gradient hack might be to roughly minimize the absolute value of the advantage, which means achieving roughly on-policy value over time, which doesn't imply reward maximization. This is a kind of "treading water" in terms of reward. This helps decrease value drift.

I think that realistic mesa optimizers will n... (read more)
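To spell out the "treading water" point (a standard policy-gradient identity, added by me rather than taken from the truncated post), the expected update is

$$\nabla_\theta J(\theta) \;=\; \mathbb{E}_{s, a \sim \pi_\theta}\!\left[ A^{\pi_\theta}(s, a)\, \nabla_\theta \log \pi_\theta(a \mid s) \right],$$

so if the policy mostly visits state-action pairs where $A^{\pi_\theta}(s, a) \approx 0$, the expected gradient is near zero and the network (including whatever mesa-objective it encodes) barely gets updated.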