johnswentworth

Comments

The linked abstract describes how

[good generalization] holds across multiple patterns of label noise, even when erroneous labels are biased towards confusing classes.

Reading their experimental procedure and looking at Figures 4 & 5, it looks like their experiments confirm the general story of lethality #20 rather than disproving it.

The relevant particulars: when they used biased noise, they still ensured that the correct label was the most probable label. Their upper limit for biased noise made the second-most-probable label equal in probability to the correct one, and in that case the predictor's generalization accuracy plummeted from near-90% (when the correct label was only slightly more probable than the next-most-probable label) to only ~50%.
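
To make that setup concrete, here's a minimal sketch of what "biased noise where the correct label is still most probable" means. This is my own illustration, not the paper's code; the number of classes and the noise rate are invented for the example:

```python
import numpy as np

# Minimal sketch of "biased" label noise: each clean label y gets flipped to one
# fixed "confusing" class with probability eps, and kept otherwise.
# eps = 0.5 corresponds to the paper-style upper limit, where the correct label
# and the confusing class become equally probable.
rng = np.random.default_rng(0)

n_classes = 10
confusing = (np.arange(n_classes) + 1) % n_classes  # one fixed confusing class per label (made up)
eps = 0.4                                            # flip probability, below the 0.5 tie point

clean = rng.integers(0, n_classes, size=100_000)
flip = rng.random(clean.size) < eps
observed = np.where(flip, confusing[clean], clean)   # noisy training labels

# For eps < 0.5, the correct label is still the modal observed label for every class;
# at eps = 0.5 it merely ties with the confusing class.
for y in range(n_classes):
    counts = np.bincount(observed[clean == y], minlength=n_classes)
    assert counts.argmax() == y
```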

How this relates to lethality #20: part of what "regular, compactly describable, predictable errors" is saying is that there will be (predictable) cases where the label most probably assigned by a human labeller is not correct (i.e. it's not what a smart well-informed human would actually want if they had all the relevant info and reflected on it). What the results of the linked paper predict, in that case, is that the net will learn to assign the "incorrect" label - the one which human labellers do, in fact, choose more often than any other. (Though, to be clear, I don't think this experiment is especially relevant one way or the other.)

As for OpenAI's weak-to-strong results...

I had some back-and-forth about those in a private chat shortly after they came out, and the main thing I remember is that it was pretty tricky to back out the actually-relevant numbers, but it was possible. Going back to the chat log just now, this is the relevant part of my notes:

Rough estimate: on the NLP task the weak model has like 60% accuracy (fig 2).

  • In cases where the weak model is right, the strong student agrees with it in like 90% of cases (fig 8b). So, on ~6% of cases (10% * 60%), the strong student is wrong by "just being dumb".
  • In cases where the weak model is wrong, the strong student's agreement is very compute-dependent, but let's pick a middle number and call it 70% (fig 8c). So, on ~28% of cases (70% * 40%), the strong student is wrong by "overfitting to weak supervision".

So in this particular case, the strong student is wrong about 34% of the time, and 28 of those percentage points are attributable to overfitting to weak supervision.
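
For anyone who wants to check the arithmetic, here is the same estimate as a few lines of Python; the three input numbers are my eyeballed readings of the figures, not exact values from the paper:

```python
# Back-of-the-envelope decomposition of the strong student's errors, reproducing
# the estimate above. All three inputs are rough readings of the paper's figures.
weak_acc = 0.60                # weak supervisor accuracy on the NLP task (fig 2)
agree_given_weak_right = 0.90  # student agreement when the weak label is right (fig 8b)
agree_given_weak_wrong = 0.70  # student agreement when the weak label is wrong (fig 8c, mid-compute)

# Wrong by "just being dumb": weak label was right, student disagreed anyway.
wrong_dumb = weak_acc * (1 - agree_given_weak_right)        # = 0.06

# Wrong by "overfitting to weak supervision": weak label was wrong, student copied it.
wrong_overfit = (1 - weak_acc) * agree_given_weak_wrong     # = 0.28

print(wrong_dumb, wrong_overfit, wrong_dumb + wrong_overfit)  # ~0.06, 0.28, 0.34
```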

(Here "overfitting to weak supervision" is the thing where the weak supervisor is predictably wrong, and the stronger model learns to predict those errors.) So in fact what we're seeing in the weak-to-strong paper is that the strong model learning the weak supervisor's errors is already the main bottleneck to better ground-truth performance, in the regime that task and models were in.

So overall, I definitely maintain that the empirical evidence is solidly in favor of Doomimir's story here. (And, separately, I definitely maintain that abstracts in ML tend to be wildly unreliable and misleading about the actual experimental results.)

Yes, I am familiar with Levin's work.

My hope here would be that a few upstream developmental signals can trigger the matrix softening, re-formation of the chemotactic signal gradient, and whatever other unknown factors are needed, all at once.

Fleshing this out a bit more: insofar as development is synchronized in an organism, there usually has to be some high-level signal to trigger the synchronized transitions. Given the scale over which the signal needs to apply (i.e. across the whole brain in this case), it probably has to be one or a few small molecules which diffuse in the extracellular space. As I look into possibilities here, one of my main threads is to survey both general and brain-specific developmental signal molecules in human childhood, to find candidates for the relevant molecular signals.

(One major alternative model I'm currently tracking is that the brain grows to fill the cranial vault, and then stops growing. That could in principle work mechanistically via cells picking up on local physical forces, rather than a small-molecule signal. Though I don't think that's the most likely possibility, it would be convenient, since it would mean that just expanding the skull could induce basically-normal new brain growth by itself.)

Any particular readings you'd recommend?

Doomimir: I'll summarize the story you seem excited about as follows:

  • We train a predictive model on The Whole Internet, so it's really good at predicting text from that distribution.
  • The human end-users don't really want a predictive model. They want a system which can take a natural-language request, and then do what's requested. So, the humans slap a little RL (specifically RLHF) on the predictive model, to get the "request -> do what's requested" behavior.
  • The predictive model serves as a strong baseline for the RL'd system, so the RL system can "only move away from it a little" in some vague handwavy sense. (Also in the KL divergence sense - the standard objective is written out after this list - which I will admit as non-handwavy for exactly those parts of your argument which you can actually mathematically derive from KL-divergence bounds, which is currently zero of the parts of your argument.)
  • The "only move away from The Internet Distribution a little bit" part somehow makes it much less likely that the RL'd model will predict and exploit the simple predictable ways in which humans rate things. As opposed to, say, make it more likely that the RL'd model will predict and exploit the simple predictable ways in which humans rate things.

There are multiple problems in this story.

First, there's the end-users' demand for a more agenty system rather than a predictor, which is why people are doing RLHF in the first place rather than raw prompting (which would be better from a safety perspective). Given time, that same demand will drive developers to make models agentic in other ways too (think AgentGPT), or to make the RLHF'd LLMs more agentic and autonomous in their own right. That's not the current center of our discussion, but it's worth remembering that it's this underlying demand which drives developers to choose riskier methods (like RLHF) over less risky methods (like raw predictive models) in the first place.

Second, there's the vague handwavy metaphor about the RL system "only moving away from the predictive model a little bit". The thing is, we do need more than a handwavy metaphor! "Yes, we don't understand at the level of math how making that KL divergence small will actually impact anything we care about, but my intuition says it's definitely not going to kill everyone. No, I haven't been able to convince relevant experts outside of companies whose giant piles of money are contingent on releasing new AI products regularly, but that's because they're not releasing products and therefore don't have firsthand experience of how these systems behave. No, I'm not willing to subject AI products to a burden of proof before they induce a giant disaster" is a non-starter even if it turns out to be true.

Third and most centrally to the current discussion, there's still the same basic problem as earlier: to a system with priors instilled by The Internet, ["I'll give you $100 if you classify this as an apple" -> (predict apple classification)] is still a simple thing to learn. It's not like pretraining on the internet is going to make the system favor models which don't exploit the highly predictable errors made by human raters. If anything, all that pretraining will make it easier for the model to exploit raters. (And indeed, IIUC that's basically what we see in practice.)

As you say: the fact that GPT-4 can do that seems like it's because that kind of reasoning appears on the internet.

(This one's not as well-written IMO; it's mashing a few different things together.)

I'd be interested in hearing more details about those rumors of smarter models being more prone to exploit rater mistakes.

See here. I haven't dug into it much, but it does talk about the same general issues specifically in the context of RLHF'd LLMs, not just pure-RL-trained models.

(I'll get around to another Doomimir response later, just dropping that link for now.)

Zeroth point: under a Doomimir-ish view, the "modelling the human vs modelling in a similar way to the human" frame is basically right for current purposes, so no frame clash.

On to the main response...

Doomimir: This isn't just an "in the limit" argument. "I'll give you $100 if you classify this as an apple" -> (predict apple classification) is not some incredibly high-complexity thing to figure out. This isn't a jupiter-brain sort of challenge.

For instance, anything with a simplicity prior at all similar to humans' simplicity prior will obviously figure it out, as evidenced by the fact that humans can figure out hypotheses like "it's bribing the classifier" just fine. Even beyond human-like priors, any ML system which couldn't figure out something that basic would apparently be severely inferior to humans in at least one very practically important cognitive domain.

Even prior to developing a full-blown model of the human rater, models can incrementally learn to predict the systematic errors in human ratings, and we can already see that today. The classic case of the grabber hand is a go-to example:

(A net learned to hold the hand in front of the ball, so that it looks to a human observer like the ball is being grasped. Yes, this actually happened.)

... and anecdotally, I've generally heard from people who've worked with RLHF that as models scale up, they do in fact exploit rater mistakes more and more, and it gets trickier to get them to do what we actually want. This business about "The technology in front of us really does seem like it's 'reasoning with' rather than 'reasoning about'" is empirically basically false, and seems to get more false in practice as models get stronger even within the current relatively-primitive ML regime.

So no, this isn't a "complicated empirical question" (or a complicated theoretical question). The people saying "it's a complicated empirical question, we Just Can't Know" are achieving their apparent Just Not Knowing by sticking their heads in the sand; their lack of knowledge is a fact about them, not a fact about the available evidence.

(I'll flag here that I'm channeling the character of Doomimir and would not necessarily say all of these things myself, especially the harsh parts. Happy to play out another few rounds of this, if you want.)

Ever since GeneSmith's post and some discussion downstream of it, I've been actively tracking potential methods for large interventions to increase adult IQ.

One obvious approach is "just make the brain bigger" via some hormonal treatment (like growth hormone or something). The major problem that approach runs into: the skull plates fuse during development, so the cranial vault can't expand much; in an adult, the brain just doesn't have much room to grow.

BUT this evening I learned a very interesting fact: ~1/2000 infants have "craniosynostosis", a condition in which their skull plates fuse early. The main treatments involve surgery to open those plates back up and/or remodel the skull. Which means surgeons already have a surprisingly huge amount of experience making the cranial vault larger after the plates have fused (including sometimes in adults, though this type of surgery is most common in infants AFAICT)

... which makes me think that cranial vault remodelling followed by a course of hormones for growth (ideally targeting brain growth specifically) is actually very doable with current technology.

Were these supposed to embed as videos? I just see stills, and don't know where they came from.
