my next guess would be that they ran the PPO for many more episodes than the 31 shown, and trained the GLA on all that
This was my read too. Unfortunately we don't have access to the source code, but this is the assumption I made after seeing the graph on the left in Figure 3. Around 40 episodes in, their PPO agent is still struggling, but their Gap 8 GLA is near optimal. But that Gap 8 GLA was necessarily trained on data from a PPO agent that ran for 8 times longer.
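To make the back-of-the-envelope arithmetic behind that reading explicit (this is my assumption about what "Gap 8" means, i.e. that the GLA's training data is the PPO learning history subsampled every 8th episode; we can't confirm it without the source code):

```python
# Assumption (not confirmed by the paper's code): a "Gap 8" GLA is distilled
# from a PPO learning history subsampled every 8th episode, so GLA episode k
# corresponds to PPO episode 8 * k.

def ppo_episodes_behind_gla(gla_episode: int, gap: int = 8) -> int:
    """Underlying PPO episodes implied by one GLA episode, under the
    gap-subsampling assumption."""
    return gla_episode * gap

# The Figure 3 comparison: the GLA looks near optimal around episode 40, but
# under this reading it reflects ~320 PPO episodes' worth of learning
# progress, while the PPO baseline has only seen 40.
print(ppo_episodes_behind_gla(40))  # -> 320
```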
I mentioned in a footnote that the “algorithmic distillation” paper (Laskin et al. 2022) was misleading, as discussed here. Your links are in the same genre.
As I understand it, your critique of that line of in-context RL research was that the meta-training and meta-testing tasks were too similar and too simple. I don't think the former is true for any of the papers I linked (the latter is debatable). GLAs train on a single task, but achieve generalization by very heavily augmenting data from that task, and can be applied to new tasks that are as different as...
Here's a possible counterexample: Towards General-Purpose In-Context Learning Agents.
They train a meta-RL agent using imitation learning on another RL agent's learning history. The trained meta-RL agent isn't limited to minor variations of the meta-training task (as is usually the case), but can learn completely new (although fairly basic) continuous control tasks, each very different from the one it was trained on, using only its activations at inference time (i.e., in-context, with no weight updates).
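For anyone who wants the shape of that recipe, here's a minimal toy sketch of the general idea (my own reconstruction, not the paper's code; the dimensions, interface, and dummy data are all placeholders):

```python
# Sketch: imitation-learn a causal sequence model on a source RL agent's
# *learning history*, so that at test time it can improve within its context
# window without any weight updates.

import torch
import torch.nn as nn

class HistoryImitator(nn.Module):
    """Causal sequence model over (obs, action, reward) tokens."""
    def __init__(self, obs_dim=8, act_dim=2, d_model=64):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, history):  # history: (batch, T, obs_dim + act_dim + 1)
        T = history.shape[1]
        causal_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.backbone(self.embed(history), mask=causal_mask)
        return self.head(h)      # predicted action at every step of the history

# Meta-training: behaviour cloning on the whole learning history of the source
# agent, so the model has to infer "how to improve", not just copy one fixed
# policy. Random tensors stand in for real (obs, action, reward) histories.
model = HistoryImitator()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
history = torch.randn(16, 128, 8 + 2 + 1)
target_actions = torch.randn(16, 128, 2)   # actions taken later in the history
opt.zero_grad()
loss = nn.functional.mse_loss(model(history), target_actions)
loss.backward()
opt.step()

# Meta-test: weights are frozen; any further "learning" on a new task has to
# happen in the activations, by conditioning on the growing context.
```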
The author's prior work in SSL (Meta-Learning Transformers to Improve In-Context Generalization) is also o...
there was a result (from Pieter Abbeel's lab?) a couple of years ago that showed that pretraining a model on language would lead to improved sample efficiency in some nominally-totally-unrelated RL task
Pretrained Transformers as Universal Computation Engines
From the abstract:
We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning – in particular [...] a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction
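Roughly, the recipe is to freeze the language-pretrained core (attention and feedforward blocks) and finetune only thin input/output layers, plus layer norms and position embeddings. A toy sketch of that idea, using GPT-2 from HuggingFace for concreteness (my own reconstruction, not the paper's code, so details may differ):

```python
# Sketch of a "frozen pretrained transformer": freeze the attention/MLP
# weights of a language-pretrained model, train only a small task-specific
# input projection and output head (layer norms and position embeddings are
# left trainable, per the paper's described recipe).

import torch.nn as nn
from transformers import GPT2Model

class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")
        for name, p in self.gpt2.named_parameters():
            # Trainable: layer norms ("ln") and position embeddings ("wpe").
            # Frozen: all attention and feedforward weights.
            p.requires_grad = ("ln" in name) or ("wpe" in name)
        d = self.gpt2.config.n_embd
        self.input_proj = nn.Linear(in_dim, d)      # new task-specific input layer
        self.output_head = nn.Linear(d, n_classes)  # new task-specific readout

    def forward(self, x):  # x: (batch, seq_len, in_dim)
        h = self.gpt2(inputs_embeds=self.input_proj(x)).last_hidden_state
        return self.output_head(h[:, -1])           # classify from the final token

# Each downstream modality (bit-string tasks, image patches, protein
# sequences, ...) would get its own input_proj / output_head while the frozen
# language-pretrained core is shared.
model = FrozenPretrainedTransformer(in_dim=16, n_classes=2)
```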
Given your perspective, you may enjoy Lies Told To Children: Pinocchio, which I found posted here.
Personally I think I'd be fine with the bargain, but having read that alternative continuation, I think I better understand how you feel.
Oops, strangely enough I just wasn't thinking about that possibility. It's obvious now, but I assumed that SL vs RL would be a minor consideration, despite the many words you've already written on reward.
Hey Steve, I might be wrong here, but I don't think Jon's question was specifically about what architectures you'd be talking about. I think he was asking more specifically about how to classify something as Brain-like-AGI for the purposes of your upcoming series.
The way I read your answer, it sounds like the safety considerations you'll be discussing depend more on whether the NTM is trained via SL or RL than on whether it neatly contains all your (soon-to-be-elucidated) Brain-like-AGI properties.
Though that might actually have been what you meant, so I probably should have asked for clarification before I presumptively answered Jon for you.
If I'm reading your question right, I think the answer is:
I’m going to make a bunch of claims about the algorithms underlying human intelligence, and then talk about safely using algorithms with those properties. If our future AGI algorithms have those properties, then this series will be useful, and I would be inclined to call such an algorithm "brain-like".
I.e., the distinction depends on whether or not a given architecture has the properties Steve will mention later, which, given Steve's work, are probably the key properties of "A learned population of Compositional Generative Models + A largely hardcoded Steering Subsystem".
Regarding "posts making a bearish case" against GPT-N, there's Steve Byrnes', Can you get AGI from a transformer.
I was just in the middle of writing a draft revisiting some of his arguments, but in the meantime one claim that might be of particular interest to you is: "...[GPT-N type models] cannot take you more than a couple steps of inferential distance away from the span of concepts frequently used by humans in the training data".
You're right, I misread the graph.
I also concede that this claim is probably right for Figure 3.
I still don't think this is true for Figure 5, but I'm less confident no...