Comments

gwern

Apropos of very low-latency LLMs and revisiting this topic a little: what does this imply about DRL robotics, rather than animals? Will DRL NNs have to have brains as big as humans in order to run superhuman humanoid robots?

One possible implication is that Portia-like NNs are possible for robotics in general. Robotics may be quite 'easy' in that sense.

It is striking that when we look at NN parameter/FLOPS-counts, we generally do not see 'large' robotics, vision, or sound models, but LLMs; the largest pure-vision models like PaLI-X are <100b parameters, and the largest robotics models are usually <10b, with Gato 1's ~1b having been, if anything, unusually large because of all the other stuff it was doing. (I'm very behind on the robotics literature, so maybe there are now much larger 100b-parameter models as they move into the 'foundation model' multi-modal/task scaling paradigm, but I'd bet that there still are none >1,000b.) Even sound/image/video generative models, which would be expected to be much larger than necessary for robotics tasks, are often still small enough to run on a single consumer GPU. And these are usually trained with scaling laws now, so these are compute-optimal sizes and it is not just that they are wildly under-parameterized (the way almost all models were pre-2020).

So, if robotics is intrinsically easy, but animal brains do not show this because of their latency requirements, which forces them into misleadingly expensive brains, the implication is that we can do robotics by lifting the limitations of biological brains, like being forced to learn in realtime, in the real world, one animal at a time, without any sharing.

We should be able to train deep but small NNs in silico: turning all animal problems into Portia problems, if you will, pausing the simulation to let the NNs think & act for as long as necessary to plan the right action, only then letting time flow to see what happens, and resetting it to try again.

We remove all burdens of wallclock time or caloric consumption or childhood development in order to train teacher models which are powerful general robotic controllers, and only then use these teacher models to optimize low-latency controllers. The wider low-latency student models will be easier to train when they simply must imitate the teacher in a supervised-learning setting instead of doing RL from scratch, and so their final sizes should come out a lot smaller. (If nothing else, the student models can't 'die' if they make a mistake like breaking a latency constraint, so this learning setting is far easier than an animal's task.)
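As a minimal sketch of that teacher-student setup (every name here — `BigTeacherPolicy`, `SmallStudentPolicy`, `PausedSimEnv` — is hypothetical, just to make the structure concrete): the teacher deliberates for as long as it likes with the simulation frozen, and the small low-latency student is trained by behavior cloning on the teacher's actions rather than by RL from scratch.

```python
import torch
import torch.nn.functional as F

# Hypothetical components, for illustration only.
teacher = BigTeacherPolicy()    # deep & slow: trained by RL in the paused simulator
student = SmallStudentPolicy()  # wide & shallow: must meet a real-time latency budget
env = PausedSimEnv()            # simulation only advances when .step() is called

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for episode in range(10_000):
    obs = env.reset()
    done = False
    while not done:
        # The teacher may deliberate arbitrarily long per step: wallclock time
        # is free because the world is frozen until we call env.step().
        with torch.no_grad():
            teacher_action = teacher.plan(obs, thinking_budget=None)

        # Behavior cloning: the student imitates the teacher's action directly,
        # a supervised target instead of a sparse RL reward.
        student_action = student(obs)
        loss = F.mse_loss(student_action, teacher_action)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        obs, done = env.step(teacher_action)
```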


On a related note, it is also striking how far down in size LLMs can be pushed. You can get good reasoning out of tiny billion-parameter LLMs trained hard enough on high-quality-enough data, and the 'densifying experience curve' is steady and rapid (halving period of ~4 months), so we can expect that at some point we may have superhuman reasoning LLMs in the billion or sub-billion parameter range... which are just very, very ignorant, perhaps even more ignorant than you or me, of all the real-world knowledge & text that a proper LLM has. We can't train those from scratch, but we can train trillion-parameter LLMs to suck in all the text in the world, and then exhale training data for small fast cheap models.
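A sketch of that 'exhale' step, assuming a Hugging Face `transformers`-style API and a hypothetical teacher checkpoint name: the giant model turns prompts into reasoning-heavy text, which then becomes ordinary supervised fine-tuning data for a small, fast, ignorant student.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint name, for illustration only.
big = AutoModelForCausalLM.from_pretrained("some-trillion-param-teacher")
tok = AutoTokenizer.from_pretrained("some-trillion-param-teacher")

prompts = [
    "Prove that the sum of two odd integers is even.",
    "A train leaves at 3pm travelling 60mph...",
]

synthetic_corpus = []
for p in prompts:
    inputs = tok(p, return_tensors="pt")
    out = big.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
    synthetic_corpus.append(tok.decode(out[0], skip_special_tokens=True))

# `synthetic_corpus` then serves as ordinary supervised fine-tuning data for a
# ~1b-parameter student model: all the reasoning, little of the trivia.
```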

So it seems that Moravec's Paradox remains undefeated: as difficult as we find the abstract intellectual capabilities like doing math or reasoning, so difficult that we struggle even to write them down to train LLMs on, so difficult to train on that we need giant gigawatt datacenters just to get started, they are not intrinsically difficult, and in the long run they do not require big expensive NNs.

gwern

But does that necessarily matter? Many of those models can't use tools; and since much of the point of the end-to-end RL training of Deep Research is to teach tool use, showing DR results without tool use would be either irrelevant or misleading (eg. it might do worse than the original o3 model it is trained from, when deprived of the tools it is supposed to use).

gwern

Who right now is standing on the sidelines with a killer AI app that could rip up the market if only tokens were a bit cheaper?

OpenAI's Deep Research is looking like something that could be big and they were standing on the sidelines in part because the tokens weren't cheap.

gwern

Most people do not read many books or spend time in spaces where SAT vocab words would be used at all. If that were the sole determinant, you would then expect any vocab test to fail catastrophically and not predict/discriminate in most of the population (which would have downstream consequences like making SATs weirdly unreliable outside the elite colleges, or giving them much less predictive validity for low-performing demographics, the former of which I am unaware of being true and the latter of which I know is false). This would have the further surprising consequence that if a vocab test is, say, r = 0.5 with g while failing catastrophically on most of the population, it would have to be essentially perfectly correlated (r = 1) in the remainder to even be arithmetically possible, which just punts the question: how did two book-readers come away from that book with non-overlapping vocabs...?
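To spell out that arithmetic with a toy simulation (a sketch assuming equal means and variances across the two subpopulations): if the vocab score carried no signal at all in half the population, an overall r = 0.5 already forces the correlation in the other half up to ~1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

g = rng.standard_normal(n)            # latent general ability
vocab = np.empty(n)

# Half the population: vocab score is pure noise, r = 0 with g.
vocab[: n // 2] = rng.standard_normal(n // 2)
# Other half: vocab tracks g essentially perfectly, r = 1.
vocab[n // 2:] = g[n // 2:]

# Pooled correlation is only ~0.5 despite perfect correlation in one half.
print(np.corrcoef(g, vocab)[0, 1])    # ≈ 0.5
```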

I have good vocabulary, e.g. 800 on GRE verbal, but feel like I have a pretty bad memory for words and terms that I’ve only seen a few times.

How could you possibly know something like that?

gwern

One benefit of his 'no-nut January' is that by cutting out peanuts entirely, he's also avoiding problems from oxalates. I would expect powdered peanut butter to be just as dangerous in that regard.

gwern

And yet, despite the SAT being so heavily studied for, it remains a pretty good IQ test overall, and the SAT-V or GRE verbal sections OK ones. I think that's because there are so many words (500k+ in English, and the GRE-V has no compunction about mining the obscurest just to f--- with you), and you would have to study so many in order to meaningfully inflate your scores (because, after all, while there may be only a hundred 'vocab words' on any given SAT test, you don't know which hundred). Let's see... Here's an interesting-looking reference: "How Many Words Do We Know? Practical Estimates of Vocabulary Size Dependent on Word Definition, the Degree of Language Input and the Participant's Age", Brysbaert et al 2016:

an average 20-year-old native speaker of American English knows 42,000 lemmas and 4,200 non-transparent multiword expressions, derived from 11,100 word families. The numbers range from 27,000 lemmas for the lowest 5% to 52,000 for the highest 5%. Between the ages of 20 and 60, the average person learns 6,000 extra lemmas or about one new lemma every 2 days.

So, if you wanted to boost your score from the mean to the 95th percentile, that seems to imply that you'd have to memorize 10,000 'lemmas' ("Uninflected word from which all inflected words are derived"). That's a big number, and then you have to ask how much work that would be.

If you did this in the optimal way with spaced repetition (ignoring the time it takes to figure out the 10k you want to memorize in the first place, the time to construct the flashcards, or any penalty from needing to inefficiently cram them for an upcoming SAT instead of life-long efficient review), which of course few students do, as spaced repetition systems remain a niche outside of medical school & foreign-language study, the SuperMemo rough estimate is a long-term investment of 5 minutes per flashcard, and we'll assume 1 lemma = 1 flashcard. That means you have to invest 10,000 × 5 = 50,000 minutes, or ~833 hours, of studying! Meanwhile, hardly anyone does more than 8 hours of studying for the SAT as a whole (among the kids I knew at a prep high school, many didn't even do a weekend course, which would entail about 8 hours of classwork & study). 833 hours for vocab alone would be insane.
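The arithmetic behind those numbers, using the Brysbaert et al 2016 figures and the SuperMemo rule of thumb of ~5 minutes of lifetime review per flashcard (assuming 1 lemma = 1 flashcard):

```python
lemmas_needed    = 52_000 - 42_000   # 95th percentile minus the mean (Brysbaert et al 2016)
minutes_per_card = 5                 # SuperMemo rough lifetime cost per flashcard
total_minutes    = lemmas_needed * minutes_per_card
print(lemmas_needed, total_minutes, round(total_minutes / 60))   # 10000 50000 833 (hours)
```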

That's why people generally learn vocab from passive exposure rather than targeted study. Because no one, not even the most teacher's-pet student, wants to do that. And so vocab measures keep working.

gwern

then I think it is also very questionable whether the AI that wins wars is the most "advanced" AI. / People like Dario whose bread-and-butter is model performance invariably over-index on model performance, especially on benchmarks. But practical value comes from things besides the model; what tasks you use it for and how effective you are at deploying it.

Dario is about the last AI CEO you should be making this criticism of. Claude has been notable for a while as the model which somehow winds up being the most useful and having the best 'vibes', even when the benchmarks indicate it's #2 or #3; meanwhile, it is the Chinese models which historically regress the most from their benchmarks when applied (and DeepSeek models, while not as bad as the rest, still do this, and r1 is already looking shakier as people try out held-out problems or benchmarks).

gwern

Only if you ignore that yesterday was also when the Trump GPU tariffs would have been leaking and, per event-study logic, would be expected to be changing prices too.

gwern

It's not RL, but what is RL any more? It's becoming blurry. They don't reward or punish it for anything in the thought token. So it learns thoughts that are helpful in outputting the correct answer.

That's definitely RL (and what I was explaining was simply the obvious basic approach anyone in DRL would think of in this context, and so of course there is research trying things like it). It's being rewarded for a non-differentiable global loss where the correct alternative or answer or label is not provided (not even information of the existence of a better decision), so standard supervised learning is impossible and exploration is required. Conceptually, this is little different from, say, training a humanoid robot NN to reach a distant point in fewer actions: it can be a hard exploration problem (most sequences of joint torques or actions simply result in a robot having a seizure while lying on the ground, going nowhere), where you want to eventually reach the minimal sequence (to minimize energy / wear-and-tear / time). You start by solving the problem in any way possible, rewarding solely the final success, and then reward-shape into a desirable answer, which in effect breaks up the hard original problem into two more feasible problems in a curriculum: 'reach the target ever', followed by 'improve a target-reaching sequence of actions to be shorter'.
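A minimal sketch of that two-stage reward structure (the function names are hypothetical, not from any particular DRL codebase): stage one pays off only for terminal success, and stage two, applied once successes are common, additionally rewards shorter action sequences.

```python
def reward_stage1(reached_target: bool) -> float:
    # Hard-exploration phase: reward only the final success; there is no
    # signal distinguishing "better" failures from "worse" ones.
    return 1.0 if reached_target else 0.0

def reward_stage2(reached_target: bool, steps_taken: int, max_steps: int) -> float:
    # Reward-shaping phase: still require success, but prefer shorter
    # sequences (less energy / wear-and-tear / time), splitting the hard
    # original problem into a two-stage curriculum.
    if not reached_target:
        return 0.0
    return 1.0 + (max_steps - steps_taken) / max_steps
```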

gwern

While we're at it, one example I learned afterwards was that the 'caribou randomization' story is probably bogus (excerpts):

We will show that hunters do not randomize their behavior, that caribou populations do not fluctuate according to human predation, and that scapulimancy apparently is not selected because it is ecologically advantageous. We shall also show that there is no cross-cultural evidence of divinatory random devices producing randomized subsistence behavior, but rather that people manipulate divination with the explicit or implicit intervention of personal choice.

What is particularly interesting to me is that the apparent beautiful match of this traditional hunting practice with contemporary game theory may be 'too good to be true' because it was actually the opposite: I suspect that the story was made up to launder (secret) game-theoretic work from WWII into academic writing; the original author's career & funder are exactly where that sort of submarine-warfare operations-research idea would come from... (There were many cases post-WWII of civilians carefully laundering war or classified work into publishable form, which means that any history-of-ideas has to be cautious about taking at face value anything published 1940–1960 which looks even a little bit like cryptography, chemistry, physics, statistics, computer science, game theory, or operations research.)
