  1. What plateau?  Why pause now (vs. say 10 years ago)?  Why not wait until after the singularity and impose a "long reflection," when we will be in an exponentially better place to consider such questions?
  2. Singularity 5-10 years from now vs. 15-20 years from now determines whether or not some people I personally know and care about will be alive.
  3. Every second we delay the singularity leads to a "cosmic waste" as millions more galaxies move permanently behind the event horizon defined by the expanding universe.
  4. Slower is not prima facie safer.  To the contrary, the primary mechanism for slowing down AGI is "concentrate power in the hands of a small number of decision makers," which in my current best guess increases risk.
  5. There is no bright line for how much slower we should go.  If we accept without evidence that we should slow down AGI by 10 years, why not 50?  Why not 5000?

Sam Atis—a superforecaster—had a piece arguing against The Case Against Education.


If it's this piece, I would be interested to know why you found it convincing.  He doesn't address (or even seem to have read) any of Bryan's arguments. His argument basically boils down to "but so many people who work for universities think it's good".

then that's just irrelevant. You don't need to evaluate millions of positions to backtrack (unless you think humans don't backtrack) or play chess. 


Humans are not transformers. The "context window" for a human is literally their entire life.

Setting up the architecture that would allow a pretrained LLM to trial and error whatever you want is relatively trivial.


I agree.  Or at least, I don't see any reason why not.

My point was not that "a relatively simple architecture that contains a Transformer as the core" cannot solve problems via trial and error (in fact I think it's likely such an architecture exists).  My point was that transformers alone cannot do so.

You can call it a "gut claim" if that makes you feel better.  But the actual reason is that I did some very simple math (about the required window size, given the quadratic scaling of transformer attention) and concluded that, practically speaking, it was impossible.

Also, importantly, we don't know what that "relatively simple" architecture looks like.  If you look at the various efforts to "extend" transformers to general learning machines, there are a bunch of different approaches: AlphaGeometry, diffusion transformers, BabyAGI, Voyager, Dreamer, chain-of-thought, RAG, continuous fine-tuning, V-JEPA.  Practically speaking, we have no idea which of these techniques is the "correct" one (if any of them are).

In my opinion saying "Transformers are AGI" is a bit like saying "Deep learning is AGI".  While it is extremely possible that an architecture that heavily relies on Transformers and is AGI exists, we don't actually know what that architecture is.

Personally, my bet is either on a sort of generalized AlphaGeometry approach (where the transformer generates hypotheses and then GOFAI is used to evaluate them) or diffusion transformers (where we iteratively de-noise a solution to a problem).  But I wouldn't be at all surprised if a few years from now it is universally agreed that some key insight we're currently missing marks the dividing line between Transformers and AGI.
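
To make that division of labor concrete, here's a minimal sketch of such a generate-and-verify loop (the `llm_propose` and `symbolic_check` functions are hypothetical stand-ins, not AlphaGeometry's actual API):

```python
# Minimal sketch of a generate-and-verify loop.  The transformer only
# proposes candidate hypotheses; a symbolic (GOFAI) engine does the actual
# verification.  `llm_propose` and `symbolic_check` are hypothetical
# stand-ins, not real APIs.

def generate_and_verify(problem, llm_propose, symbolic_check, max_attempts=100):
    rejected = []  # hypotheses the symbolic engine has already ruled out
    for _ in range(max_attempts):
        hypothesis = llm_propose(problem, rejected)   # transformer suggests
        if symbolic_check(problem, hypothesis):       # GOFAI verifies
            return hypothesis
        rejected.append(hypothesis)
    return None  # no verified hypothesis within the attempt budget
```

Note that the trial and error lives entirely in the outer loop and the verifier; the transformer itself is still just producing guesses.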

Ok? That's how you teach anybody anything. 


Have you never figured out something by yourself?  The way I learned to do Sudoku was: I was given a book of Sudoku puzzles and told "have fun".

you said it would be impossible to train a chess playing model this century.

I didn't say it was impossible to train an LLM to play chess. I said it was impossible for an LLM to teach itself to play a game of similar difficulty to chess if that game is not in its training data.

These are two wildly different things.

Obviously LLMs can learn things that are in their training data.  That's what they do.  Obviously if you give LLMs detailed step-by-step instructions for a procedure that is small enough to fit in its attention window, LLMs can follow that procedure.  Again, that is what LLMs do.

What they do not do is teach themselves things that aren't in their training data via trial-and-error.  Which is the primary way humans learn things.

Sure.  4000 words (~8000 tokens) to play a 9-square, 9-turn game with the entire strategy written out by a human.  Now extrapolate that to chess, Go, or any serious game.

And this doesn't address my actual point at all, which is that Transformers cannot teach themselves to play a game.

Absolutely.  I don't think it's impossible to build such a system.  In fact, I think a transformer is probably about 90% of the way there.  You'd need to add trial and error, some kind of long-term memory/fine-tuning, and a handful of default heuristics.  Scale will help too, but no amount of scale alone will get us there.
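
A minimal sketch of what those missing pieces might look like wrapped around a transformer (the `llm_suggest`, `evaluate`, and `default_heuristic` callables are hypothetical stand-ins):

```python
# Sketch of the missing pieces listed above: an outer trial-and-error loop,
# a long-term memory that persists across attempts, and a default heuristic
# fallback.  `llm_suggest`, `evaluate`, and `default_heuristic` are
# hypothetical stand-ins.

def practice(task, llm_suggest, evaluate, default_heuristic, episodes=1000):
    memory = []  # long-term record of (attempt, outcome) pairs across episodes
    best = None
    for _ in range(episodes):
        # The model sees its own past attempts and their outcomes.
        attempt = llm_suggest(task, memory) or default_heuristic(task)
        outcome = evaluate(task, attempt)          # trial ...
        memory.append((attempt, outcome))          # ... and error, remembered
        if best is None or outcome > best[1]:
            best = (attempt, outcome)
    return best
```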

It certainly wouldn't generalize to e.g. Hidouku.

In the technical sense that you can implement arbitrary programs by prompting an LLM (they are Turing complete), sure.

In a practical sense, no.

GPT-4 can't even play tic-tac-toe.  Manifold spent a year trying to get GPT-4 to implement (much less discover) the algorithm for Sudoku, and failed.

Now imagine trying to implement a serious backtracking algorithm.  Stockfish checks millions of positions per turn of play.  The attention window for your "backtracking transformer" is going to have to be at least {chess board state size}*{number of positions evaluated}.

And because of quadratic attention, training it is going to take on the order of {number of parameters}*({chess board state size}*{number of positions evaluated})^2.

Even with very generous assumptions for {number of parameters} and {chess board state size}, there's simply no way we could train such a model this century (and that's assuming Moore's law somehow continues that long).
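
To put very rough numbers on that expression (every figure below is an illustrative assumption I'm plugging in, not a measurement):

```python
# Back-of-envelope sketch of the scaling estimate above.
# Every number here is an illustrative assumption, not a measurement.

params = 1e11               # assumed parameter count (deliberately modest)
board_state_tokens = 1e2    # assumed tokens to encode one chess position
positions_per_move = 1e7    # assumed positions a strong engine searches per move

context_length = board_state_tokens * positions_per_move   # ~1e9 tokens
training_cost = params * context_length ** 2                # expression from above

# For scale: frontier training runs are estimated at very roughly 1e25-1e26 FLOPs.
frontier_run = 1e26
print(f"required context: ~{context_length:.0e} tokens")
print(f"estimated training cost: ~{training_cost:.0e}")
print(f"ratio to a frontier-scale run: ~{training_cost / frontier_run:.0e}x")
```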


Obvious bait is obvious bait, but here goes.

Transformers are not AGI because they will never be able to "figure something out" the way humans can.

If a human is given the rules for Sudoku, they first try filling in the squares randomly.  After a while, they notice that certain things work and certain things don't.  They begin to define heuristics for things that work (for example, if all but one number already appears in the same row or column as a square, the remaining number goes in that square).  Eventually they work out a complete algorithm for solving Sudoku.
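
For concreteness, the heuristic in that example ends up looking something like this sketch (just the row/column rule from the example; a real solver would also check the 3x3 block):

```python
# Sketch of the single heuristic from the example above: if every digit but
# one already appears in a cell's row or column, the remaining digit must go
# in that cell.  (Deliberately ignores the 3x3 block to mirror the example.)

def fill_forced_cells(grid):
    """grid: 9x9 list of lists with 0 for empty cells; fills forced cells in place."""
    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] != 0:
                    continue
                seen = set(grid[r]) | {grid[i][c] for i in range(9)}
                candidates = set(range(1, 10)) - seen
                if len(candidates) == 1:      # only one digit left -> forced
                    grid[r][c] = candidates.pop()
                    changed = True
    return grid
```

The interesting part isn't the heuristic itself; it's that the human invented it through trial and error, which is exactly the step I'm claiming a bare transformer doesn't do.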

A transformer will never do this (pretending Sudoku wasn't in its training data).  Because they are next-token predictors, they are fundamentally incapable of reasoning about things not in their training set.  They are incapable of "noticing when they made a mistake" and then backtracking the way a human would.

Now it's entirely possible that a very small wrapper around a Transformer could solve Sudoku.  You could have the transformer suggest moves and then add a reasoning/planning layer around it to handle the backtracking.  This is effectively what AlphaGeometry does.
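
A minimal sketch of what such a wrapper could look like, where `suggest_moves`, `is_valid`, `is_complete`, and `apply_move` are hypothetical stand-ins supplied by the caller:

```python
# Sketch of the "small wrapper" idea: the transformer only suggests candidate
# moves; the surrounding search layer checks them and backtracks on dead ends.
# All four callables are hypothetical stand-ins, not real APIs.

def backtracking_solve(state, suggest_moves, is_valid, is_complete, apply_move):
    if is_complete(state):
        return state
    for move in suggest_moves(state):          # transformer proposes moves
        if not is_valid(state, move):          # wrapper rejects illegal moves
            continue
        result = backtracking_solve(apply_move(state, move), suggest_moves,
                                    is_valid, is_complete, apply_move)
        if result is not None:
            return result
    return None                                # dead end: backtrack
```

Again, the backtracking lives in the wrapper, not in the transformer.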

But a Transformer BY ITSELF will never be AGI.
