
Related to "We don’t trade with ants": we don't trade with AI.

The original post examined reasons why a smarter-than-human AI might (or might not) trade with us, using an analogy between humans and ants.

But in the analogy of a human-ant (non-)trading relationship, current AI systems actually seem more like the ants (or other animals).

People trade with OpenAI for access to ChatGPT, but there's no way to pay a GPT itself to get it to do something or perform better as a condition of payment, at least not in a way that the model itself actually understands and enforces. (What would ChatGPT even trade for, if it were capable of trading?)

Note: an AutoGPT-style agent that can negotiate or pay for stuff on behalf of its creators isn't really what I'm talking about here, even if it works. Unless the AI takes a cut or charges a fee that accrues to the AI itself, it is negotiating on behalf of its creators as a proxy, not trading for itself in its own right.

A sufficiently capable AutoGPT might start trading for itself spontaneously as an instrumental subtask, which would count, but I don't expect current AutoGPTs to actually succeed at that, or even really come close, without a lot of human help.

Lack of sufficient object permanence, situational awareness, coherence, etc. seems like a pretty strong barrier to meaningfully owning and trading stuff in a real way.

I think this observation is helpful to keep in mind when people talk about whether current AI qualifies as "AGI", about the applicability of prosaic alignment to future AI systems, or about whether we'll encounter various agent foundations problems when dealing with more capable systems in the future.

--

Using shortform to register a public prediction about the trajectory of AI capabilities in the near future: the next big breakthroughs, and the most capable systems within the next few years, will look more like generalizations of MuZero and Dreamer, and less like larger / better-trained / more efficient large language models.

Specifically, SoTA AI systems (in terms of generality and problem-solving ability) will involve things like tree search and / or networks which are explicitly designed and trained to model the world, as opposed to predicting text or generating images.

These systems may contain LLMs or diffusion models as components, arranged in particular ways to work together. This arranging may be done by humans or AI systems, but it will not be performed "inside" a current-day / near-future GPT-based LLM, nor via direct execution of the text output of such LLMs (e.g. by executing code the LLM outputs, or having the instructions for arrangement otherwise directly encoded in a single LLM's text output). There will recognizably be something like search or world modeling that happens outside or on top of a language model.
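To make the shape of that prediction concrete, here's a minimal, purely illustrative Python sketch of what "search and world modeling outside or on top of a language model" could look like. Everything in it (`WorldModel`, `propose_actions`, `value_estimate`) is a hypothetical stand-in, not any real system's API or my claim about how such a system would actually be built:

```python
# Illustrative sketch only: an outer planning loop that uses an LLM just to
# propose candidate actions, while the search and world modeling happen
# outside the language model. All components below are placeholder stubs.

import random


class WorldModel:
    """Stand-in for a network trained to predict future states (MuZero/Dreamer-style)."""

    def predict(self, state: str, action: str) -> str:
        # A real system would roll out a learned dynamics model here.
        return f"{state} -> {action}"


def propose_actions(state: str, k: int = 4) -> list[str]:
    """Stand-in for an LLM prompted to suggest candidate actions for the current state."""
    return [f"candidate_action_{i}" for i in range(k)]


def value_estimate(state: str) -> float:
    """Stand-in for a learned value head scoring a predicted future state."""
    return random.random()


def plan(initial_state: str, world_model: WorldModel, depth: int = 3) -> list[str]:
    """Greedy depth-limited lookahead: the planning loop lives outside the LLM."""
    state, chosen = initial_state, []
    for _ in range(depth):
        candidates = propose_actions(state)
        # Score each candidate by simulating it with the world model and value head,
        # rather than by directly executing text the LLM emitted.
        scored = [(value_estimate(world_model.predict(state, a)), a) for a in candidates]
        _, best_action = max(scored)
        chosen.append(best_action)
        state = world_model.predict(state, best_action)
    return chosen


if __name__ == "__main__":
    print(plan("initial observation", WorldModel()))
```

The point is just the shape: in this kind of architecture, the outer loop, not the language model, is what does the searching and the world modeling.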

--

The reason I'm making this prediction: I was listening to Paul Christiano's appearance on the Bankless podcast from a few weeks ago.

Around the 28:00 mark, the hosts ask Paul whether we should be concerned about AI developments coming from vectors other than LLM-like systems, broadly construed.

Paul's own answer is good and worth listening to on its own (up to the 33-minute mark), but I think he leaves out (or at least doesn't talk about in this part of the podcast) the most direct answer to the question, which is that, yes, there are other avenues of AI development that don't involve larger networks, more training data, and more generalized prediction and generation abilities.

I have no special / non-public knowledge about what is likely to be promising here (and wouldn't necessarily speculate if I did), but I get the sense that the zeitgeist among some people (not necessarily Paul himself) in alignment and x-risk focused communities is that model-based RL systems and relatively complicated architectures like MuZero have recently been left somewhat in the dust by advances in LLMs. I think capabilities researchers absolutely do not see things this way, and they will not overlook these methods as avenues for further advancing capabilities. Alignment and x-risk focused researchers should be aware of this avenue, if they want to have accurate models of what the near future plausibly looks like.