Daniel Kokotajlo

Philosophy PhD student, worked at AI Impacts, then Center on Long-Term Risk, now OpenAI Futures/Governance team. Views are my own & do not represent those of my employer. I subscribe to Crocker's Rules and am especially interested to hear unsolicited constructive criticism. http://sl4.org/crocker.html

Two of my favorite memes:

[image] (by Rob Wiblin)

My EA Journey, depicted on the whiteboard at CLR:

[image]

Sequences

Agency: What it is and why it matters
AI Timelines
Takeoff and Takeover in the Past and Future

Wiki Contributions

Comments

OK, thanks.

Your answer to my first question isn't really an answer -- "they will, if sufficiently improved, be quite human--they will behave in a quite human way." What counts as "quite human"? Also, are we just talking about their external behavior now? I thought we were talking about their internal cognition.

You agree about the cluster analysis thing though -- so maybe that's a way to be more precise about this. The claim you are hoping to see argued for is: "If we magically had access to the cognition of all current humans and LLMs, with mechinterp tools etc. to automatically understand and categorize it, and we did a cluster analysis of the whole human+LLM population, we'd find that there are two distinct clusters: the human cluster and the LLM cluster."

Is that right?

If so, then here's how I'd make the argument. I'd enumerate a bunch of differences between LLMs and humans -- differences like "LLMs don't have bodily senses," "LLMs experience way more text over the course of their training than humans experience in their lifetimes," "LLMs have way fewer parameters," and "LLMs' internal learning rule is SGD whereas humans use Hebbian learning or whatever" -- and then for each difference say "this seems like the sort of thing that might systematically affect what kind of cognition happens, to an extent greater than typical intra-human differences like skin color, culture-of-childhood, language-raised-with, etc." Then add it all up and be like "even if we are wrong about a bunch of these claims, it still seems like overall the cluster analysis is gonna keep humans and LLMs apart instead of mingling them together. Like what the hell else could it do? Divide everyone up by language, maybe, and have primarily-English LLMs in the same cluster as humans raised speaking English, and then non-English speakers and non-English LLMs in the other cluster? That's probably my best guess as to how else the cluster analysis could shake out, and it doesn't seem very plausible to me -- and even if it were true, it would be true on the level of 'what concepts are used internally' rather than more broadly about stuff that really matters, like what the goals/values/architecture of the system is (i.e. how the concepts are used)."
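(Purely to illustrate the thought experiment, here's a toy sketch in Python. Everything in it -- the feature list, the numbers, the choice of k-means -- is made up for the example; the point is just that if the differences above dominate intra-human variation, an off-the-shelf clustering algorithm recovers the human/LLM split.)

```python
# Toy sketch, not a real experiment: pretend each mind can be summarized as a
# feature vector (via hypothetical mechinterp-derived features), then ask
# whether unsupervised clustering separates humans from LLMs or mixes them.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Made-up features: [has bodily senses, log10(text seen), log10(params/synapses), trained by SGD]
humans = np.column_stack([
    np.ones(100),               # bodily senses
    rng.normal(8.7, 0.3, 100),  # roughly 10^9 words heard in a lifetime (made up)
    rng.normal(14.0, 0.2, 100), # roughly 10^14 synapses (made up)
    np.zeros(100),              # Hebbian-ish learning, not SGD
])
llms = np.column_stack([
    np.zeros(20),
    rng.normal(12.5, 0.5, 20),  # roughly 10^12-10^13 training tokens (made up)
    rng.normal(11.0, 0.5, 20),  # roughly 10^11 parameters (made up)
    np.ones(20),
])

X = np.vstack([humans, llms])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Because the between-group differences dwarf the within-group noise here,
# the two recovered clusters line up with the human/LLM split.
print("human cluster ids:", set(labels[:100]))
print("LLM cluster ids:  ", set(labels[100:]))
```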

Can you say more about what you mean by "Where can I find a post or article arguing that the internal cognitive model of contemporary LLMs is quite alien, strange, non-human, even though they are trained on human text and produce human-like answers, which are rendered "friendly" by RLHF?"

Like, obviously it's gonna be alien in some ways and human-like in other ways. Right? How similar does it have to be to humans in order to count as not an alien? Surely you would agree that if we were to do a cluster analysis of the cognition of all humans alive today + all LLMs, we'd end up with two distinct clusters (the LLMs and then humanity), right?

I found myself coming back to this now, years later, and feeling like it is massively underrated. Idk, it seems like the concept of training stories is great and much better than e.g. "we have to solve inner alignment and also outer alignment" or "we just have to make sure it isn't scheming." 

Does anyone -- and in particular Evhub -- have updated views on this post with the benefit of hindsight? Should we e.g. try to get model cards to include training stories?

  • a) gaslit by "I think everyone already knew this" or even "I already invented this a long time ago" (by people who didn't seem to understand it); and that 

Curious to hear whether I was one of the people who contributed to this.

This part resonates with me; my experience in philosophy of science + talking to people unfamiliar with philosophy of science also led me to the same conclusion:

"You talk it out on the object level," said the Epistemologist.  "You debate out how the world probably is.  And you don't let anybody come forth with a claim that Epistemology means the conversation instantly ends in their favor."

"Wait, so your whole lesson is simply 'Shut up about epistemology'?" said the Scientist.

"If only it were that easy!" said the Epistemologist.  "Most people don't even know when they're talking about epistemology, see?  That's why we need Epistemologists -- to notice when somebody has started trying to invoke epistemology, and tell them to shut up and get back to the object level."

The main benefit of learning about philosophy is to protect you from bad philosophy. And there's a ton of bad philosophy done in the name of Empiricism: philosophy masquerading as science.

Some ideas for definitions of AGI / resolution criteria for the purpose of herding a bunch of cats / superforecasters into making predictions: 

(1) Drop-in replacement for human remote worker circa 2023 (h/t Ajeya Cotra): 

When will it first be the case that there exists an AI system which, if teleported back in time to 2023, would be able to function as a drop-in replacement for a human remote-working professional, across all* industries / jobs / etc.? So in particular, it can serve as a programmer, as a manager, as a writer, as an advisor, etc., and perform at (say) the 90th percentile or higher in any such role, and moreover the cost to run the system would be less than the hourly cost of employing a 90th-percentile human worker.

(Exception to the ALL: let's exclude industries/jobs/etc. where being a human is somehow important to one's ability to perform the job; e.g. maybe therapy bots are inherently disadvantaged because people will trust them less than real humans, and maybe the same goes for some things like HR. But importantly, this is not the case for anything actually key to performing AI research: designing experiments, coding, analyzing experimental results, etc. (the bread and butter of OpenAI) are central examples of the sorts of tasks we very much want to include in the "ALL.")

(2) Capable of beating All Humans in the following toy hypothetical: 

Suppose that all nations in the world enacted and enforced laws that prevented any AIs developed after year X from being used by any corporation other than AICORP. Meanwhile, they enacted and enforced laws that grant special legal status to AICORP: It cannot have human employees or advisors, and must instead be managed/CEO'd/etc. only by AI systems. It can still contract humans to do menial labor for it, but the humans have to be overseen by AI systems. The purpose is to prevent any humans from being involved in high-level decisionmaking, or in corporate R&D, or in programming. 

In this hypothetical, would AICORP probably be successful and eventually become a major fraction of the economy? Or would it sputter out, flail embarrassingly, etc.? What is the first year X such that the answer is "Probably it would be successful..."?

(3) The "replace 99% of currently remote jobs" thing I used with Ajeya and Ege

(4) The Metaculus definition (the hard component of which is the "2-hour adversarial Turing test": https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/), except instead of the judges trying to distinguish between the AI and an average human, they are trying to distinguish between the AI and a top AI researcher at a top AI lab.

To put it in terms of the analogy you chose: I agree (in a sense) that the routes you take home from work are strongly biased towards being short; otherwise you wouldn't have taken them home from work. But suppose you tell me that today you are going to try out a new route, you describe it to me, it seems to me that it's probably going to be super long, and I object and say it seems like it'll be super long for reasons XYZ. It's not a valid reply for you to say "don't worry, the routes I take home from work are strongly biased towards being short, otherwise I wouldn't take them." At least, it seems like a pretty confusing and maybe misleading thing to say. I would accept "Trust me on this, I know what I'm doing, I've got lots of experience finding short routes," I guess, though only half credit for that, since it still wouldn't be an object-level reply to the reasons XYZ. And in the absence of such a substantive reply, I'd start to doubt your expertise and/or doubt that you were applying it correctly here (especially if I had an error theory for why you might be motivated to think that this route would be short even if it wasn't).

I agree that they'll be able to automate most things a remote human expert could do within a few days before they are able to do things autonomously that would take humans several months. However, I predict that by the time they ACTUALLY automate most things a remote human expert could do within a few days, they will already be ABLE to do things autonomously that would take humans several months. Would you agree or disagree? (I'd also claim that they'll be able to take over the world before they have actually automated away most of the few-days tasks. Actually automating things takes time and schlep and requires a level of openness & aggressive external deployment that the labs won't have, I predict.)

Thanks. The routes-home example checks out IMO. Here's another one that also seems to check out, which perhaps illustrates why I feel like the original claim is misleading/unhelpful/etc.: "The laws of ballistics strongly bias aerial projectiles towards landing on targets humans wanted to hit; otherwise, ranged weaponry wouldn't be militarily useful."

There's a non-misleading version of this which I'd recommend saying instead, which is something like "Look, we understand the laws of physics well enough, and have played around with projectiles enough in practice, that we can reasonably well predict where they'll land in a variety of situations, and design+aim weapons accordingly; if this wasn't true then ranged weaponry wouldn't be militarily useful."
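(To illustrate the kind of prediction I have in mind, here's a minimal sketch using the textbook no-drag range formula. Obviously real gunnery needs drag, wind, elevation, etc., and the speed/target numbers here are made up for the example.)

```python
# Minimal sketch: the no-drag range formula R = v^2 * sin(2*theta) / g.
# The point is that the physics is understood well enough to aim on purpose,
# not that projectiles are "biased toward hitting targets" as a brute regularity.
import math

def range_no_drag(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Horizontal range of a projectile over flat ground, ignoring air resistance."""
    return speed_m_s ** 2 * math.sin(2 * math.radians(angle_deg)) / g

# Pick the launch angle (integer degrees) that lands closest to a 500 m target
# at 80 m/s muzzle speed -- i.e. "design+aim weapons accordingly."
target_m = 500.0
best_angle = min(range(1, 90), key=lambda a: abs(range_no_drag(80.0, a) - target_m))
print(best_angle, "degrees ->", round(range_no_drag(80.0, best_angle), 1), "m")
```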

And I would endorse the corresponding claim for deep learning: "We understand how deep learning networks generalize well enough, and have played around with them enough in practice, that we can reasonably well predict how they'll behave in a variety of situations, and design training environments accordingly; if this wasn't true then deep learning wouldn't be economically useful."

(To which I'd reply: "Yep, and my current understanding of how they'll behave in certain future scenarios is that they'll powerseek, for reasons which others have explained... I have some ideas for other, different training environments that probably wouldn't result in undesired behavior, but all of this is still pretty up in the air; tbh I don't think anyone really understands what they are doing here nearly as well as e.g. cannoneers in 1850 understood what they were doing.")
