[ Question ]

When was the term "AI alignment" coined?

by David Scott Krueger (formerly: capybaralet)
21st Oct 2020
1 min read


3 Answers, sorted by top scoring

Multicore

Oct 22, 2020*


The first MIRI paper to use the term is "Aligning Superintelligence with Human Interests: A Technical Research Agenda" from 2014, the original version of which appears not to exist anywhere on the internet anymore, having been replaced by the 2017 rewrite "Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda". Previous papers sometimes offhandedly talked about AI being aligned with human values as one choice of wording among many.

Edit: the earliest citation I can find for Russell talking about alignment is also 2014.

Rob Bensinger

Regarding v1 of the "Agent Foundations..." paper (then called "Aligning Superintelligence with Human Interests: A Technical Research Agenda"), the original file is here.

To make it easier to find older versions of MIRI papers and see whether there are substantive changes (e.g., for purposes of citing a claim), I've made a https://intelligence.org/revisions/ page listing obsolete versions of a bunch of papers.

Regarding the term "alignment" as a name for the field/problem: my recollection is that Stuart Russell suggested the term to MIRI in 2014, before anyon...

Rob Bensinger
Footnote: Looks like MIRI was using "Friendly AI" in our research agenda drafts as of Oct. 23, and we switched to "aligned AI" by Nov. 20 (though we were using phrasings like "reliably aligned with the intentions of its programmers" earlier than that).

Ben Pace

Oct 21, 2020


I recall Eliezer saying that Stuart Russell named the 'value alignment problem', and that 'AI alignment' was derived from that. (Perhaps Eliezer derived it?)

Gurkenglas

I recall Eliezer asking on Facebook for a good word for the field of AI safety research before it was called alignment.

Ben Pace
I would be interested in a link if anyone is willing to go look for it.

RUTHECHANG@GMAIL.COM

Mar 04, 2025


I have just asked Stuart Russell this question at an event I co-hosted featuring him as the speaker. He says he did not coin the name of the problem. It is still unclear who did, but the conceptualization of the problem goes at least as far back as Norbert Wiener in the 1960s, and likely further back still.

Comments (1 of 5 shown, sorted by top scoring)
Shmi

Google advanced search sucks, but it's clear that AI friendliness and AI safety became AI alignment some time in 2016.
