The first MIRI paper to use the term is "Aligning Superintelligence with Human Interests: A Technical Research Agenda" from 2014, the original version of which appears not to exist anywhere on the internet anymore, having been replaced by the 2017 rewrite "Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda". Earlier papers sometimes offhandedly talked about AI being aligned with human values, as one choice of wording among many.

Edit: the earliest citation I can find for Russell talking about alignment is also 2014.

Regarding v1 of the "Agent Foundations..." paper (then called "Aligning Superintelligence with Human Interests: A Technical Research Agenda"), the original file is here.

To make it easier to find older versions of MIRI papers and see whether there are substantive changes (e.g., for purposes of citing a claim), I've made a page listing obsolete versions of a bunch of papers.

Regarding the term "alignment" as a name for the field/problem: my recollection is that Stuart Russell suggested the term to MIRI in 2014, before anyon...

Rob Bensinger (2y):
Footnote: Looks like MIRI was using "Friendly AI" in our research agenda drafts as of Oct. 23, and we switched to "aligned AI" by Nov. 20 (though we were using phrasings like "reliably aligned with the intentions of its programmers" earlier than that).

I recall Eliezer saying that Stuart Russell named the 'value alignment problem', and that it was derived from that. (Perhaps Eliezer derived it?)

I recall Eliezer asking on Facebook for a good word for the field of AI safety research before it was called alignment.

Ben Pace (2y):
Would be interested in a link if anyone is willing to go look for it.

Google advanced search isn't much help here, but it's clear that "AI friendliness" and "AI safety" became "AI alignment" some time in 2016.