The first MIRI paper to use the term is "Aligning Superintelligence with Human Interests: A Technical Research Agenda" from 2014, the original version of which appears no longer to exist anywhere on the internet, having been replaced by the 2017 rewrite "Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda". Previous papers sometimes offhandedly talked about AI being aligned with human values, but only as one choice of wording among many.
Edit: the earliest citation I can find for Russell talking about alignment is also 2014.
I recall Eliezer saying that Stuart Russell named the 'value alignment problem', and that the shorter term 'alignment' was derived from that. (Perhaps Eliezer did the deriving?)
I recall Eliezer asking on Facebook for a good word for the field of AI safety research before it was called alignment.
Google advanced search sucks, but it's clear that "AI friendliness" and "AI safety" became "AI alignment" sometime in 2016.
Regarding v1 of the "Agent Foundations..." paper (then called "Aligning Superintelligence with Human Interests: A Technical Research Agenda"), the original file is here.
To make it easier to find older versions of MIRI papers and see whether there are substantive changes (e.g., for purposes of citing a claim), I've made a page at https://intelligence.org/revisions/ listing obsolete versions of a bunch of papers.
Regarding the term "alignment" as a name for the field/problem: my recollection is that Stuart Russell suggested the term to MIRI in 2014, before anyon... (read more)