The "alignment problem for advanced agents" or "AI alignment" is the overarching research topic of how to develop sufficiently advanced machine intelligences such that running them produces good real-world outcomes.
Other terms that have been used to describe this research problem include "robust and beneficial AI" and "Friendly AI". The term "value alignment problem" was coined by Stuart Russell to refer to the primary subproblem of aligning AI preferences with (potentially idealized) human preferences.
A good introductory article or survey paper for this field does not presently exist. If you have no idea what this problem is about, consider reading Nick Bostrom's popular book Superintelligence.
You can explore this Arbital domain by following this link. See also the List of Value Alignment Topics on Arbital, though that list is not up to date.
For the definition of "value alignment" as contrasted with "value identification" or "value achievement", see the page on the value alignment problem. For the definition of "value" as a metasyntactic placeholder for "the still-debated thing we want our AIs to accomplish", see the page on value.