"Value alignment theory" or "AI alignment theory" is the overarching research topic of how to develop a highly advanced Artificial Intelligence such that running this AI produces good real-world outcomes. Other terms that have been used to describe this research subject are "robust and beneficial AGI" and "Friendly AI theory".
If you're just coming in and want to poke around, start with the List of Value Alignment Topics, or browse the children of this page.
Introductory articles don't exist yet. In the meantime, if you have no idea what this is all about, try reading Nick Bostrom's book Superintelligence.
For the definition of 'value alignment' as contrasted with 'value identification' or 'value achievement', see the page on value alignment problem. For the definition of 'value' as a metasyntactic placeholder for "the still-debated thing we want our AIs to accomplish", see the page on value.