Around January 2021, I came across the Future of Life Institute's Value Alignment Map and found it a useful resource for getting a sense of the different alignment topics. Going through it again today, I recalled that I have seen few mentions of this map in the wider LW/EAF/Alignment community, and no external commentary on its comprehensiveness. Additionally, the document accompanying the map indicates that it was last edited in January 2017; after about half an hour of searching, I have not been able to find an updated version on LW or FLI's website. With this in mind, I have a few questions:
- Does anyone know of an external review of the comprehensiveness and accuracy of the topics covered in this map?
- Does anyone know if there are plans to update it, or, if it has already been updated, where the newer version can be found?
- Does anyone know of similarly comprehensive maps of value alignment / AI safety research topics?
I am sorry it took me so long to reply. First, thank you for your comment; it answers all of my questions in fairly detailed fashion.
A map of research covering the labs, people, organizations, and research papers focused on AI Safety seems high-impact, and FLI's 2017 map seems like a good start, at least for cataloguing what types of research are occurring in AI Safety. In this vein, it is worth noting that Superlinear is offering a small prize of $1150 for whoever can "Create a visual map of the AGI safety ecosystem", but I don't...