Around January 2021, I came across the Future of Life Institute's Value Alignment Map and found it a useful resource for getting a sense of the different alignment topics. Going through it again today, I realized that I have not seen many mentions of this map in the wider LW/EAF/Alignment community, nor any external commentary on its comprehensiveness. Additionally, the document accompanying the map indicates that it was last edited in January 2017; despite spending about half an hour searching, I haven't been able to find an updated version on LW or FLI's website. With this in mind, I have a few questions:
- Does anyone know of an external review of the comprehensiveness and accuracy of the topics covered in this map?
- Does anyone know whether there are plans to update it or, if it has already been updated, where the newer version can be found?
- Does anyone know of similarly comprehensive maps of value alignment / AI safety research topics?

No need to apologize; I'm usually late as well!
There is no great answer, but I feel compelled to list the few I know of (which I wanted to add to my Resources post anyway):
Other mapping resources cover not the work being done but arguments and scenarios; for example, there's Lukas Trötzmüller's excellent argument compilation, though that wouldn't exactly help someone get into the field faster.
Just in case you don't know about it, there's the AI alignment field-building tag on LW, which mentions an initiative run by plex, who also coordinates Stampy.
I'd be interested in reviewing stuff, yes, time permitting!