Answers in order: there is none, there were, there are none yet.
(Context starts, feel free to skip, this is the first time I can share this story)
After posting this, I was contacted by Richard Mallah, who (if memory serves) had created the map, compiled the references, and written most of the text in 2017. He asked me to help with the next iteration of the map. The goal was to build a Body of Knowledge for AI Safety, including AGI topics but also ML Safety methods for more current capabilities.
This was going to happen in conjunction with the contributions of many academic & industry stakeholders, under the umbrella of CLAIS (Consortium on the Landscape of AI Safety), mentioned here.
There were design documents for the interactivity of the resource, and I volunteered to build it. Back in 2020 I had severely overestimated both my web development skills and my ability to work during a lockdown. I never published a prototype interface, and for unrelated reasons the CLAIS project... wound down.
(End of context)
I do not remember Richard mentioning any review of the map's contents, apart from the feedback he received back when he wrote them. The map has been tucked away in a corner of the Internet for a while now.
The plans to update/expand it failed as far as I can tell. There is no new version and I'm not aware of any new plans to create one. I stopped working on this in April 2021.
There is no current map with this level of interactivity and visualization, but there have been a number of initiatives trying to be more comprehensive and up-to-date!
Huh, I hadn't seen that before. Looks interesting though! Not sure it's worth a complete review, but it seems complete in the sense of an encyclopedia entry. It's not covering research in progress so much as it's one person's really good shot at making a map of value alignment.