I have made a map of the AI Safety Community!

The map is greatly inspired by the map of the rationalist community made by Scott Alexander.

There are bound to be omissions and misunderstandings, and I will be grateful for any corrections. I promise that I will incorporate the feedback into a new version of the map.

The sizes of the cities/dwellings reflect my understanding of how much they contribute to AI Safety. The locations and borders reflect my judgement of who focuses on what, and I had to make some difficult choices.

(Made with Fractal Mapper 8, and crossposted to AISafety.com and r/controlProblem)

I hope that you will find the map useful, and find inspiration to visit new places.

I downvoted because this doesn't seem on-topic for LessWrong. The posting guidelines say to avoid discussion of community, rather than discussion of more enduring facts.

Thank you for explaining.

I'd consider putting FRI closer to Effective Altruism, since they are also concerned with suffering more generally.

Do you have criteria for including fiction? Other relevant fiction I am aware of:

Also, Vernor Vinge is spelled with an 'o'.

Thank you for your comments. I have included them in version 1.1 of the map, where I have swapped FRI and OpenAI/DeepMind, added Crystal Trilogy and corrected the spelling of Vernor Vinge.