Written by Zach Freitas-Groff and posted at his request.
I’m excited to announce a “Digital Sentience Consortium” hosted by Longview Philanthropy, in collaboration with The Navigation Fund and Macroscopic Ventures, to support research and applied projects focused on the potential consciousness, sentience, moral status, and experiences of artificial intelligence systems. The opportunities include research fellowships, career transition fellowships, and a broad request for proposals for applied work on these topics.
For years, I’ve thought this area was seriously overlooked; interest in it is now growing. Twenty-two of the 123 pages of Claude 4’s model card concern its potential moral patienthood. Scientific experts increasingly say that near-term AI sentience is a real possibility; even the skeptical neuroscientist Anil Seth says, “it is unwise to dismiss the possibility altogether.” We’re hoping to bring new people and projects into the field.
This reminds me of related questions around slowing down AI, discussing AI with a mass audience, or building public support for AI policy (e.g. https://forum.effectivealtruism.org/posts/pm6Mn4a3h4oekCCay/two-strange-things-about-ai-safety-policy, http://www.zachgroff.com/2017/08/does-ai-safety-and-effective-altruist.html). Many of the arguments against doing these things share the same underlying motivation: that we are concerned about these issues for reasons that are somewhat abstruse. Where would these "sociopolitical" considerations get us on these questions?