Welcome to the first edition of the Digital Minds Newsletter, collating all the latest news and research on digital minds, AI consciousness, and moral status.
Our aim is to help you stay on top of the most important developments in this emerging field. In each issue, we will share a curated overview of key research papers, organizational updates, funding calls, public debates, media coverage, and events related to digital minds. We want this to be useful for people already working on digital minds as well as newcomers to the topic.
This first issue looks back at 2025 and reviews developments relevant to digital minds. We plan to release multiple editions per year.
If you find this useful, please consider subscribing, sharing it with others, and sending us suggestions or corrections to digitalminds@substack.com.
In 2025, the idea of digital minds shifted from a niche research topic to one taken seriously by a growing number of researchers, AI developers, and philanthropic funders. Questions about real or perceived AI consciousness and moral status appeared regularly in tech reporting, academic discussions, and public discourse.
Anthropic’s early steps on model welfare
Following its support for the 2024 report “Taking AI Welfare Seriously”, Anthropic expanded its model welfare efforts in 2025 and hired Kyle Fish as an AI welfare researcher. Fish discussed the topic and his work in an 80,000 Hours interview. Anthropic leadership is taking the issue of AI welfare seriously: CEO Dario Amodei drew attention to the relevance of model interpretability to model welfare and mentioned model exit rights at the Council on Foreign Relations.
Several of the year’s most notable developments came from Anthropic: they facilitated an external model welfare assessment conducted by Eleos AI Research, included references to welfare considerations in model system cards, ran a related fellowship program, introduced a “bail button” for models showing distressed behavior, and made internal commitments around keeping promises and discretionary compute. In addition to hiring Fish, Anthropic also hired a philosopher—Joe Carlsmith—who has worked on AI moral patiency.
In the private sector, Anthropic has been leading the way (see section above), but others have also been making strides. Google researchers organized an AI consciousness conference, three years after the company fired Blake Lemoine. AE Studio expanded its research into subjective experiences in LLMs. And Conscium launched an open letter encouraging a responsible approach to AI consciousness.
Philanthropic actors have also played a key role this year. The Digital Sentience Consortium, coordinated by Longview Philanthropy, issued the first large-scale funding call specifically for research, field-building, and applied work on AI consciousness, sentience, and moral status.
Organized the AI, Animals, and Digital Minds Conference in London and New York.
Started an artificial sentience channel on its Slack Community.
Other noteworthy organizations
AE Studio started researching issues related to AI welfare.
Astera Institute is launching a major new neuroscience research effort led by Doris Tsao on how the brain produces conscious experience, cognition, and intelligent behavior. Astera plans to support this effort with $600M+ over the next decade.
Conscium issued an open letter calling for responsible approaches to research that could lead to the creation of conscious machines and seed-funded PRISM.
If you are considering moving into this space, here are some entry points that opened or expanded in 2025. We will use future issues to track new calls, fellowships, and events as they arise.
Funding and fellowships
The Anthropic Fellows Program for AI safety research is accepting applications and plans to work with some fellows on model welfare; deadline January 12, 2026.
Good Ventures now appears open to supporting work on digital minds recommended by Coefficient Giving (formerly Open Philanthropy).
Sentient Futures is holding a Summit in the Bay Area from February 6 to 8. They will likely hold another event in London in the summer; keep an eye on their website for details.
Benjamin Henke and Patrick Butlin will continue running a speaker series on AI agency in the spring. Remote attendance is possible. Requests to be added to the mailing list can be sent to benhenke@gmail.com. Speakers will include Blaise Aguera y Arcas, Nicholas Shea, Joel Leibo, and Stefano Palminteri.
Soenke Ziesche and Roman Yampolskiy released Considerations on the AI Endgame. It covers AI welfare science, value alignment, identity, and proposals for universal AI ethics.
Eric Schwitzgebel released a draft of AI and Consciousness. It’s a skeptical overview of the literature on AI consciousness.
Geoff Keeling and Winnie Street announced a forthcoming book called Emerging Questions on AI Welfare with Cambridge University Press.
Simon Goldstein and Cameron Domenico Kirk-Giannini released a draft of AI Welfare: Agency, Consciousness, Sentience, a systematic investigation of the possibility of AI welfare.
Podcasts
This year, we’ve heard many podcast guests discuss topics related to digital minds, and we’ve also listened to podcasts dedicated entirely to the topic.
80,000 Hours featured an episode with Kyle Fish on the most bizarre findings from 5 AI welfare experiments.
The AI Risk Network launched Am I?, a podcast dedicated to exploring AI consciousness.
In 2025, there was an uptick of discussion of AI consciousness in the public sphere, with articles in the mainstream press and prominent figures weighing in. Below are some of the key pieces.
Mustafa Suleyman, CEO of Microsoft AI, argued in “We must build AI for people; not to be a person” that “Seemingly Conscious AI” poses significant risks, urging developers to avoid creating illusions of personhood, given there is “zero evidence” of consciousness today.
Robert Long challenged the “zero evidence” claim, clarifying that the research Suleyman cited actually concludes there are no obvious technical barriers to building conscious systems in the near future.
The New York Times, Zvi Mowshowitz, Douglas Hofstadter, and several other outlets and commentators describe “AI psychosis,” a phenomenon in which users interacting with chatbots develop delusions, paranoia, or distorted beliefs, such as believing the AI is conscious or divine, often reinforced by the model’s sycophantic tendency to validate the user’s own projections.
Lucius, Bradford, and collaborators launched the guide WhenAISeemsConscious.org, and Vox’s Sigal Samuel published practical advice to help users ground themselves and critically evaluate these interactions.
6. A Deeper Dive by Area
Below is a deeper dive by area, covering a longer list of developments from 2025. This section is designed for skimming, so feel free to jump to the areas most relevant to you.
The EU AI Act Code of Practice identifies risks to non-human welfare as a category to be considered in the process of systemic risk identification, in line with recommendations made in consultations by people at Anima International, people at Sentient Futures, Adrià Moret, and others.
Ned Block asks whether only meat machines can be conscious. He argues that there is a tension between views on which AIs can be conscious and views on which simple animals can be.
Konstantin Denim and collaborators propose functional conditions for sentience, sketch approaches to implementing them in deep learning systems, and note that knowing what sentience requires may help us avoid inadvertently creating sentient AI systems.
Stephen Fleming and Matthias Michel argue that consciousness is surprisingly slow and that this has implications for the function and distribution of consciousness; Ian Phillips responds.
Mark MacCarthy, in a Brookings Institution piece, asks whether AI systems have moral status and claims that other challenges are more worthy of our scarce resources.
We (Lucius and Bradford) surveyed 67 experts on digital minds takeoff, who anticipated a rapid expansion of collective digital welfare capacity once such systems emerge.
Justin B. Bullock and collaborators use the AIMS survey to examine how trust and risk perception shape AI regulation preferences, finding broad public support for regulation.
Eleos AI outlines five research priorities for AI welfare: developing concrete interventions, establishing human-AI cooperation frameworks, leveraging AI progress to advance welfare research, creating standardized welfare evaluations, and credible communication.
Eric Schwitzgebel and Jeff Sebo propose the Emotional Alignment Design Policy: AI systems should be designed to elicit emotional reactions appropriate to their actual moral status, avoiding both overshooting and undershooting.
Adam Bradley and Bradford Saad identify three agency-based dystopian risks: artificial absurdity (disconnected self-conceptions), oppression of AI rights, and unjust distribution of moral agency.
Joel Leibo and collaborators at Google DeepMind defend a pragmatic view of personhood as a flexible bundle of obligations rather than a metaphysical property, with an eye toward enabling governance solutions while sidestepping consciousness debates.
Hilary Greaves, Jacob Barrett, and David Thorstad publish Essays on Longtermism, which includes chapters touching on digital minds and future population ethics, including discussion of emulated minds.
Aksel Sterri and Peder Skjelbred discuss how would-be AGI creators face a dilemma: don’t align AGI and risk catastrophe, or align AGI and commit a serious moral wrong.
Bradford Saad discusses Claude Sonnet 4.5’s step change in evaluation awareness and other parts of the system card that are potentially relevant to digital minds research.
Shoshannah Tekofsky gives an overview of how LLM agents in the AI Village raised money for charity. Eleos affiliate Larissa Schiavo recounts her personal experience interacting with the agents.
Brain-inspired technologies
Henry Markram, founder of the Human Brain Project, and Kamila Markram launched the Open Brain Institute; part of its mission is to enable users to conduct realistic brain simulations.
Thank you for reading! If you found this article useful, please consider subscribing, sharing it with others, and sending us suggestions or corrections to digitalminds@substack.com.