Updated: Jan 16, 2026
Digital minds are artificial systems, from advanced AIs to potential future brain emulations, that could morally matter for their own sake, owing to their potential for conscious experience, suffering, or other morally relevant mental states. Cognitive science and the philosophy of mind cannot yet offer definitive answers as to whether present or near-future digital minds possess morally relevant mental states. Still, a majority of surveyed experts estimate at least fifty percent odds that AI systems with subjective experience could emerge by 2050,[1] while the public expresses broad uncertainty.[2]
This lack of clarity leaves open the risk of severe moral catastrophe.
As society surges toward an era shaped by increasingly capable and numerous AI systems, scientific theories of mind take on direct implications for ethics, governance, and policy, prompting a growing consensus that rapid progress on these questions is urgently needed.
This quickstart guide gathers the most useful articles, media, and research for readers ranging from curious beginners to aspiring contributors:
Here are a few ways to use the guide, depending on your interest level and time:
Casual/Curious:
Deep Dive:
Close Read:
Quickstart
For your first 1-2 hours.
Introduction
Getting an overview in your next 10-20 hours.
From here, we split into a choose-your-own-adventure:
Select Media
In Depth Material
Intermediate Resources
In this section, you’ll learn more about the specific high-level questions being investigated within the digital minds space. The landscape mapping we introduce is by no means exhaustive; this is a rapidly evolving field, and we have likely missed things. The lines between the identified questions should also be treated as blurry rather than solid and well-defined; for instance, debates about AI consciousness and AI suffering are very closely related. That said, we hope this section gives you a solid understanding of some of the big-picture ideas that experts are focusing on.
Meta: Introducing and (De)Motivating the Cause Area
Much work has been done on (de)motivating AI welfare as an important emerging cause area. Some authors have focused on investigating the potentially large scale of the problem. Others have investigated what relevant scientific and philosophical theories predict about the minds and moral status of AI systems and how this should inform our next steps.
Lessons from Animal Welfare
A number of experts are investigating the parallels between AI welfare and animal welfare, drawing both on the science of animal welfare and on relevant lessons for policy and advocacy efforts.
Foundational Issues: The Problem of Individuation
A foundational question for the field could be posed as follows: When we say that we should extend concern towards ‘digital minds’ or ‘digital subjects’, who exactly is it that we should extend concern towards? The weights, the model instance, the simulated character…? A growing literature is now focused on addressing this problem in the case of LLMs.
Foundational Issues: Non-Biological Mental States
Another foundational question in the field is whether morally relevant mental states such as suffering, consciousness or preferences and desires could exist in non-biological systems. This section offers various affirmative and sceptical arguments.
AI Suffering
A growing concern among many experts is the creation of digital systems that could suffer at an astronomically large scale. The papers here offer an introductory overview to the problem of AI suffering and outline concrete risks and worries.
AI Consciousness
There is a growing field of researchers investigating whether AI models could be conscious. This question seems very important for digital welfare: phenomenal consciousness is often thought to be a necessary condition for suffering, and some hold that phenomenal consciousness is itself sufficient for moral standing.
AI Minds (Desires, Beliefs, Intentions…)
There has been a general interest in the kinds of mental states that LLMs and other AI systems could instantiate. Some of these, such as desires, may play an important role in determining the AI’s moral status. Others might help us gain a more general understanding of what kind of entities LLMs are and whether they are ‘minded’.
AI Welfare x AI Safety
Some authors have pointed out that there might be tensions and trade-offs between AI welfare and AI safety. The papers in this section explore this tension in more depth and investigate potential synergistic pathways between the two.
Empirical Work: Investigating the Models
Work on AI welfare now goes beyond philosophical theorizing. A growing body of empirical work investigates, among many other things, the inner workings of LLMs, evaluations for sentience and other morally relevant properties, and tractable interventions for protecting and promoting AI welfare.
Ethical Design of Digital Minds
If digital minds could have moral status, this raises the question of what constraints that possibility places on the kinds of digital minds it would be morally permissible to create. Some authors outline specific design policies, while others focus on the risks of creating digital minds with moral standing.
Empirical Work: What Do People Think about Digital Moral Status?
AI welfare is not just a philosophical and scientific problem but also a practical societal concern. A number of researchers are trying to understand and forecast how the advent of digital minds could reshape society and what attitudes people will hold towards potentially sentient machines.
AI Policy / Rights
Discussions surrounding AI moral status may have profound political implications. It is an open question whether digital minds should be granted some form of protective rights, either qua potentially sentient beings or qua members of the labour market.
Forecasting & Futures with Digital Minds
Alongside work on the societal response to potentially sentient digital minds and the surrounding political issues, there is a growing body of futures and world-building work outlining specific visions of how humans and digital minds can co-exist and what challenges lie ahead.
The Various “Species” of Digital Minds
In much of the literature we’ve outlined above, LLMs were the primary focus of discussion. However, many other digital minds could plausibly come to have moral status and it would be risky to overlook these other potential candidates. Hence, we offer a brief overview of the literature focused on the various “species” of exotic digital minds with potential for moral standing.
Strategy: How to Approach this Cause Area?
Brain Emulation & “Bio-anchors”
While digital persons may not share features such as architecture or scale with the human brain, the human brain might nonetheless offer semi-informative ‘bio-anchors’ for digital minds, since our minds constitute an existence proof of what is possible. Additionally, the emulation of actual human (or other animal) brains may be possible and/or desirable.
Further Resources
We think these blogs/newsletters are great for keeping up with developments in digital minds:
For books on Philosophy of Mind
Or on Digital Minds
Fiction
Short Stories
Netflix
Books
Digital Minds Landscape
Orgs
Non-Profits
Companies
Academic Centers
Conferences & Events
Online Communities
Career Pathways
As a nascent field spanning multiple disciplines, digital minds research draws on established work across: Neuroscience, Computational Neuroscience, Cognitive Science, Philosophy of Mind, Ethics & Moral Philosophy, AI Alignment & Safety, Animal Welfare Science, Bioethics, Machine Ethics, Legal Philosophy & AI Governance, Information Theory, Psychology, Computer Science/ML/AI.
Example career trajectories for research might look like:
Example trajectories for other relevant work could be as follows. Though note that there are fewer existing pathways for these positions and that many of these fields (such as policy) are nascent or speculative:
Also worth noting: the field is young enough that many current leaders entered via adjacent work (AI safety, animal welfare, philosophy of mind) and pivoted as digital minds emerged as a distinct focus. Demonstrated interest, strong reasoning, and relevant skills may matter more than following any specific trajectory.
Internships & Fellowships
Parting Thoughts
In our view, our modern understanding of physics, including the growing view of information as fundamental, casts doubt on the idea that the human mind, or even carbon-based life, is special. Nature may yet have great surprises in store for us, but absent such surprises, the default path makes it a question of when, not if, digital people will be created. This possibility is an awesome responsibility. It would mark a turning point in history. Our deep uncertainty is striking. Why does it feel the way it feels to be us? Why does it feel like anything at all? Could AI systems be conscious, perhaps even today? We cannot say with any rigor.
It is in the hope that we might, as scientists, surge ahead boldly to tackle one of our most perennial, most vexing, and most intimate questions that I help write this guide.
We’ve seen the substantial moral stakes of both under- and over-attribution. Perhaps then I’ll close by highlighting our prospects for great gains. In studying digital minds, we may find the ideal window through which to finally understand our own. If digital personhood is possible, the future may contain not just more minds but new ways of relating, new ways of being, and more kinds of experiences than we can presently imagine. The uncertainty that demands prudence also permits a great deal of excitement and hope. We reckon incessantly with the reality that the universe is stranger and more capacious than our intuition readily grasps. I should think it odd if the space of possible minds were any less curious and vast.
Some lament: “born too late to explore the world”. But to my eye, as rockets launch beyond our planet and artificial intelligences learn to crawl across the World Wide Web, we find ourselves poised at the dawn of our exploration into the two great frontiers: the climb into outer space, that great universe beyond, and the plunge into inner space, that great universe within. If we can grow in wisdom, if we can make well-founded scientific determinations and prudent policies, a future with vastly more intelligence could be great beyond our wildest imaginings. Let’s rise to the challenge to do our best work at this pivotal time in history. Let’s be thoughtful and get it right, for all humankind and perhaps, results pending, for all mindkind.
Glossary of Terms
Acknowledgments
The guide was written and edited by Avi Parrack and Štěpán Los. Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5.1 aided in literature review. Claude Opus 4.5 wrote the Glossary of Terms, which was reviewed and edited by Avi and Štěpán.
Special thanks to: Bradford Saad, Lucius Caviola, Bridget Harris, Fin Moorhouse, and Derek Shiller for thoughtful review, recommendations and discussion.
See a mistake? Reach out to us or comment below. We will aim to update periodically.
[1] Survey of 67 professionals, cross-domain, 2025.
[2] Survey of 1,169 U.S. adults, 2023.