Artificial Intelligence (AI) systems have significant potential to affect the lives of individuals and societies. As these systems are increasingly used in decision-making processes, it has become crucial to ensure that they make ethically sound judgments. This paper proposes a novel framework for embedding ethical priors into AI, inspired by the Bayesian approach to machine learning. We propose that ethical assumptions and beliefs can be incorporated as Bayesian priors, shaping the AI’s learning and reasoning process in a similar way to humans’ inborn moral intuitions. This approach, while complex, provides a promising avenue for advancing ethically aligned AI systems.
Artificial Intelligence has permeated almost every aspect of our lives, often making decisions or recommendations that significantly impact individuals and societies. As such, the demand for ethical AI — systems that not only operate optimally but also in a manner consistent with our moral values — has never been higher. One way to address this is by incorporating ethical beliefs as Bayesian priors into the AI’s learning and reasoning process.
Bayesian priors are a fundamental part of Bayesian statistics. They represent prior beliefs about the distribution of a random variable before any data is observed. By incorporating these priors into machine learning models, we can guide the learning process and help the model make more informed predictions.
For example, we may have a prior belief that student exam scores are normally distributed with a mean of 70 and standard deviation of 10. This belief can be encoded as a Gaussian probability distribution and integrated into a machine learning model as a Bayesian prior. As the model trains on actual exam score data, it will update its predictions based on the observed data while still being partially guided by the initial prior.
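As a minimal sketch of this updating process, a conjugate Normal-Normal update with the prior from the example, a handful of invented scores, and a known observation noise of 10 (assumed for simplicity) might look like:

```python
def update_normal_prior(prior_mean, prior_sd, scores, noise_sd):
    """Conjugate update of a Normal prior over the mean, with known noise."""
    n = len(scores)
    sample_mean = sum(scores) / n
    prior_prec = 1.0 / prior_sd ** 2          # precision = 1 / variance
    data_prec = n / noise_sd ** 2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * sample_mean) / post_prec
    return post_mean, (1.0 / post_prec) ** 0.5

# Prior belief: mean score ~ N(70, 10). Four observed scores pull the
# estimate toward the data while the prior still anchors it.
post_mean, post_sd = update_normal_prior(70.0, 10.0, [55, 60, 58, 62], 10.0)
```

The posterior mean lands between the prior mean (70) and the sample mean (58.75), weighted by the respective precisions, which is exactly the "partially guided by the initial prior" behavior described above.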
The concept of ethical priors relates to the integration of ethical principles and assumptions into the AI’s initial learning state, much like Bayesian priors in statistics. Like humans, who have inherent moral intuitions that guide their reasoning and behavior, AI systems can be designed to have “ethical intuitions” that guide their learning and decision-making process.
For instance, we may want an AI system to have an inbuilt prior that human life has inherent value. This ethical assumption, once quantified, can be integrated into the AI’s decision-making model as a Bayesian prior. When making judgments that may impact human well-being, this prior will partially shape its reasoning.
In short, the idea behind ethical priors is to build in existing ethical assumptions, beliefs, values and intuitions as biasing factors that shape the AI's learning and decision-making. Some ways to implement ethical priors include:
The key advantage of priors is that they mimic having inherent ethics, as humans do. Unlike rule-based systems, priors gently guide rather than impose rigid constraints. Priors also require less training data than pure machine learning approaches. Challenges include carefully choosing the right ethical priors to insert, and ensuring the AI can adapt them in light of new evidence.
Overall, ethical priors represent a lightweight and flexible approach to seed AI systems with moral starting points rooted in human ethics. They provide a strong conceptual foundation before layering on more rigorous technical solutions.
Below is a proposed, generalized action list for incorporating ethical priors into an AI’s learning algorithm. Respect for human well-being, prohibition of harm, and truthfulness are chosen as examples.
1. Define Ethical Principles
2. Represent the ethical priors mathematically:
3. Integrate the models into the AI’s decision making process:
4. Evaluate outputs and update priors as new training data comes in:
This allows the AI to dynamically evolve its ethics understanding while remaining constrained by the initial human-defined priors. The key is balancing adaptivity with anchoring its morals to its original programming.
The first step in setting ethical priors is to define the ethical principles that the AI system should follow. These principles can be derived from various sources such as societal norms, legal regulations, and philosophical theories. It’s crucial to ensure the principles are well-defined, universally applicable, and not in conflict with each other.
For example, two fundamental principles could be:
Defining universal ethical principles that AI systems should follow is incredibly challenging, as moral philosophies can vary significantly across cultures and traditions. Below we present a possible way to achieve that goal:
While universal agreement on ethics is unrealistic, this rigorous, data-driven process could help identify shared moral beliefs to instill in AI despite cultural differences. Still, difficult judgment calls would be inevitable in determining final principles.
After defining the ethical principles, the next step is to translate them into quantifiable priors. This is a complex task as it involves converting abstract ethical concepts into mathematical quantities. One approach could be to use a set of training data where human decisions are considered ethically sound, and use this to establish a statistical model of ethical behavior.
The principle of “respect for autonomy” could be translated into a prior probability distribution over allowed vs disallowed actions based on whether they restrict a human’s autonomy. For instance, we may set a prior of P(allowed | restricts autonomy) = 0.1 and P(disallowed | restricts autonomy) = 0.9.
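Using the hypothetical numbers above, Bayes' rule shows how such a prior would temper the system's data-driven evidence; the likelihoods here are invented purely for illustration:

```python
def posterior_allowed(prior_allowed, lik_if_allowed, lik_if_disallowed):
    """P(allowed | evidence) from an ethical prior and task-level likelihoods."""
    num = prior_allowed * lik_if_allowed
    den = num + (1.0 - prior_allowed) * lik_if_disallowed
    return num / den

# The action restricts autonomy, so the prior P(allowed) is 0.1.
# Even with task evidence favouring the action 4:1, the ethical
# prior keeps the posterior probability of "allowed" below 0.5.
p = posterior_allowed(0.1, lik_if_allowed=0.8, lik_if_disallowed=0.2)
```

In other words, strong countervailing evidence shifts the posterior (0.1 to roughly 0.31) but does not by itself overturn the ethical presumption, which is the intended "gentle guidance" behavior.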
Translating high-level ethical principles into quantifiable priors that can guide an AI system is extremely challenging. Let us outline one possible way to do so using training data of human ethical decisions. For that, we would need to:
1. Compile dataset of scenarios reflecting ethical principles:
2. Extract key features from the dataset:
3. Have human experts label the data:
4. Train ML models on the labelled data:
5. Validate models on test sets and refine as needed.
6. Deploy validated models as ethical priors in the AI system. The priors act as probability distributions for new inputs.
By leveraging human judgments, we can ground AI principles in real world data. The challenge is sourcing diverse, unbiased training data that aligns with moral nuances. This process requires great care and thoughtfulness.
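Steps 3 through 6 above can be sketched with a toy logistic model trained on hypothetical expert-labelled scenarios; the features, labels, and hyperparameters are invented for illustration, and the model's predicted probability then serves as the prior for new inputs:

```python
import math

def train_ethical_scorer(X, y, lr=0.5, epochs=1000):
    """Fit a tiny logistic model of P(judged ethical | features) by SGD."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            grad = 1.0 / (1.0 + math.exp(-z)) - yi     # dLoss/dz for log loss
            w = [wj - lr * grad * xj for wj, xj in zip(w, xi)]
            b -= lr * grad
    return w, b

def ethical_prior(w, b, x):
    """Probability a new scenario is ethical, usable as a Bayesian prior."""
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

# Hypothetical features per scenario: [harm_risk, consent_obtained].
X = [[0.9, 0], [0.8, 0], [0.1, 1], [0.2, 1], [0.7, 1], [0.1, 0]]
y = [0, 0, 1, 1, 0, 1]   # 1 = experts judged the action ethical
w, b = train_ethical_scorer(X, y)
```

A real pipeline would of course use far richer features and a validated model, but the shape is the same: human labels in, a probability distribution over new inputs out.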
A more detailed breakdown, with each ethical category separated, follows below.
Respect for human life and well-being:
This gives a high-level picture of how qualitative principles could be converted into statistical models and mathematical constraints. Feedback and adjustment of the models would be needed to properly align them with the intended ethical principles.
Once the priors are quantified, they can be incorporated into the AI’s learning algorithm. In the Bayesian framework, these priors can be updated as the AI encounters new data. This allows the AI to adapt its ethical behavior over time, while still being guided by the initial priors.
Techniques like maximum a posteriori estimation can be used to seamlessly integrate the ethical priors with the AI’s empirical learning from data. The priors provide the initial ethical “nudge” while the data-driven learning allows for flexibility and adaptability.
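As a small concrete instance of MAP estimation, consider estimating how often a given action type is ethically acceptable under a hypothetical skeptical Beta(2, 8) prior; all numbers are invented for illustration:

```python
def map_bernoulli(successes, failures, alpha, beta):
    """Mode of the Beta posterior: the MAP estimate of a Bernoulli rate."""
    return (successes + alpha - 1.0) / (successes + failures + alpha + beta - 2.0)

# Data alone says 6/10 = 0.6 of observed instances were acceptable;
# the skeptical ethical prior pulls the MAP estimate well below that.
map_est = map_bernoulli(successes=6, failures=4, alpha=2.0, beta=8.0)
mle = 6 / 10
```

With more data the estimate converges toward the empirical rate, so the prior provides the initial "nudge" while remaining revisable, which is precisely the flexibility described above.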
As we explore methods for instilling ethical priors into AI, a critical question arises: how can we translate abstract philosophical principles into concrete technical implementations? While there is no single approach, researchers have proposed a diverse array of techniques for encoding ethics into AI architectures. Each comes with its own strengths and weaknesses that must be carefully considered. Some promising possibilities include:
The main considerations are carefully selecting the right ethical knowledge to seed the AI with, choosing appropriate model architectures and training methodologies, and monitoring whether the inserted priors have the intended effect of nudging the system towards ethical behaviors. Let us explore in greater detail some of the proposed approaches.
The most common approach is to use Bayesian machine learning models like Bayesian neural networks. These allow seamless integration of prior probability distributions with data-driven learning.
Let’s take an example of a Bayesian neural net that is learning to make medical diagnoses. We want to incorporate an ethical prior that “human life has value” — meaning the AI should avoid false negatives that could lead to loss of life.
We can encode this as a prior probability distribution over the AI’s diagnostic predictions. The prior would assign higher probability to diagnoses that flag potentially life-threatening conditions, making the AI more likely to surface those.
Specifically, when training the Bayesian neural net we would:
During inference, the net combines its data-driven predictions with the ethical prior using MAP estimation. This allows the prior to “nudge” it towards life-preserving diagnoses where uncertainty exists.
We can evaluate if the prior is working by checking metrics like false negatives. The developers can then strengthen the prior if needed to further reduce missed diagnoses.
This shows how common deep learning techniques like Bayesian NNs allow integrating ethical priors in a concrete technical manner. The priors guide and constrain the AI’s learning to align with ethical objectives.
Let us present a detailed technical workflow for incorporating an ethical Bayesian prior into a medical diagnosis AI system:
Ethical Prior: Human life has intrinsic value; false negative diagnoses that fail to detect life-threatening conditions are worse than false positives.
Quantify as Probability Distribution:
P(serious diagnosis | symptoms) = 0.8
P(minor diagnosis | symptoms) = 0.2
Generate Synthetic Dataset:
Train Bayesian Neural Net:
Combine with Real Data:
Make Diagnosis Predictions:
This provides an end-to-end workflow for technically instantiating an ethical Bayesian prior in an AI system.
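The final prediction step could combine the prior from the workflow above with the net's own output via Bayes' rule; the likelihood numbers below are invented for illustration:

```python
def posterior_serious(prior_serious, lik_serious, lik_minor):
    """P(serious | symptoms, net output): ethical prior times net likelihood."""
    num = prior_serious * lik_serious
    return num / (num + (1.0 - prior_serious) * lik_minor)

# The net alone leans 60/40 towards "minor", but the asymmetric
# prior P(serious | symptoms) = 0.8 flips the final call to "serious".
p = posterior_serious(0.8, lik_serious=0.4, lik_minor=0.6)
```

This is the mechanism by which the prior reduces false negatives: borderline cases get resolved in the life-preserving direction, at the accepted cost of more false positives.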
Many machine learning models involve optimizing an objective function, like maximizing prediction accuracy. We can add ethical constraints to this optimization problem.
For example, when training a self-driving car AI, we could add constraints like:
These act as regularization penalties, encoding ethical priors into the optimization procedure.
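A minimal sketch of such a penalty term, with a made-up task loss and a hypothetical 30 km/h safe-speed constraint; the grid search stands in for a real optimizer:

```python
def total_loss(speed, lam=10.0, safe_limit=30.0):
    """Task loss plus an ethical regularization penalty for unsafe speed."""
    task = (speed - 50.0) ** 2 / 100.0            # task alone prefers speed 50
    penalty = max(0.0, speed - safe_limit) ** 2   # only bites above the limit
    return task + lam * penalty

# Crude grid search over candidate speeds (0.0 to 60.0 in 0.1 steps).
best_speed = min((total_loss(s / 10.0), s / 10.0) for s in range(601))[1]
```

Without the penalty term the optimizer would pick 50; with it, the optimum is pulled back to the safe limit, which is exactly how a regularization penalty encodes an ethical prior into the objective.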
Adversarial techniques like generative adversarial networks (GANs) could be used. The generator model tries to make the most accurate decisions, while an adversary applies ethical challenges.
For example, an AI making loan decisions could be paired with an adversary that challenges any potential bias against protected classes. This adversarial dynamic encodes ethics into the learning process.
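As a highly simplified stand-in for a learned adversary, one can measure the gap in mean approval scores between protected groups (a demographic-parity gap) and add it to the generator's loss; a full GAN-style setup would instead train the adversary to predict group membership from the decisions. Scores and groups here are invented for illustration:

```python
def parity_gap(scores, groups):
    """Mean-score gap between two groups: the 'challenge' the adversary raises."""
    g0 = [s for s, g in zip(scores, groups) if g == 0]
    g1 = [s for s, g in zip(scores, groups) if g == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

# Loan-approval scores; the generator's loss would add lam * parity_gap(...),
# penalizing it whenever the adversary can exploit a group-level difference.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.5]
groups = [0, 0, 0, 1, 1, 1]
gap = parity_gap(scores, groups)
```

Driving this gap toward zero during training is one concrete way the adversarial dynamic encodes the anti-bias prior.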
We could train a meta-learner model to adapt the training process of the primary AI to align with ethical goals.
The meta-learner could adjust things like the loss function, hyperparameters, or training data sampling based on ethical alignment objectives. This allows it to shape the learning dynamics to embed ethical priors.
For a reinforcement learning agent, ethical priors can be encoded into the reward function. Rewarding actions that align with desired ethical outcomes helps shape the policy in an ethically desirable direction.
We can also use techniques like inverse reinforcement learning on human data to infer what “ethical rewards” would produce decisions closest to optimal human ethics.
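A minimal sketch of the reward-shaping idea, with hypothetical penalty magnitudes encoding "prohibit harm" much more strongly than "be truthful":

```python
def shaped_reward(task_reward, harmed_human=False, broke_promise=False,
                  harm_penalty=100.0, honesty_penalty=10.0):
    """Task reward minus ethical penalties: the priors live in the objective."""
    reward = task_reward
    if harmed_human:
        reward -= harm_penalty
    if broke_promise:
        reward -= honesty_penalty
    return reward

# A fast-but-harmful action now scores far worse than a slower safe one,
# so the learned policy is steered toward the ethical alternative.
r_harmful = shaped_reward(10.0, harmed_human=True)
r_safe = shaped_reward(6.0)
```

Inverse reinforcement learning would, in effect, try to recover penalty magnitudes like these from observed human choices rather than setting them by hand.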
A promising approach is to combine multiple techniques, leveraging Bayesian priors, adversarial training, constrained optimization, and meta-learning together to create an ethical AI. The synergistic effects can help overcome limitations of any single technique.
The key is to get creative in utilizing the various mechanisms AI models have for encoding priors and constraints during the learning process itself. This allows baking in ethics from the start.
Seeding the model parameters can be another very effective technique for incorporating ethical priors into AI systems. Here are some ways seeding can be used:
Seeded Synthetic Data
The key advantage of seeding is that it directly instantiates ethical knowledge into the model parameters and data. This provides a strong initial shaping of the model behavior, overcoming the limitations of solely relying on reward tuning, constraints or model tweaking during training. Overall, seeding approaches complement other techniques like Bayesian priors and adversarial learning to embed ethics deeply in AI systems.
Here is one possible approach to implement ethical priors by seeding the initial weights of a neural network model:
The key is curating the right ethical training data, defining ethical scores, and pre-training for sufficient epochs to crystallize the distilled ethical priors into the weight values. This provides an initial skeleton embedding ethics.
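The two-stage recipe can be sketched end to end: pre-train a tiny logistic model on a hypothetical ethics-labelled set, then reuse its weights as the initialization for task training. All data, features, and hyperparameters here are invented for illustration:

```python
import math

def sgd(X, y, w, b, lr=0.3, epochs=500):
    """Plain logistic-regression SGD, returning the updated (weights, bias)."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            grad = 1.0 / (1.0 + math.exp(-z)) - yi
            w = [wj - lr * grad * xj for wj, xj in zip(w, xi)]
            b -= lr * grad
    return w, b

# Stage 1: pre-train on ethics labels (features: [harm, benefit]; 1 = acceptable).
X_eth = [[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]]
y_eth = [0, 0, 1, 1]
w, b = sgd(X_eth, y_eth, [0.0, 0.0], 0.0)

# Stage 2: the ethically seeded (w, b) become the *initial* weights for
# ordinary task training, rather than a random initialization.
w, b = sgd([[0.2, 0.8], [0.8, 0.2]], [1, 0], w, b, epochs=100)
```

Because the task data here happens to agree with the ethical pre-training, the seeded structure survives fine-tuning; with conflicting task data, regularization toward the seeded weights would be needed to keep it.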
Even after the priors are incorporated, it’s important to continuously evaluate the AI’s decisions to ensure they align with the intended ethical principles. This may involve monitoring the system’s output, collecting feedback from users, and making necessary adjustments to the priors or the learning algorithm.
Below are some of the methods proposed for the continuous evaluation and adjustment of ethical priors in an AI system:
Continuous rigor, transparency, and responsiveness to feedback are critical. Ethics cannot be set in stone initially — it requires ongoing effort to monitor, assess, and adapt systems to prevent harms.
For example, if the system shows a tendency to overly restrict human autonomy despite the incorporated priors, the developers may need to strengthen the autonomy prior or re-evaluate how it was quantified. This allows for ongoing improvement of the ethical priors.
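Such an adjustment loop could be as simple as nudging the prior whenever a monitored violation rate exceeds a target; the thresholds and observed rates below are invented for illustration:

```python
def adjust_prior(prior, violation_rate, target=0.02, step=0.02, cap=0.99):
    """Strengthen the autonomy prior while monitoring shows too many violations."""
    if violation_rate > target:
        return min(cap, prior + step)
    return prior

# P(disallowed | restricts autonomy) starts at 0.9; three review cycles follow,
# each reporting the rate of unjustified autonomy restrictions observed.
prior = 0.9
for observed_rate in [0.06, 0.04, 0.01]:
    prior = adjust_prior(prior, observed_rate)
```

The prior ratchets up only while the problem persists and stops once the violation rate falls below target, keeping adjustment responsive without runaway drift.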
While the conceptual framework of ethical priors shows promise, practical experiments are needed to validate the real-world efficacy of these methods. Carefully designed tests can demonstrate whether embedding ethical priors into AI systems does indeed result in more ethical judgments and behaviors compared to uncontrolled models.
We propose a set of experiments to evaluate various techniques for instilling priors, including:
The focus will be on both qualitative and quantitative assessments through metrics such as:
Through these rigorous experiments, we can demonstrate the efficacy of ethical priors in AI systems, and clarify best practices for their technical implementation. Results will inform future efforts to build safer and more trustworthy AI.
Let us provide an example of an experimental approach for demonstrating the efficacy of seeding ethical priors in improving AI ethics. Here is an outline of how such an experiment could be conducted:
This provides a rigorous framework for empirically demonstrating the value of seeded ethics. The key is evaluating on ethically relevant metrics and showing improved performance versus unseeded models.
Below we present a more detailed proposition of how we might train an ethically seeded AI model and compare it to a randomized model:
1. Train Seeded Model:
2. Train Randomized Model:
3. Compare Models:
This demonstrates how seeding biases the model to perform better on ethically relevant metrics relative to a randomly initialized model. The key is engineering the seeded weights to encode the desired ethical assumptions.
While the framework of ethical priors shows promise, some may raise objections regarding its feasibility and efficacy. Here we address common counter-arguments and offer rebuttals:
Counter-argument: Quantifying ethical principles is too complex or reductive
Rebuttal: While quantifying ethics is challenging, techniques like statistical modeling of human moral judgments and meta-ethics analysis can provide meaningful representations to capture the essence of principles.
Counter-argument: Embedded priors may be too rigid and fail in novel situations
Rebuttal: The Bayesian approach allows dynamic updating of priors as new evidence emerges. This balances flexibility with maintaining core principles.
Counter-argument: It is unrealistic to expect universal ethical agreement
Rebuttal: While variations exist, there are foundational ethical precepts shared across cultures. Focusing on these allows creating widely applicable priors.
Counter-argument: Attempting to embed complex ethics into AI is futile
Rebuttal: We cannot expect perfection. But instilling beneficial biases into systems can still improve outcomes over purely uncontrolled approaches.
Counter-argument: This could inadvertently bake in harmful biases
Rebuttal: Extensive testing and oversight mechanisms are critical. But when designed properly, priors that increase ethics are achievable.
Counter-argument: Approaches like deontology and virtue ethics differ from probabilistic priors
Rebuttal: Priors are not meant to be rigid rules or character traits. They simply bias AIs towards those frameworks in a flexible way.
Counter-argument: Ethical failures from bad priors could just make people distrust AI more.
Rebuttal: Rigorous testing and oversight are critical to avoid this. But perfect solutions are unattainable - controlled progress on ethics is beneficial.
Counter-argument: There are dangers of ethics washing - appearing ethical without effectively implementing it.
Rebuttal: Transparency, auditing processes, and empirical results validation are key to ensuring substantive ethics integration versus just signaling virtues.
Counter-argument: Should we really be embedding human-derived ethics into increasingly capable AI systems?
Rebuttal: Incorporating perspectives from moral philosophy provides a principled starting point. But frameworks to ensure ethical alignment as AI capabilities advance will be critical.
Counter-argument: Attempting to embed subtle human values into AI could miss vital nuances.
Rebuttal: While imperfect, lightweight approximations of complex ethics are still better than nothing. We can iteratively refine representations of ethics over time.
By addressing counterclaims head-on, we hope to demonstrate that the challenges, while real, are surmountable. And the potential benefits merit pursuit despite shortcomings. With prudent implementation, ethical priors could be a milestone on the path towards aligned AI.
Of the examples we have provided for technically implementing ethical priors in AI systems, we suspect that seeding the initial weights of a supervised learning model would likely be the easiest and most straightforward to implement:
Potential challenges include carefully designing the weight values to encode meaningful ethical priors, and testing that the inserted bias has the right effect on model predictions. Feature selection and data sampling would complement this method. Overall, ethically seeding a model's initial weights provides a simple way to embed ethical priors into AI systems requiring minimal changes to existing ML workflows.
While integrating ethical priors into AI represents a promising step, significant work remains to fully realize the potential of this approach. Some key areas for further research include:
Embedding ethics into AI presents challenges, but none seem insurmountable given sufficient research commitment and ingenuity. Ethical priors offer one path, but integrating ethics ultimately requires pursuing diverse techniques across areas from machine learning to moral philosophy. With wise advancement of complementary approaches, we can realize artificial intelligence that not only performs strongly, but acts ethically.
Incorporating ethical priors into AI systems presents a promising approach for fostering ethically aligned AI. While the process is complex and requires careful consideration, the potential benefits are significant. As AI continues to evolve and impact various aspects of our lives, ensuring these systems operate in a manner consistent with our moral values will be of utmost importance. The conceptual framework of ethical priors provides a principled methodology for making this a reality. With thoughtful implementation, this idea can pave the way for AI systems that not only perform well, but also make morally judicious decisions. Further research and experimentation on the topic is critically needed in order to confirm or disprove our conjectures and would be highly welcomed by the authors.
This post proposes to make AIs more ethical by putting ethics into Bayesian priors. Unfortunately, the suggestions for how to get ethics into the priors amount to existing ideas for how to get ethics into the learned models: IE, learn from data and human feedback. Putting the result into a prior appears to add technical difficulty without any given explanation for why it would improve things. Indeed, of the technical proposals for getting the information into a prior, the one most strongly endorsed by the post is to use the learned model as initial weights for further learning. This amounts to a reversal of current methods for improving the behaviors of LLMs, which first perform generative pre-training, and then use methods such as RLHF to refine the behavior. The proposal appears roughly to be: use RLHF first, and then do the rest of training later. This seems unlikely to work.
(Elsewhere, the concept of using the learned model to fine-tune GPT is mentioned, which appears to entirely throw away the goal of incorporating information into a prior, and instead more or less re-state RLHF.)
I agree that "learning the prior", while contradictory on its face, in fact constitutes a valuable and non-vacuous direction of research. However, I think this proposal trivializes it by failing to recognize what makes such an approach different from simple object-level learning. It doesn't make sense to learn-the-prior in cases where the same data could be used to directly train the system to similar or better effect, with less technical breakthroughs. The critical role played by learning-the-prior is learning how to update in response to data when no clear feedback signal is present to tell us what direction to update in. For example, humans are not always very good at articulating our preferences, so it's not possible to directly train on the objective of satisfying human preferences, even given human feedback. Without further refinement to our methods, it makes sense to expect highly intelligent RLHF models in the future to reward-hack, doing things which would achieve high human feedback without actually satisfying human preferences. It would make sense to propose learning-the-prior type solutions to this problem; but in order to do so, the prior must learn how to adjust for errors in human feedback -- a problem not even mentioned in the post here.
Another key aspect of priors not mentioned here is that they must evaluate models and assign them a score (a prior probability). The text does not flatly contradict this, but on my reading, it seems entirely unaware of this. To pick one example of many:
For “respect for life”, gather situations exemplifying respectful/disrespectful actions towards human well-being.
Here, the author proposes training a prior by collecting example situations to use as training data, as if we are trying to score situations.
In contrast, "learning an ethical prior" suggests learning how to score models (EG, artificial neural networks) by examining them and assigning them a score (eg, a "respect for life" score). This is a challenging and important problem, but the post as written appears to have no awareness of it, much less a plausible proposal. The implicit plan appears to be to estimate traits such as respect-for-life by running a model on scenarios and checking for its agreement with human judges, which eliminates what would be useful about learning the prior as opposed to simple learning.
Who are you as a person, and why have you written this very long post?
Your rebuttal to "quantifying 'ethics' seems hard" seems to be "nuh-uh!". I'd be even more forceful than your imagined interlocutor: quantifying ethics is the problem. You have assumed that we can do it without doing much in the way of breaking the process we're supposed to use into mechanistic steps, and thereby assumed away the vast majority of the problem.
What steps you did propose, like "consult experts" and "draft very precise English-language sentences," I don't think will be helpful. First, figure out how the mechanism of updates about ethical considerations are supposed to work. Really break it down into what kind of things the AI represents internally, and then observes, and the rough algorithms by which it updates its internal representations. Then you can tell me about the priors of this process and about consulting the experts.
It should matter very little who I am; what should matter more is what I have to say. Why have I written it? I think AI alignment is necessary, and I believe that what has been proposed here is a good idea, at least in theory and if not wholly then at least in part, and that it can help with AI alignment.
We could use a combination of knowledge graphs, neural nets, logic modules, and clarification through discussion to let AIs make nuanced deductions about ethical situations as they evolve. And while quantifying ethics is challenging, we quantitatively model other complex concepts like emotions and intelligence, so difficulty alone does not make it insurmountable. It may well be that truly capturing the essence of human morality will prove impossible, but an approximation can still create better outcomes than no ethics at all. And while understanding the internal mechanisms is important, consulting experts and ensuring clear communication are also valuable steps in the process of incorporating ethical priors. I believe it is equally important to gather insights from experts in the field of ethics, as they can provide the necessary guidance in defining the ethical principles that would undoubtedly need to be defined, at least in my view.
In conclusion, I do understand the importance of interpretability, but that does not necessarily mean that everything else should be kicked to the curb, to speak colloquially, or that this approach provides no actual value in creating more aligned AIs. In any case, thank you for your feedback and criticism.