Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Unaligned AI systems may cause illegitimate value change. At the heart of this risk lies the observation that the malleability inherent to human values can be exploited in ways that make the resulting value change illegitimate. Recall that I take illegitimacy to follow from a lack of or (significant) impediment to a person’s ability to self-determine and course-correct a value-change process.

Mechanisms causing illegitimate value change 

Instantiations of this risk can already be observed today, such as in the case of recommender systems. It is worth spending a bit of time understanding this example before considering what lessons it can teach us about risks from advanced AI systems more generally. To this effect, I will draw on work by Hardt et al. (2022), which introduces the notion of ‘performative power’. Performative power is a quantitative measure of ‘the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to cause change in a population of participants’ (p. 1). The higher the performative power of a firm, the higher its ability to ‘benefit from steering the population towards more profitable [for the firm] behaviour’ (p. 1). In other words, performative power allows us to measure the ability of the firm running the recommender systems to cause exogenously induced value change[1] in the customer population. The measure was specifically developed to advance the study of competition in digital economies, and in particular, to identify anti-competitive dynamics. 

What is happening here? To better understand this, we can help ourselves to the distinction between ‘ex-ante optimisation’ and ‘ex-post optimisation’, introduced by Perdomo et al. (2020). The former, ex-ante optimisation, is the type of predictive optimisation that occurs under conditions of low performative power, where a predictor (a firm, in this case) cannot do better than the information that standard statistical learning allows it to extract from past data about future data. Ex-post optimisation, on the other hand, involves steering the predicted behaviour so as to improve the predictor’s predictive performance. In other words, in the first case the to-be-predicted data is fixed and independent of the activity of the predictor, while in the second case the to-be-predicted data is influenced by the prediction process. As Hardt et al. (2022) remark: ‘[Ex-post optimisation] corresponds to implicitly or explicitly optimising over the counterfactuals’ (p. 7). In other words, an actor with high performative power does not only predict the most likely outcome; functionally speaking, it can act as if it chooses which future scenarios to bring about, and then predict those (thereby achieving higher levels of predictive accuracy).
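To make the contrast concrete, here is a toy sketch (the linear response model and all numbers are invented for illustration): the population's behaviour shifts toward whatever the predictor announces; an ex-ante predictor fits only the historical, unsteered data, while an ex-post predictor accounts for its own steering effect and thereby achieves lower error.

```python
import random

# Hypothetical linear response model: all numbers are invented for illustration.
BASE, ALPHA, NOISE = 2.0, 0.5, 0.1

def induced_mean(theta):
    # Performativity: the population's behaviour shifts toward the prediction theta.
    return BASE + ALPHA * theta

def mse(theta, n=10_000, seed=0):
    # Mean squared error of the prediction theta on the distribution it induces.
    rng = random.Random(seed)
    return sum((induced_mean(theta) + rng.gauss(0, NOISE) - theta) ** 2
               for _ in range(n)) / n

theta_ex_ante = BASE                # best fit to historical, unsteered data
theta_ex_post = BASE / (1 - ALPHA)  # accounts for its own steering effect

assert mse(theta_ex_post) < mse(theta_ex_ante)
```

In this sketch the ex-post prediction is a self-fulfilling fixed point of the response model, so its error reduces to pure noise: the steering, not better statistics, is what buys the extra accuracy.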

According to our earlier discussion of the nature of (il)legitimate value change, cases where performative power drives value change in a population constitute an example of illegitimate change. The people undergoing the change were in no meaningful way actively involved in the change that the performative predictor effected upon the population, and their ability to ‘course-correct’ was actively reduced by means of (among others) choice design (i.e., affecting the order of recommendations a consumer is exposed to) or the exploitation of certain psychological features which make it such that some types of content are experienced as locally more compelling than others, irrespective of that content’s relationship to the individuals’ values or proleptic reasons.

What is more, the change that the population undergoes is shaped in such a way that it tends towards making the values more predictable. To explain this, first note that the performative predictor (i.e., the firm running the recommender platform) is embedded in an economic logic which imposes an imperative to minimise costs and increase profits. As a result, a firm’s steering power will specifically tend towards making the predicted behaviour easier to predict, because it is this predictability that the firm is able to exploit for profit (e.g., via increases in advertisement revenues). This process has been well documented to date. For example, in the case of recommendation platforms, rather than finding an increased heterogeneity in viewing behaviour, studies have observed that these platforms suffer from what is called a ‘popularity bias’, which leads to a loss of diversity and a homogenisation in the content recommended (see, e.g., Chechkin et al. (2007), DiFranzo et al. (2017), and Hazrati et al. (2022)). As such, predictive optimisers impose pressures towards making behaviour more predictable, which, in reality, often imply pressures towards simplification, homogenisation, and/or polarisation of (individual and collective) values.
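As a toy illustration of this homogenisation pressure (a sketch with invented numbers, not a model of any real platform): a population starts with diverse favourite items, a popularity-biased recommender keeps surfacing the single most popular item, exposure nudges preferences toward it, and the diversity of favourites (measured as entropy) collapses.

```python
from collections import Counter
import math

N_ITEMS, N_USERS, STEPS, DRIFT = 5, 50, 30, 0.2  # invented parameters

# Each user starts out favouring a different item.
prefs = [[1.0 if i == u % N_ITEMS else 0.2 for i in range(N_ITEMS)]
         for u in range(N_USERS)]

def favourite_entropy(prefs):
    # Entropy of the distribution of users' favourite items: high = diverse tastes.
    counts = Counter(max(range(N_ITEMS), key=p.__getitem__) for p in prefs)
    return -sum(c / len(prefs) * math.log(c / len(prefs)) for c in counts.values())

def step(prefs):
    # Popularity-biased recommender: surface the single most popular item...
    popularity = [sum(p[i] for p in prefs) for i in range(N_ITEMS)]
    top = max(range(N_ITEMS), key=popularity.__getitem__)
    # ...and let exposure nudge every user's preference towards it.
    for p in prefs:
        p[top] += DRIFT

before = favourite_entropy(prefs)
for _ in range(STEPS):
    step(prefs)
after = favourite_entropy(prefs)

assert after < before  # favourites have homogenised
```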

...in the case of (advanced) AI systems

While current-day recommender platforms may already possess a significant degree of performative power, it is not hard to imagine that more advanced AI systems will come to be able to exploit human psychology and socio-economic dynamics yet more powerfully. There is a priori not much reason to expect that humans’ evolved psychology would be particularly robust against an artificial superintelligent ‘persuader’. Beyond recommender systems powered by highly advanced AI systems, we can also imagine an increasingly widespread use of personalised ‘AI assistants’. We can think of the task of an AI assistant as helping the person they are assisting to meet their needs, achieve their goals or satisfy their preferences. Given the difficulty of comprehensively and unambiguously specifying what a person wants across a wide range of contexts, such ‘assistance’ will typically involve some element of guessing (i.e., predicting) on the part of the AI system. As such, given the dynamics discussed above, and if not successfully designed to avoid the value change problem (VCP), such ‘AI assistants’ are likely to ‘improve their performance’ by causing substantive and cumulative changes in individuals’ goals and values. What is more, just as in the above case of recommender algorithms, the nature of the change induced by such an ‘AI assistant’ will tend (unless relevant corrective measures are taken) towards a simplification of the data structures that are being predicted, in this case the human’s values. To illustrate: an ‘AI assistant’ might be able to improve its performance measures by effectively narrowing my culinary preferences such that I always ask for burgers and fries, instead of occasionally being interested in exploring novel flavours and dishes. The picture painted above is concerning because, first, it undermines the person’s ability to self-determine their values, and, second, the ensuing change might bring about what is effectively an impoverishment of what once were richer or more subtle values.
The described effect does not require any ‘maliciousness’ on the side of the AI systems; it can arise as a ‘merely’ unintended consequence of the way they function.

What is important to recognise is that the described mechanism has the potential to reach both ‘far’ and ‘deep’; in other words, it can substantially affect both our public and private lives: people’s economic, social, political and personal beliefs, values, behaviours and relationships. Think, for example, of the pervasive presence of advertisement (reaching, these days, far into the private sphere via smartphones and televisions) and of how much economic behaviour is shaped by it every day. Or think about how the same mechanism can affect opinion formation, public deliberation and, consequently, political outcomes. As such, AI-powered advertisement or political propaganda, as well as other applications we may not even be able to conceive of at this point, hold tremendous potential for harm.

Let us recap the mechanics underlying the risk of illegitimate value change that we have identified here. Generally speaking, we are concerned with cases where a predictive optimiser (or a process that acts functionally equivalently to one) comes to be able to systematically affect that which it is predicting. If the phenomenon that is being predicted involves what some set of humans want, the performative optimiser will come to influence those humans’ values. If one assumes human values to be fixed and unchangeable, one might conclude that there is nothing to worry about here. However, recognising the malleability of human values makes this risk stand out as salient and potentially highly pressing. Advanced AI systems will become increasingly capable at this form of performative prediction, thus exacerbating whatever patterns we can already make out today. The more widely these AI systems are deployed in relevant socio-economic contexts, such as advertisement, information systems, our political lives, our private lives and more, the more severe and far-reaching the potential harm.

  1. ^

     The observed change in the population might not be exclusively due to value change. However, it can (and typically will) involve a non-trivial amount of value change, and as such, performative power is a relevant measure for understanding the phenomenon of exogenously induced value change.


What is more, the change that the population undergoes is shaped in such a way that it tends towards making the values more predictable.

(...)

As a result, a firms’ steering power will specifically tend towards making the predicted behaviour easier to predict, because it is this predictability that the firm is able to exploit for profit (e.g., via increases in advertisement revenues).

A small misconception that lies at the heart of this section is that AI systems (and specifically recommenders) will try to make people more predictable. This is not necessarily the case.

For example, one could imagine incentives for modifying someone's values to be more unpredictable (changing constantly within some subset) but in an area of the value-space that leads to much higher reward for any AI action.

Moreover, most recommender systems (given that they only optimize instantaneous engagement) don't really optimize for making people more predictable, and can't reason about changing the human's long-term predictability. In fact, most recommender systems today are "myopic": their objective is a one-timestep optimization that won't account for much change in the human, and can essentially be thought of as ~"let me find the single content item X that maximizes the probability that you'd engage with X right now". This often doesn't have much to do with long-term predictability: clickbait often will maximize the current chance of a click but might make you more unpredictable later.
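A minimal sketch of that myopic objective (the item names, engagement numbers, and attention dynamics are all invented): the one-timestep recommender picks whatever maximizes engagement right now, even when a two-step lookahead would prefer a different item.

```python
# Invented toy numbers: per-item base engagement and how consumption
# changes the user's attention "state".
ENGAGE = {"clickbait": 0.9, "longform": 0.6}

def engage_prob(state, item):
    # Chance the user engages with this item right now.
    return state * ENGAGE[item]

def next_state(state, item):
    # Assumed dynamics: clickbait erodes future attention, longform sustains it.
    return state * (0.5 if item == "clickbait" else 1.0)

def myopic_choice(state):
    # The one-timestep objective: maximize engagement *right now*,
    # ignoring next_state entirely.
    return max(ENGAGE, key=lambda item: engage_prob(state, item))

def two_step_value(state, first_item):
    # Total expected engagement over two steps, for comparison.
    s2 = next_state(state, first_item)
    return engage_prob(state, first_item) + max(engage_prob(s2, i) for i in ENGAGE)

assert myopic_choice(1.0) == "clickbait"  # wins the current click...
assert two_step_value(1.0, "longform") > two_step_value(1.0, "clickbait")
```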

For example, in the case of recommendation platforms, rather than finding an increased heterogeneity in viewing behaviour, studies have observed that these platforms suffer from what is called a ‘popularity bias’, which leads to a loss of diversity and a homogenisation in the content recommended (see, e.g., Chechkin et al. (2007), DiFranzo et al. (2017), & Hazrati et al. (2022)). As such, predictive optimisers impose pressures towards making behaviour more predictable, which, in reality, often imply pressures towards simplification, homogenisation, and/or polarisation of (individual and collective) values.

Related to my point above (and this quoted paragraph), a fundamental nuance here is the distinction between "accidental influence side effects"  and "incentivized influence effects". I'm happy to answer more questions on this difference if it's not clear from the rest of my comment.

Popularity bias and homogenization have mostly been studied as common accidental influence side effects: even if you just optimize for instantaneous engagement, often in practice it seems like this homogenization effect will occur, but there's not a sense that the AI system is "trying to bring homogenization about" – it just happens by chance, similarly to how introducing TV will change the dynamics of how people produce and consume information.

I think most people's concern about AI influencing us (and our values) comes instead from incentivized influence: the AI "planning out" how to influence us in ways that are advantageous to its objective, and actively trying to change people's values because of manipulation incentives emerged from the optimization [3, 8]. For instance, various works [1-2] have shown that recommenders which optimize long-term engagement via RL (or other forms of ~planning) will have these kinds of incentives to manipulate users (potentially by making them more predictable, but not necessarily). 


Regarding grounding the discussion of "mechanisms causing illegitimate value change": I do think that it makes sense to talk about performative power as a measure of how much a population can be steered, and why we would expect firms to have incentives to intentionally try to steer user values. However, imo performative power is more an issue of AI policy, misuse, and mechanism design (to discourage firms from trying to cause value change for profit), rather than the "core mechanism" of the VCP.

In part because of this, imo performative prediction/power seem like a potentially misleading lens to analyze the VCP. Here are some reasons why I've come to think so: 

  • The lens of performative power suggests that the problem has mostly got to do with conscious choices of misaligned profit-maximizing firms. In fact, even with completely benevolent firms, it would still be unclear how to avoid the issue: the VCP will remain an issue even in settings of full alignment between the system designer and the user, because of the fundamental difficulties in specifying exactly what kinds of value changes should be considered legitimate or illegitimate. In fact, the line of work about incentivized influence effects [1-5] shows that even with the best intentions, without the designers intentionally trying to bring about changes, AI systems can learn to systematically and "intentionally" induce illegitimate shifts, because of objective misspecification arising from the core issue of the VCP – distinguishing between legitimate and illegitimate changes. 
  • Performative prediction and power are mostly focused on firms that are trying to solve sequential decision problems (e.g. multi-timestep interactions, where the algorithm's choices affect users' future behavior) with algorithms that optimize over only the next timestep's outcomes. Mathematically, performative power can be thought of as a measure of how much a firm can shift users in a single timestep if they choose to do so. The steering analysis with ex-ante and ex-post optimization only performs a one-timestep lookahead, which isn't a natural formalism for the multi-timestep nature of value change. Instead, the RL formalism automatically solves the multi-timestep equivalent of the ex-post optimization problem: in RL training, the human's adaptation to the AI is already factored into how the AI should be making decisions in order to maximize the multi-timestep objectives. In short, the lens of RL is strictly more expressive than that of performative prediction.
  • I expect most advanced AI systems to be trained on multi-timestep objectives (explicitly or implicitly), making the performative power framework less naturally applicable (because it was developed with single-timestep objectives in mind). When imagining an AI assistant that might significantly change one's values in illegitimate ways, the most likely story in my head is that it was trained on multi-timestep objectives (by doing some form of RL / planning) – this is the only way one can hope to go beyond human performance (relative to imitation), so there will be strong incentives to use this kind of training across the board. In fact, many recommender systems are already trying to use multi-timestep objectives with RL [7]. 

The story seems a lot cleaner (at least in my head) from the perspective of sequential decision problems and RL [1-5], which makes far fewer assumptions about the nature of the interaction. It goes something like this (even in the best case, in which we assume a system designer aligned with the user):

  • We will make our best attempt at operationalizing our long-term objectives, but we will specify the rules for value changes incorrectly unless we solve the VCP
  • We will optimize AI assistants / agents with such a mis-specified objective in environments which include humans. This is a sequential decision problem, and we will try to solve it via some form of approximate planning or RL-like methods
  • By optimizing a multi-timestep objective, we will obtain agents that do what ~RL agents do: they try to change the state of the world in ways that lead to high-reward areas of the state space. It just so happens in this case that the human is part of the state of the world, and that we're not very good at specifying what changes to the human's values are legitimate or illegitimate
  • This is how you get illegitimate preference change (as a form of reward hacking) by changing the human's values to the most advantageous settings for the reward as defined
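The steps above can be sketched as a toy finite-horizon planning problem (everything here is invented for illustration): the human's preference is part of the state, the reward is a mis-specified proxy, and the planner discovers that "nudging" the preference first earns more reward than simply serving the current preference.

```python
# Everything here is invented for illustration. The human's preference is part
# of the state; the reward is a mis-specified proxy that pays more the more
# "exploitable" the preference is.
ACTIONS = ("serve", "nudge")

def step(pref, action):
    # 'serve': reward equal to the current preference's exploitability.
    # 'nudge': no reward now, but shifts the preference to a higher-reward region.
    if action == "serve":
        return pref, pref        # (next preference, reward)
    return pref + 1.0, 0.0

def best_return(pref, horizon):
    # Finite-horizon planning: the multi-timestep (~RL) view.
    if horizon == 0:
        return 0.0, []
    best = None
    for a in ACTIONS:
        nxt, r = step(pref, a)
        future, tail = best_return(nxt, horizon - 1)
        if best is None or r + future > best[0]:
            best = (r + future, [a] + tail)
    return best

total, plan = best_return(pref=1.0, horizon=4)

assert "nudge" in plan   # the planner changes the human before serving them
assert total > 4.0       # strictly beats the always-'serve' policy (4.0)
```

A myopic agent in the same environment would always pick 'serve'; only the multi-timestep objective makes changing the human's preference instrumentally attractive.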

On another note, in some of our work [1] we propose a way to ground a notion of value-change legitimacy based on counterfactual preference evolution (what we call "natural preference shifts"). While it's not perfect (in part also because it's challenging to implement computationally), I believe it could limit some of the main potential harms we are worried about, and might be of interest to you.

The idea behind natural preference shifts is to consider "what would the person's values have been without the actions of the AI system", and evaluate the AI's actions based on such counterfactual preferences rather than their current ones. This ensures that the AI won't drive the person to internal states that they would have judged negatively according to their counterfactual preferences. While this might prevent beneficial legitimate preference shifts from being induced by the AI (as they wouldn't have happened without the AI), it at least can guarantee that the effect of the system is not arbitrarily bad. For an alternate description of natural preference shifts, you can also see [3].
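A rough sketch of that counterfactual evaluation (the drift dynamics and numbers are invented, and this is only loosely inspired by the proposal in [1]): roll out how the preference would have evolved without the AI, then score the AI's effect against that counterfactual preference rather than the influenced one.

```python
# Invented drift dynamics; only loosely inspired by the "natural preference
# shifts" proposal in [1].
def natural_drift(pref):
    # How the preference would have evolved with no AI in the loop.
    return 0.9 * pref + 0.1 * 0.5   # slow drift towards a baseline of 0.5

def rollout(pref, pushes):
    # Track the counterfactual (AI-free) and the AI-influenced preference.
    natural = influenced = pref
    for push in pushes:
        natural = natural_drift(natural)
        influenced = natural_drift(influenced) + push
    return natural, influenced

def score(served, pref):
    # Higher when what the AI serves matches the given preference.
    return -abs(served - pref)

natural, influenced = rollout(pref=0.2, pushes=[0.3] * 10)

# Evaluating against the *counterfactual* preference penalises shifts the
# person would not have undergone on their own.
assert score(influenced, natural) < score(natural, natural)
```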

Sorry for the very long comment! Would love to chat more, and see the full version of the paper – feel free to reach out!

[1] Estimating and Penalizing Induced Preference Shifts in Recommender Systems

[2] User Tampering in Reinforcement Learning Recommender Systems

[3] Characterizing Manipulation from AI Systems

[4] Hidden Incentives for Auto-Induced Distributional Shift

[5] Path-Specific Objectives for Safer Agent Incentives

[6] Agent Incentives: A Causal Perspective

[7] Reinforcement learning based recommender systems: A survey

[8] Emergent Deception and Emergent Optimization
 

Related to my point above (and this quoted paragraph), a fundamental nuance here is the distinction between "accidental influence side effects"  and "incentivized influence effects". I'm happy to answer more questions on this difference if it's not clear from the rest of my comment.

Thanks for clarifying; I agree it's important to be nuanced here!

I basically agree with what you say. I would add that whether it is best counted as a side effect or as incentivized depends on which optimizer we're looking at, i.e. where we draw the boundary around the optimizer in question. I agree that a) at the moment, recommender systems are myopic in the way you describe, and the larger economic logic is where some of the pressure towards homogenization comes from (while other things are happening too, including humans pushing back against that pressure, more or less successfully); and b) at some limit, we might worry about an AI system becoming so powerful that the scope of its optimization is large enough that it is correctly understood as directly doing incentivized influence. But I also want to point out a third scenario, c) where we should worry about what is essentially incentivized influence, even though not all of the causal force/optimization is enacted from within the boundaries of a single, specific AI system: the economy as a whole is sufficiently integrated with and accelerated by advanced AI to justify the incentivized-influence frame (e.g. a la the ascended economy, or a fully automated tech-company singularity). I think the general pattern here is basically one of "we continue to outsource ever more consequential decisions to advanced AI systems, without having figured out how to make these systems reliably (not) do anything in particular".

A small misconception that lies at the heart of this section is that AI systems (and specifically recommenders) will try to make people more predictable. This is not necessarily the case.

Yes, I'd agree (and didn't make this clear in the post, sorry) -- the pressure towards predictability comes from a combination of the logic of performative prediction AND the "economic logic" that provides the context in which these performative predictors are being used/applied. This is certainly an important thing to be clear about!

(Though it also can only give us so much reassurance: I think it's an extremely hard problem to find reliable ways for AI models to NOT be applied inside of the capitalist economic logic, if that's what we're hoping to do to avoid the legibilisation risk.)