Concept

A rational agent will only follow a standard where the benefit he receives (or disbenefit he avoids) from following that standard exceeds the cost of following it.

In this way, raising standards can perversely disincentivise the behaviour those standards are intended to promote, by pushing the cost of complying with a standard above the benefit gained by doing so.
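
To make the decision rule concrete, here is a minimal sketch in Python (the benefit and cost figures are invented purely for illustration) of an agent who complies only when the benefit of meeting a standard exceeds its cost:

```python
def complies(benefit, cost):
    """A rational agent follows a standard only if doing so pays."""
    return benefit > cost

# A modest standard: cheap to meet, clearly worth following.
print(complies(benefit=10, cost=3))   # True

# Raising the standard raises the cost of compliance; once that cost
# exceeds the benefit, the same agent rationally stops complying.
print(complies(benefit=10, cost=25))  # False
```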

Exploration

Given the simplicity of the above concept, one might think that perversely high standards would be rare and easily avoided. However, my perception is that they are surprisingly common and impactful. One especially important example involves popular standards of political honesty in western democracies.

A common refrain in the popular culture of these societies is that politicians are “all corrupt”, “all liars”, or “all the same” (in the sense of “similarly awful” rather than “similarly wonderful”)[1]. One way to conceptualise this assessment is that the assessors set exceptionally high standards for political propriety[2]. For instance, they may believe that accepting any campaign contribution not derived from small donors constitutes corruption; that telling a single mistruth merits being labelled a liar; or that having a voting record of less than perfect ideological purity constitutes moral evil or cynicism. 

But to the extent that politicians believe that – for instance – the public treat a single misstatement as equal to repeated and deliberate deception, their incentive to stay “honest” decreases markedly. The implicit standard for “honesty” here is exceptionally difficult and costly to meet[3], and therefore unlikely to warrant the effort required. Not only does this standard fail to incentivise the exceptional “honesty” it supposedly promotes, it also undermines politicians' incentive to avoid far more severe deception. They and their opponents are going to be seen as liars in any case, so why not “go all in” and employ all the advantages of deception, instead of pursuing an unrewarded honesty?
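
As a toy illustration of this dynamic (all payoffs are invented for the example), consider a standard under which any single misstatement earns the “liar” label. The reputational reward is then all-or-nothing, so once perfection is out of reach, the strategy with the greatest persuasive advantage dominates:

```python
def total_payoff(misstatements, persuasive_advantage, honesty_reward=50):
    """Under a 'one misstatement = liar' standard, the reputational
    reward is all-or-nothing: any misstatement forfeits it entirely."""
    reputation = honesty_reward if misstatements == 0 else 0
    return reputation + persuasive_advantage

# Perfect honesty keeps the reputational reward but forgoes persuasive tricks.
print(total_payoff(misstatements=0, persuasive_advantage=0))    # 50
# A single honest mistake forfeits the entire reward...
print(total_payoff(misstatements=1, persuasive_advantage=5))    # 5
# ...so "going all in" on deception, with its larger persuasive
# advantage, beats anything short of perfection.
print(total_payoff(misstatements=40, persuasive_advantage=60))  # 60
```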

Following this theory, it is precisely those leaders with the least regard for truth who seem to have most capitalised on the decline in public trust in politicians[4]; with Trump in particular being able to shrug off accusations of corruption amongst a voter base who write off his opponents, and the US political system, as irredeemably crooked[5]. Something similar, though less severe, could also be said about the reception of Boris Johnson’s rhetoric around the EU referendum[6].

Solutions

The two simplest solutions to perverse standards are the most obvious: one can lower standards and/or raise rewards[7][8]. But neither of these is without its problems:

Taking the lowering of standards first, it should be noted that standards are set high for a reason: to incentivise exceptionally good behaviour (such as politicians not using “technically true” but potentially misleading statements). By lowering standards, we may incentivise good behaviour more, but we incentivise exceptionally good behaviour less.

Moving to raising rewards, two problems are pertinent. First, for very onerous standards, standard-setting agents may be unable to raise rewards high enough to compensate for the costs required to meet them[9]. Second, some agents may simply be unable to meet very onerous standards. Despite this, we would still want to encourage these agents to comply as far as possible.

A key means of addressing these issues is to variegate standards; that is, to have multiple standards with different rewards attached to each, with higher standards meriting greater rewards. This encourages partial compliance by those unable to meet the highest standards, whilst still spurring exceptional compliance by those who easily clear the lower ones[10].
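
A sketch of the same point (the tiers and rewards are again invented for illustration): with only a single high bar, an agent who cannot afford to clear it complies with nothing, whereas with graduated tiers the same agent still finds a lower standard worth meeting.

```python
# Each tier pairs the cost of meeting a standard with its reward:
# (cost, reward) for low, medium and high standards respectively.
tiers = [(5, 8), (20, 30), (60, 90)]

def best_tier(budget):
    """Return the affordable tier with the greatest surplus of reward
    over cost, or None if no tier is both affordable and worthwhile."""
    affordable = [(c, r) for c, r in tiers if c <= budget and r > c]
    return max(affordable, key=lambda t: t[1] - t[0], default=None)

# If only the top standard existed, an agent with a budget of 30 would
# comply with nothing. With variegated tiers, the same agent still
# meets the medium standard, and a stronger agent aims for the top.
print(best_tier(budget=30))   # (20, 30) -> partial compliance
print(best_tier(budget=100))  # (60, 90) -> exceptional compliance
```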

Conclusion

In justified zeal to encourage good behaviour, it can be tempting and intuitive to set the highest possible standards for good performance. But doing so may well have a perverse impact – undermining the very behaviour we aim to incentivise. Realistic standards, with variegation to encourage both minimal and exceptional performance, are far more likely to achieve our ends. 


[1] https://www.theguardian.com/australia-news/2018/aug/21/overwhelming-majority-of-australians-believe-federal-politicians-are-corrupt

[2] Even though they may not recognise themselves as doing so. 

[3] Difficult in the sense that even a good faith attempt to meet it might well fail due to an honest mistake; costly in the sense that being unable to ever distort a single fact would likely vastly undermine the persuasive efficacy of their political messaging. 

[4] https://www.pewresearch.org/politics/2019/04/11/public-trust-in-government-1958-2019/

[5] https://www.nytimes.com/2018/07/10/magazine/americans-think-corruption-is-everywhere-is-that-why-we-vote-for-it.html

[6] https://www.theguardian.com/commentisfree/2019/nov/29/politicians-liars-boris-johnson-voters-prime-minister-brexit

[7] Raising rewards here encompasses both raising the benefit awarded for compliance and lowering the disbenefit exacted for noncompliance.

[8] An interesting example of successful incentivisation by high rewards in the “Political Honesty” domain comes from the Sanders campaign’s rejection of large donations. The loss of corporate donations this engendered (the cost of complying with the left’s high standard for “non-corrupt” behaviour) was compensated for by a flood of small donations from voters attracted by Sanders’ incorruptible image (the reward for compliance with the same).

[9] As a voter, my “maximal reward” to a politician who meets my highest standards is likely to be my vote, several hours of campaigning time and a small donation. Sadly, even when this is aggregated with the maximal rewards of thousands of like-minded individuals, it may still prove less than the cost to that politician of complying with those standards.

[10] Informational issues are likely to be the central constraint on the degree of variegation a (set of) standards ought to support. Specifically, it is useful to have clear standards and rewards, so that those assessed by them can more easily comply, and those enforcing them can more easily administer them. But increasing the variegation of a (set of) standards increases their complexity, thereby undermining both compliance and enforceability. Ultimately the best trade-off is context-specific: the more important the behaviour being incentivised, the more of the assessor’s limited mental bandwidth is best spent on variegating standards.

Comments

If you want to continue developing this line of thinking, I can think of a few questions you could explore.

  1. Can you find historical examples of a culture changing their standards for some group (the honesty of politicians, the behavior of children, etc.)?
  2. If so, was the change due to drift over time, was it organized and deliberate, or what?
  3. Can you use your model to make any predictions about any concrete events that we should expect in the future, even if just on an intuitive basis, and even if you had only low confidence about those predictions?

I don't think the problem in this case is one of excessively high standards. I think the problem is that our political system selects people who are good talkers, who can spin a narrative, who can and do lie convincingly.

After 50 years of following politics, my heuristic is to ignore what politicians say and watch only their actions. I find that what they say is generally devoid of useful information. The only conclusion one can draw is that they want you to believe what they are saying.

Track record is the only useful guide to their likely future actions.

There is still a difference between "X can lie convincingly" and "X lies completely transparently, but his voters don't care". With the former, you can try to convince his voters that he lied about something important. With the latter... you would just be wasting your time; they obviously don't care.

It seems to me the latter is worse, but I cannot explain exactly why. Perhaps some intuition like "if voters forgive lies so easily, they would probably forgive other things, too". Or maybe a feeling that "if people are trying, all hope is not lost yet" (here "people" refers to the voters, not the politicians).

EDIT: Reading what I wrote here, I guess it's not about openly lying politicians being necessarily worse, but rather about this being evidence that there is something seriously wrong with the voter base, which is even more dangerous in the long term.

There is of course a possibility that most voters do not genuinely approve of X being a liar, but still for some reason consider him a lesser evil compared to Y. Still makes me worry, because those voters may fix their cognitive dissonance in a way that will cause harm later.