Your group/company/organization performs well, doing great work and dealing with new problems efficiently. As one of its leaders, you want to understand why, so that you can make it even more successful, and maybe emulate this success in other settings.

Your group/company/organization performs badly, not delivering on what was promised and missing deadline after deadline. As one of its leaders, you want to understand why, so that you can correct its course, or at least not repeat the same mistakes in other settings.

Both cases apparently involve credit assignment: positive credit (praise) for success, or negative credit (blame) for failure. And you can easily think of different ways to assign it:

Heuristics for Credit Assignment

Baseline

The most straightforward approach starts with your initial prediction, and then assigns credit for deviations from it. So praise people who did better than expected and blame people who did worse than expected.

Then you remember Janice. She’s your star performer, amazing in everything she does, and you knew it from the start. So she performed according to prediction, being brilliant and reliable. Which means she doesn’t deserve any praise by this criterion.

On the other hand there is Tom. He’s quite good, but you also knew from the start he was a prickly showoff with an easily scratched ego. Still, he did his job, and when he acted like an asshole, that was within the prediction. So he doesn't deserve any blame by this criterion.

Incentive-wise, this sounds like a terrible idea. If you push this credit assignment strategy, not only will you neglect the value of Janice and the cost of Tom, but you will probably drive away high performers and attract problem-makers.

Bottleneck

Instead of starting from a baseline, let’s focus on the key bottlenecks. What would have doomed the project had it not been done? What ended up blocking everything? This focuses on the real cruxes, which is good.

Yet what about Marcel, your tireless tool engineer? None of his stuff is ever absolutely necessary, but everyone in your group constantly mentions the value they get from his well-maintained and efficient tools. Should he not get any credit for it?

And what about Bertha, your group’s only security expert, who always finds excuses to make herself more and more necessary? Is this really a behavior you want to condone and praise? Shouldn’t she get blamed for it instead?

It’s at this point that you remember this short parable.

First cause

No, really, what matters most is the initial spark that puts everything in motion. Without the original idea, nothing happens; skills and expertise remain useless and flaccid, unable to toil for a worthwhile goal.

And what a coincidence: you were one of these key idea generators! More power to you, then. It’s only fair that you get a large share of the credit, given that the group wouldn’t even exist without you.

But that still doesn’t seem right. Yes, your contribution was integral. But could you really have done it by yourself? Probably not. Or at least not as well, as quickly, or as beautifully as it was actually done.

Or conversely, if it failed totally, was it because the idea was doomed from the start, or because the execution proved bad enough to torpedo even sensible propositions?

Final step

You got it exactly wrong above: it’s not the first step that trumps them all, it’s the final one. Making the abstract real, adding the finishing touches: this is what makes the difference between success and failure. So you should focus your credit assignment on the success or failure of these last steps.

But what about you? Yes, you. You have never finished anything yourself; you’re the organizer, idea generator, coordinator. It’s not your role or your job to add the finishing touches, to usher something into the physical world. Does that mean that none of your actions mattered, for failure or success?

No, obviously not. You know your group, your team, your company. You know how many issues you’ve addressed, how much value would have been left on the table without you. And you also know when you fell short, when you could have repaired relationships, problems, egos, and instead neglected them or smashed them to pieces yourself.

Asking the wrong question

None of your ideas for assigning credit seems to work. Whenever you try to fix issues in one of them, you end up creating different problems.

This should be a signal. A signal that you might be asking the wrong question.

Throughout this post we asked “How should praise and blame be assigned?”. But was that the original question? Was that the final goal? No: the original point was to figure out why something happened (be it success or failure) AND to find out how to act in order to get more or less of that result.

As highlighted above, this actually splits into two goals:

  • an epistemic goal: building a better causal model of what happened and why.
  • a decision-theoretic goal: deciding what to do to ensure good outcomes (notably through incentives).

And most of our troubles came from trying to solve both at the same time, building a model with credit assignment embedded inside it, as if it were part of reality. But these are distinct realms, so conflating their solutions creates an unnecessary dependency.

What the examples above also showed was the relative ease of building the causal model compared to assigning credit. In almost all cases we could point out the role and contribution of each group member, but any attempts at praise and blame created bad incentives.

As such, an easy way to avoid many pitfalls of credit assignment is to simply hold off on the decision-theoretic part until you actually have to do it (which is far less often than you think), and instead to always start with the epistemic goal: causally modeling the situation.

Then if you ever need to intervene, you will be able to rely on the best model you have, one not already unconsciously polluted with baked-in decision-theoretic choices.
 

5 comments

For the record, the canonical solution to the object-level problem here is Shapley Value. I don’t disagree with the meta-level point, though: a calculation of Shapley Value must begin with a causal model that can predict outcomes with any subset of contributors removed.
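For concreteness, here is a minimal brute-force sketch in Python of that calculation, assuming the causal model is wrapped up as a single function that predicts the outcome for any subset of contributors. The contributor names and payoffs below are made up for illustration:

```python
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over every join order.

    `v` is the causal model, wrapped as a function from a frozenset of
    contributors to the predicted outcome with only those contributors.
    Brute force over all n! orders, so only suitable for small groups.
    """
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)  # marginal contribution
            coalition = coalition | {p}
    return {p: totals[p] / len(orders) for p in totals}

def predicted_outcome(team):
    # Hypothetical model: Janice delivers 3 units of value, Tom 1, and
    # Marcel's tooling adds 50% on top of whatever the others produce.
    base = 3.0 * ("janice" in team) + 1.0 * ("tom" in team)
    return base * (1.5 if "marcel" in team else 1.0)

print(shapley_values(["janice", "tom", "marcel"], predicted_outcome))
# -> {'janice': 3.75, 'tom': 1.25, 'marcel': 1.0}
```

Note that even Marcel, whose work is never strictly necessary, ends up with a nonzero share, because the model says the outcome would have been worse without him.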

I walked through some examples of Shapley Value here, and I'm not so sure it satisfies exactly what we want on an object level. I don't have a great realistic example here, but Shapley Value assigns counterfactual value to individuals who in fact did not contribute at all, if they would have contributed had your higher performers not been present. So you can easily have "dead weight" on a team who nonetheless has a high Shapley Value, as long as they could provide value if their better teammates were gone.
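To make the "dead weight" point concrete with made-up numbers: a star and a backup who could each deliver the project alone. The backup adds nothing to the team that actually shipped, yet still gets half the credit. (The `shapley_values` helper is the same brute-force computation as in the sketch above, repeated so this snippet runs on its own.)

```python
from itertools import permutations

def shapley_values(players, v):
    # Brute force: average each player's marginal contribution over all join orders.
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: totals[p] / len(orders) for p in totals}

# Made-up model: the star alone ships the project, the backup alone could
# also ship it, and together they ship nothing extra.
outcomes = {
    frozenset(): 0.0,
    frozenset({"star"}): 1.0,
    frozenset({"backup"}): 1.0,
    frozenset({"star", "backup"}): 1.0,
}

print(shapley_values(["star", "backup"], lambda team: outcomes[team]))
# -> {'star': 0.5, 'backup': 0.5}: the backup added nothing to the team
#    that actually shipped, yet still gets half the credit.
```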

Thanks for the pointer!

Shapley values are very cool. Let me mention some cool facts:

They arise in (cooperative) game theory, but also in ML when doing credit allocation for a combined prediction that mixes predictions from different modules of a system.

One piece of evidence of how fundamental they are is that they arise naturally from Hodge theory on the hypercube of a coalition game: https://arxiv.org/abs/1709.08318

Another interesting fact I learned from Davidad: Shapley values are not compositional. A group of actors can increase their total Shapley value by forming a single cabal, such that individuals within the cabal refuse to cooperate with individuals outside it unless the rest of the cabal joins in. This can be a measure of collusion potential.
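A toy illustration of that cabal effect, with a made-up game and the same brute-force `shapley_values` helper repeated so the snippet is self-contained: two interchangeable workers whose output requires the boss. If they agree to work only when both are present, their combined Shapley value doubles, even though nothing extra gets produced.

```python
from itertools import permutations

def shapley_values(players, v):
    # Brute force: average each player's marginal contribution over all join orders.
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: totals[p] / len(orders) for p in totals}

players = ["boss", "w1", "w2"]

# Original game: the project succeeds if the boss and at least one worker join.
def v(team):
    return 1.0 if "boss" in team and team & {"w1", "w2"} else 0.0

# Cabal game: the workers refuse to work unless both are present, so the
# project now succeeds only when everyone joins.
def v_cabal(team):
    return 1.0 if {"boss", "w1", "w2"} <= team else 0.0

print(shapley_values(players, v))        # boss: 2/3, w1: 1/6, w2: 1/6
print(shapley_values(players, v_cabal))  # boss: 1/3, w1: 1/3, w2: 1/3
# The workers' combined Shapley value jumps from 1/3 to 2/3 just by
# colluding, even though nothing extra gets produced.
```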

This paper of mine answers exactly this question (nonconstructively, using the minimax theorem).