Hanlon's Razor says: Never attribute to malice what is adequately explained by stupidity. This is a clear bias toward mistake theory.
On the other hand, economics, evolutionary psychology, and some other fields are based on rational choice theory, i.e., the assumption that behavior can be explained by rational decision-making. (Economic rationality assumes that individuals choose rationally to maximize economic value, based on the incentives of the current situation. Evolutionary psychology instead assumes that human and animal behaviors are optimal solutions to the problems organisms faced in evolutionary history. Bruce Bueno de Mesquita assumes that politicians act rationally so as to maximize their tenure in positions of power. The ACT-R theory of cognition assumes that individual cognitive mechanisms are designed to optimally perform their individual cognitive tasks, such as retrieving memories which are useful in expectation, even if the brain as a whole is not perfectly rational.) This assumption of rationality lends itself more naturally to conflict theories.
A conflict theorist thinks problems are primarily due to the conflicting interests of different players. If someone is suffering, someone else must be making money off of it. Karl Marx was a conflict theorist; he blamed the ills of society on class conflict.
A mistake theorist thinks problems are primarily due to mistakes. If only we knew how to run society better, there would be fewer problems. Jeremy Bentham was more of a mistake theorist: he thought producing a formula by which we could calculate the quality of social interventions would help improve society.
"Humans are not automatically strategic" is a mistake theory of human (ir)rationality. Things are hard. If people are doing something dumb, it's probably because they don't know better.
The Elephant in the Brain is more like a conflict theory of human (ir)rationality. Apparent irrationality is attributed mainly to humans not actually wanting what they think they want.
In game theory, assuming that people can make mistakes (a so-called trembling hand) can complicate cooperative strategies.
For example, in the iterated prisoner's dilemma, tit for tat is a cooperative equilibrium (that is to say, it is Pareto-optimal, and, for sufficiently patient players, a Nash equilibrium). The tit-for-tat strategy is: cooperate on the first round; then, copy the other player's move from the previous round. This enforces cooperation, because if I defect, I expect my partner to defect on the next round (which is bad for me). This is effectively eye-for-an-eye morality.
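The strategy is simple enough to sketch in a few lines of Python. This is a minimal sketch tracing moves only; payoffs are omitted, since they don't affect the move sequence:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; afterwards, copy the
    opponent's move from the previous round."""
    if not opponent_history:
        return "C"  # cooperate
    return opponent_history[-1]

# Two tit-for-tat players with no mistakes cooperate forever.
a_history, b_history = [], []
for _ in range(10):
    a_move = tit_for_tat(b_history)
    b_move = tit_for_tat(a_history)
    a_history.append(a_move)
    b_history.append(b_move)

print(a_history)  # ['C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C']
```

Run the same function against an unconditional defector and it defects from round two onward: that retaliation is the enforcement mechanism.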
However, if people make mistakes (the trembling-hand assumption), then tit-for-tat only sustains cooperation until someone slips up. A single mistaken defection triggers an alternating cycle of retaliation; if mistaken defection and mistaken cooperation are equally probable, then in the long run we'll average only 50% cooperation. We can see this as an interminable family feud in which each side sees the other as having done more wrong. "An eye for an eye makes everyone blind."
We need to recognize that people make mistakes sometimes -- we can't punish everything eye-for-an-eye.
Therefore, some form of forgiving tit-for-tat does better. For example: copy cooperation 100% of the time, but copy defection only 90% of the time. This can still work to enforce rational cooperation (depending on the exact payoffs and time-discounting of the players), but without everlasting feuds. See also Contrite Strategies and the Need for Standards.
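Both claims, the 50% long-run cooperation of plain tit-for-tat under trembling hands and the improvement from forgiveness, can be checked with a small simulation. This is a sketch under assumed parameters (a 5% tremble probability and the 10% forgiveness rate from above); the exact rates depend on those choices:

```python
import random

def tft(opp_last):
    # Plain tit-for-tat: cooperate first, then copy the last move.
    return "C" if opp_last is None else opp_last

def forgiving_tft(opp_last):
    # Copy cooperation always; copy defection only 90% of the time.
    if opp_last == "D" and random.random() < 0.9:
        return "D"
    return "C"

def cooperation_rate(strategy, rounds=100_000, tremble=0.05):
    """Two copies of `strategy` play the iterated game; each intended
    move is flipped with probability `tremble` (the trembling hand).
    Returns the fraction of all moves that were cooperation."""
    a_last = b_last = None
    coop = 0
    for _ in range(rounds):
        a = strategy(b_last)
        b = strategy(a_last)
        if random.random() < tremble:
            a = "D" if a == "C" else "C"
        if random.random() < tremble:
            b = "D" if b == "C" else "C"
        coop += (a == "C") + (b == "C")
        a_last, b_last = a, b
    return coop / (2 * rounds)

random.seed(0)
rate_plain = cooperation_rate(tft)
rate_forgiving = cooperation_rate(forgiving_tft)
print(f"tit-for-tat:           {rate_plain:.2f}")      # ≈ 0.50
print(f"forgiving tit-for-tat: {rate_forgiving:.2f}")  # ≈ 0.74 with these parameters
```

With these parameters, plain tit-for-tat drifts toward 50% cooperation in the long run, while the forgiving variant recovers from mistakes and stays mostly cooperative.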
In this framing, a conflict theorist thinks people are actually defecting on purpose. They know what they're doing, and therefore, would respond to incentives. Punishing them is prosocial and helps to encourage more cooperation overall.
A mistake theorist thinks people are defecting accidentally, and therefore, would not respond to incentives. Punishing them is pointless and counterproductive; it could even result in a continuing feud, making things much worse for everyone.