I ran across this article, The Much Forgotten and Ignored Need to Have Workable Solutions, which might interest some readers here, for either the Rationality or the Effective Altruism aspects.

For a very rough summary: academia (more specifically, the humanities) gives too much credit to describing problems (i.e. complaining) and not enough to thinking about good solutions, which is the difficult and important part.

Some quotes if you don't want to read the whole thing:

Of course the biggest assumption of all that is being shown to be inconsistent with actual behaviour is that of rationality – Richard Thaler’s Misbehaving and other behavioural research is showing that people are subject to various biases and often do not make rational decisions. This is especially scary for theoretical economists, whose entire universe pretty much depends on the rational representative household.

If their assumptions are rather strict and may not hold up in real life, their call for a policy response is technically null and void. A good example is with auctions, where previously designers (economists) would rely heavily on the Revenue Equivalence Theorem in creating the rules of auctions. Yet, many of them forget that the assumptions of Revenue Equivalence aren’t always satisfied, for example the possibility of collusion, which can significantly reduce the revenue of the seller.
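To make the revenue-equivalence point concrete, here is a minimal Monte Carlo sketch (my own illustration, not from the article): with independent private values and risk-neutral bidders, first-price and second-price auctions yield the same expected revenue, but a bidding ring in the second-price auction suppresses competition and lowers it. The bidder count, Uniform(0,1) values, and ring size are all assumptions chosen for the example.

```python
import random

def simulate(n_bidders=5, ring_size=3, n_rounds=200_000, seed=0):
    rng = random.Random(seed)
    rev_first = rev_second = rev_ring = 0.0
    for _ in range(n_rounds):
        values = [rng.random() for _ in range(n_bidders)]
        ordered = sorted(values)
        highest, second = ordered[-1], ordered[-2]

        # Second-price (Vickrey) auction: truthful bidding is optimal, so the
        # seller receives the second-highest value.
        rev_second += second

        # First-price auction, symmetric equilibrium for Uniform(0,1) values:
        # each bidder shades their bid to (n-1)/n of their value, so revenue
        # is that fraction of the highest value.
        rev_first += (n_bidders - 1) / n_bidders * highest

        # Collusion: the first `ring_size` bidders form a ring and let only
        # their strongest member bid truthfully; the others abstain.
        ring_bid = max(values[:ring_size])
        bids = sorted([ring_bid] + values[ring_size:])
        rev_ring += bids[-2]

    print(f"first-price : {rev_first / n_rounds:.4f}")
    print(f"second-price: {rev_second / n_rounds:.4f}")
    print(f"with ring   : {rev_ring / n_rounds:.4f}")

if __name__ == "__main__":
    simulate()
```

With five bidders and uniform values, both formats converge to the same expected revenue of (n-1)/(n+1) ≈ 0.67, while the bidding ring drags the second-price revenue noticeably below that, which is the sense in which a violated assumption voids the theorem's prescription.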

The best paper on a time economists forgot about ECON 101 has to be this review of European 3G auctions. What was most clear for me from Klemperer’s work is that you can get all up in complex auction theory and mechanism design, but if you forget how very basic concepts in economics work in conjunction with that, you can get easily derailed. They basically put the cart before the horse – they forgot that they had to satisfy their own assumptions before applying their model to reality.

More questions: is the policy they suggest cumbersome, intangible and unable to be monitored for success? This is another pet peeve of mine – my blood boils when people say “We need to fix gender stereotypes! We need to create awareness! We need to change societal attitudes!” without suggesting how it should be done, how this monumental task will be measured for good performance and how they propose regulating all the sources of these things.

Also, how would they justify that spending? Have they thought about the parameters which would determine success or failure? What kind of campaign or agency are they suggesting to carry out these monumental tasks? What are the conditions for success?

Last is that sometimes when people chuck the words “Policy Implications” around, they often have no idea what a deep and complicated field policy design actually is. To be fair, I’m still learning about it and I don’t expect university students or even researchers not involved in related areas to have a full understanding of it.

However, it’s not like economists don’t have a basic understanding of incentives, principal-agent relationships, transaction-cost economics and externalities. Those four areas should be enough to at least attempt a more rigorous analysis of possible policies, rather than simply providing an offhand description of the policy based on a single relationship.

At the end of the day, there’s just a lot of arrogance among some researchers who like to imply that their research necessitates action – yet they haven’t put any meaningful or strategic thought whether the research truly necessitates action in the first place (especially in comparison to cost-equivalent policies in similar areas, or dealing with similar problems), whether the action will actually lead to the desired outcome (checking if assumptions are realistic/addressing relevant design issues) or whether there will be any undesirable externalities or further implications of the policy.

[...]

Maybe the worst thing about all of this is that when I was growing up, I always looked up to people who were aware of issues outside themselves, especially if the issues didn’t necessarily affect them. They seemed so cool and aware and intelligent. I’d watch these people with great admiration for their insight.

Now a lot of that is gone. The people about whom I once thought, wow, this person is so aware and intelligent, I now realize aren’t actually that intelligent. They’re just pretending to be. They’re just better at vocalizing some of the things that anyone can see and turning them into long spiels about what’s wrong with the world. They haven’t really thought about it.

(Ironically, and perhaps intentionally, the post mostly complains about a problem without offering a workable solution, but I still liked it.)
