Stefan_Schubert's Comments

The case for C19 being widespread

Thanks, Lukas. I only saw this now. I made a more substantive comment elsewhere in this thread. Lodi is not a village; it's a province with 230K inhabitants, as are Cremona (360K) and Bergamo (1.11M). (Though note that each of these is also the name of the province's central town.)

The case for C19 being widespread

In the province of Lodi (part of Lombardy), 388 people were reported to have died of Covid-19 as of 27 March. Lodi has a population of 230,000, meaning that 0.17% of _the population_ of Lodi has died. Given that far from everyone has been infected, the IFR must be higher than that.

The same source reports that in the province of Cremona (also part of Lombardy), 455 people had died of Covid-19 as of 27 March. Cremona has a population of 360,000, meaning that 0.126% of the population of Cremona has died, according to official data.
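To spell out the arithmetic behind "the IFR must be higher": deaths divided by population equals the IFR times the share of the population infected, so the reported death rate is a floor on the IFR. Here is a minimal sketch in Python (the helper function and the 30% infection share are my own illustrative assumptions; the death and population figures are those above):

```python
def ifr_lower_bound(deaths, population, infected_share=1.0):
    """deaths/population = IFR * infected_share, so dividing the
    observed death rate by the infected share bounds the IFR from below."""
    return deaths / population / infected_share

# Reported figures as of 27 March (from the comment above)
print(f"Lodi:    {ifr_lower_bound(388, 230_000):.3%}")   # ~0.169%: the floor if literally everyone were infected
print(f"Cremona: {ifr_lower_bound(455, 360_000):.3%}")   # ~0.126%

# If, say, only 30% of Lodi's population had been infected (illustrative),
# the implied IFR would already be ~0.56%:
print(f"Lodi at 30% infected: {ifr_lower_bound(388, 230_000, 0.3):.2%}")
```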

Note also that there are reports of substantial under-reporting of deaths in the province of Bergamo. Some reports estimate that the true death rate in some areas may be as high as 1% of the population. However, those reports are highly uncertain, and they may be outliers.

https://www.facebook.com/stefan.schubert.3954/posts/1369053463295040

Rational vs Reasonable

Here is a new empirical paper on folk conceptions of rationality and reasonableness:

Normative theories of judgment either focus on rationality (decontextualized preference maximization) or reasonableness (pragmatic balance of preferences and socially conscious norms). Despite centuries of work on these concepts, a critical question appears overlooked: How do people’s intuitions and behavior align with the concepts of rationality from game theory and reasonableness from legal scholarship? We show that laypeople view rationality as abstract and preference maximizing, simultaneously viewing reasonableness as sensitive to social context, as evidenced in spontaneous descriptions, social perceptions, and linguistic analyses of cultural products (news, soap operas, legal opinions, and Google books). Further, experiments among North Americans and Pakistani bankers, street merchants, and samples engaging in exchange (versus market) economy show that rationality and reasonableness lead people to different conclusions about what constitutes good judgment in Dictator Games, Commons Dilemma, and Prisoner’s Dilemma: Lay rationality is reductionist and instrumental, whereas reasonableness integrates preferences with particulars and moral concerns.

Moral public goods

Thanks, this is interesting. I'm trying to understand your ideas. Please let me know if I represent them correctly.

It seems to me that at the start, you're saying:

1. People often have strong selfish preferences and weak altruistic preferences.

2. There are many situations where people could gain more utility by engaging in moral agreements or moral trade - where everyone promises to take some altruistic action conditional on everyone else doing the same. That is because the altruistic utility they gain more than makes up for the selfish utility they lose. (The toy calculation at the end of this comment illustrates this.)

These claims in themselves seem compatible with "altruism being about consequentialism".

To conclude that that's not the case, it seems that one has to add something like the following point. I'm not sure whether that's actually what you mean, but in any case, it seems like a reasonable idea.

3. Fairness considerations loom large in our intuitive moral psychology: we feel very strongly about the principle that everyone should do and have their fair share, hate being suckers, are willing to altruistically punish free-riders, etc.

It's known from studies of the dictator game, the prisoner's dilemma, the tragedy of the commons, and similar research that people have such fairness-oriented dispositions (though there may be disagreement about the details). They help us solve collective action problems, and make us provide public goods.

So in those experiments, people aren't always choosing the action that would maximise their selfish interests in a one-off game. Instead they choose, for instance, to punish free-riders, even at a selfish cost.

Similarly, when people are trying to satisfy their altruistic interests (which is what you discuss), they aren't choosing the actions that, at least on the face of it (setting aside indirect effects of norm-setting, etc.), maximally satisfy their altruistic interests. Instead they take considerations of fairness and norms into account - e.g. they may contribute in contexts where others are contributing, but not in contexts where others aren't. In that sense, they aren't (act-)consequentialists, but rather do their fair share of worthwhile projects, slot into norms they find appropriate, etc.
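As promised above, here is a toy public-goods calculation making claim 2 concrete (all numbers are my own illustrative assumptions, not from the post):

```python
# Toy moral-trade example: each of N people can take an altruistic action
# at a selfish cost c; every action taken gives every person a units of
# altruistic utility. (Illustrative numbers only.)
N, c, a = 10, 5.0, 1.0

acting_alone = a - c         # 1 - 5 = -4: unilateral action is a net loss
under_agreement = N * a - c  # 10 - 5 = +5: a net gain if everyone acts

print(f"Acting alone:            {acting_alone:+.1f}")
print(f"Under a moral agreement: {under_agreement:+.1f}")
```

So each individual loses utility by acting alone, but everyone gains if all act together - which is exactly the structure that makes conditional agreements attractive.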

Have epistemic conditions always been this bad?

I think this is the kind of question where our intuitions are quite weak and we need empirical studies to know. It is very easy to get annoyed with poor epistemics and to conclude, in exasperation, that things must have got worse. But since people normally don't remember or know well what things were like 30 years ago, we can't really trust those conclusions.

One way to test this would be to fact-check and argument-check (cf. https://www.lesswrong.com/posts/k54agm83CLt3Sb85t/clearerthinking-s-fact-checking-2-0 ) opinion pieces and election debates from different eras, and compare their relative quality. That doesn't seem insurmountably difficult. But of course it doesn't capture all aspects of our epistemic culture.

One could also look at features that one may suspect are correlated with poor epistemics, such as political polarisation. On that, a recent paper gives evidence that the US has indeed become more polarised, but that five of the other nine OECD countries studied had instead become less polarised.

https://www.brown.edu/Research/Shapiro/pdfs/cross-polar.pdf

Robin Hanson on the futurist focus on AI

How about a book covering a whole bunch of scenarios, with AI risk taking one chapter out of 20 and the other 19 chapters devoted to other scenarios?

It would be interesting if, at some point, you went into more detail on how long-termists should allocate their resources; what proportion of resources should go into which scenarios, etc. (I know that you've written a bit on such themes.)

Unrelatedly, it would be interesting to see some research on the supposed "crying wolf effect", perhaps with regard to other risks as well. I'm not sure that effect is as strong as one might think at first glance.

Robin Hanson on the futurist focus on AI

Associate professor, not assistant professor.

Is there a definitive intro to punishing non-punishers?

One of those concepts is the idea that we evolved to "punish the non-punishers", in order to ensure the costs of social punishment are shared by everyone.

Before thinking about how to present this idea, I would study carefully whether it's true. I understand there is some disagreement regarding the origins of third-party punishment. There is a big literature on this. I won't discuss it in detail, but here are some examples of perspectives that deviate from the one taken in the quoted passage.

Joe Henrich writes:

This only makes sense as cultural evolution. Not much third party punishment in many small-scale societies.

So in Henrich's view, we didn't even (biologically) evolve to punish wrong-doers (as third parties), let alone non-punishers. Third-party punishment is a result of cultural, not biological, evolution, in his view.

Another paper of potential relevance, by Tooby, Cosmides, and others:

A common explanation is that third-party punishment exists to maintain a cooperative society. We tested a different explanation: Third-party punishment results from a deterrence psychology for defending personal interests. Because humans evolved in small-scale, face-to-face social worlds, the mind infers that mistreatment of a third party predicts later mistreatment of oneself.

Another paper, by Pedersen, Kurzban, and McCullough, argues that the case for altruistic punishment is overstated.

Here, we searched for evidence of altruistic punishment in an experiment that precluded these artefacts. In so doing, we found that victims of unfairness punished transgressors, whereas witnesses of unfairness did not. Furthermore, witnesses’ emotional reactions to unfairness were characterized by envy of the unfair individual's selfish gains rather than by moralistic anger towards the unfair behaviour. In a second experiment run independently in two separate samples, we found that previous evidence for altruistic punishment plausibly resulted from affective forecasting error—that is, limitations on humans’ abilities to accurately simulate how they would feel in hypothetical situations. Together, these findings suggest that the case for altruistic punishment in humans—a view that has gained increasing attention in the biological and social sciences—has been overstated.

How do you assess the quality / reliability of a scientific study?

A recent paper developed a statistical model for predicting whether papers would replicate.

We have derived an automated, data-driven method for predicting replicability of experiments. The method uses machine learning to discover which features of studies predict the strength of actual replications. Even with our fairly small data set, the model can forecast replication results with substantial accuracy — around 70%. Predictive accuracy is sensitive to the variables that are used, in interesting ways. The statistical features (p-value and effect size) of the original experiment are the most predictive. However, the accuracy of the model is also increased by variables such as the nature of the finding (an interaction, compared to a main effect), number of authors, paper length and the lack of performance incentives. All those variables are associated with a reduction in the predicted chance of replicability.
...
The first result is that one variable that is predictive of poor replicability is whether central tests describe interactions between variables or (single-variable) main effects. Only eight of 41 interaction effect studies replicated, while 48 of the 90 other studies did.
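For concreteness, the quoted counts imply quite different replication rates (this is just arithmetic on the figures in the excerpt):

```python
# Replication rates implied by the quoted counts
interaction_rate = 8 / 41   # interaction-effect studies
other_rate = 48 / 90        # main-effect and other studies

print(f"Interaction effects: {interaction_rate:.1%}")  # ~19.5%
print(f"Other studies:       {other_rate:.1%}")        # ~53.3%
```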

Another, unrelated, point is that authors often give inflated interpretations of their studies (in the abstract, the general discussion section, etc.). Whereas there is a lot of criticism of p-hacking and other related practices pertaining to the studies themselves, there is less scrutiny of how authors interpret their results (in part that's understandable, since what counts as a dodgy interpretation is more subjective). Hence, when you read the methods and results sections, it's good to think about whether you'd make the same high-level interpretation of the results as the authors.

Two explanations for variation in human abilities

One aspect may be that the issues we discuss and try to solve are often at the limit of human capabilities. Some people are way better at solving them than others, and since those issues are so often in the spotlight, it looks like the less able are totally incompetent. But actually, they're not; it's just that the issues they are able to solve aren't discussed.

Cf. https://www.lesswrong.com/posts/e84qrSoooAHfHXhbi/productivity-as-a-function-of-ability-in-theoretical-fields
