Really enjoyed the post, but in the interest of rationality:
How many more older siblings should patrons of the Chelsea nightclub have than all other men in New York?
This question rests on the false premise(s) (i.e., model misspecification(s)) that homosexuality is only a function of birth order and that the Chelsea nightclub probability doesn't stem from heavy selection. Relatedly, gwern notes that "surely homosexuality is not the primary trait the Catholic Church hierarchy is trying to select for." Maybe this was supposed to be more tongue-in-cheek. But identifying a cause does not require that it explain the outcome entirely on its own.
I agree with you. Unless the signal is so strong that people believe their personal experience is not representative of the economy, it's going to be overweighted. "I and half the people I know make less" will lead to discontent about the state of the economy. "I and half the people I know make less, but I am aware that GDP grew 40%, so the economy must be doing fine despite my personal experience" is possible, but let's just say it's not our prior.
Exactly, which is why the metric Mazlish prefers is so relevant and not bizarre, unless the premise that people judge the economy from their own experiences is incorrect.
Why is this what matters? It’s a bizarre metric. Why should we care what the median change was, instead of some form of mean change, or change in the mean or median wage?
The critique that the justification wasn't great because the mean wage dropped a lot in the example is fair. Yet, in the proposed alternative example it remains quite likely that people will perceive the economy as having gotten worse, even though the economy is objectively much better - 2/3 will say they're personally worse off, insufficiently adjust for the impersonal ways of assessing the economy, and ultimately say the economy is worse.
Neither the change in the median nor the median change is a bizarre metric. The change in the median may be great for observers to understand general trajectories of income when you lack panel data, but since people use their own lives to assess whether they are better off and in turn overweight that when they judge the economy, the median change is actually more useful for understanding the translation from people's lives into their perceptions.
Consider a different example (also in real terms):
T1: A makes $3, B makes $3, C makes $3, D makes $10, E makes $12
T2: A makes $2, B makes $2, C makes $3, D makes $9, E makes $16
The means show nice but not as crazy economic growth ($6.20 to $6.40), and the change in the median is $0 ($3 to $3) - "we're not poorer!" However, the median change is -$1. And people at T2 will generally feel worse off (3/5 will say they can't buy as much as they could before, so "this economy is tough").
Contrast that with (still in real terms):
T1: A makes $2, B makes $2, C makes $4, D makes $10, E makes $12
T2: A makes $3, B makes $3, C makes $3, D makes $10, E makes $12
The means show nice but not as crazy economic growth ($6 to $6.20), and the change in the median is -$1 ($4 to $3) - "we're poorer!" However, the median change is $0. And people at T2 will generally feel like things are going okay (only 1 person will feel worse off).
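To make the two metrics concrete, here is a minimal sketch in Python (my own, using the example numbers above; the function names are just for illustration) that computes the change in the median and the median change for both examples:

```python
import statistics

def change_in_median(t1, t2):
    # Cross-sectional metric: compare the median wage at T2 to the median wage at T1.
    return statistics.median(t2) - statistics.median(t1)

def median_change(t1, t2):
    # Panel metric: take each person's own change, then the median of those changes.
    return statistics.median(b - a for a, b in zip(t1, t2))

# Example 1: A-E at T1 and T2 (real dollars)
ex1_t1, ex1_t2 = [3, 3, 3, 10, 12], [2, 2, 3, 9, 16]
print(change_in_median(ex1_t1, ex1_t2))  # 0   ("we're not poorer!")
print(median_change(ex1_t1, ex1_t2))     # -1  (3/5 of people are worse off)

# Example 2
ex2_t1, ex2_t2 = [2, 2, 4, 10, 12], [3, 3, 3, 10, 12]
print(change_in_median(ex2_t1, ex2_t2))  # -1  ("we're poorer!")
print(median_change(ex2_t1, ex2_t2))     # 0   (only 1 person is worse off)
```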
And these are comparisons against 0. Mazlish's post illustrates that people will probably not compare against 0 but against recent trajectories ("I got a 3% raise last year, what do you mean my raise this year is 2%?!"), so #1 means people will be dissatisfied. #2, as borne out in the data, also means dissatisfaction. And #3, largely due to timing, means further dissatisfaction.
Then it is no surprise that exit polls show people who were most dissatisfied with the economy under Biden (and assumed Harris would be more of the same) voted for Trump. Sure, there's some political self-deception bias going on (see charts of economic sentiment vs. date by party affiliation), but note that the exit polls are correlational - they can indicate that partisanship is a hell of a drug or that people are rationally responding to their perceptions. It's likely both. And if your model of those perceptions is inferior in the ways Mazlish notes, you'd wrongly think people would have been happy with the economy.
Literally macroeconomics 101. Trade surpluses aren't shipping goods for free. There is a whole balance of payments to consider. I'm shocked EY could get that so wrong, surprised that lsusr is so ready to agree, and confused because surely I missed something huge here, right?
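A minimal sketch of the identity involved (standard national-accounts notation, my gloss rather than anything from EY's or lsusr's posts): net exports have to be financed, so

$$NX = S - I = \text{net capital outflow},$$

i.e., a trade surplus is matched one-for-one by acquiring claims on the rest of the world. The goods go out, but financial assets come back in; nothing is shipped for free.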
I guess I misunderstood you. I figured that without "regression coefficients," the sentence would be a bit tautological: "the point of randomized controlled trial is to avoid [a] non-randomized sample," and there were other bits that made me think you had an issue with both selection bias (agree) and regressions (disagree).
I share your overall takeaway, but at this point I am just genuinely curious why the self-selection is presumed to be such a threat to internal validity here. I think we need more attention to selection effects on the margin, but I also think there is a general tendency for people to believe that once they've identified a selection issue, the results are totally undermined. What is the alternative explanation for why semaglutide would disincline people who would have had small change scores from participating, or incline people who would have had large change scores to participate (remember, this is within-subjects), in the alcohol self-administration experiment? Maybe those who had the most reduced cravings wanted to see more of what these researchers could do? But that process would also occur in the placebo group, so it'd have to work via the share of people with large change scores being greater in the semaglutide group, which is...efficacy. There's nuance there, but hard to square with a lack of efficacy.
That said, still agree that the results are no slam dunk. Very specific population, very specific outcomes affected, and probably practically small effects too.
I appreciate this kind of detailed inspection and science writing, we need more of this in the world!
I'm writing this comment because of the expressed disdain for regressions. I do share the disappointment about how the randomization and results turned out. But for both, my refrain will be: "that's what the regression's for!"
This contains the same data, but stratified by if people were obese or not:
Now it looks like semaglutide isn’t doing anything.
The beauty of exploratory analyses like these is that you can find something interesting. The risk is that you can also read into noise. Unfortunately, all they did here was plot these results, not report the regression, which could tell us whether there is any effect beyond the lower baseline. eTable 3 confirms that the interaction between condition and week is non-significant for most outcomes, which the authors correctly characterized. That's what the regression's for!
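As an illustration of what "report the regression" means here, a minimal sketch (Python/statsmodels; the column names are my own, not the paper's) of the kind of condition-by-week model that would answer the question:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per participant per assessment, with hypothetical columns:
#   outcome   - e.g., drinks consumed in the self-administration session
#   condition - "semaglutide" or "placebo"
#   week      - study week of the assessment
#   pid       - participant id (repeated measures)
df = pd.read_csv("trial_long.csv")

# The condition x week interaction asks whether the trajectories differ by arm,
# i.e., whether there is any effect beyond the groups simply starting at different levels.
model = smf.mixedlm("outcome ~ condition * week", data=df, groups=df["pid"]).fit()
print(model.summary())  # inspect the condition:week coefficient and its p-value
```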
This means the results are non-randomized.
Yes and no. People were still randomized to condition, and attrition appears to be pretty even. Yes, there is an element of self-selection, which can constrain the generalizability (i.e., external validity) of the results (I'd say most of the constraint is actually just due to studying people with AUD rather than the general population, but you can see why they'd do such a thing), but that does not necessarily mean it broke the randomization, which is what would reduce the ability to interpret differences as a result of the treatment (i.e., internal validity). To the extent that you want to control for differences that happen to occur or have been introduced between the conditions, you'll need to run a model to covary those out. That's what the regression's for!
the point of RCTs is to avoid resorting to regression coefficients on non-randomized samples
My biggest critique is this. If you take conditions A and B and compute/plot the mean outcomes, you'd presumably be happy that it's data. But computing/plotting predicted values from a regression of outcome on condition would directly recover those means. And from what we've seen above, adjustment is often desirable. Sometimes the raw means are not as useful as the adjusted/estimated means - to your worry about baseline differences, the regression allows us to adjust for that (i.e., provide statistical control where experimental control was not sufficient). And, instead of eyeballing plots, the regressions help tell you whether something is reliable. The point of RCTs is not to avoid resorting to regression coefficients. You'll run regressions in any case! The point of RCTs is to reduce the load your statistical controls will be expected to lift by utilizing experimental controls. You'll still need to analyze the data and implement appropriate statistical controls. That's what the regression's for!
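To make that concrete, a minimal sketch (Python/statsmodels; the column names are my own) showing that a regression of outcome on condition simply recovers the group means, and that adding a baseline covariate gives the adjusted comparison:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per participant, with hypothetical columns:
#   outcome, condition ("semaglutide"/"placebo"), baseline (pre-treatment value of the outcome)
df = pd.read_csv("trial_wide.csv")

# 1) Outcome ~ condition: the intercept is the placebo mean and the condition
#    coefficient is the difference in means, so the "raw means" are already in here.
raw = smf.ols("outcome ~ condition", data=df).fit()

# 2) Adding baseline provides statistical control where experimental control
#    (randomization) left residual imbalance between the arms.
adjusted = smf.ols("outcome ~ condition + baseline", data=df).fit()

print(raw.params, adjusted.params, sep="\n")
```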
I really like this succinct post.
I intuitively want to endorse the two growth rates (if it "looks" linear right now, it might just be early exponential), but surely this is not that simple, right? My top question is "What are examples of linear growth in nature and what do they tell us about this perception that all growth is around zero or exponential?"
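One bit of math behind that parenthetical (my gloss, not from the post): for small $rt$,

$$e^{rt} = 1 + rt + O\big((rt)^2\big),$$

so the early stretch of an exponential trajectory is numerically indistinguishable from linear growth at rate $r$; the curvature only becomes visible once $rt$ is no longer small.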
A separate thing that sticks out is that having two growth rates does not necessarily imply two subjective levels in general.
This can be effectively implemented by the government accumulating tax revenues (largely from the rich) in good times and spending them on disaster relief (largely on the poor) in bad times. It lets price remain a signal while also expanding supply.
This magical suggestion needs explication.
From what I've seen via Ethan Mollick, instead of copy-pasting, the new assignments that would be effective are the same as the usual - just "do the work," but aimed at the AI. Enter a simulation, but please don't dual-screen the task. Teach the AI (I guess the benefit here is immediate feedback, as if you couldn't use yourself or a friend as a sounding board), but please don't dual-screen the task. Have a conversation (again, not in class or on a discussion board or among friends), but please don't dual-screen the task. Then show us you "did it." You could of course do these things without AI, though maybe AI makes a better (and certainly faster) partner. But the crux is that you have to do the task yourself. Also note that this admits the pre-existing value of these kinds of tasks.
Students who will do the work in good faith and leverage AI for search, critique, and tutoring are...doing the work and getting the value, like (probably more efficiently than, possibly with higher returns than) those who do the work without AI. Students who won't...are not doing the work and not getting the value, aside from the signaling value of passing the class. So there you have it - educators can be content that not doing the assignment delivers worse results for the student, but the student doesn't mind as long as they get their grade, which is problematic. Thus, educators are not going quietly and are in fact very focused on AI-proofing the work, including shifts to in-person testing and tasks.
However, that only preserves the benefit of the courses and, in turn, the degree (I'm not saying pure signaling value doesn't exist, I'm just saying human capital development value is non-zero and under threat). It does not insulate the college graduate from competition in knowledge work from AI (here's the analogy: it would obviously be bad for the Ford brand to send lemons into the vehicle market, but even if they are sending decent cars out, they should still be worried about new entrants).