This might be good ground for an adversarial collaboration.
It's difficult to take a context-free view of hard facts. E.g., the recorded rate of non-violent crimes per year depends heavily on whether the police are bothering to record crimes that year, and that's downstream of a political decision and a cultural impetus.
Things like this are tricky because "what is the context?" is itself so hotly debated that, even if everyone had a perfect consensus model of past presidents' actions and the circumstances in which they took them, there would still be no clean middle ground on the situation surrounding an action today. For example, Abraham Lincoln's suspension of habeas corpus is generally regarded as a reasonable action taken during an existential crisis, whereas, had Bush done the same as part of the War on Terror, which was unambiguously not an existential crisis, most people would say it was uncalled for. Similarly, Eisenhower's own mass deportation campaign was substantially more intensive than anything done by more recent presidents, but it did not face massive, highly-organized <protests / riots, depending on your party affiliation> intended to impede operations, so the question "what measures are precedented to defend immigration enforcement operations?" hasn't yet been answered.
As awkward of a solution as it is, I would cast my ballot in favor of operating on hard metrics alone, simply because operating on 'vibes' opens the door to a lot of bad things that neither facilitate nor, arguably, permit rational discussion. In this world, trade flows, GDP growth (normalized for population and inflation however you see fit) and deficits/surpluses would be used to determine whether something extreme had happened in the realm of international trade, for instance. Incarceration rate, crime victimization rate, or the rate at which police encounters have violent outcomes would be used to determine whether something extreme had happened in the realm of criminal justice. This has the benefit of everyone agreeing on whether something happened, and what happened, if something did, thus allowing a more formal conversation on why it happened.
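As a minimal sketch of the "hard metrics alone" approach: one simple decision rule is to flag a year as "something extreme happened" when a metric deviates from its recent history by more than some threshold of standard deviations. The threshold `k=2.0` and the sample numbers below are invented for illustration, not real data.

```python
# Hedged sketch: operationalizing "did something extreme happen?" with a
# hard metric. Flag the latest value if it sits more than k standard
# deviations from the mean of the trailing history.
from statistics import mean, stdev

def is_extreme(history, latest, k=2.0):
    """Return True if `latest` is more than k standard deviations
    away from the mean of the trailing `history` values."""
    if len(history) < 2:
        return False  # not enough data to estimate the spread
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu  # flat history: any change counts
    return abs(latest - mu) / sigma > k

# Invented numbers for illustration: a metric over the prior decade.
past = [655, 650, 639, 642, 631, 628, 620, 615, 610, 605]
print(is_extreme(past, 608))  # False: within the normal range
print(is_extreme(past, 750))  # True: a sharp break from the trend
```

The point of a rule like this is exactly the one made above: everyone can agree on *whether* the flag fired before arguing about *why*.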
A less strict heuristic would be to conceptualize a world in which a legally-equivalent conflict was taking place in the opposite direction, and see if the emotional reaction it elicits is the same or different. I encourage this for political conversations elsewhere, but it's difficult to reliably evaluate whether someone is doing so in good faith, so it's hard to recommend it as a broader policy on a 'hard mode' topic.
On general American decline, there is a recent article by Noah Smith on capital flight away from America. It's quantitative, and it's a Trump-second-term phenomenon. There is also the number of meetings between Trump and traditionally independent agencies like the FBI and the Department of Justice; I believe those numbers have exploded in this term. The number of probes into elected and appointed officials has definitely also exploded this term. There have also been a large number of cases where grand juries declined to indict because the charges were so clearly vindictive and false.
I've often thought about using LLMs to analyze and compare speeches. You could, for example, look at the ratio of the length of the most information-dense summary to the length of the speech. Also, in the wake of tragedies, presidents have historically been consistent in calling for unity. Most people, left or right, would agree that's a good historical precedent that's still common practice, and it's a practice Trump has clearly broken. An LLM could simply count the number of times he calls the other side evil and compare that with past responses to tragedies. In his speeches, Trump also makes it very clear that he rewards loyalty to Trump personally, not to abstract ideals like the rule of law. Those are Trump-specific norm erosions.
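A crude sketch of the counting idea: before reaching for an LLM judge, you could baseline with plain keyword counting and compare rates across speeches. The term list and speech snippets below are invented placeholders; a real pipeline would ask an LLM to classify each sentence instead of matching words.

```python
# Hedged sketch: keyword counting as a stand-in for LLM classification
# of "calls the other side evil". Terms and texts are illustrative only.
import re

HOSTILE_TERMS = {"evil", "enemy", "vermin"}  # illustrative, not exhaustive

def hostile_term_rate(speech: str) -> float:
    """Hostile terms per 100 words -- a crude proxy; an LLM judge
    would replace this function in a real comparison."""
    words = re.findall(r"[a-z']+", speech.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in HOSTILE_TERMS)
    return 100.0 * hits / len(words)

unity_speech = "We must come together as one nation and heal."
divisive_speech = "Our enemy is evil and must be crushed."
print(hostile_term_rate(unity_speech))    # 0.0
print(hostile_term_rate(divisive_speech)) # 25.0
```

Running the same scorer over past presidents' post-tragedy speeches would give the historical baseline to compare against.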
Potential metrics
These metrics may not be available farther into the past.
The problem with this idea is that the metrics are cherry-picked in order to make Trump look bad. And even then, what they actually measure is how much Trump is in conflict with the rest of the government, not how much we should be scared of him.
Yeah, I can see that. Unfortunately, "how much should we be scared of him" doesn't have direct metrics, so we have to come up with proxies of some kind.
Do you have any suggestions that don't feel cherry-picked? What we likely need is input from some good historians. I don't think I can escape my recency bias well enough.
"Supporters counter that Trump's actions are either completely precedented, or..."
Um, I thought the selling point of Trump was precisely that the institutions of the permanent education-media-administrative state are corrupt, and that Trump is going to fight them. Claims that Trump II is business-as-usual are probably political maneuvering that should not be taken literally. (They know Trump isn't business-as-usual, but they don't want to say that part out loud, because making it common knowledge would disadvantage their side in the war against the institutions they're trying to erode.)
That seemed ... like it was approaching a methodology that might actually be cruxy for some Trump supporters or Trump-neutral-ers.
No? The pretense that media coverage is "neutral" rather than being the propaganda arm of the permanent education-media-administrative state is exactly what's at issue.
AIs are known to have political bias in favor of the left. Using AIs to get information about Trump won't produce fair results. Also, AIs are trained on the Internet and the media, which bakes media bias into the AI's results. The AI will inherently trust the media's side of things, and will rarely tell you that the media blew up an unimportant incident or left things out.
In some recent posts, some people have been like “Wait why is there suddenly this abrupt series of partisan LW posts that are taking for granted there is a problem here that is worth violating the LW avoid-most-mainstream-politics norm?”.
Like TV shows that point out their own plot holes, saying "look, my post is violating the rules" doesn't excuse violating the rules.
There's an abrupt series of partisan posts because partisans like to think their enemies are the worst people ever--so bad that they can violate all the norms they want in order to get their enemies, including norms about avoiding politics. And when you stand to gain from thinking your enemies are the worst people ever, there's a lot of motivated reasoning going around.
There's nothing physically impossible about having a president that's substantially worse on overreach/corruption/degradation of institutions, so that has to be in the hypothesis space. We'd expect their defenders to say the accusations were overblown/unfair no matter what, so that's not evidence. And we'd expect their opponents to make the accusations no matter what, so that's not evidence either.
Given that, how do you propose to distinguish the worlds where the president is genuinely much worse than average, vs. one where they're completely precedented or following a trend line?
Critics of Trump often describe him as making absolutely unprecedented moves to expand executive power, extract personal wealth, and impinge on citizens’ rights. Supporters counter that Trump’s actions are either completely precedented, or are the natural extension of existing trends that the media wouldn’t make a big deal over if they didn’t hate Trump so much.
In some recent posts, some people have been like "Wait why is there suddenly this abrupt series of partisan LW posts that are taking for granted there is a problem here that is worth violating the LW avoid-most-mainstream-politics norm?".
My subjective experience has been "well, most of my rationalist colleagues and I have spent the past 15 years mostly being pretty a-political, were somewhat wary but uncertain about Trump during his first term, and the new set of incidents just seems... pretty unprecedentedly and scarily bad?"
But, I do definitely live in a bubble that serves me tons of news about bad-seeming things that Trump is doing. It's possible to serve up dozens or hundreds of examples of a scary thing per day, without that thing actually being objectively scary or abnormal. (See: Cardiologists and Chinese Robbers)
Elizabeth and I wanted to get some sense of how unusual and how bad Trump’s actions are. “How bad” feels like a very complex question with lots of room for judgment. “How unusual” seemed a bit more likely to have an ~objective answer.
I asked LLMs some basic questions about it, but wanted a more thorough answer. I was about to spin up ~250 subagents to go run searches on each individual year of American history, querying for things like “[year] [president name] ‘executive overreach’” or “[year] [president name] ‘conflict with Supreme Court’”, and fill up a CSV with incidents.
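A minimal sketch of that fan-out, assuming the shape described above (per-year queries, results collected into a CSV). The year-to-president mapping here is a tiny illustrative slice, and the actual subagent search step is left as a comment since the harness isn't specified.

```python
# Hedged sketch: generate one search query per (year, president, topic)
# and lay out a CSV skeleton for subagents to fill in with incidents.
import csv

# Tiny illustrative slice of the full year -> president mapping.
PRESIDENTS = {
    1861: "Abraham Lincoln",
    1862: "Abraham Lincoln",
    1953: "Dwight Eisenhower",
}

def build_queries(year: int, president: str) -> list[str]:
    """The query templates named in the post, filled per year."""
    return [
        f'{year} {president} "executive overreach"',
        f'{year} {president} "conflict with Supreme Court"',
    ]

with open("incidents.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["year", "president", "query", "incident", "source"])
    for year, president in sorted(PRESIDENTS.items()):
        for query in build_queries(year, president):
            # A subagent would run `query` here and append what it finds.
            writer.writerow([year, president, query, "", ""])
```

With the full mapping, the same loop produces the ~250-agent workload: one row group per year, each row a query for a subagent to resolve.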
That seemed… like it was approaching a methodology that might actually be cruxy for some Trump supporters or Trump-neutral-ers.
It seemed like maybe good practice to ask if there were any ways to operationalize this question that’d be cruxy for anyone else. And, generally pre-register it before running the query, making some advance predictions.
Each operationalization I’ve thought of so far seems a bit confused/wrong/incomplete. I feel okay with settling for “the least confused/wrong options I can come up with after a day of thinking about it," but, I'm interested in suggestions for better ones.
Some examples so far that feel like they're at least relevant:
My own personal goal here is not just to get a bunch of numbers, but, to also get a nice set of sources for examples that people can look over, read up on, and get a qualitative sense of what's going on.
These questions all have the form "checking if allegations by detractors about Trump are true", which isn't necessarily the frame by which someone would defend Trump, or the right frame for actually answering the question "is the US in a period of rapid decline in a way that's a plausible top priority for me or others to focus on?"
I'm interested in whether people have more suggestions for questions that seem relevant and easy to check. Or, suggestions on how to operationalize fuzzier things that might not fit into the "measure it per year" ontology.
Appendix: Subagents Ahoy
A lot of these are recorded in places that are pretty straightforward to look up. E.g., there are already lists of pardons per president and executive orders per president.
But, I have an AI-subagent process I'm experimenting with that I expect to use for at least some of these, which currently goes something like:
I have a pet theory about leaning on exact quotes rather than summaries, to avoid having to trust the AI's summarization.