This is deeply unconvincing. We didn't have a great power war in the 60s or the 70s because that would have meant nuclear war. High-level US government officials in internal documents describe Russia as an existential threat. Russian government documents, as I understand it, reflect terror of American willingness to use nukes. We haven't had a war between the US and China yet, but estimates of that holding true over the next five years are less confident than I'd like.
"Most wars have ultimately been fought over land because land determines food production and food production was a matter of life and death."
It seems like you're explaining the actions of kings with the preferences of peasants (and I am very unconvinced that a victorious war was better for the average peasant than peace), and I don't see that as particularly persuasive.
Priors are relative to how much evidence can be shared. There may not be agreement in a single conversation, but they should expect movement towards a common belief, though there are degenerate counter-cases. For example, perhaps both parties share a base rate and have different pieces of information that push in the same direction.
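The degenerate case above can be made concrete in log-odds form, where independent pieces of evidence add. A minimal sketch (the specific numbers are illustrative assumptions of mine, not from the example):

```python
import math

def log_odds(p):
    # Convert a probability to log-odds; independent evidence adds in this space.
    return math.log(p / (1 - p))

def prob(lo):
    # Convert log-odds back to a probability.
    return 1 / (1 + math.exp(-lo))

base = log_odds(0.5)   # shared base rate: log-odds 0
evidence_a = 1.0       # A's private log-likelihood ratio (illustrative)
evidence_b = 0.5       # B's private log-likelihood ratio (illustrative)

posterior_a = base + evidence_a   # A alone: prob(1.0) ~ 0.73
posterior_b = base + evidence_b   # B alone: prob(0.5) ~ 0.62

# After full disclosure both hold the same posterior -- they do converge --
# but the common belief is more extreme than either held individually,
# because both pieces of evidence pushed in the same direction.
shared = base + evidence_a + evidence_b   # prob(1.5) ~ 0.82
```

Both parties end at a common belief, but each moves further in the shared direction rather than toward the other's original position.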
I think the reason I don't see a lot of arguments against anti-vaxxers is that I don't know of any. I think the reason anti-vaxxers are derided more often than average is that flat-earthers are parsed as harmless while anti-vaxxers are parsed as doing harm. I think I'm not quite following what you're saying.
There's not a hard cutoff between 2005, when Ioannidis published, and the present, but I've worked on multiple systematic reviews, going over thousands of papers, and there's a visible improvement in quality over time. That seemed like a reasonable date for "replication crisis attention is high."
This seems like an excessively general question to me. Yes, because we've gotten richer over time. No, because there is still suffering. Can you drill down into specifics?
Instead, we should tax the difference between what you earned and what anyone could have made by just putting the same amount of money in a savings account. That is, we should tax the stuff that is actually income, i.e. when you are actually doing work, taking risks, or exploiting connections.
I think "savings account" is an underdefined term here, which causes serious problems. "Doing work" and "taking risks" seem like income, and I see the argument for taxing them accordingly. But does "taking risks" mean US Treasury bonds (which have a risk of default)? Broad market indices? Employee stock options? I would say that market returns overall don't count as income in a meaningful sense, but that is very debatable.
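As one concrete reading of the proposal being discussed, taxing only the return in excess of the savings-account baseline looks like this (the 4% baseline rate and the dollar amounts are my own illustrative assumptions, not from the post):

```python
def taxable_excess(principal, final_value, baseline_rate):
    """Tax base under the proposal: only the return above what the same
    principal would have earned at the baseline ('savings account') rate."""
    baseline_gain = principal * baseline_rate
    actual_gain = final_value - principal
    return actual_gain - baseline_gain

# Invest $1,000; it grows to $1,150 over a year.
# At an assumed 4% baseline rate, a savings account would have earned $40,
# so only $110 of the $150 gain counts as taxable income.
excess = taxable_excess(1_000, 1_150, 0.04)
```

The ambiguity flagged above is exactly the choice of `baseline_rate`: a Treasury yield, a broad-market return, or something else entirely.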
I think an advantage of carry-forward is that someone can't get paid their marginal income tax rate for losing money. Marginal income tax currently caps at 50.3% in California (state + federal), while long-term capital gains tax is 20%. There are a lot of accounting shenanigans that make sense at a 45% rebate but not at a 5% rebate (approximating 10% annual returns and a loss realized halfway through the year, since carry-forward takes at least a year to come into effect, assuming it is profits in the subsequent year that are at play; but even 15% vs. 45% seems like more than enough room).
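One way to make the rebate comparison concrete, as a sketch under the comment's stated approximations (10% annual returns, the carry-forward offsetting profits roughly a year after the loss); the present values here are my own reading, not the author's exact figures:

```python
ANNUAL_RETURN = 0.10          # discount rate, per the comment's approximation
MARGINAL_INCOME_RATE = 0.503  # top combined CA state + federal marginal rate
LTCG_RATE = 0.20              # long-term capital gains rate

# Hypothetical immediate deduction against ordinary income:
# the full marginal rate is recovered at the time of the loss.
pv_immediate = MARGINAL_INCOME_RATE

# Carry-forward: the loss offsets the *next* year's capital gains, so the
# 20% rebate arrives roughly a year later and is discounted accordingly.
pv_carry_forward = LTCG_RATE / (1 + ANNUAL_RETURN)
```

That works out to roughly 50% versus 18% recovered per dollar of loss, the same order of gap as the 15%-vs-45% range the comment treats as more than enough room for shenanigans.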
In advance of other comments:

1. Declining marginal utility of specific goods and non-uniform initial distributions of goods over people (this one matters).
2. There is a finite length of production chain that one person can accomplish, since some things would take longer than an entire human lifetime to produce. Suppose I luck into a massive amount of unobtanium, with perfect property rights. To some extent this is 1, but I might also desire goods that I could not produce in an individual lifetime, and acquire them by trade (this one isn't that critical).
3. Building on 2, some goods are only possible at the end of extremely expensive or complex production chains: effectively, high startup costs imply that not everyone should produce a good. For example, a small fraction of an airplane is not nearly as valuable as the whole entity, and there are small enough fractions that are worthless as airplanes. (You might consider this solved by Star Trek transporters, which deal with a similar problem, and thus consider the class of problems solved. That said, I consider this a key source of gains from trade.)
4. There's a more pedantic point that as an economist I feel obligated to make: increasing marginal utility of two goods over a relevant range implies gains from trade (this one doesn't matter for most understandings of trade).

After reading other comments: Jsevillamol has a good point about insurance that I missed. Villam has a thoughtful point about R&D that I think is covered by the Matrix but arguably counts. Measure's point about shared equipment relates to time-sharing and startup costs, though framed differently.
This seems very questionable: "does X matter?" is comparable to "is X vs not-X worth the cost of investigation?" If I'm constrained by resource limitations, and trying to acquire as much knowledge as I can given that, the ability to dismiss some answers as unimportant is critical.
I would replace The Republic with a good Micro textbook for anyone who hasn't read one. A solid grasp of the mental framework underlying Intro Micro is useful for anyone trying to parse society: not as the only framework, but as an introduction to the fact that there are competing and incompatible mental frameworks. Following that, I would replace your book with an IR textbook, which will necessarily cover different and competing theories. These will be shallow introductions to complex thoughts, but they will give the reader the critical chance to test the competing and incompatible theories against each other, and to realize that no one theory has a convenient answer for all problems. The discussions of constructivism and the responses to the neo-neo "debate" will help recontextualize and better understand the micro textbook. Reading these books (if you have no idea what to go for, I'd recommend Cowen and Tabarrok for Micro and Introduction to International Relations: Theories and Approaches by Jackson and Sorenson for IR) will give you a broad overview of many theories, including modern ones informed by modern data.
I think you're right, but I'll start by assuming you're wrong, because I have an alternative explanation for those who disagree with you (one which I think is the most convincing if we assume the signalling explanation isn't correct). I think Eukaryote is missing one important cause. Assume that most writers arguing for caring more or less about a cause are doing so because they believe this is an important way to serve that cause. Particularly outside our community, people rarely write about causes just for intellectual entertainment. "Everything is signalling" is a valid response, but I'll first reply to the rational-actor case, since that informs the limits and types of signalling. If I am writing to people who are "broadly value aligned", an admittedly imprecise term, I tend to expect that they are not opposed to me on the topics I think are most important. I expect most (85% if I'm optimistic) reading to happen when the reader is broadly value aligned with the author, at least with respect to the topic of the piece.
If someone cares less about something, I might value that directly (because I dislike the results of them caring), and I might value that indirectly (because I expect effort they take away from the target of my writing to go towards other causes that I value). However, conditional on broad value alignment, the causes that my readers care passionately about are not causes I'm opposed to, and the causes that I care passionately about are not ones that they're opposed to. So direct benefit, except in writing that is explicitly trying to convince people "from the other side", will rarely motivate me to try to make people care less.
Most communities have more than 3-4 potential cause areas. One specific friend of mine will physically go to events to support gun control, gender equality, fighting racism, homelessness prevention, Palestine, Pride, her church, abortion rights, and other topics. If I make her less confident that gun control is an effective way of reducing violence, her freed-up efforts will be split fairly broadly. It is unlikely that whatever topic I find most important, or even whatever bundle of topics I find most important, will receive much marginal support. EAs are relatively unusual in that deactivating someone on one cause area has a high expected effect on specific other cause areas.