Neuroscientist Tal Yarkoni denounces the tendency of many of his colleagues to appeal to publish-or-perish incentives as an excuse for sloppy science (October 2018, ~4600 words). Perhaps read as a complement to our recent discussion of Moral Mazes?


There is a lot of arguing in the comments about what the 'tradeoffs' are for individuals in the scientific community and whether making those tradeoffs is reasonable. I think what's key in the quoted article is that fraudsters are trading away so much for so little. They are actively obscuring and destroying scientific progress while contributing to the norm of obscuring and destroying scientific progress. Potentially preventing cures for diseases, time- and life-saving technology, etc. This is REALLY BAD. And for what? A few dollars and an ego trip? An 11% instead of a 7% chance at a few dollars and an ego trip? I do not think it is unreasonable to judge this behavior as reprehensible, regardless of whether it is the 'norm'.

Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value. If 100% of scam artists steal people's money, I don't forgive a scam artist for stealing less money than the average scam artist. They are not 'making things better' by in theory reducing the average amount of money stolen per scam artist. They are still stealing money. DO NOT BECOME...

Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value.

This seems to imply that you think that the world would be better off without academia at all. Do you endorse that?

Perhaps you only mean that if the world would be better off without academia at all, and nearly everyone in it is net negative / destroying value, then no one could justify joining it. I can agree with the implication, but I disagree with the premise.

7 · Ben Pace · 4y
I mean, there's a spectrum here. At what point should you avoid joining an institution out of principle? Only when the institution is net negative? I think that if your primary goal is to fix the institution, then it can still be right to join one even if it's net negative.

But if you find out that your institution has broken its promises and principles, even if it's mostly moving in a positive direction, and especially if it's crowding out others doing the work (indeed, it's hard to get respect as a scientist in modern society if you don't have a PhD), then I think that can be the sign to leave in protest and not cooperate with it, even while it's locally net positive and there isn't a clear alternative.

(This reminds me of Zvi unilaterally leaving Facebook, even though Facebook has a coordination advantage. I'm glad Zvi left Facebook, and it helped me leave Facebook, but I couldn't have predicted that very directly at the time, and I don't think Zvi was in a position to either. Naive consequentialism is very difficult, because modelling the effects of your social decisions is incredibly hard, and one of the key advantages of deontological recommendations is that they have you do the right thing even when you're not in a position to compute all its effects.)

It seems like a fair hypothesis to me that academia has lied enough about whether it knows the truth, whether it has privileged access to truth, and whether its people are doing good work, that it should not be 'joined' but instead 'quit, and get to work on an alternative'.
4 · Rohin Shah · 4y
Tbc, in the grandparent I was responding to the specific sentence I quoted, which seems to me to be making a bold claim that I think is false. It's of course possible that the correct action is still "leave academia", but for a different reason, like the one you gave.

Re: quitting to work on an alternative: that depends pretty strongly on what the alternative is. Suppose your goal is more investigation of speculative ideas that may or may not pan out, so that humanity figures out true and useful things about the world. It's not clear to me that you can do significantly better than current academia, even if you assume that everyone will switch from academia to your new institution.

And of course, people in academia are selected for being good at academic jobs, and may not be good at building institutions. Or they may hate all the politicking that would be required for an alternative. Or they might not particularly care about impact, and just want to do research because it's fun. All of which are reasons you might join academia rather than quit and work on an alternative, and it's "morally fine".

I do not like this post. I think it gets most of its rhetorical oomph from speaking in a very moralizing tone, with effectively no data, and presenting everything in the worst light possible; I also think many of its claims are flat-out false. Let's go through each point in order.

1. You can excuse anything by appealing to The Incentives

No, seriously—anything. Once you start crying that The System is Broken in order to excuse your actions (or inactions), you can absolve yourself of responsibility for all kinds of behaviors that, on paper, should raise red flags. Consider just a few behaviors that few scientists would condone:

  • Fabricating data or results
  • Regularly threatening to fire trainees in order to scare them into working harder
  • Deliberately sabotaging competitors’ papers or grants by reviewing them negatively

Wow, that would be truly shocking; indeed that would be truly an indictment of academia. What's the evidence?

When Diederik Stapel confessed to fabricating the data used in over 50 publications, he didn’t explain his actions by saying “oh, you know, I’m probably a bit of a psychopath”; instead, he placed much of the blame squarely on The Incentives:

... Did you expect people who...

I agree with most of this review, and also didn't really like this post when it came out.

I think the first one could plausibly be a reason that we would want to promote this on LW. Unfortunately, I think it is wrong: I do not think that people should usually take upon themselves the burden of bucking bad incentives. There are many, many bad incentives in the world; you cannot buck them all simultaneously and make the world a better place. Rather, you need to conform to the bad incentives, even though it makes your blood boil, choose a select few areas in which you are going to change the world, and focus on those.

Just for the record, and since I think this is actually an important point, my perspective is that indeed people cannot take on themselves the burden of bucking all bad incentives, but that there are a few domains of society where not following these incentives is much worse than in others, and where I currently expect the vast majority of contributors to be net-negative participants because of those incentives (and as such, establishing standards of "deal with it or leave" is a potentially reasonable choice).

I think truth-seeking institutions are one of thos...

Note that this review is not of the content that was nominated; the nomination justifications strongly suggest that the comment section, not the linkpost, was nominated.
I think the comments are in large part about the post, though, and it matters a lot whether the post is wrong or misleading. I also think that, while this post wouldn't be eligible for the 2019 Review, an important point of the overall review process is still to have a coordinated time where everyone evaluates posts that have permeated the culture. I think this review is quite valuable along those lines.
That’s fair; I wasn’t disparaging the usefulness of the comment, just pointing out that the post itself is not actually what’s being reviewed. That matters, because it means a low-quality post that sparks high-quality discussion isn’t disqualifying.
4 · Rohin Shah · 3y
"As I read it, two of the nominations are for the post itself, and one is for the comments"... is what I was going to say until I checked and saw that this comment is a review, not a nomination. So one is for the post, and one for the comments.

I agree with Raemon that even if the nomination is for the comments, evaluating the post is important. I actually started writing a section on the comments, but didn't have that much to say, because they all seem predicated on the post stating something true about the world.

The highest-voted top-level comment, as well as Zvi's position in this comment thread, seem to basically be considering the case where academia as a whole is net negative. I broadly agree with Zvi that it is not acceptable for an academic to go around faking data; if that were the norm in academia I expect I would think that academia was net negative and one could not justify joining it (unless you were going to buck the incentives). But... that isn't the norm in academia. I feel like these comments are only making an important point if you actually believe the original post, which I don't.

The other comments seem to have only a little content, or to be on relatively tangential topics.
That’s a fair point; see my comment to Raemon. The way I read it, the mod consensus was that we can’t just curate the post, meaning that the comments are essentially the only option. To me, this means an incorrect or low-quality post isn’t disqualifying, which doesn’t decrease the utility of the review, just changes the frame under which it should be interpreted.

The discussion around "It's Not the Incentives, It's You" was pretty gnarly. I think at the time there were some concrete, simple mistakes I was making. I also think there were 4-6 major cruxes of disagreement between me and some other LessWrongers. The 2019 Review seemed like a good time to take stock of that.

I've spent around 12 hours talking with a couple of people who thought I was mistaken and/or harmful last time, and then 5-10 hours writing this up. And I don't feel anywhere near done, but I'm reaching the end of the timebox, so here goes.

Core Claims

I think this post and the surrounding commentary (at least on the “pro” side) was making approximately these claims:

A. You are obligated to buck incentives. You might be tempted sometimes to blame The Incentives rather than take personal responsibility for various failures of virtue (epistemic or otherwise). You should take responsibility. 

B. Academia has gotten more dishonest, and academics are (wrongly) blaming “The Incentives” instead of taking responsibility.

C. Epistemics are the most important thing. Epistemic Integrity is the most important virtue. Improving societal epistemics is the top cause area. 

  • Possible stronger claim:
...

Personal Anecdote:

"It wasn't the Incentives. It was me." 

The forceful, moralizing tone of the article was helpful for me to internalize that I need the skill of noticing, and then bucking, incentives.

Just a few days ago, on Dec 31st, I found myself trying to rush an important blogpost out before 2020 ended, so it could show up next year in the 2020 LW Review. I found myself writing to some people, tongue-in-cheekly saying “Hey guys, um, the incentives say I should try to publish this today. Can you give feedback on it, and/or tell me that it’s not ready and I should take more time?”

And… well, sure I can hide behind the tongue-in-cheekness. And, "Can you help review a blogpost?" is a totally reasonable thing to ask my friends to do. 

But, also, as I clicked ‘send’ on the email, I felt a little squirming in my heart. Because I knew damn well the post wasn’t ready. I was just having trouble admitting it to myself because I’d be sad if it were delayed a year from getting into the next set of LW Books. And this was a domain where I literally invented the incentives I was responding to.

It was definitely not the Incentives, It Was Me.

I still totally should have asked my fri...

Framing disagreements

Cognitive processes vs right answers; Median vs top thinkers

My frame here is “what cognitive strategy is useful for the median person to find the right answers”. 

I think that people I’ve argued against here were focused more directly on “What are the right answers?” or “What should truthseekers with high standards and philosophical sophistication do?”. 

I expect there to be a significant difference between the median academic and the sort of person participating in this conversation.

I think the median academic is running on social cognition, which is very weak. Fixing that should be their top priority. I think fixing that is cognitively very different from “not being academically dishonest.” (Though this may depend somewhat on what sort of academic dishonesty we’re talking about, and how prevalent it is.)

I think the people I’ve argued with probably disagree about that, and maybe think that ‘be aligned with the truth’ is a central cognitive strategy that is entangled across the board. This seems false to me for most people, although I can imagine changing my mind.

Arranging coordinated-efforts-that-work (i.e. Stag Hunts) is the most important thing, most ...

4 · Rohin Shah · 3y
To the extent that your summary of the "pro" case is accurate, particularly "Epistemics are the most important thing", I find it deeply ironic and sad that all of the commentary, besides one comment from Carl Shulman (and my own), seems to be about what people should do, rather than what is actually true. One would hope that people pushing "epistemics are the most important thing" would want to rely on true facts when pushing their argument.
There are a few more threads I ideally want to write here about what I think was going on in here. I'm not 100% sure whether I endorse your implied argument but think there was something to unravel here in the space you're pointing at.

I can't upvote this strongly enough. It's the perfect followup to discussion and analysis of Moloch and imperfect equilibria (and Moral Mazes) - goes straight to the heart of "what is altruism?" If you're not taking actions contrary to incentives, choosing to do something you value that "the system" doesn't, you're not making moral choices, only economic ones.

I wouldn't call them "economic" actions/decisions. How to do things at a concrete level is about what you want: the altruist may raise money for a charity, and the selfish may act in their own (view of) self-interest, say, to accumulate money or whatever else they value. The difference isn't that the moral don't act economically; it's that they act economically with regard to something else.
I see what you mean, but there's a tendency to think of 'homo economicus' as having perfectly selfish, non-altruistic values. Also, quite aside from standard economics, I tend to think of economic decisions as maximizing profit. Technically, the rational agent model in economics allows arbitrary objectives. But what kinds of market behavior should you really expect?

When analyzing celebrities, it makes sense to assume rationality with a fame-maximizing utility function, because the people who manage to become and remain celebrities will, one way or another, be acting like fame-maximizers. There's a huge selection effect. So Homo Hollywoodicus can probably be modeled well with a fame-maximizing assumption. This has nothing to do with the psychology of stardom. People may have all kinds of motives for what they do -- whether they're seeking stardom consciously or just happen to engage in behavior which makes them a star.

Similarly, when modeling politics, it is reasonable to make a Homo Politicus assumption that people seek to gain and maintain power. The politicians whose behavior isn't in line with this assumption will never break into politics, or at best will be short-lived successes. This has nothing to do with the psychology of the politicians. And again, evolutionary game theory treats reproductive success as utility, despite the many other goals which animals might have.

So, when analyzing market behavior, it makes some sense to treat money as the utility function. Those who aren't going for money will have much less influence on the behavior of the market overall. Profit motives aren't everything, but other motives will be less important than profit motives in market analysis.
I think of economic decisions in terms of visible/modeled tradeoffs including time- and uncertainty-discounted cost/benefit choices. Moral decisions are this, plus hard-to-model (illegible) values and preferences. I acknowledge that there's a lot of variance in how those words are used in different contexts, and I'm open to suggestions on what to use instead.
In the case you referenced, "selfish" or "short sighted", depending on what you were going for, seem to fit. I very much agree with this part.

Very nice. A few notes:

1. Wrong incentives are no excuse for bad behaviour; people should quit their jobs rather than engage in it.

2. The world isn't black and white; sometimes there is a gray zone where you contribute enough to be net-positive while cutting some corners to get your contribution accepted.

3. People tend to overestimate their contribution and underestimate the impact of their behaviour, which makes point 2 quite dangerous.

4. In an environment with sufficiently strong wrong incentives, only those with weak morals survive. Natural selection.

5. There is a lot of truth in Taleb's position that research should not be a source of your income, but rather a hobby.

Is this specific to research? Given unaligned incentives and Goodhart, I think you could make an argument that _nothing important_ should be a source of income. All long-term, values-oriented work should be undertaken as hobbies. (Note - this is mostly a reductio argument. My actual opinion is that the split between hobby and income is itself part of the incorrect incentive structure, and there's no actual way to opt out. As such, you need to thread the needle of doing good while accepting some incentives and rejecting others.)

Is this specific to research? Given unaligned incentives and Goodhart, I think you could make an argument that nothing important should be a source of income. All long-term, values-oriented work should be undertaken as hobbies.

This is an interesting argument for funding something like the EA Hotel over traditional EA orgs.

If the EA Hotel is easily confirmed as real, as in it is offering what it claims it is offering at a reasonable quality level at the price it claims to be offering that thing, then I am confused why it has any trouble being funded. This is yet another good reason for that. I understand at least one good reason why there aren't more such hotels - actually doing a concrete physical world thing is hard and no one does it.

There's been a great deal of discussion of the EA Hotel on the EA Forum. Here's one relevant thread:

Here's another:

It's possible the hotel's funding troubles have more to do with weirdness aversion than anything else.

I personally spent 6 months at the hotel, thought it was a great environment, and felt the time I spent there was pretty helpful for my career as an EA. The funding situation is not as dire as it was a little while ago. But I've donated thousands of dollars to the project and I encourage others to donate too.

6 · Samuel Hapák · 5y
Some important things can be a source of income, such as farming. Farming is pretty important, and there are no huge issues with farmers doing it for profit. Problems happen when there is a huge disconnect between the value and the reward. This happens in basic research a lot, because researchers don't have any direct customers. Arguably, in basic research you can't have customers at all. Your customers are the future researchers who will build on top of your research. They would be able to decide whether your work was valuable or whether it was crap, but you'd be pretty old or dead by then.
As a synthesis of points 1 and 4: it is both the incentives and you. The incentives explain why the game is so bad, but you have to ask yourself why you still keep playing it. A researcher with more personal integrity would avoid the temptation/pressure to do sloppy science... and perhaps lose their job as a result. The sloppy science itself would remain, only done by someone else.
In that case, might as well go into something better-paying.
Well from a consequentialist perspective, if people with a stronger desire for scientific integrity self-select out of science, that makes science weaker in the long run. I think a more realistic norm, which will likely create better outcomes, is for you personally to ensure that your work is at least in the top 40% for quality, and castigate anyone whose work is in the bottom 20%. Either of these practices should cause a gradual increase in quality if widely implemented (assuming these thresholds are tracked & updated as they change over time).

Is that necessarily so? Today, science means you spend a considerable portion of your time doing bullshit instead of actual research. Wouldn't you be in a much better position to do quality research if you were earning a good salary, saving a big portion of it, and doing science as a hobby?

It's possible. That's what I myself am doing--supporting myself with a part-time job while I self-study and do independent FAI research. However, it's harder to have credibility in the eyes of the public on this path. And for good reason--the public has no easy way to tell a crank apart from a lone genius, since it's hard to judge expertise in a domain unless you yourself are an expert in it. One could argue that academia acts as a reasonable approximation of eigendemocracy and thereby solves this problem. Anyway, if the scientists with credibility are the ones who don't care about scientific integrity, that seems bad for public epistemology.

Note that Wei Dai also notes that he chose exit from academia, as did many others on Less Wrong and in our social circles (combined with surprising non-entry).

If this is the model of what is going on, that quality and useful research is much easier without academia, but academia is how one gains credibility, then destroying the credibility of academia would be the logical useful action.

I think you have to do a lot more to demonstrate this. Did you read Scott Alexander's recent posts on cultural evolution? If the credibility of academia is destroyed, it's not obvious something better will come along to fill that void. Why is it better to destroy than repair? Plus, if something new gets created, it will probably have its own set of flaws. The more pressure is put on your system (in terms of funding and status), the greater the incentive to game things, and the more the cracks will start to show. I suggest instead of focusing on the destruction of a suboptimal means for ascertaining credibility, you focus on the creation of a superior means for ascertaining credibility. Let's phase academia out after it has been made obsolete, not before.
3 · Samuel Hapák · 5y
Academia in its current form isn’t Lindy. It’s not like we’ve been doing this for thousands of years. The current system of academia is at most 70 years old.
The broader institutions around academia have been around since at least the Royal Society, which was founded in 1660. That's usually the age I would put the rough institutions surrounding academia.
2 · Samuel Hapák · 5y
The Royal Society in 1660 and current academia are very different beasts. For example, the current citations/journals game is a pretty new phenomenon. Peer review wasn’t really a thing 100 years ago. Neither were complex grant applications.

I thought peer review had always been a core part of science in some form or another. I think you might be confusing external peer review with editorial peer review. As this Wikipedia article says:

The first record of an editorial pre-publication peer-review is from 1665 by Henry Oldenburg, the founding editor of Philosophical Transactions of the Royal Society at the Royal Society of London.[2][3][4]
The first peer-reviewed publication might have been the Medical Essays and Observations, published by the Royal Society of Edinburgh in 1731. The present-day peer-review system evolved from this 18th-century process,[5] began to involve external reviewers in the mid-19th-century,[6] and did not become commonplace until the mid-20th-century.[7]
Peer review became a touchstone of the scientific method, but until the end of the 19th century was often performed directly by an editor-in-chief or editorial committee.[8][9][10] Editors of scientific journals at that time made publication decisions without seeking outside input, i.e. an external panel of reviewers, giving established authors latitude in their journalistic discretion. For example, Albert Einstein's four revolutionary Annus Mirabil
...
3 · Samuel Hapák · 5y
It's a huge difference whether the reviewer is some anonymous person unrelated to the journal or the editor-in-chief of the journal itself. I don't think it's appropriate to call the latter peer review (there are no "peers" involved), but that's not important. An editor-in-chief has a strong motivation to maintain a good-quality journal: if he rejects a good article, it's his loss. An anonymous peer, on the contrary, has a stronger motivation to use the review as an opportunity to promote (get citations for) his own research than to help the journal curate the best science.

Let me try to rephrase the shift I see in science. Over the 20th century, science became bureaucratized; the process of "doing science" was largely formalized and standardized. Researchers obsess over impact factors, p-values, h-indexes, anonymous peer reviews, grants... There are actual rules in place that formally determine whether you are a "good" scientist. That wasn't the case over most of the history of science. Also, the "full-time" scientist who never did any job other than academic research was much less common in the past. Take Einstein as an example.
Oh, I think we both definitely agree that science has changed a lot. I do also think that it still very clearly has maintained a lot of its structure from its very early days, and to bring things back to John's top level point, it is less obvious that that structure would redevelop if we were to give up completely on academia or something like that.

I disagree with most of the post and most of the comments here. I think most academics are not explicitly committing fraud, but bad science results anyway. I also think that for the vast majority of (non-tenured) academics, if you don't follow the incentives, you don't make it in academia. If you intervened on ~100 entering PhD students and made them committed to always not following the incentives where they are bad, I predict that < 10% of them will become professors -- maybe an expected 2 of them would. So you can't say "why don't the academics just not follow the incentives"; any such person wouldn't have made it into academia. I think the appropriate worlds to consider are: science as it exists now with academics following incentives or ~no academia at all.

It is probably correct that each individual instance of having to deal with bad incentives doesn't make that much of a difference, but there are many such instances. Probably there's an 80-20 thing to do here where you get 80% of the benefit by not following the worst 20% of bad incentives, but it's actually quite hard to identify these, and it requires you to be able to p...

Survey and other data indicate that in these fields most people were doing p-hacking/QRPs (running tests selected ex post, optional stopping, reporting and publication bias, etc), but a substantial minority weren't, with individual, subfield, and field variation. Some people produced ~100% bogus work while others were ~0%. So it was possible to have a career without the bad practices Yarkoni criticizes, aggregating across many practices to look at overall reproducibility of research.

And he is now talking about people who have been informed about the severe effects of the QRPs (that they result in largely bogus research, at large cost to science compared to the reproducible alternatives that many of their colleagues are now using and working to reward) but choose to continue the bad practices. That group is also disproportionately tenured, so it's not a question of not getting a place in academia now, but of giving up on false claims they built their reputation around, and of reduced grants and speaking fees.

I think the core issue is that even though the QRPs that lead to mostly bogus research in fields such as social psych and neuroimaging often started off without intentional bad conduct, their bad effects have now become public knowledge, and Yarkoni is right to call out those people on continuing them and defending continuing them.

5 · Rohin Shah · 5y
I'm curious how many were able to hit 0%? Based on my 10x estimate below I'd estimate 9%, but that was definitely a number I pulled out of nowhere. I personally feel the most pressure to publish because the undergrads I work with need a paper to get into grad school. I wonder if it's similar for tenured professors with their grad students. Also, the article seems to be condemning academics who are not tenured, e.g.

Thought experiment (that I acknowledge is not reality): suppose that it were actually the case that in order to stay in academia you had to engage in QRPs. Do you still think it is right to call out / punish such people? It seems like this ends up with you either always punishing everyone in academia, with no gains to actually published research, or abolishing academia outright.
Isn't "academics who don't follow bad incentives almost never become professors" blatantly incompatible with "these are well-intentioned mistakes"?

The former is a statement about outcomes while the latter is a statement about intentions.

My model for how most academics end up following bad incentives is that they pick up the incentivized bad behaviors via imitation. Anyone who doesn't do this ends up doing poorly and won't make it in academia (and in any case such people are rare, imitation is the norm for humans in general). As part of imitation, people come up with explanations for why the behavior is necessary and good for them to do. (And this is also usually the right thing to do; if you are imitating a good behavior, it makes sense to figure out why it is good, so that you can use that underlying explanation to reason about what other behaviors are good.)

I think that I personally am engaging in bad behaviors because I incorrectly expect that they are necessary for some goal (e.g. publishing papers to build academic credibility). I just can't tell which ones really are necessary and which ones aren't.

6 · Ben Pace · 5y
This seems related to the ideas in this post on unconscious economies.
8 · Rohin Shah · 5y
Agreed that it's related, and I do think it's part of the explanation. I will go even further: while in that post the selection happens at the level of properties of individuals who participate in some culture, I'm claiming that the selection happens at the higher level of norms of behavior in the culture, because most people are imitating the rest of the culture. This requires even fewer misaligned individuals.

Under the model where you select on individuals, you would still need a fairly large number of people to have the property of interest -- if only 1% of salesmen had the personality traits leading to them being scammy and the other 99% were usually honest about the product, the scammy salesmen probably wouldn't be able to capture all of the sales jobs. However, if most people imitate, then those 1% of salesmen will slowly push the norms towards being more scammy over generations, and you'd end up in the equilibrium where nearly every salesman is scammy.

Come to think of it, I think I would estimate that ~1% of academics are explicitly thinking about how to further their own career at the cost of science (in ways that are different from imitation).
And how many if you didn't intervene? How do you reconcile this with the immediately prior sentence?
Rohin Shah (5y):
Significantly more, maybe 20. To do a proper estimate I'd need to know which field we're considering, what the base rates are, etc. The thing I should have said was that I expect it makes it ~10x less likely that you become a professor; that seems more robust to the choice of field and isn't conditional on base rates that I don't know. The Internet suggests a base rate of 3-5%, which means without intervention 3-5 of them would become professors; if that's true I would say that with intervention an expected 0.4 of them would become professors.

I didn't mean that it was literally impossible for a person who doesn't follow the incentives to get into academia, I meant that it was much less likely. I do in fact know people in academia who I think are reasonably good at not following bad incentives.

I did not follow the Moral Mazes discussion as it unfolded. I came across this article context-less. So I don't know that it adds much to LessWrong. If that context is relevant, it should get a summary before diving in. From my perspective, its inclusion in the list was a jump sideways.

It's written engagingly. I feel Yarkoni's anger. Frustration bleeds off the page, and he has clearly gotten on a roll. Not performing moral outrage, just *properly, thoroughly livid* that so much has gone wrong in the science world.

We might need that.

What he wrote does not o... (read more)

This post substantially updated my thinking about personal responsibility. While I totally disagree with the one-sided framing of the post, the framing of it made me see that the "personal responsibility" vs. "incentives" thing wasn't really about beliefs at all, but was in fact about the framing.

I think it articulates the "personal responsibility" frame particularly well, and helps see how choosing "individuals" as the level of abstraction naturally leads to a personal responsibility framing.

Um, this is a linkpost. Can you nominate a linkpost to something by a non-Less Wrong-affiliated author? Certainly the comment thread is worth pointing to as evidence of "things we learned in 2019", but I don't think the post should be eligible for the voting round?
Ben Pace (3y):
I’m not sure. My current guess is:

* If this post was very helpful for lots of LWers, that’s valuable info to know (e.g. if it scored highly in the voting round).
* I like the post quite a bit. Also, looking at the author’s blog, they seem pretty cool e.g. they write fiction about GPT-3 :)
* If it scores well in the review, I would be open to reaching out to the author and asking if they wanted to be fully crossposted and published in our annual essay collection.
* I think it could be somewhat confusing if it were included in the book, as though it were an opinion of a LessWronger, even though it wasn’t written by a LessWrong user.

I was considering just nominating the comment section, as I see that Ray has done. We’re still obviously figuring it all out, and I want to take it on a case-by-case basis. But I tentatively lean ‘yes’.
I was going to nominate this post for similar reasons, and then realized it's almost entirely a third-party linkpost. It feels like an important part of my worldview but I wasn't sure how to think about it in the context of the review. I suppose maybe it's good to have an Official Overton Window Fight about the post, without necessarily having that fight output an essay in the Best Of 2019 Book?
Matt Goldenberg (3y):
Yeah, I think that if in part the review is "what should be LW common knowledge", we can vote for that separately from "and should therefore be included in a book."
Ben Pace (3y):
Note that a lot more people will buy the book set than will look at the vote page. In some ways the book is the common-knowledge building mechanism.
Matt Goldenberg (3y):
Hmm, I guess I was imagining something like a "New LW sequences" featured prominently on the front page or something.

In general, I think this post does a great job of articulating a single, incomplete frame. Others in the review take umbrage at the moralizing tone, but I think the moralizing tone is actually quite useful to give an inside view of this frame.

I believe this frame is incomplete, but gives an important perspective that is often ignored in the LessWrong/Gray tribe.

Rohin Shah (3y):
I primarily take umbrage at the fact that the post makes claims I think are false without providing any evidence for them. I brought up the moralizing tone as an explanation for why it was popular despite making (what I think are) false claims with no evidence.
Matt Goldenberg (3y):
I don't think that evidence is needed to articulate the frame I'm talking about, it's much more about a way of interpreting the situation.

While I don't think this post is actually eligible for the Best of LW 2019 book (since it's written offsite and is only a linkpost here), I think it's reasonable to nominate the comments here for some kind of "what do we collectively feel about this 1.5 years later?" discussion.

Definitely think this is an important point in the conversation.

I think my take is something like "The incentives are the problem" is a useful frame for how to look at systems and (often but not always) other people, but should throw up a red flag when you use it as an excuse for your own behavior.

I'm not sure I endorse this post precisely as written, because "take ownership of your behavior" is a cause that will be Out To Get You for everything you've got (while leaving you vulnerable to Asymmetric Justice in the meanwhile). ... (read more)

If you're an academic and you're using fake data or misleading statistics, you are doing harm rather than good in your academic career. You are defrauding the public, you are making our academic norms be about fraud, you are destroying both public trust in academia in particular and knowledge in general, and you are creating justified reasons for this destruction of trust. You are being incredibly destructive to the central norms of how we figure things out about the world - one of many of which is whether or not it is bad to eat meat, or how we should uphold moral standards.

And you're doing it in order to extract resources from the public, and grab your share of the pie.

I would not only rather you eat meat. I would rather you literally go around robbing banks at gunpoint to pay your rent.

If one really, really did think that personally eating meat was worse than committing academic fraud - which boggles my mind, but supposing that - what the hell are you doing in academia in the first place, and why haven't you quit yet? Unless your goal now is to use academic fraud to prevent people from eating meat, which I'd hope is something you wouldn't endorse, and not what 99%+ of these people are doing. As the author of OP points out, if you can make it in academia, you can make more money outside of it, and have plenty of cash left over for salads and for subsidizing other people's salads, if that's what you think life is about.

You shouldn't put these in the same category. Fake data is a much graver sin than failing to correct for multiple comparisons or running a study with a small sample size. For the latter two, anyone who reads your paper can see what you did (assuming you mention all the comparisons you made) and discount your conclusions accordingly. For a savvy reader or meta-analysis author, a paper which commits these sins can still improve their overall picture of the literature, especially if they employ tools to detect/correct for publication bias. It's not obvious to me that a scientist who employs these practices is doing harm with their academic career, especially given that readers are getting more and more savvy nowadays. I don't think "fraud" is the right word for these statistical practices. Cherry-picking examples that support your point, the way an opinion columnist does, is probably a more fraudulent practice.

It's fair to say that fake data is a Boolean and a Rubicon, where once you do it once, at all, all is lost. Whereas there are varying degrees of misleading statistics versus clarifying statistics, and how one draws conclusions from those statistics, and one can engage in some amount of misleading without dooming the whole enterprise, so long as (as you note) the author is explicit and clear about what the data was and what tests were applied, so anyone reading can figure out what was actually found.

However, I think it's not that hard for it to pass a threshold where it's clearly fraud, although still a less harmful/dangerous fraud than fake data, if you accept that an opinion columnist cherry-picking examples is fraud (e.g. for it to be more fraudulent than that, especially if the opinion columnist isn't assumed to be claiming that the examples are representative). And I like that example more the more I think about it, because that's an example of where I expect to be softly defrauded, in the sense that I assume that the examples and arguments are soldiers, chosen to make a point slash sell papers, rather than an attempt to create common knowledge and seek truth. If scientific papers are in the same reference class as that...

I am very surprised that you still endorse this comment on reflection, but given that you do, it's not unreasonable to ask: given that most people lie a lot, that you think personally not eating meat is more important than not lying, your track record of actually not eating meat, and your claim that it's reasonable to be a 51st percentile moral person, why should we then trust your statements to be truthful? Let alone in good faith. I mean, I don't expect you to lie because I know you, but if you actually believed the above for real, wouldn't my expectation be foolish?

I'm trying to square your above statement and make it make sense for you to have said it and I just... can't?

I think "you are a bad person" is a very powerful and dangerous tool to use on yourself or others. I think there are a lot of ways to deeply fuck yourself up with it. Similarly, moral obligation is a very powerful and dangerous concept. I think it is (sort of) reasonably safe to use with "if you are in the bottom 50% of humanity*, you are morally obligated to work on that, and if you aren't at least working on it, you are a bad person." Aspiring towards being a truly *good* person is a lot of effort. It requires time to think a lot about your principles, it requires slack to dedicate towards both executing them and standing up to various peer pressures, etc. It is enough effort, and I think most people have enough on their plate, that I don't consider it morally obligatory. I aspire to be a truly good person, and I in fact try to create a fenced-in-bubble, which requires you to be aspiring towards some manner of goodness in order to gain many of the benefits I contribute to the semi-public commons. I think this is a pretty good strategy, to avoid the dangers of moral obligation and "bad person" mindset, while capturing the benefits of high percentile goodness. *possibly "if you're in the bottom 50% of your reference class", where reference class is somewhat vague.

You're the one bringing up the question of whether someone's a bad person.

True. But I do think we've run enough experiments on 'don't say anyone is a bad person, only point out bad actions and bad logic and false beliefs' to know that people by default read that as claims about who is bad, and we need better tech for what to do about this.

As long as we understand that "bad person" is shorthand for "past and likely near-future behaviors are interfering with group goals", it's a reasonable judgement to make. And it's certainly useful to call out people you'd like to eject from the group, or to reduce in status, or to force a behavioral change on. I don't object to calling someone a bad person, I only object to believing that such a thing is real.

The thing is, I don't think that shorthand (along with similar things like "You're an idiot") ever stays understood outside of very carefully maintained systems of people working closely together in super high trust situations, even if it starts out understood.

I'd agree. Outside of closely-knit, high-trust situations, I don't think it's achievable to have that subtlety of conceptual communication. You can remind (some) people, and you can use more precise terminology where the distinction is both important and likely to succeed. In other cases, maintaining your internal standards of epistemic hygiene is valuable, even when playing status games you don't like very much.

I think two different things are going on here:

1. The OP read as directly moralizing to me. I do realize it doesn't necessarily spell it out directly, but moralizing language rarely is. I don't know the author of the OP. There are individuals I trust on LW to be able to have this sort of conversation without waging subtle-or-unsubtle wars over who is a bad person, but they are rare. I definitely don't assume that for random people on the internet.

2. My "Be in the top 50% morally" statement was specifically meant to be in the context of the full Scott Alexander post, which is explicitly about (among other things) people being worried about being a good person.

And, yes, I brought the second point up (and I did bring it up in an offhand way without doing much to establish the context, which was sloppy. I do apologize for that).

But after providing the link, it seemed like people were still criticizing that point. And... I'm not sure I have a good handle on how this played out. But my impression is something like you and maybe a couple others were criticizing the 50% comment as if it were part of a different context, whereas if you read the original po... (read more)

The gradients between horrific, forbidden, disallowed, discouraged, acceptable, preferable, commendable, heroic seem like something that should be discussed here. I suspect you're mixing a few different kinds of judgement of self, judgement of others, and perceived judgement by others. I don't find them to be the same thing or the same dimensions of judgement, but there's definitely some overlap. I reject "goodness" as an attribute of a person - it does not fit my intuitions nor reasoned beliefs. There are behaviors and attitudes which are better or worse (sometimes by large amounts), but these are contingent rather than identifying. There _are_ bad people, who consistently show harmful behavior and no sign of changing throughout their lives. There are a LOT of morally mediocre people who have a mix of good and bad behavior, often more context-driven than choice-driven. I don't think I can distinguish among them, so I tend to assume that almost everyone is mediocre. Note that I can decide that someone is unpleasant or harmful TO ME, and avoid them, without having to condemn them as a bad person. So, I don't aspire to be a truly good person, as I don't think that's a thing. I aspire to do good things and make choices for the commons, which I partake of. I'm not perfect at it, but I reject judgement on any absolute scale, so I don't think there's a line I'm trying to find where I'm "good enough", just fumbling my way around what I'm able/willing to do.

Note: I may not be able to weigh in on this more until Sunday.

Clarifying some things all at once since a few people have brought up related points. I'm probably not going to get to address the "which is worse – lying or eating meat" issue until Sunday (in the meanwhile, to be clear, I think "don't lie" is indeed one of the single most important norms to coordinate on, and to create from scratch if you don't have such a norm, regardless of whether there are other things that are as or more important)

A key clause in the above comment was:

If the norm in academia is to use bad statistics, or fake data (I don't know whether it is or not, or how common it is)

In a world where the norm in academia is to not use bad statistics, or not to fake data, then absolutely the correct thing is to uphold that norm.

In a world where the norm is to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I don't actually know the state of academia en... (read more)

In a world where the norm is to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I think this claim is a hugely important error.

One scientist unilaterally deciding to stop faking data isn't going to magically make the whole world come around. But the idea that it doesn't help? That failing to do so, and not only being complicit in others faking data but also faking data, doesn't make it worse?

I don't understand how one can think that.

That's not unique to the example of faking data. That's true of anything (at least partially) observable that you'd like to change.

One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.

But don't ... (read more)

One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.
But don't pretend it doesn't matter.

This reads as enormously uncharitable to Raemon, and I don't actually know where you're getting it from. As far as I can tell, not a single person in this conversation has made the claim that it "doesn't matter"--and for good reason: such a claim would be ridiculous. That you seem willing to accuse someone else in the conversation of making such a claim (or "pretending" it, which is just as bad) doesn't say good things about the level of conversation.

What has been claimed is that "doing the thing that reinforces good norms" is ineffective, i.e. it doesn't actually reinforce the good norms. The claim is that without a coordinated effort, changes in behavior on an individual level have almost no effect on the behavior of the field as a ... (read more)

We all know that falsifying data is bad. But if that's the way the incentives point (and that's a very important if!), then it's also bad to call people out for doing it.

No. No. Big No. A thousand times no.

(We all agree with that first sentence, everyone here knows these things are bad, that's just quoted for context. Also note that everyone agrees that those incentives are bad and efficient action to change them would be a good idea.)

I believe the above quote is a hugely important crux. Likely it, or something upstream of it, is the crux. Thank you for being explicit here. I'm happy to know that this is not a straw-man, that this is not going to get the Motte and Bailey treatment.

I'm still worried that such treatment will mostly occur...

There is a position, that seems to be increasingly held and openly advocated for, that if someone does something according to their local, personal, short-term amoral incentives, that this is, if not automatically praiseworthy (although I believe I have frequently seen this too, increasingly explicitly, but not here or by anyone in this discussion), at least an immunity from being blameworthy, no matter the magnitud... (read more)

Here's another further-afield steelman, inspired by blameless postmortem culture.

When debriefing / investigating a bad outcome, it's better for participants to expect not to be labeled as "bad people" (implicitly or explicitly) as a result of coming forward with information about choices they made that contributed to the failure.

More social pressure against admitting publicly that one is contributing poorly contributes to systematic hiding/obfuscation of information about why people are making those choices (e.g. incentives). And we need all that information to be out in the clear (or at least available to investigators who are committed & empowered to solve the systemic issues), if we are going to have any chance of making lasting changes.

In general, I'm curious what Zvi and Ben think about the interaction between "I expect people to yell at me if I say I'm doing this" and promoting/enabling "honest accounting".

Trying to steelman the quoted section:

If one were to be above average but imperfect (e.g. not falsifying data or p-hacking but still publishing in paid access journals) then being called out for the imperfect bit could be bad. That person’s presence in the field is a net positive but if they don’t consider themselves able to afford the penalty of being perfect then they leave and the field suffers.

I’m not sure I endorse the specific example there but in a personal example:

My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.

I do spend more time on these targets than would be optimal but I think I do this considerably less than is typical. I still overfocus on targets as I’ve been told in appraisals to do so.

If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.

Thank you.

I read your steelman as importantly different from the quoted section.

It uses the weak claim that such action 'could be bad' rather than that it is bad. It also re-introduces the principle of being above average as a condition, which I consider mostly a distinct (but correlated) line of thought.

It changes the standard of behavior from 'any behavior that responds to local incentives is automatically all right' to 'behaviors that are above average and net helpful, but imperfect.'

This is an example of the kind of equivalence/transformation/Motte and Bailey I've observed, and am attempting to highlight - not that you're doing it; you're not, because this is explicitly a steelman - but that I've seen: the claim that it is reasonable to focus on meeting explicit targets rather than exclusively on what is illegibly good for the company, versus the claim that it cannot be blameworthy to focus exclusively on what you are locally personally incentivized to do - which in this case is meeting explicit targets and things you would be blamed for - no matter the consequence to the company (unless it would actually suffer enough to destroy its ability to pay you).

That is no straw man. In the companies described in Moral Mazes, managers do in fact follow that second principle, and will punish those seen not doing so. In exactly this situation.

I might try and write up a reply of my own (to Zvi's comment), but right now I'm fairly pressed for time and emotional energy, so until/unless that happens, I'm going to go ahead and endorse this response as closest to the one I would have given.

EDIT: I will note that this bit is (on my view) extremely important: "Above average" is, of course, a comparative term. If e.g. 95% of my colleagues in a particular field regularly submit papers with bad data, then even if I do the same, I am no worse from a moral perspective than the supermajority of the people I work with. (I'm not claiming that this is actually the case in academia, to be clear.)

And if it's true that I'm only doing what everyone else does, then it makes no sense to call me out, especially if your "call-out" is guilt-based; after all, the kinds of people most likely to respond to guilt trips are likely to be exactly the people who are doing better than average, meaning that the primary targets of your moral attack are precisely the ones who deserve it the least.

(An interesting analogy can be made here regarding speeding--most people drive 10-15 miles over the official speed limit on freeways, at least in the US. Every once in a while, somebody gets pulled over for speeding, while all the other drivers--all of whom are driving at similarly high speeds--get by unscathed. I don't think it's particularly controversial to claim that (a) the driver who got pulled over is usually more annoyed at being singled out than they are recalcitrant, and (b) this kind of "intervention" has pretty much zero impact on driving behavior as a whole.)
Is your prediction that if it was common knowledge that police had permanently stopped pulling any cars over unless the car was at least 10 mph over the average driving speed on that highway in that direction over the past five minutes, in addition to being over the official speed limit, that average driving speeds would remain essentially unchanged?
Take out the “10mph over” and I think this would be both fairer than the existing system and more effective. (Maybe some modification to the calculation of the average to account for queues etc.)
As it happens, the case of speeding also came up in the comments on the OP. Yarkoni writes:
On reflection I’m not sure “above average” is a helpful frame. I think it would be more helpful to say someone being “net negative” should be a valid target for criticism. Someone who is “net positive” but imperfect may sometimes still be a valid target depending on other considerations (such as moving an equilibrium).

I don't endorse the quoted statement, I think it's just as perverse as you do. But I do think I can explain how people get there in good faith. The idea is that moral norms have no independent existence, they are arbitrary human constructions, and therefore it's wrong to shame someone for violating a norm they didn't explicitly agree to follow. If you call me out for falsifying data, you're not recruiting the community to enforce its norms for the good of all. There is no community, there is no all, you're simply carrying out an unprovoked attack against me, which I can legitimately respond to as such.

(Of course, I think this requires an illogical combination of extreme cynicism towards object-level norms with a strong belief in certain meta-norms, but proponents don't see it that way.)

It's an assumption of a pact among fraudsters (a fraud ring). I'll cover for your lies if you cover for mine. It's a kind of peace treaty. In the context of fraud rings being pervasive, it's valuable to allow truth and reconciliation: let the fraud that has been committed come to light (as well as the processes causing it), while having a precommitment to no punishments for people who have committed fraud. Otherwise, the incentive to continue hiding is a very strong obstacle to the exposition of truth. Additionally, the consequences of all past fraud being punished heavily would be catastrophic, so such large punishments could only make sense when selectively enforced.
clone of saturn (5y):
Right... but fraud rings need something to initially nucleate around. (As do honesty rings)
I don't endorse it in that context, because data matters. Otherwise, why not? There are plenty of situations where "bad"/"good" seems like a non-issue*/counterproductive.

*If not outright beneficial.
But that doesn't mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn't actually a very good way to get them to change.

I haven't said 'bad person' unless I'm missing something. I've said things like 'doing net harm in your career' or 'making it worse' or 'not doing the right thing.' I'm talking about actions, and when I say 'right thing' I mean shorthand for 'that which moves things in the directions you'd like to see' rather than any particular view on what is right or wrong to move towards, or what moves towards what, leaving those to the individual.

It's a strange but consistent thing that people's brains flip into assuming that anyone thinking some actions are better than other actions are accusing others who don't take the better actions of being bad people. Or even, as you say, 'exceptionally bad' people.

I mean, you haven't called anyone a bad person, but "It's Not The Incentives, It's You" is a pretty damn accusatory thing to say, I'd argue. (Of course, I'm also aware that you weren't the originator of that phrase--the author of the linked article was--but you at least endorse its use enough to repeat it in your own comments, so I think it's worth pointing out.)

Interesting. I am curious how widely endorsed this dynamic is, and what rules it operates by.

On two levels.

Level one is the one where some level of endorsement of something means that I'm making the accusations in it. Which at some levels that it happens often in the wild is clearly reasonable, and at some other levels that it happens in the wild often, is clearly unreasonable.

Level two is that the OP doesn't make the claim that anyone is a bad person. I re-read the OP to check. My reading is this. It claims that they are engaging in bad actions, and that there are bad norms that seem to have emerged, that together are resulting in bad outcomes. And it argues that people are using bad justifications for that. And it importantly claims that these bad outcomes will be bad not only for 'science' or 'the world' but for the people that are taking the actions in question, who the OP believes misunderstand their own incentives, in addition to having false beliefs as to what impact actions will have on others, and sometimes not caring about such impacts.

That is importantly different from claiming that these are bad people.

Is it possible to say 'your actions are bad and maybe you should stop' or even 'your actions are having these results and maybe you should stop' without saying 'you are bad and you should feel bad'?

I actually am asking, because I don't know.

Is it possible to say 'your actions are bad and maybe you should stop' or even 'your actions are having these results and maybe you should stop' without saying 'you are bad and you should feel bad'?
I actually am asking, because I don't know.

I've touched on this elsethread, but my actual answer is that if you want to do that, you either need to create a dedicated space of trust for it, that people have bought into. Or you need to continuously invest effort in it. And yes, that sucks. It's hugely inefficient. But I don't actually see alternatives.

It sucks even more because it's probably anti-inductive, where as some phrases become commonly understood they later become carrier waves for subtle barbs and political manipulations. (I'm not confident how common this is. I think a more prototypical example is "southern politeness" with "Oh bless your heart").

So I don't think there's a permanent answer for public discourse. There's just costly signaling via phrasing things carefully in a way that suggests you're paying attention to your reader's mental state (including their mental map ... (read more)

It seems pretty fucked up to take positive proposals at face value given that context.

Optimizing for anything is costly if you’re not counting the thing itself as a benefit.

Suppose I do count the thing itself (call it X) as a benefit. Given that I'm also optimizing for other things at the same time, the outcome I end up choosing will generally be a compromise that leaves some X on the table. If everyone is leaving some X on the table, then deciding when to blame or "call out" someone for leaving some X on the table (i.e., not being as honest in their research as they could be) becomes an issue of selective prosecution (absent some bright line in the sand such just making up data out of thin air). I think this probably underlies some people's intuitions that calling people out for this is bad.

Being in a moral maze is not worth it. They couldn’t pay you enough, and even if they could, they definitely don’t. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.

What if Moral Mazes is the inevitable outcome of trying to coordinate a large group of humans in order to take advantage of some economy of scale? (My guess is that Moral Mazes is just part of the

...

I think these are (at least some of) the right questions to be asking.

The big question of Moral Mazes, as opposed to conclusions worth making more explicit, is: Are these dynamics the inevitable result of large organizations? If so, to what extent should we avoid creating large organizations? Has this dynamic ever been different in the past in other places and times, and if so why and can we duplicate those causes?

Which I won't answer here, because it's a hard question, but my current best guess on question one is: it's the natural endpoint if you don't create a culture that explicitly opposes it (e.g. any large organization that is not explicitly in opposition to being an immoral maze will increasingly become one, and things generally only get worse over time on this axis rather than better, unless you have a dramatic upheaval, which usually means starting over entirely). Also, the more other large organizations around you are immoral mazes, the faster and harder such pressures will be, and the more you need to push back to stave them off.

My best guess on question two is: Quite a lot. At least right here, right now any sufficiently large organization, be ...

This comment feels like it correctly summarizes a lot of my thinking on this topic, and I would feel excited about a top-level post version of it.
"Selling out" has been in the well-known concept space for a long long time - it's not a particularly recent phenomenon to have to make choices where the moral/prosocial option is not the materially-rewarded one. It probably _IS_ recent that any group or endeavor can be expected to have large impact over much of humanity. Do we have any examples of groups that both behave well AND get significant things done?
One idea on the subject of government is "eventually it will fail/fall. This has happened a lot throughout history, and it will happen someday to this country. Things may keep getting big/inefficient, but the system keeps chugging along until it dies." One alternative to this, would be to start a group/country/etc. with an explicit end date - something similar with regards to some aspect. (Reviewing all laws on the books to see if they should stick around would be a big deal, as would implementing laws with end dates, or only laws with end dates. Some consider this to have failed in the past though, as emergency powers demonstrate.)
Nod. I don't know that I disagree with any of this per se. I'll respond more on Sunday. Any disagreements I have I think are about how to weight things and how to strategize (with slightly different caveats for individuals, for groups with fences, and for amorphous society)
I could imagine this being true in some sort of hyper-Malthusian setting where any deviation from the Nash equilibrium gets you immediately killed and replaced with an otherwise-identical agent who will play the Nash equilibrium.

Let me try to be a little clearer here.

If someone defrauds me, and I object, and they explain that the incentive structure society has set up for them pays more on net for fraud than for honest work, then this is at least a relevant reply, and one that is potentially consistent with owning one's decision to participate in corruption rather than fighting it or opting out. (Though I think the article makes a pretty good case that in the specific case of academia, "fighting it or opting out" is better for most reasonable interests.)

If someone defrauds me, and I object, and they explain that they're instead spending their goodness budget on avoiding eating meat, this is not a relevant reply in the same sense. Factory farmed animals aren't a party we're negotiating with or might want to win the trust of, and the public interest in accurate information is different in kind from the public interest in people not causing animals to suffer.

This is especially important in the light of a fairly recent massive grass-roots effort in academia - originated by academics in multiple disciplines volunteering their spare time - to do the work that led to the replication crisis, because academics in many fields are actually still trying to get the right answer along some dimensions and are willing to endure material costs (including reputational damage to their own fields) to do so. So, that's not actually a proposal to decline to initiate a stag hunt, that's a proposal to unilaterally choose Rabbit in a context where close to a critical quorum might be choosing Stag.

Another distinction I think is important, for the specific example of "scientific fraud vs. cow suffering" as a hypothetical:

Science is a terrible career for almost any goal other than actually contributing to the scientific endeavor.

I have a guess that "science, specifically" as a career-with-harmful-impacts in the hypothetical was not specifically important to Ray, but that it was very important to Ben. And that if the example career in Ray's "which harm is highest priority?" thought experiment had been "high-frequency-trading" (or something else that some folks believe has harms when ordinarily practiced, but is lucrative and thus could have benefits worth staying for, and is not specifically a role of stewardship over our communal epistemics) that Ben would have a different response. I'm curious to what extent that's true.

You're right that I'd respond to different cases differently. Doing high frequency trading in a way that causes some harm - if you think you can do something very good with the money - seems basically sympathetic to me, in a sufficiently unjust society such as ours.

Any info good (including finance and trading) is on some level pretending to involve stewardship over our communal epistemics, but the simulacrum level of something like finance is pretty high in many respects.

I think your final paragraph is getting at an important element of the disagreement. To be clear, *I* treat science and high frequency trading differently, too, but yes, I think to me it registers as "very important" and to Ben it seems closer to "sacred" (which, to be clear, seems like a quite reasonable outlook to me).

Small background tidbit that's part of this: I think many scientists have goals that seem more like "do what their parents want" and "be respectable" or something. Which isn't about traditional financial success, but looks like opting into a particular weird sub-status-hierarchy that one might plausibly be well suited to win at.

Another background snippet informing my model: recently I was asking an academic friend "hey, do you think your field could benefit from better intellectual infrastructure?" and they said "you mean like LessWrong?" and I said "I mean a meta-level version of it that tries to look at the local set of needs and improve communication in some fashion." And they said something like "man, sorry to disappoint you, but most of academia is not, like, trying to solve problems together, the way it looks like the rationality or AI alignment communities are. They wouldn't want to post clearer communications earlier in the idea-forming stage because they'd be worried about getting scooped. They're just trying to further their own career." This is just one datapoint, and again I know very little about academia overall.

Ben's comments about how the replication crisis happened via an organic grassroots process seem quite important and quite relevant.

Reiterating from my other post upthread: I am not making any claims about what people in science and/or academia should do. I'm making conditional claims, which depend on the actual state of science and academia.

One distinction I see getting elided here:

I think one's limited resources (time, money, etc) are a relevant question in one's behavior, but a "goodness budget" is not relevant at all.

For example: in a world where you could pay $50 to the electric company to convert all your electricity to renewables, or pay $50 more to switch from factory-farmed to pasture-raised beef, if someone asks "hey, your household electrical bill is destroying the environment, why didn't you choose the green option?", a relevant reply is "because I already spent my $50 on cow suffering".

However, if both options cost $0, then "but I already switched to pasture-raised beef" is just irrelevant in its entirety.

A few clarifications that seem worth making for posterity:

[Epistemic status: none of this was or is anything I'm especially confident in. But I'm relatively meta-confident that if you think the answers here are obvious, you're typical-minding on how obvious your particular world/morality model is.]

* To be clear, I eat meat, and I devote a lot of effort to improving my honesty, meta-honesty and integrity. I expect the median scientist to be in a fairly different epistemic and agentic position than I am.
* There were a couple of mistakes that I noticed or had pointed out to me after making the above comment. One of them is discussed here.
* For the point I was actually trying to make, a perhaps less (or differently?) distracting example would be: "If I'm a median academic, it's not obvious whether I should try to become more honest* than the median academic, or try to, I dunno, recycle more or something."
  * *where I think "honesty" is a skill, built out of various sub-skills including social resilience, introspection, etc.
  * This question might have different answers depending on how dishonest or honest you think the median academic is.
* I do not think recycling is very important. [Edit: concretely, it is definitely less important than academic honesty, by a large margin.] But the world is full of things that might possibly be important for me to do, and figuring out which of them matter and why is hard. I expect the median academic to have put very little thought into their morality. If they think improving their honesty is more important, or improving their recycling is more important, I think this says very little about how good they are at choosing moral principles, and has much more to do with having happened to accidentally bump into a set of friends/family/etc. that prioritizes honesty or recycling or veganism or whatever.
* I think the most important thing the median academic should do is become more socially resilient, agenty, and good at
I can't tell if you're saying eating meat is worse than faking data to you personally, or for a hypothetical academic, could you clarify? And if it is a position you personally hold, can you explain your moral calculus?
What does this mean?
It was a reference to this post:

I almost wrote a reply to that post when it came up (but didn't because one should not respond too much when Someone Is Wrong On The Internet, even Scott), because this neither seemed like an economic perspective on moral standards, nor did it work under any equilibrium (it causes a moral purity cascade, or it does little, rarely anything in between), nor did it lead to useful actions on the margin in many cases as it ignores cost/benefit questions entirely. Strictly dominated actions become commonplace. It seems more like a system for avoiding being scapegoated and feeling good about one's self, as Benquo suggests.

(And of course, >50% of people eat essentially the maximum amount of quality meat they can afford.)

So you mean try to do slightly less of what can get you blamed than average? What policy goal does slightly outperforming at an incoherent standard achieve?
Try coming up with a charitable interpretation of what I said. I feel like the various posts I linked showcase why I think there are failure modes in naively doing the thing you're saying, not to mention the next two bullet points.

I don't actually understand how to be "more charitable" or "less charitable" here - I'm trying to make sense of what you're saying, and don't see any point in making up a different but similar-sounding opinion which I approve of.

If I try to back out what motives lead to tracking the average level of morality (as opposed to trying to do decision theory on specific cases), it ends up to be about managing how much you blame yourself for things (i.e. trying to "be" "good"); I actually don't see how thinking about global outcomes would get you there.

If you have a different motivation that led you there, you're in a better position to explain it than I am.

Writing posts a certain way to get more karma on lesswrong is an area of application for this stance.