When you reach for this term, take a second to consider more specifically what you mean, and consider saying that more specific thing instead.
What considerations might lead you to not say the more specific thing? Can you give a few examples of cases where it's better to say "outside view" than to say something more specific?
The amount of research and development coming from twitter in the 5 years before the acquisition was already pretty much negligible
That isn't true, but I'm making a point that's broader than just Twitter, here. If you're a multi-billion dollar company, and you're paying a team 5 million a year to create 10 million a year in value, then you shouldn't fire them. Then again, if you do fire them, probably no one outside your company will be able to tell that you made a mistake: you're only out 5 million dollars on net, and you have billions more where that came from. If you're an outside observer trying to guess whether it was smart to fire that team or not, then you're stuck: you don't know how much they cost or how much value they produced.
How long do we need to wait for lawsuits or loss of clients to cause observable consequences?
In Twitter's case the lawsuits have already started, and so has the loss of clients. But sometimes bad decisions take a long time to make themselves felt; in a case close to my heart, Digital Equipment Corporation made some bad choices in the mid-to-late 80s without paying any visible price until 1991 or so. Depending on how you count, that's a lead time of 3 to 5 years. I appreciate that that's annoying if you want to have a hot take on Musk's Twitter today, but sometimes life is like that. The worlds where the Twitter firings were smart and the worlds where the Twitter firings were dumb look pretty much the same from our perspective, so we don't get to update much. If your prior was that half or more of Twitter jobs were bullshit, then by all means stay with that, but updating to that from somewhere else on the evidence we have just isn't valid.
If you fire your sales staff your company will chug along just fine, but won't take in new clients and will eventually decline through attrition of existing accounts.
If you fire your product developers your company will chug along just fine, but you won't be able to react to customer requests or competitors.
If you fire your legal department your company will chug along just fine, but you'll do illegal things and lose money in lawsuits.
If you fire your researchers your company will chug along just fine, but you won't be able to exploit any more research products.
If you fire the people who do safety compliance enforcement your company will chug along just fine, but you'll lose more money to workplace injuries and deaths (this one doesn't apply to Twitter but is common in warehouses).
If you outsource a part of your business instead of insourcing (like running a website on the cloud instead of owning your own data centers, or doing customer service through a call center instead of your own reps) then the company will chug along just fine, and maybe not be disadvantaged in any way, but that doesn't mean the jobs you replaced were bullshit.
In general there are lots of roles at every company that are +EV, but aren't on the public-facing critical path. This is especially true for ad-based companies like Twitter and Facebook, because most of the customer-facing features aren't publicly visible (remember: if you are not paying, you're not the customer).
That post sounds useful, I would have liked to read it.
Sure, I just don't expect that it did impact people's models very much*. If I'm wrong, I hope this review or the other one will pull those people out of the woodwork to explain what they learned.
*Except about Leverage, maybe, but even there...did LW-as-a-community ever come to any kind of consensus on the Leverage questions? If Geoff comes to me and asks for money to support a research project he's in charge of, is there a standard LW answer about whether or not I should give it to him? My sense is that the discussion fizzled out unresolved, at least on LW.
I liked this post, but I don't think it belongs in the review. It's very long, it needs Zoe's also-very-long post for context, and almost everything you'll learn is about Leverage specifically, with few generalizable insights. There are some exceptions ("What to do when society is wrong about something?" would work as a standalone post, for example), but they're mostly just interesting questions without any work toward a solution. I think the relatively weak engagement that it got, relative to its length and quality, reflects that: Less Wrong wasn't up for another long discussion about Leverage, and there wasn't anything else to talk about.
Those things aren't flaws relative to Cathleen's goals, I don't think, but they make this post a poor fit for the review: it didn't make a lot of intellectual progress, and the narrow subfield it did contribute to isn't relevant to most people.
AIUI it was a feature of early Tumblr culture, which lingered to various degrees in various subcommunities as the site grew more popular. The porn ban in late 2018 also seemed to open things up a lot, even for people who weren't posting porn; I don't know why.
The way I understood the norm on Tumblr, signal-boosting within Tumblr was usually fine (unless the post specifically said "do not reblog" on it or something like that), but signal-boosting to other non-Tumblr communities was bad. The idea was that Tumblr users had a shared vibe/culture/stigma that wasn't shared by the wider world, so it was important to keep things in the sin pit where normal people wouldn't encounter them and react badly.
Skimming the home invasion post it seems like the author feels similarly: Mastodon has a particular culture, created by the kind of people who'd seek it out, and they don't want to have to interact with people who haven't acclimated to that culture.
I'm a little curious what reference class you think the battle of Mariupol does belong to, which makes its destruction by its defenders plausible on priors. But mostly it sounds like you agree that we can make inferences about hard questions even without a trustworthy authority to appeal to, and that's the point I was really interested in.