Sure, but the conclusion that any given approach you’ve found actually works must be arrived at in the usual way—by updating on evidence—and the prior probability is low. And such a favorable conclusion about the approaches you list in the OP is unwarranted.
(Edited to add note)
(Note: this is my third comment on this topic, and as such, any further comments will be delayed by the rate limit. If you would like to see further responses, I believe there is some sort of setting which you can use to enable further comments from me.)
Who said there has to be any solution? The universe makes us no such guarantee. The answer to your question could very well be “nobody can fix your stuff; suffer”.
(Well, until the singularity, when godlike friendly AIs can rewrite our whole brains to eliminate flaws, or some such speculative thing. But that can be said of anything, and is irrelevant now; I mention it for completeness only—yes, there may not be any physical law that prevents any given problem from being solved, but that doesn’t mean that we can actually solve it.)
… any therapeutic intervention that is now standardized and deployed on mass-scale has once not been backed by scientific evidence.
And, often, is still not backed by scientific evidence, even after it’s been deployed on a mass scale.
Freud was considered a crackpot when he first suggested that actually, how people's childhoods play out might have an influence on how they behave and misbehave as adults.
As far as we know, therapy based on Freudian ideas is nothing more than pseudoscience, and most of the claims about the effectiveness of such things are baseless. This stuff was considered nonsense; then—in the manner of fashion trends and passing fads—it was believed to be effective; and now, once again and increasingly, it is understood to be nonsense after all.
Given the replication crisis, and recent developments in various social-science fields, it seems likely (and other evidence points this way) that the same is true of most or all of the other forms of therapy that you mention. (See also the dodo bird verdict.)
Similarly, the current mindfulness-based third wave of psychotherapy would be unthinkable if some bums in India several millennia ago hadn’t decided to see what happens when you just sit very, very still for a while. Without any double-blind experiments to reassure them while their minds disintegrated and went down all kinds of scary avenues.
It seems like what happens if you just sit very, very still for a while is that your mind disintegrates and goes down all kinds of scary avenues. This is a bad thing! Going crazy is bad, and not good.
It really seems like the rationalists who mistrusted you were right to mistrust you, but that you, on the other hand, are wrong to “trust that the past decade of studying human minds theoretically, on the meditation cushion, and in relationships, prepared me for the real world”.
In other words: The leading edge of cultural innovation never happens in health insurance-paid sessions with licensed therapists.
Of course it doesn’t—who would have thought otherwise? But just because something happens in some other place does not, in fact, mean that said thing is anything but nonsense.
You aren’t meant to be able to anti-react to a reaction that no one else has reacted with
But this seems bad, then, given the current stable of reactions!
I understand it from the standpoint of interaction design, of course—but then it seems like you should add opposite-valence reactions for those reactions which currently make sense as standalone anti-reacts (see my other comments in this thread for some examples).
It seems like an awkward bit of information-architecture design, though, doesn’t it?
I mean, for some of the reactions, it does, actually, make sense to anti-react to them directly, “from scratch”, as it were. Anti-“Insightful” clearly means “not insightful”, anti-“Virtue of Scholarship” can mean “this should exhibit the virtue of scholarship but fails to do so”, anti-“Clear” and anti-“Hits the Mark” and anti-“Exciting” also all have fairly clear meanings even when not reacting to their regular (non-reversed) versions.
Now, for one thing, the fact that this is the case for some of the reacts but not others seems bound to lead to confusion and weirdness.
For another thing, it seems like directly anti-reacting with the reactions I list above should be easy to do via the UI, given that it’s clearly meaningful to do so. But making it so would also (as currently designed) make it easier to directly anti-react with “Wrong” or “Shrug” or whatever, which seems less than ideal.
This seems to me to suggest that the conceptual design of the feature might need some work.
What does it mean to anti-react “Wrong” when no one has reacted “Wrong” (for example)? (Or “Shrug”, or “Additional questions”.)
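The distinction I’m drawing—between reactions that have a clear standalone anti-meaning and those that don’t—could be made explicit in the site’s data model. Here’s a hypothetical sketch (the names and structure are invented for illustration; this is not the actual LessWrong codebase):

```typescript
// Hypothetical sketch of a per-reaction flag marking which reactions have a
// clear meaning as a standalone ("from scratch") anti-react.
interface ReactionType {
  name: string;
  // Defined only if anti-reacting when no one has reacted makes sense.
  standaloneAntiMeaning?: string;
}

const reactions: ReactionType[] = [
  { name: "Insightful", standaloneAntiMeaning: "not insightful" },
  {
    name: "Virtue of Scholarship",
    standaloneAntiMeaning:
      "should exhibit the virtue of scholarship but fails to do so",
  },
  { name: "Clear", standaloneAntiMeaning: "unclear" },
  { name: "Wrong" }, // anti-"Wrong" with no prior react has no obvious meaning
  { name: "Shrug" }, // likewise
];

// The UI could then offer a direct anti-react only where the flag is present:
function canAntiReactDirectly(r: ReactionType): boolean {
  return r.standaloneAntiMeaning !== undefined;
}

console.log(reactions.filter(canAntiReactDirectly).map((r) => r.name));
```

A flag like this would let the UI expose direct anti-reacts for the meaningful cases while hiding them for the rest, instead of treating all reactions uniformly.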
Also, the hover tooltip for a reaction covers up the one(s) below it—very annoying, makes it hard to browse them.
(Note: this comment delayed by rate limit. Next comment on this topic, if any, won’t be for a week, for the same reason.)
Very ironic! I had all three of those in mind as counterexamples to your claim. (Well, not DeepMind specifically, but Google in general; but the other two for sure.)
Bell Labs was indeed “one of history’s most intellectually generative places”. But the striking thing about Bell Labs (and similarly Xerox PARC, and IBM Research) is the extent to which the people working there were isolated from ordinary corporate politics, corporate pressures, and day-to-day business concerns. In other words, these corporate research labs are notable precisely for being enclaves within which corporate/company culture essentially does not operate.
As far as Google and/or DeepMind goes, well… I don’t know enough about DeepMind in particular to comment on it. But Google, in general, is famous for being a place where fixing/improving things is low-prestige, and the way to get ahead is to be seen as developing shiny new features/products/etc. This has predictable consequences for, e.g., usability (Google’s products are infamous for having absolutely horrific interaction and UX design—Google Plus being one egregious example). Everything I’ve heard about Google indicates that the stereotypical “moral maze” dynamics of corporate culture are in full swing there.
Re: Bridgewater, you remember correctly, although “some concerns” is rather an understatement; it’s more like “the place is a real-life Orwellian panopticon, with all the crushing stress and social/psychological dysfunction that implies”. Even more damning is that they never even bother to verify that all of this helps their investing performance in any way. This seems to me to be very obviously the opposite of a healthy epistemic environment—something to avoid as assiduously as we possibly can.
Excellent post! I agree with Daniel—this is a post which, I feel, should’ve been made long ago (which is about as high a level of praise as I can think of).
Link broken. New link:
http://faculty.cord.edu/andersod/wigner-abbott-reasonable-ineffectiveness.pdf