My slightly relevant background: I'm a former tech entrepreneur (co-founder of the music software company Sibelius). Among other things I now play the stock market & write software to predict it. I have degrees in philosophy.
On a small point, maybe it would be helpful to use a more natural term than 'defusion', e.g. 'detachment' (if that expresses it clearly), or perhaps something like 'objectivity'.
Better to avoid the confusion of introducing a new technical term when something can be expressed just as well with a familiar one.
This is an interesting topic and post. My thoughts following from the God exists / priors bit (and apologies if this is an obvious point, or dealt with elsewhere - e.g. too many long comments below to read more than cursorily!):
Many deeply-held beliefs - particularly broadly ideological ones (e.g. theological, ethical, or political) - are held emotionally rather than rationally, and not really debated in a search for the truth, but to proclaim one's own beliefs, and perhaps in the vain hope of converting others.
So any apparently strong counter-evidence or counter-arguments are met with fall-back arguments, or so-called 'saving hypotheses' (where special reasons are invoked for why God didn't answer your entirely justified prayer). Savvy arguers will have an endless supply of these, including perhaps some so general that they can escape all attack (e.g. that God deliberately evades all attempts at testing). Unsavvy arguers will run out of responses, but still won't be convinced, and will think there is some valid response that they just happen not to know. (I've even heard this used by one church as an official ultimate response to the problem of evil: 'we don't know why God allows evil, but he does (so there must be a good reason we just don't know about)'.)
That is, the double-crux model that evidence (e.g. the universe) comes first and beliefs follow from it is reversed in these cases. The beliefs come first, and any supporting evidence and reasoning are merely used to justify the beliefs to others. (Counter-evidence and counter-arguments are ignored.) Gut feel is all that counts. So there aren't really cruxes to be had.
I don't think these are very special cases; probably quite a wide variety of topics are treated like this by many people. E.g. a lot of 'debates' I see on Facebook are of this kind; they lead nowhere, no-one ever changes their mind, and they usually turn unpleasant quickly. The problem isn't the debating technique, but the nature of the beliefs.
Re your addendum, to make an almost-obvious point, over-optimizing producing worse results is what large parts of modern life are all about; typically over-optimizing on evolved behaviours. Fat/sugar, porn, watching TV (as a substitute for real life), gambling (risk-taking to seek reward), consumerism and indeed excess money-seeking (accumulating unnecessary resources), etc. The bad results often take the form of addictions.
Though some such things are arguably harmless (e.g. professional sport - building unnecessary muscles/abilities full-time to win a pointless status contest).
I reckon a bit of both - viz.:
(a) The Internet (and TV before it) makes it in platforms' interests, via ad revenue, to produce clickbait (soaps/game shows), because humans are more interest-seekers than truth-seekers. This phenomenon is aka 'dumbing down'. And also:
(b) the Internet enables all consumers to broadcast their own stuff regardless of truth/quality. This is another kind of dumbing down; though note TV didn't do this, making it clear that it's a different kind.
The alignment problem is arguably another example, like my above response re quantum physics, of a field spilling over into philosophy, such that even a strong amateur philosopher can point things out that the AI professionals hadn't thought through. I.e. it shows that AI alignment is an interdisciplinary topic which (I assume) went beyond existing mainstream AI.
First, thanks for your comments on my comments, which I thought no-one would read on such an old article!
Re your quantum physics point, with unusual topics like this that overlap with philosophy (specifically metaphysics), it is true that physicists can be out of their depth on that part of it, and so someone with a strong understanding of metaphysics (even if not a professional philosopher as such) can point out errors in the physicists' metaphysics. That said, saying X is clearly wrong (due to faulty metaphysics) is a weaker claim than that Y is clearly right, particularly if there are many competing views. (As there are AFAIK even in the philosophy of QM.) Just as a professional physicist can't be certain about getting the metaphysics bit of QM right, even a professional philosopher couldn't be certain about the physics bit of it; not certain enough to claim a slam-dunk. So without going into the specifics of the case (which I'm not qualified to do) it still seems like an overreach.
Also, more generally, I assume interdisciplinary topics like this (for which a highly knowledgeable amateur could spot flaws in the reasoning of someone who's a professional in one discipline but not the other) are the exception rather than the rule.
Re the economics case, well, for all I know, EY may well have been right in this case (and for the right reasons), but if so then it's just a rare example of an amateur who has a very high professional-level understanding of a particular topic (though presumably not of various other parts of economics). I.e. this is an exception.
That said, and without going into the fine details of the case, the professionals here presumably include the top macroeconomists in Japan. Is it really plausible that EY understands the relevant economics and knows more relevant information than them? (E.g. they may well have considered all kinds of facts & figures that aren't public or at least known to EY.) Which is presumably where the issue of other biases/influences on them would come in; and while I accept that there could be personal/political biases/reasons for doing the economically wrong thing, this can be too easy a way of dismissing expert opinion.
So I'd still put my money on the professional vs the amateur, however persuasive the latter's arguments might seem to me. And again, the fact that the Bank of Japan's decision turned out badly may just show that economics is an inexact science, in which correct bets can turn out badly and incorrect bets turn out well.
One other exception I'd like to add to my original comment: it is certainly true that a highly expert professional in a field can be very inexpert in topics that are close to but not within their own specialism. (I know of this in my own case, and have observed it in others, e.g. lawyers. E.g. a corporate lawyer may only have a sketchy understanding of IP law. Though they are well aware of this.)
A decade late to the party, I'd like to join those skeptical of EY's use of many-worlds as a slam-dunk test of contrarian correctness. Without going into the physics (for which I'm unqualified), I have to make the obvious general objection that it is sophomoric for an amateur in an intellectual field - even an extremely intelligent and knowledgeable one - to claim a better understanding than those who have spent years studying it professionally. It is of course possible for an amateur to have an insight professionals have missed, but very rare.
I had a similar feeling on reading EY's Inadequate Equilibria, where I was far from convinced by his example that an amateur can adjudicate between an economics blogger and central bankers and tell who is right. (EY's argument that the central bankers may have perverse incentives to give a dishonest answer is not that strong, since they may give an honest answer anyway, and the fact that with 20-20 hindsight it might look like they were wrong just shows that economics is an inexact science.) The economics blogger may make points that seem highly plausible and convincing to an amateur, but then, one-sided arguments often do.
Back to physics, any amateur who says "many-worlds is just obvious if you understand it, so those who say otherwise are obviously wrong" is claiming a better understanding than many professionals in the field; again backed with allegations of perverse incentives. Though the latter carry some weight, I'd put my money on the amateur just being overconfident, and having missed something.
If anything I'd judge people on the sophistication of their reasons rather than the opinion itself. E.g. I'd take more notice of someone who had a sophisticated reason for denying that 1 + 1 = 2 than someone who said 'it's just obvious, and anyone who says otherwise is an idiot'.
(I for one have doubts that 1 + 1 = 2; the most I'd be prepared to say is that 1 + 1 often equals 2. And I'm in good company here - e.g. Wittgenstein had grave doubts that simple counting and addition (and indeed any following of rules) are determinate processes with definite results, something which he discussed in seminars with Alan Turing among other students of his.)
The one kind of case in which I'd prefer the factual opinion of a sophisticated amateur to a professional is in fields which don't involve enough intellectual rigour. For example I'd rather believe an amateur with an advanced understanding of evolutionary psychology than some gender studies professors to give a correct explanation of certain social phenomena; not just because the professors may well have an ideological axe to grind, but also because they may lack the scientific rigour necessary to understand the subtleties of causation and statistics.
AFAIK the first two aren't correlated with intelligence. Cf the stereotype that geeks lack people skills.
FWIW another reason, somewhat similar to the low hanging fruit point, is that because the remaining problems are increasingly specialized, they require more years' training before you can tackle them. I.e. not just harder to solve once you've started, but it takes longer for someone to get to the point where they can even start.
Also, I wonder if the increasing specialization means there are more problems to solve (albeit ever more niche), so people are being spread thinner among them. (Though conversely there are more people in the world, and many more scientists, than a century or two ago.)
In software development, a perhaps relevant kind of problem solving, extra resources in the form of more programmers working on the same project don't speed things up much. My guesstimate is output = time x log(programmers). I assume the main reason is that there's a limit to the extent to which you can divide a project into independent parallel programming tasks. (Cf 9 women can't make a baby in 1 month.)
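To make the guesstimate concrete, here is a minimal sketch of that formula. Both the function name and the log base are my own arbitrary choices for illustration; this is the comment's rough rule of thumb, not an established software-engineering law:

```python
import math

def project_output(time_months: float, programmers: int) -> float:
    """Guesstimated output = time x log(programmers).

    A hypothetical formula (log base 2 chosen arbitrarily), illustrating
    diminishing returns from adding programmers to one project.
    """
    return time_months * math.log2(programmers)

# Doubling the team from 8 to 16 adds only one "unit" of output per
# month, rather than doubling output:
print(project_output(12, 8))   # 36.0
print(project_output(12, 16))  # 48.0
```

On this model, going from 8 to 16 programmers raises a year's output by a third rather than doubling it, which matches the intuition that parallelizable tasks run out.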
Except that if the people are working in independent smaller teams, each trying to crack the same problem, and *if* the solution requires a single breakthrough (or a few?) which can be made by a smaller team (e.g. public key encryption, as opposed to landing a man on the moon), then presumably it's proportional to the number of teams, because each has an independent probability of making the breakthrough. And it seems plausible that solving AI threats might be more like this.
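The independent-teams case can be sketched with elementary probability: if each of n teams independently has chance p of making the breakthrough, the chance that at least one succeeds is 1 - (1-p)^n, which for small p is approximately n x p, i.e. roughly proportional to the number of teams, as suggested above. (Function name is mine, for illustration.)

```python
def p_breakthrough(p_per_team: float, n_teams: int) -> float:
    """Chance that at least one of n independent teams succeeds,
    given each has probability p_per_team of success."""
    return 1 - (1 - p_per_team) ** n_teams

# With a small per-team chance, success probability grows roughly
# proportionally with the number of teams...
print(p_breakthrough(0.01, 1))    # ~0.01
print(p_breakthrough(0.01, 10))   # ~0.0956
# ...but the proportionality breaks down as n grows, since a
# probability can never exceed 1:
print(p_breakthrough(0.01, 200))  # ~0.866, not 2.0
```

So "proportional to the number of teams" holds as a good approximation while the overall chance of success stays low, which is plausibly the regime a hard open problem is in.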