As does Chesterton, less explicitly:
Mere light sophistry is the thing that I happen to despise most of all things, and it is perhaps a wholesome fact that this is the thing of which I am generally accused. I know nothing so contemptible as a mere paradox; a mere ingenious defence of the indefensible.
and at length.
I get the impression that he (thankfully!) eased off on that particular template as time went on.
I suspect most self-identified communists would baulk at the description of their ideology as "complete state control of many facets of life".
Here's how I think about the distinction on a meta-level:
"It is best to act for the greater good (and acting for the greater good often requires being awesome)."
"It is best to be an awesome person (and awesome people will consider the greater good)."
where "acting for the greater good" means "having one's own utility function in sync with the aggregate utility function of all relevant agents" and "awesome" means "having one's own terminal goals in sync with 'deep' terminal goals (possibly inherent in being whatever one is)" (e.g. Sam Harris/Aristotle-style 'flourishing').
Cool; I take that back. Sorry for not reading closely enough.
Ah, good point. It's like the prior, considered as a regularizer, is too "soft" to encode the constraint we want.
A Bayesian could respond that we rarely actually want sparse solutions -- in what situation is a physical parameter identically zero? -- but rather solutions which have many near-zeroes with high probability. The posterior would satisfy this, I think. In this sense a Bayesian could justify the Laplace prior as approximating a so-called "spike-and-slab" prior (which I believe leads to combinatorial intractability similar to th...
Many L1 constraint-based algorithms (for example the LASSO) can be interpreted as producing maximum a posteriori Bayesian point estimates with Laplace (= double exponential) priors on the coefficients.
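A minimal numeric sketch of that correspondence (the toy data, and the values of sigma and alpha, are assumptions for illustration): with a Gaussian likelihood of noise scale sigma and independent Laplace(0, b) priors on the coefficients, the negative log posterior is a positive multiple of the Lasso objective whenever b = sigma^2 / (n * alpha), so the two have the same minimizer.

```python
import numpy as np

# Illustrative toy setup; sigma, alpha, and the data are assumptions.
rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.0, -2.0, 0.0]) + 0.3 * rng.standard_normal(n)

sigma = 0.3                   # Gaussian noise scale in the likelihood
alpha = 0.1                   # Lasso regularization strength
b = sigma**2 / (n * alpha)    # matching Laplace prior scale

def lasso_objective(w):
    """(1/2n) * ||y - Xw||^2 + alpha * ||w||_1"""
    return np.sum((y - X @ w) ** 2) / (2 * n) + alpha * np.sum(np.abs(w))

def neg_log_posterior(w):
    """-log p(w | y) up to an additive constant:
    Gaussian likelihood N(Xw, sigma^2 I), Laplace(0, b) prior on each w_j."""
    return (np.sum((y - X @ w) ** 2) / (2 * sigma**2)
            + np.sum(np.abs(w)) / b)

# The two objectives are exactly proportional (factor n / sigma^2),
# so the Lasso point estimate *is* the MAP estimate under this prior.
w1, w2 = rng.standard_normal(p), rng.standard_normal(p)
ratio = ((neg_log_posterior(w1) - neg_log_posterior(w2))
         / (lasso_objective(w1) - lasso_objective(w2)))
print(np.isclose(ratio, n / sigma**2))  # → True
```

This also makes the earlier point concrete: the prior only penalizes, it never forces a coefficient to be identically zero with positive posterior probability, which is why the full posterior (as opposed to its mode) is not sparse.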
This is just the (intended) critique of utilitarianism itself, which says that the utility functions of others are (in aggregate) exactly what you should care about.
I have another possible explanation, which I think deserves a far greater "probability mass": images make scientific articles seem more plausible for (some of) the same reasons they make advertising or magazine articles seem more plausible -- i.e., pre-cognitive reasons which may have little to do with the articles' content being scientific. McCabe and Castel don't control for this, but it is somewhat supported by their comparison of their study with Weisberg's:
The simple addition of cognitive neuroscience explanations may affect people’s conscious
Luke -- your typology of ends reminds me of something I was reading recently by Jonathan Edwards. I know this is not an atheology post, and the Edwards work isn't particularly empirical, but I thought it might be an interesting antecedent nonetheless.
Opposing Bohr's interpretation.