All of Leon's Comments + Replies

Opposing Bohr's interpretation.

As does Chesterton, less explicitly:

Mere light sophistry is the thing that I happen to despise most of all things, and it is perhaps a wholesome fact that this is the thing of which I am generally accused. I know nothing so contemptible as a mere paradox; a mere ingenious defence of the indefensible.

and at length.

I get the impression that he (thankfully!) eased off on that particular template as time went on.

I suspect most self-identified communists would baulk at the description of their ideology as "complete state control of many facets of life".

Here's how I think about the distinction on a meta-level:

"It is best to act for the greater good (and acting for the greater good often requires being awesome)."


"It is best to be an awesome person (and awesome people will consider the greater good)."

where "acting for the greater good" means "having one's own utility function in sync with the aggregate utility function of all relevant agents" and "awesome" means "having one's own terminal goals in sync with 'deep' terminal goals (possibly inherent in being whatever one is)" (e.g. Sam Harris/Aristotle-style 'flourishing').

So arete, then?

Cool; I take that back. Sorry for not reading closely enough.

Ah, good point. It's like the prior, considered as a regularizer, is too "soft" to encode the constraint we want.

A Bayesian could respond that we rarely actually want sparse solutions -- in what situation is a physical parameter identically zero? -- but rather solutions which have many near-zeroes with high probability. The posterior would satisfy this, I think. In this sense a Bayesian could justify the Laplace prior as approximating a so-called "spike-and-slab" prior (which I believe leads to combinatorial intractability similar to th…

See this comment. You actually do get sparse solutions in the scenario I proposed.

Many L1 constraint-based algorithms (for example the LASSO) can be interpreted as producing maximum a posteriori Bayesian point estimates with Laplace (= double exponential) priors on the coefficients.

Yes, but in this setting maximum a posteriori (MAP) estimation doesn't make any sense from a Bayesian perspective. MAP is supposed to be a point estimate of the posterior, but in this case the MAP solution will be sparse, whereas the posterior given a Laplace prior places zero mass on sparse solutions. So the MAP estimate doesn't even qualitatively approximate the posterior.
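To make the contrast concrete, here's a toy 1-D sketch (my own illustration, not from the thread): with a Gaussian likelihood and a Laplace prior, the MAP estimate is the soft-thresholding/LASSO solution and can be exactly zero, while the posterior mean (approximated here by brute-force grid integration) is shrunk toward zero but never lands exactly on it.

```python
import math

# Toy 1-D model (illustrative): y ~ N(theta, 1) with a Laplace prior on theta,
# i.e. log-prior = -lam * |theta| + const.
lam = 2.0   # prior rate, playing the role of the L1 penalty
y = 0.5     # a single observation

# MAP estimate: argmax of -(y - t)**2 / 2 - lam * |t|, which is the
# soft-thresholding (1-D LASSO) solution -- exactly zero whenever |y| <= lam.
map_est = math.copysign(max(abs(y) - lam, 0.0), y)

# Posterior mean, approximated by grid integration over [-10, 10].
grid = [i * 0.001 - 10.0 for i in range(20001)]
weights = [math.exp(-0.5 * (y - t) ** 2 - lam * abs(t)) for t in grid]
post_mean = sum(t * w for t, w in zip(grid, weights)) / sum(weights)

print(map_est)    # 0.0 -- the MAP point estimate is sparse
print(post_mean)  # small but nonzero -- the posterior itself is not
```

So the point estimate sits at an atom (zero) that the continuous posterior assigns no mass to, which is exactly the qualitative mismatch described above.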

This is just the (intended) critique of utilitarianism itself, which says that the utility functions of others are (in aggregate) exactly what you should care about.

Utilitarianism doesn't say that. Maybe some variant says that, but utilitarianism in general merely says that I should have a single self-consistent utility function of my own, which is free to assign whatever weights to others. ETA: PhilGoetz says otherwise. I believe that he is right; he's an expert in the subject matter. I am surprised and confused.

What does "intrinsically teleological" mean?

What about mentioning the St. Petersburg paradox? This is a pretty striking issue for expected utility maximization, IMHO.

Added. See here.

I concur. Plus, the St. Petersburg paradox was the impetus for Daniel Bernoulli's invention of the concept of utility.

The St. Petersburg paradox actually sounds to me a lot like Pascal's Mugging. That is, you are offered a very small chance at a very large amount of utility (or, in the case of Pascal's Mugging, of not losing a large amount of utility), with a very high expected value if you accept the deal. But because the deal has such a low chance of paying out, a smart person will turn it down, even though turning it down has less expected value than accepting.
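As a quick sketch of the numbers (my own illustration): the expected monetary value of the St. Petersburg game diverges, while under Bernoulli's log utility the expected utility converges to 2·ln 2.

```python
import math

# St. Petersburg game: flip a fair coin until it lands heads; if the first
# head is on flip n, the payoff is 2**n.

N = 50  # truncate the (infinite) series for illustration

# Expected monetary value: each term is (1/2**n) * 2**n = 1, so the partial
# sum equals N and grows without bound as N increases.
ev_partial = sum((0.5 ** n) * (2 ** n) for n in range(1, N + 1))

# Bernoulli's resolution: with log utility the series converges,
# sum over n >= 1 of (1/2**n) * ln(2**n) = 2 * ln(2) ~= 1.386.
eu = sum((0.5 ** n) * math.log(2 ** n) for n in range(1, N + 1))

print(ev_partial)  # 50.0 -- keeps growing as N grows
print(eu)          # ~1.386, essentially 2 * ln(2)
```

The divergent sum is what makes the naive expected-value argument say you should pay any finite price to play; a concave utility caps what the game is worth.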

I have another possible explanation, which I think deserves a far greater "probability mass": images make scientific articles seem more plausible for (some of) the same reasons they make advertising or magazine articles seem more plausible -- i.e., precognitive reasons which may have little to do with the articles' content being scientific. McCabe and Castel don't control for this, but it is somewhat supported by their comparison of their study with Weisberg's:

The simple addition of cognitive neuroscience explanations may affect people’s conscious …

Luke -- your typology of ends reminds me of something I was reading recently by Jonathan Edwards. I know this is not an atheology post, and the Edwards work isn't particularly empirical, but I thought it might be an interesting antecedent besides.