Said Achmiz

Comments

WordPress Destroys Editing Process, Seeking Alternatives

Yep, a fair point. It only happens with Naval Gazing (not my personal blog), for reasons I don’t think would apply to Zvi’s blog, but until the bug that causes that is fixed, it’s a risk.

WordPress Destroys Editing Process, Seeking Alternatives

Zvi, I host a couple of blogs (such as my own blog and Naval Gazing) on my custom wiki platform. If you’re not able to find another alternative that suits you better, I’d be happy to host your blog as well.

Pros:

  • I won’t ever add ‘features’ like “a new editor that doesn’t work”
  • Personal support / assistance
  • A massive array of features, from LaTeX to LessWrong comment thread transclusion to embedded graphs / charts to Git integration to … lots of stuff

Cons:

  • No WYSIWYG editor
  • Less ‘polished’ than WordPress in various ways
  • Definitely not a drop-in replacement and cannot seamlessly transfer over old blog contents

Attacking enlightenment

The conversation did not take place, so there are no logs to produce.

Jam is obsolete

Jam is tastier than frozen fruit. This, as far as I can see, ends the debate. (And if your jam is not tastier than frozen fruit, then you’re doing jam wrong.)

(… you are making the jam yourself, of course—aren’t you? Certainly there is little point in comparing to store-bought jam.)

Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle

I don’t think that’s right. As I mention in another comment, Dennett’s notion of the intentional stance is relevant here. More specifically, it provides us with a way to distinguish between the cases that Zack intended to include in his concept of “algorithmic intent”, and cases like the “catch more vitamin D” example that you mention. To wit:

The positing of “algorithmic intent” is appropriate in precisely those cases where taking the intentional stance is appropriate (i.e., where—for humans—non-trivial gains in compression of description of a given agent’s behavior may be made by treating the agent’s behavior as intentional [i.e., directed toward some posited goal]), regardless of whether the agent’s conscious mind (if any!) is involved in any relevant decision loops.

Conversely, the positing of “algorithmic intent” is not appropriate in those cases where the design stance or the physical stance suffice (i.e., where no meaningful gains in compression of description of a given agent’s behavior may be made by treating the agent’s behavior as intentional [i.e., directed toward some posited goal]).

Clearly, the “catch more vitamin D” case falls into the latter category, and therefore the term “algorithmic intent” could not apply to it.

Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle

This discussion would be incomplete without a mention of Daniel Dennett’s notion of the intentional stance.

The Ghost of Joseph Weber

If you would like to hear this post read aloud, try this video.

Meta: the video didn’t make it through the cross-posting, it seems. (I am not sure if Less Wrong supports video embedding; I think it may not. You might want to just link the video.)

The New Frontpage Design & Opening Tag Creation!

For instance, ReadTheSequences.com has a slightly off-white background for just this reason.

The Illusion of Ethical Progress

I mean that chairs and apples are less universal than the Universal Law of Gravitation.

In what way?

That the law of gravitation holds is a fact about the universe. That chairs exist is also a fact about the universe.

What does “less universal” mean? Does it mean something like “is applicable or relevant in a smaller volume of the observable universe”? If humanity spreads throughout the cosmos, and if we bring chairs with us everywhere we go, will chairs and gravitation thereby become equally “universal” (or, at least, more equal in “universality” than they are now)?

In any case this comparison is a red herring. The relevant comparison is not “chairs vs. gravity”, it’s “chairs vs. ethics”—or, more to the point, “guns vs. ethics”, “tanks vs. ethics”, “food vs. ethics”, “laws vs. ethics”, “governments vs. ethics”, “money vs. ethics”, “prestige vs. ethics”, etc. No vague allusion to “universality” will help you in any of these cases, since all of the things I’ve just listed are (so far as we know, anyway) approximately equally localized—namely, they are all facts about what exists and happens on the surface of one particular planet.

The Illusion of Ethical Progress

Perhaps this is not central to the post, but I have always found that bit from Pratchett to be unbelievably inane. Truly, it grinds my gears to see it quoted, in wise tones, as if it expresses some profound truth; and doubly so, to see it quoted on Less Wrong.

Consider the following substitution:

Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of apples, one molecule of chairs.

Right? There aren’t any chair molecules, are there? You won’t find apples on the Periodic Table, will you? So what? Do chairs and apples not exist? Are they somehow not real, or less real than… well, than what…? Hydrogen? Methane? Should we adjust our attitude toward apples, or chairs, or paintings, or tigers, on the basis of this insight? What, actually, is to be concluded from this?

Anyway, this is old news. The point, if you like, is that of course ‘physics’ ‘contains’ ethics, and improvements in ethics; these things are facts about people, and the goings-on in people’s brains—which are (dualistic views aside) very much “contained in physics”. Of course, you could argue otherwise[1], but you must do it without recourse to any such “greedy-reductionist”, “grind down the universe” arguments…


  1. E.g., non-cognitivism, or error theory. I am sympathetic to certain arguments in this broad class; but note that they have nothing much to do with the question of whether [fundamental] ‘physics’ ‘contains’ ethics or not. ↩︎
