Said Achmiz

Sure. Now, as far as I understand it, whether the extrapolated volition of humanity will even cohere is an open question (on any given extrapolation method; we set aside the technical question of selecting or constructing such a method).

So Eli Tyre’s claim seems to be something like: on [ all relevant / the most likely / otherwise appropriately selected ] extrapolation methods, (a) humanity’s EV will cohere, and (b) it will turn out to endorse the specific things described (dismantling of all governments, removing the supply of factory-farmed meat, dictating how people should raise their children).

Right?

And… you claim that the CEV of existing humans will want those things?

You don’t think that most humans would be opposed to having an AI dismantle their government, deprive them of affordable meat, and dictate how they can raise their children?

Er… yes, I am indeed familiar with that usage of the term “Friendly”. (I’ve been reading Less Wrong since before it was Less Wrong, you know; I read the Sequences as they were being posted.) My comment was intended precisely to invoke that “semi-technical term of art”; I was not referring to “friendliness” in the colloquial sense. (That is, in fact, why I used the capitalized term.)

Please consider the grandparent comment in light of the above.

Doesn’t this very answer show that an AI such as you describe would not be reasonably describable as “Friendly”, and that consequently any AI worthy of the term “Friendly” would not do any of the things you describe? (This is certainly my answer to your question!)

It also seems to strongly imply that mind uploading into some kind of classical artificial machine is possible, since it’s unlikely that all or even most of the classical properties of the brain are essential.

Could you say more about this? Why is this unlikely?

One man’s modus ponens is another man’s modus tollens. I agree that the LW-style decision-theory posts encourage this type of thinking, and you seem to infer that the high-quality reasoning in those posts implies that they give good intuitions about the philosophy of identity.

I draw the opposite conclusion from this: the fact that the decision-theory posts seem to work on the basis of a computationalist theory of identity makes me think worse of the decision-theory posts.

Strongly seconding this.

Perhaps we should try to clarify what we mean here. Cooking has a larger margin of error than baking; is that what you’re referring to? (If so, then I agree.) Or are we talking about being able to repeatably get a specific result (which is how I read the OP)?

Sure, but the point is that yeast itself is not a bacterium (it’s a fungus). But indeed, a sourdough starter contains both yeast and bacteria.
