Dmytry

Comments

"You might wish to read someone who disagrees with you:"

Quoting from

http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

To say that a system of any design is an “artificial intelligence”, we mean that it has goals which it tries to accomplish by acting in the world.

I had been thinking: could it be that a respected computer vision expert really believes that world intentionality will just emerge in the system? That would be pretty odd. Then I see it is his definition of AI here; it already presumes a robust implementation of world intentionality, which is precisely what a tool like an optimizing compiler lacks.

edit: and in advance of another objection: I know evolution can produce whatever the argument demands. Evolution, however, is a very messy and inefficient process for making very messy and inefficient solutions to problems nobody has ever even defined.

What I am certain of is that your provided argument does not support, or even strongly imply your stated thesis.

I know this. I am not making an argument here (or rather, I am trying not to). I'm stating my opinion, primarily on the presentation of the argument. If you want an argument, you can e.g. see what Hanson has to say about foom. It is deliberately this way. I am not some messiah hell-bent on rescuing you from some wrongness (that would be crazy).

value states of the world instead of states of their minds

Easier said than done. Valuing states of the world is hard; you have to rely on senses.
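
To make that concrete, here is a minimal toy sketch (all the names and numbers are mine, purely for illustration): the agent never touches the world state directly; whatever it nominally values about the world is computed from an estimate built out of noisy sensor readings.

```python
# Toy illustration: "valuing states of the world" still goes through the senses.
# The true world state is hidden from the agent; it only gets noisy readings.
import random

TRUE_TEMPERATURE = 20.0  # the actual world state (the agent never sees this)

def sensor_reading():
    """One noisy observation of the world state."""
    return TRUE_TEMPERATURE + random.gauss(0.0, 2.0)

def estimate_world_state(n_readings=100):
    """The agent's belief about the world: an average over noisy senses."""
    return sum(sensor_reading() for _ in range(n_readings)) / n_readings

def utility_of_world(estimated_temperature, target=21.0):
    """Nominally a value over world states, actually computed from the estimate."""
    return -abs(estimated_temperature - target)

print(utility_of_world(estimate_world_state()))
```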

Okay, then, you're right: the manner of presentation of the AI risk issue on LessWrong somehow makes a software developer respond with incredibly bad and unsubstantiated objections.

Why is it that when a bunch of people get together, they don't even try to evaluate the impression they make on one individual (except very abstractly)?

Precisely, thank you! I hate arguing such points. Just because you can say something in English does not make it a utility function in the mathematical sense. Furthermore, just because something sounds in English like a modification of a utility function does not mean that it is mathematically a modification of a utility function. Real-world intentionality seems to be a separate problem from making a system that figures out how to solve (mathematically defined) problems, and likely a very hard one, in the sense of being very difficult to define mathematically.
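
For contrast, here is what a utility function in the mathematical sense looks like in a toy, fully specified setting (my own made-up example, not something from the paper): the state space and the map from states to numbers are both written down explicitly, which is exactly what an English phrase like "values states of the world" does not give you.

```python
# A utility function in the mathematical sense: an explicit map U: S -> R over
# a formally defined state space S, which an optimizer can then maximize.
from itertools import product

# A tiny, fully specified state space: positions on a 3x3 grid.
STATES = list(product(range(3), range(3)))

def utility(state):
    """U: S -> R, defined for every element of the formal state space."""
    x, y = state
    return -(abs(x - 2) + abs(y - 2))  # prefer states near the corner (2, 2)

best_state = max(STATES, key=utility)
print(best_state)  # (2, 2)

# The English sentence "the AI values states of the real world" supplies neither
# the state space S nor the map U; producing an object like the one above is
# the hard, unsolved part.
```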

With all of them? How so?

If even widely read bloggers like EY don't qualify to affect your opinions, it sounds as though you're ignoring almost everyone.

I think you discarded one of the conditionals. I read Bruce Schneier's blog, or Paul Graham's. Furthermore, it is not about disagreement with the notion of AI risk; it's about keeping the data non-cherry-picked, or at least less cherry-picked.

Thanks. Glad you like it. I did put some work into it. I also have a habit of maintaining epistemic hygiene by not generating a hypothesis first and then cherry-picking examples in support of it, but that gets a lot of flak outside scientific or engineering circles.

To someone that wants to personally exist for a long time, it becomes very relevant what part humans have in the future.

I think this is an awesome point I overlooked. That talk of the future of mankind, that assigning of moral value to future humans but zero to the AI itself... it does actually make a lot more sense in the context of self-preservation.
