All of ford_prefect42's Comments + Replies

Compartmentalizing: Effective Altruism and Abortion

I haven't read all the comments on this post, and I am new to LW generally, so if I say anything that's already been gone over, bear with me.

The argument that abortion is bad on QALY grounds rests on certain inherent assumptions. The first is that there's "room" in the system for additional people. If the addition of a new person subtracts from the quality of life of others, then that has to be factored in.

Another aspect that must be factored into this analysis is somewhat more obscure: moral hazard. "Slippery slope" is a fallacy, howeve…

[Link] Robots Program People

To some extent, they already are. Google and Facebook have had measurable impacts on neural structures and human behavior. There are also products like "EmoSpark" that are designed to deliberately manipulate our emotional state. How well they do this remains an open question.

Philosophical differences

True enough. I hadn't read that one either, and, having joined a few days ago, I have read very little of the content here. This seemed like a light, standalone topic to jump in on.

This second article, however, really does address the weaknesses in my thought process and clarifies the philosophical difficulty the OP is concerned with.

Philosophical differences

I had not, and I will avoid that in the future. However, it has very little bearing on my overall post. Please ignore the single sentence that references works of fiction.

0 · hairyfigment · 7y
I'm not quite sure how to put this, but there are many other posts on the site [http://lesswrong.com/lw/y3/value_is_fragile/] which you seem unaware of.
0 · Davidmanheim · 7y
Also all of the framings that are implied by those works? And the dichotomy that you propose? You shouldn't just read it; think about how it has warped your perspective on AI risks. That's the point.
Philosophical differences

I am of the opinion that you're probably right: AI will likely be the end of humanity. I am glad to see others pondering this risk.

However, I would like to mention that there are two possible/likely modes of that end coming about.

First is the "Terminator" future, and second is the "WALL-E" future. The risk that AI war machines will destroy humanity is a legitimate concern, given "autonomous drones" and other developmental projects. The other side has a LOT more projects and progress: Siri, EmoSpark, automated fac…

2 · Davidmanheim · 7y
I'm assuming you haven't read this: http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/