Well, I'm a noob, but I don't really understand why AI is so dangerous. If we created a superintelligent AI on a supercomputer (or a laptop, or whatever), even if it was malevolent, how could it take over the world or kill all humans? It would just be a software program on a computer. Assuming we didn't give it the capacity to manufacture things, how would it make all those microscopic killbots anyway?
I really struggled for a while with reconciling my intuitive feeling of free will with determinism. Finally, when I was about 18, my beliefs settled into (I think) exactly this way of thinking. My sister still doesn't accept determinism. When we argue, I always emphasize: "Just because reality is deterministic doesn't mean we stop choosing!"