Eliezer: I've enjoyed the extended physics thread, and it has garnered a good number of interesting comments. The posts with more technical content (physics, Turing machines, decision theory) seem to attract a higher standard of comment and to bring in people with considerable technical knowledge in those areas. The comments on the non-technical posts are somewhat weaker. However, I think both sorts of posts have frequently been excellent.

Having been impressed with your posts on rationality, philosophy of science, and physics, I look forward to posts on the transhumanist issues you often allude to. Here are some questions your writing in this area raises:

  1. Have you convinced any other AI theorists (or cognitive scientists) that AI is as dangerous as you suggest?
  2. Where do your priors come from for the structure of the space of "minds in general"? Couldn't it be that this space is actually quite restricted, with many "conceivable" minds not being physically possible? (This would line up with impossibility/limitation results in mathematical logic, complexity theory, public choice / voting theory, etc.)
  3. Where do your priors come from for the difficulty of intelligence improvements at increasing levels of intelligence? You can't have an intelligence explosion if, beyond some not-too-high level, each further enhancement becomes much more difficult than the last. Again, in light of limitation results, why assume one way rather than the other? (A toy model of this point follows the list.)
  4. If it is rational to assign only a low probability to AI being as dangerous as you think, then it seems we are in a Pascal's Wager type situation. I'm skeptical that we should act on Pascal's Wagers. Can you show that this isn't a Pascal-type situation? (A toy expected-value sketch of what I mean also follows the list.)
  5. Some transhumanists want AI or nanotechnology because of the supposed dramatic improvements they will bring to the quality of human life. I can accept that these technologies could improve things by eradicating extreme poverty and diseases like malaria and HIV, and by preventing natural disasters, nuclear war, and other abominations. But beyond that, it is not obvious to me that they would improve human life all that much. This is not status-quo bias: I'm skeptical that the lives of present-day Americans are much better than those of Americans a hundred years ago (setting aside civil rights and the prevention of disease). To pick some specific examples: I don't think my higher intelligence, greater knowledge, and more rational set of beliefs make my life better than the lives of various people I know. At most they have some incidental benefits (a better SAT score makes college and jobs easier to get), but they certainly don't seem to improve quality of life intrinsically. (Also, it might be that, for reasons of evolutionary psychology, humans can't have satisfying lives without genuine risks and the threat of death. Something that doesn't need those risks in order to thrive is not a human, and so I'd be indifferent to its existence.)
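To make question 3 concrete, here is a minimal sketch of the dependence I have in mind (a toy model of my own; the cost curves are invented assumptions, not anything you've claimed):

```python
# Toy model of recursive self-improvement. The difficulty functions
# below are invented for illustration; nothing here models a real system.

def self_improve(difficulty, rounds=100, capability=1.0):
    """Each round, progress = current capability / current difficulty."""
    for _ in range(rounds):
        capability += capability / difficulty(capability)
    return capability

# If difficulty stays flat, capability doubles every round: an explosion.
print(self_improve(lambda c: 1.0))     # ~1.3e30
# If difficulty grows quadratically with capability, growth stalls out.
print(self_improve(lambda c: c ** 2))  # ~14
```

Whether the real curve looks more like the first regime or the second is exactly what I'm asking your priors to adjudicate.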
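And to show why question 4 worries me, a toy expected-value comparison (every number here is made up purely for illustration):

```python
# Toy Pascal's-Wager arithmetic; all figures are invented.
p_catastrophe  = 1e-9   # tiny probability the danger is real
harm_if_real   = 1e12   # astronomically large loss if it is
cost_of_acting = 1.0    # modest cost of taking precautions

expected_loss_if_we_ignore = p_catastrophe * harm_if_real  # 1000.0

# A tiny probability times a large enough stake dominates any modest
# cost of acting. That is the wager structure I'm skeptical of: the
# conclusion is driven by the size of the stake, not by the evidence.
print(cost_of_acting, expected_loss_if_we_ignore)
```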