nickLW

Wei, you and others here interested in my opinions on this topic would benefit from understanding more about where I'm coming from, which you can mainly do by reading my old essays (especially the three philosophy essays I've just linked to on Unenumerated). It's a very different worldview from the typical "Less Wrong" worldview: based far more on accumulated knowledge and far less on superficial hyper-rationality. You can ask me any questions you have there, as I don't typically hang out here. As for your questions on this topic:
(1) There is insufficient evidence to distinguish it from an arbitrarily low probability.
(2) To state a probability would be an...
The Bureau of Labor Statistics reports 728,000 lawyers in the U.S.
I would have thought it obvious that I was talking about lawyers who have been developing law for at least a millennium, not merely currently living lawyers in one particular country. Oh well.
Since my posts seem to be read so carelessly, I will no longer be posting on this thread. I highly recommend that folks who want to learn more about where I'm coming from visit my blog, Unenumerated. Also, to learn more about the evolutionary emergence of ethical and legal rules, I highly recommend Hayek -- The Fatal Conceit makes a good starting point.
I only have time for a short reply:
(1) I'd rephrase the above to say that computer security is among the two most important things one can study with regard to this alleged threat.
(2) The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart as or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's...
The Selfish Gene itself is indeed quite sufficient to convince most thinking young people that evolution provides a far better explanation of how we got to be the way we are. It communicated the core theories of neo-Darwinism that gave rise to evolutionary psychology better than anybody else had, by stating bluntly the Copernican shift from group or individual selection to gene selection. Indeed, I'd still recommend it as the starting point for anybody interested in wading into the field of evolutionary psychology: you should understand the fairly elegant underlying theory before doing the deep dive into what is now a far less elegant and organized study (in part because many...
I am far more confident in it than I am in the AGI-is-important argument. Which of course isn't anywhere close to saying that I am highly confident in it. Just that the evidence for AGI-is-unimportant far outweighs that for AGI-is-important.
All of these kinds of futuristic speculations are stated with false certainty -- especially the AGI-is-very-important argument, which is usually stated with a level of certainty that is incredible for an imaginary construct. As for my evidence, I provide it in the above "see here" link -- extensive economic observations have been done on the benefits of specialization, for example, and we have extensive experience in computer science with applying specialized vs. generalized algorithms to problems and assessing their relative efficiency. That vast amount of real-world evidence far outweighs the mere speculative imagination that undergirds the AGI-is-very-important argument.
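To make that relative-efficiency point concrete, here is a minimal sketch of my own (not part of the original comment; the data sizes and cutoff are arbitrary assumptions): a membership test specialized to sorted data versus a fully general linear scan.

```python
# Illustrative sketch only: an algorithm specialized to its domain (here,
# exploiting the fact that the data is sorted) versus the general approach
# that makes no assumptions about the input.
import bisect
import random
import timeit

data = sorted(random.randrange(10_000_000) for _ in range(500_000))
queries = [random.randrange(10_000_000) for _ in range(500)]

def general_membership(xs, q):
    """General: works on any sequence, scans element by element, O(n)."""
    return q in xs

def specialized_membership(sorted_xs, q):
    """Specialized: requires sorted input, binary search, O(log n)."""
    i = bisect.bisect_left(sorted_xs, q)
    return i < len(sorted_xs) and sorted_xs[i] == q

t_general = timeit.timeit(lambda: [general_membership(data, q) for q in queries], number=1)
t_special = timeit.timeit(lambda: [specialized_membership(data, q) for q in queries], number=1)
print(f"general scan: {t_general:.3f}s  specialized binary search: {t_special:.4f}s")
```

The asymptotic gap here (O(n) versus O(log n)) is the same kind of advantage specialized routines typically gain once the structure of a problem domain is known and exploited.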
When some day some people (or some things) build an AGI, human-like or otherwise, it will at that time be extremely inferior to then-existing algorithms for any particular task (including any kind of learning or choice, including learning or choice of algorithms). Culture, including both technology and morality, will have changed beyond any of our recognitions long before that. Humans will already have been obsoleted for all jobs except, probably, those that for emotional reasons require interaction with another human (there's already a growth trend in such jobs today).
The robot apocalypse, in other words, will arrive and is arriving one algorithm at a time. It's a process we can...
Skill at making such choices is itself a specialty, and doesn't mean you'll be good at other things. Indeed, the ability to properly choose algorithms in one problem domain often doesn't make you an expert at choosing them for a different problem domain. And as the software economy becomes more sophisticated these distinctions will grow ever sharper (basic Adam Smith here -- the division of labor grows with the size of the market). Such software choosers will come in dazzling variety: they, like other useful or threatening software, will not be general purpose. And who will choose the choosers? No sentient entity at all -- they'll be...
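As a hedged sketch of what such a narrow "chooser" might look like (my illustration, not the commenter's; the cutoff values are arbitrary assumptions): it selects between a specialized and a general sorting routine from crude cues about one problem domain, and its heuristics mean nothing outside that domain.

```python
# Hypothetical sketch of a narrow "algorithm chooser" for one problem domain
# (sorting sequences of small non-negative integers); the dispatch heuristics
# are assumptions for illustration, not a general-purpose decision procedure.
def counting_sort(xs, max_value):
    """Specialized O(n + k) sort for integers in [0, max_value]."""
    counts = [0] * (max_value + 1)
    for x in xs:
        counts[x] += 1
    return [v for v, c in enumerate(counts) for _ in range(c)]

def choose_and_sort(xs):
    """Pick a sorting strategy from crude, domain-specific cues."""
    if not xs:
        return []
    if all(isinstance(x, int) and 0 <= x < 1024 for x in xs):
        # Narrow value range: the specialized counting sort applies.
        return counting_sort(xs, 1023)
    # Otherwise fall back to the general-purpose comparison sort.
    return sorted(xs)

print(choose_and_sort([5, 3, 9, 3]))        # routed to counting_sort
print(choose_and_sort([2.5, -1.0, 3.25]))   # falls back to sorted()
```

The chooser's value comes entirely from knowledge of its one domain; moving it to a different domain would require different cues and different specialized routines.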
Indeed. As to why I find extreme consequences from general AI highly unlikely, see here. Alas, my main reason is partly buried in the comments (I really need to do a new post on this subject). It can be summarized as follows: for basic reasons of economics and computer science, specialized algorithms are generally far superior to general ones. Specialized algorithms are what we should hope for or fear, and their positive and negative consequences occur a little at a time -- and have been occurring for a long time already, so we have many actual real-world observations to go by. They can be addressed specifically, each passing tests 1-3, so that we can solve these problems and achieve these hopes one specialized task at a time, as well as induce general theories from these experiences (e.g. of security), without getting sucked into any of the near-infinity of Pascal scams one could dream up about the future of computing and robotics.
ideal reasoners are not supposed to disagree
My ideal thinkers do disagree, even with themselves. Especially about areas as radically uncertain as this.