Wiener's book describes the problem, and in the same section he states that he holds little hope for the social sciences becoming as exact and prescriptive as the hard sciences.
I believe that the singularitarian view somewhat contradicts this.
I believe that the answer is to create more of the kinds of minds that we like to be surrounded by, and fewer of the kinds we dislike being surrounded by.
Most of us dislike being surrounded by intelligent sociopaths who are ready to pounce on any weakness of ours, to exploit, rob, o...
So, in any case, if you stand up to the system, and/or are "caught" by the system, the system will give you nothing but pure sociopathy to deal with, except possibly for your interaction with those few "independent" jurors who are nonetheless "selected" by the unconstitutional, unlawful means known as "subject-matter voir dire." The system of injustice and oppression that we currently have in the USA is a result of this grotesque "jury selection" process. (This process explains how randomly-selected juror...
Hierarchical, Contextual, Rationally-Prioritized Dishonesty
This is an outstanding article, and it closely relates to my overall interest in LessWrong.
I'm convinced that lying to someone who is evil, and who obviously has immediate evil intentions, is morally optimal. This seems to be an obvious implication of basic logic. (i.e., You have no obligation to tell the Nazis who are looking for Anne Frank that she's hiding in your attic. You have no obligation to tell the Fugitive Slave Hunter that your neighbor is a member of the Underground Railroad. ...You have no ...
The ultimate result of shielding men from the effects of folly is to fill the world with fools.
— Herbert Spencer (1820-1903), "State Tamperings with Money and Banks" (1891)
I think Spooner got it right:
If the jury have no right to judge of the justice of a law of the government, they plainly can do nothing to protect the people against the oppressions of the government; for there are no oppressions which the government may not authorize by law.
— Lysander Spooner, "An Essay on the Trial by Jury"
There is legitimate law, but not once the practice of law is licensed and the system has been recursively destroyed by sociopaths, as our current system of law has been. At that point, perverse incentives and the punishment ...
The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
Except that in exponentially accelerating, computation-driven timelines, decades are compressed into minutes after the knee of the exponential (see the toy calculation below). The extra time a good cook has isn't long.
Let's hope that we're not still paying rent then, or we might find ourselves homeless.
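To make "compressed into minutes" concrete, here's a toy calculation of my own (not from the original comment; the 10-year first doubling is an arbitrary assumption): if each successive capability doubling takes half the wall-clock time of the previous one, then all remaining doublings fit inside a finite, short window.

```python
# Toy model of a "knee": each capability doubling takes half as long
# as the one before it (hyperbolic, not merely exponential, growth).
# Assumes, arbitrarily, that the first doubling takes 10 years.
first_doubling_years = 10.0

# Total time for the first 60 doublings: a geometric series that
# converges to 2 * first_doubling_years no matter how far you run it.
total_years = sum(first_doubling_years / 2**k for k in range(60))
print(f"{total_years:.6f} years")  # -> ~20 years for a 2**60x gain
```

Under these made-up parameters, a 2^60-fold capability gain arrives in about twenty years total, with the last dozens of doublings squeezed into the final minutes.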
If you're right (and you may well be), then I view that as a sad commentary on the state of human education, and I view tech-assisted self-education as a way of optimizing the inherently wasteful "hazing" system you describe. I think what you say is likely true for a high percentage of classes, but untrue for a small minority of highly valuable classes.
Also, the university atmosphere is good for social networking, which is one of the primary values of going to MIT or Yale.
Probably true, but I agree with Peter Voss: I don't think malevolence of any kind is the most efficient use of an AGI's time and resources, and I think an AGI has nothing to gain from it. I don't think the dystopia I posited is the most likely outcome of superintelligence. However, while we are on the subject of the forms a malevolent AGI might take, I do think this is the type of malevolence most likely to allow the malevolent AGI to retain a positive self-image.
(Much the way environmentalists can feel better about introducing sterile males into crop-pest p...
i.e. not my statistical likelihood, i.e. nice try, but no one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph).
I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.
If sociopath-driven collectivism were easy for social systems to detect and neutralize, we probably wouldn't give so much of our wealt...
An interesting question to ask is "how many people who favor markets understand the best arguments against them, and vice versa?" Because we're dealing with humans here, my suspicion is that if there's a lot of disagreement, it stems largely from unwillingness to consider, and unfamiliarity with, the other side. So, in that regard you might be right.
Then again, we're supposed to be rational, and willing to change our minds if evidence supports that change, and perhaps some of us are actually capable of such a thing.
It's a debate wort...
"how generalization from fictional evidence is bad"
I don't think this is a universal rule. I think this is very often true because humans tend to generalize so poorly, tend to have harmful biases based on evolution, and tend to write and read bad (overly emotional, irrational, poorly-mapped-to-reality) fiction.
Concepts can come from anywhere. However, most fiction maps poorly to reality. If you're writing nonfiction, trying to map to reality itself, you're likely to succeed in at least getting a few data points from reality co...
I strongly agree that universal, singular, true malevolent AGI doesn't make for much of a Hollywood movie, primarily due to points 6 and 7.
What is far more interesting is an ecology of superintelligences that have conflicting goals, but that have agreed to be governed by Enlightenment values. Of course, some may be smart enough (or stupid enough) to try subterfuge, and some may be enough smarter than the others to perform a subterfuge and get away with it. There can be a relative timeline where nearby ultra-intelligent machines compete with each other, or...
I don't know; in terms of dystopia, I think that an AGI might decide to "phase us out" prior to the singularity, if it were really malevolent. Make a bunch of attractive but sterile female robots, and a bunch of attractive but sterile male robots. Keep people busy with sex until they die of old age. A "gentle good night" abolition of humanity that isn't much worse (or is way better) than what they had experienced for 50M years.
Releasing sterile, attractive mates into a population is a good "low ecological impact" way of decreasing it. Although, why would a superintelligence be opposed to _all_ humans? I find that somewhat unlikely, given a self-improving design.
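The population-suppression point here is essentially the sterile insect technique. A minimal simulation sketch (my own illustration, not from the thread; the growth factor and decoy count are made-up parameters) shows why flooding a population with sterile mates drives it down even when its intrinsic growth rate is above replacement:

```python
# Sterile-mate suppression, toy model: each generation, an individual
# reproduces only if its randomly chosen mate is fertile. "decoys" is
# the fixed number of sterile mates released each generation.
def next_generation(pop: float, decoys: float, growth: float = 1.5) -> float:
    fertile_fraction = pop / (pop + decoys)  # chance a chosen mate is fertile
    return pop * growth * fertile_fraction

pop = 10_000.0
for generation in range(25):
    pop = next_generation(pop, decoys=15_000)

print(round(pop))  # collapses toward 0 despite a growth factor > 1
```

The collapse is self-reinforcing: as the real population shrinks, a fixed release of decoys makes up an ever-larger share of potential mates, so the effective birth rate keeps falling.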
Philip K. Dick's "Second Variety" is far more representative of our likelihood of survival against a consistent Terminator-level antagonist / AGI. It's still worth reading, as is Harlan Ellison's story "Soldier," which Terminator is based on. The Terminator also wouldn't likely use a firearm to try to kill Sarah Connor, as xkcd notes :) ...but it also wouldn't use a drone.
It would do what Richard Kuklinski did: make friends with her, get close enough to spray her with cyanide solution (odorless, undetectable, she seeming...
Continuing on, Wiener writes:
...