
@Eliezer: Sophiesdad, you should be aware that I'm not likely to take your advice, or even take it seriously. You may as well stop wasting the effort.

Noted. No more posts from me.

@sophiesdad: Autodidacticism may be a superior approach for the education of certain individuals, but it allows the individual to avoid one element crucial to production: discipline.

@Pyramid Head: Eliezer (who, in my opinion, doesn't lack discipline)

My comment about discipline was not meant to be inflammatory, nor even especially critical. Rather, it was meant to be descriptive of one aspect of autodidacticism. By way of comparison, suppose that Mr. Yudkowsky were working toward his PhD at (say) the University of Great Computer Scientists, with "Development of Superhuman Friendly Artificial Intelligence" as his chosen dissertation topic. After seven years, he reports to his advisers and shows them his writings on Bayesian probability, quantum mechanics, science fiction stories, pure fiction, and views on philosophical ideas widely published for centuries. They ask, "Where is your proposed design for FAI?" He would not receive his degree. Thomas Bayes described Bayesian probability adequately. Anyone who cannot understand his writings (me, for example) is not qualified to design an FAI, so the fact that Eliezer can help the common man understand them is meaningless with regard to his "PhD work". For the same reason, Mr. Yudkowsky's wonderful series on quantum mechanics, which I have thoroughly enjoyed, is meaningless so far as advancing new knowledge or recruiting those with adequate brainage to work on FAI. It is entertaining, and particularly it is SELF-ENTERTAINING, but it is not reflective of the discipline necessary to accomplish the stated goal.

Of course, the answer to this is that disciplined, conventional educational and research techniques are what he is trying to avoid. He is right on schedule, but the technique is so brilliant in its conception that no one else can recognize it.

I don't know you, Eliezer, and I will grant without knowing you that you are a far more special creation than I, and perhaps POTENTIALLY in the lineage of the Newtons, etc. But what if you have a fatal accident tomorrow? What if you have one of the recessive diseases associated with high-intelligence Ashkenazi Jews and your life ends while you're playing games? Will there be any record of what you did? Will someone else be able to stand on your shoulders? Will mankind be any closer to FAI?

@Pyramid Head: I don't see how he can hope to save the world by writing blog posts...

Ditto. Autodidacticism may be a superior approach for the education of certain individuals, but it allows the individual to avoid one element crucial to production: discipline. Mr. Yudkowsky's approach, his resistance to working with others, and his view that it is his job to save the world and that no one else can do it all suggest an element of savantism. Hardly a quality one would want in a superhuman intelligence.

I, too, enjoy his writing, but the fact that he discovered 200-plus-year-old Bayesian probability only seven years ago, and claims that everything he did before that is meaningless, shows the importance of the input of learned associates.

Eliezer, I truly want you, along with others fortunate enough to have the neuronal stuff that produces such uniquely gifted brains, to save me. Please get to work, or I'll give you a spanking.

@Eliezer Yudkowsky said: Spindizzy and sophiesdad, I've spent quite a while ramming headlong into the problem of preventing the end of the world. Doing things the obvious way has a great deal to be said for it; but it's been slow going, and some help would be nice. Research help, in particular, seems to me to probably require someone to read all this stuff at the age of 15 and then study on their own for 7 years after that, so I figured I'd better get started on the writing now.

I have posted this before without answer, but I'll try again. You are working alone while seeking an "assistant" who may currently be 15, offering subsistence wages (see Singularity Institute). While I am aware that there is great fear of government involvement in FAI, governments could bring together a new "Manhattan Project" of the greatest minds in the fields of engineering, mathematics, and computer science. If you alone knew how to handle all the aspects, you would already be doing it, instead of thinking about it. Surely you must believe that a good result is more important than personal credit for "preventing the end of the world"? Einstein himself did not believe that the power of the atom could be harnessed by humans, yet when he was combined with a TEAM, it was done in three years.

Secondly, you have no way to know what other governments may be doing in clandestine AI research. Why not strive to be first? Yes, I'm well aware that the initial outcome of the Manhattan Project was far from "friendly", but my cousin in the CIA tells me the US government has first refusal on any technology that might be vital to national security. I think AI qualifies, so why not use their resources and power? Someone will, and they might not have your "friendly" persuasions.

I second spindizzy, yet hope that something major is happening.

I don't think Deep Blue "knew" that it was trying to beat Garry Kasparov at chess. It was programmed to generate the possible alternative moves and evaluate the outcome of each in terms of the eventual goal of mating Kasparov's king. The human brain is elegant, but it's not fast, and unquestionably no human could have evaluated all the possible moves within the time limit. Deep Blue is quaint compared to the Universal Machines of the near future. David Deutsch claims that quantum computers will be able to factor numbers that would take a human more time than is known to have existed in the history of the universe. Such a machine won't have superhuman intelligence, but it will be fast. Imagine if its programs were recursively self-improving.
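Since what I'm describing is a game-tree search, here is a minimal sketch of the idea in Python: minimax with alpha-beta pruning over a toy tree of static position scores. This is only an illustration of the general technique, assuming a hand-built tree and made-up scores; it is not Deep Blue's actual program, which ran a far richer evaluation function on specialized hardware.

```python
# A toy sketch of game-tree search: minimax with alpha-beta pruning.
# The tree, depth, and scores below are illustrative assumptions,
# not Deep Blue's real evaluation function.

from typing import Union

# A position is either a static score (an int, positive favoring the machine)
# or a list of child positions reachable in one move.
Tree = Union[int, list]

def minimax(node: Tree, maximizing: bool,
            alpha: float = float("-inf"), beta: float = float("inf")) -> float:
    """Return the value of `node` assuming optimal play by both sides."""
    if isinstance(node, int):              # leaf: static evaluation
        return node
    if maximizing:                         # machine to move: pick the best line
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:              # opponent would never allow this line
                break
        return value
    else:                                  # opponent to move: pick our worst line
        value = float("inf")
        for child in node:
            value = min(value, minimax(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

if __name__ == "__main__":
    # A tiny two-ply game: the machine has three moves, the opponent replies.
    game = [[3, 5], [2, 9], [0, -1]]
    print(minimax(game, maximizing=True))  # -> 3
```

Even with pruning, the number of positions grows exponentially with search depth, which is why raw speed mattered so much to Deep Blue, and why a brain that is elegant but slow cannot match it move for move.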

sophiesdad: As I understand it, it is not possible for a human to design a machine that is "smarter-than-human", by definition.

Caledonian: Your understanding is mistaken.

Mr. Caledonian, I'm going to stick by my original statement. Smarter-than-human intelligence will be designed by machines with "around human" intelligence running recursive self-improvement code. It will not start with a human-designed superhuman intelligence; how could a human know what that is? That's why I'm not sure that all the years of thought going into the meaning of morality, etc., are helpful. If it is impossible for humans to understand what superhuman intelligence and the world around it would be like, just relax and go along for the ride. If we're destroyed, we'll never know it. If you're religious, you'll be in heaven (great) or hell (sucks). I agree with Tim Tyler (who must not be drinking as much as when he was commenting on Taubes) that we already have machines that perform many tasks, including design tasks, that would be impractical for humans alone.

Unknown wrote:
As I've stated before, we are all morally obliged to prevent Eliezer from programming an AI.

As Bayesians, educated by Mr. Yudkowsky himself, I think we all know the probability of such an event is quite low. In 2004, in the most moving and intelligent eulogy I have ever read, Mr. Y stated: "When Michael Wilson heard the news, he said: 'We shall have to work faster.' Any similar condolences are welcome. Other condolences are not." Somewhere, some person or group is working faster, but at the Singularity Institute, all the time is being spent on somewhat brilliant and very entertaining writing. I shall continue to read and reflect, for my own enjoyment. But I hope those others I mentioned have Mr. Y's native abilities, because I agree with Woody Allen: "I don't want to achieve immortality through my work. I want to achieve it by not dying."

Interestingly, there are categories of habitual liars (as distinguished from pathological liars) who have no fear of common knowledge whatsoever. They lie in preference to telling the truth and, if caught, suffer no embarrassment or remorse. I once encountered such a person who was telling a story about a Wildlife Officer using an AK-47 assault rifle to kill a grizzly bear in West Virginia. When informed that no wild grizzly has ever been reported east of the continental divide, and that a state agency certainly would not issue a Russian weapon to its officers, the person simply continued, "Yeah, he was firing on full automatic from the hip, and the grizzly would have gotten him if he had only a pistol." See: http://www.answers.com/topic/pseudologia-fantastica-1?cat=health
