Summary: The problem of AI has turned out to be a lot harder than was originally thought. One hypothesis is that the obstacle is not a shortcoming of mathematics or theory, but a limitation in the philosophy of science. This article is a preview of a series of posts that will describe how a minor revision in our understanding of the scientific method can enable further progress by establishing AI as an empirical science.


The field of artificial intelligence has been around for more than fifty years. If one takes an optimistic view of things, it's possible to believe that a lot of progress has been made. A chess program defeated the top-ranked human grandmaster. Robotic cars drove autonomously across 132 miles of Mojave Desert. And Google seems to have made great strides in machine translation, apparently by feeding massive quantities of data to a statistical learning algorithm.

But even as the field has advanced, the horizon has seemed to recede. In some sense the field's successes make its failures all the more conspicuous. The best chess programs are better than any human, but Go is still challenging for computers. Robotic cars can drive across the desert, but they're not ready to share the road with human drivers. And Google is pretty good at translating Spanish to English, but still produces howlers when translating Japanese to English. The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

So what went wrong, and how to move forward? Most mainstream AI researchers are reluctant to provide clear answers to this question, so instead one must read between the lines in the literature. Every new paper in AI implicitly suggests that the research subfield of which it is a part will, if vigorously pursued, lead to dramatic progress towards intelligence. People who study reinforcement learning think the answer is to develop better versions of algorithms like Q-Learning and temporal difference (TD) learning. The researchers behind the IBM Blue Brain project think the answer is to conduct massive neural simulations. For some roboticists, the answer involves the idea of embodiment: since the purpose of the brain is to control the body, to understand intelligence one should build robots, put them in the real world, watch how they behave, notice the problems they encounter, and then try to solve those problems. Practitioners of computer vision believe that since the visual cortex takes up such a huge fraction of total brain volume, the best way to understand general intelligence is to first study vision.
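To make the reinforcement-learning view concrete, here is a generic textbook sketch of tabular Q-learning, whose update rule is itself a temporal-difference method. The environment interface (`reset`, `step`, `actions`) and all parameters are hypothetical placeholders, not any particular researcher's formulation:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn action values purely from reward feedback.

    `env` is assumed to expose reset() -> state,
    step(state, action) -> (next_state, reward, done), and a list `actions`.
    """
    Q = defaultdict(float)  # (state, action) -> estimated long-run value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(state, action)
            # Temporal-difference update: nudge Q toward the bootstrapped target.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

The hope of this research program, as I read it, is that ever-better refinements of updates like the one above will scale up to intelligence.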

Now, I have some sympathy for the views mentioned above. If I had been thinking seriously about AI in the 80s, I would probably have gotten excited about the idea of reinforcement learning. But reinforcement learning is now basically an old idea, as is embodiment (this tradition can be traced back to the seminal papers by Rodney Brooks in the early 90s), and computer vision is almost as old as AI itself. If these avenues really led to some kind of amazing result, it probably would already have been found.

So, dissatisfied with the ideas of my predecessors, I've taken some trouble to develop my own hypothesis regarding the question of how to move forward. And desperate times call for desperate measures: the long failure of AI to live up to its promises suggests that the obstacle is no small thing that can be solved merely by writing down a new algorithm or theorem. What I propose is nothing less than a complete reexamination of our answers to fundamental philosophical questions. What is a scientific theory? What is the real meaning of the scientific method (and why did it take so long for people to figure out the part about empirical verification)? How do we separate science from pseudoscience? What is Ockham's Razor really telling us? Why does physics work so amazingly, terrifyingly well, while fields like economics and nutrition stumble?

Now, my answers to these fundamental questions aren't going to be radical. It all adds up to normality. No one who is up-to-date on topics like information theory, machine learning, and Bayesian statistics will be shocked by what I have to say here. But my answers are slightly different from the traditional ones. And by starting from a slightly different philosophical origin, and following the logical path as it opened up in front of me, I've reached a clearing in the conceptual woods that is bright, beautiful, and silent.

Without getting too far ahead of myself, let me give you a bit of a preview of the ideas I'm going to discuss. One highly relevant issue is the role that other, more mature fields have had in shaping modern AI. One obvious influence comes from computer science, since presumably AI will eventually be built using software. But this fact appears irrelevant to me, and so the influence of computer science on AI seems like a disastrous historical accident. To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood. Another influence, one that should in principle be healthy but in practice isn't, comes from physics. Unfortunately, for the most part, AI researchers have imitated only the superficial appearance of physics - its use of sophisticated mathematics - while ignoring its essential trait, which is its obsession with reality. In my view, AI can and must become a hard, empirical science, in which researchers propose, test, refine, and often discard theories of empirical reality. But theories of AI will not work like theories of physics. We'll see that AI can be considered, in some sense, the epistemological converse of physics. Physics works by using complex deductive reasoning (calculus, differential equations, group theory, etc.) built on top of a minimalist inductive framework (the physical laws). Human intelligence, in contrast, is based on a complex inductive foundation, supplemented by minor deductive operations. In many ways, AI will come to resemble disciplines like botany, zoology, and cartography - fields in which the researchers' core methodological impulse is to go out into the world and write down what they see.

An important aspect of my proposal will be to expand the definitions of the words "scientific theory" and "scientific method". A scientific theory, to me, is a computational tool that can be used to produce reliable predictions, and a scientific method is a process of obtaining good scientific theories. Botany and zoology make reliable predictions, so they must have scientific theories. In contrast to physics, however, they depend far less on the use of controlled experiments. The analogy to human learning is strong: humans achieve the ability to make reliable predictions without conducting controlled experiments. Typically, though, experimental sciences are considered to be far harder, more rigorous, and more quantitative than observational sciences. But I will propose a generalized version of the scientific method, which includes human learning as a special case, and shows how to make observational sciences just as hard, rigorous, and quantitative as physics.
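To give a flavor of how "a computational tool that produces reliable predictions" can be made quantitative, here is a purely illustrative sketch (the coin "theories" and the scoring rule are my stand-ins, not a formal proposal): score competing theories by the probability they assign to what is actually observed, using cumulative log loss.

```python
import math

def log_loss_score(theory, observations):
    """Cumulative negative log-likelihood in bits: lower means the theory
    was a better predictor of what was actually observed.

    `theory` is assumed to map an observation history to a probability
    distribution (a dict) over the next outcome.
    """
    total, history = 0.0, []
    for outcome in observations:
        p = theory(history).get(outcome, 1e-12)  # tiny floor avoids log(0)
        total += -math.log2(p)
        history.append(outcome)
    return total

# Two hypothetical "theories" of a coin that in fact comes up heads often:
fair = lambda hist: {"H": 0.5, "T": 0.5}
biased = lambda hist: {"H": 0.8, "T": 0.2}

data = ["H", "H", "T", "H", "H", "H", "T", "H"]
# The theory matching the data's actual bias earns the lower (better) score.
```

Note that nothing in this evaluation requires a controlled experiment: a purely observational record of outcomes is enough to rank the candidate theories.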

As a result of learning, humans achieve the ability to make fairly good predictions about some types of phenomena. It seems clear that a major component of that predictive power is the ability to transform raw sensory data into abstract perceptions. The photons fall on my eye in a certain pattern which I recognize as a doorknob, allowing me to predict that if I turn the knob, the door will open. So humans are amazingly talented at perception, and modestly good at prediction. Are there any other ingredients necessary for intelligence? My answer is: not really. In particular, in my view humans are terrible at planning. Our decision making algorithm is not much more than: invent a plan, try to predict what will happen based on that plan, and if the prediction seems good, implement the plan. All the "magic" really comes from the ability to make accurate predictions. So a major difference in my approach as opposed to traditional AI is that the emphasis is on prediction through learning and perception, as opposed to planning through logic and deduction.
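The decision-making algorithm described above can be sketched directly. This is a toy rendering of "propose, predict, evaluate"; the `propose_plans`, `predict`, and `score` functions are hypothetical placeholders, and in my view nearly all of the "magic" lives inside `predict`:

```python
def decide(propose_plans, predict, score, n_candidates=10):
    """Choose a plan by generating candidates, predicting each one's
    outcome, and keeping the plan whose predicted outcome scores best."""
    best_plan, best_score = None, float("-inf")
    for plan in propose_plans(n_candidates):
        outcome = predict(plan)  # the learned, perception-driven part
        s = score(outcome)       # a simple desirability check
        if s > best_score:
            best_plan, best_score = plan, s
    return best_plan
```

The planning loop itself is trivial; the quality of the decision is entirely determined by how accurate `predict` is.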

As a final point, I want to note that my proposal is not analogous to or in conflict with theories of brain function like deep belief networks, neural Darwinism, symbol systems, or hierarchical temporal memories. My proposal is like an interface: it specifies the input and the output, but not the implementation. It embodies an immense and multifaceted Question, to which I have no real answer. But, crucially, the Question comes with a rigorous evaluation procedure that allows one to compare candidate answers. Finding those answers will be an awesome challenge, and I hope I can convince some of you to work with me on that challenge.

I am going to post an outline of my proposal over the next couple of weeks. I expect most of you will disagree with most of it, but I hope we can at least identify concretely the points at which our views diverge. I am very interested in feedback and criticism, both regarding material issues (since we reason to argue), and on issues of style and presentation. The ideas are not fundamentally difficult; if you can't understand what I'm saying, I will accept at least three quarters of the blame.


85 comments

Here's an issue of style and presentation: Would you mind editing your text (or your future texts), striving to remove self-reference and cheerleading ("fluff")?

A small number of uses of "I/my" and colorful language ("amazing, terrifying, bright, beautiful, silent, immense, multifaceted") is reasonable, but the discipline of focusing almost entirely on the ideas being discussed helps both you and your readers understand what the ideas actually are.

As far as I can tell, the content of your post is "I will be posting over the next couple of weeks.", and the rest is fluff. Since you did invest some time in writing this post, you must have believed there was more to it. The fluff has either confused you (into believing this post was substantial) or confused me (preventing me from seeing the substantial arguments).

You maybe should have mentioned the earlier discussion of your idea on the open thread, in which I believed I spotted some critical problems with where you're going: you seem to be endorsing a sort of "blank slate" model in that humans have a really good reasoning engine, and the stimuli humans get after birth are sufficient to make all the right inferences.

However, all experimental evidence tells us (cf. Pinker's The Blank Slate) that humans make a significantly smaller set of inferences on our sense data than are logically possible under constraint of Occam's razor; there are grammatical errors that children never make in any language; there are expectations babies all have, at the same time, though none has gathered enough postnatal sense data to justify such inferences, etc.

I conclude that it is fruitless to attempt to find "general intelligence" by looking at what general algorithm would make the inferences human do, given postnatal stimuli. My alternative suggestion is to identify human intelligence as a combination of general reasoning and pre-encoding of environment-specific knowledge that humans do not have to entirely relearn after birth because the b... (read more)

The reason I didn't link to that discussion is that it was kind of tangential to what will be my main points. My goal is to understand the natural setting of the learning problem, not the specifics of how humans solve it.
But you've made assumptions that will keep you from finding that setting. Your approach already commits itself to treating humans as a blank slate. But humans aren't "blank slate with great algorithm"; they're "heavily formatted slate with respectable context-specific algorithm".
Let's postpone this debate until the main points become a bit more clear. I don't think of myself as "treating humans" at all, much less as a blank slate!
Could you at least give some signal of your idea's quality that distinguishes it from the millions with hopeless ideas who scream "You guys are doing it all wrong, I've got something that's just totally different from everything else and will get it right this time"? Because a lot of what you've said so far isn't promising.
Yikes, take it easy. When I said "let's argue", I meant let's argue after I've made some of my main points.
Yes, I read that part of your comment. But having posted on the order of ~1500 words on your idea by now (this article + our past exchange), I still can't find a sign of anything promising, and you've had more than enough space by now to distinguish yourself from the dime-a-dozen folks claiming to have all the answers on AI. I strongly recommend that you look at whatever you have prepared for your next article, and cut it down to about 500 words in which you get straight to the point. LW is a great site because of its frequent comments and articles from people who have assimilated Eliezer Yudkowsky's lessons on rationality; I'd hate to see it turn into a platform for just any AI idea that someone thinks is the greatest ever.
Which will be soon, right?

I'm intrigued and looking forward to reading your articles. I suggest you change your title-writing algorithm, though. To my ears, "Preface to a Proposal for a New Mode of Inquiry" sounds like a softcover edition of a book co-authored by a committee of the five bastard stepchildren of Kant and Kafka.

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

"Computer science is no more about computers than astronomy is about telescopes." -- E. Dijkstra

Dijkstra took a rather narrow view of computer science, though, or maybe he was being a bit tongue-in-cheek here. I think actual computers should influence computer science; for instance, it's crucial for fast algorithms to be smart with respect to CPU cache usage, but many of the 'classical computer science' hash tables are quite bad in that area.
I'm a bit surprised this statement is being upvoted with such apparent admiration here. I've always found it rather inaccurate.

Our decision making algorithm is not much more than: invent a plan, try to predict what will happen based on that plan, and if the prediction seems good, implement the plan. All the "magic" really comes from the ability to make accurate predictions.

You need to locate a reasonable hypothesis before there is any chance for it to be right. A lot of magic is hidden in the "invent a plan".

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

It's been brought up in multiple comments already, but I also wanted to register my disapproval of this statement. The first four minutes of the first SICP video lecture have the best description of computer science that I've ever heard, so I quote:

"The reason that we think computer science is about computers is pretty much the same reason that the Egyptians thought geometry was about surveying instruments, and that is when some field is just getting started and you don't really understand it very well, it's very easy to confuse the essence of what you're doing with the tools that you use...I think in the future, people will look back and say, "well yes, those primitives in the 20th century were fiddling around with these gadgets called 'computers,' but really what they were doing was starting to learn how to formalize intuitions about process: how to do things; starting to develop a way to talk precisely about 'how-to' knowledge, as opposed to geometry that talks about 'what is true.'" - Hal Abelson

That said, I'm looking forward to your upcoming posts.

Yet, OP has a point. In the course of getting a PhD in computer science, I had the requirement or opportunity to study computer hardware architecture, operating system design, compiler design, data structures, databases, graphics, and lots of different computer languages. And none of that stuff was ever relevant to AI - not one page of it. (Even the data structures and databases courses dealt only with data structures inappropriate for AI.) The courses I took in linguistics, neuroscience, mathematics, psychology, and even electrical engineering were all more useful. Other than the specifically AI-oriented courses, I can recall only 2 computer science courses that turned out to be helpful for AI: Algorithm analysis, and computational complexity theory. And the AI courses always seemed out of place in the computer science department. I would not recommend anyone interested in AI to major in computer science. Far too much time wasted on irrelevant subjects. It's difficult to say what they should major in - perhaps neuroscience, or math.

Er, have you given much thought to friendliness?

Anna Salamon once described the Singularity Institute's task as to "discover differential equations before anyone else has discovered algebra". The idea being that writing an AI that will behave predictably according to a set of rules you give it is much more difficult than building an AI that's smart enough to do dangerous stuff. It seems to me that if your ideas about AI are correct, you will be contributing to public knowledge of algebra.

I see that I am caught between a rock and a hard place. To people who think I'm wrong, I'm a crackpot who should be downvoted into oblivion. To people who think I might have something interesting and original to say, I'm helping to bring about the destruction of humanity. To people who think I'm wrong: fine, who cares? Isn't the point of this site to be a forum where relatively well-informed discussions can take place about issues of mutual interest? To people who think I'm bringing about doomsday: if my ideas are substantively right, it's going to take a long time before this stuff gets rolling. It will take a decade just to convince the mainstream scientific establishment. After that, things might speed up, but it's still going to be a long, hard slog. Did I mention I have only a good question, not an answer? Let's all take some deep breaths.
BTW, a potential bias you should be aware of in this situation is the human tendency to be irrationally inclined to go through with things once they've said they're going to do them. (I believe Robert Cialdini's Influence: Science and Practice talks about this.) So you might want to consider self-observing and trying to detect if that bias is having any influence on your thought process. I (and, probably, all of the kind folks at SIAI--although of course I can't speak for them) will completely forgive you if you go back on your public statements on this. Speaking for myself individually, I'd see this as a demonstration of virtue. And just to be a little silly, I'll use another technique from Influence on you: reciprocation. When I read that you didn't think computer science would be fundamental to the development of strong AI, I immediately thought "That can't be right". I had a very strong gut feeling that somehow, computer science must be fundamental to the development of strong AI and I immediately started trying to find a reason for why it was. (It seems Vladimir Nesov's reaction was very similar to mine, and note that he didn't find much of a reason. My guess is his comment's high score is a result of many LW readers sharing his and my gut instinct.) However, I noticed that my mind had entered one of its failure modes (motivated continuation) and I thought to myself "Well, I don't have any solid argument now for why computer science must be fundamental, and there's no real reason for me to look for an argument in favor of that idea instead of an argument against it." So now I've publicly admitted that my gut instinct was unfounded and that my mind is broken; maybe using the Dark Technique of trying to get you to reciprocate will convince you to do the same. :P I believe Eliezer is a member of the school of thought which holds that the intelligence explosion could potentially be triggere

I believe Eliezer is... nine geniuses working together in a basement.

By the nether gods... IT ALL MAKES SENSE NOW

Note that I attacked a flaw in the argument (usage of analogy that assumes that computer science is about computers), and never said anything about the implied conclusion (that computer science is irrelevant for AI). And this does reflect my reaction.
Oh, sorry, I missed that.
I sense this thread has crossed a threshold, beyond which questions and criticisms will multiply faster than they can be answered.
But that is an absurd task, because if you don't understand algebra, you certainly won't be discovering differentiation. Attempting to "discover differential equations before anyone else has discovered algebra" doesn't mean you can skip over discovering algebra, it just means you also have to discover it in addition to discovering DE's. It seems that a more reasonable approach would be a) work towards algebra while simultaneously b) researching and publicizing the potential dangers of unrestrained algebra use (Oops, the metaphor broke.)
To clarify: 'Anna Salamon once described the Singularity Institute's task as to "discover differential equations before anyone who isn't concerned with friendliness has discovered algebra".'
Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI? That the OP shouldn't even work on AI at all, and should dedicate his efforts to advocating friendly AI discussion and research instead? If a major current barrier to FAI is understanding how intelligence even works to begin with, then this preliminary work (if it is useful) is going to be a necessary component to both regular AGI and FAI. Is the only problem you see, then, that it's going to be made publicly available? Perhaps we should establish a private section of LW for Top Secret AI discussion? I apologize for being snarky, but I can't help but find it absurd that we should be worrying about the effects of LW articles on unfriendly singularity, especially given that the hard takeoff model, to my knowledge, is still rather fuzzy. (Last I checked, Robin Hanson put the probability of hard takeoff at less than 1%. Unfriendly singularity is so bad an outcome that research and discussion about hard takeoff is warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)
And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (i.e., have already invested years of their lives in becoming AGI researchers), Robin Hanson is on one extreme end of the continuum of opinion on the subject. Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI? If the OP wishes to make a career in AGI research, he can do so responsibly by affiliating himself with SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI. They will probably share their insights with him only after a lengthy probationary period during which they vigorously check him for signs that he might do something irresponsible once they have taken him into their confidence. (ADDED. If it were me, I would look mainly for signs that the candidate might make a choice which tends to have a bad effect on the global situation, but a positive effect on his or her scientific reputation or on some other personal agenda that humans typically care about.) And they will probably share their insights with him only after he has made a commitment to stay with the group for life.
I don't buy that that's a good approach, though. This seems more like security through obscurity to me: keep all the work hidden, and hope that it's both a) on the right track and b) that no one else stumbles upon it. If, on the other hand, AI discussion did take place on LW, then that gives us a chance to frame the discussion and ensure that FAI is always a central concern. People here are fond of saying "people are crazy, the world is mad," which is sadly true. But friendliness is too important an issue for SIAI and the community surrounding it to set itself up as stewards of humanity; every effort needs to be made to bring this issue to the forefront of mainstream AI research.
I agree, which is why I wrote, "SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI". If, for some reason, the OP does not wish to or is not able to join one of the existing responsible groups, he can start his own. In security through obscurity, a group relies on a practice they have invented and kept secret when they could have chosen instead to adopt a practice that has the benefit of peer review and more testing against reality. Well, yeah, if there exists a practice that has already been tested extensively against reality and undergone extensive peer review, then the responsible AGI groups should adopt it -- but there is no practice like that for solving this particular problem. There are no good historical examples of the current situation with AGI, but the body of practice with the most direct applicability that I can think of right now is the situation during and after WW II in which the big military powers mounted vigorous systematic campaigns that lasted for decades to restrict the dissemination of certain kinds of scientific and technical knowledge. Let me note that in the U.S. this campaign included the requirement for decades that vendors of high-end computer hardware and machine tools obtain permission from the Commerce Department before exporting any products to the Soviets and their allies. Before WW II, other factors (like wealth and the will to continue to fight) besides scientific and technical knowledge dominated the list of factors that decided military outcomes. Note the current plan of the SIAI for what the AGI should do after it is created is to be guided by an "extrapolation" that gives equal weight to the wishes or "volition" of every single human living at the time of the creation of the AGI, which IMHO goes a very long way to alleviating any legitimate concerns of people who cannot join one of the responsible AGI groups.
I didn't realize that. Have there been surveys to establish that Robin's view is extreme?
In discussions on Overcoming Bias during the last 3 years, before and after LW spun off of Overcoming Bias, most people voicing opinions backed by actual reasoning voiced opinions that assigned a higher probability to a hard take-off given that a self-improving AGI is created than Robin. In the spirit of impartial search for the truth, I will note that rwallace on LW advocates not worrying about unFriendly AI, but I think he has invested years becoming an AGI researcher. Katja Grace is another who thinks hard take-off very unlikely and has actual reasoning on her blog to that effect. She has not invested any time becoming an AGI researcher and has lived for a time at Benton Street as a Visiting Fellow and in the Washington, D.C., area where she traveled with the express purpose of learning from Robin Hanson. All the full-time employees and volunteers of SIAI that I know of assign much more probability to hard take-off (given AGI) than Robin does. At a workshop following last year's Singularity Summit, every attendee expressed the wish that brain emulation would arrive before AGI. I get the definite impression that those wishes stem mainly from fears of hard takeoff, and not from optimism about brain emulation per se. In the spirit of impartial search for truth, I note that SIAI employees and volunteers probably chose the attendee list of this workshop.
I'm not convinced that "full-time employees and volunteers of SIAI" are representative of "writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer", even when weighted by level of rationality. I'm under the vague impression that Daniel Dennett and Douglas Hofstadter are skeptical about hard take-off. Do you know whether that impression is correct? ETA: . . . or is there a reason to exclude them from the relevant class of writers?
No, I know of no reason to exclude Douglas Hofstadter from the relevant class of writers though his writings on the topic that I have seen are IMO not very good. Dennett has shown abundant signs of high skill at general rationality, but I do not know if he has done the necessary reading to have an informed probability of hard take-off. But to get to your question, I do not know anything about Dennett's opinions about hard take-off. (But I'd rather talk of the magnitude of the (negative) expected utility of the bad effects of AGI research than about "hard take-off" specifically.) Add Bill Joy to the list of people very worried about the possibility that AI research will destroy civilization. He wrote of it in an influential piece in Wired in 2000. (And Peter Thiel, if his donations to SIAI mean what I think they mean.) Note that unlike those who have invested a lot of labor in SIAI, and consequently who stand to gain in prestige if SIAI or SIAI's area of interest gains in prestige or importance, Bill Joy has nothing personal to gain from holding the opinion he holds. Neither do I, BTW: I applied to become a visiting fellow at SIAI last year and was turned down in such a way that made it plain that the decision was probably permanent and probably would not be revisited next year. Then I volunteered to work at SIAI at no cost to SIAI and was again turned down. (((ADDED. I should rephrase that: although SIAI is friendly and open and has loose affiliations with very many people (including myself) my discussions with SIAI have left me with the impression that I will probably not be working closely enough with SIAI at any point in the future for an increase in SIAI's prestige (or income for that matter) to rub off on me.))) I would rather have not disclosed that in public, but I think it is important to give another example of a person who has no short-term personal stake in the matter who thinks that AGI research is really dangerous. Also, it makes people more likely to
Please expand on your reasons for thinking AGI is a serious risk within the next 60 years or so.
Hmmm... I have absolutely no knowledge of the politics involved in this, but it sounds intriguing.... could you elaborate on this a bit more?
BTW I have added a sentence of clarification to my comment. All I am going to say in reply to your question is that the policy that seems to work best in the part of the world in which I live (California) is to apply to participate in any educational program one would like to participate in and to join every outfit one would like to join, and to interpret the rejection of such an application as neither a reflection on one's value as a person nor the result of the operation of "politics".
Nope, that's all from me. Thanks for your thorough reply :). (My question was just about the meta-level claim about expert consensus, not the object level claim that there will be a hard take-off.)
Also, people who believe hard takeoff is plausible are more likely to want to work with SIAI, and people at SIAI will probably have heard the pro-hard-takeoff arguments more than the anti-hard-takeoff arguments. That said, <1% is as far as I can tell a clear outlier among those who have thought seriously about the issue.
When Robin visited Benton house and the 1% figure was brought up, he was skeptical that he had ever made such a claim. Do you know where that estimate came from (on OB or wherever)? I'm worried about ascribing incorrect probability estimates to people who are fully able to give new ones if we asked.
Off-topic question: Is Benton house the same as the SIAI house? (I see that it is in the Bay Area.) Edit: Thanks Nick and Kevin!
The people living there seem to call it Benton house or Benton but I try to avoid calling it that to most people because it is clearly confusing. It'll be even more confusing if the SIAI house moves from Benton Street...
Are you sure this wasn't a worry at all due to the fact that even without hard take-off moderately smart unFriendly AI can do a lot of damage?
Well, the question prompting the discussion was whether a responsible AGI researcher should just publish his or her results (and let us for the sake of this dialog define a "result" as an idea that took a long time to identify, even though it might not pan out) for any old AGI researcher to see, or whether he or she should take care to control as best he or she can the dissemination of those results, so that the rate of dissemination to responsible researchers is optimized relative to the rate of dissemination to irresponsible ones. If an unFriendly AI can do a lot of damage without hard take-off, well, I humbly suggest he or she should take pains to control dissemination. But to answer your question in case you are asking out of curiosity rather than to forward the discussion on "controlled dissemination": well, Eliezer certainly thinks hard take-off represents the majority of the negative expected utility, and if the other (2) attendees of the workshop that I have had long conversations with felt differently, I would have learned of that by now more likely than not. (I, too, believe that hard take-off represents the majority of the negative expected utility even when utility is defined the "popular" way rather than the rather outre way I define it.)
Yes, this was a question out of curiosity about the responses, not specifically in regard to the issue of controlled dissemination.
For rational people skeptical about hard takeoff, consider the Interim Report from the Panel Chairs, AAAI Presidential Panel on Long-Term AI Futures. Most economists I've talked to are also quite skeptical, much more so than I. Dismissing such folks because they haven't read enough of your writings or attended your events seems a bit biased to me.
"The panel of experts was overall skeptical of the radical views expressed by futurists and science-fiction authors. Participants reviewed prior writings and thinking about the possibility of an “intelligence explosion” where computers one day begin designing computers that are more intelligent than themselves. They also reviewed efforts to develop principles for guiding the behavior of autonomous and semi-autonomous systems. Some of the prior and ongoing research on the latter can be viewed by people familiar with Isaac Asimov's Robot Series as formalization and study of behavioral controls akin to Asimov’s Laws of Robotics. There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems."
Hi Robin! If a professional philosopher or an economist gives his probability that AGI researchers will destroy the world, I think a curious inquirer should check for evidence that the philosopher or economist has actually learned the basics of the skills and domains of knowledge the AGI researchers are likely to use. I am pretty sure that you have, but I do not know that, e.g., Daniel Dennett has, excellent rationalist though he is. All I was saying is that my interlocutor should check that before deciding how much weight to give Dennett's probability.
But in the above you explicitly choose to exclude AGI researchers. Now you also want to exclude those who haven't read a lot about AGI? Seems like you are trying to exclude as irrelevant everyone who isn't an AGI amateur like you.
I guess it depends where exactly you set the threshold. Require too much knowledge and the pool of opinions, and the diversity of the sources of those opinions, will be too small (i.e., just "AGI amateurs"). On the other hand, the minimum amount of research required to properly understand the AGI issue is substantial, and if someone demonstrates a serious lack of understanding, such as claiming that AI will never be able to do something that narrow AIs can do already, then I have no problem excluding their opinion.
About advanced AI being developed, extremely rapid economic growth upon development, or local gains?
Now that you mention it, I didn't have any opinion about whether Eliezer et al. had secret ideas about AI. My tentative assumption is that they hadn't gotten far enough to have anything worth keeping secret, but this is completely a guess based on very little.
Lots of guesswork.
If the probability of hard takeoff were 0.1%, that would still be too high a probability for me to want there to be public discussion of how one might build an AI.
I don't get it. Are you saying a smart, dangerous AI can't be simple and predictable? Differential equations are made of algebra, so did she mean the task is impossible? You were replying to my post, right?
Probably not simple. The point is that for it to be predictable, you'd need a very high level of knowledge about it. More than the amount necessary to build it.
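A toy illustration of the "simple but not predictable" point (my own example, not from the thread): the logistic map is a one-line rule built from ordinary arithmetic, yet at r = 4 it is chaotic, so two all-but-identical starting points soon disagree completely, and knowing the rule is not enough to predict the system without essentially perfect knowledge of its state.

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1 - x)

a, b = 0.4, 0.4 + 1e-12      # initial conditions differing by one part in 10^12
diverged = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    diverged = max(diverged, abs(a - b))

print(diverged > 0.1)        # True: the two trajectories have drifted far apart
```

The rule is trivially simple to write down; predicting it over long horizons requires knowing the initial state to a precision far beyond anything needed to implement it.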

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

Something of a jarring note in an otherwise interesting post (I'm at least curious to see the follow-up), in that you are a) reasoning by analogy and b) picking the wrong one: the usual story about music is that it begins with plucked strings and that the study of string resonance modes gave rise to the theories of tuning and harmony.

I have separate reasons for believing that CS is a bad influence (the analogy is an illustration, not an argument). Basically, CS is a mix of theory and engineering with very little empirical science mixed in.

I think I understand better now.

Your proposal seems to involve throwing out "sophisticated mathematics" in favor of something else more practical, and probably more complex. You can't do that. Math always wins.

The problem with math is that it's too powerful: it describes everything, including everything you're not interested in. In theory, all you need to make an AI is a few Turing machines to simulate reality and Bayes theorem to pick the right ones. In practice this AI would take an eternity to run. Turing machines live in a world of 0s and 1s...
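The intractability can be made concrete with a toy sketch (my own illustration, treating "programs" as plain bitstrings): a Solomonoff-style predictor would have to run every candidate program and weight it by 2^(-length), but merely counting the candidates already blows up exponentially.

```python
from itertools import product

def count_programs(max_len):
    """Count every bitstring 'program' of length 1..max_len.  A Solomonoff-style
    predictor would have to run each one and weight it by 2**(-length); here we
    only count candidates, which already grows as 2**(max_len + 1) - 2."""
    return sum(1 for n in range(1, max_len + 1) for _ in product("01", repeat=n))

print([count_programs(n) for n in (4, 8, 16)])  # [30, 510, 131070]
```

At lengths anywhere near those of interesting programs, the enumeration is hopeless, which is the sense in which the theoretically sufficient recipe "takes an eternity to run."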

I am not, of course, against mathematics per se. But the reason math is used in physics is because it describes reality. All too often in AI and computer vision, math seems to be used because it's impressive. Obviously, in fields like physics math is very, very useful. In other cases, it's better to just go out and write down what you see. So cartographers make maps, zoologists write field guides, and linguists write dictionaries. Why a priori should we prefer one epistemological scheme to another?
I'd find it much more impressive if you could do anything useful in AI or computer vision without math.
What else is there to see besides humans?
Paperclips. Also, paperclip makers. And paperclip maker makers. And paperclip maker maker makers. And stuff for maintaining paperclip maker maker makers.
And paper?
Maybe.

I am unsure whether this is LW material. There are plenty of people with ideas about AI and it tends to generate more heat than light, from my experience. I'll reserve judgement though, since there is a need for a place to discuss things.

First I agree with the need to take AI in different directions.

However, I'm sceptical of the Input-Output view of intelligence. Humans aren't pure functions that always map the same input to the same output; their output depends on their history as well. So even if you have a system that corresponds with what a human does for the time...

"(and why did it take so long for people to figure out the part about empirical verification)?"

Most of the immediate progress after the advent of empiricism was about engineering more than science. I think the biggest hurdle wasn't lack of understanding of the importance of empirical verification, but lack of understanding of human biases.

Early scientists just assumed that they were either unbiased or that their biases wouldn't affect the data. They had no idea of the power of expectation and selection biases, placebo effects, etc. It wasn't ...

[anonymous], 12y:

Have you heard of the methodology proposed by cyberneticists and systems engineers and how is it similar or different from what you are proposing?

Edited for diplomacy/clarity.

[This comment is no longer endorsed by its author]

So... what's your proposal?

I am going to post an outline of my proposal over the next couple of weeks. I expect most of you will disagree with most of it, but I hope we can at least identify concretely the points at which our views diverge. I am very interested in feedback and criticism, both regarding material issues (since we reason to argue), and on issues of style and presentation. The ideas are not fundamentally difficult; if you can't understand what I'm saying, I will accept at least three quarters of the blame.

Aw come on, just one little hint?...

Thoughts I found interesting:

The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

Interesting because I don't think it's true. I think the problem is more about the need of AI builders to show results. Providing a solution (or a partial solution or a path to a solution) in a narrow context is a way to do that when your tools aren't yet powerful enough for more general or mixed approaches. Given the v...

I work in machine translation research. Google might have a little more data, but there are several groups doing equally good work.

This sounds really good and interesting, and is well written, but it also sounds incredibly ambitious. Maybe a little more conservative presentation would be more convincing for me.

Human intelligence, in contrast, is based on a complex inductive foundation, supplemented by minor deductive operations.

You'd be hard-pressed to formalize this statement, since any notion of "induction" can find a deductive conceptualization.

I will formalize it. I don't know what your second statement means; to me induction and deduction are completely different. 2+2=4 is a deductive statement, provably true within the context of a certain formal system. "Mars is red" is an inductive statement, it can't be derived from some larger theory; we believe it because of empirical evidence.
That's not an example of a non-trivial induction, since you're talking about a set with only one element. A truly inductive statement says something about a larger set of things where we don't have the relevant empirical data about each single one of them. And once you start formalizing a procedure for non-trivial induction, the boundary between induction and deduction becomes very blurry indeed.
Maybe an example will clarify the issue. Compare general relativity to a world atlas. Both are computational tools that enable predictions, so both are, by my definition, scientific theories. Now GR is very complex deductively (it relies on complex mathematics), but very simple parametrically (it uses only a couple of constants). The world atlas is the opposite - simple deductively but complex parametrically (requires a lot of bits to specify).
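The deductive-versus-parametric tradeoff can be shown with a toy description-length comparison (the formula and numbers below are illustrative placeholders, not real physics or a real atlas): both "theories" yield the same predictions, but one spends its bits on structure and the other on parameters.

```python
import json

# GR-like theory: deductively complex, parametrically simple.
# A short program text with just two numeric constants.
formula_src = "def predict(x): return 0.5 * 9.81 * x ** 2"

# Atlas-like theory: deductively trivial, parametrically complex.
# The same predictions stored point by point as a lookup table.
table = {x: 0.5 * 9.81 * x ** 2 for x in range(1000)}
table_src = json.dumps(table)

print(len(formula_src) < len(table_src))  # True: the formula is far more compact
```

Neither encoding is "wrong"; which one is preferable depends on the domain, which is the point of the GR-versus-atlas comparison.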
I trust you've read the discussions and articles regarding the status of purported "a priori" knowledge, then? If not, I have reason to suspect your ideas will not appear informed and will thus not yield insight.

One thing the world has is an abundance of human minds. We actually do not need machines that think like humans - we have humans. What we need is two other things: machines that do thinking that humans find difficult (like the big number crunchers) and one-off machines that are experimental proofs-of-concept for understanding how a human brain works (like Blue Brain). As for getting the glory for doing what many said was impossible and unveiling a mechanical human-like intelligence: forget the glory, because they will just move the goal posts.

I believe ...

One thing the world has is an abundance of human minds. We actually do not need machines that think like humans - we have humans.

Machines for doing dangerous and monotonous work which requires human or near-human levels of perception and judgment such as mining or driving trucks would have a clear utility, even though they'd just be machines that think (somewhat) like humans and would neither do superhuman feats of cognition nor advance the understanding of the mind design space.

We have an abundance of ordinary human minds. We don't have an abundance of genius human minds. For all I know, machines that thought like Shakespeare or Mill or Newton could be a godsend.
One can make a case that genius is precisely the degree to which one does not think like a human mind (at least in a more useful and/or beautiful way).
Depends how broadly you're drawing the line around the 'human mind' concept. I'd say that since Shakespeare, Mill and Newton's minds were all human minds, that's a prima facie case for saying they think like humans.
Well I'd agree we don't want exact human clones. But then the majority of people don't want the complex-to-use computers we have at the moment. Moving from serial to parallel won't make the computer any easier to use or reduce the learning burden on the user. The beauty of interacting with a human is that you don't need to know the fine details of how it works on the inside to get it to do what you want, even if it didn't have the ability to do the task previously. This aspect of the human brain would be very beneficial if we can get computers to have it (assuming it doesn't lead to a negative singularity, extinction of the human race, etc.).
An AI that acts like people? I wouldn't buy that. It sounds creepy. Like Clippy with a soul.
I didn't say acts like people. I said had one aspect of humans (and dogs or other trainable animals for that matter). We don't need to add all the other aspects to make it act like a human.