Dreams of AI Design

The following is a public service announcement for all Overcoming Bias readers who may be thinking of trying to construct a real AI.

AI IS HARD. IT'S REALLY FRICKING HARD. IF YOU ARE NOT WILLING TO TRY TO DO THINGS THAT ARE REALLY FRICKING HARD THEN YOU SHOULD NOT BE WORKING ON AI. You know how hard it is to build a successful Internet startup? You know how hard it is to become a published author? You know how hard it is to make a billion dollars? Now compare the number of successful startups, successful authors, and billionaires, to the number of successful designs for a strong AI. IT'S REALLY FRICKING HARD. So if you want to even take a shot at it, accept that you're going to have to do things that DON'T SOUND EASY, like UNDERSTAND FRICKING INTELLIGENCE, and hold yourself to standards that are UNCOMFORTABLY high. You have got to LEVEL UP to take on this dragon.

Thank you. This concludes the public service announcement.

Robin, whole brain emulation might be physically possible, but I wouldn't advise putting venture capital into a project to build a flying machine by emulating a bird. Also there's the destroy-the-world issue if you don't know what you're doing.

"Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies."

Good analogy.

Ben does outside research projects like OpenCog, since he knows the field and has the connections, and is titled "Research Director". I bear responsibility for SIAI in-house research, and am titled "Research Fellow" because I helped found SIAI and I consider it nobler not to give myself grand titles like Grand Poobah.

Silas, I would confidently say, "Oh hell no, the last thing we need right now is a Manhattan Project. Give me $5 million/year to spend on 10 promising researchers and 10 promising students, and maybe $5 million/year to spend on outside projects that might help, and then go away. If you're lucky we'll be ready to start coding in less than a decade."

Retired, are you signed up for cryonics?

No, I don't have 20 people in mind. And I don't need that full amount, it's just the most I can presently imagine myself managing to use.

JB, ditched Flare years ago.

Aron, if I knew what code to write, I would be writing it right now. So I'm working on the "knowing" part. I don't think AGI is hardware-constrained at all - it would be a tremendous challenge just to properly use one billion operations per second, rather than throwing most of it away into inefficient algorithms.

Simply mimicking the human brain in an attempt to produce intelligence is akin to scavenging code off the web to write a program. Will you understand well how the program works? No. Will it work? If you can hack it, very possibly.

It seems that we're only just beginning to learn how to hack nature. Personally, I'd say it's a much more likely way to AI than deliberate design. But that may be just because I don't think humans are collectively that bright.

AGI may be hard, but narrow AI isn't necessarily. How many OB readers care about real AI vs. just improving their rationality? It's not that straightforward to demonstrate to the common reader how these two are related.

Realizing the points you make in this post about AI is just level 10 out of 200 or so. It's somewhat disappointing that you actually have to even bother talking about it, because this should have been realized by everyone back in 1956, or at least in 1970, after the first round of failures. (Marvin Minsky, why weren't you pushing that point back then?) But is it bad that I sort of like how most people are confused nowadays? Conflicting emotions on this one.

Whole brain emulation -- hm, sounds like some single human being gets to be godlike first, then. Who do we pick for this? The Dalai Lama? Barack Obama? Is worrying about this a perennial topic of intellectual masturbation? Maybe.

Previous comment of mine contains an error. Apparently Eliezer_1996 did go on the record as saying that, given a hundred million dollars per year, he would have a "reasonable chance" of doing it in nine years. He was thinking of brute-forcing AI via Manhattan Project and heuristic soup, and wasn't thinking about Friendliness at all, but still.

I would start by assuming away the "initially nice people" problem and ask if the "stability under enhancement" problem was solvable. If it was, then I'd add the "initially nice person" problem back in.

So start with artificial stupidity. Stupidity is plentiful and ubiquitous - it follows that it should be easy for us to reproduce.

As it happens, we've made far more progress making computer programs that can 'think' as well as insects than can think like humans. So start with insects first, and work your way up from there.

Eliezer, if designing planes had turned out to be "really fricking hard" enough, requiring "uncomfortably high standards" that mere mortals shouldn't bother attempting, humans might well have flown first by emulating birds. Whole brain emulation should be doable within about a half century, so another approach to AI will succeed first only if it is not really, really fricking hard.

Re: Simply mimicking the human brain in an attempt to produce intelligence is akin to scavenging code off the web to write a program. Will you understand well how the program works? No. Will it work? If you can hack it, very possibly.

Except that this is undocumented spaghetti code which comes with no manual, is written in a language for which you have no interpreter, was built by a genetic algorithm, and is constructed so that it disintegrates.

The prospective hacker needs to be more than brave, they need to have no idea that other approaches are possible.

If you meet someone who says that their AI will do XYZ just like humans ... Say to them rather: "I'm sorry, I've never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example."

This seems the wrong attitude toward someone who proposes to pursue AI via whole brain emulation. You might say that approach is too hard, or the time is not right, or that another approach will do better or earlier. But whole brain emulation hardly relies on vague analogies to human brains - it would be directly making use of their abilities.

If a creature engages in goal-directed activity, then I call it intelligent. If by "having said goal" you mean "consciously intends it", then I regard the faculties for consciously intending things as a more sophisticated means for aiming at goals. If intercepting the ball is characterized (not defined) as "not intelligent", that is true relative to some other goal that supersedes it.

I'm basically asserting that the physical evolution of a system towards a goal, in the context of an environment, is what is meant when one distinguishes something that is "intelligent" from something (say, a bottle) that is not. Here, it is important to define "goal" and "environment" very broadly.

Of course, people constantly use the word "intelligence" to mean something more complicated, and higher-level. So, someone might say that a human is definitely "intelligent", and maybe a chimp, but definitely not a fly. Well, I think that usage is a mistake, because this is a matter of degree. I'm saying that a fly has the "I" in "AI", just to a lesser degree than a human. One might argue that the fly doesn't make plans, or use tools, or any number of accessories to intelligence, but I see those faculties as upgrades that raise the degree of intelligence, rather than defining it.

Before you start thinking about "minds" and "cognition", you've got to think about machinery in general. When machinery acquires self-direction (implying something toward which it is directed), a qualitative line is crossed. When machinery acquires faculties or techniques that improve self-direction, I think that is more appropriately considered quantitative.

I define intelligence much more generally. I think that an entity is intelligent to the extent that it is able to engage in goal-directed activity, with respect to some environment. By this definition, fish and insects are intelligent. Humans, more so. "Environment" can be as general as you like. For example, it can include the temporal dimension. Or it might be digital. A machine that can detect a rolling ball, compute its path, and intercept it is intelligent.
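The ball-interception example can be made concrete with a small sketch. This is purely illustrative (the functions, numbers, and constant-velocity assumption are mine, not the commenter's): the "machine" extrapolates the ball's path and finds the earliest moment an interceptor of fixed speed can reach it.

```python
# Hedged sketch of a "minimally intelligent" interceptor.
# Assumptions (mine): the ball moves in a straight line at constant
# velocity, and the interceptor starts at the origin with fixed speed.

def predict_ball(pos, vel, t):
    """Extrapolate the ball's position t seconds ahead."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def intercept_time(ball_pos, ball_vel, speed, dt=0.01, horizon=100.0):
    """Earliest time at which an interceptor moving at `speed` from the
    origin can reach the ball's predicted position (brute-force scan)."""
    t = 0.0
    while t < horizon:
        bx, by = predict_ball(ball_pos, ball_vel, t)
        if (bx ** 2 + by ** 2) ** 0.5 <= speed * t:
            return t
        t += dt
    return None

# Ball at (10, 0) rolling toward the origin at unit speed; an
# interceptor twice as fast catches it at roughly t = 10/3.
t = intercept_time((10.0, 0.0), (-1.0, 0.0), speed=2.0)
```

On this definition the program is (very slightly) intelligent: it models its environment well enough to direct action toward a goal, with no planning, tools, or consciousness involved.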

Aspects of human intelligence, such as language, and the ability to model novel environments, serve the end of goal-directed activity. I think the first-person view ('consciousness') is "real", but it is also subservient to the end of goal-directed activity. I think that as definitions go, one has got to start there, and build up and out. As Caledonian points out, this could also apply to construction plans.

EY: "Give me $5 million/year to spend on 10 promising researchers and 10 promising students, and maybe $5 million/year to spend on outside projects that might help, and then go away. If you're lucky we'll be ready to start coding in less than a decade."

I am contacting the SIAI today to see whether they have some role I can play. If my math is correct, you need $100 million, and 20 selected individuals. If the money became available, do you have the individuals in mind? Would they do it?

I'll be 72 in 10 years when the coding starts; how long will that take? Altruism be damned, remember my favorite quote: "I don't want to achieve immortality through my work. I want to achieve it through not dying." (W. Allen)

Roko, Ben thought he could do it in a few years, and still thinks so now. I was not working with Ben on AI, then or now, and I didn't think I could do it in a few years, then or now. I made mistakes in my wild and reckless youth but that was not one of them.

[Correction: Moshe Looks points out that in 1996, "Staring into the Singularity", I claimed that it ought to be possible to get to the Singularity by 2005, which I thought I would have a reasonable chance of doing given a hundred million dollars per year. This claim was for brute-forcing AI via Manhattan Project, before I had any concept of Friendly AI. And I do think that Ben Goertzel generally sounds a bit more optimistic and reassuring about his AI project getting to general intelligence in on the order of five years given decent funding. Nonetheless, the statement above is wrong. Apparently this statement was so out of character for my modern self that I simply have no memory of ever making it, an interesting but not surprising observation - there's a reason I talk about Eliezer_1996 like he was a different person. It should also be mentioned that I do assess a thought-worthy chance of AI showing up in five years, though probably not Friendly. But this doesn't reflect the problem being easy, it reflects me trying to widen my confidence intervals.]

Zubon, the thought has tormented me for quite a while that if scientific progress continued at exactly the current rate, then it probably wouldn't be more than 100 years before Friendly AI was a six-month project for one grad student. But you see, those six months are not the hard part of the work. That's never the really hard part of the work. Scientific progress is the really fricking hard part of the work. But this is rarely appreciated, because most people don't work on that, and only apply existing techniques - that's their only referent for "hard" or "easy", and scientific progress isn't a thought that occurs to them, really. Which also goes for the majority of AGI wannabes - they think in terms of hard or easy techniques to apply, just like they think in terms of cheap or expensive hardware; the notion of hard or easy scientific problems-of-understanding to solve, does not appear anywhere on their gameboard. Scientific problems are either already solved, or clearly much too difficult for anyone to solve; so we'll have to deal with the problem using a technique we already understand, or an understandable technology that seems to be progressing, like whole brain emulation or parallel programming.

These are not the important things, and they are not the gap that separates you from the imaginary grad student of 100 years hence. That gap is made out of mysteries, and you cross it by dissolving them.

Peter, human brains are somewhat unstable even operating in ancestral parameters. Yes, you run into a different class of problems with uploading. And unlike FAI, there is a nonzero chance of full success even if you don't use exact math for everything. But there are still problems.

Eliezer, I suspect that was rhetorical. However, top algorithms that avoid overtraining can benefit from adding model parameters (though with massively decreasing returns of scale). There are top-tier Monte Carlo algorithms that take weeks to converge, and if you gave them years and more parameters they'd do better (if slightly). It may ultimately prove to be a non-zero advantage for those who have the algorithmic expertise and the hardware advantage, particularly in a contest where people are fighting for very small quantitative differences. I mentioned this for Dan's benefit and didn't intend to connect it directly to strong AI.

I'm not imagining a scenario where someone in a lab is handed a computer that runs at 1 exaflop and this person throws a stacked RBM on there and then finally has a friend. However, I am encouraged by the steps that Nvidia and AMD have taken towards scientific computing and Intel (though behind) is simultaneously headed the same direction. Suddenly we may have a situation where for commodity prices, applications can be built that do phenomenally interesting things in video and audio processing (and others I'm unaware of). These applications aren't semantic powerhouses of abstraction, but they are undeniably more AI-like than what came before, utilizing statistical inferences and deep parallelization. Along the way we learn the basic nuts and bolts engineering basics of how to distribute work among different hardware architectures, code in parallel, develop reusable libraries and frameworks, etc.

If we take for granted that strong AI is so fricking hard we can't get there in one step, we have to start looking at what steps we can take today that are productive. That's what I'd really love to see your brain examine: the logical path to take. If we find a killer application today along the lines above, then we'll have a lot more people talking about activation functions and log probabilities. In contrast, the progress of hardware from 2001-2006 was pretty disappointing (to me at least) outside of the graphics domain.

As my name has come up in this thread I thought I'd briefly chime in. I do believe it's reasonably likely that a human-level AGI could be created in a period of, let's say, 3-10 years, based on the OpenCogPrime design (see http://opencog.org/wiki/OpenCog_Prime). I don't claim any kind of certitude about this, it's just my best judgment at the moment.

So far as I can recall, all projections I have ever made about the potential of my own work to lead to human-level (or greater) AGI have been couched in terms of what could be achieved if an adequate level of funding were provided for the work. A prior project of mine, Webmind, was well-funded for a brief period, but my Novamente project (http://novamente.net) never has been, and nor is OpenCogPrime ... yet.

Whether others involved in OpenCogPrime work agree closely with my predictive estimates is really beside the point to me: some agree more closely than others. We are involved in doing technical research and engineering work according to a well-defined plan (aimed explicitly at AGI at the human level and beyond), and the important thing is knowing what needs to be done, not knowing exactly how long it will take. (If I found out my time estimate were off by a factor of 5, I'd still consider the work roughly equally worthwhile. If I found out it were off by a factor of 10, that would give me pause, and I would seriously consider devoting my efforts to developing some sort of brain scanning technology, or quantum computing hardware, or some totally different sort of AGI design.)

I do not have a mathematical proof that the OpenCogPrime design will work for human-level AGI at all, nor a rigorous calculation to support my time-estimate. I have discussed the relevant issues with many smart, knowledgeable people, but ultimately, as with any cutting-edge research project, there is a lot of uncertainty here.

I really do not think that my subjective estimate about the viability of the OpenCogPrime AGI design is based on any kind of simple cognitive error. It could be a mistake, but it's not a naive or stupid mistake!

In order to effectively verify or dispute my hypothesis that the OpenCogPrime design (or the Novamente Cognition Engine design: they're similar but not identical) is adequate for human-level AGI, with a reasonable level of certitude, Manhattan Project level funding would not be required. US $10M per year for a decade would be ample; and if things were done very carefully without too much bad luck, we might be able to move the project full-speed-ahead on as little as US $1.5 M per year, and achieve amazing results within as little as 3 years.

Hell, we might be able to get to the end goal without ANY funding, based on the volunteer efforts of open-source AI developers, though this seems a particularly difficult path, and I think the best course will be to complement these much-valued volunteer efforts with funded effort.

Anyway, a number of us are working actively on the OpenCogPrime project now (some funded by SIAI, some by Novamente LLC, some as volunteers) even without an overall "adequate" level of funding, and we're making real progress, though not as much as we'd like.

Regarding my role with SIAI: as Eliezer stated in this thread, he and I have not been working closely together so far. I was invited into SIAI to, roughly speaking, develop a separate AGI research programme which complements Eliezer's but is still copacetic with SIAI's overall mission. So far the main thing I have done in this regard is to develop the open-source OpenCog (http://opencog.org) AGI software project of which OpenCogPrime is a subset.

I will, but it looks from your blog like you're already talking to Michael Vassar. I broadcast to the world, Vassar handles personal networking.

Eliezer, what destroy-the-world issues do you see resulting from whole brain emulation? I see risks that the world will be dominated by intelligences that I don't like, but nothing that resembles tiling the universe with smiley faces.

Roko has a point there.

I like "AI IS HARD. IT'S REALLY FRICKING HARD." But that is an argument that could cut you in several ways. Anything that has never been done is really hard. Can you tell those degrees of really hard beforehand? 105 years ago, airplanes were really hard; today, most of us could get the fundamentals of designing one with a bit of effort. The problem of human flight has not changed, but its perceived difficulty has. Is AI that kind of problem, the one that is really hard until suddenly it is not, and everyone will have a half-dozen AIs around the house in fifty years? Is AI hard like time travel? Like unaided human flight? Like proving Fermat's Last Theorem?

It seems like those CAPS will turn on you at some point in the discussion.


But unless you use an actual human brain for your AI, you're still just creating a model that works in some way "like" a human brain. To know that it will work, you'll need to know which behaviors of the brain are important to your model and which are not (voltages? chemical transfers? tiny quantum events?). You'll also need to know what capabilities the initial brain model you construct will need vs. those it can learn along the way. I don't see how you get the answers to those questions without figuring out what intelligence really is unless generating your models is extraordinarily cheap.

For the planes/birds analogy, it's the same as the idea that feathers are really not all that useful for flight as such. But without some understanding of aerodynamics, there's no reason not to waste a lot of time on them for your bird flight emulator, while possibly never getting your wing shape really right.

Aron: What did those performance improvements of 20-100x buy you in terms of reduced squared error on the Netflix Prize?

Is there some kind of "envisioning fallacy" that generalizes this?

I have seen myself and others fall prey to a sense of power that is remarkably difficult to describe when discussing topics as diverse as physics simulation, Conway's Game of Life derivatives, and automatic equation derivation.

And I once observed this very same sense of power in myself when I thought about a graph with probabilistically weighted edges and how a walker on this graph would be able to interpret data and then make AI. (It was a bit more complicated and smelled like an honest attempt, but there were definitely black boxes there.)

I get what you're saying, and I actually think most people would agree that a fly has a degree of intelligence, just not much. There is merit in your point about goals.

Before you start thinking about "minds" and "cognition", you've got to think about machinery in general.

I thought that's what I was doing. If you look at the "machinery" of intelligence, you find various cognitive faculties, AKA "mental abilities." The ability to do basic math is a cognitive faculty which is necessary for the pursuit of certain goals, and a factor in intelligence. The better one is at math, the better one is at pursuing certain goals, and the more intelligent one is in certain ways. Same for other faculties.

How would you define self-direction? I'm not sure a fly has self-direction, though it can be said to have a modicum of intelligence. Flies act solely on instinct, no? If they're just responding automatically to their environment based on their evolved instincts, then in what sense do they have self-direction?

Andy Wood,
Why the goal criterion? Every creature might be said to be engaging in goal-directed activity without actually having said goal. Also, what if the very goal of intercepting the ball is not intelligent?

Admittedly, the "mental" aspect of "mental ability" might be difficult to apply to computers. Perhaps it would be an improvement to say intelligence is cognitive ability or facility. Mental abilities can take many forms and can be used in pursuit of many goals, but I think it is the abilities themselves which constitute intelligence. One who has better "mental abilities" will be better at pursuing their goals - whatever they might be - and indeed, better at determining which goals to pursue.

Anyone have any problems with defining intelligence as simply "mental ability"? People are intelligent in different ways, in accordance with their mental abilities, and IQ tests measure different aspects of intelligence by measuring different mental abilities.

@Aron, wow, from your initial post I thought I was giving advice to an aspiring undergraduate, glad to realize I'm talking to an expert :-)

Personally I continually bump up against performance limitations. This is often due to bad coding on my part and the overuse of Matlab for-loops, but I still have the strong feeling that we need faster machines. In particular, I think full intelligence will require processing VAST amounts of raw unlabeled data (video, audio, etc.) and that will require fast machines. The application of statistical learning techniques to vast unlabeled data streams is about to open new doors. My take on this idea is spelled out better here.


While it is apparent when something is flying, it is by no means clear what constitutes the "I" of "AI". The comparison with flight should be banned from all further AI discussions.

I anticipate definition of "I" shortly after "I" is created. Perhaps, as is so often done in IT projects, managers will declare victory, force the system upon unwilling users and pass out T-shirts bearing: "AI PER MANDATUM" (AI by mandate).

Or perhaps you have a definition of "I"?

I think it's likely that on the WBE track we will come to understand AGI well enough, even if AGI is not developed independently before that, and as a result this understanding will be implemented before WBE sorts out all the technical details and reaches its goal. So, even if it's hard to compare the independent development of these paths, the dependent scenario leads to the conclusion that AGI will likely come before WBE.

Re: Anders Sandberg argues that brain scanning techniques using a straightforward technology (slicing and electron microscopy) combined with Moore's law will allow us to do WBE on a fairly predictable timescale.

Well, that is not unreasonable - though it is not yet exactly crystal clear which brain features we would need to copy in order to produce something that would boot up. However, that is not a good argument for uploads coming first. Any such argument would necessarily compare upload and non-upload paths. Straightforward synthetic intelligence based on engineering principles seems likely to require much less demanding hardware, much less in the way of brain scanning technology - and much less in the way of understanding what's going on.

The history of technology does not seem to favour the idea of AI via brain scanning to me. A car is not a synthetic horse. Calculators are not electronic abacuses. Solar panels are not technological trees. Deep Blue was not made of simulated neurons.

It's not clear that we will ever bother with uploads - once we have AI. It will probably seem like a large and expensive engineering project with dubious benefits.

[I] am titled "Research Fellow" because I helped found SIAI and I consider it nobler not to give myself grand titles like Grand Poobah.

It seems to me that the titles "Director of Research" and "Executive Director" give the holders power over you, and it is not noble to give other people power over you in exchange for dubious compensation; the fact that Ben's track record (and doctorate?) lends credibility to the organization strikes me as dubious compensation.

Example: the holders of the titles might have the power to disrupt your scientific plans by bringing suit claiming that a technique or a work you created and need is the intellectual property of the SI.

If like me you don't expect the first upload to all by itself rapidly become all powerful, you don't need to worry as much about upload friendliness.


I guess that under the plausible (?) assumption that at least one enhancement strategy in a not too huge search space reliably produces friendly superintelligences, the problem reduces from creating to recognizing friendliness? Even so I'm not sure that helps.

Vassar handles personal networking? Dang, then I probably shouldn't have mouthed off at Robin right after he praised my work.

Someone should write up "creating friendly uploads", but a first improvement over uploading then enhancing a single human would be uploading that human ten times and enhancing all ten copies in different ways so as to mitigate some possible insanity scenarios.

Re: a number of clever people like Robin Hanson and the guys at the Future of Humanity Institute think whole brain emulation will come first, and have good arguments for that conclusion.

What? Where are these supposedly good arguments, then? Or do you mean the "Crack of a Future Dawn" material?

Eliezer, if the US government announced a new Manhattan Project-grade attempt to be the first to build AGI, and put you in charge, would you be able to confidently say how such money should be spent in order to make genuine progress on such a goal?

Disclaimer: perhaps the long-standing members of this blog understand the following question and may consider it impertinent. Sincerely, I am just confused (as I think anyone going to the Singularity site would be).

When I visit this page describing the "team" at the Singularity Institute, it states that Ben Goertzel is the "Director of Research", and Eliezer Yudkowsky is the "Research Fellow". EY states (above); "I was not working with Ben on AI, then or now." What actually goes on at SIAI?

Richard, the whole brain emulation approach starts with and then emulates a particular human brain.

Michael, we have lots of experience picking humans to give power to.

Dan, I've implemented RBMs and assorted statistical machine learning algorithms in the context of the Netflix Prize. I've also recently adapted some of these to work on Nvidia cards via their CUDA platform. Performance improvements have been 20-100x, and this is hardware that has only taken a few steps away from pure graphics specialization. Fine-grained parallelization, improved memory bandwidth, less chip logic devoted to branch prediction, user-controlled shared memory, etc. help.
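For readers who haven't met RBMs: here is a rough numpy sketch of one step of contrastive divergence (CD-1), the standard training procedure. The dimensions, learning rate, and variable names are illustrative (not from the Netflix code); the point is that the update is dominated by matrix products, which is exactly what GPUs accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative RBM: 6 visible units, 4 hidden units (sizes are made up).
W = 0.01 * rng.standard_normal((6, 4))   # weight matrix
b_v = np.zeros(6)                         # visible biases
b_h = np.zeros(4)                         # hidden biases

def cd1_step(v0, lr=0.1):
    """One contrastive-divergence (CD-1) update for a single example.
    The matmuls here are what a GPU parallelizes so effectively."""
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                  # hidden probabilities
    h0 = (rng.random(p_h0.shape) < p_h0) * 1.0    # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                # reconstruct visibles
    p_h1 = sigmoid(p_v1 @ W + b_h)                # hidden probs again
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)
    return np.mean((v0 - p_v1) ** 2)              # reconstruction error

v = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
errors = [cd1_step(v) for _ in range(200)]
```

Training on one repeated pattern like this, the reconstruction error falls as the weights adapt; the real Netflix models are the same computation at vastly larger scale.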

I'm seeing a lot of interesting applications in multimedia processing, many of which have statistical learning elements. One project at Siggraph allowed users to modify a single frame of video and have that modification automatically adapt across the entire video. Magic stuff. If we are heading towards hardware that is closer to what we'd expect as the proper substrate for AI, and we are finding commercial applications that promote this development, then I think we are building towards this fricking hard problem the only way possible: in small steps. It's not the conjugate gradient, but we'll get there.

Re: I don't think anyone really knows the general requirements for AGI, and therefore nobody knows what (if any) kind of specialized hardware is necessary.

One thing which we might need - and don't yet really have - is parallelism.

True, there are FPGAs, but these are still a nightmare to use. Elsewhere parallelism is absurdly coarse-grained.

We probably won't need anything very fancy on the hardware front - just more speed and less cost, to make the results performance- and cost-competitive with humans.

If you haven't seen a brain, "Nothing is easier than to familiarize one's self with the mammalian brain. Get a sheep's head, a small saw, chisel, scalpel and forceps..."

In fact - you can just buy them at the butcher's shop - ready-prepared... often in pairs.

Eliezer, do you work on coding AI? What is the ideal project that intersects practical value and progress towards AGI? How constrained is the pursuit of AGI by a lack of hardware optimized for its general requirements? I'd love to hear more nuts and bolts stuff.

Aron, I don't think anyone really knows the general requirements for AGI, and therefore nobody knows what (if any) kind of specialized hardware is necessary. But if you're a hardware guy and you want something to work on, you could read Pearl's book (mentioned above) and find ways to implement some of the more computationally intensive inference algorithms in hardware. You might also want to look up the work by Geoff Hinton et al on restricted Boltzmann machines and try to implement the associated algorithms in hardware.
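As a toy flavor of the probabilistic inference Pearl's book formalizes (the network and the numbers here are mine, purely for illustration): even the smallest Bayesian network - Rain causing WetGrass - requires the kind of multiply-and-normalize arithmetic that those "computationally intensive inference algorithms" perform at scale.

```python
# Toy posterior inference on a two-node network: Rain -> WetGrass.
# All probabilities below are made-up illustrative values.

p_rain = 0.2                 # prior P(Rain)
p_wet_given_rain = 0.9       # P(Wet | Rain)
p_wet_given_dry = 0.1        # P(Wet | not Rain)

# Bayes' rule: P(Rain | Wet) = P(Wet | Rain) P(Rain) / P(Wet)
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet   # = 9/13, about 0.69
```

Pearl's algorithms (belief propagation, and later exact methods like variable elimination) do this same computation over networks with thousands of variables, which is where hardware acceleration becomes interesting.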

Eliezer, of course in order to construct AI we need to know what intelligence really is, what induction is, etc. But consider an analogy to economics. Economists understand the broad principles of the economy, but not the nuts and bolts details. The inability of the participants to fully comprehend the market system hardly inhibits its ability to function. A similar situation may hold for intelligence: we might be able to construct intelligent systems with only an understanding of the broad principles, but not the precise details, of thought.