As the SIAI gains publicity, more people are reviewing its work. I am not sure how popular this blog is, but judging by its about page, the author writes for some high-profile blogs. His latest post takes on Omohundro's "Basic AI Drives":

When we last looked at a paper from the Singularity Institute, it was an interesting work asking if we actually know what we’re really measuring when trying to evaluate intelligence by Dr. Shane Legg. While I found a few points that seemed a little odd to me, the broader point Dr. Legg was pursuing was very much valid and there were some equations to consider. However, this paper isn’t exactly representative of most of the things you’ll find coming from the Institute’s fellows. Generally, what you’ll see are spanning philosophical treatises filled with metaphors, trying to make sense out of a technology that either doesn’t really exist and is treated as a black box with inputs and outputs, or is imagined by the author as a combination of whatever a popular science site reported about new research ideas in computer science. The end result of this process tends to be a lot like this warning about the need to develop a friendly or benevolent artificial intelligence system based on a rather fast and loose set of concepts about what an AI might decide to do and what will drive its decisions.

Link: worldofweirdthings.com/2011/01/12/why-training-a-i-isnt-like-training-your-pets/

I posted a few comments but do not think I am the right person to continue that discussion. So if you believe that what other people think about the SIAI matters and want to improve its public relations, here is your chance. I am myself interested in the answers to his objections.

78 comments

I'm worried that XiXi posted this link expressly as an example of the sort of thing that the SIAI should be engaging in and then when the author came over here, his comments got quickly downvoted. This is not an effective recipe for engagement.

6GregFish13y
Hey, if people choose to downvote my replies, either because they disagree or just plain don't like me, that's their thing. I'm not all that easy to scare with a few downvotes... =)
3TheOtherDave13y
Do you think the comments themselves ought not have been downvoted? Or just that, regardless of the value of the comments, the author ought not have been? If the former, that seems a broader concern. If you have a sense of what it is about them that the community disliked that it ought not have disliked, it might be valuable to articulate that sense and why a different metric would be preferable. If the latter, I'm not sure that's a bad thing, nor am I sure that "fixing" it doesn't cause more problems than it resolves.
-2[anonymous]13y
Wolf!

Shane Legg is not "from the Singularity Institute". He is currently a postdoctoral research fellow at the Gatsby Computational Neuroscience Unit in London.

3JoshuaZ13y
The reason that the piece refers to him in that context is that the author read Legg's material on the advice of Michael Anissimov (who is affiliated with the SI).

My view is that the problem here is a disconnect between the practical and the theoretical view points.

The practical view of computers is likely to commit PC-morphism, that is, to assume that any computer systems of the future will be like current PCs in the way that they are programmed and act. This is not unreasonable if you haven't been exposed to things like cellular automata and have a lot of evidence of computers being PC-like.

The theoretical view looks at the entire world as a computer (computable physics etc) and so has grander views of what ... (read more)
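The mention of cellular automata above can be made concrete with a minimal sketch. This is a toy illustration (not anything from the Institute's work): Rule 110, an elementary cellular automaton whose update rule is a tiny lookup table, yet which is known to be Turing-complete. It shows why "computers are PC-like" is a narrower model of computation than theory supports.

```python
# Elementary cellular automaton (Rule 110): each cell's next state is a
# fixed function of its left neighbor, itself, and its right neighbor.
# The update rule is trivial, yet Rule 110 is known to be Turing-complete.

RULE = 110  # the rule number encodes the 8-entry lookup table in binary

def step(cells):
    """Apply one synchronous update to a row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        out.append((RULE >> index) & 1)
    return out

# Start from a single live cell and watch structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The point is not that anyone would build an AI this way, only that "what circuits were designed to do" and "what behavior the system exhibits" can be very different levels of description.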

I am by no means an expert, but I see a problem with this passage:

Wanted behaviors are rewarded, unwanted are punished, and the subject is basically trained to do something based on this feedback. It’s a simple and effective method since you’re not required to communicate the exact details of a task with your subject. Your subject might not even be human, and that’s ok because eventually, after enough trial and error, he’ll get the idea of what he should be doing to avoid a punishment and receive the reward. But while you’re plugging into the existing be

... (read more)
2Nornagest13y
There are a number of machine learning techniques that don't involve progressive reinforcement of any kind. Most of those I can think of are either too crude to support AGI or computationally intractable when generalized outside of tiny problem domains, but I don't know of any proof that says AGI implies reinforcement learning. On the other hand, you could make an analogous but stronger argument in terms of fitness functions.
4PeterisP13y
To put it in very simple terms - if you're interested in training AI according to technique X because you think that X is the best way, then you design or adapt the AI structure so that technique X is applicable. Saying 'some AI's may not respond to X' is moot, unless you're talking about trying to influence (hack?) AI designed and controlled by someone else.
0Normal_Anomaly13y
Thanks for the response. I'll check out the other techniques; I don't know much about them. I didn't mean that, exactly; I just meant that reinforcement learning is possible. Fish seemed to be implying that it wasn't.
2GregFish13y
Absolutely not. If you take another look, I argue that it's unnecessary. You don't want the machine to do something? Put in a boundary. You don't have the option to just turn off a lab rat's desire to search a particular corner of its cage with a press of a button, so all you can do is put in some deterrent. But with a machine, you can just tell it not to do that. For example, this code in Java would mean not to add two even numbers if the method receives them:

    public int add(int a, int b) {
        if ((a % 2) == 0 && (b % 2) == 0) {
            return -1; // refuse to add two even numbers
        }
        return a + b;
    }

So why do I need to build an elaborate circuit to "reward" the computer for not adding even numbers? And why would it suddenly decide to override the condition? Just to see why? If I wanted it to experiment, I'd just give it fewer bounds.

Part of the disagreement here seems to arise from disjoint models of what a powerful AI would consist of.

You seem to imagine something like an ordinary computer, which receives its instructions in some high-level imperative language, and then carries them out, making use of a huge library of provably correct algorithms.

Other people imagine something like a neural net containing more 'neurons' than the human brain - a device which is born with little more hardwired programming than the general guidance that 'learning is good' and 'hurting people is bad' together with a high-speed internet connection and the URL for wikipedia. Training such an AI might well be a bit like training your pets.

It is not clear to me which kind of AI will reach a human level of intelligence first. But if I had to bet, I would guess the second. And therein lies the danger.

ETA: But even the first kind of AI can be dangerous, because sooner or later someone is going to issue a command with unforeseen consequences.

1GregFish13y
That's not what an artificial neural net actually is. When training your ANN, you give it an input and tell it what the output should be. Then, using a method called backpropagation, it adjusts the weights and activation thresholds of each neuron object until it can match the output. So you're not just telling it to learn; you're telling it what the problem is and what the answer should be, then letting it find its way to the solution. Then, you apply what it learned to real-world problems. Again, those other people you mention seem to think that a lot more is going on in an AI system than is actually going on.
4Perplexed13y
Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net. Clearly, a neural net by itself doesn't act autonomously - to get anything approaching 'intelligence' you will need to at least add some feedback loops beyond simple backpropagation. More will go on in a future superhuman AI than goes on in any present-day toy AI. Well, yes, those other people I mention do seem to think that. But they are not indulging in any kind of mysticism. Only in the kinds of conceptual extrapolation which took place, for example, in going from simple combinational logic circuitry to the instruction fetch-execute cycle of a von Neumann computer architecture.
1GregFish13y
No, actually I think the tutorial was necessary, especially since what you're basically saying is that something like a large enough neural net will no longer function by the rules of an ANN. If it doesn't, how does it learn? It would simply spit out random outputs without having some sort of direct guidance. And again, I'm trying to figure out what the "superhuman" part will consist of. I keep getting answers like "it will be faster than us" or "it'll make correct decisions faster", and once again point out that computers already do that on a wide variety of specific tasks, which is why we use them...
5Perplexed13y
Am I really being that unclear? Something containing so many and such large embedded neural nets so that the rest of its circuitry is small by comparison. But that extra circuitry does mean that the whole machine indeed no longer functions by the rules of an ANN. Just as my desktop computer no longer functions by the rules of a dRAM. And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better. Play chess, write poetry, learn to speak Chinese, design computers, prove Fermat's Last Theorem. The whole human repertoire. Sure, machines already do some of those things. Many people (I am not one of them) think that such an AI, doing every last one of those things at superhuman speed, would be transformative. It is at least conceivable that they are right.
0GregFish13y
It never really did. DRAM is just a way to keep bits in memory for processing. What's going on under the hood of any computer hasn't changed at all. It's just grown vastly more complex and allowed us to do much more intricate and impressive things with the same basic ideas. The first computer ever built and today's machines function by the same rules; it's just that the latter are given the tools to do so much more with them. But machines already do most of the things humans do faster and better, except for creativity and pattern recognition. Does that mean the first AI will be superhuman by default as soon as it encompasses the whole human realm of abilities? At the very least it would be informative and keep philosophers marinating on the whole "what does it mean to be human" thing.
2Perplexed13y
Yes. As long as it does everything roughly as well as a human and some things much better.
0timtyler13y
Bostrom has: I think that is more conventional. Unless otherwise specified, to be "super" you have to be much better at most of the things you are supposed to be "super" at.
0GregFish13y
Sounds like a logical conclusion to me... I still have a lot of questions about detail but I'm starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.
0JGWeissman13y
To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans, be implemented in a way that humans (and therefore the AI) can understand well enough to improve on, and have access to the details of this implementation.
0Vladimir_Nesov13y
It could start improving (in software) from a state where it's much worse than humans in most areas of human capability, if it's designed specifically for ability to self-improve in an open-ended way.
0JGWeissman13y
Agreed. I meant to emphasize the importance of the AI having the ability to effectively reflect on its own implementation details. An AI that is as smart as humans but doesn't understand how it works is not likely to FOOM.
0timtyler13y
The ability to duplicate adult researchers quickly and cheaply might accelerate the pace of research quite a bit, though.
4Perplexed13y
It might indeed. 25 years of human nursing, diapering, potty training, educating, drug rehabilitating, and more educating gets you a competent human researcher about 1 time in 40, so artificial researchers are likely to be much cheaper and quicker to produce. But I sometimes wonder just how much of human innovation stems from the fact that not all human researchers have had exactly the same education.
0timtyler13y
If machine researchers are anything like phones or PCs, there will be millions of identical clones - but also substantial variation. Not just variation caused by different upbringings and histories, but variation caused by different architectural design. By contrast humans are mostly all the same - due to being built using much the same recipe inherited from a recent common ancestor. We aren't built for doing research - whereas they probably will be. They will likely be running rings around us soon enough.
-1timtyler13y
There's a big, fat book all about the topic of the difficulties of controlling machines - and it is now available online: Kevin Kelly - Out of Control

Having read the piece, I was not impressed. I became even less impressed when I read his criticism of Legg's piece. It seems to basically come down to "computers can't do things that humans can, and they never will be able to. So there."

9XiXiDu13y
My intention in linking to it was not that I thought it featured good arguments, as you might notice from my comments over there, but that he is an educated skeptic with potential influence in the mainstream rationality community. The post is a sample of an outsider's perception and assessment of the SIAI. And right now is the time for the SIAI to hone its appearance and public relations. Because once people like PZ Myers become aware of the SIAI and portray it and LW negatively, this community will be inundated with literally thousands of mediocre rationalists and many potential donors will be lost.
0ata13y
Bad comments will get downvoted and not seen by many people. If someone isn't getting much out of LW and LW isn't getting much out of their presence, they'll leave eventually. If the moderation system continues working about as well as it has been working, an influx of new users shouldn't be a problem. (It's probably something the site needs to be prepared for when Eliezer's books come out, anyway.)
-4GregFish13y
Gee, thanks. So you basically linked and replied as a form of damage control? And by the way, the "outsiders' perception" isn't helped when the "insiders'" arguments seem to be based not on what computers actually do, but what they're made to do in comic books.
9JoshuaZ13y
XiXi is actually one of the people here who is more critical of the SI and the notion of run-away superintelligence. XiXi can correct me if I'm wrong here, but I suspect that XiXi's intention in this particular instance was to do just what he said: to give an example of an outsider's perspective on the SI, of exactly the type of outsider who the SI should be trying to convince and should be able to convince if their arguments have much validity. Ok. This is the sort of remark that gets the SI people justifiably annoyed. Generalizations from fictional evidence are bad. But, at the same time, that something happens to have occurred in fictional settings isn't in general a reason to assign it lower probability than you would if you weren't aware of such fiction. (To use a silly example, there's fiction set after the sun has become a red giant. The fact that there's such fiction isn't relevant to evaluating whether or not the sun will enter such a phase.) It also misses one of the fundamental points that the SI people have made repeatedly: computers as they exist today are very weak entities. The SI's argument doesn't have to do with computers in general. It centers around what happens once machines have human-level intelligence. So, ask yourself: how likely do you think it is that we'll ever have general AI, and if we do have general AI, what buggy failure modes seem most likely?
0GregFish13y
As defined by... what, exactly? We have problems measuring our own intelligence, or even defining it, so we're giving computers a very wide sliding scale of intelligence based on personal opinions and ideas more than a rigorous examination. A computer today could ace just about any general knowledge test we give it if we tell it how to search for an answer or compute a problem. Does that make it as intelligent as a really academically adept human? Oh, and it can do it in a tiny fraction of the time it would take us. Does that make it superhuman?
5JoshuaZ13y
It may be a red herring to focus on the definition of "intelligence" in this context. If you prefer, taboo the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do. The issue is what happens after one has a machine that reaches that point.
0GregFish13y
But we already have things capable of doing everything a regular person can do. We call them regular people. Are we trying to build another person in digital format here, and if so, why? Just because we want to see if we can? Or because we have some big plans for it?
2JoshuaZ13y
Irrelevant to the question at hand, which is what would happen if a machine had such capabilities. But, if you insist on discussing this issue also, machines with human-like abilities could be very helpful. For example, one might be able to train one of them to do some task, and then make multiple copies of it, much more efficient than individually training lots of humans. Or one could send such AIs into dangerous situations where we might not ethically send a person (whether it would actually be ethical to send an AI is a distinct question.)
0Vladimir_Nesov13y
Why is it distinct? Whether doing something is an error determines if it's beneficial to obtain ability and willingness to do it.
1ata13y
It's distinct when the question is about risk to the human, rather than about the ethics of the task itself. We could make nonsentient nonpersons that nevertheless have humanlike abilities in some broad or narrow sense, so that sacrificing them in some risky or suicidal task doesn't impact the ethical calculation as it would if we were sending a person. (I think that's what JoshuaZ was getting at. The "distinct question" would presumably be that of the AI's potential personhood.)
0GregFish13y
Um... we already do all that to a pretty high extent and we don't need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that's all you need.
1JoshuaZ13y
There are a large number of tasks where the expertise level needed by current technology is woefully insufficient. Anything that has a strong natural language requirement for example.
0GregFish13y
Oh fun, we're talking about my advisers' favorite topic! Yeah, strong natural language is a huge pain and if we had devices that understood human speech well, tech companies would jump on that ASAP. But here's the thing. If you want natural language processing, why build a Human 2.0? Why not just build the speech recognition system? It's making AGI for something like that the equivalent of building a 747 to fly one person across a state? I can see various expert systems coming together as an AGI, but not starting out as such.
3TheOtherDave13y
It would surprise me if human-level natural-language processing were possible without sitting on top of a fairly sophisticated and robust world-model. I mean, just as an example, consider how much a system has to know about the world to realize that in your next-to-last sentence, "It's" is most likely a typo for "Isn't." Granted that one could manually construct and maintain such a model rather than build tools that maintain it automatically based on ongoing observations, but the latter seems like it would pay off over time.
3jsalvatier13y
I don't think this is a good argument. Just because you cannot define something doesn't mean it's not a real phenomenon or that you cannot reason about it at all. Before we understood fire completely, it was still real and we could reason about it somewhat (fire consumes some things, fire is hot, etc.). Similarly, intelligence is a real phenomenon that we don't completely understand, and we can still do some reasoning about it. It is meaningful to talk about a computer having "human-level" (I think "human-like" might be more descriptive) intelligence.
2GregFish13y
If you have no working definition for what you're trying to discuss, you're more than likely to be barking up the wrong tree about it. We didn't understand fire completely, but we knew that it was hot, you couldn't touch it, and you made it by rubbing dry sticks together really, really fast, or by making a spark with rocks and have it land on dry straw. Also, where did I say that until I get a definition of intelligence all discussion about the concept is meaningless? I just want to know what criteria an AI must meet to be considered human and match them with what we have so far so I can see how far we might be from those benchmarks. I think it's a perfectly reasonable way to go about this kind of discussion.
0jsalvatier13y
I apologize; the intent of your question was not at all clear to me from your previous post. It sounded to me like you were using this as an argument that SIAI types were clearly wrong-headed. To answer your question, then: the relevant dimension of intelligence is something like "the ability to design and examine itself similarly to its human designers".
0GregFish13y
Ok, I'll buy that. I would agree that any system that could be its own architect and hold meaningful design and code review meetings with its builders would qualify as human-level intelligent.
0jsalvatier13y
To clarify: I didn't mean that such a machine is necessarily "human level intelligent" in all respects, just that that is the characteristic relevant to the idea of an "intelligence explosion".
0[anonymous]13y
Interesting question, Wikipedia does list some requirements.
2GregFish13y
Wow, if that's all you got from a post trying to explain the very real difference between acing an intelligence test by figuring things out on your own and having a machine do the same after you give it all the answers (and how the suggested equations only measure how many answers were right, not how that feat was accomplished), I don't even know how to properly respond... Oh, and by the way, in the comments I suggest to Dr. Legg a way to keep track of the machine doing some learning and figuring out, so there's another thing to consider. And yes, I've had the formal instruction in discrete math to do so.
4JoshuaZ13y
It is possible that I didn't explain my point well. The problem I am referring to is your apparent insistence that there are things that machines can't do that people can and that this is insurmountable. Most of your subclaims are completely reasonable, but the overarching premise that machines can only do what they are programmed to seems to show up in both pieces, and is simply wrong. Even today, that's not true by most definitions of those terms. Neural nets and genetic algorithms often don't do what they are told.
-1GregFish13y
Only if you choose to disregard any thought of how machines are actually built. There's no magic going on in that blinking box, just circuits performing the functions they were designed to do in the order they're told. Actually, they do precisely what they're told, because without a fitness function which determines what problem they are to solve in their output and their level of correctness, they just crash the computer. Don't confuse algorithms that have very generous bounds and allow us to try different possible solutions to the same problem for some sort of thinking or initiative on the computer's part. And when computers do something weird, it's because of a bug which sends them pursuing their logic in ways programmers never intended, not because they decide to go off on their own. I can't tell you how many seemingly bizarre and ridiculous problems I've eventually tracked down to a bad loop, or a bad index value, or a missing symbol in a string...
9JoshuaZ13y
There's no magic going on inside the two pounds of fatty tissue inside my skull either. Magic is apparently not required for creativity or initiative (whatever those may be). I'm confused by what you mean by "thinking" and "initiative." Let's narrow the field slightly. Would the ability to come up with new definitions and conjectures in math be an example of thinking and initiative? Calling something a bug doesn't change the nature of what is happening. That's just a label. Humans are likely as smart as they are due to runaway sexual selection for intelligence. And then humans got really smart and realized that they could have all the pleasure of sex while avoiding the hassle of reproduction. Is the use of birth-control an example of human initiative or a bug? Does it make a difference?
0GregFish13y
Yes, but with a caveat. I could teach an ANN how to solve a problem, but it would be more or less by random trial and error with a squashing function until each "neuron" has the right weight and activation function. So it will learn how to solve this generic problem, but it won't be because it traced its way along all the steps. (Actually, I made a mistake in my previous reply: ANNs have no fitness function; that's a genetic algorithm. ANNs are given an input and a desired output.) So if you develop a new definition or conjecture and can state why and how you did it, then develop a proof, you've shown thought. Your attempt to suddenly create a new definition or theorem just because you wanted to and were curious, rather than just being tasked to do it, would be initiative. No, you see, a bug is when a computer does something it's not supposed to do and handles its data incorrectly. Birth control is actually another approach to reproduction most of the time, delaying progeny until we feel ready to raise them. Those who don't have children have put their evolutionary desire to provide for themselves above the drive to reproduce and counter that urge with protected sex. So it's not really a bug as much as a solution to some of the problems posed by reproduction. Now, celibacy is something I'd call a bug, and we know from many studies that it's almost always a really bad idea to forgo sex altogether. Mental health tends to suffer greatly.
2JoshuaZ13y
Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative? Is a professional mathematician showing initiative? They keep thinking about math because that's what gives them positive feedback (e.g. salary, tenure, positive remarks from their peers). Is "incorrectly" a normative or descriptive term? How is it different from "this program didn't do what I expected it to do", other than that you label it a bug when the program deviates more from what you wanted to accomplish? Keep in mind that what a human wants isn't a notion that cleaves reality at the joints. Ok. So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don't want to ever have kids, is that a bug in your view?
0GregFish13y
Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? Doesn't sound like it, so I'd say it's not. Initiative is doing something that's not required, but something you feel needs to be done or something you want to do. Yes. When you need it to return "A" and it returns "Finland," it made a mistake which has to be fixed. How it came to that mistake can be found by tracing the logic after the bug manifests itself. Ok, when you build a car but the car doesn't start, I don't think you're going to say that the car is just doing what it wants and we humans are just selfishly insisting that it bend to our whims. You're probably going to take that thing to a mechanic. Same thing with computers, even AI. If you build an AI to learn a language and it doesn't seem to be able to do so, there's a bug in the system. That's answered in the second sentence of the quote you chose...
0JoshuaZ13y
Ok. Now, if said grad student did come to the thesis adviser, but their motivation was that they've been taught from a very young age that they should do math, is there initiative? It seems that a large part of the disagreement here comes from implicit premises. You seem to be focused on very narrow AI, when the entire issue is what happens when one doesn't have narrow AI but instead has AI with most of the capabilities that humans have. Let's set aside whether or not we should build such AIs and whether or not they are possible. Assuming that such entities are possible, do you or do you not think there's a risk of the AI getting out of control? Either there's a miscommunication here or there's a misunderstanding about how evolution works. An organism that puts its own survival over reproducing is an evolutionary dead end. Historically, lots of humans didn't want any children, but they didn't have effective birth control methods, so in the ancestral environment there was minimal evolutionary incentive to remove that preference. It has only been recently that there is widespread and effective birth control. So, what you've described as one evolved desire overriding another would still seem to be a bug.
3GregFish13y
Not sure. You could argue both points in this situation. Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen. I suppose it would.
0JoshuaZ13y
Ah. In that case, there's actually very minimal disagreement.
8TheOtherDave13y
Can you clarify how it's helpful to know that my machine only does what it's been told to do, if I can't know what I'm telling it to do or be certain what I have told it to do? I mean, there's a sense in which humans only do "what they've been told to do", also... we have programs embedded in DNA that manifest themselves in brains that construct minds from experience in constrained ways. (Unless you believe in some kind of magic free will in human minds, in which case this line of reasoning won't seem sensible to you.) But so what? Knowing that doesn't make humans harmless.
2jsalvatier13y
Additionally, a big part of what SIAI types emphasize is that knowing very precisely and very broadly (at the same time) what humans want is very important. Human desires are very complex, so this is not a simple task.
-4GregFish13y
If you have no idea what you want your AI to do, why are you building it in the first place? I have never built an app that does, you know, anything and whatever. It'll just be a muddled mess that probably won't even compile. No, we do not. This is not how biology works. Brains are self-organizing structures built by a combination of cellular signals and environmental cues. All that DNA does is regulate what proteins the cell will manufacture. Development goes well beyond that.
4TheOtherDave13y
I'm not sure how you got from my question to your answer. I'm not talking at all about programmers not having intentions, and I agree with you that in pretty much all cases they do have intentions. I'll assume that I wasn't clear, rather than that you're willing to ignore what's actually being said in favor of what lets you make a more compelling argument, and will attempt to be clearer. You keep suggesting that there's no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do. At the same time, you admit that computer programs sometimes do things their programmers didn't intend for them to do. I might have written a stupid bug that causes the program to delete the contents of my hard drive, for example. I agree completely that, in doing so, it is merely doing what I told it to do: I'm the one who wrote that stupid bug, it didn't magically come out of nowhere, the program doesn't have any mysterious kind of free will or anything. It's just a program I wrote. But I don't see why that should be particularly reassuring. The fact remains that the contents of my hard drive are deleted, and I didn't want them to be. That I'm the one who told the program to delete them makes no difference I care about; far more salient to me is that I didn't intend for the program to delete them. And the more a program is designed to flexibly construct strategies for achieving particular goals in the face of unpredictable environments, the harder it is to predict what it is that I'm actually telling my program to do, regardless of what I intend for it to do. In other words: "I can't know what I'm telling it to do or be certain what I have told it to do." Sure, once it deletes the files, I can (in principle) look back over the source code and say "Oh, I see why that happened." But that doesn't get me my files back. And yet, remarkably, brains don't "self-organize" in the absence of that regulation. Y
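The "I'm the one who told the program to delete them" scenario above can be illustrated with a deliberately contrived bug (the filenames and helper names here are invented for illustration). The program does exactly what the source says; the source just doesn't say what the programmer meant.

```python
# Intent: delete temporary files, i.e. names ending in ".tmp".
# The written code uses a substring test instead of a suffix test,
# so "notes.tmp.backup" is also caught -- exactly what was told,
# not what was meant.

files = ["report.doc", "scratch.tmp", "notes.tmp.backup"]

def is_temporary(name):
    return ".tmp" in name          # what was written

def is_temporary_intended(name):
    return name.endswith(".tmp")   # what was meant

deleted = [f for f in files if is_temporary(f)]
kept = [f for f in files if not is_temporary(f)]
```

Here the divergence is trivial to spot after the fact; the argument in the comment is that the same gap between "told" and "meant" persists, and widens, as programs construct their own strategies in unpredictable environments.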
0GregFish13y
No, I just keep saying that we don't need to program them to "like rewards and fear punishments" and train them like we'd train dogs.

Oh no, it's not. I have several posts on my blog detailing how bugs like that could actually turn a whole machine army against us and turn Terminator into a reality rather than a cheesy robots-take-over-the-world-for-shits-and-giggles flick.

But the source code isn't like DNA in an organism. Source code covers so much more ground than that. Imagine having an absolute blueprint of how every cell cluster in your body will react to any stimuli through your entire life, and every process it will undertake from now until your death, including how it will age. That would be source code. Your DNA isn't even nearly that complete. It's more like a list of suggestions and blueprints for raw materials.
0TheOtherDave13y
(shrug) OK, fair enough. I agree with you that reward/punishment conditioning of software is a goofy idea. I was reading your comment here to indicate that we can constrain the behavior of human-level AGIs by just putting appropriate constraints in the code. ("You don't want the machine to do something? Put in a boundry. [..] with a machine, you can just tell it not to do that.") I think that idea is importantly wrong, which is why I was responding to it, but if you don't actually believe that then we apparently don't have a disagreement.

Re: source code... if we're talking about code that is capable of itself generating executable code as output in response to situations that arise (which seems implicit in the idea of a human-level AGI, given that humans are capable of generating executable code), it isn't at all clear to me that its original source code comprises in any kind of useful way an absolute blueprint for how every part of it will react to any stimuli.

Again, sure, I'm not positing magic: whatever it does, it does because of the interaction between its source code and the environment in which it runs; there's no kind of magic third factor. So, sure, given the source code and an accurate specification of its environment (including its entire relevant history), I can in principle determine precisely what it will do. Absolutely agreed. (Of course, in practice that might be so complicated that I can't actually do it, but you aren't claiming otherwise.) If you don't think the same is true of humans, then we disagree about humans, but I think that's incidental.
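The point about code that generates executable code at runtime can be illustrated with a small, hypothetical Python sketch (the function and variable names are inventions for illustration): the program's fixed source does not enumerate its runtime behavior, because part of that behavior is constructed from environmental input.

```python
# Minimal sketch: a program whose fixed source generates and runs new
# executable code in response to its "environment". Predicting what it
# does requires knowing the runtime input, not just reading the source.

def build_strategy(environment_rule: str):
    """Generate a new function at runtime from a textual rule.
    The rule stands in for unpredictable environmental input."""
    namespace = {}
    exec(f"def strategy(x):\n    return {environment_rule}", namespace)
    return namespace["strategy"]

# The same source yields different behavior for different inputs:
double = build_strategy("x * 2")
negate = build_strategy("-x")

print(double(21))  # 42
print(negate(5))   # -5
```

In principle the source plus the input still determines everything, as the comment concedes; the sketch only shows why the source alone is not a usable blueprint.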
0GregFish13y
Again, it really shouldn't be doing that. It should have the capacity to learn new skills and build new neural networks to do so. That doesn't require new code; it just requires a routine to initialize a new set of ANN objects at runtime.
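A minimal sketch of what "initializing new ANN objects at runtime" might look like, assuming a toy network class and agent invented here for illustration (this is not anyone's actual design): learning a new skill instantiates another network object from existing code, with no new source code generated.

```python
import random

# Hedged sketch of the "new ANN objects at runtime" idea: a new skill is
# a new network object created by already-written code. The class names,
# layer sizes, and skill names are illustrative assumptions.

class TinyANN:
    """A single-layer network with random weights (untrained placeholder)."""
    def __init__(self, n_inputs: int, n_outputs: int):
        self.weights = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                        for _ in range(n_outputs)]

    def forward(self, inputs):
        # One output per weight row: a plain weighted sum, no activation.
        return [sum(w * x for w, x in zip(row, inputs)) for row in self.weights]

class Agent:
    def __init__(self):
        self.skills = {}  # skill name -> network object

    def learn_new_skill(self, name: str, n_inputs: int, n_outputs: int):
        # "A routine to initialize a new set of ANN objects at runtime":
        self.skills[name] = TinyANN(n_inputs, n_outputs)

agent = Agent()
agent.learn_new_skill("grasping", n_inputs=4, n_outputs=2)
print(len(agent.skills["grasping"].forward([1.0, 0.5, -0.5, 0.0])))  # 2
```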
0TheOtherDave13y
If it somehow follows from that that there's an absolute blueprint in it for how every part of it will react to any stimuli in a way that is categorically different from how human genetics specify how humans will respond to any environment, then I don't follow the connection... sorry. I have only an interested layman's understanding of ANNs.

"imagined by the author as a combination of whatever a popular science site reported"

I've heard this argument from non-singularitarians from time to time. It bothers me because of conservation of expected evidence. What are the blogger's priors for taking an argument seriously if the topic under discussion reminds him of something he's heard about in a pop-sci piece?

We all know that popular sci/tech reporting isn't the greatest, but if you have low confidence about SIAI-type AI and hearing about it reminds you of some second-hand pop-sci report... (read more)

2JoshuaZ13y
I don't think that's what is meant by the phrase. I think the author is asserting that it seems to them that some of the stuff put out by the website shows the general trends one would expect if someone has learned about some idea from popularizations rather than from the technical literature. If that is what the author is discussing, then that is worrisome.
3GregFish13y
Yes, that is exactly what I meant. That might sound a little harsh, but that was my impression.
2XiXiDu13y
What might also be worrisome is that neither of the two papers he seems to have read and associated with the SIAI was actually written by the SIAI.
5JoshuaZ13y
Yes, but in at least one of those cases (both cases?) the piece was recommended to him by a higher-up in the SIAI. So associating them with the SIAI in the weak sense that they reflect views connected to the Institute is not unreasonable. If that was the intended meaning, it is just very poor phrasing. ETA: And regardless of those issues, that's a reflection of problems with the author, not necessarily a claim that defends the SIAI from the particular criticism in question.
1timtyler13y
I think that is not correct. You said: However, the link was to:

http://singinst.org/upload/ai-resource-drives.pdf

...not...

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

The former is written by Carl Shulman, who seems to be credited with 4 recent SIAI publications here.
0Zachary_Kurtz13y
It's not clear to me, though this explanation seems plausible as well. Either way, it's not good.