Now, if said grad student did come to the thesis adviser, but their motivation was that they've been taught from a very young age that they should do math, is there initiative?

Not sure. You could argue it either way in this situation.

Assuming that such entities are possible, do you or do you not think there's a risk of the AI getting out of control?

Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.

So, what you've said is that one evolved desire overriding another would still seem to be a bug.

I suppose it would.

Oh fun, we're talking about my advisers' favorite topic! Yeah, robust natural language processing is a huge pain, and if we had devices that understood human speech well, tech companies would jump on that ASAP.

But here's the thing: if you want natural language processing, why build a Human 2.0? Why not just build the speech recognition system? Making an AGI for something like that would be the equivalent of building a 747 to fly one person across a state. I can see various expert systems coming together as an AGI, but not starting out as such.
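To sketch what I mean by expert systems coming together (everything below is hypothetical, made up purely for illustration): each module stays narrow and only competent at its own job, and anything more general would come from wiring experts together behind a dispatcher rather than from building general intelligence into any one of them.

```python
# Hypothetical sketch: narrow expert modules composed behind a dispatcher.
# None of these classes stands for a real system; they illustrate the shape.

class SpeechRecognizer:
    """Narrow expert: audio in, text out. Stubbed for illustration."""
    def handle(self, request):
        return f"transcript of {request!r}"


class RoutePlanner:
    """Narrow expert: trip request in, route out. Stubbed for illustration."""
    def handle(self, request):
        return f"route for {request!r}"


class Dispatcher:
    """Hands each task to the one expert built for it. No module here
    needs general intelligence, just competence at its own job."""
    def __init__(self):
        self.experts = {}

    def register(self, task, expert):
        self.experts[task] = expert

    def run(self, task, request):
        return self.experts[task].handle(request)


system = Dispatcher()
system.register("transcribe", SpeechRecognizer())
system.register("navigate", RoutePlanner())
print(system.run("transcribe", "meeting_audio.wav"))
```

The point of the dispatcher is that any generality lives in the wiring, not in any single module.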

Sounds like a logical conclusion to me...

I still have a lot of questions about the details, but I'm starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.

... if we're talking about code that is capable of itself generating executable code as output in response to situations that arise

Again, it really shouldn't be doing that. It should have the capacity to learn new skills and build new neural networks to do so. That doesn't require new code; it just requires a routine to initialize a new set of ANN objects at runtime.
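As a rough sketch of that routine (the ANN and SkillManager classes here are toy stand-ins I made up, not any real library): encountering a new kind of task triggers the initialization of a fresh network object at runtime, while the program's own code never changes.

```python
# Toy sketch: learning a new skill = instantiating and training a new
# network object at runtime, not generating or rewriting source code.

import random


class ANN:
    """Stand-in for a trainable network: fixed code, adjustable weights."""
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

    def predict(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs))

    def train(self, inputs, target, lr=0.01):
        # Simple delta rule: nudge weights toward the target output.
        error = target - self.predict(inputs)
        self.weights = [w + lr * error * x
                        for w, x in zip(self.weights, inputs)]


class SkillManager:
    """When a new kind of task shows up, initialize a fresh ANN for it."""
    def __init__(self):
        self.skills = {}

    def network_for(self, task, n_inputs):
        if task not in self.skills:            # new situation encountered
            self.skills[task] = ANN(n_inputs)  # new network, same old code
        return self.skills[task]


manager = SkillManager()
net = manager.network_for("grasp-object", n_inputs=3)
net.train([0.2, 0.5, 0.1], target=1.0)
```

The executable code stays static; all the "new" behavior lives in freshly initialized objects and their learned weights.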

Just as my desktop computer no longer functions by the rules of DRAM.

It never really did. DRAM is just a way to keep bits in memory for processing. What's going on under the hood of any computer hasn't changed at all; it's just grown vastly more complex, letting us do much more intricate and impressive things with the same basic ideas. The first computer ever built and today's machines function by the same rules; the latter just has the tools to do so much more with them.

And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better.

But machines already do most of the things humans do faster and better, except for creativity and pattern recognition. Does that mean the first AI will be superhuman by default as soon as it encompasses the whole human realm of abilities?

Many people think that such an AI, doing every last one of those things at superhuman speed, would be transformative.

At the very least it would be informative and keep philosophers marinating on the whole "what does it mean to be human" thing.

Um... we already do all that to a pretty high extent and we don't need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that's all you need.

the relevant dimension of intelligence is something like "ability to design and examine itself similarly to its human designers".

Ok, I'll buy that. I would agree that any system that could be its own architect and hold meaningful design and code review meetings with its builders would qualify as human-level intelligent.

You keep suggesting that there's no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do.

No, I just keep saying that we don't need to program them to "like rewards and fear punishments" and train them like we'd train dogs.

I agree completely that, in doing so, it is merely doing what I told it to do: I'm the one who wrote that stupid bug, it didn't magically come out of nowhere, the program doesn't have any mysterious kind of free will or anything. It's just a program I wrote. But I don't see why that should be particularly reassuring.

Oh no, it's not. I have several posts on my blog detailing how bugs like that could actually turn a whole machine army against us and turn Terminator into a reality rather than a cheesy robots-take-over-the-world-for-shits-and-giggles flick.

... and yet we have no significant difficulty equating a running program with its source code.

But the source code isn't like DNA in an organism. Source code covers so much more ground than that. Imagine having an absolute blueprint of how every cell cluster in your body will react to any stimulus through your entire life, and every process it will undertake from now until your death, including how it will age. That would be source code. Your DNA is nowhere near that complete. It's more like a list of suggestions and blueprints for raw materials.

Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative?

Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? Doesn't sound like it, so I'd say no. Initiative is doing something that's not required, but something you feel needs to be done or something you want to do.

Is "incorrectly" a normative or descriptive term?

Yes. When you need it to return "A" and it returns "Finland," it made a mistake which has to be fixed. How it came to that mistake can be found by tracing the logic after the bug manifests itself.
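A contrived example of the kind of bug and trace I mean (the names and tables are invented for illustration):

```python
# The function should return a grade like "A"; instead it returns "Finland".
# Tracing the logic after the bug manifests shows exactly where it went wrong.

import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")

grades = {"alice": "A", "bob": "B"}
countries = {"alice": "Finland", "bob": "Sweden"}


def grade_of(student, table=countries):  # bug: wrong default lookup table
    result = table[student]
    logging.debug("looked up %r in %r -> got %r", student, table, result)
    return result


print(grade_of("alice"))  # "Finland" -- the trace points at the bad table
```

Nothing mysterious happened: the trace shows the wrong table was wired in, and the fix is mechanical. That's descriptive and normative at once.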

Keep in mind that what a human wants isn't a notion that cleaves reality at the joints.

Ok, when you build a car but the car doesn't start, I don't think you're going to say that the car is just doing what it wants and we humans are just selfishly insisting that it bend to our whims. You're probably going to take that thing to a mechanic. Same thing with computers, even AI. If you build an AI to learn a language and it doesn't seem to be able to do so, there's a bug in the system.

So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don't want to ever have kids, is that a bug in your view?

That's answered in the second sentence of the quote you chose...

Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net.

No, actually I think the tutorial was necessary, especially since what you're basically saying is that something like a large enough neural net will no longer function by the rules of an ANN. If it doesn't, how does it learn? It would simply spit out random outputs without some sort of direct guidance.
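To make that concrete (a single perceptron learning logical AND, with the classic perceptron update standing in for "the rules of an ANN"): with its update rule the unit converges; without it, the outputs are just whatever the random initial weights happen to produce.

```python
# Toy illustration: without its learning rule, a neuron emits whatever its
# random weights dictate; with the rule, it converges on the target function.

import random

random.seed(0)

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND


def predict(weights, bias, x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0


weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)

# No guidance yet: outputs are an accident of initialization.
print("untrained:", [predict(weights, bias, x) for x, _ in DATA])

# The "direct guidance": the perceptron learning rule.
for _ in range(20):
    for x, target in DATA:
        error = target - predict(weights, bias, x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print("trained:  ", [predict(weights, bias, x) for x, _ in DATA])  # [0, 0, 0, 1]
```

Scale the net up as much as you like; if it's still learning, some version of that update loop is still running.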

More will go on in a future superhuman AI than goes on in any present-day toy AI.

And again I'm trying to figure out what the "superhuman" part will consist of. I keep getting answers like "it will be faster than us" or "it'll make correct decisions faster," and I once again point out that computers already do that on a wide variety of specific tasks, which is why we use them...
