
Of course listening to AI safety researchers ...

I've read quite a bit about this area of research and haven't found a clear solution anywhere. There is only one point everyone agrees on: as intelligence increases, the possibility of control declines at the same rate as the capabilities and the risk grow.

Sure. At the beginning, while it's still developing, sure (99%).

At the beginning everything will be nice, I guess. It will only consume computing power and slowly develop. Little baby steps.  

I can watch memory cells switch from 0 to 1 and maybe back. I can perhaps play some little games with it as a test, to see how it develops.

But the question is when to stop. Will I be able to recognize the moment when, and how, it starts to go bad? Before it's too late? When will that be? When will it be too late?

The speed of development depends on how many cores and computers work on it. But to see whether it develops in the "right" way (whatever the "right" way is), I need to let it develop.

But what if it develops curiosity? ...and it will. What if it needs more computing power? What if I say no and try to prevent it? Should I?

I have already thought about all these questions, and there is always a remaining risk. The probability is small, but not zero. The possible damage is immense.

And as it develops further, more and more people will be involved.

More people means more opinions, beliefs, needs, fears and desires. Corporations will show interest, and governments will express their concerns and their desire to participate. Contracts will be made and laws will be passed. Interests will be served. At what point should I pull the plug? For how long will I even be able to? Won't it be too late by then?

Shouldn't we talk about it now?

It seems like if your AGI actually works there's a good chance that it kills humanity.

 

But isn't humanity already killing itself? Maybe an AI is our last chance to survive?

So much bad karma....

That's a good question. 

A fractal, self-organizing digital organism that learns at geometric speed.

What could go wrong when it runs in a cloud on several computers around the world?

The software needs more computing capacity, so maybe I'll rent some in the cloud. If everything works the way I expect, everything is OK. If it doesn't work, that's OK too. But what if things go better than expected?

There are enough papers that deal with exactly this risk. I don't have to quote them here; you can find enough information on the net. With a "stop" button or switch you won't get far...

Google's AI has already found glitches in games, and that AI can only play Atari games.

In addition, if it runs on a computer that can be reached over the network, the algorithm can be stolen and easily reprogrammed into a virus (actually a chained worm, but that doesn't matter now). The algorithm is designed so that you can define any (formally definable) goal. In the wrong hands, this could cause considerable damage.
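To make "define any (formally definable) goal" a bit more concrete, here is a minimal, purely hypothetical sketch of what such an interface could look like; the names (`Goal`, `pursue`) are invented for illustration and are not from the poster's actual code. The point it shows is that nothing in an interface like this constrains which goals get plugged in.

```python
# Hypothetical sketch of a goal-parameterised interface (not the poster's system).
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Goal:
    """A formally definable goal: a description plus a predicate over states."""
    description: str
    satisfied: Callable[[object], bool]

def pursue(goal: Goal, candidate_states: Iterable[object]) -> Optional[object]:
    """Return the first candidate state that satisfies the goal, if any.

    The worry in the comment above: this accepts a harmful goal
    just as readily as a benign one.
    """
    for state in candidate_states:
        if goal.satisfied(state):
            return state
    return None

# Example with a harmless goal; any other predicate would work the same way.
benign = Goal("find an even number", lambda s: isinstance(s, int) and s % 2 == 0)
print(pursue(benign, range(1, 10)))  # -> 2
```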
 

Maybe I shouldn't worry so much and just try it out?


(I know, my English isn't perfect. I apologize for that.) 

You're absolutely right. I only claim to have solved the known engineering problems. I know how to build/code one, not the unknown problems that are sure to come.

I didn't mean security issues. That question is still open.

Development on a laptop or an isolated system is not possible with the required computing capacity. And yes, I am talking to universities and specialists. It's all in the works. But none of them can answer the moral question for me.

Folks, I'm not far from completion, only a few months. And then what? I wanted to think about a few things beforehand. 

Because you can't trap an AGI in a box. It will always find a way out. I see the code here in front of me. I see the potential. Believe me, I don't believe it can be held in a 'black box'. The question is also: should we?

Do you want to be born in a maximum security prison?

What opinion should the AI have of us when we put it in jail? 

At the end? I don't know. You will be able to give the AI tasks or problems to solve, but I really don't know what it will make of them in the end. It will be an independently thinking and acting entity.

Well, I know all the possible problems and obstacles in development. I have solved them in my calculations and also solved each problem individually. It seems reasonable to assume that it will work. I am not alone in this assessment. But only a real experiment would prove it. And since I do not bear the risk alone, but all of you bear it with me, I wanted to ask for your opinion.

Answer by MrLight, Nov 19, 2020

The question is not the software; that can be written easily. The question is how you want to map the available information, schematically and/or logically, and how you want to collect the data.

The rest is easy.

If the data is sufficient and the desired answer can be projected from the available data, an AI can be created that can answer the necessary question.
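As a toy illustration of that last claim, here is a minimal sketch (assumptions: a linear relationship and synthetic data, chosen only for brevity): if the desired answer really is a projection of the collected data, even a simple fitted model recovers it. This is not the poster's system, just an example of the general idea.

```python
# Toy sketch: when the answer is a projection of the data, a fitted model finds it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # "collected data": 200 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])    # the answer is literally a projection of the data
y = X @ true_w + rng.normal(scale=0.01, size=200)

# Fit a linear map from data to answer via least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w_hat, 2))              # approximately [ 2.  -1.   0.5]
```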