All of MrLight's Comments + Replies

Of course listening to AI safety researchers ...

I've read quite a bit in this area of research. I haven't found a clear solution anywhere. There is only one point everyone agrees on: as intelligence increases, the possibility of control declines, while the capabilities and the risk rise.

Yes, according to current knowledge most AGI designs are dangerous. Speaking to researchers could help: one of them might be able to explain to you why your particular design is dangerous.

Sure. At the beginning, as it develops, sure (99%).  

At the beginning everything will be nice, I guess. It will only consume computing power and slowly develop. Little baby steps.  

I can watch how memory cells switch from 0 to 1 and maybe back. I can maybe play some little games as a test, to see how it develops.

But the question is when to stop. Will I be able to recognize the moment when, and how, it starts to go bad? Before it's too late? When will that be? When will it be too late?

The development speed depends on how many cores and computers w...

I don't know when you should stop. All I'm suggesting is that you not turn it on without a time at which it is supposed to (automatically) switch off. In other words, you should stop it regularly, over and over again. This has the benefit of letting you consider the new information you have received and decide how to respond to it. Perhaps your design will be "flawed" - and won't have the risk of going 'foom' that you think it will (without further work by you to revise and change it, before it is capable of 'improving'). If you decide that it is risky, then the 'intervention' isn't turning it off - it's simply not deciding to turn it back on (which maybe shouldn't be automatic).
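The timed-shutoff procedure above can be sketched in a few lines. This is a minimal illustration, not anything from the original discussion: `experiment` is a placeholder workload, and the deadline wrapper simply runs it in a child process and forcibly stops it once the time is up, leaving any restart as a deliberate manual decision.

```python
import multiprocessing
import time


def run_with_deadline(target, deadline_seconds):
    """Run `target` in a child process and forcibly stop it at the deadline.

    Restarting is deliberately manual: after each stop you review what
    happened and decide whether to start it again.
    """
    proc = multiprocessing.Process(target=target)
    proc.start()
    proc.join(timeout=deadline_seconds)  # wait at most deadline_seconds
    if proc.is_alive():
        proc.terminate()  # hard stop once time is up
        proc.join()
    return proc.exitcode


def experiment():
    # Placeholder for the actual workload; here it just runs forever.
    while True:
        time.sleep(0.1)


if __name__ == "__main__":
    code = run_with_deadline(experiment, deadline_seconds=2)
    print("stopped, exit code:", code)
```

A real setup would of course need stronger guarantees than `terminate()` (the process could spawn children, or the machine itself could be networked), but the shape of the loop - run, hard stop, review, consciously restart - is the point being made.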

It seems like if your AGI actually works there's a good chance that it kills humanity.

But isn't humanity already killing itself? Maybe an AI is our last chance to survive?

No, the population is growing. Spending a few additional decades on AI safety research likely improves our chances of survival. Of course, listening to AI safety researchers, and not just AI researchers from a random university, matters as well.

So much bad karma....

That's a good question. 

A fractal, self-organizing digital organism that learns at geometric speed.

What can go wrong when it runs in the cloud, on several computers around the world?

The software needs more computing capacity, so maybe I'll book some in the cloud. If everything works the way I expect, everything is OK. If it doesn't work, that's OK too. But what if things go better than expected?

There are enough papers that deal with exactly this risk. I don't have to quote them here. You can find enough information on th...

Can you run it for a while and then stop it?

You're absolutely right. I only claim to have solved the known construction problems. I know how to build/code one. Not the unknown problems that are sure to come.

I didn't mean security issues. That question is still open.

Development on a laptop or a separate system is not possible given the required computing capacity. And yes, I am talking to universities and specialists. It's all in the works. But none of them can answer the moral question for me.

Folks, I'm not far from completion, only a few months. And then what? I wanted to think about a few things...

It seems like if your AGI actually works there's a good chance that it kills humanity.

In the end? I don't know. You will be able to give the AI tasks or problems to solve, but I really don't know what it will make of them in the end. It will be an independently thinking and acting entity.

Well, I know all the possible problems and obstacles in development. I solved them in my calculations and also solved each problem individually. It seems reasonable to assume that it will work. I am not alone in this assessment. But only a real experiment would prove it. And since I do not bear the risk alone, but all of you bear it with me, I wanted to ask for your opinion.

I wouldn't trust someone to do anything safety-critical if they claim that they know all possible problems and obstacles. Unknown unknown problems are always part of the concern when doing something new. If you do actually decide to run this, I recommend doing it on an airgapped computer and committing that, if it manages to self-improve in any way, you will show the thing to someone well-versed in AI safety before removing the airgap.

The question is not the software. That can be written easily. The question is how you want to map the available information, schematically and/or logically. And how you want to collect the data.

The rest is easy.

If the data is sufficient and the desired answer can be projected from the available data, an AI can be created that can answer the necessary question.