Shall we bite the apple?
Hello community, nice to meet you here.
For a long time I have been tormented by a question that I can't seem to answer alone.
Let's talk hypothetically. Let's assume that I have developed a method that can be implemented in a small program. This method, once started, will develop an AGI by itself.
The program works independently, evolves, and gets better from minute to minute, not just at one problem but at any number of problems. The algorithm is able to rewrite and adapt itself, and it follows this path of continuous improvement until it results in an AGI.
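To make the idea of continuous self-improvement a bit more concrete, here is a purely illustrative sketch of such a loop. This is not my actual method; `propose_rewrite` and `benchmark_suite` are hypothetical placeholders standing in for "modify the program" and "score it across many problems":

```python
import random


def benchmark_suite(program) -> float:
    """Hypothetical stand-in: score a candidate program across many tasks."""
    return sum(program(task) for task in range(10))


def propose_rewrite(program):
    """Hypothetical stand-in: produce a modified copy of the program."""
    delta = random.uniform(-0.1, 0.1)
    return lambda task: program(task) + delta


def self_improve(program, steps: int = 1000):
    """Keep a rewrite only if it scores better than the current version."""
    best_score = benchmark_suite(program)
    for _ in range(steps):
        candidate = propose_rewrite(program)
        score = benchmark_suite(candidate)
        if score > best_score:
            program, best_score = candidate, score
    return program


improved = self_improve(lambda task: 0.0)
```

The point of the sketch is only the structure: a program that repeatedly rewrites itself and keeps whatever scores better, with no fixed stopping point.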
This algorithm consists of several individual components, which together give this...
I've read quite a bit about this area of research and haven't found a clear solution anywhere. There is only one point everyone agrees on: as intelligence increases, our ability to control the system declines, while its capabilities and the associated risk rise.