I was inspired to enter this contest to shed light on a worldview that may influence the Future Fund's plans regarding AGI. The Future Fund listed three ideas below. I think the last possibility is where their focus should continue.
“As a result, we think it's really possible that:
I’m apprehensive about submitting this worldview, which disregards probabilities. This is a contest judged by superforecasters (our culture’s prophets), about probabilities (our culture’s prophecies), and I’m explaining that probabilities are irrelevant to the subject of this contest. Can you see the conflict? This is going to be tough for me to explain, and for the judges to understand with an unbiased perspective. But I think we are all after the same thing. That is, we want to better understand reality and how to prepare for an uncertain future.
This worldview is a simplified attempt to explain how little we know in the areas relating to Artificial General Intelligence where we need a better understanding before a genuine AGI can be programmed to THINK. Yes, a machine that can think, comprehend and explain things is what we need to qualify as an AGI.
AI: a mindless machine. It includes things we can explain and program.
AGI: what is required is a mind running on a machine, which cannot exclude knowledge-creating processes (life), emotions, creativity, free will and consciousness.
AI can be better than humans at many things (dancing, chess, memory tasks, a finite list of things…) but not everything.
AGI will be better at everything and will have infinite potential. But to get an AGI, we have many hard problems to solve first.
*There are a lot more ways to be wrong than to be right.
The first question one should ask is: is AGI possible or impossible, not whether it’s probable? How can probabilities not be relevant when referring to developing an AGI within a specified timeframe? I’ll start by pointing out the errors of probabilities in a universe which contains people. People are problem solvers; we cannot prophesy what knowledge people will create in the future. There are infinite possibilities for what we will come up with next; our future knowledge growth is unpredictable. Probabilities only work within finite sets, like in a game of chess or poker. But knowledge is infinite and has no bounds. So, when it comes to humans solving problems in the real world, probabilities are irrelevant.
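To make the finite-set point concrete, here is a minimal sketch (my own illustration, not from the contest material). In a fully specified game such as drawing from a deck of cards, a probability is well defined because every outcome can be enumerated; no such enumeration exists for future ideas.

```python
from fractions import Fraction

# In a finite, fully specified game, probabilities are well defined:
# every outcome can be enumerated and the favourable ones counted.
deck = [(rank, suit) for rank in range(1, 14) for suit in "SHDC"]
aces = [card for card in deck if card[0] == 1]

p_ace = Fraction(len(aces), len(deck))  # 4/52 = 1/13

# No comparable enumeration exists for "the ideas people will have",
# so no analogous probability can be computed for future knowledge.
```

The exact arithmetic works only because the sample space (52 cards) was fixed in advance; that is the author's contrast with open-ended knowledge growth.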
When referring to AGI, we are trying to understand what will work in the physical world. In reality there is no way of knowing whether a thing is probably true or certainly true. We can never know if we are 100% right, or 90%, 95%, 99% correct. We can only be “less wrong”, by eliminating errors from our best held ideas.
Imagine trying to explain the metaverse to anyone from 1901. Now keep that in mind for the following…
Yes, we can make predictions, like the outcome of some science experiments; or we can use a mathematical formula to predict the location of a planet in its orbit 100 years from now. This is based on knowledge we have today. We can’t predict the knowledge we will have in the future, or else we would have it today. Notice how no one from 100 years ago wrote a story about today’s best technologies? We can’t imagine most of our future tech. In addition, predicting a way in which our tech could harm us is easier than imagining how it could help us, which explains why pessimistic, dystopian sci-fi movies are more common than optimistic sci-fi movies in which our problems have been solved.
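The planetary-orbit case above is the kind of prediction that does work: applying knowledge we already have. A small sketch using Kepler's third law (for a body orbiting our Sun, with the period in years and the orbit size in astronomical units):

```python
import math

# Kepler's third law for the Sun: T^2 = a^3
# (T = orbital period in years, a = semi-major axis in AU).
def orbital_period_years(semi_major_axis_au):
    return math.sqrt(semi_major_axis_au ** 3)

# Mars: a is about 1.524 AU, giving a period of about 1.88 years.
# Knowing the period lets us project the planet's position far ahead,
# but only because the underlying law is knowledge we already possess.
mars_period = orbital_period_years(1.524)
```

This illustrates the author's distinction: projecting a known law forward is not the same as predicting knowledge we have not yet created.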
Using predictions, we can only guess so far. If we could predict the outcome of an experiment more than one step at a time, why wouldn’t we just jump past the first step, or the second, to the outcomes of the later steps? Each outcome introduces new possibilities that were not possible before. Guessing at those outcomes is prophecy: storytelling, fun, but not scientific.
Assigning a probability to a genuine AGI arriving before a specific time is prophetic. It’s similar to assigning a probability that our civilization will be wiped out before the end of the century. If prophecy were possible, we wouldn’t need incremental improvements in our ideas. The things our next inventions will make possible can only happen after those inventions exist. If we could prophesy, we would just skip the middle steps and invent the subsequent inventions directly. But we can’t. We have no idea what humans will come up with in the future; ideas are infinite and unpredictable.
This only touches on why we cannot forecast a probability of whether AGI will happen, in the real physical world, before a specific time. It’s prophecy, which is dependent on random luck.
(This understanding about probabilities takes time to come to terms with; it sure did with me.)
For progress, it may help to think in terms of the speed of knowledge growth, not a date on a calendar or revolutions around the sun. Assigning an arbitrary due date to AGI is not science. Time isn’t the factor. The speed of our knowledge growth is our best metric, and it can’t be predicted.
If we can create the necessary knowledge regarding the entities I’ve listed in this worldview, then we will have AGI sooner or later, but that depends on the speed of our knowledge growth. Yes, people, us, we need to create this knowledge; the first AGI isn’t going to create itself.
There is no law of physics that makes AGI impossible to create. For example, human consciousness already exists, running on a wetware computer: a person’s mind (software) running on a person’s brain (hardware). We have no reason to think it would be impossible to recreate this. Therefore, we can deduce that it is possible to program an AGI, once we create the required knowledge.
Perhaps we already have all the necessary technology to program an AGI. What we are lacking is the necessary knowledge of how to program it.
Knowledge is not something you can get pre-assembled off a shelf. For every piece of knowledge there is a building process. Let’s identify the two types of knowledge that we know of:
Computers today are not creating any new knowledge. They only use the knowledge which people have already created. People still need to feed the knowledge into the machine.
*When I refer to problems, I mean problems that relate to reality, not abstract mathematical problems that don’t need to reference the physical world. Math claims certain truth; science doesn’t. There is no certainty in reality; we can never be certain we have found the truth. What we want to do is solve problems that make life more enjoyable, relieve suffering, and teach us more about reality. We do this by identifying, understanding and fixing our problems.
All problems are people problems. Without people, problems aren’t recognized. The dinosaurs didn’t know there was a problem before they went extinct, and no other entity that we know of can understand problems either. An AGI must be programmed to deal with new problems.
An AGI needs creativity to solve new problems. Creativity is about creating something new that didn’t exist before. People have the potential to solve an infinite number of problems. An AI has a finite set of problems it can solve, and it is dependent on humans to program that finite set. AI cannot solve new problems which have never existed before. Creativity is an essential step in the knowledge-creation process; it’s how we invent theories.
We know this method works, because this process creates progress. We see things around us improve, and problems being solved.
An AGI needs creativity to help solve new problems. Understanding creativity is a hard problem which we do not fully understand yet.
For computers to evolve to have AGI, like humans only faster, we would first need to fill the gaps in our understanding of how life emerged. We don’t yet know how inorganic material can become organic, self-replicating life forms. Our theories contain huge gaps, which we need to fill before the process can be understood and then programmed.
Another idea for evolving an AGI is to recreate the universe in an artificial simulator. For this we would need to know all the laws of physics, then recreate our universe according to those laws in a computer simulation. This may or may not be possible, given the amount of physical material we would need for the computations and the time available before the end of the universe. Even then, we have a lot of learning to do first.
Will a computer spontaneously become a person if we keep filling it with human knowledge and increasing its speed and memory? No; that would be similar to the old theory of Lamarckism, which Darwin replaced with a better theory, namely evolution by natural selection.
We don’t know how much we need to comprehend in order to understand “consciousness”. Consciousness seems to be our mind’s subjective experience. It appears to emerge from the physical processes in our brain. Once we have understood consciousness, we can demonstrate this by programming it. David Deutsch (one of the godfathers of quantum computing) has a rule of thumb: “If you can’t program it, you haven’t understood it.” Meaning, when we understand human consciousness well enough to program it into the software running on our computers, only then will we have a real AGI.
If you haven’t spent much time thinking about abstractions before, then what I write here will not be enough for you to understand them, but it’s a start. It takes a lot of thinking about them before they are understood. Abstractions are real, complex systems that have effects on the physical world, but they are not physical. They emerge from physical entities. The “physical” to which I am referring is made of something tangible in our universe. Non-physical abstractions are powered by the physical but do something else. Our mind is an abstraction: our brain (physical) carrying knowledge (non-physical). Yes, the knowledge is encoded in our physical brain, like a program in a computer, but it is like another layer above the physical, which is our mind. Another way I’ve come to understand abstractions: they are what makes something more than the sum of its parts. The “more” refers to the abstraction. And they are objectively real.
Today’s computer programs contain abstractions. The computers are made of atoms, but they contain abstractions which can affect the world. E.g. if you are playing chess against a computer and it wins, what beat you? What beat you is the abstract knowledge which was embodied in that computer program. People put that knowledge there.
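The point that people put the knowledge there can be shown with a toy sketch (a hypothetical illustration of my own, not a real chess engine): the machine merely executes a rule a person wrote down.

```python
# Toy illustration: the "knowledge" in a game-playing program is a rule
# that a person encoded; the machine only executes it.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material_score(pieces):
    """A human-devised heuristic: sum the values of the listed pieces."""
    return sum(PIECE_VALUES[p] for p in pieces)

# The program "prefers" keeping the queen only because a person assigned
# it the value 9; the abstract knowledge came from us, not the machine.
prefers_queen = material_score(["queen", "pawn"]) > material_score(["rook", "pawn"])
```

When such a program beats you, what beat you is the encoded heuristic, which is exactly the abstract knowledge the paragraph describes.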
Our minds (like computer programs) are abstract: non-physical entities, not made of atoms, which affect physical entities.
Understanding abstractions is a necessary step to achieving AGI. First we need a good explanation of how our abstract minds work, to get us closer to programming AGI. To create an AGI, we must program our knowledge into software running on physical hardware, to make possible an abstract entity like our mind.
There has been no fundamental difference between today’s computers and the original computers. They still follow the same philosophy, only faster, with more memory and less prone to error.
Today’s AI cannot genuinely pass the Turing test, in which an AI must fool a human judge into believing it is human. There are questions we can ask the AI to test whether it can understand something, anything. But as of yet, there is no understanding happening. Don’t expect Siri to be your go-to companion any time soon; she’s going to be frustrating for a while still.
I think a real AGI will be a good thing. There are benefits that we can imagine and more that we can’t. Immortality comes to mind; populating the rest of the universe does as well.
People are minds with an infinite repertoire of problem-solving potential. After we understand our minds and program an AGI, it will be, by all definitions, a person. They will be able to understand things and solve problems. They will be knowledge creators and explainers, like us. And we will treat them like people.
Today, computers don’t have ideas, but people do. Computers don’t comprehend meaning from words, gestures, implications, symbols or anything at all. People do. For an AGI, what is needed is a knowledge-creating, understanding and explaining program. We aren’t even close. It is possible to program an AGI. But “probably” having an AGI before a certain time is prophecy. Only after we understand human consciousness well enough can we begin the process of programming it.
Understanding that we can solve the many hard problems, needed to program an AGI, is how we deal with our unpredictable future. We can’t solve future problems today. But our knowledge continues to grow.
It sounds to me like you believe that the term probability has to refer to the frequentist concept and not the Bayesian. Reading the Superforecasting book might be helpful to understand what's meant.
Please correct me if I’m wrong, but doesn’t Bayesian induction claim we can be more confident we are right based on more evidence?
The method I have come to understand comes from Karl Popper, which is, we improve our knowledge through conjecture and criticism (the elimination of errors from our best held knowledge). Not by the probability of being closer to truth.
Yes, but Bayesianism is also about what we mean by the term probability.
Karl Popper didn't have much to say about probability.
This is wonderful; you have identified the crux of the conflict. And identifying conflicts is a great thing. We each have a different understanding of how science works. At the root of it, it’s a case of Popperian epistemology vs Bayesian epistemology.
This is how I understand Bayesianism. Bayesianism works on a spectrum: at one end I don’t see a problem with it, at the other end it breaks. One end includes finite problems, like the probability of winning at poker; at the other end we introduce new, unpredictable things into the game (this is our reality), and Bayesian induction can’t work.
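For reference, the "finite end" where Bayesian updating is uncontroversial can be sketched as follows (a hypothetical two-hypothesis coin example of my own, chosen only to show the mechanics):

```python
# Bayesian updating over a fixed, finite hypothesis space:
# two candidate coins, and we update after observing one heads.
priors = {"fair": 0.5, "biased": 0.5}
likelihood_heads = {"fair": 0.5, "biased": 0.9}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: priors[h] * likelihood_heads[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}

# The update is well defined only because the hypothesis space was fixed
# in advance; a genuinely new hypothesis has no prior to update.
```

The final comment is the crux of the disagreement in this thread: the machinery works cleanly when every hypothesis is enumerated beforehand, which is exactly the condition the post argues fails for future knowledge.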
People who don’t know about Popper’s refutation of Bayesian induction, or don’t understand it, will not like or grasp the worldview stated above.
We are looking at science through a different lens, philosophy, or if you prefer, worldview.
Take the name of this forum, for example; one’s view of science affects one’s understanding. Each sees the same thing but understands it differently:
Popperian: “Less Wrong” means we eliminate errors from our best ideas, moving to better ideas.
Bayesian: “Less Wrong” means one finds credible evidence to justify or prove that we are more right.
Using Bayesianism to influence one’s worldview is more popular; just look at the downvotes this post is getting here. I also came from the same Bayesian-based common-sense worldview. But my learning has brought me to understand that Popper’s theory is better. It explains more about understanding reality. But I could be wrong.
Popper on probabilities:
Popper spent a lot of time writing about probability and the problems of induction.
If you are interested, here is a reference to some of his writing on the matter: https://stanford.library.sydney.edu.au/archives/sum2016/entries/induction-problem/
And here he argued mathematically that inductive probability is impossible:
Thank you for posting your comments. I hope more people offer helpful criticism, so that I can correct my errors. Downvotes don’t do much for me. I would rather learn where I’m going wrong.