“It is impossible to speak in such a way that you cannot be misunderstood”

Karl Popper

 

Intro

This worldview combines the teachings of Karl Popper and David Deutsch with my own understanding. I attempt to explain the areas where we need to improve our knowledge before a genuine AGI can be programmed to think, comprehend, create explanations and create knowledge. My take is that the fear surrounding AGI is unnecessary. The common pessimistic themes rest on empiricism, Bayesian induction and prophecy, not on a picture of what will happen in reality.

 

I cannot include everything in this article; there are too many places that need deeper explanation. Rather, this will be an introduction to my worldview; the remainder can follow in the comments section. My goal is to convince the Future Fund that this worldview should be seriously contemplated unless it can be rationally refuted. (Downvotes don’t count as a scientific refutation unless accompanied by a good explanation.) Is this a popularity contest or one of rationality?

 

Arguments presented in this worldview:

 

1.     Empiricism, Bayesianism and Inductivism vs. Fallibilism

2.     This worldview contest is a paradox in itself. Are other worldviews allowed to compete?

3.     Probabilities are not useful when referring to the nature of reality.

4.     AGI won’t be as dangerous as one may believe.

5.     AGI will be a lot harder to achieve than one may believe.

6.     Adopting a Fallibilist worldview over a Bayesian, empirical, inductive worldview will improve your investment objectives.

 

Worldviews

Worldview: A conception of the world.

 

This Future Fund worldview contest is about paradigms and worldviews: how our perspectives on reality differ, and where we might be wrong in our perception of reality. Fundamentally, we examine and re-examine the way we think in an effort to converge upon the truth. It’s a very important exercise.

 

To begin, I notice a conflict in the game itself. Is the Future Fund Worldview Prize organized using Bayesian inference as a measuring stick, leaving out other options? This is a conflict even before the game begins. The contest invites discussion about existing worldviews, but within the confines of an empirical, Bayesian, inductive worldview. What if the measuring stick is limited in the first place?

 

In this essay I include two popular worldviews. One is based on empiricism; the other on inherent human fallibility and creativity. These are conflicting approaches to understanding how knowledge evolves and, subsequently, how we can expand our understanding of reality.

 

Bayesian Epistemology

Empiricism: The belief that we acquire knowledge through observation, through the direct experience of our senses.

 

Bayesianism: The idea that confirming evidence makes probabilities go up and disconfirming evidence makes probabilities go down. Seemingly higher probabilities are then used to justify the belief that one’s theories are closer to the truth.

 

Inductivism: The idea that theories confirmed by past and current data (information) can be extrapolated into the future.

 

Today these are accepted as the common-sense methodology for how science works. Together they can be regarded as Bayesian Epistemology, or a Bayesian theory of knowledge.
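
To make the machinery concrete, here is a minimal sketch of a single Bayes update in Python. The numbers are invented for illustration and are not tied to any particular question.

```python
# A minimal sketch of one Bayesian update. All numbers are illustrative.
def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Return P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

prior = 0.30          # P(H): initial credence in a theory
likelihood = 0.80     # P(E|H): probability of the evidence if the theory is true
evidence_prob = 0.50  # P(E): overall probability of the evidence

posterior = bayes_update(prior, likelihood, evidence_prob)
print(f"posterior = {posterior:.2f}")  # 0.48: confirming evidence raised the credence
```

Confirming evidence (a likelihood greater than the overall evidence probability) pushes the credence up; disconfirming evidence pushes it down. This updating rule is the engine the rest of the essay questions.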

 

The Bayesian approach to learning supports one’s own chosen theory. When evidence confirms our theories, we feel confident we are right. With a growing list of evidence confirming a theory, we humans tend to make our theories resistant to change. Such approaches become hierarchical and authoritative.

 

Popperian epistemology/Fallibilism

 

Popperian Epistemology, or Fallibilism, is the opposite of Bayesian Epistemology. Fallibilism is non-hierarchical and non-authoritative, and works by admitting that theories are inherently wrong but can evolve to become less wrong. We aim toward truth by continuously correcting errors. Using this epistemology, one creates explanations through conjectures and rational criticism. We make guesses (theories), then eliminate errors from those guesses. We find something to be more true by eliminating untruths.

 

Bayesianism is a belief system

 

Bayesianism purports that if we find enough confirming evidence, we can at some point believe we have found “truth”. What is the process for filling the gaps in our knowledge about truths under Bayesianism? Do we fill the gaps with evidence? For how long do we collect the evidence before it becomes the ultimate truth? What if there is an infinite amount of confirming and disconfirming evidence we don’t know about?

 

Bayesianism uses today’s evidence, extrapolates to the future, then assigns probabilities: more true, or less true. Meanwhile it disregards the infinite potential of all that is unknowable. It is impossible to comprehend all the influences, all the dynamics, involved.

 

Fallibilism is not a belief system

 

All our ideas are conjectural, but our conjectures become less wrong as we remove errors. At any moment our current knowledge can be replaced with better knowledge. Practicing Fallibilism, we have no final answers, but better and better explanations.

 

Is there an end to scientific discovery? Will we eventually know everything there is to know, with no new problems arising? In such a reality, Bayesianism would work. But our reality isn’t finite; there will be no end to discovery, no end to problems or progress. Every problem solved introduces new and better problems. Reality includes infinite possibilities and is unpredictable.

 

What follows will challenge our current cultural belief system. It is time to calibrate to a lens that doesn’t rely on a belief system but acknowledges the limitations of a Bayesian approach to science. To contemplate the future of AGI, this is imperative.

 

What if current ideas about how scientific and technological progress should work are limited, or just wrong?

 

Identifying Problems

We want to relieve suffering, make life more enjoyable, and understand more about our universe. Rather than beginning with complex mathematical problems, we need to focus first on the physical world: identifying, understanding, and resolving conflicting ideas, which are our problems.

 

Problems are people problems. Without people, problems aren’t recognized. Dinosaurs didn’t know they had a problem before they went extinct; neither can any other entity that we know of understand problems. To be genuine, an AGI must be able to identify problems. For example: does an AI get bored? No. People do. So we invented games as a temporary solution to boredom. Games solved a problem for people. AI wouldn’t invent games because it wouldn’t know it had a problem, unless we told it it did.

 

Reality has no boundary; it is infinite. Remembering this, people and AGI have the potential to solve an infinite number of problems. AI, however, is limited: it can solve only a finite set of problems. On its own, AI cannot solve problems that have not yet arisen.

 

AI vs AGI vs People

AI (Artificial Intelligence): A mindless machine, encompassing the things we can already explain and program into computers. It is capable of solving finite problems. (Bayesianism works here.)

 

AGI (Artificial General Intelligence): Capable of infinite problem solving.

To achieve AGI we will need to program the following:  

  • knowledge creating processes
  • emotions
  • creativity
  • free will
  • consciousness.

 

People and AGI will be similar beings in different packages. Each will be a universal explainer, with the potential to comprehend anything in the universe. At this point, AGI will be like another race of people. At this level of emergence it’s not a case of man vs. machine, or people vs. AGI; it is about people and AGI together facing a never-ending stream of problem solving.

 

AI can play chess better than humans, it can memorize better than we can, even be a better dancer; it can outdo us in many things . . . but not everything. AGI, on the other hand, will be better at everything and will have infinite potential. But to get an AGI, we first have challenging problems to solve. There’s a huge gap from AI to AGI.

 

Probabilities

We cannot calculate the probability of future events as if people didn’t exist. Recognizing this to be true, is it relevant to assign probabilities to the development of an AGI within a specified timeframe?

No.

People are abstract knowledge creators. We cannot guess what knowledge people will create in the future; there exist infinite possibilities. What will we come up with next? Probabilities and Bayesian inference work within finite sets, as in a game of chess or poker. But probabilities don’t apply in the realm of actual reality, which is always expanding toward infinite possibility. Imagine a game of poker with an infinite number of cards. Imagine also new types of cards showing up regularly, perhaps a 7 of mushrooms or a 14 of squares. Probabilities work only if nothing truly new is introduced to the game.
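
To see why finiteness matters, compare a fixed deck with an open-ended one. A minimal sketch in Python (the 52-card deck is the familiar one; the “7 of mushrooms” is, of course, the hypothetical above):

```python
from fractions import Fraction

# Probabilities are well defined over a fixed, enumerable sample space:
deck_size = 52
aces = 4
print(Fraction(aces, deck_size))  # 1/13: the chance of drawing an ace

# If genuinely new cards keep appearing, the sample space is open-ended.
# P(draw the "7 of mushrooms") has no defined value: the denominator is
# unknown, and the event itself was inconceivable before the card existed.
```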

 

Bayesianism provides good explanations within a finite reality, a reality where it’s possible to count the known things involved. But human knowledge can grow without bounds. So, realizing humans have the capacity to solve any problem in the real world, probabilities become irrelevant.

 

Imagine trying to explain the metaverse to someone from 1901. Or 1922. Now, keep that in mind for the following…

 

We can’t predict the knowledge we will have in the future.  If we could, we would implement that knowledge today. No one who lived 100 years ago imagined how we would communicate easily around the globe through devices carried in our pockets, or that ordinary people could travel to the other side of the world to spend Christmas with family.  In the same vein, we can’t imagine most of our future tech.

 

Interestingly, people seem to find it easy to predict ways in which our tech could harm us; less often do we hear about how it could help us. Pessimistic, dystopian sci-fi movies are more common than optimistic ones, movies in which problems get solved.

 

Using predictions, we can guess only so far. If we could predict the outcomes of an experiment more than one step at a time, why wouldn’t we just jump past the first step, or the second? Each outcome introduces new possibilities that were not possible before.

 

Assigning probabilities to a genuine AGI arriving before a specific time is prophecy, and prophecy is not an accurate gauge of potential future events. Our future inventions will grow from the inventions that came before; they can happen only after previous ones have been invented. If we could prophesy, we would just skip the middle steps (the creative process) and jump to the final product. We can’t do that. We have to create our way toward the future with no idea what we will come up with. Our ideas lead us to unpredictable frontiers, and the process is infinitely creative.

 

The Knowledge Clock

To make progress, we have to think not in terms of dates on a calendar or revolutions around the sun, but rather about the speed at which knowledge grows. Assigning an arbitrary due date to the likelihood of AGI happening distracts from what is actually possible. The speed of our knowledge growth is our best metric, and it can’t be predicted; we can only measure it historically, after it has happened.

If we develop the necessary knowledge regarding the concepts I’ve listed in this worldview, then, sooner or later, we will have AGI.  But the timing is dependent upon the speed of our knowledge growth.

 

Is AGI Possible Or Impossible?

The first question one should ask is: “Is AGI possible?”  Not “Is it probable?”

 

There is no law of physics that makes AGI impossible to create. For example: it is possible for human consciousness to exist, and it runs on a wetware computer. A human mind (software) runs on a human brain (hardware).

 

The Church–Turing–Deutsch principle states that a universal computing device can simulate every physical process. This is also known as the universality of computation, and it says that the laws of physics allow every physical object to be rendered by a program on a general-purpose computer. From this we can deduce that, once we have developed the required knowledge, it is possible to program an AGI.
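
As a small illustration of “rendering a physical process by a program”, here is a general-purpose computer simulating a ball falling under gravity. The numbers and the crude integration scheme are my own choices, made only to show the principle:

```python
# Illustrative only: a universal computer simulating a simple physical process
# (a ball dropped from 10 m, integrated in small time steps).
dt, g = 0.001, 9.81   # time step (s), gravitational acceleration (m/s^2)
height, velocity, t = 10.0, 0.0, 0.0

while height > 0:
    velocity -= g * dt
    height += velocity * dt
    t += dt

print(f"ball lands after ~{t:.2f} s")  # analytic answer: sqrt(2 * 10 / 9.81) ≈ 1.43 s
```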

 

To Program An AGI We Need More Knowledge About Knowledge

Knowledge = information with influence.

 

Knowledge is information that has causal power.   Our genes have causal power.  Our ideas have causal power.

 

Knowledge develops through our best guesses. For genes, the best guesses are the ones that don’t die before they replicate. For people, our best guesses are explanations that are hard to vary.

 

The knowledge creation process:
 

  • Problem —> Theory (best guess) —> Error Correction (experiment) —> New, Better Problem —> Repeat (∞)…
     

If this problem-solving system were a factory, knowledge would be the resulting product. This is demonstrated in life by consistent improvement.
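
Here is a minimal sketch of that loop in code, under deliberately toy assumptions: the “problem” is locating an unknown numeric target, conjecture is bold guessing, and criticism is an error measurement standing in for experiment.

```python
import random

# Toy problem: find a value close to an unknown target. The target plays the
# role of reality; `criticize` plays the role of experiment.
problem = {"target": 42.0, "tolerance": 0.5}

def conjecture() -> float:
    """A bold first guess (theory)."""
    return random.uniform(0.0, 100.0)

def criticize(theory: float) -> float | None:
    """Return the error found in the theory, or None if it survives the test."""
    error = abs(theory - problem["target"])
    return error if error > problem["tolerance"] else None

theory = conjecture()
while (error := criticize(theory)) is not None:
    # Error correction: vary the theory; keep the variant only if it is less wrong.
    variant = theory + random.uniform(-error, error)
    if abs(variant - problem["target"]) < error:
        theory = variant

print(f"surviving theory: {theory:.2f}")  # not proven true; merely not yet refuted
```

The loop never proves the surviving theory true; it only fails to refute it at the current tolerance, which is the Fallibilist point.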

 

Knowledge is not something you can get pre-assembled off a shelf. For every piece of knowledge there has been a building process. Let’s identify the two types of knowledge building that we know of:

 

1.     Genes demonstrate the first knowledge-creating process that we know of. It is a mindless process: genes develop knowledge by adapting to an environment through replication, variation and selection. Genes embody knowledge. Theirs is a slow knowledge-creating process (a minimal sketch of this process follows this list).

2.     Our minds demonstrate an intentional and much faster process. Our knowledge evolves as we recognize problems and creatively guess, developing ideas to solve each problem (adapting to an environment). We guess, then we criticize our guesses. This process is happening right now in you. I am not uploading knowledge into your brain. You are guessing what I’m writing about, comparing and criticizing those guesses using your own background knowledge. You are trying to understand the meaning of what I’m trying to share with you. That idea then competes with your current knowledge of the subject matter. There’s a battle of ideas in your mind. If you are able to criticize your own ideas as well as the competing idea, the idea containing more errors can be discarded, leaving you with the better idea and therefore expanding your knowledge. Transferring the meaning of ideas (replicating them) is difficult. People are the only entity that we know of who can do this, and we do it imperfectly, by variation and selection.
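
To make item 1 concrete, here is a minimal sketch of the mindless, gene-style process: replication with variation, then selection by an environment. The “environment” is a toy target string, and every parameter is arbitrary.

```python
import random

TARGET = "ADAPT"                        # stands in for the environment
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(genome: str) -> int:
    """How well a genome matches its environment."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome: str) -> str:
    """Variation: copy the genome with one random letter changed."""
    i = random.randrange(len(genome))
    return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]

population = ["".join(random.choices(ALPHABET, k=len(TARGET))) for _ in range(50)]
generation = 0
while max(map(fitness, population)) < len(TARGET):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]     # replication + variation
    generation += 1

print(f"'{TARGET}' reached after {generation} generations")
```

Note what the sketch shares with the claim above: the process is blind and slow, and nothing in it ever recognizes that it has a problem.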

 

Perhaps we have the necessary technology today to program an AGI. What we lack is the necessary knowledge about how to program it.

 

Computers today are not creating new knowledge. Machines like AlphaZero look as if they are creating new knowledge; actually, they are exploiting inefficiencies in a finite environment. We could learn a lot from AI. Machine learning will uncover new inefficiencies, and we can learn from that, but it is people who will find creative ways to use that information to grow knowledge. Computers are trapped within the knowledge that people have already created.

 

Creativity

Creativity sets us apart from all other life forms that we know of. It releases us from the confines of our genes. Creativity has enabled people to solve endless streams of problems, thus opening us to boundless knowledge. It appears that creativity has evolved only once, in us; it must be a very rare and difficult thing to achieve. And it is a necessary element if we wish to achieve AGI. Why should we assume it will arise spontaneously in our machines?

 

Creativity is a transfer process; it requires communication between people. Through communication (people talking with people, conversing, exchanging ideas) knowledge is restructured in our minds. Some ideas replicate; they become part of our culture and are shared as “memes”.

 

Creativity isn’t just about mimicking. It is more than trial and error. It isn’t limited to random image generation. Mimicking, trial and error, and random image generation are mechanical processes that AI can already perform today.

 

For our AI to make the leap to AGI, programmers must understand more clearly the human element of creativity.

 

AI and Machine Learning

There are claims today that machine learning has developed or is showing signs of creativity. This is a misconception. We are after genuine creativity, not the illusion of creativity.

 

DALL-E

A high-level example of how the image generator DALL-E works today:

 

1.     First, a text prompt is input into a text encoder that is trained to map the prompt to a representation space.

2.     Next, a model called “the prior” maps the text encoding to a corresponding image encoding that captures the semantic information of the prompt contained in the text encoding.

3.     Finally, an image decoder stochastically generates an image that is a visual manifestation of this semantic information.
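
A schematic of those three stages in code. Every name, shape, and “model” below is invented for illustration; this is not the real DALL-E architecture or API, only the shape of the pipeline the steps describe.

```python
import numpy as np

def text_encoder(prompt: str) -> np.ndarray:
    """Step 1: map the prompt into a (toy) representation space."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.normal(size=512)

def prior(text_encoding: np.ndarray) -> np.ndarray:
    """Step 2: map the text encoding to a corresponding (toy) image encoding."""
    return 0.9 * text_encoding  # stand-in for a learned mapping

def image_decoder(image_encoding: np.ndarray) -> np.ndarray:
    """Step 3: stochastically generate a (toy) image from the image encoding."""
    rng = np.random.default_rng()
    return rng.normal(loc=image_encoding.mean(), scale=1.0, size=(64, 64, 3))

image = image_decoder(prior(text_encoder("an astronaut riding a horse")))
print(image.shape)  # (64, 64, 3)
```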

 

This doesn’t come close to the genuine creativity I discussed in the previous section.

 

AlphaZero

A machine learning system like AlphaZero is given the basic rules of the game, rules that people invented. It then plays a game with finite moves on a finite board, using trial and error to find the most efficient ways to win. No creativity is needed. (In this situation, Bayesian induction does work.)
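
As a toy stand-in (nothing like AlphaZero’s actual algorithm), here is trial-and-error move selection in a fully finite game: Nim with five stones, where each player takes one or two stones and taking the last stone wins. The rules are fixed in advance by people; the program merely exploits them.

```python
import random
from collections import defaultdict

def playout(stones: int, first_move: int) -> bool:
    """After our first move, both sides play randomly; True if we take the last stone."""
    stones -= first_move
    we_moved_last = True
    while stones > 0:
        stones -= random.choice([1, 2]) if stones >= 2 else 1
        we_moved_last = not we_moved_last
    return we_moved_last

wins, plays = defaultdict(int), defaultdict(int)
for _ in range(10_000):               # pure trial and error within fixed rules
    move = random.choice([1, 2])
    plays[move] += 1
    wins[move] += playout(5, move)

best = max(plays, key=lambda m: wins[m] / plays[m])
print(f"learned opening move: take {best}")  # converges on taking 2 (leaving 3)
```

Nothing here could cope with a new piece or a new rule appearing mid-game; the move set [1, 2] is hard-coded, which is exactly the limitation described next.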

Now . . . superimpose that game over actual reality, which is a board with infinite squares. Straight away, infinite new sets of problems arise. New pieces show up repeatedly, and the rules for them are unknown. How would machine learning solve these new problems?

 

It can’t. Machines don’t have the problem-solving capabilities people have. People identify problems and solve them using creative conjectures and refutations. (Once the rules are in place, the algorithm can take over.)

 

Lastly, it is people who interpret the results and come up with the explanations that make any of this useful.

 

Consciousness

Most of us would acknowledge that we don’t yet understand consciousness. Consciousness seems to be the subjective experience of the mind, and it would seem to emerge from physical processes in our brains. Once we have understood consciousness, we can program it. David Deutsch (one of the godfathers of quantum computing) has a rule of thumb: “If you can’t program it, you haven’t understood it.” When we understand human consciousness well enough to program it into the software running on our computers, we will have a real AGI.

 

Abstractions: Why are they important for AGI?

Abstractions are real, complex systems that have effects on the physical world, yet they are not physical. They emerge from physical entities that are tangible in our universe; abstractions are powered by the physical. “Mind” is an abstraction. Our brains (physical) carry knowledge (non-physical). The knowledge is encoded in our physical brains, like a program in a computer, but there is another layer above the physical, which we call “mind”. Another way I’ve come to understand abstractions: a system can be more than the sum of its parts, and that “more” is the abstraction. Abstractions are objectively real.

 

Today’s computer programs contain abstractions. The programs are made of atoms, but the abstractions they contain can affect the world. For example: if you are playing chess against a computer and it wins, what beat you? What beat you is the abstract knowledge embodied in that computer program. People put that knowledge there.

Our minds (like computer programs) are abstract, non-physical, but they influence physical entities.

Understanding abstractions is a necessary step to achieving AGI.  We need good explanations about how the abstract layers of mind work if we are to get closer to programming AGI. To create an AGI, we must program abstract knowledge into physical software.

 

Can AGI Evolve Artificially?

For AI to evolve into AGI, to think like humans but faster, we first have to fill the gaps in our understanding of how life emerged. We don’t yet know how inorganic material became organic; we don’t know how life forms became self-replicating. We need to fill in huge gaps before the process can be understood, then programmed.

 

There is another option for AGI to evolve: We could try to recreate the universe in an artificial simulator. For this we would have to be familiar with all the laws of physics.  We could then recreate our universe in a computer simulation according to those laws.

 

We cannot forget that, for evolution to happen, an entity must be able to solve problems; by solving problems it adapts to its environment. Taking this into account, modelling our own universe seems logical. But even if there is a way to produce life in a simulation, it may or may not be feasible given the amount of physical material the computation would need and the time available before the end of the universe.

 

If we keep filling AI with human knowledge, and if we increase speed and memory, will AI spontaneously become a person? No. Such a theory would be similar to the old theory of Lamarckism, which Darwin replaced with a better theory: evolution by natural selection.

 

Let’s fill in the gaps in our knowledge of how human intelligence has come into being before we think it can “just happen”.

 

AGI Progress So Far?

There is no fundamental difference between today’s computers and our original computers. Today’s computers are faster, have more memory, and are less error-prone, but they still follow the same philosophy.

Today’s AI cannot genuinely pass the Turing test, a test in which the AI tries to fool a human judge into believing it is human. The AI is given questions to test whether it can understand concepts as humans do but, as of yet, there is no understanding happening.

We can’t expect Siri to be a go-to companion any time soon, and neither can we expect AlphaZero to become a creative knowledge creator. We can’t expect any machine learning program to spontaneously evolve out of its finite realm to join us in our infinite realm. We must first expand the scope of our knowledge.

 

Conclusion

There are benefits to having a real AGI that we can imagine, and more beyond our imagination. Immortality comes to mind; so does populating the rest of the universe.

People and AGI together form a mind with an infinite repertoire of problem-solving potential. Once we understand the mind and program an AGI with this potential, it will be, by all definitions, a person. AGI will be able to understand concepts and create new knowledge independently. It could use that knowledge to help discover new, better problems, problems that, when solved, carry expanded potential.

 

AGIs will be universal knowledge creators and explainers, like us. And we will treat them like people.

 

I hope our motivation to program AGI doesn’t fade. It would be detrimental to us if we passed laws delaying AGI. Imagine what we might have discovered by now if the Inquisition hadn’t constrained science as it did hundreds of years ago.

 

Today’s computers don’t have ideas, but people do. Computers don’t comprehend meaning from words; they can’t interpret implications or subtle nuances in tone of voice. People can. It is possible to program an AGI, but “probably” having an AGI before a certain time is prophecy. Only after we more fully understand creativity and consciousness can we begin the process of programming them. Realizing that we have the ability to solve the challenging problems involved in programming an AGI, we can face our unpredictable future.

 

But . . . I may be wrong.

 


* A message to the downvoters

 

This worldview can be refined, but it needs your interaction. Therefore, the comments section is as important as the main paper. It will also give me a chance to correct errors and explain sections in more detail. This worldview is fallible, as are all worldviews. Criticism can make it stronger; what resists criticism will die.

The point of the Future Fund contest is to introduce the hosts to ideas they may not otherwise have thought through. Downvotes without explanation make it unlikely the Future Fund will notice worldviews “beyond the chess board”, which defeats the purpose. Downvotes could mean I’m wrong, but they could also mean I’m writing something you just don’t agree with, or that you are part of an authoritarian group that doesn’t allow conflicting worldviews (in other words, doesn’t allow error correction).

So if you downvote, bring an explanation. This will help distinguish rational criticism from the irrational.

Comments

downvotes are to get spam off the front page, not to refute the spam

[This comment is no longer endorsed by its author]

I have to admit I don't get it.  I mean, you can't just deny that probability estimates are a thing.  How do decision theories (or just decision mechanisms) work in a Fallibilist worldview?  What does it mean, technically, for a theory to become "less wrong" over time?  What are the mechanics (what changes in one's worldview) when we notice and eliminate an error in a theory?

Your description of infinite possibilities makes me think you don't understand the difference between "infinite" and "very large and not fully known".  And I wonder if you acknowledge that one's potential future experiences are NOT infinite, but are still very hard to predict and unknown in scope, and that Bayesian probabilities work just fine for it - include an assignment for "something else".   Bayesian probabilities are not true, they're personal estimates/assignments of future experiences.  And they're the best thing we have for making decisions.