This is my first post on LessWrong, and I apologize for the length. I thought someone here might be interested in reading or critiquing it.
The blog post is my attempt to explain why we do not yet have AGI and to suggest a possible short path for getting there. The ideas are extrapolated from an interpretation of Karl Popper and David Miller's critique of inductive probability. My view is that a world model composed of formal statements (theories) can only be constrained by observation (including inductive generalizations of observations); theories cannot be supported by evidence, they can only be consistent with it or not.

I try to clearly define two categories of knowledge with their unique properties, derive some principles for creating an explanatory world model, and share a toy example of how an LLM may be used to generate a formal explanatory world model (which I believe will be the foundation for AGI). 

I am writing from the perspective of a physician with some background in philosophy and physics, not a software engineer. I will respond to any serious feedback. The text is a draft and I do intend to fix the typos. 

TAG (4mo):

> whereas assertive learning uses a binary mechanism (e.g. Is a hypothesis compatible with the data or not?)

Not necessarily -- in Bayesian reasoning you can reduce the probability of a hypothesis in response to it being inconsistent with some data, without reducing it to zero. Binary disconfirmation can be seen as a special case of probabilistic disconfirmation ... just as bivalent logic can be seen as a special case of probabilistic logic.
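A minimal sketch of that special-case relation, assuming nothing beyond Bayes' rule (the numbers are illustrative, not from the thread):

```python
# Illustrative sketch: a Bayesian update can lower a hypothesis's probability
# without zeroing it, and a likelihood of exactly 0 recovers binary falsification.

def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Posterior P(H|D) = P(D|H) * P(H) / P(D)."""
    return likelihood * prior / evidence_prob

prior = 0.5           # P(H): initial credence in hypothesis H
p_d_given_h = 0.05    # P(D|H): data is unlikely, but not impossible, under H
p_d_given_not_h = 0.60
p_d = p_d_given_h * prior + p_d_given_not_h * (1 - prior)  # P(D) by total probability

# Probabilistic disconfirmation: credence drops but stays positive.
print(bayes_update(prior, p_d_given_h, p_d))   # ~0.077

# Binary disconfirmation as the special case P(D|H) = 0.
p_d_zero = 0.0 * prior + p_d_given_not_h * (1 - prior)
print(bayes_update(prior, 0.0, p_d_zero))      # 0.0 -- H is eliminated outright
```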

TAG (5mo):

> The knowledge present in modern AI models is created by induction

How do AI models work by induction when it's impossible?

A proof of the impossibility of inductive probability

Thanks for the question.

In "A proof of the impossibility of inductive probability", Popper and Miller demonstrate that the truth of a theory cannot be supported by observation.

The knowledge in AI models is created through a kind of induction: the knowledge is a generalization of observations. But AI models lack the discrete theory statements assumed in Bayesian reasoning and in the Popper/Miller critique; their knowledge structure is continuous.
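To make that contrast concrete, here is a toy sketch (my illustration; the post specifies no such code) of discrete theory statements being updated as units versus a continuous parameter being nudged by each observation:

```python
# Discrete: an explicit set of candidate theories, each updated as a unit.
theories = {"all swans are white": 0.5, "some swans are black": 0.5}
observation_likelihood = {"all swans are white": 0.0,   # a black swan is seen
                          "some swans are black": 1.0}
norm = sum(p * observation_likelihood[t] for t, p in theories.items())
posterior = {t: p * observation_likelihood[t] / norm for t, p in theories.items()}
print(posterior)  # the refuted theory drops out; the survivor remains a statement

# Continuous: no theory statements at all, just a parameter nudged by each example.
w = 0.0  # a single "weight" standing in for a network's parameters
for x, y in [(1.0, 1.9), (2.0, 4.1), (3.0, 6.0)]:  # observations of y ~ 2x
    w -= 0.05 * 2 * (w * x - y) * x  # one gradient step on squared error
print(w)  # ~1.89: a number that generalizes the data but asserts no proposition
```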

Therefore, your comment does not point to a contradiction. It does, however, point to the problem I am trying to express: that modern AI methods cannot produce a coherent deductive knowledge structure. I try to communicate an alternative method.

TAG (4mo):

> In "A proof of the impossibility of inductive probability", Popper and Miller demonstrate that the truth of a theory cannot be supported by observation

Whereas its various rebuttals demonstrate the opposite.

Note that inductionism means different things in different contexts.

I have not found any persuasive rebuttals of Popper's argument. If you have found one that convinced you, I would be interested to hear it.

TAG (4mo):

Probabilistic induction clearly works, since you can mechanise it (i.e. write simple code to perform it).
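One way to see this is to actually write that simple code. A minimal sketch, assuming a uniform Beta(1, 1) prior over a binary outcome (Laplace's rule of succession; the thread names no particular algorithm):

```python
# Mechanised probabilistic induction via Laplace's rule of succession:
# after s successes in n trials, predict the next success with (s + 1) / (n + 2).

def rule_of_succession(observations: list[int]) -> float:
    """P(next = 1) under a uniform prior on the success rate."""
    s, n = sum(observations), len(observations)
    return (s + 1) / (n + 2)

print(rule_of_succession([]))               # 0.5   -- no data, uniform prior
print(rule_of_succession([1, 1, 1, 1]))     # ~0.83 -- repeated successes raise credence
print(rule_of_succession([1, 1, 1, 1, 0]))  # ~0.71 -- one failure lowers it, but not to 0
```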

But your concern may well be with the other kind of induction: induction as a source of hypotheses.