I found an essay written by David Marr called "Artificial Intelligence -- a personal view" that I thought was fairly insightful. Marr first discusses how information processing problems are generally solved:

The solution to an information processing problem divides naturally into two parts. In the first, the underlying nature of a particular computation is characterized, and its basis in the physical world is understood. One can think of this part as an abstract formulation of what is being computed and why, and I shall refer to it as the "theory" of a computation. The second part consists of particular algorithms for implementing a computation, and so it specifies how.

This anticipates Marr's well-known three levels of analysis.

Next, Marr draws a distinction between Type 1 and Type 2 information processing problems. A Type 1 problem has a solution that divides naturally along the lines mentioned above: first one formulates the computational theory behind it, then one devises an algorithm to implement the computation. Marr proposes, however, that there is a class of problems that doesn't fit this description:

The fly in the ointment is that while many problems of biological information processing have a Type 1 theory, there is no reason why they should all have. This can happen when a problem is solved by the simultaneous action of a considerable number of processes, whose interaction is its own simplest description, and I shall refer to such a situation as a Type 2 theory. One promising candidate for a Type 2 theory is the problem of predicting how a protein will fold. A large number of influences act on a large polypeptide chain as it flaps and flails in a medium. At each moment only a few of the possible interactions will be important, but the importance of those few is decisive. Attempts to construct a simplified theory must ignore some interactions; but if most interactions are crucial at some stage during the folding, a simplified theory will prove inadequate.

More discussion about Type 1 and Type 2 problems follows, but I'm not going to summarize it. It is well worth reading, however. I did think this critique of the GOFAI program was pretty sharp for having been formulated in 1977:

For very advanced problems like story-understanding, current research is often purely exploratory. That is to say, in these areas our knowledge is so poor that we cannot even begin to formulate the appropriate questions, let alone solve them.

Most of the history of A.I. (now fully 16 years old) has consisted of exploratory studies. Some of the best-known are Slagle's [24] symbolic integration program, Weizenbaum's [30] Eliza program, Evans' [4] analogy program, Raphael's [19] SIR, Quillian's [18] semantic nets and Winograd's [32] Shrdlu. All of these programs have (in retrospect) the property that they are either too simple to be interesting Type 1 theories, or very complex yet perform too poorly to be taken seriously as a Type 2 theory.

And yet many things have been learnt from these experiences--mostly negative things (the first 20 obvious ideas about how intelligence might work are too simple or wrong)... The mistakes made in the field lay not in having carried out such studies--they formed an essential part of its development--but consisted mainly in failures of judgement about their value, since it is now clear that few of the early studies themselves formulated any solvable problems.

If we accept this taxonomy, then where does Friendliness fit in? My hunch is that it's a Type 2 problem. If this is so, what Type 1 problems can be focused on in the present?


I hope there are soon some comments on this question. What do AI people think of the analysis, Marr's and nhamann's? Is the history accurate? Is there a reason for ignoring it?