I have found that comprehensive overviews of artificial intelligence (Wikipedia, the SEP article, Russell and Norvig's AI: A Modern Approach) discuss symbolic AI and statistical AI in their historical context: the former preceding the latter, their corresponding limitations, and so on. But I have found it really difficult to disentangle this from the question of whether the divide / cooperation between these paradigms is about the implementation or engineering of intelligent agents, or whether it gets at something more fundamental about the space of possible minds (I use this term to be as broad as possible, covering anything we would label a mind, regardless of ontogeny, architecture, physical components, etc.).

I have given a list of questions below, but some of them are mutually exclusive, i.e. some answers to one question make other questions irrelevant. The fact that I have a list of questions at all demonstrates how difficult I find it to see what the boundaries of the discussion are supposed to be. Basically, I haven't been able to find anything that begins to answer the title question. So I wouldn't expect any comment to answer each of my subquestions one by one, but rather to treat them as an expression of my confusion and maybe point me in some good directions. Immense thanks in advance; this has been one of those questions strangling me for a while now.

 

  • While trying to concern oneself as little as possible with the implementation or engineering of minds, what is the relationship between symbolic AI, connectionism, and the design space of minds?
    • When we talk about approaches to AI “failing”, is this a matter of practicality / our own limitations? E.g. without GPUs, in some sense “deep learning fails”, and by analogy symbolic AI’s “failure” wouldn’t be indicative of the actual structure of the space of possible minds.
    • Or is it something more meaningful? I.e. the “failure of symbolic AI in favor of statistical methods” happened because ‘symbolic AI’ simply doesn’t map onto the design space of minds.

 

  1. Are symbolic AI and machine learning merely approaches to designing an intelligent system? I.e. there are regions in the design space of minds identifiable as ‘symbolic’ and others as ‘connectionist/ML’. (The toy sketch after this list tries to make this contrast concrete.)
  2. Do all minds need both symbolic components and connectionist components? And if so, what about the human brain? The biological neural network / artificial neural network comparison is largely analogical rather than rigorous, so does the human brain have symbolic and connectionist modules?
  3. Regardless of research direction / engineering application, what are the structure / shape / axes of the design space of minds? Does symbolic AI describe the whole space, or just some part of it? And what about connectionism?
  4. If it is the case that symbolic AI does talk about architecture, then
    1. If symbolic and connectionist are completely separable (i.e. some regions in the design space of minds are entirely one or the other), then what could some of the other regions be?
    2. If symbolic and connectionist aren’t completely separable (i.e. all minds have some connectionist components and some symbolic components), then are there other necessary components? Or would another category of module architectures be an addition on top of the ‘core’ symbolic + connectionist modules that not every mind in the design space of minds needs?
  5. Is ‘symbolic AI’ simply not interested in design, serving instead to explain high-level abstractions? I.e. symbolic AI describes what/how any mind in the design space of minds is thinking, not what the architecture of some particular mind is?
    1. As an extension, if this is the case, is symbolic AI a level above architecture, so that two different mind architectures could be isomorphic at that level, “thinking in the same way”, and therefore be the same mind, merely different implementations?
      1. This would be one abstraction layer above the way some people consider it irrelevant whether a human mind is running on a physical brain, a computer simulating the physics/chemistry of a human brain, or a computer running the neural networks embodied in a brain.
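
To make the symbolic/connectionist contrast in question 1 concrete, here is a minimal, purely illustrative sketch (not taken from any of the overviews above): the same toy classification behaviour produced once by an explicit hand-written rule and once by a single perceptron that learns the boundary from labelled examples. The task, names, and training loop are all hypothetical illustrations, not claims about how either paradigm is implemented in practice.

```python
# Toy illustration: one behaviour, two "paradigms".
import random

# Symbolic approach: the rule is written down explicitly and is human-readable.
def symbolic_above_line(x, y):
    """Return True iff the point (x, y) lies above the line y = x."""
    return y > x

# Connectionist-style approach: a single perceptron learns (approximately)
# the same boundary from labelled examples instead of being told the rule.
def train_perceptron(examples, epochs=50, lr=0.1):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x, y), label in examples:
            pred = 1 if w1 * x + w2 * y + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += lr * err * x
            w2 += lr * err * y
            b += lr * err
    return w1, w2, b

random.seed(0)
data = []
for _ in range(200):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append(((x, y), int(symbolic_above_line(x, y))))

w1, w2, b = train_perceptron(data)

# Both now classify a point well above the line the same way, but only the
# symbolic version contains anything you could point to as an explicit rule;
# the perceptron's "knowledge" is smeared across three numbers.
px, py = 0.2, 0.7
print(symbolic_above_line(px, py))   # True
print(w1 * px + w2 * py + b > 0)     # almost certainly True as well
```

Whether that difference is a deep fact about the design space of minds or just a difference in engineering style is exactly what the questions above are asking.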

If the people involved are good naturalists, they will agree that both the symbolic and the connectionist approaches are making claims about high-level descriptions that can apply to things made of atoms. Jerry Fodor, famously a proponent of the view that brains have a "language of thought," would still say that the language of thought is a high-level description of collections of low-level things like atoms bumping into other atoms.

My point is that arguments about what high-level descriptions are useful are also arguments about what things "are." When a way of thinking about the world is powerful enough, we call its building blocks real.

I would still draw a distinction between describing human minds and trying to build artificial ones here. You might have different opinions about how useful different ideas are for the two tasks; someone will at some point say "we didn't build airplanes that flap their wings." I think a lot of the "old guard" of AI researchers have picked sides in this battle over the years, and the heavy-symbolicist side is in disrepute, but a pretty wide spectrum of views, from "mostly symbolic reasoning with some learned components" to "all learned", is represented.

I think there's plenty of machine learning that doesn't look like connectionism. SVMs were successful for a long time, and they're not very neuromorphic. I would also expect ML that extracts the maximum value from TPUs to be denser / more nonlocal than actual brains, and probably to violate the analogy to brains in some other ways too.
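
As a concrete illustration of that last point, here is a minimal sketch (assuming scikit-learn is available; the data and parameters are just an example): a kernel SVM solving an XOR-style task that a single linear, neuron-like unit cannot, with no layers, units, or gradient descent anywhere in the picture.

```python
# Sketch: a non-connectionist learner (RBF-kernel SVM) on XOR-style data.
from sklearn.svm import SVC

# XOR labels: not linearly separable, so a single perceptron fails on this.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# The decision rule lives in kernel space and is defined by support vectors
# and their coefficients, not by layers of weighted units.
clf = SVC(kernel="rbf", gamma=2.0)
clf.fit(X, y)
print(clf.predict([[0, 1], [1, 1]]))  # expected: [1 0]
```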