Comments

Very interesting. I'm an experimental psychologist by training, and I found this piece to be extremely well-written and well-researched. However, I'm not sure I can agree with the framing of your hypothesis.

There is a pervasive pattern in cognitive science (AI and cognitive psychology in particular) of relying on a naïve Cartesian worldview. In other words, Descartes' formulation of the Cogito, the thinking self, is the implicit paradigm on which research is conducted.

In this worldview, the "self" is taken to be an irreducible whole; Descartes built his entire metaphysical system on the supposedly firm bedrock of Cogito Ergo Sum (I think, therefore I am). Later thinkers, including Kant, Hegel, and Nietzsche, would find many problems with the Cartesian formulation, and Nietzsche in particular would be a significant influence on psychoanalysis.

The research of Bargh and colleagues that you have referenced here amounts to a recovery of psychoanalysis, one that is also occurring elsewhere in neuroscience (see the work of Mark Solms), although within more empirically rigorous frameworks. Psychoanalysis was, in part, an exploration of the hidden processes that lie outside consciousness, either because they are components of the self that simply operate outside awareness, or because they have been rejected from consciousness for whatever reason.

This is the line of reasoning I thought you were going to follow. Your conclusion, however, was different from the one I was expecting. After noting that some parts of cognition are not available to consciousness, you did not argue that these processes are underrepresented in cognitive science. Instead, you argued that normative cognition (which is generally taken to be available to consciousness) is underrepresented in cognitive science. I think this point is correct, but perhaps not for the reasons you've given. I found the idea that descriptive cognition cannot map normative cognition, and vice versa, to be a little confused.

I'm sure you're aware of the early history of cognitive science, but if anything it was overly focused on normative cognition. Early attempts at AI, such as the work of John McCarthy, used formal logic as the means of representation. It was only after this failed spectacularly that many engineers were open to the idea that other forms of representation would be required.

The current neglect of normative models is more a function of the historical flow of research than some psychological limitation of researchers. There's a plausible argument that some cognitive processes have been neglected due to their lack of availability to consciousness, but I'm not sure this can be applied to normative cognition. Rather, the spectacular success of learning techniques, combined with the earlier spectacular failure of reasoning techniques, has led to descriptive cognition being overrated in the research literature.

I agree. Myths are a function of how the mind stores (some types of) knowledge, rather than just silly stories. I would be interested to hear a "rational" account of poetry and art, as I think myth has more in common with these than with scientific knowledge.

The development of applied rationality was a historical phenomenon, which mostly originated in Greece (with some proto-rationalists in other cultures). One aspect of rationality is differentiating things from each other, and then judging between them. In order to employ judgement, one must have different options to judge between. This is why proto-rationality often arises in hermeneutic traditions, where individuals attempt to judge between possible interpretations of religious texts (see India, for example).

In pre-rational societies, myth often operates as an undifferentiated amalgam of various types of knowledge. It acts as a moral system, an educational system, a political system, a military system, and more. In Islam, which traditionally did not have a separation of church and state, politics, culture, and religion are still almost completely undifferentiated; this was also largely the case in Rabbinic Judaism (minus the politics, for obvious reasons).

I think that in the future, myths will continue to serve this purpose: integrating various domains of knowledge and culture. Arguably the rationalist community, the Enlightenment tradition, and the philosophical tradition are each engaged in a myth. Nietzsche would call this optimistic Socratism: the optimism that increased knowledge and consciousness will always lead to a better world, and, more primordially, that the world is ultimately intelligible to the human mind in some deep sense.

Do we need more academics who agree with the status quo? If you reframe your point as "academia selects for originality," it wouldn't seem such a bad thing. Research requires applied creativity: creating new ideas that are practically useful. A researcher who concludes that the existing solution to a problem is already the best available is only marginally useful.

The debate between Chalmers and Dennett is practically useful because it lays out the boundaries of the dispute and explores both sides of the argument. Chalmers is naturally more of a contrarian and Dennett more of a small-c conservative; people fit into these natural categories without too much motivation from institutional incentives.

The creative process can be split into idea generation and idea evaluation. Some people are good at generating wacky, out-there ideas, and others are better at judging the quality of those ideas. As de Bono has argued, it's best to maintain some hygiene between the two, given the different kinds of processing required. I think there's a family resemblance here with exploration-exploitation trade-offs in ML.
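
Since the analogy to exploration-exploitation came up, here's a minimal sketch of the ML side of it, using an epsilon-greedy multi-armed bandit. Everything in it (the arm means, the epsilon value, the Gaussian reward noise) is purely illustrative rather than anything drawn from the post:

```python
# Toy epsilon-greedy bandit: a loose analogue of the generation/evaluation split.
# "Exploration" (picking an arm at random) stands in for unconstrained idea generation;
# "exploitation" (picking the best-estimated arm) stands in for evaluation/judgement.
# All names and numbers here are illustrative only.
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=1000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)       # pulls per arm
    estimates = [0.0] * len(true_means)  # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:                       # explore: try any arm
            arm = rng.randrange(len(true_means))
        else:                                            # exploit: best current estimate
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)         # noisy reward from the chosen arm
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return estimates, total_reward

# Example: three "ideas" of differing quality; epsilon controls how much we keep generating.
print(epsilon_greedy([0.2, 0.5, 0.8]))
```

The epsilon parameter plays roughly the role of the generation/evaluation balance: too little exploration and you converge on a mediocre idea; too much and you never cash in on your evaluations.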

TL;DR I don't think that incentives are the only constraint faced by academia. It's also difficult for individual people to be the generators and evaluators of their own ideas, and both processes are necessary.

Do rationalist communities undervalue idea generation because of their focus on rational judgement?