Polytopos


Comments

Grokking illusionism

according to the story that your brain is telling, there is some phenomenology to it. But there isn't.

Doesn't this assume that we know what sort of thing phenomenal consciousness (qualia) is supposed to be, so that we can assert that the story the brain is telling us about qualia somehow fails to measure up to this independent standard of qualia-reality?

The trouble I have with this is that there is no such independent standard for what phenomenal blueness has to be in order to count as genuinely phenomenal. The only standard we have for identifying something as an instance of the kind qualia is to point to something occurring in our experience. Given this, it remains difficult to understand how the story the brain tells about qualia could fail to be the truth, and nothing but the truth, about qualia (given the physicalist assumption that all our experience can be exhaustively explained through the brain's activity).

I see blue, and pointing to the experience of this seeing is the only way of indicating what I mean when I say "there is a blue quale". So, to echo J_Thomas_Moros, any story the brain is telling that constitutes my experience of blueness would simply be the quale itself (not an illusion of one).

Why everything might have taken so long

For an in-depth argument that could be taken to support this point, I highly recommend Humankind: A Hopeful History by Rutger Bregman.

Philosophy: A Diseased Discipline

it generalises. Logic and probability and interpretation and theorisation and all that, are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience.

I find this claim interesting. I’m not entirely sure what you intend by the word “downstream”, but I will interpret it as saying that logic and probability are epistemically justified by neuroscience. In particular, I understand this to include the claim that a priori intuition unverified by neuroscience is not sufficient to justify mathematical and logical knowledge. If by "downstream" you have some other meaning in mind, please clarify. However, I will point out that you can't simply mean causally downstream, i.e., the claim that intuition is caused by brain stuff, because a merely causal link does not relate neuroscience to epistemology (I am happy to expand on this point if necessary, but I'll leave it for now).

So given my reading of what you wrote, the obvious question to ask is: do we have to know neuroscience to do mathematics rationally? This would be news to Bayes, who lived in the 18th century when there wasn’t much neuroscience to speak of. Your view implies that Bayes (or Euclid, for that matter) was epistemically unjustified in his mathematical reasoning because he didn’t understand the neural algorithms underlying his mathematical inferences.

If this is what you are claiming, I think it’s problematic on a number of levels. First, it faces a steep initial plausibility problem in that it implies mathematics as a field was unjustified for most of its thousands of years of history, until some research in empirical science validated it. That is of course possible, but I think most rationalists would balk at seriously claiming that Euclid didn't know anything about geometry because of his ignorance of cognitive algorithms.

But a second, deeper problem affects the claim even if one leaves off historical considerations and only looks at the present state of knowledge. Even today, when we do know a fair amount about the brain and cognitive mechanisms, the idea that math and logic are epistemically grounded in this knowledge is viciously circular. Any sophisticated empirical science relies on the validity of mathematical inference to establish its theories. You can’t use neuroscience to validate statistics when the validity of neuroscientific empirical methods themselves depends on the epistemic bona fides of statistics. With logic the case is even more obvious: an empirical science will rely on the validity of deductive inference in formulating its arguments (read any paper in any scientific journal). So there is no chance that the rules of logic will be ultimately justified through empirical research. Note that this isn't the same as saying we can't know anything without assuming the prior validity of math and logic. We might have lots of basic kinds of knowledge about tables and chairs and such, but we can't have sophisticated knowledge of the sort gained through rigorous scientific research, as this relies essentially on complex reasoning for its own justification.

An important caveat to this is that of course we can have fruitful empirical research into our cognitive biases. For example, the famous Wason selection task showed that humans in general are not very reliable at applying the logical rule of modus tollens in an abstract context. However, crucially, in order to reach this finding, Wason (and other researchers) had to assume that they themselves knew the right answer on the task; i.e., the cognitive science researchers assumed the a priori validity of the deductive inference rule based on their knowledge of formal logic. The same is true for Kahneman and Tversky’s studies of bias in the areas of statistics and probability.
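
To make the task concrete, here is a minimal sketch of the card version of the Wason task (my own Haskell illustration; the card set "EK47" and the helper names are assumptions for illustration, not anything from the original studies). It mechanically checks which visible faces could possibly conceal a counterexample to the rule "if a card has a vowel on one side, it has an even number on the other":

```haskell
-- Toy Wason selection task: cards have a letter on one side and a digit
-- on the other; the rule under test is "if vowel, then even number".
import Data.Char (isDigit, digitToInt)

isVowel, isEvenDigit :: Char -> Bool
isVowel c     = c `elem` "AEIOU"
isEvenDigit c = isDigit c && even (digitToInt c)

implies :: Bool -> Bool -> Bool
implies p q = not p || q

-- Does a particular (visible, hidden) pairing satisfy the rule?
ruleHolds :: Char -> Char -> Bool
ruleHolds v h = implies (isVowel v) (isEvenDigit h)
             && implies (isVowel h) (isEvenDigit v)

-- A card must be turned over exactly when some possible hidden face
-- would falsify the rule: the vowel (modus ponens) and the odd number
-- (modus tollens); the latter is the one most subjects miss.
mustTurn :: Char -> Bool
mustTurn v = any (not . ruleHolds v) hiddenCandidates
  where hiddenCandidates | isDigit v = ['A'..'Z']
                         | otherwise = ['0'..'9']

main :: IO ()
main = putStrLn (filter mustTurn "EK47")  -- prints "E7"
```

The point stands either way: deciding which answer counts as correct already presupposes the validity of the conditional-reasoning rules themselves; the code just applies them mechanically.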

In summary, I am wholeheartedly in favour of using empirical research to inform our epistemology (in the way that the cognitive biases literature does). But there is a big difference between this and the claim that epistemology doesn't need anything in addition to empirical science. This is simply not true. Mathematics is the clearest example of why this argument fails, but once one has accepted its failure in the case of mathematics, one can start to see how it might fail in other less obvious ways.

Book review: The Geography of Thought

This is a fascinating article about how the concept of originality differs in some Eastern cultures: https://aeon.co/essays/why-in-china-and-japan-a-copy-is-just-as-good-as-an-original

Living Metaphorically

An interesting contribution to this topic is this book by Hofstadter and Sander.

They explain thinking in terms of analogy, which as they use the term encompasses metaphor. The book is a mature, cognitive-sciencey articulation of many of the fun and loose ideas that Hofstadter first explored in G.E.B.

Philosophy: A Diseased Discipline

I'm curious how many people here think of rationalism as synonymous with something like Quinean Naturalism (or just naturalism/physicalism in general). It strikes me that naturalism/physicalism is a specific view one might come to hold on the basis of a rationalist approach to inquiry, but it should not be mistaken for rationalism itself. In particular, when it comes to investigating foundational issues in epistemology/ontology, a rationalist should not simply take it as a dogma that naturalism answers all those questions. Quine's "Epistemology Naturalized" is an instructive text because it actually attempts to produce a rational argument for approaching foundational philosophical issues naturalistically. This is something I haven't seen much of on LW; it usually seems to be taken as an assumed axiom with no argument.

The value of attempting to make the arguments for naturalized epistemology explicit is that they can then be critiqued and evaluated. As it happens, when one reads Quine's work on this and thinks carefully about it, it becomes pretty evident that it is problematic for various reasons, as many mainstream philosophers have attempted to make clear (e.g., the literature around the myth of the given).

I'd like to see more of that kind of foundational debate here, but maybe that's just because I've already been corrupted by the diseased discipline of philosophy ; )

Against Modal Logics

You might be interested to look at David Corfield's book Modal Homotopy Type Theory. In the chapter on modal logic, he shows how all the different variants of modal logic can be understood as monads/comonads. This allows us to understand modality in terms of "thinking in a context", where the context (possible worlds) can be given a rigorous meaning categorically and type-theoretically (using slice categories).
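
For something concrete to play with, here is a minimal Kripke-semantics sketch (my own toy Haskell example, not Corfield's categorical construction; the three-world frame is an assumption for illustration). It shows one precise sense in which "box" and "diamond" are evaluated relative to a context of accessible worlds:

```haskell
-- Toy Kripke semantics: propositions are predicates on worlds, and the
-- modal operators quantify over the worlds accessible from the current one.
type World = Int

worlds :: [World]
worlds = [1, 2, 3]                                  -- hypothetical frame

accessible :: World -> [World]
accessible w = filter (`elem` worlds) [w, w + 1]    -- each world sees itself and its successor

type Prop = World -> Bool

box, diamond :: Prop -> Prop
box p w     = all p (accessible w)   -- "necessarily p": p holds at every accessible world
diamond p w = any p (accessible w)   -- "possibly p": p holds at some accessible world

isEvenWorld :: Prop
isEvenWorld = even

main :: IO ()
main = do
  print (map (box isEvenWorld) worlds)      -- [False,False,False]
  print (map (diamond isEvenWorld) worlds)  -- [True,True,False]
```

Changing the accessibility relation changes which modal principles hold, which is one down-to-earth way of seeing modality as "thinking in a context".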

The Power to Teach Concepts Better

I really enjoyed this post. It was fun to read and really drove home the point about starting with examples. I also thought it was helpful that it didn't just say "teach by example". I feel that simplistic idea is all too common and often leads to bad teaching, where example after example is given with no clear definitions or high-level explanations. However, this article emphasized how one needs to build on the example to connect it with abstract ideas. This creates a bridge between what we already understand and what we are learning.

As I was thinking about this to write this review, I tried to think of cases where it makes more sense to explain the abstract thing first and then give examples. I had great difficulty coming up with any. The few possible candidates I could think of came from pure math, and even there I wonder if it wouldn't still help to start with examples.

The most abstract subject I've ever studied is category theory. Recently I was learning about adjoint functors, and here indeed the abstract definition makes sense entirely independently of any examples. However, having learned the definition, one can't really do anything with adjoint functors until one has seen it in some examples. So this might be a case where the abstraction-example-abstraction order of explanation makes sense. On the other hand, once I learned about the free-forgetful adjunction, I thought that would have been a good example to start with to build intuition before introducing the abstract definition. I also realized that my favorite teachers of the subject still use a lot of examples, like Bartosz Milewski, who comes at category theory from the perspective of a programmer.
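
For what it's worth, the free-forgetful adjunction can even be demoed in a few lines of Haskell (a sketch of my own, specialised to monoids; the names extend and restrict are labels I made up). The free monoid on a type x is the list type [x], and the adjunction says that monoid homomorphisms out of [x] correspond exactly to plain functions out of x:

```haskell
-- Free-forgetful adjunction for monoids: Hom_Mon([x], m) ~ Hom_Set(x, U m).
import Data.Monoid (Sum (..))

-- One direction of the bijection: extend a plain function x -> m to a
-- monoid homomorphism [x] -> m (essentially foldMap from the standard library).
extend :: Monoid m => (x -> m) -> ([x] -> m)
extend f = mconcat . map f

-- The other direction: restrict a homomorphism along the unit of the
-- adjunction, i.e. precompose with the singleton-list embedding.
restrict :: ([x] -> m) -> (x -> m)
restrict h = h . (: [])

main :: IO ()
main = do
  print (getSum (extend Sum [1, 2, 3, 4 :: Int]))   -- 10: summing a list is the extension of Sum
  print (getSum (restrict (extend Sum) (5 :: Int))) -- 5: restricting recovers Sum on generators
```

Seeing the bijection run on a familiar case like summing a list is exactly the kind of example-first bridge the post advocates.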

Learning to program is also a good example where, in advance, one might think it would make sense to learn a bunch of abstractions first. However, in practice, one learns to code by example and only later, after having mastered some examples, learns the principles behind them.

The Power to Teach Concepts Better

Excellent article, thank you. I particularly enjoyed your images and diagrams. To me concept diagrams are another superpower for explaining things. Have you written anything about that?

The Power to Teach Concepts Better

Personally, I thought "mind-hanger" was ok. I got an image of a coat-hanger for the mind. You could even include that image explicitly in your concept mapping pictures.

Some other ideas that stick with the coat-hanger variant would be "idea-hanger" or "concept-hanger".

Another term you might consider is scaffolding. This also has a strong concrete image of construction scaffolding, but the metaphor lends itself to the idea of building on top of the skeletal example, just as we start a building project with a scaffold and then build the real building around it. Often the scaffold is removed at the end, which can also happen with abstraction, where we can throw away the pedagogic examples once we've mastered the bigger idea. We don't really build anything on top of a coat hanger (nor a ship's anchor).
