Amongst other things, I am currently a first-year Arts student at Melbourne University, with Philosophy among my subjects. At the moment, I'm in several discussions with other students and my lecturers about various matters.

I have naturally read the material here, but am still not sure how to act on three questions.

1: I've been arguing out the question of Foundationalism vs. Coherentism vs. other similarly basic methods of justifying knowledge (e.g. infinitism, pragmatism). The discussion left off with two problems for Foundationalism.

a: The Evil Demon argument, particularly the problem of memory. When following any piece of reasoning, an Evil Demon could theoretically fool my reason into thinking it had reasoned correctly when it hadn't, or fool my memory into thinking I'd reasoned properly before with reasoning I'd never done. Since a Foundationalist must either be a weak Foundationalist (and run into severe problems) or discard all but self-evident and incorrigible assumptions (of which memory is not one), I'm stuffed.

(Then again, it has been argued, if a Coherentist were deceived by an evil demon they could be deceived into thinking data coheres when it doesn't. Since their belief rests upon the assumption that their beliefs cohere, should they not discard it if they can't know whether it coheres or not? The "seems to cohere" formulation has its own problems.)

b: Even if that's discarded, there is still the problem of how Strong Foundationalist beliefs are justified within a Strong Foundationalist system. Strong Foundationalism is neither self-evident nor incorrigible, after all.

I know myself well enough to know I have an unusually strong (even for a non-rationalist) irrational emotive bias in favour of Foundationalism, and even I begin to suspect I've lost the argument (though some people arguing on my side would disagree). Just to confirm, though: have I lost? What should I do now, either way?

2: What should I say on the question of skepticism (on which so far I've technically said nothing)? If I remember correctly, Eliezer has spoken of philosophy as being about how to act in the world, but I'm arguing with somebody who maintains as an axiom that the purpose of Philosophy is to find truth, whether useful or useless, in whatever area is under discussion.

3: Finally, how do I speak intelligently on the Contextualist vs. Invariantist problem? I can see in broad terms that it is an empirical problem and therefore not part of abstract philosophy, but that isn't the same thing as having an answer. It would be good to know where to look up enough neuroscience to at least make an intelligent contribution to the discussion.


You need to clarify your intentions/success criteria. :) Here's my What Actually Happened technique to the rescue:

(a) You argued with some (they seem) conventional philosophers on various matters of epistemology.
(b) You asked LessWrong-type philosophers (presumably having little overlap with the aforementioned conventional philosophers) how to do epistemology.
(c) You outlined some of the conventional philosophy arguments on the aforementioned epistemological matters.
(d) You asked for neuroscience pointers to be able to contribute intelligently.
(e) Most of the responses here used LessWrong philosophy counterarguments against arguments you outlined.
(f) You gave possible conventional philosophy countercounterarguments.

This is largely a failure of communication because the counterarguers here are playing the game of LessWrong philosophy, while you've played, in response, the game of conventional philosophy, and the games have very different win conditions that lead you to play past each other. From skimming over the thread, I am as usual most inclined to agree with Eliezer: Epistemology is a domain of philosophy, but conventional philosophers are mostly not the best at—or necessarily t... (read more)

3 · BerryPick6 · 11y
This is an excellent post.
-1 · Carinthium · 11y
Actually, although I do care about status I am trying to actually consider the truth of the issue primarily. I don't seek truth in this area for any practical purpose, but because I want to know.

Give up on justifying answers and just try to figure out what the answers really actually are, i.e., are you really actually inside an Evil Demon or not. Once you learn to quantify the reasoning involved using math, the justification thing will seem much more straightforward when you eventually return to it. Meanwhile you're asking the wrong question. Real epistemology is about finding correct answers, not justifying them to philosophers.

0 · Carinthium · 11y
Judging from things you have said in the past, you are of the view that philosophy is about how to act in the world. Just to make it clear, the discussion is about what is true in the topic area, whether useful or not. Without a justification, I cannot rationally believe in the truth of the senses. The Foundationalists have argued that probability is off the table because it is either a subjective feeling or an estimation of empirical evidence. Subjective feelings do not make a proper basis for justification, and if probability is based on empirical evidence and empirical evidence is based on probability, it doesn't work. The Coherentists conceded on probability and moved on to using "tenability" (i.e., believing x provisionally) to justify empirical evidence, for fear of accusations of direct circularity. I don't see any way out of the metaphorical vicious circle: a conception of probability that gives a role to empirical data cannot be used to justify empirical data.

Without a justification, I cannot rationally believe in the truth of the senses.

Yeah you can. Like, are you wearing socks? Yes, you're wearing socks. People were capable of this for ages before philosophy. That's not about what's useful, it's about what's true. How to justify it is a way more complex issue. But if you lose sight of the fact that you are really actually in real life wearing socks, and reminding you of this doesn't help, you may be beyond my ability to rescue by simple reminders. I guess you could read "The Simple Truth", "Highly Advanced Epistemology 101 for Beginners", and if that's not enough the rest of the Sequences.

2 · Carinthium · 11y
I don't really see the relevance of "The Simple Truth" to this discussion besides its criticism of Coherentism. Next I read "The Useful Idea of Truth" and basically interpreted it as follows: the refutation of subjectivism is that experimental predictions are determined by belief, while experimental results are determined by reality. (Edit: Your discussion of the idea of 'post-utopian' could be considered useful. I'm guessing you would question the way the term justified is being referred to. The Foundationalists and Coherentists each have their own idea of what it means to be justified. In the debate, the Foundationalists provisionally define it as what must be true in any possible universe, plus what can be rationally inferred from such without any other starting assumptions ("rationally" meaning all rules proven to work based on truths in the former category). The Coherentists define justification according to their web of beliefs. Both are arguing about which side has good reasoning.) This is clearly circular. You did solve the problem of doubting the senses alone by reference to the difference between experimental predictions and results. That does not solve the problem of doubting induction, doubting the principle of probability, or doubting memory. As for Tyrrell's recommendation of "Where recursive justification hits bottom", in it you appear to me to be a Coherentist. However, the article basically appeals to "how best to achieve things in the world". In the discussion that started it all, we had all agreed to focus on what was true about the subject matter under debate, whether useful or not. I'll keep going, but I don't see anything else that might be relevant.
2 · Carinthium · 11y
Going to need some time to go through them- after I have, I'll come back to you with a new reply.
0 · Tyrrell_McAllister · 11y
I would also recommend "Where recursive justification hits bottom". Maybe start with that one, because it is shorter.

Externalism is always the answer! Accept that some unlucky people who are in sceptical scenarios are doomed; but that doesn't mean that you, who are not in a sceptical scenario, are doomed too, even though the two situations are subjectively indistinguishable.

2 · Carinthium · 11y
If I said that I would be rightly laughed at. How can I rule it out if it's subjectively indistinguishable?
2 · AlexSchell · 11y
You might think that you'd be laughed at, but actually externalism about evidence and knowledge is not an uncommon view in philosophy. Reliabilism, for instance, has it that whether or not you know something is a function of the objective reliability of your perceptual faculties and not merely of their input to your conscious experience. Timothy Williamson has also defended externalism about evidence (in Knowledge and its Limits I think).
1 · Carinthium · 11y
I would be laughed at if I made the claim with merely the arguments you gave. I've never seen a decent argument for externalism- all of the ones I have seen are circular in one way or another. I'll look up your sources, but I don't hold out much hope.
0 · AlexSchell · 11y
I didn't give any arguments -- you're confusing me with Larks. Also, my providing sources is not to be understood as endorsing externalism. I'm not sure about it.
0 · Carinthium · 11y
Sorry about that. I'll check it out.
3 · Larks · 11y
Nor was I in fact making any arguments - I was simply stating the position. It's been a few years since I've studied epistemology, so I wouldn't trust myself to do so. SEP is normally a good bet, and I seem to recall enjoying Nozick (Philosophical Explanations) and the Thermometer Model of Knowledge. I don't recall being convinced by any of the Externalist models I studied (Relevant Possible Alternatives, Tracking, Reliabilism, Causal and Defeasibility accounts) but I think something in that ballpark is a good idea. Externalism has been, in general, a very successful philosophical project, in a variety of areas (e.g. content externalism). Also, I hate to say it, but I think you would be better off ignoring everything that has been said on this thread. LW is good for many things, but its appreciation of academic philosophy is frankly infantile.
0 · torekp · 11y
Part of the point of externalism is to change the question -- although it's useful to note that to the extent the original question was framed in terms of "knowledge", the question hasn't entirely changed. So, you can't rule the skeptical scenario out, but you don't need to. That sub-question is being abandoned, or at least severely demoted. I second Larks' recommendation, in another comment, of Nozick's Philosophical Explanations. You can probably google up a summary or review to get a taste.
0 · Carinthium · 11y
All the arguments for changing the question seem to be either pragmatist arguments (pragmatism does not correlate with truth in any event) or to basically amount to "take the existence of the world on faith" (which is no more useful than taking anything else on faith).

Warning: I am not a philosophy student and haven't the slightest clue what any of your terms mean. That said, I can still answer your questions.

1) Occam's Razor to the rescue! If you distribute your priors according to complexity and update on evidence using Bayes' Theorem, then you're entirely done. There's nothing else you can do. Sure, if you're unlucky then you'll get very wrong beliefs, but what are the odds of a demon messing with your observations? Pretty low, compared to the much simpler explanation that what you think you see correlates well to th... (read more)
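The "complexity prior plus Bayes" recipe above can be made concrete. Below is a minimal Python sketch; the description lengths and likelihoods are invented for illustration (real program-length complexities are not computable like this), so treat it as a toy model of why the demon hypothesis starts with long odds against it, not as a serious calculation.

```python
from fractions import Fraction

# Toy complexity prior: P(h) proportional to 2^(-description_length).
# The description lengths are made-up illustrative numbers.
description_length = {
    "ordinary world": 10,  # short program: the senses roughly track reality
    "evil demon": 40,      # long program: a deceiver simulating all of the above
}
prior = {h: Fraction(1, 2**k) for h, k in description_length.items()}
total = sum(prior.values())
prior = {h: p / total for h, p in prior.items()}

# Likelihood of an everyday observation under each hypothesis. The demon
# could have shown us anything, so it predicts this observation less sharply.
likelihood = {
    "ordinary world": Fraction(99, 100),
    "evil demon": Fraction(1, 2),
}

# Bayes' Theorem: posterior is proportional to prior times likelihood.
unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

print(float(posterior["evil demon"]))      # vanishingly small
print(float(posterior["ordinary world"]))  # nearly 1
```

Note that the demon is never ruled out entirely; it just pays a complexity tax of roughly 2^-30 in prior odds, which ordinary evidence never lets it recover.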

6 · [anonymous] · 11y
Ok, response here from somebody who has studied philosophy. I disagree with a lot of what DSherron said, but on one point we agree - don't get a philosophy degree. Take some electives, sure - that'll give you an introduction to the field - but after that there's absolutely no reason to pay for a philosophy degree. If you're interested in it, you can learn just as much by reading in your spare time for FREE. I regret my philosophy degree. So, now that that's out of the way: philosophy isn't useless. In fact, at its more useful end it blurs pretty seamlessly into mathematics. It's also relevant to cognitive science, and in fact science in general. The only time philosophy is useless is when it isn't being used to do anything. So, sure, pure philosophy is useless, but that's like saying "pure rationality is useless". We use rationality in combination with every other discipline; that's the point of rationality.

As for the OP's questions:

1. DSherron suggests following the method of the 14th-century philosopher William of Ockham, but I don't think that's relevant to the question. As far as I can tell, ALL justificatory systems suffer from the Münchhausen trilemma. Given that, Foundationalism and Coherentism seem to me to be pretty much equivalent. You wouldn't pick incoherent axioms as your foundations, and conversely any coherent system of justifications should be decomposable into an orthogonal set of fundamental axioms and theorems derived thereof. Maybe there's something I'm missing, though.

2. DSherron's point is a good one. It was first formalised by the philosopher-mathematician Leibniz, who proposed the principle of the Identity of Indiscernibles.

3. DSherron suggests that the LW sequence "A Human's Guide to Words" is relevant here. Since that sequence is basically a huge discussion of the philosophy of language, and makes dozens of philosophical arguments aimed at correcting philosophical errors, I agree that it is a useful resource.
1 · Carinthium · 11y
I'm doing a philosophy degree for two reasons. The first is that I enjoy philosophy (and a philosophy degree gives me plenty of opportunities to discuss it with others). The second is that Philosophy is my best prospect of getting the marks I need to get into a Law course. Both of these are fundamentally pragmatic.

1: Any Coherentist system could be remade as a Weak Foundationalist system, but the Weak Foundationalist would be asked why they give their starting axioms special privileges (hence both sides of my discussion have dissed on them massively). The Coherentists in the argument have gone to great pains to say that "consistency" and "coherence" are different things; their idea of coherence is complicated, but basically involves judging any belief by how well interconnected it is with other beliefs. The Foundationalists have said that although they ultimately resort to axioms, those axioms are self-evident axioms that any system must accept.

2: Could you clarify this point please? Superficially it seems contradictory (as it is a principle that cannot be demonstrated empirically itself), but I'm presumably missing something.

3: About the basic philosophy of language I agree. What I need here is empirical evidence to show that this applies specifically to the Contextualist vs. Invariantist question.
0 · DSherron · 11y
For 1) the answer is basically to figure out what bets you're willing to make. You don't know anything, for strong definitions of know. Absolutely nothing, not one single thing, and there is no possible way to prove anything without already knowing something. But here's the catch; beliefs are probabilities. You can say "I don't know that I'm not going to be burned at the stake for writing on Less Wrong" while also saying "but I probably won't be". You have to make a decision; choose your priors. You can pick ones at random, or you can pick ones that seem like they work to accomplish your real goals in the real world; I can't technically fault you for priors, but then again justification to other humans isn't really the point. I'm not sure how exactly Coherentists think they can arrive at any beliefs whatsoever without taking some arbitrary ones to start with, and I'm not sure how anyone thinks that any beliefs are "self-evident". You can choose whatever priors you want, I guess, but if you choose any really weird ones let me know, because I'd like to make some bets with you... We live in a low-entropy universe; simple explanations exist. You can dispute how I know that, but if you truly believed any differently then you should be making bets left and right and winning against anyone who thought something silly like that a coin would stay 50/50 just because it usually does. Basically, you can't argue anything to an ideal philosopher of perfect emptiness, any more than you can argue anything to a rock. If you refuse to accept anything, then you can go do whatever you want (or perhaps you can't, since you don't know what you want), and I'll get on with the whole living thing over here. You should read "The Simple Truth"; it's a nice exploration of some of these ideas. You can't justify knowledge, at all, and there's no difference between claiming an arbitrary set of axioms and an arbitrary set of starting beliefs (they are literally the same thing), but you can still c
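The "figure out what bets you're willing to make" framing above can be sketched in a few lines of code. This is a hedged illustration with invented numbers (the 0.999 probability and the stakes are arbitrary), showing how a belief held as a probability translates directly into which bets have positive expected value:

```python
def expected_profit(p_win: float, payout: float, stake: float) -> float:
    """Expected profit of a bet that wins `payout` with probability p_win
    and loses `stake` otherwise."""
    return p_win * payout - (1 - p_win) * stake

# If I assign probability 0.999 to "I won't be burned at the stake for
# writing on Less Wrong", I should accept even heavily lopsided odds:
ev = expected_profit(p_win=0.999, payout=1.0, stake=100.0)
print(ev > 0)  # the bet is still worth taking
```

On this view, "I don't know for certain, but I probably won't be" is not hand-waving: it cashes out as a concrete willingness to stake 100 units to win 1.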
0 · Carinthium · 11y
1: The Foundationalists have claimed probability is off the metaphorical table: the concept of probability rests either on subjective feeling (irrational) or on empirical evidence (circular, as our belief in empirical evidence rests on the assumption that it is probable). They had problems with "self-evident", but I created a new definition: "must be true in any possible universe" (although I'm not sure of the truth of his conclusion, the way Eliezer describes a non-reductionist universe basically claims this sort of self-evidence for reductionism). 2: This doesn't solve the problem I have with it. 3: Of the statement "A trout is a type of fish", the simplification "This statement is true in English" is good enough to describe reality. The invariantist, and likely the contextualist, would claim that universally, across languages, humans have a concept of "knows", however they describe it, which fits their philosophy.
0 · DSherron · 11y
You're right, my statement was far too strong, and I hereby retract it. Instead, I claim that philosophy which is not firmly grounded in the real world, such that it effectively becomes another discipline, is worthless. A philosophy book is unlikely to contain very much of value, but a cognitive science book which touches on ideas from philosophy is more valuable than one which doesn't. The problem is that most philosophy is just attempts to argue for things that sound nice, logically, with not a care for their actual value. Philosophy is not entirely worthless, since it forms the backbone of rationality, but the problem is that the useful parts are almost all settled questions (and the ones that aren't are effectively the grounds of science, not abstract discussion). We already know how to form beliefs that work in the real world, justified by the fact that they work in the real world. We already know how to get to the most basic form of rationality, from which we can then use the tools recursively to improve them. We know how to integrate new science into our belief structure. The major thing which has traditionally been a philosophical question which we still don't have an answer to, namely morality, is fundamentally reduced to an empirical question: what do humans in fact value? We already know that morality as we generally imagine it is fundamentally a flawed concept, since there are no moral laws which bind us from the outside, just the fact that we value some things that aren't just us and our tribe. The field is effectively empty of useful open questions (the justification of priors is one of the few relevant ones remaining, but it's also one which doesn't help us in real life much). Basically, whether philosophers dispute something is essentially uncorrelated with whether there is a clear answer on it or not. If you want to know truth, don't talk to a philosopher. If you pick your beliefs based on strength of human arguments, you're going to believe whateve
0 · Carinthium · 11y
If you judge philosophy by what helps us in the empirical world, this is mostly correct. The importance of rationality to philosophy (granted the existence of an empirical world) I also agree with. However, some people want to know the true answers to these questions, useful or not. For that, argument is all we've got. I would mostly agree with rationality training for philosophers, except that there is something both circular and silly about using empirical data to influence, even indirectly, discussions on whether the empirical world exists.
-1 · DSherron · 11y
Super quick and dirty response: I believe it exists, you believe it exists, and everyone you've ever spoken to believes it exists. You have massive evidence that it exists in the form of memories which seem far more likely to come from it actually existing than any other possibility. Is there a chance we're all wrong (or that you're hallucinating the rest of us, etc.)? Of course. There always is. If someone demands proof that it exists, they will be disappointed - there is no such thing as irrefutable truth. Not even "a priori" logic - not only could you be mistaken, but additionally your thoughts are physical, empirical phenomena, so you can't take their existence as granted while denying the physical world the same status. If anyone really truly believes that the empirical world doesn't exist, you haven't heard from them. They might believe that they believed it, but to truly believe that it doesn't exist, or even simply that we have no evidence either way and it's therefore a tossup, they won't bother arguing about it (it's as likely to cause harm as good). They'll pick their actions completely at random, and probably die because "eat" never came up on their list. If anyone truly thinks that the status of the physical world is questionable, as a serious position, I'd like to meet them. I'd also like to get them help, because they are clinically insane (that's what we call people who can't connect to reality on some level). Basically, the whole discussion is moot. There is no reason for me to deny the existence of what I see, nor for you to do so, nor anyone else having the discussion. Reality exists, and that is true, whether or not you can argue a rock into believing it. I don't care what rocks, or neutral judges, or anyone like that believes. I care about what I believe and what other humans and human-like things believe. That's why philosophy in that manner is worthless - it's all about argumentation, persuasion, and social rules, not about seeking truth.
0 · Carinthium · 11y
Your argument is about as valid as "take it on faith". Unless appealing to pragmatism, your argument is circular in using the belief of others when you can't justifiably assume their existence. Second, your argument is irrational in that it appeals to "everybody believes X" to support X. Thirdly, a source claiming X to be so is only evidence for X being so if you have reason to consider the source reliable. You are also mixing up "epistemic order" with "empirical order", to frame two new concepts. "Epistemic order" represents orders of inference: if I infer A from B and B from C, then C is prior to B and B is prior to A in epistemic order, regardless of the real-world relation of whatever they are. "Empirical order", of course, represents what is the empirical cause of what (if indeed anything causes anything). A person detects their own thoughts in a different way from the way they detect their own senses, so they are unrelated in epistemic order. You raise a valid point about assuming that one's thoughts really are one's thoughts, but unless resorting to the Memory Argument (which is part of the Evil Demon argument I discussed) they are at least available as arguments to consider. The Foundationalist skeptic is arguing that believing in the existence of the world IS IRRATIONAL. Without resorting to the arguments I describe in the first post, there seems to be no way to get around this. Pragmatism clearly isn't one, after all.
0 · Carinthium · 11y
1: Occam's Razor has already been covered. The concept inherently rests (unless you take William of Ockham's original version, which cannot be applied in the same way) on empirical observations about the world, which are the things under doubt. 2: The argument started on whether it is rational to trust the senses, and turned into an argument about the proper rules to decide that question. Such a question cannot be solved empirically. Besides, such a rule cannot justify itself, as it is not empirically rooted. 3: I considered this possibility, but wasn't confident enough to claim it because occasionally, despite the nature of human concepts, a simplistic explanation actually works. For example, that "a trout is a type of fish" is true as a linguistic statement, no clarification or deeper understanding of the human mind required. My mind is good at Verbal Comprehension skills, which suit subjects such as Philosophy and Law. To get into Law at Melbourne, I need to get good marks. Philosophy is a subject at which I get good marks, and fun because of how my brain works, so I do it. I take a genuine interest because I like the intellectual stimulation and I want to be right about the sort of things philosophy covers.
0 · Manfred · 11y
Deferring to a simplicity prior is good for the outside world, but also raises the question of where you got your laws of thought and your assumption of simplicity. At some point you do need to say "okay, that's good enough," because it's always possible to have started from the wrong thoughts. Explanations aren't first and foremost about what the world is like. They're about what we find satisfying. It's like how people keep trying to explain quantum mechanics in terms of balls and springs - it's not because balls and springs are inherently better, it's because we find them satisfying enough to us that once we explain the world in terms of them we can say "okay, that's good enough."
0 · Carinthium · 11y
Philosophical Infinitism in a nutshell (the conclusions, not the line of argument, which seems unusual as far as I can tell). Anyway, the Coherentists would say that you can simply go around in circles for justification (factoring in "webbiness"), whilst the Foundationalist skeptics would say that this supports the view that belief in the existence of the world is inherently irrational. Just because something is satisfying doesn't mean it has any correlation with reality.
2 · Manfred · 11y
The truth is consistent, but not all consistent things are true. So yeah. I think the viewpoint that it's not only necessary but okay to have unjustified fundamental assumptions relies on fairly recent stuff. Aristotle could probably tell you why it was necessary (it's just an information-theoretic first cause argument after all), but wouldn't have thought it was okay, and would have been motivated to reach another conclusion. It's like I said about explanations. Once you know that humans are accidental physical processes, that all sorts of minds are possible, and some of them will be wrong, and that's just how it is, then maybe you can get around to thinking it's okay for us humans, who are after all just smart meat, to just accept some stuff to get started. The reason that we don't fall apart into fundamentally irreconcilable worldviews isn't magic, it's just the fact that we're all pretty similar, having been molded by the constraints of reality.
0 · Carinthium · 11y
The problem is that I can't argue based on the existence of the empirical world when that is the very thing the argument is about.
2 · Manfred · 11y
That the empirical world exists is a supposition you were born into. The argument is over whether that's satisfying enough to be called an explanation.
0 · Carinthium · 11y
The argument is about whether the belief is rational or irrational. Discussing it in the manner you describe is off the point.
1 · Manfred · 11y
My previous reply wasn't very helpful, sorry. Let me reiterate what I said above: making assumptions isn't so much rational as unavoidable. And so you ask "then, should we believe in the external world?" Well, this question has two answers. The first is that there is no argument that will convince an agent who didn't make any assumptions that they should believe in an external world. In fact, there is no truth so self-evident it can convince any reasoner. For an illustration of this, see What the Tortoise Said to Achilles. Thus, from a perspective that makes no assumptions, no assumption is particularly better than another. There is a problem with the first answer, though. This is that "the perspective that makes no assumptions" is the epistemological equivalent of someone with a rock in their head. It's even worse than the tortoise - it can't talk, it can't reason, because it doesn't assume even provisionally that the external world exists or that (A and A->B) -> B. You can't convince it of anything not because all positions are unworthy, but because there's no point trying to convince a rock. The second answer is that of course you should believe in the external world, and common sense, and all that good stuff. Now, you may say "but you're using your admittedly biased brain to say that, so it's no good," but, I ask you, what else should I use? My kidneys? If you prefer a slightly more sophisticated treatment, consider different agents interpreting "should we believe in the external world" with different meanings of the word "should". We can call ours human_should, and yes, you human_should believe in the external world. But the word no_assumptions_should does not, in fact, have a definition, because the agent with no assumptions, the guy with a rock in his head, does not assume up any standards to judge actions with. Lacking this alternative, the human_reasonable course of action is to interpret your question as ""human_should we believe in the external world,
0 · torekp · 11y
This is the place to whip out the farmer/directions joke. The one that ends, "you just can't get there from here."
0 · Manfred · 11y
"I say, farmer, you're pretty close to a fool, ain't'cha?" "Yup, only this here fence between us."
0 · Carinthium · 11y
I'd already considered the "What the Tortoise said to Achilles" argument in a different form. I'd gotten around it (I was arguing Foundationalism until now, remember) by redefining self-evident as: what must be true in any possible universe. If a truth is self-evident, then a universe where it was false simply COULD NOT EXIST, for one reason or another. Eliezer has described a non-Reductionist universe the way I believe a legitimate self-evident truth (by this definition) should be described. To those who object, I call it self-evident' ("self-evident-dash", as I say it in normal conversation) and use it instead of self-evident as a basis for justification. The Foundationalist skeptics in the debate would laugh at your argument, and point out you can't even assume the existence of a brain with justification, nor the existence of "should" either in the human sense or any other. Thus your argument falls apart.
0 · Manfred · 11y
I agree with the foundationalist skeptics, except for that anything "falls apart" is, of course, something that they just assume without justification, and should be discarded :)
0 · Carinthium · 11y
Self-evident from the definition of rational: it is irrational to believe a proposition if you have no evidence for or against it. Empirical evidence is not evidence if you have no reason to trust it. Therefore, the fact that your argument falls apart is self-evident given the premises and conclusions therein.
0 · Manfred · 11y
The "definition of rational" is already without foundation - see again What the Tortoise Said to Achilles, and No Universally Convincing Arguments. Or perhaps I'm overestimating how skeptical normal skepticism is? Is it normal for foundationalist skeptics to say that there's no reason to believe the external world, but that we have to follow certain laws of thought "by definition," and thus be unable to believe the Tortoise could exist? That's not a rhetorical question, I'm pretty ignorant about this stuff.
1Carinthium11y
I've already gotten past the arguments in those two cases by redefining self-evident by reference to what must be true in any possible universe. Eliezer himself describes reductionism in a way which fits my new idea of self-evident. The Foundationalist skeptics agree with me. As for the definition of rational, if you understand nominalism you will see why the definition is beyond dispute. The Foundationalist Skeptic supports starting from no assumptions except those that can be demonstrated to be self-evident.
-2Manfred11y
So, you agree that the Foundationalist Skeptic rejects the use of modus ponens, since Achilles cannot possibly convince the Tortoise to use it? Also, I recommend this post. You seem to be roving into that territory. And calling anything, even modus ponens, "beyond dispute" only works within a certain framework of what is disputable - someone with a different framework (the tortoise) may think their framework is beyond dispute. In short, the reflective equilibrium of minds does not have just one stable point.
0Carinthium11y
Just to remind you, I am not TECHNICALLY arguing for Foundationalist skepticism here. My argument is that it doesn't have any major weaknesses OTHER THAN the ones I've already mentioned. Regarding the use of modus ponens, that WAS a problem until I redefined self-evident to refer to what must be true in any possible universe. This is a mind-independent definition of self-evident. I suspect a Foundationalist skeptic shouldn't engage with Eliezer's arguments in this case, as they appeal to empirical evidence, but leaving that aside, the ordinary definition of 'rational' contains a contradiction. In ordinary cases of "rationality", if somebody claims A because of X and is asked "Why should I trust X?", the claimer is expected to have an answer for why X is trustworthy. The four possible solutions to this are Weak Foundationalism (ending in first causes they can't justify), Infinitism (infinite regress), Coherentism (believing because knowledge coheres), and Strong Foundationalism. This excludes appealing to Common Sense, as Common Sense is both incoherent and commonly considered incompatible with Rationality. A Weak Foundationalist is challengeable on privileging their starting points, plus the fact that any reason they give for privileging said starting points is itself a reason for their starting points and hence another stage back. Infinitism and Coherentism have the problem that without a first cause we have no reason to believe they cohere with reality. This leaves Strong Foundationalism by default.
0Manfred11y
So why doesn't the Tortoise agree that modus ponens is true in any possible universe? Do you have some special access to truth that the Tortoise doesn't? If you don't, isn't this just an unusual Neurathian vessel of the nautical kind?
0Carinthium11y
What the Tortoise believes is irrelevant. In any universe whatsoever, proper modus ponens will work. Another way of showing this is that a universe where it doesn't work would be internally incoherent. Arguments are mind-independent: whether my mind has special access to truth or not (theoretically, I may simply have gotten it right this time and this time only), my arguments are just as valid. Eliezer is right to say that you can't argue with a rock. However, insane individuals who disagree in the Tortoise case are irrelevant, because the reasoning is based not on universal agreement on first premises but on the fact that in any possible universe the premises must be true.
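For reference, the rule being defended here can be written as the standard inference schema (standard logical notation, not anything specific to this thread):

```latex
% Modus ponens: from P and P \to Q, infer Q.
\frac{P \qquad P \to Q}{Q}
```

The Tortoise's move, in Carroll's dialogue, is to accept $P$ and $P \to Q$ as premises but demand that the rule itself be added as a further premise, generating a regress.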
0Manfred11y
I agree - modus ponens works, even though there are some minds who will reject it with internally coherent criteria. Even criteria as simple as "modus ponens works, except when it would lead to belief in the primeness of 7 being added to your belief pool" - this definition defends itself, because if it were wrong, you could prove 7 was prime, therefore it's not wrong. You could be put in a room with one of these 7-denialists, and no argument you made could convince them that they had the wrong form of modus ponens and you had the right one.

But try seeing it from their perspective. To them, 7 not being prime is just how it is. To them, you're the 7-denialist, and they've been put in a room with you, yet are unable to convince you that you have the wrong form of modus ponens and they have the right one.

Suppose you try to show that a universe where 7 isn't prime is internally inconsistent. What would the proof look like? Well, it would look like some axioms of arithmetic, which you and the 7-denialists share. Then you'd apply modus ponens to these axioms until you reached the conclusion that 7 is prime, and thus that any system with "7 is not prime" added to the basic axioms would be inconsistent. What would the 7-denialist you're in a room with say to that? I think it's pretty clear - they'd say that you're making a very elementary mistake; you're just applying modus ponens wrong. In the step where you go from 7 not being factorable into 2, 3, 4, 5 or 6, to 7 being prime, you've committed a logical fallacy, and have not shown that 7 is prime from the basic axioms. Therefore you cannot rule out that 7 is not prime, and your version of modus ponens is therefore not true in every possible universe. Just because you can use something to prove itself, they say, doesn't mean it's right in every possible universe. You should try to be a little more cosmopolitan and seriously consider that 7 isn't prime.
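Incidentally, the consistency proof sketched above can be carried out mechanically. As a sketch (assuming the Mathlib library; this is my illustration, not something from the thread), a proof assistant like Lean will derive the primality of 7 from the arithmetic axioms by exhaustive computation - which is exactly the derivation the 7-denialist's modified modus ponens refuses to license at the final step:

```lean
import Mathlib.Data.Nat.Prime.Basic

-- `decide` runs the decision procedure for primality:
-- it checks that no n with 2 ≤ n < 7 divides 7.
example : Nat.Prime 7 := by decide
```

The point of the thought experiment survives the formalization: the 7-denialist simply rejects one of the inference steps the checker uses, and no shared proof can get past that.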
0Carinthium11y
I'm guessing you disagree with Eliezer's thoughts on Reductionism, then? The 7-denialists are making a circular argument with your first defence of their position. Circular arguments aren't self-evidently wrong, but they are self-evidently not evidence, as there isn't justification for believing any of them. The argument for conventional modus ponens is not a circular argument. The second argument would be that the 7-denialists are making an additional assumption they haven't proven, whilst the Foundationalist Skeptic starts with no assumptions. That there is an inconsistency in 7 being prime needs demonstrating, after all. If you redefine Prime to exclude 7, then it is strictly correct and we don't have a disagreement, but we don't need a different logic for that. (And the standard definition of Prime is more mathematically useful.) Finally, the Foundationalist Skeptic would argue that they aren't using something to prove itself - they are starting from no starting assumptions whatsoever. I have concluded, as I mentioned, that there is a problem with their position, but not the one you claim.
1Manfred11y
Well if you say so. Best of luck then.

Finally, how do I speak intelligently on the Contextualist vs. Invariantist problem? I can see in broad terms that it is an empirical problem and therefore not part of abstract philosophy, but that isn't the same thing as having an answer. It would be good to know where to look up enough neuroscience to at least make an intelligent contribution to the discussion.

Invariantism, in my opinion, is rooted precisely in the failure to recognize that this is an empirical and ultimately linguistic question. I'm not sure how neuroscience would enter into it, actually. ... (read more)

2Carinthium11y
At least some Invariantists do tend to look up cognitive evidence, so your argument is not totally correct. You're probably right overall, but I'm still not sure - the Invariantist tends to argue using Warranted Assertability maneuvers that distinguish being warranted in asserting X from believing X.
0Creutzer11y
The most immediate problem for this approach is that it's not clear how it could work for embedded contexts. The other is, of course, to spell out the context-independent meaning and explain precisely how pragmatics operates on it. It's also not clear that this notion of a strong semantics-pragmatics divide with independent and invariant semantic meanings is tenable in general.

Not a philosophy student, but it seems to me that your question is basically this:

If everything is uncertain (including reality, state of my brain, etc.), how can I become certain about anything?

And the answer is:

Taking your question literally, you can't.

In real life, we don't take it literally. We don't start by feeling uncertain about literally everything at the same time. We take some things as granted and most people don't examine them (which is functionally equivalent to having axioms); and some people examine them step by step, but not all at the same time (which is functionally equivalent to circular reasoning).

0Carinthium11y
Not quite - I had several questions, and you're somewhat misinterpreting the one you're discussing. I'll try to clarify it for you. There are two sides in the argument, the Foundationalists (mostly skeptics) and the Coherentists. So far I've been Foundationalist but not committed on skepticism. Logically, of course, there is no reason to assume that one or the other is the only possible position, but it makes a good heuristic for a quick summary of what's been covered so far.

- The Foundationalists in this particular argument are Strong Foundationalists (Weak Foundationalism got thrown out at the beginning), who contend that you can only rationally believe something if you can justify it based on self-evident truths (in the sense that they must be true in any possible universe) or if you can infer it from such truths.
- The Coherentists in this particular argument contend basically that all beliefs are ultimately justified by reference to each other. This is circular, and yet justified.
- The Foundationalists have put the contention that probability is OFF THE TABLE. This is because it is impossible to create a concept of probability that is not simply a subjective feeling and that does not rest on the presumption that empirical evidence is valid (which they dispute). This gets back to their argument that it is IRRATIONAL to believe in the existence of the world.
- The Coherentists countered with the concept of "tenability": believing X provisionally but being willing to discard it should new evidence come along.
- I have already, arguing close to the Foundationalist side, pointed out that just because humans DO reason in a certain way in practice does not give any reason for believing it is a valid form of reasoning.
- Both sides have agreed that purely circular arguments are off the table. Hence, both the Foundationalists and the Coherentists have agreed not to use any reference to actual human behaviour to justify one theory over the other.
1Viliam_Bur11y
Could you give me examples of "self-evident truths" other than mathematical equations or tautologies? To me it seems that if you are allowed to use only things that are true in all possible universes, you can only get to conclusions that are true in all possible universes. (In other words, there is no way I could ever believe "my name is Viliam" using only the Strong Foundationalist methods.)
0Carinthium11y
Yes, the Foundationalist would agree with that. They would not see a problem with it- that is the legitimate limit of knowledge.
8FeepingCreature11y
Well, in that case the intuitive answer would be that the Foundationalists have successfully argued themselves into a spectacularly convincing corner, and meanwhile I'll just be over here using all this "unverifiable" "knowledge" to figure out how to deal with the "real" "world". And in any case, if you're invoking an Evil Demon you're lost regardless; it's the epistemological equivalent of "but what if all your arguments are actually wrong and you just can't see it", to which the answer would be "In that case I am quite hopelessly lost, but it doesn't look that way to me, and what more do you expect me to say?" I suppose an argument could be made that "if such a thing as evolution exists, it seems implausible for it to create a brain that expends an awful lot of food intake on being irreparably wrong about the things it knows, and if not even evolution exists, our view of the cosmos is so lost as to be irreparable regardless". Sometimes I wonder if philosophy should be taught in a largely noun-free environment. (Points for correct answers, points deducted for Noun Usage?) Get people's minds off the what, and onto the how and why. Obsession with describing states will be the death of philosophy...
1Carinthium11y
Firstly, you're getting mixed up. The Foundationalist side is trying to downplay the Evil Demon Argument as much as possible, whilst the Coherentist side claims it refutes Foundationalism, as it means nothing can be known. Both sides, plus myself, plus practically everybody, agree that just because intuition states X doesn't mean X is true. So how can you invoke it with any plausibility in a debate? IF evolution works as suspected, there are still ways humans could survive other than through correlation of beliefs with reality, depending on how everything else works.

To combat skepticism, or at least solipsism, you just need to realise that there are no certainties, but that does not mean you know nothing. You can work probabilistically.

Consider: http://lesswrong.com/lw/mn/absolute_authority/ http://lesswrong.com/lw/mo/infinite_certainty/ http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/ http://wiki.lesswrong.com/wiki/Absolute_certainty

0Carinthium11y
As of right now, the problem is with defending the concept of probability. The argument put is that either probability is a subjective feeling (and thus invalid) or it rests on empirical evidence. But empirical evidence is the thing being disputed in the first place; and secondly, if empirical evidence is dependent upon the concept of probability, and probability is dependent on the concept of empirical evidence, you have a direct circular argument.

The Coherentist side conceded the untenability of a direct circular argument and instead argued that their knowledge was based not on probability as such but on tenability (I believe X until I see evidence or argument to discredit it). A strong argument, but it throws probability as such out the window.
[-][anonymous]11y10

This book might be what you are looking for. It's Evidence and Inquiry by Susan Haack. I have it, but I've only done a few very cursory skims of it (ETA: It's on my summer reading list, though). It has two very positive reviews on Amazon. Also, she calls out the Gettier "paradoxes" for what they are (for the most part, pointless distractions).

0Carinthium11y
Got a pretty long reading list right now. I'll go through it when I have time, though.

I doubt people are actually still interested, but just in case, I've actually managed to solve this problem.

IF the Correspondence Theory of Truth is assumed (defining "Truth" as that which corresponds to reality) and the assumption is made that philosophy should pursue truth rather than what is pragmatically useful, then for any non-Strong Foundationalist method of determining truth the objection could be made that it could easily have no correlation with reality and there would be no way of knowing.

Probabilistic arguments fall apart because they... (read more)

Not trying to answer your questions, sorry. Just wanted to mention that different philosophical camps pattern-match to different denominations of the same religion. They keep arguing without any hope of agreeing. Occasionally some denominations prevail and others die out, or get reborn when a new convincing guru or a prophet shows up. If you have a strong affinity for theism, err, mainstream philosophy, just pick whichever denomination you feel like, or whichever gives you the best chance of advancement. If you care about that real world thing, consider de... (read more)

1Carinthium11y
Firstly, Eliezer is, amongst other things, a philosopher. Secondly, he does argue with others using arguments that are not purely empirical. If you're not a follower of his this is no criticism, but if you accept this argument you must reject philosophy. Thirdly, that there are several warring camps with no agreement does not imply none of them are right. It probably means that some people are being overly stubborn, but not that none are correct. In religion, the Atheists (who have taken a side in many religious debates) are right, as it happens. There is a real problem with philosophy and rationality, but you don't have the solution. The Evil Demon argument is the extreme case which fools people about EVERYTHING. My apologies if that was unclear.

Here is one hand...

(Then again, it has been argued, if a Coherentist were deceived by an evil demon they could be deceived into thinking data coheres when it doesn't. Since their belief rests upon the assumption that their beliefs cohere, should they not discard it if they can't know whether it coheres or not? The "seems to cohere" formulation has its own problems)

Doesn't the Coherentist idea say that even if the knowledge is incorrect, it is still "true" for the observer, because it coheres with the rest of their beliefs?

The opinion Eliezer says is essentially that yes, you c... (read more)

0Carinthium11y
I like your advice about losing and will take it unless I find a brilliant Foundationalist argument pretty soon. As for the rest, though, ignoring the problem of induction means conceding that all action and belief is irrational. Unless the senses and memory can be considered trustworthy (not demonstrated), it is irrational to use them as evidence for better outcomes.
2falenas10811y
By irrational, do you mean philosophically or in real life? Because someone who acted like there was no knowledge would do pretty terribly in life, and I would not call that rational. If you mean philosophically, then yes. I've never heard a good answer to the problem of induction that doesn't invoke God or isn't circular.