Substituting "intuition" for "faith", I have come to a similar conclusion:-
There are a number of claims that can be made about intuitions: that they are infallible, that they involve inexplicable mechanisms, that they are nonexistent, that they are unnecessary, that they are uniform or variable, and so on. I will be arguing that they are indispensable in epistemology, and that they are not necessarily good or bad.
(Epistemic status: Obvious and well-known in some circles, radical and disturbing in others).
Philosophers tend to have a fairly uniform set of attitudes about intuitions, which are rarely set out explicitly. Rationalists tend to be dismissive of intuition. I would like to put forward a lukewarm defence of intuition as something which, while not ideal, is hard to avoid, because ideal, assumption-free epistemology is impossible. Indirectly, this will constitute a partial explanation of the difficulty of philosophy.
There's a repeated pattern within philosophy where a term is ambiguous, and has a range of definitions from the obvious-but-trivial to the momentous-but-hard-to-defend. Intuition is no different.
"Intuition" has the basic meaning of feeling that you know something without knowing how, but beyond that it can mean several different things:-
i) Aunt Nelly's intuition that it's going to rain tomorrow. Something that "feels right". Mere phenomenology.
ii) A fast, but approximate, system 1 heuristic.
iii) Basic assumptions that you can't do without, since you need them to prove things, but can't prove, because they're basic.
iv) Infallible insight of mysterious, possibly supernatural origin, like Plato's Forms, or Descartes' clear and distinct ideas.
Type i) intuitions don't present much of a problem, because no one puts much weight on them.
Type ii intuitions imply some epistemic reliability... and could have a naturalistic basis. We know that only a minority of the brain is dedicated to conscious thought. It is possible that the conscious and unconscious mind interact in a way where the conscious mind poses a question to the unconscious, which then processes it rapidly and efficiently, but unconsciously, in a black box as it were, and presents the answer to the conscious mind. From the perspective of the conscious mind, the answer "pops up out of nowhere"... but that's an illusion: the answer arrives from neural activity, not an actual void.
How does it process it?
Again, no magic is needed. The unconscious mind/system 1 could perform some sort of pattern matching on the large volume of data it has gathered. That process could be inductive, or Bayesian, or some other recognised epistemic process. Or it could perform deductions, and only present the final conclusion. Or abduction. This deductive/inductive/abductive model differs from the quasi-perceptual justification -- perceiving the Form of the True -- associated with type iv intuitions. Even in the absence of a known mechanism for intuition, intuitions can be verified on a black-box basis, so long as they are about empirically confirmable topics. Superforecasters are the (confirmable) experts at this type of intuition. Thus there are two ways of confirming that intuitions have some epistemic credibility: finding reliable mechanisms through neurological science, and empiricism about the results. Therefore, the existence of a faculty of intuition with some epistemic reliability can't be ruled out.
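This black-box verification can be made concrete. Here is a minimal sketch in Python, using invented forecasts and outcomes, that scores stated confidences against what actually happened via the Brier score (lower is better) -- exactly the kind of scoring used to identify superforecasters:

```python
# Black-box verification of intuitions: we don't need to know the mechanism,
# only to score stated confidences against observed outcomes.
# The forecasts and outcomes below are made-up illustrative data.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    0.0 is perfect; constant 50% guessing scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]               # what actually happened
calibrated = [0.8, 0.2, 0.7, 0.9, 0.3]   # confident and mostly right
overconfident = [1.0, 0.9, 1.0, 1.0, 0.8]

print(brier_score(calibrated, outcomes))     # low score: credible black box
print(brier_score(overconfident, outcomes))  # higher score: less credible
```

Nothing in the score depends on *how* the forecaster arrived at the numbers, which is the point: the mechanism can stay a black box.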
Unfortunately, philosophy isn't an area where intuitions can be empirically confirmed. Having said that, there is no need to condemn the philosophical use of intuitions as sweepingly as this:-
Moreover, its [philosophy's] central tool is intuition, and this displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms. -- Rob Bensinger, Philosophy, a diseased discipline.
What's the problem?
...since there is a possible naturalistically and epistemically viable justification -- so long as one doesn't require certainty -- for at least one type of intuition. Intuition isn't defined as "non-algorithmic" or "not how brains work". Not always, anyway.
It's important to note that those who dismiss the use of intuitions by philosophers are generally unable to explain how they themselves are managing without them. One's own intuitions don't seem like intuitions to oneself, and it is easier to be unreflective about one's epistemic principles than reflective.
Type iii intuitions are reluctantly accepted by many philosophers, a group of people who would prefer everything to be epistemically well founded. Note that such a starting point need not be unconscious or cognitive.
Type iv intuitions, the type that Vassar assumes to be the only type, are accepted by some philosophers of a more mystical bent.
If intuitions are indeed the result of the operation of cognitive algorithms, then there is no reason to accept them as infallible. But the present topic is not the infallibility of intuition, but the unavoidability of intuition. We will be discussing type iii intuitions.
It's not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning; it is that those things either don't go far enough, or are themselves based on intuition.
"Just use empiricism" doesn't work, because philosophy is about interpreting empirical data.
"Just use maths/logic" doesn't work, because those things are based on axioms justified by intuitive appeal.
"Just use reductionism" doesn't work, because it's not clear what lies at the bottom of the stack, or if anything does. Logic, epistemology and ontology have been held to be First Philosophy at different times. Logic, epistemology and ontology also seem to interact. Correct ontology depends on correct epistemology... but what minds are capable of knowing depends on ontology. Logic possibly depends on ontology too, since quantum mechanics arguably challenges traditional bivalent logic.
Philosophers don't embrace intuitions because they think they are particularly reliable; they have reasoned that they can't do without them. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers mean by "intuition", that is to say, meaning iii above. Philosophers talk about intuitions a lot because that is where arguments and trains of thought ground out... it is a way of cutting to the chase.
Most arguers and arguments are able to work out the consequences of basic intuitions correctly, so disagreements are likely to arise from differences in the basic intuitions themselves.
Philosophers therefore appeal to intuitions because they can't see how to avoid them... whatever a line of thought grounds out in is, definitionally, an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven't seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for your foundational assumptions.
Scientists are typically taught that the basic principles of maths, logic and empiricism are their foundations, and accept that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods... somehow. Their subculture encourages using basic principles to move forward, not turning back to critically reflect on their validity. That does not mean the foundational principles are not "there". Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like enterprise, in the sense that it consists of problems that have been open for a long time, and which do not have straightforward empirical solutions.
The "guarantor" of your future is two things:
Believing logic works has nothing to do with faith—it's that you cannot do anything useful with the alternative. Then, once you've assumed logic and created maths, you just find the simplest explanations that fit with what you see. Will the future always be what you expect? No, but you can make claims with very high confidence, e.g. "in 99.99% of worlds where I receive the sense data I did, the Sun will actually rise tomorrow."
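The shape of that probabilistic claim can be illustrated with Laplace's rule of succession. Under a uniform prior and independent trials (strong assumptions, used here purely as a toy), it gives the probability of the next success after an unbroken run of successes:

```python
# Laplace's rule of succession: after s successes in n independent trials
# (with a uniform prior on the success rate), the probability that the
# next trial succeeds is (s + 1) / (n + 2). Treating each past sunrise as
# a trial gives a toy version of "very high confidence, no guarantee".

def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# ~10,000 observed sunrises with no failures:
p = rule_of_succession(10_000, 10_000)
print(p)  # just under 1: ≈ 0.9999
```

The confidence grows with the number of observations but never reaches 1, which is exactly the concession the surrounding exchange is about.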
Just because you panic about the unknown does not mean the unknown will actually be a large factor in your reality.
I do understand the point you are trying to make, but a large part of speculation around AI on this forum, especially around acausal trade, the simulation hypothesis etc. basically lives outside of the bounds of the two axioms you have set up. Especially if you start talking about whole brain emulation and the possibility of living in a simulation, you are no longer making educated inferences based on logic and sense data: Once you posit that all the sense data you have received in your life can be fabricated, you open yourself up to an endless pit of new and unfalsifiable arguments about what exactly is "out there".
In fact, a lot of simulation hypothesis related arguments have to smuggle assumptions about how the universe works out of the matrix, assuming that any base universe simulating us must have similar laws around thermodynamics, matter, physics etc., which is of course not a given at all. We could be simulations running in a Conway's Game of Life universe, after all.
And you can say, "well we must believe in this because the alternative is of no use to us and would be completely unworkable by the lights of my worldview", in which case you have just made a statement of faith sans evidence either for or against. You choose to believe in a universe where your systems of thinking have purpose and utility, which is basically the point I'm trying to make.
I would say that I focus my thinking on the universes I can get sensory input showing the thinking is useful.
Re: this thread
If the best you can come up with is a probabilistic argument, that concedes the point that there is no guarantee. Well, you put "guarantor" in quotes.
I would say that I focus my thinking on the universes I can get sensory input showing the thinking is useful
Settling for usefulness alone rather than usefulness+truth is also a concession.
You do know truth only means, "consistent with some set of assumptions (axioms)"? What does it mean to look for "true axioms"? That's why I defer to useful ones.
You do know truth only means, “consistent with some set of assumptions (axioms)”?
There's no agreed definition of truth, so I am not in a position to know that. The definition you have is kind of how logic works, but many would argue that logic doesn't generate truth for that reason.
What does it mean to look for “true axioms”?
A lot of people would use empiricism for that.
Words demarcate the boundaries of meanings. You seem to be claiming there is some undefinable quality to the word "truth" that is useful to us, i.e. some unmeaningful meaning. Believe in ephemeral qualities all you like, but don't criticize me for missing out on some "truths" that are impossible to discover anyway.
People who believe in science and rationality often state that their beliefs are formed from sound foundations of observation, experimentation, and reasoning. They dismiss religion and faith as unscientific forms of thought which cannot lead to knowledge. At the same time, thought experiments like the simulation hypothesis show that there are fundamental limits to what knowledge can be derived from observation and experimentation. I will go further and propose that a form of faith (i.e. unreasonable belief) is necessary to live in and make sense of our world.
This work was carried out as part of a preliminary investigation for the Human Inductive Bias project.
It is shocking how much we don’t know about the world. At any moment our senses give us a dismally small picture of our immediate surroundings. I cannot see past the wall a metre or two to my right in the office where I am typing this. I cannot even see under the table where my laptop is sitting unless I really make an effort. The whir of the air conditioning could be blocking out distant screams from a stabbing. For all I know, nuclear war could have started three minutes ago and unless someone thought to send a news alert I would die blissfully ignorant. A new unified theory of physics could have been born five seconds ago, and I would be none the wiser.
But I am not only ignorant about the state of the world. I am also ignorant about its mechanisms. I don’t really know how my phone works. I kind of know how a computer works, but not what’s going on at the processor or OS level. I certainly don’t know how my kidneys work at a chemical level, or how my brain works at any serious level at all. When I try to enumerate everything I am ignorant about, I am stunned by how impossible this task seems. How do plants communicate? Why does my skin suffer from eczema? Why is there war, or suffering, or economic injustice? How do you predict the movements of the stars? How do they make ink pens, combustion engines, headphones, or chocolate bars with embedded wafers? What even is dark matter? The awesome weight of the mysteries of the world crashes down on me and I sit stunned at my chair.
Yet, at the same time, I am not really aware of my ignorance unless I really think very pointedly about everything I don’t know. If you asked me any of those questions above, I’d probably give a vague, high level answer that sounded coherent but had little more than fragments of memorised factoids and ad-hoc speculation to back it up. Any expert in the field of engineering, manufacturing, science, or medicine would have no trouble tearing apart my claims. Even though my day job as a researcher is supposedly to push the limits of what I don’t know, it is only in very specific and bounded domains. When I’m not thinking about AI or the nature of minds or neuroscience I feel confident that I am an informed person whose beliefs about the world are backed by observed facts and collected evidence. This is manifestly untrue.
Still, there must be some reason we don’t spend our lives huddled in our rooms, peeking fearfully at the alien and strange world outside. And it is true that, in many aspects, my life proceeds as if I had a well-grounded grasp on what is true or not. I am not generally harmed by my ignorance of the principles of car manufacturing when I cross the road. I can complete my chores and take a shower despite being ignorant of the mysteries of consciousness. I use my laptop without precise knowledge of the OS-level memory management techniques that enable me to play video games and watch youtube videos at the same time. In short, I suffer limited harm and still achieve my high level goals despite my broad-ranging and almost total ignorance of the fundamental nature of the universe. And I suspect I am not alone.
So how do we get our information? A lot of it is from deferrals to other minds. I do not independently verify the truth of the Haber process or the existence of gravity, I take it on faith that Newton was not lying to me when he wrote his Principia Mathematica. Some is our sense data. And quite a lot appears to be unspoken assumptions that certain things will transfer: rules like “things that were true in the past will probably be true in the future”, or “the laws of physics are the same everywhere, even in places where nobody has measured them”, “most things have a persistent nature and are not illusions”, and so on. But very little of what we believe is subject to the tiresome and somewhat-inaccurate process of experimental validation, much less replication at different intervals. Don’t judge the conspiracist or the peddler of pseudoscience too harshly: Epistemically speaking, we are all skating on very thin ice.
If we think back through our evolutionary history, it becomes clear that the idea that we derive our knowledge from principled observation and reason is a very new and somewhat silly one. Fish, cows, cats, dogs, and many other animals all manage to gain complex and high-dimensional knowledge about reality and improve their chances of survival despite being manifestly unable to write scientific papers or perform non-trivial reasoning tasks. Your dog is not putting a learned hierarchical Bayesian model of primate psychology into practice when they pant at you in the hopes that you will give them a treat—at least, not consciously.
So what kind of knowledge are we producing, if we are not using the clean hypothesis-confirming model of the Scientific Method? Here the new science of artificial neural networks (NNs or ANNs) offers us some clues—ANNs do not necessarily “grasp the meaning” or “understand the mechanisms” of the data they are presented with. In fact, many would argue that they cannot understand or grasp much of anything, since they aren’t conscious or sentient. Instead, they learn a series of transformations that turn input data into ideal outputs that minimise some loss function. Sometimes these transformations are largely stateless, as in the case with arithmetic: the answer to 20*5 is not dependent on the answer to 4+3. Sometimes the transformations require some information transfer from moment to moment, as with an RNN or transformer learning to predict text sequences. Still, nothing here demands that they “get”, on some deep level, why what they’re doing works. Does a transformer need to know the true nature of New York to internalise that the phrase “the American city of New” is usually followed by the token “York”? Probably not.
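The "New York" point can be made concrete with a toy bigram predictor. The sketch below (the miniature corpus is invented) learns which token tends to follow which, and thereafter predicts "York" after "New" with no representation of cities at all, only a learned input-output transformation:

```python
# A toy next-token predictor: count which token follows which in a corpus
# and predict the most frequent successor. No "understanding" of New York
# is involved, only a statistical transformation of inputs to outputs.

from collections import Counter, defaultdict

corpus = ("the American city of New York is large . "
          "she moved to New York last year . "
          "New Jersey borders New York .").split()

# successors["New"] counts every token observed immediately after "New".
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(token):
    """Most frequent token seen after `token` in the corpus."""
    return successors[token].most_common(1)[0][0]

print(predict("New"))  # prints "York"
```

A transformer does something vastly more elaborate, but the epistemic situation is the same: the model gets the right answer out of the statistics of its inputs, not out of acquaintance with the city.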
The same is true of our evolutionary ancestors: a nematode does not need to understand the intricacies of the ATP metabolic cycle to seek out food in its native habitat. A cat does not need to understand mouse neuroscience to catch mice. I further argue that the same is true for humans. We do not need to know how a combustion engine works to learn how to drive a car, or what toxins are in a mushroom to know that it is toxic. Effectively, what we learn are a series of transformations based on environmental cues, which we then take on faith will reproduce similar outcomes to what worked in the past[1]. We turn moments of sense data into input-output cause and effect pairs. We effectively engage in black-box learning for just about everything we touch, treating it as a mysterious box where certain inputs produce certain outputs. Science and reason augment this black-box process by giving us deeper sensitivity to the connections between particular input and output cues, and therefore greater predictive power. They might also give us access to new sense-data generated by experiments with which to draw causal inferences. Over time we develop models of the machines and living entities around us, which we carry forward to make more and more deep and confident predictions. Still, we never leave the regime of tying together moments of sensory data into input-output transformations, because ultimately that is the only way we receive any information about the outside world at all.
Still, nothing of this requires faith, right? If not for our limited brains and lifespans surely we would be able to catalogue everything, understand every truth, reason out every principle? Or the other way around—surely we will find the underlying principles of the universe, and with time deduce all relevant facts from axioms alone? So we come to the issue of the simulation hypothesis, i.e. the Evil Demon or Plato’s Cave problem. The problem is that we cannot verify that our senses are not deceiving us purely on the basis of our senses. It can always be the case that every piece of sense data we received was part of a meticulous Matrix whose sole purpose was to deceive us into inferring false relationships between the mass of an object and its energy content. Your eyes and ears cannot determine whether your eyes and ears are lying to you.
Ultimately, the problem we have when talking about whether our universe is simulated is the same one as that which motivated Gödel's incompleteness theorems. In short, we cannot use a set of information gathering techniques (whether based on sensory inputs or formal proofs) to prove the consistency and truth of those information gathering techniques in toto. We always need to depend on something beyond those techniques which acts as a guarantor that our techniques are built on solid foundations. And if we take our techniques plus that guarantor as our new information gathering system, a new Gödel statement can be formulated which proves this joint system inadequate, et cetera unto infinity. In other words, we must have an unreasonable belief in something that cannot be sensed or proven by observation and experiment with our senses. That something must act as a guarantor that our senses are not feeding us false information.
What happens if we deny the existence of this guarantor? Well, the problem with living in the Matrix is that the laws of physics can in fact change at any moment. Since all we are receiving is fabricated information, it becomes trivial to summon dragons, flip the world upside down, or turn day into night. At any time our hard-earned world model of transformations can be entirely broken down and twisted into lies. You, armed with all your science and rationality, may one day find yourself with no mouth and unable to scream. There is a reason why people who believe in the simulation hypothesis often fall prey to nihilism, bleak terror, or attain a view that there is no meaning in life. For what is faith? Faith is trust in some future eventuality despite uncertainty. And what is prediction? Trust in some future eventuality despite uncertainty.
This is not a statement of truth or fact. In fact, according to scientific principles what I am saying is categorically unprovable. But I argue that this faith in “something out there” is necessary to live in our world without giving in to the nihilism of “everything can turn to ash in an instant”. However, this something is also emphatically not like the gods we have seen in our myths, religions, and fictions. Beings you can see and touch, who can rain fire from the sky or open chasms in the earth, are automatically disqualified from being the guarantor, which exists outside the realm of sense data entirely. What it is, unfortunately, I cannot say or describe. It appears to be codified best as an implicit promise that time will continue to operate as we understand it, that the rules of physics will hold true in one instant as they have in another, that cause and effect will be obeyed. In short, the inductive biases upon which we build our linkages of sensory data. But the fact that it has held true up to now, of course, is no inductive evidence for it holding true in the future. Induction tells us that the turkey was safest the day before it was slaughtered.
This is why we are so surprised when someone we know no longer responds reliably to e.g. their favourite joke, or their most detested foodstuff. It’s not that we think it is impossible for a human to change, but that most of our knowledge is predicated on things staying mostly the same.