Invoking the anthropic principle makes me uncomfortable.
I commend you for feeling uncomfortable about it.
If everything I observe is a demonic illusion, I admit defeat; so I gain more utility by focusing on the worlds where I expect my actions can, in principle, lead to more utility.
Kind of, but consider how much defeat you actually need to admit.
https://substack.com/@apeinthecoat102771/note/c-117316511
A similar principle applies to trusting my tools of logical analysis.
I'll be talking more about the Münchhausen trilemma in future posts.
I'm pretty well aware that I don't actually know anything for certain, and Bayes proponents are always saying there is no 0 or 1.
This is the normality that skepticism is adding up to. Probabilities, not certain knowledge.
At the same time, for most things, my best decision-making seems to have come from taking my perceptions at face value.
You can be somewhat critical of your perceptions and even hold in your mind the possibility of an evil demon.
Which version of scepticism? Even the article you link to concedes there is at least one form of scepticism that's refutable through self-defeat.
If extreme scepticism is self-defeating, and certainty is unobtainable, then what you are left with is moderate scepticism -- AKA fallibilism. What you need to try is to find the right kind of scepticism.
I think you've just answered your own question more or less.
If the world is weird, I wish to believe that the world is weird.
The goal is to add up to truth.
If normality means supporting all your intuitions, then you are going to have to disbelieve much science and maths.
If it means something else ...what?
We are in agreement here. And if you read to the end of the post you'll see the answer to your question:
After all, science is the normality to which we would like philosophy to add up.
Something can work in some contexts, but not in others.
Empiricism doesn't work for things you can't see, e.g.:
Modality, Counterfactuals, Possible versus Actual.
Normativity, Ought versus is.
Essence versus Existence, hidden explanatory mechanisms.
A priori truth could have a naturalistic basis. Many organisms can instinctively recognise food, predators, rivals and mates. But even the broadest evolutionary knowledge must operate within the limits of empiricism... not the "invisibles" I mentioned. And of course it is a rather different kind of a priori knowledge than the analytical kind, based on language and tautologies.
I'm sorry I'm not going to spend time untangling this confusion in the comments. I hope that if you keep reading my posts you'll eventually have enough insights to figure this out for yourself.
Space and time don't need to be justifiable, because they are not propositions.
Their existence and properties are propositions.
I think you meant Euclidean.
That too, though I was mostly hinting at relativity.
Curious how our understanding of things that some people assume are beyond observation happens to be changed by scientific discovery, isn't it?
Almost all contemporary epistemologists will say that they are fallibilists
That's nice. Though the key words here are "contemporary" and "epistemologists".
There are still minor nuances with fallibilism, like the fact that people still manage to be confused by the possibility of a Cartesian Demon, but I'll get to them in time.
That is, where skepticism claims that something can't be known, the claim that something can't be known is itself a claim to know; thus skepticism is just a special case of claiming to know, where the particular knowledge rejects knowing the matter at hand.
Yes, however a version of skepticism that claims that nothing can be known for sure/justified beyond any doubt doesn't have this problem.
Figuring out/refining the right kind of philosophical skepticism that would add up to normality is a bit of a challenge, but it's not that hard, really. But this requires taking the first step: to stop dismissing skepticism out of hand.
Yes you are thinking in the right direction.
I'll make a separate post about how exactly skepticism adds up to normality. For now it's an invitation to try to figure it out yourself.
The point is to refine the wrong question about certainties 2 into a better question about probabilistic knowledge 2'. If you just want to get an answer to 2 - then this answer is 'no'. We can't be certain that our knowledge is true. Then again, neither do we need to be.
If your question is how this is different from circular reasoning, then consider the difference:
Evolution produced my brain, which discovered evolution. Having considered this thought, I'm now certain of both, and so the case is closed. No more arguments or evidence can ever persuade me otherwise.
I can never be certain of anything, but to the best of my knowledge the situation looks exactly the way it would look if my brain were produced by evolution, which allowed my brain to discover evolution. I'll be on the lookout for counter-evidence, because maybe everything I know is a lie, but for now, on full reflection of my knowledge, including techniques of rationality and notions of simplicity and computational complexity, this seems to be the most plausible hypothesis.
I'll be talking about the Münchhausen trilemma in more detail in a separate post. But here are some highlights:
"from the inside" is what we care about here
There are two things that we care about
If our brains are created by natural selection and then they discover natural selection we have a straightforward non-paradoxical causal history for how our maps correlate to the territory:
Natural Selection -> Brain -> Natural Selection*
Where A* means map of A.
But that's an assumption. Doesn't lead you to know things.
In the map it is an assumption. But in the territory it's either true or false and it doesn't matter whether we've assumed it or not.
A map can correspond to the territory even if no one knows about it. We can have 1. without 2.
your knowledge is in the map too
You can map the map-territory correspondence as well. As soon as we consider it, we get
(Natural Selection -> Brain -> Natural Selection*)*
This allows us to get a map of map-territory correspondence and then a map of map-territory correspondence of a map of map-territory correspondence and so on.
So, even though we do not have 2., and in fact cannot have 2. in principle, as any reason for certainty would have to go through our mind,
we can have
2'. We are somewhat justifiably somewhat confident in 1
and
3'. We are somewhat justifiably somewhat confident in 2
and so on as long as we have enough data to construct a meta-model of the next level.
As a warm-up exercise, try the tree story with something actually contentious
It's a fine exercise for beginners, but I hope we are long past it, at this point.
All the "contentiousness" evaporates as soon as we've fixed the definitions and got rid of the semantic confusion.
Note the lack of agreement about what empirical evidence even is, and about how science works.
Granted. This disagreement is resolved the same way still. You gather evidence about interpreting evidence. You reflect on it with your mind. And so on.
Probably not very, but philosophy isn't stuck there, since it has plenty of highly technical debates.
The "technicality" of the debates is really not the issue here.
The evolutionary argument guarantees that you can know just enough for your survival, and only that, things like which berries to eat and animals to avoid being eaten by. (There's no evidence at all that it's "optimal".)
Are you not familiar with the notion of optimization process? When you are a result of successful replication of imperfect replicators in a competitive environment with limited resources there is quite a lot of evidence for some kind of "optimality".
Typical philosophical problems, about ontology and epistemology, the real nature of things, have no relevance to survival, so the evolutionary argument doesn't tell you they are soluble.
If they existed in some kind of separate magisterium where our common knowledge wasn't applicable, then yes. Is that your stance?
Some things are visible, and subject to direct feedback; other things aren't.
So you use indirect feedback, building on top of knowledge of things that are directly visible. I'm rather sure you understand how this works.
To show that empiricism works, you need to show that empiricism *alone* works, and that it works for everything, including the tricky edge cases. And you can't infer all that from the fact that it works in one simple case.
I can keep applying it to the "tricky cases" and see how things work from there. And this way I can aggregate more and more evidence and so on and so forth. Never reaching absolute certainty but always refining my tools.
models don't just drop out of the data.
They kind of do, in a manner of speaking. There are several ways to aggregate data into a model. We can try multiple of them and see how these models predict new data, thereby collecting evidence about what kinds of models are good at data prediction in general, and so refining our tools of model construction.
Being already selected for intuitions related to surviving in the world and not starting from scratch helps quite a lot.
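The model-comparison idea above can be sketched as a toy example (the data and model names are illustrative, not from the post): two competing ways of aggregating the same data into a model are scored by how well they predict held-out observations, and the prediction error is the evidence for preferring one over the other.

```python
# Toy sketch: compare two ways of aggregating data into a model
# by how well each predicts *new* data it wasn't built from.

def fit_mean(xs, ys):
    """Aggregate the data into a constant model (ignores x)."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    """Aggregate the data into a least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def squared_error(model, xs, ys):
    """Total squared prediction error on given data."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys))

# "Old" data the models are built from (roughly y = 2x)...
train_x, train_y = [0, 1, 2, 3], [0.1, 1.9, 4.1, 5.9]
# ...and "new" data the models must predict.
test_x, test_y = [4, 5], [8.0, 10.1]

models = {"mean": fit_mean(train_x, train_y),
          "line": fit_line(train_x, train_y)}
scores = {name: squared_error(m, test_x, test_y)
          for name, m in models.items()}
# The linear aggregation predicts the new data far better than the
# constant one; that difference in error is the collected evidence.
```

Nothing here reaches certainty: a third model could still outperform both on the next batch of data, which is exactly the "refining our tools" loop described above.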
Conjecture is a conscious, creative and cognitive process, not something that happens automatically as part of perception.
Okay, I think I understand what is going on here. Are you under the impression that I'm trying to bring back the old empiricism vs rationalism debate, arguing on the side of empiricism?
If so, I want to stop you here, as this couldn't be further from the truth. I believe this whole debate was quite embarrassing, and the whole distinction between pure observation and pure cognition doesn't make sense in the first place. Observation is cognition; cognition is observation. You can skip all the obvious points about how one actually needs a brain to make observations - I've explicitly mentioned it myself in the post.
There are direct quantifiable tests for predictive accuracy, but no way of directly comparing a map to the territory. Scientific realists hope and assume that empirical adequacy adds up to ontological correctness... but it isn't necessarily so.
I'll talk about it in a future post.
One way of making this point is that ontologically wrong theories can be very accurate.
Yes, that's totally fine. I don't think we have any disagreement here.
Our inability to make direct comparisons between map and territory extends to an inability to tell how close we are to the ultimately accurate ontology. Even probabilistic reasoning can't tell us how likely our theories are in absolute terms. We only know that better theories are more probably correct than worse ones, but we don't really know whether current theories are 90% correct or 10% correct, from a God's eye point of view.
[Half joking]
Thankfully, there doesn't seem to be any God, so we might as well not care about his point of view too much.
[/half joking]
Yes, it's all probabilities all the way down, without perfect certainty. This is fine. We can come up with adversarial examples where it means that we were completely duped, and our views are completely disconnected from "true reality" and were simply describing an "illusion", but
Examples, please.
There will be plenty in future posts. But generally, consider the fact that philosophy reasons in all directions, and normality is only a relatively small region of all possible destinations.
Have you solved philosophy? Has anybody?
Well, I've gone further than many. But it's not relevant to the point I'm making here.
I also thought the robot's answer missed the point quite badly... because it reduced the ought all the way down to an is -- or rather a bunch of ises.
If you dismiss any reduction of ought to is, you are essentially dogmatically certain that Hume's guillotine is true. Is it your stance?
If what one ought to do reduces to what one would do
Not to what one would do. Your ethical module may not be directly connected to the behavioral one, and so your decisions are based on other considerations, like desires unrelated to ethics. This doesn't change the fact that what you ought to do is the output (or a certain generalization of multiple outputs) of the ethical module, which is a computation taking place in the real world, which can be observed.
there are potentially eight billion answers to what one ought to do.
Potentially, but not actually. Once again, when you look, it turns out that individual ethical views of people are not *that* different. That said, there is still room for disagreement, and how exactly we aggregate individual ethical preferences into morality is still up for debate. But this is the next question, with a somewhat similar direction for a solution.
There's a consistent theme in rationalist writing on ethics, where the idea that everyone has basically the same values , or "brain algorithms", is just assumed ... but it needs to be based on evidence as much as anything else.
Not basically the same, but somewhat similar. And it's not just assumed, it's quite observable. Human ethical disagreements are mostly about edge cases. What exactly is your claim here -- that human values are not correlated at all?
Reducing ethical normativity isn't bad, but doing it in a way that leads to sweeping subjectivism is bad. If you accept subjectivism, you miss better answers.
I think calling it subjectivism is very misleading. The whole subjective/objective duality is quite horrible - I'll be dedicating a post to it at some point. It's social constructivism of morality, which is rooted in our other knowledge about game theory and evolution.
In the first comic, the engineer's answer is also a plausible philosophical answer.
Yes, this is exactly my point. A lot of things which are treated as "applied, missing-the-point answers" are in fact legitimately philosophically potent. At the very least, we should be paying much more attention to them.
In the third comic, the philosopher is technically correct. You can't achieve certainty from a finite chain of observations, and you can only make a finite chain. Modern empiricists have admitted this, giving up on certainty.
Therefore it's not just "by looking" but "pretty much by looking". I completely agree about the necessity of abandoning the notion of certainty. If you want to give philosophers some credit for this - I agree. The irony of the joke stays the same: when the question is refined so that the problematic notion of "certainty" is removed, the naive answer turns out to be basically true.
"This statement is a false" - is a paradox.
"This statement is unprovable" - is not.
We can say that both are self-defeating if we want to, but that doesn't really change anything.
Let's just say that Kant didn't see that coming. He essentially made a very confident prediction about the nature of space and time that was later shown wrong.
I think this is exactly fair. It's true that philosophy has some excusable complications preventing it from achieving the same rigor as math. But then one shouldn't claim this rigor for philosophy and be very careful when comparing philosophy to math.
I don't need to presuppose that some elaborate theory like the platonic mathematical universe is wrong, when I have a simpler account for its evidence.
And what exactly do you mean by real here?
It means that it's conditional knowledge, exactly what I'm talking about.