Have you noticed that the streets you see tend to be more crowded, airplanes have more seats taken, and restaurants hold more people, on average in your observations, than they actually are on average?
You are mixing together situations where a person can be correctly approximated as a random sample from some population with situations where that's not the case.
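This size-biased sampling effect can be checked with a tiny simulation (the fleet numbers below are invented for illustration): averaging over flights gives one number, averaging over passengers another, because a crowded flight is witnessed by more people.

```python
# Hypothetical fleet: four flights with varying numbers of passengers.
occupancies = [5, 50, 150, 300]

# Average occupancy from the airline's point of view:
# each *flight* is an equally likely sample.
flight_avg = sum(occupancies) / len(occupancies)

# Average occupancy experienced by a random *passenger*:
# a flight with k passengers is sampled k times, once per passenger.
passenger_samples = [k for k in occupancies for _ in range(k)]
passenger_avg = sum(passenger_samples) / len(passenger_samples)

print(flight_avg)     # 126.25
print(passenger_avg)  # ~227.77: crowded flights are over-sampled
```

The passenger's average is roughly 228 against the airline's 126, even though both describe the same fleet. Neither model is wrong; they answer different questions.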
What we need to do is look at every situation and try to come up with an appropriate probabilistic model that describes it to the best of our knowledge. A map that fits the territory.
What mainstream anthropic reasoning is doing is assuming that this model has to always be the same in every situation and then trying to bite ridiculous bullets when it predictably leads to bizarre conclusions.
The point of the exercise is to add skepticism to normality. Skepticism is the starting point. Justified uncertainty is the goal. The journey between the two is the exercise.
Personally, I think that pointing out the self-defeating nature of skepticism, in the way it's described here, is indeed a good argument against it.
"This statement is false" - is a paradox.
"This statement is unprovable" - is not.
We can say that both are self-defeating if we want to, but that doesn't really change anything.
We can say with confidence that they are curved and that our experience of them depends on our velocity and position.
Let's just say that Kant didn't see that coming. He essentially made a very confident prediction about the nature of space and time that was later shown wrong.
This seems unfair to philosophers because the very nature of their subject matter precludes, or at least makes far more difficult, the precise articulation of what they're talking about in a way which might allow for mathematical levels of rigour.
I think this is exactly fair. It's true that philosophy has some excusable complications preventing it from achieving the same rigor as math. But then one shouldn't claim this rigor for philosophy, and one should be very careful when comparing philosophy to math.
This presupposes that mathematics is not the real world. If we lived in a Tegmark-like platonic mathematical universe, and knew this for certain, then mathematics could indeed prove things about the real, physical world.
I don't need to presuppose that some elaborate theory like the platonic mathematical universe is wrong when I have a simpler account for its evidence.
Even if we don't, I would still consider mathematics to be real, just not physical, and therefore to be possible to use to prove things about the real world. I object to the use of the word 'merely' here, even if it wouldn't be possible to prove we lived in a mathematical universe if we did.
And what exactly do you mean by real here?
I think it's fair for me to point out that the fact that mathematics requires the assumption of axioms does not show that it can't provide justification for knowledge, because the axioms themselves can be included in statements about what follows from the axioms.
It means that it's conditional knowledge, exactly what I'm talking about.
Invoking the anthropic principle makes me uncomfortable.
I commend you for feeling uncomfortable about it.
If everything I observe is a demonic illusion, I admit defeat, and so I gain more utility by focusing on the worlds where I expect my actions can, in principle, lead to more utility.
Kind of, but consider how much defeat you actually need to admit.
https://substack.com/@apeinthecoat102771/note/c-117316511
A similar principle applies to trusting my tools of logical analysis.
I'll be talking more about the Münchhausen trilemma in future posts.
I'm pretty well aware that I don't actually know anything for certain, and Bayes proponents are always saying there is no 0 or 1.
This is the normality that skepticism is adding up to. Probabilities, not certain knowledge.
At the same time, for most things, my best decision-making seems to have come from taking my perceptions at face value.
You can be somewhat critical about your perceptions and even hold in your mind the possibility of an evil demon.
Which version of scepticism? Even the article you link to concedes there is at least one form of scepticism that's refutable through self-defeat.
If extreme scepticism is self-defeating, and certainty is unobtainable, then what you are left with is moderate scepticism -- AKA fallibilism. What you need is to try the right kind of scepticism.
I think you've just answered your own question more or less.
If the world is weird, I wish to believe that the world is weird.
The goal is to add up to truth.
If normality means supporting all your intuitions, then you are going to have to disbelieve much science and maths.
If it means something else ...what?
We are in agreement here. And if you read to the end of the post you'll see the answer to your question:
After all, science is the normality to which we would like philosophy to add up.
Something can work in some contexts, but not in others.
Empiricism doesn't work for things you can't see, eg:-
Modality, Counterfactuals, Possible versus Actual.
Normativity, Ought versus is.
Essence versus Existence, hidden explanatory mechanisms.
A priori truth could have a naturalistic basis. Many organisms can instinctively recognise food, predators, rivals and mates. But even the broadest evolutionary knowledge must operate within the limits of empiricism, not the "invisibles" I mentioned. And of course it is a rather different kind of a priori knowledge than the analytical kind, based on language and tautologies.
I'm sorry I'm not going to spend time untangling this confusion in the comments. I hope that if you keep reading my posts you'll eventually have enough insights to figure this out for yourself.
Space and time don't need to be justifiable, because they are not propositions.
Their existence and properties are propositions.
I think you meant Euclidean.
That too, though I was mostly hinting at relativity.
Curious how our understanding of things that some people assume to be beyond observation nevertheless happens to be changed by scientific discovery, isn't it?
Almost all contemporary epistemologists will say that they are fallibilists
That's nice. Though the key words here are "contemporary" and "epistemologists".
There are still minor nuances with fallibilism like the fact that people still manage to be confused by the possibility of Cartesian Demon, but I'll get to them in time.
That is, where skepticism claims to say that something can't be known, the claim that something can't be known is itself a claim to know, and thus skepticism is just a special case of claiming to know but where the particular knowledge rejects knowing the matter at hand
Yes, however a version of skepticism that claims that nothing can be known for sure/justified beyond any doubt doesn't have this problem.
Figuring out and refining the right kind of philosophical skepticism that would add up to normality is a bit of a challenge, but it's not that hard, really. It does require taking the first step, though: to stop dismissing skepticism out of hand.
Yes, you are thinking in the right direction.
I'll make a separate post about how exactly skepticism adds up to normality. For now it's an invitation to try to figure it out yourself.
The point is to refine the wrong question about certainty, 2, into a better question about probabilistic knowledge, 2'. If you just want to get an answer to 2, then this answer is 'no': we can't be certain that our knowledge is true. Then again, neither do we need to be.
If your question is how this is different from circular reasoning, then consider the difference:
Evolution produced my brain, which discovered evolution. Having considered this thought, I'm now certain of both, and so case closed. No more arguments or evidence can ever persuade me otherwise.
I can never be certain of anything, but to the best of my knowledge the situation looks exactly the way it would look if my brain was produced by evolution, which allowed my brain to discover evolution. I'll be on the lookout for counter-evidence, because maybe everything I know is a lie, but for now, on a full reflection of my knowledge, including techniques of rationality and notions of simplicity and computational complexity, this seems to be the most plausible hypothesis.
Providing a link to Doomsday Argument and the False Dilemma of Anthropic Reasoning, where I solve this anthropic issue. We can collapse all the meta-levels of anthropic reasoning into a simple principle: make sure that the map that you use actually corresponds to the territory.
You should be careful here, as it's very easy to overestimate how much you actually know. Anthropic problems like Sleeping Beauty and the Absent-Minded Driver are confusing specifically because of that.
There is also nothing illegal about noticing that P(X) is very low even though you already know that X is realized. If your model claims that some event X has low probability, but you've just observed it being realized, it's quite probable that your model is wrong.
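This last point can be sketched as a straightforward Bayesian update (the priors and coin biases below are invented for illustration): a single observation that one model calls improbable swings belief sharply toward a rival model.

```python
# Two candidate models for a coin: "fair" and "biased toward heads".
prior_fair, prior_biased = 0.9, 0.1   # we initially favor the fair model

# Observation: 9 heads in a row.
p_obs_fair = 0.5 ** 9                  # ~0.002 under the fair model
p_obs_biased = 0.9 ** 9                # ~0.387 under the biased model

# Bayes: P(fair | obs) = P(fair) * P(obs | fair) / P(obs).
evidence = prior_fair * p_obs_fair + prior_biased * p_obs_biased
posterior_fair = prior_fair * p_obs_fair / evidence

print(round(posterior_fair, 3))  # 0.043: the fair model is now the underdog
```

Observing the "improbable" event didn't break anything; it simply served as strong evidence against the model that called it improbable.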