anorangic

Comments

How Strong is Our Connection to Truth?

Regarding logic and methods of knowing, I agree that logic might not be the only useful method of producing knowledge, but why shouldn't you have it in your toolbox? I'm just trying to argue that there's no reason for anyone to neglect logical arguments if they yield new knowledge.

I agree that "prior" is a vastly better word choice than "axiom" because it allows us to refine the prior later.

The "planetary consciousness" thing also appears to me to be a misunderstanding: I don't want to propose that all information about the world should be retrieved and processed, in the same way that, even in my direct environment, what my neighbour does in his house is none of my business.

How Strong is Our Connection to Truth?

How do you differentiate between "Truth" and "truth"? I would really appreciate some clarification regarding these two words because it would help me to understand your comment better. Thanks :)

How Strong is Our Connection to Truth?

I'm very grateful that you bring up these points. Sorry for the long response, but I like your comment and would like to write down some thoughts on each part of it.

One doesn't need to assume an objective reality if one wants to be agentic. One can believe that 1) stuff you do influences your prosperity, and 2) it is possible to select for more prosperous influences.

First of all, I think choosing the term "objective" in my post was too strong, and not quite well defined. (My post also seems at risk of circular reasoning because it somehow tries to argue for rationality using rationality.)
I really should have thought more about this paragraph. You proposed an alternative to the assumption of an objective reality. Your alternative still requires assuming that there are some "real" rules determining which of one's actions cause which effects on my sensations, today or in the future, and thus some form of reality. But this reality could indeed be purely subjective, in the sense that other "sentient beings" (if there are any) might not experience the same "reality", the same rules.

The use of the concept of "effective" is a bit wonky there, and the word seems to carry a lot of the meaning. The "effective method" I know of from memory is a measure of what a computer or mathematician is able to unambiguously specify. I find it hard to imagine fairly judging a method to be ineffective.

What I mean by effectiveness is a "measure of completeness": if some method for obtaining knowledge does not obtain any knowledge at all, it is not effective at all; if it were able to derive all true statements about the world, it would be very effective. Logic is a tool that puts existing statements together and yields new statements that are guaranteed to be true, given that the hypotheses were correct. So I'd argue that not having logic in one's toolbox is never an advantage with respect to effectiveness.
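The claim that logic only ever yields true statements from true hypotheses can be checked mechanically for propositional rules. The following is a minimal sketch (the function names and setup are mine, purely for illustration): it brute-forces every truth assignment and verifies that whenever all premises of a rule hold, the conclusion holds too, so the rule can never turn true inputs into a false output.

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def rule_is_truth_preserving(premises, conclusion, n_vars):
    """Check an inference rule over all 2**n_vars truth assignments.

    premises and conclusion are functions taking a tuple of booleans
    (one boolean per propositional variable).
    """
    for assignment in product([False, True], repeat=n_vars):
        if all(prem(assignment) for prem in premises):
            if not conclusion(assignment):
                return False  # found true premises with a false conclusion
    return True

# Modus ponens: from P and (P -> Q), conclude Q. Truth-preserving.
print(rule_is_truth_preserving(
    [lambda v: v[0], lambda v: implies(v[0], v[1])],
    lambda v: v[1], 2))  # True

# A bogus rule: from (P -> Q), conclude P. Not truth-preserving.
print(rule_is_truth_preserving(
    [lambda v: implies(v[0], v[1])],
    lambda v: v[0], 2))  # False
```

Of course this only covers propositional logic, but it illustrates the point: a valid rule adds no risk of error, so leaving it out of the toolbox gains nothing.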

Just because you need to have a starting point doesn't mean that your approach needs to be axiomatic.

This is not clear to me. What do you think is the difference between an axiom and a starting point in epistemology?

It is unclear why planetary consciousness would be desirable. If you admit that you can't know what happens on the other side of the planet to a great degree, you don't have to rely on unreliable data mediums. Typically your life happens here and not there. And even if "there" is relevant to your life, it usually has an intermediary through which it affects stuff "here".

This is also a very good point, and I'll try to clarify. Consider an asteroid that is going to collide with Earth. There will be some point in the future where we know about the asteroid's existence, even if only for a short time frame, depending on how deadly it is. But it can be hard to know the asteroid's position (or even its existence) in advance, although this would be much more useful.

So, in a nutshell, I'm also interested in parts of reality that do not yet strongly interact with my environment but that might interact with it in the future. (Another reason might be ethical: we should know when, somewhere in the world, someone commits genocide, so that we can use our impact to do something about it.)

So maybe the problem is a lag in feedback, or hidden complexity in the causal chain between one's own actions and the feedback, and this complexity requires one to have a deep understanding of something one cannot observe directly.