Ape in the coat




What do determinists here think about free will and Chalmers's hard problem of consciousness?

What? Where did I say that?

You said that some people would feel more free if counterfactuals and probability, which are part of our decision making algorithm, existed somewhere outside of our mind.

I don't know what you are referring to. Counterfactuals follow from indeterminism, because indeterminism means an event could have happened differently. It's quite straightforward.

There seems to be a huge gap between "something could have happened differently" and "there actually exists a parallel universe where this indeed happened differently". If we consider probability and counterfactuals to exist only on the map, it's easy to cross this gap by using different equivalent interpretations of our uncertainty. But otherwise I don't see how one follows from the other.

Why wouldn't it happen? To me that looks like a circular argument: nothing can happen unless it is determined, so everything is determined. Determinism, therefore determinism.

Because if the event happened I can now see the actual outcome. Therefore I can determine the outcome (by seeing it), therefore the event is not indeterminable. I agree that it's an obvious tautology; that's exactly why I feel so confused trying to imagine an alternative.

Whether indeterminism-based free will makes sense is a separate question from whether indeterminism makes sense.

I agree. But that's what my initial claim was about: libertarian free will not making sense.

Why haven't we reduced qualia already, if reductionism is an old idea?

For the same reason why we haven't yet developed the means to be immortal: it requires lots of actual scientific work in the direction that philosophy has shown us. The philosophical groundwork may have been done, but the scientific work is not yet. That's exactly what I said in the first comment.

What do determinists here think about free will and Chalmers's hard problem of consciousness?

I don't see how what you explained is more than decision making. As soon as we understand that probabilities are part of the map, not the territory, it's clear that causing the future is exactly the same thing as influencing it in a way that makes future A more likely than future B. I also do not understand why anyone would feel more free if their decision making algorithm existed outside of their mind, or how that is even possible in theory.

Another thing that I have trouble understanding is how the objective existence of counterfactuals follows from, or is even compatible with, indeterminism. When we model some event with multiple outcomes, we can either perceive it as random in one world, or as determinable in many worlds, where each world corresponds to one outcome. But you claim that this won't be enough for libertarians. For some reason they need both at the same time?

The most confusing thing for me is the whole idea of an objectively indeterminable event. If it's indeterminable even for the universe itself, how can it happen at all in this universe? I can think of a justification via our universe being an interventionist simulation, but this just passes the buck to the universe from which the intervention is performed.

I can definitely determine what my decision is. I do it every time I make one. And I do it via my decision making algorithm, which can be executed in this universe, specifically on my brain. This requires quite a lot of determinism, and I don't see how it can make sense if my decisions can't be determined. If someone's definition of free will requires decisions to be indeterminable, I claim that such a definition doesn't make any sense.

The Hard Problem is the problem of finding a physical reduction of qualia, so saying "duh, find a physical reduction of qualia" isn't some novel solution that no one thought of before.

What does novelty have to do with anything here? Anyway, understanding that we have already dealt with similar problems before can give some valuable insights.

Book Review: Free Will

I mostly agree.

I like this. Free will is the feeling when you don't know the causes of your thoughts and actions.

It's definitely a huge part of the puzzle, but not all of it. Free will is also the feeling of not knowing the choices you will make in the future, and of the process of determining those choices from all their causes.

Suppose Omega perfectly knows all the prior causes of my decisions, it has my source code and all the inputs. Omega would still have to run the source code with these inputs, to actually execute my decision making algorithm, so that it can determine my actions. But nevertheless my actions are determined by my decision making algorithm. This part of free will is completely real.
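The point can be made concrete with a toy sketch (the function and option names are purely illustrative, not anyone's actual decision theory): even a perfect predictor determines my choice by executing my algorithm.

```python
# Toy illustration: a deterministic decision making algorithm.
# Omega "predicts" my choice only by executing the same algorithm
# on the same inputs -- here, prediction just is execution.

def my_decision_algorithm(options, utility):
    """Pick the option with the highest utility (fully deterministic)."""
    return max(options, key=utility)

options = ["stay home", "go out"]
utility = {"stay home": 3, "go out": 5}.get

my_choice = my_decision_algorithm(options, utility)          # I determine my action
omegas_prediction = my_decision_algorithm(options, utility)  # Omega does it the same way

# Determinism is exactly what makes Omega's prediction possible.
assert my_choice == omegas_prediction
```

If the algorithm were not deterministic, Omega's run and mine could diverge, and there would be nothing for the prediction to latch onto.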

(Map vs territory distinction, essentially. Free will exists on the map, not in the territory. It is not an illusion, in the sense that it is actually there on the map; without perfect self-knowledge it can't be otherwise.)

Yes! But with a caveat. This state of not knowing which action will actually be executed seems to be essential to the working of the decision making algorithm: options need to be marked as reachable so that our tree search can find the best one. Also, the distinction between map and territory becomes fuzzy when the territory is our map making engine. Our decision making algorithm is embedded in our brain; in this sense our freedom of will is more than just part of the map.

Book Review: Free Will

I haven't read Harris's book, but my guess would be that he takes appropriate care not to sound like he does more than describing the world he sees.

I had originally expected exactly that from the book! But, in my opinion, it didn't turn out to be the case. I'm pretty sure that Harris could have done it if he intended to. My guess is that he wanted to be more relatable and appealing to a lay reader rather than to polish his prose too much.

What do determinists here think about free will and Chalmers's hard problem of consciousness?

I assume by determinists you mean the so-called "boring view of reality", something in the cluster of causal determinism and reductive materialism. I seem to fit quite well in there, so here is my take:

Humans have free will in the sense of decision making, planning, achieving our goals, and shaping the future the way we would like it to be. This (spoiler alert) requires causal determinism, or at least quite a lot of it. It does not require indeterminism or the existence of counterfactual worlds outside of one's mind at all. The libertarian definition of free will doesn't seem to make any sense.

All the philosophical groundwork for solving the hard problem of consciousness is already done. We can understand in principle how apparently-different-in-kind entities can be reducible to one thing. Now it's a scientific problem to figure out the exact reduction. Qualia are physical.

SIA > SSA, part 4: In defense of the presumptuous philosopher

I've been genuinely confused about all this anthropics stuff and read your sequence in hope of some answers. Now I understand better what SSA and SIA are, yet I am no closer to understanding why anyone would take these theories seriously. They often don't converge to normality, and they depend on weird a priori reasoning which doesn't resemble the way cognition engines produce accurate maps of the territory.

SSA and SIA work only in those cases where their base assumptions are true, and in different circumstances and formulations of thought experiments, different assumptions will be true. Then why, oh why, for the sake of rationality, do we expect to have a universal theory/reference class for every possible case? Why do we face this false dilemma of which ludicrous bullet to bite? Can we rather not?

Here is a naive idea for a superior anthropics theory: we update on anthropic evidence only if both SSA and SIA agree that we should. That saves us from all the presumptuous cases. It prevents us from having precognitive, telekinetic, and any other psychic powers to blackmail reality, while still allowing us to update in God's equal-numbers coin toss scenarios.
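The rule above can be sketched in a few lines. This is only a toy model under simplifying assumptions: the textbook SIA weighting (prior times observer count), the simplest reading of SSA where merely existing is no evidence about the world, and made-up world names and observer counts.

```python
# Toy version of the proposed rule: accept an anthropic update only
# when SSA and SIA agree on the posterior over worlds.

def sia_posterior(priors, observers):
    """SIA: weight each world's prior by its number of observers, renormalize."""
    weights = {w: priors[w] * observers[w] for w in priors}
    total = sum(weights.values())
    return {w: weights[w] / total for w in weights}

def ssa_posterior(priors, observers):
    """SSA (simplest reading): merely existing tells you nothing, so the
    posterior equals the prior (every world here contains some observer)."""
    return dict(priors)

def combined_update(priors, observers):
    ssa = ssa_posterior(priors, observers)
    sia = sia_posterior(priors, observers)
    if all(abs(ssa[w] - sia[w]) < 1e-9 for w in priors):
        return ssa          # both theories agree: the update is safe
    return dict(priors)     # they disagree: refuse to update

# Sleeping-Beauty-like case: tails produces two awakenings, heads one.
priors = {"heads": 0.5, "tails": 0.5}
print(combined_update(priors, {"heads": 1, "tails": 2}))  # SIA says 1/3 heads, SSA says 1/2: no update
print(combined_update(priors, {"heads": 1, "tails": 1}))  # equal observer counts: both agree
```

With equal observer counts in every world the two theories coincide, which is why the equal-numbers scenarios survive the veto.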

I'm pretty sure there are better approaches; I've heard lots of good stuff about UDT, but haven't yet dived deep enough into it. I've found some intuitively compelling approaches to anthropics on LW. Then why do we even consider SSA or SIA? Why are people still entertaining Grabby Aliens or the Doomsday Argument in 2021?

Three enigmas at the heart of our reasoning

I really empathize with being troubled by such questions. I was preoccupied with them a decade or so ago, and I found a way to actually make peace with them before I discovered Less Wrong, which in turn gave me some crucial insights, allowing me to solve these enigmas to my own satisfaction.

The way I originally made peace with these questions was through embracing the doubts rather than running from them. To, as you put it, "surrender to radical skepticism". Suppose that the questions are indeed unsolvable. That there is no ultimate justification, that everything is doubtful, that no absolute truth can ground our knowledge. Why would that be bad? How would we navigate in such a world?

The first impulse may be to fall for the fallacy of gray. It's understandable. But notice that some things are still easier to doubt than others. You may doubt your sensory inputs and your whole reasoning process. Allow yourself to do it. Try it for a while and notice how much harder it is than doubting the existence of an invisible pink unicorn. There is no rule that compels you to doubt so hard in some specific cases but not in others; if such a rule existed, it would be just as easy to doubt it. And notice that when you approach everything with the same level of doubt, it all adds up to normality.

The questions aren't answered yet. Why is it easier for me to doubt X than Y? But they are no longer torturous when you try to ground your knowledge in doubt rather than in certainty. Why did you think that absolute certainty was necessary in the first place? Isn't this idea really weird? How would it even work?