
I'm not sure that analogy extends to our cognitive processes, since we know for a fact that: 1. We talk about many things, such as free will, whose existence is controversial at best, and 2. Most of the processes causally leading to verbal expression are preconscious. There is no physical obstacle to our talking about perceptions that our verbal mechanisms lack direct causal access to, for reasons similar to the reasons we talk about free will.

Why must A cause C for C to be able to accurately refer to A? Correlation through indirect causation could be good enough for everyday purposes. I mean, you may think it too perfect a coincidence that we usually happen to experience whatever we talk about, but is it true that we can always talk about whatever we experience? (This is an informal argument at best, but I'm hoping it will contradict one of your preconceptions.)

Yeah, it might have helped to clarify that the infinitesimal factors I had in mind are not infinitely small as numbers from the standpoint of addition. Since the factor that leaves a product unchanged is 1 rather than 0, "infinitely small" factors must be infinitesimally greater than 1, not 0. In particular, I was talking about a Type II product integral, with the formula ∏(1 + f(x) dx). If f(x) = 1, then we get exp(∫ 1 dx) = e^constant = constant, right?
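For reference, here is the Type II identity I'm relying on, written out over a generic interval [a, b] (the endpoints are just placeholders):

$$\prod_a^b \bigl(1 + f(x)\,dx\bigr) = \exp\!\left(\int_a^b f(x)\,dx\right), \qquad f(x) \equiv 1 \;\Rightarrow\; \exp\!\left(\int_a^b dx\right) = e^{\,b-a}.$$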

No, he's right. I didn't think to clarify that my infinitely small factors are infinitesimally larger than 1, not 0. See the Type II product integral formula on Wikipedia that uses 1 + f(x).dx.

Thanks, the product integral is what I was talking about. The exponentiated integral is what I meant when I said the integration would move into the power term.
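To spell out what I mean by the integration moving into the power term, the Type I (geometric) form makes it explicit; this is just the standard identity for positive f, not anything specific to this thread:

$$\prod_a^b f(x)^{dx} = \exp\!\left(\int_a^b \ln f(x)\,dx\right).$$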

Someone has probably thought of this already, but if we defined an integration analogue in which ever-larger logarithmic sums make the exponentiated value approach 1 rather than infinity, then we could use it to define a really cool account of logical metaphysics: each possible state of affairs has an infinitesimal probability, there are infinitely many of them, and their probabilities sum to 1. This probably won't be exhaustive in some absolute sense, since no formal system is both consistent and complete, but if we define states of affairs as formulas in some consistent language, then why not? We could then assign different differential formulas to different classes of states of affairs.
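As a minimal sketch of the "infinitely many infinitesimal probabilities summing to 1" part alone, an ordinary probability density already behaves this way; the uniform density on [0, 1] below is just an illustrative stand-in, not a claim about the actual space of states of affairs:

$$P\bigl(s \in [x, x + dx]\bigr) = p(x)\,dx, \qquad \int_0^1 p(x)\,dx = 1 \quad \text{for } p(x) \equiv 1.$$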

(That is the context in which this came up. The specific situation is more technically convoluted.)

Integrals sum over infinitely small values. Is it possible to multiply infinitely small factors? For example, the integral of dx over some interval is a constant, since infinitely many infinitely small values can sum to any constant. But can you do something along the lines of taking an infinitely large root of a constant and getting an infinitesimal differential that way? Multiplying those differentials together would then yield some constant again.
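A quick numerical sketch of what I have in mind; the constant, the interval, and the step counts are arbitrary choices, and this only checks that the limits behave the way I'm describing:

```python
# (1) An "infinitely large root" of a constant is a factor only slightly above 1,
#     and multiplying enough of them recovers the constant.
# (2) A discrete Type II product, prod(1 + f(x)*dx), approaches exp(integral of f)
#     as dx shrinks; with f(x) = 1 on [0, 1] the limit is e.

import math

C = 7.0
for n in (10, 1_000, 100_000):
    root = C ** (1.0 / n)          # n-th root of C: a factor just above 1
    product = root ** n            # multiplying n copies recovers C
    print(f"n={n:>7}: n-th root = {root:.6f}, product of n copies = {product:.6f}")

for n in (10, 1_000, 100_000):
    dx = 1.0 / n
    prod = 1.0
    for _ in range(n):
        prod *= 1.0 + 1.0 * dx     # f(x) = 1 on [0, 1]
    print(f"n={n:>7}: product = {prod:.6f}, exp(1) = {math.e:.6f}")
```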

My off-the-cuff impression is that this probably won't lead to genuinely new math. In the most basic case, all it does is move the integrations up into the exponents that other quantities are raised to. But if we somehow end up with complicated patterns of logarithms and exponentiations, say if that other stuff itself involves calculus and so on, then who knows? Is there a standard name for this operation?

I don't see how you can achieve a reductionist ontology without positing a hierarchy of qualities. In order to propose a scientific reduction, we need at least two classes, one of which is reducible to the other. Perhaps "physical" and "perceived" qualities would be more specific than "primary" and "secondary" qualities.

Regarding your question, if the "1->2 and 1->3" theory is accurate, then I suppose that when we say "red is more like violet than green", certain wavelength ranges R cause the human cognitive architecture to undertake some brain activity B that drives both the perception of color similarity A and the behavior which accords with perception, C.

So it follows that "But, by definition of epiphenomenalism, it's not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B." is true, but "But now by our theory of reference, subjective-red is B, rather than A." is false. The problem comes from an inaccurate theory of reference which conflates the subset of brain activities that are a color perception A with the entirety of brain activities, which includes preconscious processes B that cause A as well as the behavior C of expressing sentences S1 and S2.

Regarding S2, I think there is an equivocation between different definitions of the word "subjective". This becomes clear when you consider that the light rays entering your eyes are objectively red. We should expect any correctly functioning human biological apparatus to report the object as appearing red in that situation. If subjective experiences are perceptions resulting from your internal mechanisms alone, then the item in question is objectively red. If the meaning of "subjective experience" is extended to include all misreportings of external states of affairs, then the item in question is subjectively red. This dilemma can be resolved by introducing more terms to disambiguate among the various possible meanings of the words we are using.

So in the end, it still comes down to a mereological fallacy, but not the one that non-physicalists would prefer we end up with. Does that make sense?

This is an interesting example, actually. Do we have data on how universal perceptions of color similarity and the like are? We find entire civilizations using some strange analogies in the historical record. For example, in the last century, the Chinese felt they were more akin to Russia than to the West because the Russians were a land empire, whereas Westerners came by sea, like the barbaric Japanese who had started the Imjin War. Westerners had also employed strong-arm tactics similar to the Japanese, forcing China to buy opium and so on. Personally, I find it strange to base an entire theory of cultural kinship on whether one comes by land or sea, but maybe that's just me.

I don't think epiphenomenalists are using words like "experience" in accordance with your definition. I'm no expert on epiphenomenalism, but they seem to be using subjective experience to refer to perception. Perception is distinct from external causes because we directly perceive only secondary qualities like colors and flavors rather than primary qualities like wavelengths and chemical compositions.

EY's point is that we behave as if we have seen the color red. So we have: 1. physical qualities, 2. perceived qualities, and 3. actions that accord with perception. To steelman epiphenomenalism, instead of 1 -> 2 -> 3, are other causal diagrams not possible, such as 1 -> 2 and 1 -> 3, mediated by the human cognitive architecture? (Or maybe even 1 -> 3 -> 2 in some cases, where we perceive something on the basis of having acted in certain ways.)

However, the main problem with your explanation is that even if we account for the representation of secondary qualities in the brain, that still doesn't explain how any kind of direct perception of anything at all is possible. This seems kind of important to the transhumanist project, since it would decide whether uploaded humans perceive anything or whether they are nothing but the output of numerical calculations. Perhaps this question is meaningless, but that's not demonstrated simply by pointing out that, one way or another, our actions sometimes accord with perception, right?

In the Less Wrong Sequences, Eliezer Yudkowsky argues against epiphenomenalism on the following basis: under epiphenomenalism, the experience of seeing the color red fails to be a causal factor in the behavior that is consistent with our having seen the color red. However, it occurs to me that there could be an alternative explanation for that outcome. It could be that the human cognitive architecture is set up in such a way that light in the wavelength range we are culturally trained to recognize as red causes both the experience of seeing the color and the actions consistent with seeing it. Given the research showing that we decide to act before becoming conscious of our decision, such a setup would not surprise me if it turned out to be true.

Thanks. You're right, that part should be expanded. How about:

At this point, you have two choices: either 1. one randomly selected door, or 2. one door out of two, chosen by the host on the basis that the other one does not hold the prize.

You would have better luck with option 2, because choosing that door is as good as opening two randomly selected doors, which is twice as good as opening one randomly selected door as in option 1.
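If a demonstration helps, here is a quick simulation sketch of that claim; the door labels, trial count, and function name are arbitrary choices of mine:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the three-door game; returns True if the player wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    first_pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    host_opens = random.choice([d for d in doors if d != first_pick and d != prize])
    if switch:
        # Option 2: take the one remaining door the host left closed.
        final_pick = next(d for d in doors if d != first_pick and d != host_opens)
    else:
        # Option 1: stick with the original randomly selected door.
        final_pick = first_pick
    return final_pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ≈ {wins / trials:.3f}")  # ~1/3 vs ~2/3
```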
