
"Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration."

Whether this view is more accurate than Descartes's depends on whether the belief in question is already commonly accepted. When, in a typical situation, a typical person Bob says "X is Y, therefore I will perform act A" or "X should be Y, therefore we should perform act A", Bob is not making a statement about X or Y; he is making a statement about himself. All the truth or reality required for Bob to signal his altruism is that it be probable that he believes that X is Y or that X should be Y. The probability of this belief depends far more on what else Bob and his peers believe than on the reality or truth of "X is Y".

"The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concurrence of atoms... Modern biologists were coming once more to the acceptance of something and that was a vital principle."

Given what we know now about the vastly complex and highly improbable processes and structures of organisms -- what we have learned since Lord Kelvin about nucleic acids, proteins, evolution, embryology, and so on -- and given that many mysteries remain, such as consciousness and aging, or how to cure or prevent viruses, cancers, or heart disease, for which we still have far too few clues -- this rather metaphorical and poetic view of Lord Kelvin's was certainly, for its time, a far more accurate view of the organism than any alternative model positing that the many details and functions of the human body, or its origins, could be accurately modeled by simple equations like those used for Newtonian physics. To the extent vitalism deterred biologists from such idiocy, vitalism must be considered, for its time, a triumph. Too bad there were too few similarly good metaphors to deter people from believing in central economic planning or Marx's "Laws of History."

Admittedly, the "infinitely different" part is hyperbole, but "vastly different" would have turned out to be fairly accurate.

In line with previous comments, I'd always understood the idea of emergence to have real content: "systems whose high-level behaviors arise or 'emerge' from the interaction of many low-level elements" as opposed to being centrally determined or consciously designed (basically "bottom-up" rather than "top-down"). It's not a specific explanation in and of itself, but it does characterise a class of explanations, and, more importantly, excludes certain other types of explanation.

This comment hits the bullseye. The general idea of emergence is primarily useful in pointing out that, when we don't understand something, there are still alternatives to explanations that superstitiously posit a near-omniscient being, or that pretend to have information or an ability to model complex phenomena that one does not in fact have. So, for example, a highly improbable organism does not imply a creator, a good law does not imply a legislator, a good economy does not require an economic planner, and so on, because such things can be generated by emergent processes. To come to such a conclusion does not require that we have first reasoned out the specific process by which the object in question emerged. Indeed, if we had, we wouldn't need to invoke emergence any more, but rather some more specific algorithm, such as natural selection to explain the origin of species.

For this reason, I strongly disagree with the following definition:

Let K(.) be Kolmogorov complexity. Assume you have a system M consisting of and fully determined by n small identical parts C. Then M is 'emergent' if M can be well approximated by an object M' such that K(M') << n*K(C).

Because it is just in situations where a phenomenon's complexity is not highly reducible -- where M is not fully determined by n small identical parts, or where it is but K(M') is not substantially smaller than n*K(C) -- that the idea that a phenomenon is emergent, rather than the product of a near-omniscient or near-omnipotent creator, is most useful.
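
To make the quoted definition concrete: Kolmogorov complexity is uncomputable, so any numerical test of it has to use a proxy. Here is a minimal sketch, assuming zlib-compressed length as that (crude) proxy; the part C, the count n, and the factor-of-100 reading of "<<" are all illustrative assumptions of mine, not anything proposed by the definition's author.

```python
import os
import zlib

def k_approx(data: bytes) -> int:
    # Crude proxy for Kolmogorov complexity K(.): true K is
    # uncomputable, so compressed length stands in for it here.
    return len(zlib.compress(data, 9))

n = 10_000
C = b"ab"  # one small identical part (an illustrative choice)

# A system built by trivially repeating its parts...
M_repetitive = C * n
# ...and one with little reducible structure (random bytes).
M_irreducible = os.urandom(n * len(C))

for name, M in [("repetitive", M_repetitive),
                ("irreducible", M_irreducible)]:
    K_M = k_approx(M)        # proxy for K(M'), taking M' = M itself
    bound = n * k_approx(C)  # proxy for n * K(C)
    # The quoted definition calls M 'emergent' iff K(M') << n*K(C);
    # "<<" is read here, arbitrarily, as "smaller by a factor of 100".
    print(f"{name}: K(M')~{K_M}, n*K(C)~{bound}, "
          f"'emergent' by the definition: {K_M * 100 < bound}")
```

On this proxy, the trivially repetitive system easily satisfies K(M') << n*K(C) and so counts as 'emergent', while the incompressible one does not -- which is just the backwards result objected to above: the cases where invoking emergence is most useful are the ones the definition excludes.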

I'd add that the belief that any important phenomenon is highly reducible, and the belief that, even if it is, humans are capable of undertaking that reduction, are two other species of superstition. These are just as pernicious as the related superstition of the near-omniscient creator. In many, perhaps most cases of interest, we either have to be satisfied with regarding a phenomenon as "emergent" or we have to superstitiously pretend that some being has information or a capability of reduction that it does not in fact have.
