Reading Lucretius made me realize how long the science vs. religion debate has been going on. I was introduced to Lucretius through reading George Santayana, the American philosopher of aesthetics, in particular of literature and poetry. I discovered Santayana at about the same time I discovered E.T. Jaynes, which is a weird coincidence, since they both seem to base their doctrine on untangling the confusion of the mind projection fallacy. They both argue at length that humans attribute too much of what goes on in their heads to the real world. Santayana used it to argue that religion is poetry, and that it is an error to believe it speaks of the real universe when it is only meant to metaphorically and poetically represent our internal thoughts about the world.
I find that even though Santayana was no mathematician, his ideas fit very well with Bayesianity. Here are some select quotes from The Life of Reason:
"Science and common sense are themselves in their way poets of no mean order, since they take the material of experience and make out of it a clear, symmetrical, and beautiful world; the very propriety of this art, however, has made it common. Its figures have become mere rhetoric and its metaphors prose. Yet, even as it is, a scientific and mathematical vision has a higher beauty than the irrational poetry of sensation and impulse, which merely tickles the brain, like liquor, and plays upon our random, imaginative lusts. The imagination of a great poet, on the contrary, is as orderly as that of an astronomer, and as large; he has the naturalist's patience, the naturalist's love of detail and eye trained to see fine gradations and essential lines; he knows no hurry; he has no pose, no sense of originality; he finds his effects in his subject, and his subject in his inevitable world."
"Thought, we are told rightly enough, cannot be accounted for by enumerating its conditions. A number of detached sensations, being each its own little world, cannot add themselves together nor conjoin themselves in the void. Again, experiences having an alleged common cause would not have, merely for that reason, a common object. Nor would a series of successive perceptions, no matter how quick, logically involve a sense of time nor a notion of succession. Yet, in point of fact, when such a succession occurs and a living brain is there to acquire some structural modification by virtue of its own passing states, a memory of that succession and its terms may often supervene. It is quite true also that the simultaneous presence or association of images belonging to different senses does not carry with it by intrinsic necessity any fusion of such images nor any notion of an object having them for its qualities. Yet, in point of fact, such a group of sensations does often merge into a complex image; instead of the elements originally perceptible in isolation, there arises a familiar term, a sort of personal presence."
"When this diversity between the truest theory and the simplest fact, between potential generalities and actual particulars, has been thoroughly appreciated, it becomes clear that much of what is valued in science and religion is not lodged in the miscellany underlying these creations of reason, but is lodged rather in the rational activity itself, and in the intrinsic beauty of all symbols bred in a genial mind. Of course, if these symbols had no real point of reference, if they were symbols of nothing, they could have no great claim to consideration and no rational character; at most they would be agreeable sensations. They are, however, at their best good symbols for a diffused order and a tendency in events; they render that reality with a difference, reducing it to a formula or a myth, in which its tortuous length and trivial detail can be surveyed to advantage without undue waste or fatigue. Symbols may thus become eloquent, vivid, important, being endowed with both poetic grandeur and practical truth."
"Science, which thinks to make belief in miracles impossible, is itself belief in miracles – in the miracles best authenticated by history and by daily life."
Lucretius' On the Nature of Things (http://en.wikipedia.org/wiki/On_the_Nature_of_Things) is considered one of the most beautiful epic poems ever written, and its subject can be summed up as the rejection of religion in favor of the physical sciences. Writing before Christianity even existed, Lucretius describes atoms, the movement of mass, the infinite nature of the universe, and the materialistic nature of the soul. Beautiful indeed.
"It was previously pointed out to me that I might be losing some of my readers with the long essays"
I, for one, find the long posts of mathematical Bayesian proselytizing to be among your most fascinating. I can't wait for the next ones.
What's interesting about "Thingspace" (I sometimes call it "orderspace") is that it flattens out all the different combinations of properties into a mutually exclusive space of points. An observable "thing" in the universe can't be classified at two different points in Thingspace. Yes, you can have a region in Thingspace representing your uncertainty about the classification (if you're a mere mortal, you always have this error bar), but the piece-of-universe-order you are trying to classify is, in ideal terms, only one point in the space.
IMO this could explain the way we deal with causality. Why do we say effects have only one cause? Where does the Principle of Sufficient Reason come from? The universe is not actually quantized into pieces that have isolated effects on each other. However, causes and effects are "things": they are points in Thingspace, and as "things" they actually represent aggregates, bunches of variable values that, when recognized as a whole, have, by definition, unique cause-effect relationships with other "things". I see causality as arrows from one area of Thingspace to another. Some have tried to account for causality with complex Bayesian networks based on graph theory that are hard to compute. But applying causality to labeled clusters in Thingspace, instead of trying to apply it to entangled real values, seems simpler and more accurate to me. And you can do it at different levels of granularity to account for uncertainty. The space is then most usefully classified hierarchically into an ontology. Uncertainty about classification is then represented by using bigger, vaguer, all-encompassing clusters or "categories" in the Thingspace, while a high level of certainty is represented by a small, specific area.
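The granularity idea can be sketched in a few lines of Python. This is my toy illustration, not anything from the original comment: points are property vectors, categories are nested clusters with a center and a radius, and a measurement with more uncertainty can only be assigned to a bigger, vaguer cluster. The ontology and its category names are invented for the example.

```python
import math

# A toy Thingspace: points are property vectors, categories are nested
# clusters (center, radius).  All names and numbers here are illustrative
# assumptions, not anything from the original discussion.
ONTOLOGY = {
    "animal": ((0.5, 0.5), 1.0),     # big, vague, all-encompassing cluster
    "bird":   ((0.8, 0.7), 0.3),     # nested, more specific
    "robin":  ((0.85, 0.75), 0.05),  # very specific
}

def classify(point, uncertainty):
    """Return the most specific category whose cluster still contains the
    point's whole uncertainty ball: more uncertainty -> vaguer label."""
    best = None
    for name, (center, radius) in ONTOLOGY.items():
        if math.dist(point, center) + uncertainty <= radius:  # ball fits
            if best is None or radius < best[1]:              # tightest fit
                best = (name, radius)
    return best[0] if best else "unclassified"

print(classify((0.84, 0.74), 0.005))  # precise measurement -> "robin"
print(classify((0.84, 0.74), 0.1))    # noisy measurement   -> "bird"
```

The same point lands in a small cluster or a big one depending only on the error bar, which is the hierarchical-granularity picture described above.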
I once tried (and pretty much failed) to create a novel machine learning algorithm based on a causality model between hierarchical EM clusters. I'm not sure why it failed. It was simple and beautiful, but I had to use greedy approaches to reduce complexity, which might have broken my EM algorithm. Well, at least it (just barely) got me a master's degree. I still believe in my approach and I hope someone will figure it out some day. Lately I've been reading up on and questioning the assumptions underlying all of this, especially pondering the link between the physical universe and probability theory, and I got stuck at the problem of the arrow of time, which seems to be the unifying principle but also seems not that well understood. Ah well... maybe in another life.
Well, for example, the fact that two different decimal representations can denote the same point (2.000... and 1.999...), or the fact that the reals are not computable in a finite amount of time. Pi and e, on the other hand, are quite representable within a computable number system; otherwise we couldn't reliably use pi and e on computers!
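That last point is worth making concrete. Here is a minimal sketch (my own, using only Python's standard decimal module) of what "pi is computable" means: a finite program that emits pi to any requested number of decimal places, here via Machin's formula pi = 16·arctan(1/5) − 4·arctan(1/239).

```python
from decimal import Decimal, getcontext

def machin_pi(digits: int) -> Decimal:
    """Compute pi to `digits` decimal places using Machin's formula."""
    getcontext().prec = digits + 10  # carry extra guard digits

    def arctan_inv(x: int) -> Decimal:
        # arctan(1/x) = sum over k of (-1)^k / ((2k+1) * x^(2k+1))
        power = Decimal(1) / x  # current value of 1 / x^(2k+1)
        total = power
        k = 0
        eps = Decimal(10) ** -(digits + 8)
        while power > eps:
            power /= x * x
            k += 1
            term = power / (2 * k + 1)
            total += term if k % 2 == 0 else -term
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    getcontext().prec = digits + 1  # round down to the requested precision
    return +pi  # unary plus applies the new context precision

print(machin_pi(30))  # pi to 30 decimal places
```

Every digit it prints is exact; the only "uncertainty" is the cutoff we choose, which is exactly the computable-number picture.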
Benquo, I see two possible reasons:
1) '2' leads to confusion as to whether we are representing a real or a natural number. That is, whether we are counting discrete items or we are representing a value on a continuum. If we are counting items then '2' is correct.
2) If it is clear that we are representing numbers on a continuum, I could see the number of significant digits used as an indication of the amount of uncertainty in the value. For any real problem there is always uncertainty, caused by A) the measuring instrument and B) the representation system itself, such as the computable numbers, which are limited to a finite number of digits (although we get to choose the uncertainty here, since we choose the number of digits). This is one of the reasons the infinite limits don't seem useful to me. They don't correspond to reality. The implicit limits seem to lead to sloppiness in dealing with uncertainty in number representation.
For example, I find ambiguity in writing 1/3 = 0.333... However, 1.000/3.000 = 0.333, or even 1.000.../3.000... = 0.333..., makes more sense to me, as it is clear where there is uncertainty and where we are taking infinite limits.
James, I share your feelings of uneasiness about infinite digits. As you said, the problem is not that these numbers fail to represent the same point at the limit, but that they shouldn't be taken to the limit so readily, as this seems to add nothing to mathematics but confusion.
Thanks, g, for the tip about computable numbers; that's pretty much what I had in mind. I couldn't quite tell from the Wikipedia article whether these numbers could replace the reals for all of useful mathematics, but it's interesting indeed.