Followup to: Making Beliefs Pay Rent, Belief in the Implied Invisible
Degrees of Freedom accuses me of reinventing logical positivism, badly:
One post which reads as though it were written in Vienna in the 1920s is this one [Making Beliefs Pay Rent] where Eliezer writes
"We can build up whole networks of beliefs that are connected only to each other - call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks... The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict - or better yet, prohibit."
Logical positivists were best known for their verificationism: the idea that a belief is defined in terms of the experimental predictions that it makes. Not just tested, not just confirmed, not just justified by experiment, but actually defined as a set of allowable experimental results. An idea unconfirmable by experiment is not just probably wrong, but necessarily meaningless.
I would disagree, and exhibit logical positivism as another case in point of "mistaking the surface of rationality for its substance".
Consider the hypothesis:
On August 1st 2008 at midnight Greenwich time, a one-foot sphere of chocolate cake spontaneously formed in the center of the Sun; and then, in the natural course of events, this Boltzmann Cake almost instantly dissolved.
I would say that this hypothesis is meaningful and almost certainly false. Not that it is "meaningless". Even though I cannot think of any possible experimental test that would discriminate between its being true, and its being false.
On the other hand, if some postmodernist literature professor tells me that Shakespeare shows signs of "post-colonial alienation", the burden of proof is on him to show that this statement means anything, before we can talk about its being true or false.
I think the two main probability-theoretic concepts here are Minimum Message Length and directed causal graphs - both of which came along well after logical positivism.
By talking about the unseen causes of visible events, it is often possible for me to compress the description of visible events. By talking about atoms, I can compress the description of the chemical reactions I've observed.
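The compression point can be made concrete with a toy sketch (my own illustration, not anything from the post): a record of observations sharing one hidden regularity shrinks dramatically under a general-purpose compressor, which serves here as a crude stand-in for the Minimum Message Length idea.

```python
import zlib

# Toy illustration (not the post's formalism): 1000 observed
# "chemical reactions" that are all instances of one underlying rule.
observations = "2H2 + O2 -> 2H2O; " * 1000

raw_len = len(observations.encode())
compressed_len = len(zlib.compress(observations.encode(), 9))

# Positing the unseen regularity (here, found mechanically by zlib)
# gives a description vastly shorter than listing each observation.
print(raw_len, compressed_len)
assert compressed_len < raw_len // 50
```

The analogy is loose - real MML scores a model plus the data encoded given that model - but the direction of the effect is the same: good causal hypotheses buy shorter total descriptions.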
We build up a vast network of unseen causes, standing behind the surface of our final sensory experiences. Even when you can measure something "directly" using a scientific instrument, like a voltmeter, there is still a step of this sort in inferring the presence of this "voltage" stuff from the visible twitching of a dial. (For that matter, there's a step in inferring the existence of the dial from your visual experience of the dial; the dial is the cause of your visual experience.)
I know what the Sun is; it is the cause of my experience of the Sun. I can fairly readily tell, by looking at any individual object, whether it is the Sun or not. I am told that the Sun is of considerable spatial extent, and far away from Earth; I have not verified this myself, but I have some idea of how I would go about doing so, given precise telescopes located a distance apart from each other. I know what "chocolate cake" is; it is the stable category containing the many individual transient entities that have been the causes of my experience of chocolate cake. It is not generally a problem for me to determine what is a chocolate cake, and what is not. Time I define in terms of clocks.
Bringing together the meaningful general concepts of Sun, space, time, and chocolate cake - all of which I can individually relate to various specific experiences - I arrive at the meaningful specific assertion, "A chocolate cake in the center of the Sun at 12am on 8/1/08". I cannot relate this assertion to any specific experience. But from general beliefs about the probability of such entities, backed up by other specific experiences, I assign a high probability that this assertion is false.
See also, "Belief in the Implied Invisible". Not every untestable assertion is false; a deductive consequence of general statements of high probability must itself have probability at least as high. So I do not believe a spaceship blips out of existence when it crosses the cosmological horizon of our expanding universe, even though the spaceship's existence has no further experimental consequences for me.
If logical positivism / verificationism were true, then the assertion of the spaceship's continued existence would be necessarily meaningless, because it has no experimental consequences distinct from its nonexistence. I don't see how this is compatible with a correspondence theory of truth.
On the other hand, if you have a whole general concept like "post-colonial alienation", which does not have specifications bound to any specific experience, you may just have a little bunch of arrows off on the side of your causal graph, not bound to anything at all; and these may well be meaningless.
Sometimes, when you can't find any experimental way to test a belief, it is meaningless; and the rationalist must say "It is meaningless." Sometimes this happens; often, indeed. But to go from here to, "The meaning of any specific assertion is entirely defined in terms of its experimental distinctions", is to mistake a surface happening for a universal rule. The modern formulation of probability theory talks a great deal about the unseen causes of the data, and factors out these causes as separate entities and makes statements specifically about them.
To be unable to produce an experiential distinction from a belief, is usually a bad sign - but it does not always prove that the belief is meaningless. A great many untestable beliefs are not meaningless; they are meaningful, just almost certainly false: They talk about general concepts already linked to experience, like Suns and chocolate cake, and general frameworks for combining them, like space and time. New instances of the concepts are asserted to be arranged in such a way as to produce no new experiences (chocolate cake suddenly forms in the center of the Sun, then dissolves). But without that specific supporting evidence, the prior probability is likely to come out pretty damn small - at least if the untestable statement is at all exceptional.
If "chocolate cake in the center of the Sun" is untestable, then its alternative, "hydrogen, helium, and some other stuff, in the center of the Sun at 12am on 8/1/08", would also seem to be "untestable": hydrogen-helium on 8/1/08 cannot be experientially discriminated against the alternative hypothesis of chocolate cake. But the hydrogen-helium assertion is a deductive consequence of general beliefs themselves well-supported by experience. It is meaningful, untestable (against certain particular alternatives), and probably true.
I don't think our discourse about the causes of experience has to treat them strictly in terms of experience. That would make discussion of an electron a very tedious affair. The whole point of talking about causes is that they can be simpler than direct descriptions of experience.
Having specific beliefs you can't verify is a bad sign, but, just because it is a bad sign, does not mean that we have to reformulate our whole epistemology to make it impossible. To paraphrase Flon's Axiom, "There does not now, nor will there ever, exist an epistemology in which it is the least bit difficult to formulate stupid beliefs."
Well, if you had access to a time machine and some sensing device that could survive the environment of the Sun...
Yeah, I'm a nitpicker. I'm also not as good at picking nits as I might like.
I've an old post - 'Verification and Base Facts' - which shows how a non-verificationist can still capture much of what was most compelling in verificationism.
If it's unverifiable-in-principle, there is no distinction between the affirmation and negation of the premise, and the concept is meaningless.
The positivist accusation seems obviously false just based on the whole quantum mechanics series of posts.
Caledonian: There is one exception:
The Kolmogorov complexity of this sentence is exactly 50 bytes in Java bytecode.
Meaningful, but unfalsifiable.
Where should I read about that? I want a proof that Kolmogorov complexity is uncomputable. And I wanna know how we use it if it really is.
If I could write 49 bytes' worth of Java bytecode outputting that sentence, I would falsify it, wouldn't I?
Well, you need at least 381 bits of information to single out one 80 symbols long 27 symbol alphabet message. That's 48 bytes. That leaves you two bytes to specify "80 symbols long" and "27 symbol alphabet." Now I might not be an expert, but I think a chunk of 50 bytes of valid JVM code has significantly less information than that...
No. You mean you need at least that to represent any such message in a constant-size format. That's not what the complexity is about - if the message is the alphabet and then the rest are spaces, that's the right length and has the right number of symbols, but you can easily compress it to much much less than 381 bits.
That said, I agree that Java bytecode probably isn't the ideal medium for transmitting such a terse message.
I think past!me miscommunicated and was stupid w.r.t. Kolmogorov Complexity, and I think you have misread the statement too. The program "enumerate all X symbol strings from this Y letter alphabet then choose the Nth" is pretty much one of the simplest ways of encoding a string. So I was merely remarking that to single out one 80 symbol, 27 letter alphabet string, you need at least 381 bits, or you will have incomplete domain coverage due to the pigeon-hole principle.
If we expand the alphabet to 32 letters, then it obviously takes 400 bits, which is 50 bytes.
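The arithmetic in this exchange is easy to verify directly (a quick sketch of the pigeonhole bound only - it says nothing about any particular JVM encoding):

```python
import math

# Fixed-length code covering every string of length 80 over a
# 27-symbol alphabet: needs ceil(80 * log2(27)) bits (pigeonhole).
bits_27 = math.ceil(80 * math.log2(27))
print(bits_27, math.ceil(bits_27 / 8))   # 381 bits -> 48 bytes

# With a 32-symbol alphabet, log2(32) = 5 bits per symbol exactly.
bits_32 = 80 * 5
print(bits_32, bits_32 // 8)             # 400 bits -> 50 bytes
```

Note this bounds only codes that must cover the whole space of such strings; a particular, highly regular string can still have Kolmogorov complexity far below the bound.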
A completely arbitrary string, yes. This was nowhere near an arbitrary string.
On second thought, that's not right. But you probably understood what I mean. If you happen to make a conjecture about something like Kolmogorov complexity or the halting problem, and it just happens to be undecidable, it's still either true or false.
Eliezer, you're definitely setting up a straw man here. Of course it's not just you -- pretty much everybody suffers from this particular misunderstanding of logical positivism.
"Untestable" does not mean "untestable by humans using current technology". What it means is untestable, period -- even by (say) a hypothetical being with godlike powers. This is what distinguishes a chocolate cake in the sun from "post-colonial alienation". If a chocolate cake spontaneously formed in the Sun, there would be physical consequences. These consequences would necessarily be detectable to sufficiently advanced beings. We need only imagine a Laplacian calculator, for example.
The feasibility of performing the verification is utterly beside the point, because we're only interested in the meaning of the statement.
Contrast this with "post-colonial alienation". The problem there is not that we lack some technological gadget. Rather, the problem is that no being, not even a god or a Laplacian demon, could verify the statement. In the case of a chocolate cake, you simply present God with a list of ingredients, and tell him/her to look for them in the Sun; but what do you tell God to look for in the works of Shakespeare in order to determine whether there is "post-colonial alienation"?
So this is not a counterexample to logical positivism at all. In fact, you yourself already gave the positivist's reply to this criticism in "Belief in the Implied Invisible". The point is, you're allowed in logical positivism to use the full apparatus of mathematics and logic (and that includes probability theory!) in formulating theories (hence the name logical positivism); verificationism is not a constraint on mathematics.
Surely a god would be intelligent enough to recognize post-colonial alienation when he sees it? After all, my professor can, and he's no god!
How do you know that the phrase "logical positivism" refers to the correct formulation of the idea, rather than an exaggerated version? I have no trouble at all believing that a group of people discovered the very important notion that untestable claims can be meaningless, and then accidentally went way overboard into believing that difficult-to-test claims are meaningless too.
There are different shades of positivism, and I think at least some positivists are willing to say any statement for which there is a decision procedure even possible in principle for an omnipotent being is meaningful.
Under this interpretation, as Doug S. says, the omnipotent being can travel back in time, withstand the heat of the sun, and check the status of the cake. The omnipotent being could also teleport to the spaceship past the cosmological horizon and see if it's still there or not.
However, an omnipotent being still wouldn't have a decision procedure with which to evaluate whether Shakespeare's works show signs of post-colonial alienation (although closely related questions like whether Shakespeare meant for his plays to reflect alienation could be solved by going back in time and asking him).
This sort of positivism, I think, gets the word "meaningful" exactly right.
Might algorithmic positivism be a good name for it? As in if there is an implementable algorithm which decides the truth of the sentence, it is meaningful.
You can't make a Java class file that small.
Maybe you mean the difference between the smallest possible executable class file, and one that prints that message will be exactly 50 bytes, given some specified existing standard library? If so, the sentence should probably make that clearer.
Unfortunately, the capabilities of an omnipotent being are themselves not very well defined. Suppose we want to determine whether "The Absolute is an uncle" is meaningful. Well, says the deranged Hegelian arguing the affirmative, of course it is: we just ask our omnipotent being to take a look and see whether the Absolute is an uncle or not.
Butbutbutbut, you say, we can't tell it how to do that, whereas we can tell it how to check whether there's a spaceship past the cosmological horizon. But can we really? I mean, it's not like we know how to make that observation, or we'd be able to make it ourselves. What's the difference between this and checking whether the Absolute is an uncle? "Well, we know what it means to check whether there's a spaceship past the cosmological horizon, but not what it means for the Absolute to be an uncle." Circular argument alert!
It does feel like there's a difference that we can use, but trying to formulate it exactly seems to lead to a circular definition.
(No one is really going to defend "The Absolute is an uncle", but there certainly are people prepared to claim that the existence of an afterlife is testable because dead people might discover it, or because God could tell you whether it's there or not; and I don't think any sort of logical positivist would agree.)
What if we imagine the source code for the universe? Can we say: "the omnipotent being can check any part of the source code of the universe." Where "source code of the universe" means: a computationally irreducible algorithm which has a perfect isomorphism with the universe. It is part of our assumption as physicalists that the variables of this source code only ever store values that are of a physical nature, i.e., would be studied by physicists.
If you can imagine what state in the source code would correspond to the truth of your belief, it is meaningful. If there is likely no statement in the TOE (no matter how large and stupid) which corresponds to your claim, then it is meaningless. This seems to be better defined, while still capturing the benefits of having an omnipotent being be the judge of meaningfulness.
You cannot simply check the source code. No matter how many experiments you run, there will always be room for the possibility that the source code is such that the spaceship disappears.
Yes, the point I was trying to make was that for a sentence to be meaningful, there must be a physical state which it encodes, even if that physical state is inaccessible to us. "At 8:00 pm last night a tea kettle spontaneously formed around Saturn" is meaningful, because it encodes a state located in space-time.
I haven't gotten through your whole post yet, but the "postmodernist literature professor" jogged my memory about a trend I've noticed in your posts. Postmodernists, and perhaps postmodernist literature professors in particular, seem to be a recurring foil. What's going on there? Is there a way to break out of that analytically? I sense that as a deeper writer and thinker you'll go beyond cartoonish representations of foils, if nothing else to reflect a deeper understanding of things like postmodernist literature professors as natural phenomena. It seems to me more a barrier to knowledge and understanding than an accurate summation of something in our reality (postmodernist literature professors).
No, I still think there's a difference, although the omnipotence suggestion might have been an overly hasty way of explaining it. One side has moving parts, the other is just a big lump of magic.
When a statement is meaningful, we can think of an experiment that confirms it such that the experiment is also built out of meaningful statements. For example, my experiment to confirm the cake-in-the-sun is for a person on August 1 to go to the center of the sun, and see if it tastes delicious. So, IF Y is in the center of the sun, AND IF Y is there on August 1, AND IF Y perceives a sensation of deliciousness, THEN the cake-in-the-sun theory is true.
Most reasonable people will agree that "Today is August 1st" is meaningful, "This is the center of the sun" is meaningful, and "That's delicious!" is meaningful, so from those values we can calculate a meaningful value for "There's a cake in the center of the sun August 1st". If someone didn't believe that "Today is August 1st" is meaningful, we could verify it by saying "IF the calendar says 'August 1', THEN it is August 1st" in which we specify a way of testing that. If someone doesn't even agree that "The calendar says 'August 1'" is meaningful, we reduce it to "IF your sensory experience includes an image of a calendar with the page set to August 1st, THEN the calendar says 'August 1'." In this way, the cake-in-the-sun theory gets reduced to direct sensory experience.
To determine the truth value of the uncle statement, I need to see if the Absolute has an uncle. Mmmkay. So. I'll just go and....hmmmm.
If you admit that direct sensory experience is meaningful, and that statements composed of operations on meaningful statements are also meaningful, then the cake-in-the-sun theory is meaningful and the uncle theory isn't.
(I do believe that questions about the existence of an afterlife are meaningful. If I wake up an hour after dying and find myself in a lake of fire surrounded by red-skinned guys with pointy pitchforks, that's going to concentrate my probability mass on the afterlife question pretty densely to one side.)
It's not clear that you're a verificationist, but you're clearly an empiricist. I think that's problematic. Unless you believe something magical happens at the retina, there's no more reason to privilege what happens at the retina or in the brain than there is the wire connecting the dial to the voltmeter. It's all causal linkage. We can use the same standards of reliability for people as we do wires. The sensory periphery is just not particularly interesting.
Is the following assertion meaningless -- "There exists an invisible dragon in my garage, which can't be seen, felt, or sensed by any methods known to man today, or in the future".
If it's a conscious dragon, then it's definitely meaningful. The dragon will have proof of its own existence, which implies that there is an experiment capable of proving its existence.
No. It is, however, suspicious - what led you to hold such a conveniently untestable belief? What do you think you know, and how do you think you know it?
Can it be sensed by beings other than humans with which humans might communicate? (Or does that already count as a method, however mediated by these alien beings, of sensing by humans?)
If somebody made this assertion, then I would, like MugaSofer, probably follow up by asking how this person came to that belief. Somewhere along the way there is likely to be some sensation by that human, whether a past sighting that can't be repeated, communication with (what the believer takes to be) alien beings (space aliens, fairies, angels, etc.), or even just a vague feeling that a dragon is there. We might get meaning out of it (and then, unless interpreted in a figurative sense, it is very likely to be false).
"There exists an invisible dragon in my garage, which can't be seen, felt, or sensed by any methods known to man today, or in the future".
This is meaningful and true according to MWI.
Poke, can you expand a little on what you're driving at?
Also, Steven, how on Earth is that statement true under MWI? :)
The dragon is in another world superimposed on ours, one where something improbable happened (some form of genetic engineering?); we will never be able to interact with the dragon in any way, but it's perfectly real, as the people (also invisible to us) in its world can verify.
That dragon isn't in our world, steven. It's not in the garage. The people to which the dragon does exist aren't in our world or our garage, either.
The original claim is meaningless.
I second Hopefully on criticism of the strawman postmodernist. Honestly, I think that academic disciplines, or even schools, where everyone is completely full of it are extremely rare. There are thoughtful, intelligent, and honest people who frame important and fairly novel ideas in the terminology of sociology, academic feminism, Freudianism, behaviorism, even, math-help-us, Jung. Perfect intellectual honesty and intelligence among humans are a chimera, and different disciplines aspire to different approximations thereof by establishing different sorts of standards for esteem and for publication.
As a general summary, I would say that the Enlightenment was, to a large extent, a set of proscriptions regarding what types of questions should and should not be asked, which argumentative styles should and should not be used, and which hand-waves are and are not allowed. Some set of standards is necessary for functional discourse under imperfect honesty, and the Enlightenment standards enabled fruitful conversation among people of merely, perhaps, 70th percentile honesty and 96th percentile intelligence - an AMAZING standard that reshaped history. Sadly, each set of standards disables productive discussion of certain subject matter, and places one's beliefs on ultimately unsound foundations, so re-foundation is eventually necessary. The more dysfunctional academic disciplines are largely those which try to deal, possibly prematurely, with subject matter for which we lack good honesty-enforcing protocols, including subject matter foundational to the Enlightenment itself; but for individuals in those disciplines insights are indeed possible, and low-hanging fruit are even abundant.
Speaking as someone currently being taught some Jung as fact (archetype stuff) and trying desperately not to call anyone out on the BS, I'm both surprised and pleased to hear this.
Please, does anyone know of any important, novel, Jungian ideas? I could really use some examples of this.
I have to agree with komponisto and some others: this post attacks a straw-man version of logical positivism. As komponisto alluded to, you are ignoring the logical in logical positivisim. The logical positivists believed that meaningful statements had to be either verifiable or they had to be logical constructs built up out of verifiable constituents. They held that if A is a meaningful (because verifiable) assertion that something happened, and B is likewise, then A & B is meaningful by virtue of being logically analyzable in terms of the meaning of A and B. They would maintain this even if the events asserted in A and B had disjoint light cones, so that you could never experimentally verify them both. In effect, they subscribed to precisely the view that you endorse when you wrote, "A great many untestable beliefs . . . talk about general concepts already linked to experience, like Suns and chocolate cake, and general frameworks for combining them, like space and time."
Your "general frameworks for combining" do exactly the work that logical positivists did by building statements from verifiable constituents using logical connectives. In particular space and time would be understood by them in logical terms as follows: space and time reduce to geometry via general relativity, and geometry, along with all math, reduces to logic via the logicist program of Russell and Whitehead's Principia Mathematica. See, for example, Hans Reichenbach's The Philosophy of Space and Time.
So, even without invoking omnipotent beings to check whether the cake is there, the logical positivist would attribute meaning to that claim in essentially the same way that you do.
I agree that EY's attacking a certain straw-man of positivism, and that EY is ultimately a logical positivist with respect to how he showed the meaningfulness of the Boltzmann cake hypothesis. But, assuming EY submits to a computational complexity prior, his position is distinct, in that there could be two hypotheses which we fundamentally cannot tell the difference between - e.g., Copenhagen and MWI - and yet we have good reason to believe one over the other, even though there will never be any test that justifies belief in one over another. (If you think you can test MWI vs. Copenhagen, just replace them with "the universe spawns 10^^^^^10 more quanta" vs. "it doesn't"; clearly we can't test these - there aren't enough quanta in the universe.)
Most of those ideas were worthless. Most modern ideas are worthless, too. The question is: how good were they at distinguishing the worthy ideas from the worthless ones? The sheer number of recently-discarded fields, much less ideas, should make it clear how well the ancients discerned the gold from the dross.
What's the signal-to-noise ratio? How much care is taken to ensure that people aren't deluding themselves?
michael vassar, Maybe that's a good description of new departments (e.g., x-studies), but you sound like you think the post-colonial rhetoric is coming out of political science departments rather than English departments.
...and I go on to learn that people in political science departments cite Derrida.
Political science is the department where it would be most interesting to know what's going on because it has methodological pluralism while seeming to have a single topic, while, say, English departments seem to have gotten methodological variety out of crisis of topic.
This seems like an excellent example of the application of the Law of the Minimum - admittedly, outside of the specific ecological context it was formulated in, but the general principle is sound.
The resource in least supply will dictate the growth of the population, and the entity that requires the least expended or committed resources while meeting the minimum standards will dominate the population.
Fields that have low standards will be dominated by garbage. Fields with high standards will be dominated by whatever can pass those standards.
What are the standards that determine whether work in a field is valuable? Not from the viewpoint of those within the field, but from a general perspective? Once that's determined, we need only see which fields' requirements best match those standards.
I just finished reading Ayer's "Language, Truth & Logic" last night, and from my understanding of it, I think he'd think that your proposal about the appearance and vanishing of a chocolate cake was a meaningful proposal. He said, for instance, that it would be meaningful and reasonable to posit the appearance of wildflowers on a mountain peak nobody had climbed based on the fact that such wildflowers had been seen on similar mountain peaks nearby, or to propose that there were mountains on the dark side of the moon (before it was possible to empirically verify this). He seemed mostly interested in disqualifying propositions that were /in principle/ unverifiable. Now if you're asserting that this piece of cake came and went /and/ that it's not just going to be really difficult to come up with a single sense-impression that this fact would have some bearing on, but that it is /in principle/ impossible to do so, then he'd probably say you're talking rot.
Your example of a spaceship exiting the range at which you could possibly have any interaction with it is another issue. Ayer deals with the "does this tree continue to be when there's no one about on the quad" question, and says that (if I remember right) since the logical construction "this tree" is composed of both actual and hypothetical sense experiences, there's no reason why you have to imagine it vanishing when those sense experiences aren't immediately occurring. Even given this, though, I'm not sure if Ayer would call your spaceship meaningless or merely improbable, since its hypotheticals would all seem to be logical impossibilities.
The classic objection to logical positivism is that it is incoherent: If statements which are not testable are meaningless, then the statement "Statements which are not testable are meaningless" is meaningless. For those of you defending logical positivism of any sort, how do you deal with this criticism? Is there even a way to test the central tenet of logical positivism in principle?
I deal solely in testable empirical predictions. My assertion is that a methodology in which one does not consider statements which do not generate testable empirical predictions is more efficient, other things being equal, than one which does, at generating accurate testable empirical predictions. And that's a testable empirical prediction.
Can you show it to be untestable? Or is there a way you might check?
If all you are worried about is in-principle verification, then, so the argument goes, the way to check is to see if there is any empirical content to "statements which are not testable are meaningless." What is there in the world that we would see if that statement is true?
There are three categories -- "meaningful," "meaningless," and "tautological" statements -- at least in Ayer's categorization. "Statements which are not testable are meaningless or tautological" would be an example of a tautology: just a definition of terms.
Because if you /could/ test the statement to see if it were true (not absolutely true, but, per Ayer, "probable"), you'd conduct an experiment where you took a sample of statements, tried to come up with tests (ways in which they refer to sense experiences that would serve to verify or disprove them), and then saw which ones were or were not meaningful. But in Ayer's framework, meaningfulness is defined as referring to sense experiences that would serve to verify or disprove, so it's circular, thus tautological, which isn't a term of abuse in Ayer's categorization the way meaninglessness is. He thinks that philosophers deal in tautologies all the time -- constructively! -- and that meaningful statements are more in the domain of science anyway.
Is there something which I can do as someone accepting meaningful untestable beliefs that I can't do as a positivist? Certainly there is some meaning to the sentence:
but the positivist might simply say that you can give this a meaning, but science does not need it to do its job. I'm not saying that there isn't an advantage to talking about the meanings of sentences without experimental results; I'm just saying that to refute positivism you have to show that science does something you can't do by simply defining beliefs in terms of predictions. You have to give an example of science doing something successfully which commits it to the meaning of an untestable belief.
It's not as if a star would feel absolutely no effect from a Boltzmann cake suddenly appearing inside of it. A civilization with a good enough model of how this star zigs and zags would be able to find facts about the star which would force a Bayesian to move from the ridiculously tiny prior probability of the hypothesis:
to some posterior distribution. Some pieces of evidence might increase the probability of the hypothesis, some might decrease it.
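The move from a ridiculously tiny prior to a posterior can be sketched with Bayes' rule in odds form. The numbers below are entirely hypothetical, chosen only to show the mechanics: even an astronomically small prior moves when evidence has a likelihood ratio different from 1.

```python
def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian update in odds form:
    posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

prior = 1e-30                  # a ridiculously tiny prior (made-up number)
odds = prior / (1 - prior)

# Suppose each of five independent observations of the star is 10x more
# likely if the cake hypothesis is true (again, an assumed figure).
for _ in range(5):
    odds = update_odds(odds, 10.0)

posterior = odds_to_prob(odds)
print(posterior)  # still tiny, but 100,000x larger than the prior
```

Evidence pointing the other way works identically: a likelihood ratio below 1 in `update_odds` would push the probability down instead of up.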
This is not a cheap objection in any way. To misinterpret verificationists such as the early Wittgenstein and W.V. Quine as claiming that only those sentences which we can currently test are meaningful is a mistake. A common mistake, and one that some who use the term "positivist" to describe themselves have made.
This is another sort of mistake. That a hypothesis can't be tested by me does not mean that it is meaningless. Verificationists would agree with this, because they think verification works everywhere, even on the other side of the universe. If some alien race over there could have seen the spaceship, or seen something which made the probability of there being a spaceship there high, or could not have, then the claim is not meaningless.
What verificationists like Quine are saying is that science is done through the senses. In the matrix code, way above the level of the machine language, our senses are the evidence nodes of our Bayes nets, and our hypotheses are the nodes furthest from them. The top layer of nodes consists of the complete set of states that some being's sensory apparatus can be in; any node in this mind containing a belief which is independent of all of the evidence nodes contains a belief which is meaningless for that mind. But showing the subjective meaninglessness of some hypothesis for one being is not enough to show that the belief/hypothesis is meaningless for all minds.
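A minimal sketch of this point, using a single application of Bayes' rule rather than a full Bayes net (all the probabilities here are made up for illustration): a hypothesis whose truth makes no difference to any evidence node has a likelihood ratio of 1 for every observation, so no sensory state can ever move its probability.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule for one binary hypothesis and one observation."""
    num = p_evidence_given_h * prior
    den = num + p_evidence_given_not_h * (1 - prior)
    return num / den

# Entangled hypothesis: the observation is more likely if it is true,
# so observing the evidence raises its probability.
p = posterior(0.5, 0.9, 0.1)
print(p)  # 0.9

# "Floating" hypothesis: the observation is equally likely either way,
# so its probability is stuck at the prior no matter what we see.
q = posterior(0.5, 0.3, 0.3)
print(q)  # 0.5
```

In the comment's terms, the second hypothesis is meaningless *for that mind*: nothing its evidence nodes can register will ever shift it.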
I think the critiques of this article apply to the worst of the worst of positivism. But many of those critiques were made by hard verificationists such as Quine himself. The simplest form of verificationism can be traced to Edmund Husserl, believe it or not. The core of what the first movement of phenomenologists, and Quine, were saying is that only stimulus sentences can ever be used as initial evidence. Some stimulus may increase the probability of some other belief, which may then be used as evidence for some further belief in turn, but without evidence from stimulus there wouldn't be enough useful shifting about of probability to do anything. Certainly a human brain, or even a replica of Einstein's brain, would have a hard time figuring out the theories of relativity if it only had a 4-by-4 binary black-and-white pixel view of the world, even if it could move the camera providing that input around as freely as it liked.
If no constructable mind could ever get any result from any instrument, natural, current, or wildly advanced, that would force a rational mind to update its probability for a given sentence, à la Bayes, then that sentence is not a scientifically meaningful belief. This is to be senseless for Wittgenstein, or literally meaningless; it is only to be scientifically meaningless for Quine. Both positions have been called verificationism, and I think both are useful, and at least true-ish.
Lastly, I've always thought of positivism as going perfectly with a correspondence theory of truth. We can treat "senseless" or "meaningless" as just meaning "un-entangle-able beliefs", i.e. beliefs which place no restrictions on experience.
It seems to me that Yudkowsky and the whole lot of LW staples are plainly positivists. And I have always thought of this as a good thing. Positivism, plus LW-style Bayesianism, plus effort, form an epistemology which at least gives you a stronger fighting chance than you would have otherwise. Forming stupid beliefs is harder after reading LessWrong, and harder after reading Quine, or Goodman, or even the most basic verificationist texts. Many people, LW included, have made philosophical mistakes which could have been avoided by reading the verificationists. Give credit where it is due, to yourself and to Quine.
I don't think I understand. If it isn't possible to ever verify the existence of these aliens, what does it matter that they could have seen the spaceship? Essentially, how does it help that some being A could verify a phenomenon if I can't ever verify that this is indeed the case?
It doesn't help you at all. It just means that verificationists would not, and should not, call it meaningless. It is unverifiable for you, but not for science as a whole.
Interesting post, which I would like to make some comments to.
First of all, I don't think it's necessarily a bad thing to be associated with the logical positivists. In their days, they were, in my view, one of the most interesting proponents of the scientific world-view. The fact that their program (which was unusually well specified, for being a philosophical program - something that contributed to its demise, since it made it easier to falsify) ultimately was shown to be unviable does not show that their general outlook was mistaken. Verificationism and the notion that philosophy should construct a scientific world-view can still be good ideas (in fact are good ideas, in my view) even though the logical positivists' more specific ideas were misguided.
Secondly, Yudkowsky is right that unverifiable statements are not meaningless in the same sense as true nonsense is meaningless. In his essay "Positivism Against Hegelianism", Ernest Gellner makes the same point:
"The logical positivist definition of meaning was inevitably somewhat confused. Clearly, though, it could not define 'meaning' in the sense used by working linguists as the classes of sound patterns which are emitted, recognised and socially accepted in a given speech community. By such a criterion, 'metaphysical' [i.e. unverifiable - my note] statements patently would be meaningful. The anti-Platonism of paradigmatic logical positivism equally prevents us from interpreting the delimitation of meaning as the characterisation of a given essence of 'meaning', as for them there are no such essences (though in some semi-conscious manner, and in disharmony with their nominal anti-Platonism, I strongly suspect that this was precisely what many of them did mean).
The only thing which in effect they could mean, plausibly and in harmony with their other principles, was this: the definition circumscribed, not the de facto custom of any one or every linguistic community, but the limits of the kind of use of speech which deserves respect and commendation. It was a definition not of meaningful speech, but of commendable, good speech. Their verificationism was a covert piece of ethics. Meaningless was a condemnation, and meaning a commendation." (Gellner, Relativism and the Social Sciences, pp. 30-31)
As I argue in my article "Ernest Gellner's Use of the Social Sciences in Philosophy" (Phil of Soc Sci, 2014:1), this interpretation is not quite right, though. Gellner's interpretation is over-charitable - the logical positivists really did see their assertion that unverifiable statements are meaningless as descriptive (even though Gellner is right that they at some level intended it to be normative). More importantly, so did their chief critic Quine, who dealt an important blow to logical positivism by showing that the logical positivists' conception of meaning failed to illuminate how language actually works (in "Two Dogmas of Empiricism"). He subsequently argued that it should be replaced by his own notion of stimulus meaning - a behaviouristic notion which he held to be empirically acceptable, unlike the logical positivists' notion of meaning. (Word and Object)
The logical positivist notion of meaning was, in short, not empirically grounded in any way. They just asserted that some statements are meaningless, and some are not, while producing little argument for it - and certainly no empirical evidence. My guess is that there is an important lesson to be learnt here. For all their talk of a scientific world-view, the logical positivists were rather influenced by the German tradition of a priori philosophy (e.g. Carnap was influenced by neo-Kantianism). Also, there was a strong anti-psychologistic trend in early 20th century philosophy, inherited from the 19th century (especially Frege: for an excellent overview of why psychology was severed from philosophy in Germany, read Martin Kusch's Psychologism: A Case-Study in the Sociology of Philosophical Knowledge, where it is convincingly argued that this happened for social, non-rational reasons).
Naturalistic philosophers have for centuries tried to make philosophy more empirical and more based on the sciences, but although some progress has been made, it seems that it seldom goes far enough. E.g. the later Wittgenstein - in many ways a naturalistic philosopher - argued that philosophers should "not think, but look", and that we should look upon language in an "anthropological way", seeing how it really works (rather than constructing a priori models, as philosophers often had done). Still he did no empirical investigations himself. Likewise the logical positivists venerated science but used a non-empirical notion of meaning.
There might be several reasons for this, but the most important one seems to me to be that philosophers are more or less exclusively trained in a priori reasoning and don't really have a lot of other useful knowledge - certainly not cutting-edge knowledge. In order to make philosophy thoroughly naturalistic, philosophers must - as has been argued on this site - be extensively trained especially in cognitive psychology (which I hold to be the empirical discipline most useful to philosophers) but also as far as possible (and depending on specialization) in other disciplines.
Lastly I would like to add that Karl Popper's famous falsificationism was probably closer to Yudkowsky's thinking, since Popper did not see falsifiability as a criterion of meaning, but rather as a criterion of whether a theory should be seen as scientific. Popper was, though, much more positively disposed toward metaphysics (e.g. he was a realist) than the logical positivists, and I'm not sure if Yudkowsky would like to follow him on that point.
I'm not sure your interpretation of logical positivism matches what the positivists actually say. They don't argue against having a mental model that is metaphysical; they point out that this mental model is simply a "gauge", and that anything physical is invariant under changes of this gauge.