The sentence “snow is white” is true if and only if snow is white.
To say of what is, that it is, or of what is not, that it is not, is true.
—Aristotle, Metaphysics IV
Walking along the street, your shoelaces come untied. Shortly thereafter, for some odd reason, you start believing your shoelaces are untied. Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace. There is a sequence of events, a chain of cause and effect, within the world and your brain, by which you end up believing what you believe. The final outcome of the process is a state of mind which mirrors the state of your actual shoelaces.
What is evidence? It is an event entangled, by links of cause and effect, with whatever you want to know about. If the target of your inquiry is your shoelaces, for example, then the light entering your pupils is evidence entangled with your shoelaces. This should not be confused with the technical sense of “entanglement” used in physics—here I’m just talking about “entanglement” in the sense of two things that end up in correlated states because of the links of cause and effect between them.
Not every influence creates the kind of “entanglement” required for evidence. It’s no help to have a machine that beeps when you enter winning lottery numbers, if the machine also beeps when you enter losing lottery numbers. The light reflected from your shoes would not be useful evidence about your shoelaces, if the photons ended up in the same physical state whether your shoelaces were tied or untied.
To say it abstractly: For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target. (To say it technically: There has to be Shannon mutual information between the evidential event and the target of inquiry, relative to your current state of uncertainty about both of them.)
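To make the abstract statement concrete, here is a minimal sketch in Python (the joint distributions are made-up numbers, purely for illustration): an always-beeping lottery machine carries zero mutual information about the ticket, while a machine whose beep depends on the ticket's state carries some.

```python
from math import log2

def mutual_information(joint):
    """Shannon mutual information I(X;Y) in bits, given a joint
    distribution as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A machine that beeps whether or not the ticket wins:
# the beep is statistically independent of the outcome.
useless = {("win", "beep"): 0.01, ("lose", "beep"): 0.99}
print(mutual_information(useless))  # 0.0 -- no entanglement, no evidence

# A machine whose beep depends on the ticket's state
# (beeps 90% of the time for winners, 10% for losers).
discriminating = {("win", "beep"): 0.009, ("win", "silent"): 0.001,
                  ("lose", "beep"): 0.099, ("lose", "silent"): 0.891}
print(mutual_information(discriminating))  # > 0 -- the beep is evidence
```

The event only counts as evidence when it happens differently depending on the state of the target, which is exactly what the mutual information measures.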
Entanglement can be contagious when processed correctly, which is why you need eyes and a brain. If photons reflect off your shoelaces and hit a rock, the rock won’t change much. The rock won’t reflect the shoelaces in any helpful way; it won’t be detectably different depending on whether your shoelaces were tied or untied. This is why rocks are not useful witnesses in court. A photographic film will contract shoelace-entanglement from the incoming photons, so that the photo can itself act as evidence. If your eyes and brain work correctly, you will become tangled up with your own shoelaces.
This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise. If your retina ended up in the same state regardless of what light entered it, you would be blind. Some belief systems, in a rather obvious trick to reinforce themselves, say that certain beliefs are only really worthwhile if you believe them unconditionally—no matter what you see, no matter what you think. Your brain is supposed to end up in the same state regardless. Hence the phrase, “blind faith.” If what you believe doesn’t depend on what you see, you’ve been blinded as effectively as by poking out your eyeballs.
If your eyes and brain work correctly, your beliefs will end up entangled with the facts. Rational thought produces beliefs which are themselves evidence.
If your tongue speaks truly, your rational beliefs, which are themselves evidence, can act as evidence for someone else. Entanglement can be transmitted through chains of cause and effect—and if you speak, and another hears, that too is cause and effect. When you say “My shoelaces are untied” over a cellphone, you’re sharing your entanglement with your shoelaces with a friend.
Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it’s why a claim that your beliefs are not contagious—that you believe for private reasons which are not transmissible—is so suspicious. If your beliefs are entangled with reality, they should be contagious among honest folk.
If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality. You should apply a reflective correction, and stop believing.
Indeed, if you feel, on a gut level, what this all means, you will automatically stop believing. Because “my belief is not entangled with reality” means “my belief is not accurate.” As soon as you stop believing “ ‘snow is white’ is true,” you should (automatically!) stop believing “snow is white,” or something is very wrong.
So try to explain why the kind of thought processes you use systematically produce beliefs that mirror reality. Explain why you think you’re rational. Why you think that, using thought processes like the ones you use, minds will end up believing “snow is white” if and only if snow is white. If you don’t believe that the outputs of your thought processes are entangled with reality, why believe the outputs of your thought processes? It’s the same thing, or it should be.
Why not just say e is evidence for X if P(X) is not equal to P(X|e)?
Incidentally, I don't really see the difference between probabilistic dependence (as above) and entanglement. Entanglement is dependence in the quantum setting.
Trivially, because P(X|e) could be less than P(X), in which case e would be evidence against X rather than for it.
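To spell out the asymmetry with a toy calculation (all numbers invented for illustration), here is a case where observing e shifts the probability of X, but downward:

```python
# Toy numbers, made up for illustration:
# X = "it will rain today", e = "the sky is clear this morning".
p_x = 0.3                     # prior P(X)
p_e_given_x = 0.2             # clear mornings are rarer before rain
p_e_given_not_x = 0.6

# Total probability of the observation, then Bayes' theorem.
p_e = p_e_given_x * p_x + p_e_given_not_x * (1 - p_x)
p_x_given_e = p_e_given_x * p_x / p_e

print(p_x_given_e)  # ~0.125, down from the prior 0.3
```

So P(X|e) differs from P(X), yet e is evidence *against* X.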
Note to self: do not post when tired, which leads to asking embarrassingly trivial questions.
Quantum wave amplitudes behave in some ways like probabilities and in other ways unlike probabilities. Because of this, some concepts have analogues, while others don't.
But no concepts are exactly equivalent. For example, evidence isn't integrally linked to complex numbers, while entanglement is.
Nonetheless, it is instructive (imho) to consider how (assigned) probability is a property of the observer, and not an inherent property of the system. If a qubit is (|0> + |1>)/sqrt(2), and I measure it and observe 0, then I'm entangled with it so relative to me it's now |0>. But what's really happened is that I became (|observed 0> + |observed 1>)/sqrt(2), or rather, that the whole system became (|0,observed 0> + |1,observed 1>)/sqrt(2). This is closely analogous to the Law of Conservation of Probability; if you take Expectations conditional on the observation, then take Expectation of the whole thing, you get the original expectation back. This is because observing the system doesn't change the system, it just changes you. This is obvious in Bayesian probability in the classical-mechanics world; the only reason it doesn't seem obvious in the quantum realm is that we've been told over and over that "observing a quantum system changes it".
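The conservation property this comment invokes can be checked directly: weight each possible posterior by the probability of the observation that produces it, and you recover the prior exactly. A small sketch with made-up numbers:

```python
# Conservation of expected evidence: E_e[P(X|e)] = P(X).
# Numbers are made up for illustration.
p_x = 0.4
likelihood = {"obs_a": (0.9, 0.2),   # (P(e|X), P(e|not X))
              "obs_b": (0.1, 0.8)}   # likelihoods sum to 1 per hypothesis

expected_posterior = 0.0
for p_e_x, p_e_notx in likelihood.values():
    p_e = p_e_x * p_x + p_e_notx * (1 - p_x)   # P(e)
    posterior = p_e_x * p_x / p_e              # P(X|e), Bayes' theorem
    expected_posterior += p_e * posterior      # weight by P(e)

print(expected_posterior)  # 0.4 (up to floating point): exactly the prior
```

Observing can move you toward one posterior or another, but in expectation the update is zero-sum, just as the comment says: observation changes you, not the prior bookkeeping.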
Quite honestly, I don't see how a Bayesian can possibly be a Copenhagenist. Quantum probability is Bayesian probability, because quantum entanglement is just the territory updating itself on an observation, in the same way that Bayesian 'evidence entanglement' is updating one's map on an observation.
Classical probability preserves amplitude, quantum preserves |amplitude|^2.
They're different things, and they could, potentially, be even more different.
Um, but isn't that just a convention? Why should we treat the "amplitude" of a classical probability as being the probability?
Does the problem have something to do with the extra directionality quantum probabilities have by virtue of the amplitude being in C? (so that |0> and (-1*|0>) can cancel each other out)
Classical probability transformations preserve amplitude and quantum ones preserve |amplitude|^2. That's not a whole reason, but it's part of one.
Yes, that's part of the difference. Quantum transformations are linear in a two-dimensional wave amplitude but preserve a 1-dimensional |amplitude|^2. Classical transformations are linear in one-dimensional probability and preserve 1-dimensional probability.
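For what it's worth, the two conservation laws are easy to exhibit side by side (a sketch; the matrices are arbitrary examples, not anything canonical): a stochastic matrix preserves the sum of probabilities, while a unitary matrix such as the Hadamard gate preserves the sum of squared amplitude moduli, and neither preserves the other quantity.

```python
from math import sqrt

def apply(matrix, vec):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [matrix[0][0] * vec[0] + matrix[0][1] * vec[1],
            matrix[1][0] * vec[0] + matrix[1][1] * vec[1]]

# Classical: a stochastic matrix (columns sum to 1) preserves sum(p).
stochastic = [[0.7, 0.4],
              [0.3, 0.6]]
p = [0.25, 0.75]
print(sum(apply(stochastic, p)))          # 1.0 (up to floating point)

# Quantum: a unitary matrix (here the Hadamard gate) preserves sum(|a|^2).
h = 1 / sqrt(2)
hadamard = [[h, h],
            [h, -h]]
amp = [complex(0.6, 0.0), complex(0.0, 0.8)]   # |0.6|^2 + |0.8|^2 = 1
out = apply(hadamard, amp)
print(sum(abs(a) ** 2 for a in out))      # 1.0 (up to floating point)
```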
Ah, I get it now, thanks!
(Copenhagen is still wrong though ;)
"This should not be confused with the technical sense of "entanglement" used in physics - here I'm just talking about "entanglement" in the sense of two things that end up in correlated states because of the links of cause and effect between them."
That's literally in the third paragraph.
I think you mean: if P(X) < P(X|e), then e is evidence for X. That is a good definition of evidence, but it doesn't function on the same level as Yudkowsky's above. Yudkowsky is explaining not just what function evidence has in truth-finding; he is also explaining how evidence is built into a physical system, e.g., a camera, a human, or some other entanglement device. The Bayesian definition of evidence you gave tells us what evidence is, but it doesn't tell us how evidence works, which Yudkowsky's does.
X: presence of flower A in a certain area. e: there are bees in that area. Then you would possibly have P(X) < P(X|e), given that bees help with pollination. Should we then say "the probability of having flower A in an area is greater if we have bees, therefore e is evidence for X (bees are evidence for flower A)"? And what if X is "having presents brought by Santa Claus" and e is "we are in the USA instead of Cambodia" (which increases the probability of having presents, because that date is more commonly celebrated with presents in the USA)?
That definition does not always coincide with what is described in the article; something can be evidence even if P(X|e) = P(X).
Imagine that two cards from a shuffled deck are placed face-down on a table, one on the left and one on the right. Omega has promised to put a monument on the moon iff they are the same color.
Omega looks at the left card, and then the right, and then disappears in a puff of smoke.
What he does when he's out of sight is entangled with the identity of the card on the right. Change the card to one of a different color and, all else being equal, Omega's action changes.
But, if you flip over the card on the right and see that it's red, that doesn't change the degree to which you expect to see the monument when you look through your telescope. P(monument|right card is red) = P(monument) = 25/51
It does change your conditional beliefs, though, such as what the world would be like if the left card turned out to also be red: P(monument|left is red & right is red) > P(monument|left is red)
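For anyone who wants to check the arithmetic, the probabilities in this example can be confirmed by brute-force enumeration over ordered pairs of cards ("monument" stands for the two cards sharing a color):

```python
from fractions import Fraction
from itertools import permutations

RED = set(range(26))   # cards 0-25 are red, 26-51 are black

def probability(condition, given=lambda l, r: True):
    """Exact probability of `condition` over ordered (left, right)
    pairs of distinct cards, optionally conditioned on `given`."""
    pairs = [(l, r) for l, r in permutations(range(52), 2) if given(l, r)]
    hits = sum(1 for l, r in pairs if condition(l, r))
    return Fraction(hits, len(pairs))

same_color = lambda l, r: (l in RED) == (r in RED)   # monument iff same color

print(probability(same_color))                                 # 25/51
print(probability(same_color, given=lambda l, r: r in RED))    # 25/51, unchanged
print(probability(same_color,
                  given=lambda l, r: l in RED and r in RED))   # 1
```

Seeing that the right card is red leaves P(monument) at exactly 25/51, even though the conditional belief P(monument | both red) jumps to certainty.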
Of course e can be evidence even if P(X|e)=P(X) -- it just cannot be evidence for X. It can be evidence for Y if P(Y|e)>P(Y), and this is exactly the case you describe. If Y is "there is a monument and left is red or there is no monument and left is black", then e is (infinite, if Omega is truthful with probability 1) evidence for Y, even though it is 0 evidence for X.
Similarly, you watching your shoelaces come untied is zero evidence about my shoelaces...
I like the word entanglement, because it's a messy concept. Reality, whatever else it might be, is messy. That's why statements like the preceding sentence can't ever be completely true. The messiness makes it hard to talk about anything real in any absolutely definitive sort of way.
I can be definitive about artificial constructs in an artificial world, yes. Hence, mathematics. But when you or I try to capture the real world with that comforting clarity, we are doomed. Well, mostly doomed. 85.27% doomed, plus or minus an unknown set of unknowns.
That's the problem I have with your otherwise (as usual) thought provoking post: YES, our perceptions are entangled with the state of the world and that often influences our beliefs which then may entangle our utterances and therefore eventually entangle other people's beliefs. BUT what is the nature of that entanglement? You can't know for sure. What specifically are the beliefs that you intend to refer to? You can't know for sure.
The factor I expected to see in your essay, but did not, is interpretation based on mental models. There are many models I might have in my mind that could influence what counts as evidence.
You wrote: "For an event to be evidence about a target of inquiry, it has to happen differently in a way that's entangled with the different possible states of the target."
If we put the missing material about interpretation in there this might read:
"For me to consider an event to be evidence about a target of inquiry, I must first possess or construct a model of that event and that target, and also a model of the world that contains and relates the event and target with each other. Then, for the event to be evidence CORROBORATING a particular theory about the target, I must imagine plausible alternative events that would CONTRADICT that theory."
Unfortunately, our models can be wrong, and are often wrong in interesting ways. So, we can satisfy your version of the statement, or my version, and still be counting as evidence things that may be no evidence at all. Example: "I was about to go for a car ride and a black cat crossed my path, which I interpret as a portent of evil, so I went back into my house. The black cat was evidence of evil in that particular situation because a black cat crossing my path is a rare event; it is possible for the cat not to have crossed my path; and in my culture, which is the collective product of successful experience staying alive and procreating, it is considered a portent of evil for a black cat to cross one's path. Had a black cat not crossed my path, I would consider that evidence (weak evidence) that I was not about to experience misfortune."
Seems to me that you can in principle rationally believe (1) that your beliefs are entangled with reality but (2) that you don't have any more effective way of persuading others than to say "see, I believe this". Specifically, imagine that every now and then you find yourself acquiring a belief in a particular, weird, internal way (say, you have the strong impression that God speaks to you, accompanied by a mysterious smell of apricots), and that several times this has happened and you've checked out the belief and it's turned out to be true. (And you've never checked it and found it to be false, and the instances you checked were surprising, etc.)
I think you'd be entitled, in this situation, to believe that your weirdly acquired beliefs are entangled with reality; but I can't see any way you could be very convincing to someone who didn't know the history (barring further such episodes in the future, of which there is no guarantee); and even in the best-possible case where whenever this thing happens to you you immediately tell someone else of the belief you've acquired and get them to check it, it could be very difficult for them to rule out hoaxing well enough to make them trust you.
Now, the standard case of incommunicably grounded beliefs -- which I suspect Eliezer had in mind here -- is of some sorts of religious belief; and they share at least some features with my semi-silly example. They generally lack the really important one (namely, repeated testing), and that's a big strike against them; but the big strike is the poor quality of the evidence, not its incommunicability as such.
So yes, incommunicability is suspicious, and a warning sign, but I think Eliezer goes too far when he says that a model that says your beliefs aren't evidence for others is ipso facto saying that you don't yourself have reason to believe. Unless he really means literally absolutely no evidence at all for others, but I don't think anyone really believes that.
You can tell them that your impressions have previously always been correct and surprising. To the extent that they trust you, the evidence will be just as good for them as it was for you.
The extent to which they trust you may not be very great, especially given that what you're telling them is that sometimes God speaks to you with an aura of apricots and reveals surprising but mundane truths. In any case, telling them this doesn't make your evidence any less incommunicable, except in so far as it makes all evidence communicable.
(Note: old "g" = newer "gjm".)
In this case, they'll trust you less than if you told them that your shoelaces were untied, but it's not fundamentally different. Your shoelaces being untied is only communicable in the sense that you can tell someone, unless you count telling them to look at your shoes, but that doesn't seem to be what this is talking about.
Unless I misunderstood Eliezer, he seemed to be saying that all evidence is communicable in exactly this way.
I don't know if it is just semantics but it seems to me that you are conflating evidence and our perception of that evidence, since you write:
Take the following thought experiment. Suppose Alan has untied shoelaces that he can see. Suppose also that Alan's shoelaces produce a barely audible sound when they are untied, and suppose that Barbara can and does hear this sound, while Alan can't and doesn't.
Now, if I interpret you correctly, your definition of evidence amounts to saying that Barbara and Alan have different evidence with regard to Alan's untied shoelaces. However, it seems more intuitive to say that there is a single state of things, Alan's untied shoelaces, which constitutes the only evidence, and that this evidence is perceived differently by Barbara and Alan.
You also think that evidence is a type of event - of course, this would be true if evidence really was someone's perception of some state of affairs that led them to form true beliefs. But I believe that there are many types of evidence that simply are not events. What about mathematical evidence for some belief? Gödel's incompleteness theorem is conclusive evidence for the fact that you can't derive all the true theorems of mathematics from a formal system. (Please don't boil me too much if I am like not totally correct.) Nevertheless, that theorem is not an event in time - it doesn't cause anything. Metaphorically, we might say a certain mathematical theorem might "cause" another one - or one theorem might be the immediate "consequence" of another - but mathematical entailment relations are different from natural causation, and all this talk is just metaphorical.
Lastly, you write that:
However, I can think of some instances in which perhaps "blind faith" is warranted. For instance, I can not conceive of a situation that would make 2+2 = 4 false. Perhaps for that reason, my belief in 2+2=4 is unconditional.
Yes, it is conditional. For example, I guess, if you had put two stones next to another two, then counted and found that there were five stones in total, that would be proof that 2+2 does not equal 4. This is how your belief "2+2=4" could be falsified.
I know this is Eliezer's line but it still looks like nonsense to me. This experience would be evidence that stones have a tendency to spontaneously appear when four stones are put next to each other.
I have a simpler reason that the belief 2+2 = 4 is not blind. When he says he has blind faith because "I can not conceive of a situation that would make 2+2 = 4 false," it is not blind, because he is trying to find an alternative rather than entirely avoiding questioning his belief.
Two cups (of sugar) + two cups (of water) = 2 cups (of sugar water)
Therefore, 2 + 2 = 2. ;)
To be very anal and nit-picky with your joke (cuz I feel like it):
You're mixing equal volumes with inconsistent densities (and thus mass) and trying to compute a final volume. Either way you'd get more than 2 cups.
Back on topic:
I have a very simple definition of evidence.
Anything that modifies my mental probabilities about certain beliefs I hold to be true or false is considered evidence by me.
Whether the evidence is weak, strong, or even reliable in the first place is irrelevant if we're trying to define what evidence is.
I disagree with evidence being an event. It is rather an attribute; the event is the observation of the evidence. The event (the observation: hearing, seeing, smelling, whatever) is only useful for determining whether the evidence (the attribute) is reliable (true).
The evidence itself does not change. It is a static thing. If you see different evidence next time, that's different evidence (a different static thing).
I DO agree with the entanglement, though. Evidence is entangled with both your map and (hopefully) the territory. After all, the whole point of evidence is to modify your map to better fit the territory. The nature of its entanglement is simple, though: as stated above, it simply shifts your probabilities (confidences in beliefs).
First time poster, noob in rationality so have some mercy folks ;)
Is there any decent literature on the extent to which the fact of knowing that my shoelaces are untied is a real property of the universe? Clearly it has measurable consequences - it will result in a predictable action taking place with a high probability. Saying "I predict that someone will tie his shoelaces when he sees they're undone" is based not on the shoelaces being untied, nor on the photons bouncing, but on this abstract concept of them knowing. Is there a mathematical basis for stating that the universe has measurably changed in a nonrandom way once those photons' effects are analysed? I'd love to read more on this.
Also (closely related question), I know that overall entropy would increase in the whole system, but does this entanglement represent a small local increase in order?
"belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise"
Such a great way to put it! I wish I had read this page a few years ago, when I was arguing with my dad about religion. I wasn't able to coherently put this thought, though in retrospect I believe I was trying to get there. I ended up asking a hypothetical situation about advanced aliens visiting and telling him that his beliefs were wrong, and explaining why. He disappointed me with his answer: that he would like to believe he is strong enough in his faith to ignore the aliens. This is when I realized it would be fruitless to attempt to persuade him away from religion.
“If you don't believe that the outputs of your thought processes are entangled with reality, why do you believe the outputs of your thought processes?”
I don’t. Well, not like Believe. Some few of them I will give 40 or even 60 decibels.
But I’m clear that my brain lies to me. Even my visual processor lies. (Have you ever been looking for your keys, looked right at them, and gone on looking?)
I hold my beliefs loosely. I’m coachable. Maybe even gullible. You can get me to believe some untruth, but I’ll let go of that easily when evidence appears.
I think I would nominate this as the most important post on LessWrong. I keep referring people to it.
Great article, I have only this one comment:
"If your beliefs are entangled with reality, they should be contagious among honest folk."
Haven't true and false beliefs both proven to be contagious among honest folk? Just as we should not use a machine that beeps for all numbers as evidence for winning lottery numbers, we should not use whether or not a belief is contagious as evidence of its truth.
It depends on how likely the respective explanations are.
I think it depends on that, and only that, and should be completely disconnected from any social criteria such as "being contagious."
Also, Eliezer writes, "If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality."
This seems false. Should LW thinkers take it as a problem that our methods are usually completely lost on, for example, fundamentalist scientologists? In fact, I don't think it's a stretch to claim that most people do not subscribe to LW methods, does that suggest a problem with LW methods? Do LW methods fail the test of being contagious and therefore fail the test of being reliable methods for acquiring evidence?
I think "should" here means "justified," not necessarily "likely."
Your (rational) beliefs should be considered evidence by the irrational, even though they likely won't be.
"Should LW thinkers take it as a problem..."
Yes to all of that. There are many problems with LW methods and beliefs, and those problems impede other people from seeing the parts that are right.
Scientologists believe that any method that wasn't invented or used by Ron Hubbard is bad. As such, they are not open to evaluating a method on its merits, and failure to convince them isn't a sign of failure of a method for acquiring evidence.
Sure. Scientologists are not close to being the only ones who disagree with LW's mistakes.
I think this should be more like "then your model offers weak evidence that your beliefs are not themselves evidence".
If you're Galileo and find yourself incapable of convincing the church about heliocentrism, this doesn't mean you're wrong.
Edit: g addresses this nicely.
I don't think that Eliezer suggested using a belief's contagiousness as strong evidence of its truth. Rather, a belief's lack of contagiousness is strong evidence of its untruth.
No, correct beliefs should only be contagious among honest folk who believe each other to be rational and honest. If I make the claim that The FSM is dictating these words to me, you would probably think me lunatic or liar. But if I truly can correctly recognize when I have been Touched by His Noodly Appendage, then my beliefs are entangled with reality but, understandably, not contagious. Furthermore, it would be perfectly rational for me to believe this revelation and at the same time not to consider it evidence for others. The point is that some beliefs, certainly the more extraordinary of them, should not be contagious, except through evidence as raw and unprocessed as possible.
Also, entanglement is necessary but not sufficient for correct beliefs. The fact that my beliefs contain information about the world is not enough for them to be correct. For example, if I misread the photon pattern, I could think that my shoelaces are tied when they are not, and untied when they are tied. This still has the same amount of entanglement, the same amount of information, yet the beliefs are incorrect.
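This point can be made numerically concrete: a systematically inverted reading carries exactly as much mutual information as an accurate one, yet every belief it produces is wrong. A small pure-Python check, with a made-up distribution:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Accurate reading: belief always matches the shoelaces.
accurate = {("tied", "believe tied"): 0.5, ("untied", "believe untied"): 0.5}
# Inverted reading: belief is always the opposite of the truth.
inverted = {("tied", "believe untied"): 0.5, ("untied", "believe tied"): 0.5}

print(mutual_information(accurate))   # 1.0 bit
print(mutual_information(inverted))   # 1.0 bit: same information, all beliefs wrong
```

Entanglement measures how much the belief depends on the world, not whether the dependence points the right way; accuracy requires correct processing on top of it.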
I'm a newcomer working through the sequences for the first time, so I apologize if this has been more fully discussed or explained elsewhere, but I've hit a sticking point here. I was in agreement up until:
This works very well for claims like 'snow is white' but not so well for abstract concepts. In order for the evidence-based belief to transmit well, the listener must have definitions of 'snow' and 'white' that are compatible enough with the speaker's definitions for the belief to fit logically into their frame of reference - their map of the territory, if you will. Take out 'snow' and 'white' and plug in some more abstract concepts there and you'll see how quickly divergence can occur.
Two people may observe the same objective evidence and use it to reach different conclusions because their frames of reference, definitions, and prior understandings differ. Therefore, the section above doesn't seem to hold true for any beliefs bar the most simplistic and concrete.
That is, of course, unless the operative word in the quoted paragraph is claim, since anyone who outright states their beliefs are intransmissible is probably engaging in self-deception at one level or another. That seems something of an overly literal interpretation of the piece, though. Am I missing something?
It's definitely harder to reconcile two sets of conflicting beliefs when you're dealing with abstractions -- maybe even intractable in some cases -- but I don't think it's impossible in principle. In order for an abstraction to be meaningful, it has to say something about the sensory world; that is, it has to be part of a network of beliefs grounded in sensory evidence. That has straightforward consequences when you're dealing with physical evidence for an abstraction; when dealing with abstract evidence, though, you need to reconstruct what that evidence means in terms of experience in order to fit it into a new set of conceptual priors. We do similar things all the time, although we might not realize we're doing them: knowing that several languages conflate parts of the color space that English describes with "green" and "blue", for example, might help you deal with a machine translation saying that grass is blue.
This only becomes problematic when dealing with conceptually isolated abstractions. Smells are a good example: it's next to impossible to describe a scent well enough for it to then be recognizable without prior experience of it. Similarly, descriptions of high-level meditation often include experiences which aren't easily transmissible to non-practitioners -- not because of some ill-defined privileges attached to personal gnosis, but because they're grounded in very unusual mental states.
Thank you for your reply! It's certainly helped to clarify the matter. I wonder now if a language used in a hypothetical culture where people placed a much higher value on sense of smell or meditative states might have a far broader and more detailed vocabulary to describe them, resolving the problems with reconstructing the evidence. It's almost Sapir-Whorf - regardless of whether or not language influences thought, it certainly influences the transmission of thought.
I think on reflection that most of my other objections relate to cases where the evidence isn't in dispute but the conclusions drawn from it are (see: much of politics!) Those could, in principle, be resolved with a proper discussion of priors and a focus on the actual objective evidence as opposed to simply the parts of it that fit with one's chosen argument. That people in most cases don't (and don't want to) reconcile the beliefs and view the situation as more complex than 'cheering for the right team' is a fault in their thinking, not the principle itself.
Um... "There has to be Shannon mutual information between the evidential event and the target of inquiry"?
So, cause-and-effect chains would be pretty useful, I would think. A you-must-think-through-every-step kind of problem solver would benefit greatly, for example.
If aliens with no concept of human math landed on Earth, then "2+2" would only be three separate images, i.e., "3" would just be another image.
I remember spending hours agonizing over this idea. How do I know if my eyes and brain are working correctly? Any thought process that might lead me to a conclusion would be taking place in my brain. The same brain that I want to prove works correctly. The best I could come up with was that if my brain works correctly, I stand to gain by operating under the assumption that it does, and I stand to lose by operating under the assumption that it doesn't. If my brain does not work correctly, then I have no basis for any conclusion so it makes no difference what my operating assumptions are.
I don't get this inference. It seems like the belief itself is the evidence, and you entangle your friend with the object of your belief just by telling them your belief, regardless of whether you can explain the reasons? (Private beliefs seem to me suspicious on other grounds.)
If your friend trusts that you arrived at your belief through rational means, you are correct. But often when someone can't give a reason, it's because there is no good reason. Hence "suspicious".
I struggle with comprehending this sentence:
To say it abstractly: For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target.
That means if I want to show evidence that waters changes its solid form by melting (target of inquiry), there must be evidence that I can freeze water (different possible state)? And on top of that there must be evidence that gas can condense to a fluid and the fluid can vaporise into gas?
Is my rewritten interpretation correct?
I’m very sorry I have a hard time wrapping my head around this concept.
Your example still seems confused to me. Maybe try something simpler, like "Will it rain tomorrow?" because you want to pack for a trip. There are lots of things you can inquire about to figure out whether this is likely. For example, if it's cloudy now, that probably has some bearing on whether it will rain. You can look up past weather records for your region. More recently, we have detailed forecast models that you can access through the internet to learn about the weather tomorrow. All of these are evidence.
There are also lots of observations you can make that are, for all you know, uncorrelated with whether it will rain tomorrow, like the outcome of a die you throw. These do not constitute evidence toward your question, or at least not very informative evidence.
Thank you for your reply it helped me a lot.
It seems to me that you are confused by an overlap in the meanings of the word "state".
In this context, it is the "state of the target of inquiry": water either changes its solid form by melting or it does not. So "state" refers to the difference between "yes, water changes its solid form by melting" and "no, water does not change its solid form by melting". Those are your two possible states, and the fact that water itself has an unrelated set of states (solid/liquid/gas) to be in is just a coincidence.
Thank you, your explanation of state made it easier for me to understand the meaning.