Two Scenarios

Alice must answer the multiple-choice question, "What color is the ball?" The two choices are "Red" and "Blue." Alice has no relevant memories of The Ball beyond knowing that it exists. She cannot see The Ball or interact with it in any way; she cannot do anything but think until she answers the question.

In an independent scenario, Bob faces the same question, but he has two memories of The Ball. In one memory, The Ball is red. In the other, The Ball is blue. There are no "timestamps" associated with the memories and no way of determining whether one came before the other. Bob simply has two memories, and he somehow knows they are of the same ball.

If you were Alice, what would you do?

If you were Bob, what would you do?

Variations

More questions to ponder:

  • Should they do anything at all?
  • Should Alice and Bob act differently?
  • If Alice and Bob could circle more than one color, should they?
  • Would either answer change if the option "Green" were added to the choice list?
  • If the question was fill-in-the-blank, what should they write?
  • If Bob's memories were of different balls but he didn't know which ball was The Ball, should his actions change?
  • If Alice and Bob could coordinate, should it affect their answers?

Further Discussion

The basic question I was initially pondering was how to resolve conflicting sensory inputs. If I were a brain in a vat and received two simultaneous sensory inputs that conflicted (say, two different colors reported for the same ball), how should I process them?

Another related topic is whether a brain in a vat with absolutely no sensory inputs should be considered intelligent. These two questions were reduced to the two scenarios above, and I am asking for help in resolving them. I think they are similar to questions asked here before, but their connection to the two brain-in-a-vat questions seemed worth raising.

Realistic Scenarios

These scenarios are cute, but there are similar real-world examples. If you were asked whether a visible ball was red or green, and you happened to be unable to distinguish red from green, how would you interpret what you see?

Abstracting a bit, any input (sensory or otherwise) that is indistinguishable from another input can really muck with your head. Most optical illusions are tricks on eye-hardware (software?).

This post is not intended to be clever or teach anything new. Rather, the topic confuses me and I am seeking to learn about the correct behavior. Am I missing some form of global input theory that helps resolve colliding inputs or missing data? When the data is inadequate, what should I do? Start guessing randomly?

Comments

This post seems made to order to apply recently acquired knowledge. If I come across as pedantic, please attribute that to learner's thrill. From Probability Theory:

"Seeing is inference from incomplete information". -- E.T. Jaynes

Your usual sensory information is inadequate data. You're dealing with that every day. This seems a good starting point to generalize from; brains in vats seem like overkill to approach the question.

Alice and Bob are faced with a decision under uncertainty. Probability theory and decision theory are normative frameworks that apply here. All the information you've given is symmetrical, favoring neither choice over the other.

  • Should Alice or Bob do anything at all? That depends on the consequences to them of guessing one way or the other, or not guessing at all. If the outcomes are equally good (or equally bad), guessing randomly is optimal (see the sketch after this list).
  • Should they act differently? There's nothing in the information you've provided that seems to break the symmetry in uncertainty, so I'd say no.
  • Should they circle more than one color? ... And other variants - you've given no reasons to prefer one outcome to another, so in general we can't say how they should act.
  • If Alice and Bob could coordinate? They would (as far as I can tell by assessing the information given) have no more definite information by pooling their knowledge than they have separately.
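
To make the symmetry point concrete, here is a rough Python sketch. Both the probabilities and the payoffs are invented, since the post gives no scoring rule; the only point is that symmetry leaves the choice undetermined.

    # Rough sketch: decision under symmetric uncertainty.
    # Probabilities and payoffs are invented for illustration only.
    probs = {"Red": 0.5, "Blue": 0.5}                   # symmetric evidence
    payoff = {("Red", "Red"): 1, ("Red", "Blue"): 0,
              ("Blue", "Red"): 0, ("Blue", "Blue"): 1}  # symmetric scoring

    def expected_utility(guess):
        """Average the payoff of a guess over the possible true colors."""
        return sum(p * payoff[(guess, truth)] for truth, p in probs.items())

    print({guess: expected_utility(guess) for guess in probs})
    # Both guesses score 0.5, so any tie-breaker (including a coin flip) is
    # as good as any other; only asymmetric payoffs or asymmetric evidence
    # would change that.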

Very well put, Morendil. The decision one should make here depends on the consequences of erring one way or the other, and on that point there's insufficient information. One quibble, though:

Your usual sensory information is inadequate data. You're dealing with that every day. This seems a good starting point to generalize from

It's true, but I don't think there's any such thing as "adequate data" to compare it to. In a sense, all data is going to be inadequate. David MacKay's cardinal rule of information theory is, "To make inferences, you have to make assumptions." No matter how much data you get, it builds on a prior, and the data must be interpreted in light of that prior.

Human cognition has been refined over evolutionary history to start from very good priors, which allow it to make very accurate inferences from minimal data; you have to go out of your way to find the places where the priors point it in the wrong direction, such as optical illusions.

I wouldn't call it a quibble: I agree. There is a lovely tension between the idea that all perception, not just seeing, is "inference from incomplete information", and the peripatetic axiom, "nothing is in the intellect that was not first in the senses".

The only way to have complete information is to be Laplace's demon. No one else has truly "adequate data", and all knowledge is in that sense uncertain; nevertheless, inference does work pretty well. (So well that it sure feels as if logic need not have been "first in the senses", even though it is a form of knowledge and should therefore be to some extent uncertain... the epistemology, it burns us!)

Your usual sensory information is inadequate data. You're dealing with that every day. This seems a good starting point to generalize from; brains in vats seem like overkill to approach the question.

Agreed. Brains in vats were where my original questions started, and the specific questions got narrowed down to goofy sensory data. Narrowing them down produced the two scenarios.

Should they act differently? There's nothing in the information you've provided that seems to break the symmetry in uncertainty, so I'd say no.

What I find interesting is that Bob has more information than Alice but is stuck with the same problem. I found it counter-intuitive that more information did not help provide an action. Is it better to think of Bob as having no more information than Alice?

Adding a memory of Blue to Alice seems like adding information and provides a clear action. Adding a memory of Red on top of that removes the clear action. Is this because there is now doubt in the previous information? Or...?

Should they circle more than one color? ... And other variants - you've given no reasons to prefer one outcome to another, so in general we can't say how they should act.

Why wouldn't Bob circle both Red and Blue if given the option?

What I find interesting is that Bob has more information than Alice but is stuck with the same problem

Yes, it seems that Bob has more information than Alice.

This is perhaps a good context to consider the supposed DIKW hierarchy: data < information < knowledge < wisdom. Or the related observation from Bateson that information is "a difference that makes a difference".

We can say that Bob has more data than Alice, but since this data has no effect on how Bob may weigh his choices, it's a difference that makes no difference.

Is this because there is now doubt in the previous information?

"Doubt" is data, too (or what Jaynes would call "prior information"). Give Alice a memory of a blue ball, but at the same time give her a reason (unspecific) to doubt her senses, so that she reasons "I recall a blue ball, but I don't want to take that into account." This has the same effect as giving Bob conflicting memories.

We can say that Bob has more data than Alice, but since this data has no effect on how Bob may weigh his choices, it's a difference that makes no difference.

Okay, that makes sense to me.

Give Alice a memory of a blue ball, but at the same time give her a reason (unspecific) to doubt her senses, so that she reasons "I recall a blue ball, but I don't want to take that into account." This has the same effect as giving Bob conflicting memories.

Ah, okay, that makes a piece of the puzzle click into place.

In DIKW terms, what happens when we add Blue to Alice? When we later add Red? My hunch is that the label on the data simply changes as the set of data becomes useful or useless.

Also, would anything change if we add "Green" to Bob's choice list? My guess is that it would, because Bob's memories of Red and Blue are useful when asking about Green. Specifically, there is no memory of Green, and there are memories of Red and Blue.

Interesting.

What I find interesting is that Bob has more information than Alice but is stuck with the same problem. I found it counter-intuitive that more information did not help provide an action. Is it better to think of Bob as having no more information than Alice?

The way you've set the question up, Bob doesn't have any more relevant or useful information than Alice. They are both faced with only two apparently mutually exclusive options (red or blue), and you have not provided any information about how the test is scored or why either should prefer answering it to not answering it. Since Bob has two logically inconsistent memories, he does not actually have any more relevant information than Alice, so there should not be anything counter-intuitive about the fact that the extra memories don't change his probabilities.

Adding a memory of Blue to Alice seems like adding information and provides a clear action. Adding a memory of Red on top of that removes the clear action. Is this because there is now doubt in the previous information? Or...?

There's other information implicit in the decision that you are not accounting for. Alice has a set of background beliefs and assumptions, one of which is probably that her memory generally correlates with true facts about external reality. If she discovers logical inconsistencies in her memory, she has to revise her beliefs about its reliability and change how she weights remembered facts as evidence. You can't just ignore the implicit background knowledge that provides the context for the agents' decision-making when considering how they update in the light of new evidence.
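
One way to see why the conflicting memories cancel, as a toy model of my own (the reliability parameter and the prior are invented, not anything from the post): treat each memory as an independent noisy report about The Ball and update on the reports.

    # Toy model: each memory independently reports the true color with
    # probability r. Both r and the prior are invented; the cancellation
    # below holds for any r strictly between 0 and 1.
    def posterior_red(prior_red, remembered_colors, r=0.8):
        """Posterior P(The Ball is red) given a list of remembered colors."""
        odds = prior_red / (1 - prior_red)
        for color in remembered_colors:
            # Each remembered color multiplies the odds by a likelihood ratio.
            odds *= (r / (1 - r)) if color == "Red" else ((1 - r) / r)
        return odds / (1 + odds)

    print(posterior_red(0.5, []))               # Alice: no memories   -> 0.5
    print(posterior_red(0.5, ["Blue"]))         # one blue memory      -> 0.2
    print(posterior_red(0.5, ["Red", "Blue"]))  # conflicting memories -> 0.5

Under this invented model, Bob ends up exactly where Alice started, which matches the "difference that makes no difference" point above; in a richer model where r itself is uncertain, the conflicting pair would also pull the estimate of r down, which is the kind of reliability revision described here.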

Why wouldn't Bob circle both Red and Blue if given the option?

You haven't given enough context for anyone to answer this question. When confronted with the multiple-choice question, Bob may come up with a theory about what the existence of the question implies. If he hasn't been given any specific reason to believe particular rules are applied to the scoring of his answer, then he will have to fall back on his background knowledge about what kinds of agents might set him such a question and what their motivations and agendas might be. That will play into his decision about how to act.

Cyan:

Jaynes argues that when you have symmetry in a discrete problem such that switching all the labels leaves the problem the same, you must assign equal probabilities to the available choices. (See page 34 of this.) This covers all of your scenarios except the one where Bob has the option of choosing Green, a Ball color that he does not recall, and the fill-in-the-blank scenario.

So then you just randomly pick between Red and Blue? What should you do if the question is fill-in-the-blank instead of multiple choice?

Cyan:

The argument only speaks to probabilities, not actions. To choose what to pick, you need utilities. For example, if being right about the color has the same utility regardless of color but it's worse to guess wrong if the ball is red, then you'd want to pick red even if your probabilities are equal between the two alternatives.

The fill-in-the-blank problem is above my pay-grade. ;-)
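
To spell out that example with invented numbers (a sketch only: the 0.5 probabilities follow the symmetry argument above, and the -3 penalty is made up purely for illustration):

    # Equal probabilities for the two colors, plus an invented extra penalty
    # for guessing Blue when The Ball is actually red.
    probs = {"Red": 0.5, "Blue": 0.5}
    utility = {("Red", "Red"): 1, ("Red", "Blue"): 0,     # guess Red
               ("Blue", "Blue"): 1, ("Blue", "Red"): -3}  # guess Blue, ball red

    def expected_utility(guess):
        """Average the utility of a guess over the possible true colors."""
        return sum(p * utility[(guess, truth)] for truth, p in probs.items())

    print({guess: expected_utility(guess) for guess in probs})
    # Red scores 0.5 and Blue scores -1.0, so Red is the better answer even
    # though the probabilities are perfectly symmetric.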

Okay, "utilities" makes sense. That may have been the term I was missing.

The basic goal in all of this is preventing a system crash when there are two equal ways to move forward. Acting randomly isn't bad and is what I would have expected people to answer. What I was looking for is how to refine "acting randomly" after the system is modified. "Utilities" sounds right to me.

And as a major disclaimer, I understand this is probably very basic to most of you (plural, as in the community). I just don't want to start with the wrong building blocks.

There's a well-known example in philosophy called Buridan's Ass: a donkey is placed at the exact midpoint between two bales of hay, and being unable to choose between them (because they are identical), it starves to death. Somewhat amusingly, but also unfortunately, digital electronics can run into a similar problem known as metastability: a circuit can get stuck at a voltage roughly at the midpoint between those assigned to logic level 0 and logic level 1.

Oddly, adding an "if it's hard to decide, choose randomly" circuit doesn't help; it just creates another ambiguous situation at the borders of the voltage range you designate as "hard to decide".
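
A toy software analogue of that regress (my own sketch with invented thresholds, not a model of real circuit behaviour):

    import random

    LOW, HIGH = 0.4, 0.6   # invented bounds of the "hard to decide" band

    def read_bit(voltage):
        """Map an analog voltage to a bit, flipping a coin in the middle band."""
        if voltage < LOW:
            return 0
        if voltage > HIGH:
            return 1
        return random.choice([0, 1])   # the "choose randomly" tie-breaker

    print(read_bit(0.2), read_bit(0.8), read_bit(0.5))
    # The original ambiguity at 0.5 is handled, but testing whether an input
    # is below LOW or above HIGH is itself an analog comparison, so a voltage
    # hovering right at 0.4 or 0.6 is just as borderline as 0.5 was before.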

From the LessWrong wiki: "I don't know"

If we don't know anything about which is more likely, but there are only two options, then I think you're left to just assign a 50% chance to each. Here, the characters are prompted for a discrete action, so either guess is as good as the other.

And they have to do something, because even refusing to circle an answer is a course of action. It's just that in this case we don't have any reason to be very confident in any specific choice.

EY's "I don't know." is an interesting way of treating open-ended scenarios. Does it apply to "pick Red or Green"? This isn't strictly what you linked to, I suppose, so that may not be relevant to what you were trying to say.

And they have to do something, because even refusing to circle an answer is a course of action. It's just that in this case we don't have any reason to be very confident in any specific choice.

So, when asking for an action, wouldn't "do nothing" be included in the choices? In other words, the three options are "Pick Red", "Pick Green", "Do nothing", and Alice and Bob choose randomly from those three?

Cyan:

This post and the subsequent discussion seem relevant.

And it had a sequel.

Okay, yeah, that post was much more in the same vein as this one. Thanks for the link. Now I get to sift through the comments. :)

Yeah, I remember that post and almost linked it but decided not to. I don't remember the sequel... so I'll go read that.

I remember getting hopelessly lost in the comments and never finding an actual resolution.

Of note, this post really doesn't care about probabilities and goes out of its way to make things symmetrical. That isn't the point. I want to know how to act. When faced with an impossible problem, what do I do?

Most optical illusions are tricks on eye-hardware.

No, almost all are software hacks.

Hmm, I edited the post. Thanks.

Cyan:

Is an ASIC hardware or software?