Major spoilers for planecrash (Book 2) and for Eliezer's Masculine Mongoose #3.

How Bayesians Lie; How to Lie to Bayesians

Pyrofessor groaned out loud.  “This is why I can’t stand his kind of cognitive augment,” she said.  “He can’t just refuse to admit his identity like a normal fucking meta.  No, the Goose has to make a big deal out of trying to act exactly like a real human in his shoes.  Not because he’s trying to hide who he is.  He knows we all know.  He’s just being a fucking priss about his interpretation of the mask code.  He thinks that if you knowingly behave according to a likelihood function that you can probabilistically distinguish from the likelihood function of a normal, you might as well hang a sign on your forehead.  So he acts all ostentatiously precise about his interpretation of Bruce Kent, in order to sniff about how the rest of us are getting it wrong.  And he does that knowing all you admiring numbskulls are completely oblivious to how he’s behaving on the augment-to-augment level.  God, I hate Bayesians, they’re often right in principle but do they have to be such fucking snobs about it -”

--Eliezer, Masculine Mongoose #3

Keltham is constantly tracking the Conspiracy world in his mind.  That's part of this.  He's living in both worlds simultaneously and distinctly and unhesitatingly.  There's no pause in him about whether or not the Conspiracy is real, for purposes of accusing Carissa of being in on it within the Conspiracy world.  Keltham steps all the way mentally into the world where the Conspiracy is just a thing and Carissa is just part of it, and then in that world when Sevar suddenly vanished away 'to the bathroom' obviously she was up to something in response to his own lecture and obviously the other students' questions were meant as a distraction.

Asmodia sees the game now, has seen the game, even without the enhancement spells she remembers.

Cheliax can't rely on what anything 'looks like', they can't ask if it's a 'giveaway' or if it could 'just as reasonably be something else'.  Keltham isn't going to wonder each time whether or not the Conspiracy is real and mentally back down from labeling Carissa's departure as suspicious.  Cheliax has to consider what everything will look like to Keltham while he's mentally inhabiting the world where the Conspiracy is just real and there's no arguing with that.

There was only one guaranteed-correct move in that game, and it was to mentally live inside the alterCheliax world themselves, and just do what alterCheliax would do, notice every time anyone's overt behavior departed from their behavior in alterCheliax whether or not that looked like a giveaway at a first glance.  Sevar needed to notice that the version of her not in the Conspiracy world probably did not suddenly need to go to the bathroom, because Keltham did notice that.  If that was even twice as likely on the Conspiracy world as the Ordinary world, and Keltham correctly estimates that, Asmodia has grasped by now that a lot of "twice as likelies" multiplied together can add up very fast...

Asmodia sees the game, the game between true dath ilani.  She can't properly play the game against Keltham without enhancement, but she can see how fast Cheliax is losing.  They can lose it very quickly once Keltham gets oriented enough that he starts believing in his own numbers.  Hours, not days.  It's all there in the math.

--Eliezer, planecrash

Weighted Possible Worlds and their Correlated Observations

Here's how I like to think about Bayesian priors and updates. Imagine the panoply of possible worlds. Now imagine only the subset of that panoply that looks, first-personally, like the world you've seen so far. You're eliminating all the worlds that don't have you as an observer, and all the worlds where you-as-an-observer exist but made different observations than you recall seeing.

You now have this overlay of possible worlds on top of your view of the world. Weight each possible world in the overlay by its relative likelihood: let worlds that are very probable be heavy, and worlds that are deeply implausible be light. Don't worry too much about justifying those weights right now; the whole point of Bayesian updates is that your prior will quickly update to something reasonable. Just try to get a feel for your best gut judgements of possible world plausibility and encode those gut judgements as relative weights.
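As a minimal sketch of that encoding step (in Python, with hypothetical world names and weights chosen purely for illustration), gut judgements become relative weights, and normalizing them turns the overlay into a prior:

```python
# Encode gut judgements of plausibility as relative weights.
# World names and numbers are hypothetical, for illustration only.
raw_weights = {
    "Ordinary world": 9.0,    # heavy: very probable
    "Conspiracy world": 1.0,  # light: deeply implausible, but not zero
}

# Normalize so the weights read as a probability distribution (the prior).
total = sum(raw_weights.values())
prior = {world: w / total for world, w in raw_weights.items()}
# prior["Ordinary world"] is 0.9; the weights only matter relative to each other.
```

Only the ratios matter here; starting from 90 and 10, or 0.45 and 0.05, yields the same prior after normalization.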

One way to visualize weight is as length. Let each possible world in your overlay be a line segment, in addition to its overlay across your visual field. When a possible world says that an event is 60% likely, that possible world is wagering 60% of its current weight on that event occurring and 40% against that event occurring. If the possible world is represented by a line segment, then 60% of the line segment is now colored blue for the event occurring and 40% of the segment is red for the event not occurring. If the event occurs, you live in the blue subset of the panoply -- keep only the blue lengths. If the event doesn't occur, keep only the red lengths. Your relative weighting of possible worlds is the relative length of the surviving line segments.
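The keep-only-the-blue-lengths step can be sketched in code. This is an illustrative sketch, not anything from the source: each segment's length is multiplied by the fraction it wagered on the event that actually occurred, and the surviving lengths are renormalized.

```python
def update(weights, likelihoods):
    """Bayesian update as surviving segment lengths.

    `weights` maps each possible world to its current length;
    `likelihoods` maps each world to the fraction of its length it
    wagered on the event that actually occurred (its "blue part").
    Keep only the blue lengths, then renormalize.
    """
    surviving = {w: weights[w] * likelihoods[w] for w in weights}
    total = sum(surviving.values())
    return {w: length / total for w, length in surviving.items()}

# The 60%/40% example: world A wagers 60% of its length on the event,
# a rival world B wagers only 20%. The event occurs.
posterior = update({"A": 0.5, "B": 0.5}, {"A": 0.6, "B": 0.2})
# posterior: A ends up near 0.75, B near 0.25 (up to float rounding)
```

Note that the renormalization never changes the *ratio* of surviving lengths; it only rescales them so they read as probabilities again.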

Another way of visualizing weight, which is a little harder for me, is as first-personal vividness of a possible world in your overlay. Flit back and forth between the possible worlds you might inhabit. The prior probability of each is its brightness or vividness. See what each of them wagers will occur next. Discard the subset inconsistent with your observations. The relative brightness of each remaining first-person viewpoint in your overlay is that viewpoint's credence in your newly updated prior.

The possible worlds that bet relatively heavily on the observations you end up making will be the worlds that end up weighing the most in your new prior.
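This is also why Asmodia's "a lot of 'twice as likelies' multiplied together can add up very fast" is right. A hedged sketch, with a hypothetical prior and a hypothetical count of observations (neither is from the source), shows how quickly repeated 2:1 likelihood ratios overwhelm even a strongly skeptical prior:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each observation's likelihood ratio."""
    odds = prior_odds
    for r in likelihood_ratios:
        odds *= r
    return odds

# Hypothetical numbers: prior odds of 1:100 against the Conspiracy world,
# then ten independent observations, each twice as likely if the
# Conspiracy is real.
odds = posterior_odds(1 / 100, [2.0] * 10)
prob = odds / (1 + odds)
# odds come out near 10.24 : 1 in favor; probability near 0.91
```

Ten doublings are a factor of 1024, so a 1-in-100 hypothesis ends up favored roughly 10-to-1. Hours, not days.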

