You discuss "compromised agents" of the FBI as if they're going to be lone, investigator-level agents. If there were going to be any FBI/CIA/whatever cover-up, the version I would expect is that Epstein had incriminating information on senior FBI/CIA personnel, or on politicians. The incriminating information could simply be that the FBI/CIA knew Epstein was raping underage girls for 20 years and didn't stop him, or even protected him. In all your explanations of how impossible Epstein's murder would have been to pull off, what makes it seem more plausible to me is an initial conspirator who isn't just someone waving money around, but someone with authority.
Go ahead and mock, but this is what I thought the default assumption was whenever someone said "Epstein didn't kill himself" or "John McAfee didn't kill himself". I never assumed it would just be one or two lone, corrupt agents.
Now that I've had 5 months to let this idea stew, when I read your comment again just now, I think I understand it completely? After getting comfortable using "demons" to refer to patterns of thought or behavior which proliferate in ways not completely unlike some patterns of matter, this comment now makes a lot more sense than it used to.
Thank you, it does help! I know some people who revel in conspiracy theories, and some who believe conspiracies are so unlikely that they dismiss any possibility of one out of hand. I'm left in the middle with the feeling that some situations "don't smell right", without a provable, quantifiable reason for why I feel that way.
without using "will"
Oh come on. Alright, but if your answer mentions future or past states, or references time at all, I'm dinging you points. Imaginary points, not karma points obviously.
So let's talk about this word, "could". Can you play Rationalist's Taboo against it?
Testing myself before I read further. World states which "could" happen are the set of world states which are not ruled impossible by our limited knowledge. Is "impossible" still too load-bearing here? Fine, let's get more concrete.
In a finite-size game of Conway's Life, each board state has exactly one following board state, which itself has exactly one following board state, and so on. This sequence of board states is the board's future. A board state does not, however, correspond to a single previous board state, but to a set of possible previous states. If we only know the current board, we do not know which board preceded it, but we do know the set that contains it. We call these the boards which "could have been" the previous board. From the complement of that set, we also know which boards "could not have been" the previous board.
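To make this concrete, here is a small Python sketch (my own illustration, not part of the original discussion) that brute-forces the "could have been" set for a tiny 3x3 Life board with wrap-around edges. The board size and wrapping rule are choices of convenience; the point is only that a single current state maps back to a whole set of possible predecessors.

```python
from itertools import product

SIZE = 3  # 3x3 board with wrap-around (toroidal) edges, so every cell has 8 neighbors

def step(board):
    """Advance a Life board one generation. `board` is a tuple of SIZE*SIZE 0/1 cells."""
    nxt = []
    for r in range(SIZE):
        for c in range(SIZE):
            # Count live neighbors, wrapping around the edges.
            n = sum(board[((r + dr) % SIZE) * SIZE + (c + dc) % SIZE]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            alive = board[r * SIZE + c]
            # Standard Life rule: birth on 3 neighbors, survival on 2 or 3.
            nxt.append(1 if (n == 3 or (alive and n == 2)) else 0)
    return tuple(nxt)

def could_have_been(current):
    """The set of boards whose successor is `current` -- its possible predecessors."""
    return [b for b in product((0, 1), repeat=SIZE * SIZE) if step(b) == current]

empty = (0,) * 9
preds = could_have_been(empty)
print(len(preds))  # → 302 of the 512 possible boards
```

Knowing only that the current board is empty narrows the previous board down to a 302-element "could have been" set; the remaining 210 boards are the ones that "could not have been" the previous board.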
Going back up to our universe, what "could happen" is a set of things which our heuristics tell us contains one or more things which will happen. What "can't happen" is a set of things which our heuristics tell us does not contain a thing that will happen.
A thing which "could have happened" is thus a thing which was in a set which our heuristics told us contained a thing that will (or did) happen.
If I say "No, that couldn't happen", I am saying that your heuristics are too permissive, i.e. your "could" set contains elements which my heuristics exclude.
I think that got the maybe-ness out, or at least replaced it with set logic. The other key point is that limited information prevents us from cutting the "could" set down to one unique element. I expect Eliezer to have something completely different.
So then this initial probability estimate, 0.5, is not repeat not a "prior".
1:1 odds seems like it would be a default null prior, especially because one round of Bayes' Rule updates it immediately to whatever your first likelihood ratio is, rather like the other mathematical identities. If your priors represent "all the information you already know", then it seems like you (or someone) must have gotten there through a series of Bayesian inferences, but that series would have to start somewhere, right? If (in the real universe, not the ball-and-urn universe) priors aren't determined by some chain of Bayesian inference, but instead by some mix of educated guesses, intuition, and dead reckoning, wouldn't that make the whole process subject to "garbage in, garbage out"?
For a use case: (A) low internal resolution rounded my posterior probability to 0 or 1, and now new evidence no longer updates my estimate; or (B) I think some garbage crawled into my priors, but I'm not sure where. In either case, I want to take my observations and rebuild my chain of inferences from the ground up, to figure out where I should be. So... where is the ground? If 1:1 odds is not the null prior, not the Bayesian identity, then what is?
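The "identity" intuition is easy to see in odds form. A minimal sketch, with made-up likelihood ratios (the `evidence` values are hypothetical, chosen only for illustration): starting from 1:1, the posterior is exactly the product of the evidence ratios, which is what makes 1:1 behave like a multiplicative identity. Exact fractions are used so nothing gets lost to rounding.

```python
from fractions import Fraction
from math import prod

def posterior_odds(prior, likelihood_ratios):
    """Odds-form Bayes: multiply prior odds by each likelihood ratio in turn."""
    return prod(likelihood_ratios, start=prior)

# Hypothetical chain of evidence, each a likelihood ratio for H vs not-H.
evidence = [Fraction(3, 1), Fraction(1, 2), Fraction(4, 1)]

odds = posterior_odds(Fraction(1, 1), evidence)
print(odds)  # 6, i.e. 6:1 odds, probability 6/7
```

This also shows the "garbage in, garbage out" worry directly: swap the 1:1 starting value for any other prior and every posterior downstream is scaled by that same factor, no matter how good the evidence chain is.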
Evil is a pattern of behavior exhibited by agents. In embedded agents, that pattern is absolutely represented by material. As for what that pattern is: evil agents harm others for their own gain. That seems to be the core of "evilness" in possibility space. Whenever I try to think of the most evil actions I can, they tend to correlate with harming others (especially one's equals, or one's inner circle, who would expect mutual cooperation) for one's own gain. Hamlet's uncle. Domestic abusers. Executives who ruin lives for profit. Politicians who hand out public money in exchange for bribes. Bullies who torment other children for fun. It's a learnable script, which says "I can gain at others' expense", whether that gain is power, control, money, or just pleasure.
If your philosopher thinks "evil" is immaterial, does he also think "epistemology" is immaterial?
(I apologize if this sounds argumentative, I've just heard "good and evil are social constructs" far too many times.)
Not really. "Collapse" is not the only failure case. Mass starvation is a clear failure state of a planned economy, but it doesn't necessarily burn through the nation's stock of proletarian laborers immediately. In the same way that a person with a terminal illness can take a long time to die, a nation with failing systems can take a long time to reach the point where it ceases functioning at all.
How do lies affect Bayesian Inference?
(Relative likelihood notation is easier, so we will use that)
I heard a thing. Well, more precisely, I heard a thing about another thing. Before I heard about it, I didn't know one way or the other at all; my prior was the Bayesian null prior of 1:1. Let's say the thing I heard is "Conspiracy thinking is bad for my epistemology". Let's pretend it was relevant at the time, and didn't just come up out of nowhere. What is the chance that someone would hold this opinion, given that they are not part of any conspiracy against me? Maybe 50%? If I heard it in a Rationality-influenced space, probably more like 80%. Now, what is the chance that someone would share this as their opinion, given that they are involved in a conspiracy against me? Somewhere between 95% and 100%, so let's say 99%. Our prior is 1:1, and our likelihood ratio is 80:99, so our posterior odds, of someone not being a conspirator vs. being a conspirator, are 80:99, or about 1:1.24. Therefore, my expected probability of someone not being a conspirator went from 50% down to about 45%. Huh.
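For what it's worth, the arithmetic itself checks out. A quick sketch with exact fractions, using the numbers above, so rounding can't hide an error:

```python
from fractions import Fraction

# The numbers from the scenario above.
p_given_innocent = Fraction(80, 100)     # P(statement | not a conspirator)
p_given_conspirator = Fraction(99, 100)  # P(statement | conspirator)

prior_odds = Fraction(1, 1)  # 1:1, innocent : conspirator
posterior_odds = prior_odds * (p_given_innocent / p_given_conspirator)  # 80:99
p_innocent = posterior_odds / (posterior_odds + 1)

print(posterior_odds)     # 80/99
print(float(p_innocent))  # ~0.447, i.e. about 45%
```

So the update from 50% to roughly 45% follows correctly from the stated likelihoods; whatever is wrong here is in the premises (the 80% and 99% estimates, and whether "would share this opinion" is really the right event to condition on), not the algebra.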
For the love of all that is good, please shoot holes in this and tell me I screwed up somewhere.
The Mind-Flayer Proposition
Epistemic status & effort: this has been bouncing around in my head for a while in one form or another. This is a first draft of putting it into words. This has probably been said better in other places by other people, but I don't recall having seen it.
Suppose a technologically advanced, "superintelligent" [assume we are like chimpanzees relative to them] alien race of octopus-headed monsters lands on Earth and makes an offer. For you, and for anyone else who accepts: they will let you live on their worlds, make sure that all of your physical needs are met, and guarantee that your descendants will spread across the cosmos in numbers beyond your imagining, and never go extinct, until the heat death of the universe, unless the aliens manage to avert that end as well. But, when you turn 50, they will kill you painlessly and then eat your brain.
If you reject this offer, then the aliens will leave Earth with everyone who did accept, and if they later find you trespassing on their worlds, they will shout at you and hit you with sticks until you go away.
Would you accept this offer, or reject it? Can you imagine a person who would accept it or who would reject it? If you'd reject, then what extra concessions would it take for you to accept the offer?
[Sidenote to those for whom it's relevant: If you accept, you would never have to pay taxes again.]