LESSWRONG

Priors · Personal Blog
Priors Are Useless

by DragonGod
21st Jun 2017
1 min read
22 comments, sorted by top scoring
Luke_A_Somers · 8y · 15

This is totally backwards. I would phrase it, "Priors get out of the way once you have enough data." That's a good thing, that makes them useful, not useless. Its purpose is right there in the name - it's your starting point. The evidence takes you on a journey, and you asymptotically approach your goal.

If priors were capable of skewing the conclusion after an unlimited amount of evidence, that would make them permanent, not simply a starting-point. That would be writing the bottom line first. That would be broken reasoning.
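To illustrate this washing-out numerically, here is a minimal sketch assuming a toy model (mine, not the commenter's) in which a coin is either biased toward heads (p = 0.7) or fair (p = 0.5); all numbers are hypothetical:

```python
def posterior(prior, n_heads, n_tails, p_biased=0.7, p_fair=0.5):
    """Posterior probability that the coin is biased, given the flip counts."""
    like_biased = p_biased ** n_heads * (1 - p_biased) ** n_tails
    like_fair = p_fair ** n_heads * (1 - p_fair) ** n_tails
    return prior * like_biased / (prior * like_biased + (1 - prior) * like_fair)

# Two observers with wildly different priors see the same 1000 flips (700 heads):
skeptic = posterior(0.01, 700, 300)
believer = posterior(0.99, 700, 300)
# Both posteriors end up essentially at 1: the priors have gotten out of the way.
```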

TheAncientGeek · 8y · 2

"A ladder you throw away once you have climbed up it".

Luke_A_Somers · 8y · 2

Where's that from?

TheAncientGeek · 8y · 0

https://en.wikipedia.org/wiki/Wittgenstein%27s_ladder

ImmortalRationalist · 8y · 0

But what exactly constitutes "enough data"? With any finite amount of data, couldn't it be cancelled out if your prior probability is small enough?

Luke_A_Somers · 8y · 0

Yes, but that's not the way the problem goes. You don't fix your prior in response to the evidence in order to force the conclusion (if you're doing it anything like right). So different people with different priors will have different amounts of evidence required: 1 bit of evidence for every bit of prior odds against, to bring it up to even odds, and then a few more to reach it as a (tentative, as always) conclusion.
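The bits-of-evidence arithmetic in this comment can be sketched directly (a toy helper; the function name and example numbers are mine):

```python
from math import log2

def bits_to_even_odds(prior_prob):
    """Bits of evidence needed to bring a hypothesis from prior_prob up to even odds."""
    odds_against = (1 - prior_prob) / prior_prob
    return log2(odds_against)

# A prior of 1/1025 means odds of 1:1024 against, i.e. 10 bits of prior odds,
# so 10 bits of evidence are needed just to reach even odds.
bits_needed = bits_to_even_odds(1 / 1025)
```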

WalterL · 8y · 12

I definitely agree that after we become omniscient it won't matter where we started...but going from there to priors 'are useless' seems like a stretch. Like, shoes will be useless once my feet are replaced with hover engines, but I still own them now.

DragonGod · 8y · 0

But this isn't all there is to it.
@Alex: also, take a set of rationalists with different priors, and let the set of their current probabilities be S. Let the standard deviation of S after i trials be d_i.

d_{i+1} ≤ d_i for all i ∈ ℕ: the more experiments are conducted, the more tightly the rationalists' probabilities cluster together.
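A toy simulation of this shrinking spread (my own sketch, with made-up parameters; the inequality need not hold on every single trial, but the spread collapses in the long run):

```python
import random

def update(prior, heads, p1=0.6, p2=0.4):
    """One Bayesian update on a single flip, for H1: p = 0.6 vs H2: p = 0.4."""
    l1 = p1 if heads else 1 - p1
    l2 = p2 if heads else 1 - p2
    return prior * l1 / (prior * l1 + (1 - prior) * l2)

random.seed(0)
posteriors = [0.1, 0.3, 0.5, 0.7, 0.9]  # a "set S" of rationalists' priors
initial_spread = max(posteriors) - min(posteriors)
for _ in range(2000):
    heads = random.random() < 0.6  # the true coin is H1
    posteriors = [update(p, heads) for p in posteriors]
final_spread = max(posteriors) - min(posteriors)
# final_spread is far smaller than initial_spread: the rationalists now agree.
```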

9eB1 · 8y · 10

Now analyze this in a decision theoretic context where you want to use these probabilities to maximize utility and where gathering information has a utility cost.

Lumifer · 8y · 9

You keep using that word, "useless". I do not think it means what you think it means.

CronoDAS · 8y · 9

It can take an awfully long time for N to get big enough.

DragonGod · 8y · 0

True. I don't disagree with that.

Jayson_Virissimo · 8y · 2

So, in the meantime, priors are useful?

Brendan Long · 8y · 8

I think you lost me at the point where you assume it's trivial to gather an infinite amount of evidence for every hypothesis.

entirelyuseless · 8y · 3

This is sometimes false, when there are competing hypotheses. For example, Jaynes talks about the situation where you assign an extremely low probability to some paranormal phenomenon, and a higher probability to the hypothesis that there are people who would fake it. More experiments apparently verifying the existence of the phenomenon just make you more convinced of deception, even in the situation where the phenomenon is real.

Additionally, you should have spoken of converging on the truth, rather than the "true probability," because there is no such thing.
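Jaynes's point can be sketched numerically; the three hypotheses and all priors and likelihoods below are made-up illustration values, not his:

```python
def posteriors(n_successes):
    """Posterior over three hypotheses after n apparently successful demonstrations."""
    priors = {"real": 1e-6, "fraud": 1e-3, "chance": 1 - 1e-6 - 1e-3}
    # Probability each hypothesis assigns to one successful demonstration:
    likes = {"real": 0.9, "fraud": 0.9, "chance": 0.05}
    unnorm = {h: priors[h] * likes[h] ** n_successes for h in priors}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

after_20 = posteriors(20)
# More "verifications" make deception, not the phenomenon, nearly certain,
# because fraud predicts the same data but starts with a 1000x larger prior.
```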

arisen · 8y · 2

A stochastic process satisfies the Markov property only if it is memoryless. Most phenomena are not memoryless, which means that observers will accumulate information about them over time.

MrMind · 8y · 2

> the posterior probability [;Pr_{i_{z1_j}};] gets closer and closer to the true probability of the hypothesis [;Pr_i;]

There's no true probability. Either a model is true or not.

Hafurelus · 8y · 0 (edited)
.
arisen · 8y · 0

I'm using Opera Mini (sometimes the beta) on Android, and I pasted a whole Google search link. Maybe their beta servers are in Russia? Or it's something about Opera Mini's data optimization? I have nothing to hide; it's the ethics of science.

MrMind · 8y · 0

This is trivially false, if the prior probability is 1 or 0.

It might be true but irrelevant, if the number of needed experiments is impractical or no repeated independent experiment can be performed.

It is also false if applied to two agents: if they do not have the same prior and the same model, their posteriors might converge, diverge, or stay the same. Aumann's agreement theorem works only in the case of common priors, so it cannot be extended this way.
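The first point is immediate from Bayes' theorem; a two-line check (toy likelihood numbers, mine):

```python
def update(prior, like_h, like_not_h):
    """One Bayesian update: P(H | E) from P(H) and the two likelihoods."""
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Extreme priors are fixed points: no evidence, however strong, can move them.
dogmatic_no = update(0.0, like_h=0.99, like_not_h=0.01)
dogmatic_yes = update(1.0, like_h=0.01, like_not_h=0.99)
```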

Akhenator · 8y · 0

I might be a bit blind, but what are [;Pr_{i_{z1}};] and [;Pr_{i_{z2}};]? Because here it looks like [;Pr_{i_{z1}} = Pr_{i_{z2}};]. And what do the priors do? What are your hypotheses?

I am sorry if I didn't get it (and I'm maybe looking like a fool right now).

[This comment is no longer endorsed by its author]
DragonGod · 8y · 0

The priors are the probabilities you assign to hypotheses before you receive any evidence for or against those hypotheses.
[;Pr_{i_{z1}};] and [;Pr_{i_{z2}};] are the posterior probabilities corresponding to the priors [;Pr_{i_1};] and [;Pr_{i_2};] respectively.


NOTE.

This post contains LaTeX. Please install Tex the World for Chromium or a similar TeX typesetting extension to view this post properly.
 

Priors are Useless.

Priors are irrelevant. Take two different prior probabilities [;Pr_{i_1};] and [;Pr_{i_2};] for some hypothesis [;H_i;].
Let their respective posterior probabilities be [;Pr_{i_{z1}};] and [;Pr_{i_{z2}};].
After a sufficient number of experiments, [;Pr_{i_{z1}} \approx Pr_{i_{z2}};].
More formally:
[;\lim_{n \to \infty} \frac{ Pr_{i_{z1}}}{Pr_{i_{z2}}} = 1 ;].
Where [;n;] is the number of experiments.
Therefore, priors are useless.
The above is true because, as we carry out subsequent experiments, the posterior probability [;Pr_{i_{z1_j}};] gets closer and closer to the true probability of the hypothesis, [;Pr_i;]. The same holds for [;Pr_{i_{z2_j}};]. As such, given a sufficient number of experiments, the initial prior you assigned to the hypothesis is irrelevant.
 
To demonstrate.
http://i.prntscr.com/hj56iDxlQSW2x9Jpt4Sxhg.png
This is the graph of the above table:
http://i.prntscr.com/pcXHKqDAS_C2aInqzqblnA.png
 
In the example above, the true probability [;Pr_i;] of hypothesis [;H_i;] is [;0.5;], and, as we see, after a sufficient number of trials the different [;Pr_{i_{z1_j}};]s get closer to [;0.5;].
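The convergence shown in the table and graph can be reproduced with a short simulation (a sketch with made-up parameters, not the original spreadsheet):

```python
import random

def posterior_after(prior, flips, p_h=0.5, p_alt=0.7):
    """Posterior of H_i ("the coin has p = 0.5") against a p = 0.7 alternative."""
    post = prior
    for heads in flips:
        l_h = p_h if heads else 1 - p_h
        l_alt = p_alt if heads else 1 - p_alt
        post = post * l_h / (post * l_h + (1 - post) * l_alt)
    return post

random.seed(1)
flips = [random.random() < 0.5 for _ in range(2000)]  # H_i is in fact true
ratio = posterior_after(0.9, flips) / posterior_after(0.1, flips)
# ratio approaches 1: two very different priors end in the same place.
```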
 
To generalize from my above argument:

If you have enough information, your initial beliefs are irrelevant—you will arrive at the same final beliefs.
 
Because I can’t resist, a corollary to Aumann’s agreement theorem.
Given sufficient information, two rationalists will always arrive at the same final beliefs irrespective of their initial beliefs.

The above can be generalized to what I call the “Universal Agreement Theorem”:

Given sufficient evidence, all rationalists will arrive at the same set of beliefs regarding a phenomenon irrespective of their initial set of beliefs regarding said phenomenon.

 

Exercise For the Reader

Prove [;\lim_{n \to \infty} \frac{ Pr_{i_{z1}}}{Pr_{i_{z2}}} = 1 ;].
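One possible sketch, under assumptions the post leaves implicit (both priors strictly between 0 and 1, i.i.d. informative experiments, [;H_i;] true):

```latex
% By Bayes' theorem, after n experiments with data D_n and prior \pi \in (0,1),
% posterior odds = prior odds \times likelihood ratio:
\frac{Pr_{i_z}}{1 - Pr_{i_z}} = \frac{\pi}{1-\pi}\,\Lambda_n,
\qquad
\Lambda_n = \frac{P(D_n \mid H_i)}{P(D_n \mid \neg H_i)}.
% For i.i.d. informative experiments with H_i true, the law of large numbers
% gives \log \Lambda_n \to \infty almost surely, so both posteriors tend to 1
% regardless of the (nonzero) priors:
\lim_{n\to\infty} Pr_{i_{z1}} = \lim_{n\to\infty} Pr_{i_{z2}} = 1
\quad\Longrightarrow\quad
\lim_{n \to \infty} \frac{Pr_{i_{z1}}}{Pr_{i_{z2}}} = 1.
```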
