All of Cecil's Comments + Replies

You're Entitled to Arguments, But Not (That Particular) Proof

You claim there are significant issues with the climate science process, but admit there are no journal articles criticizing the process. If you know enough to find faults with their science, why haven't you yourself written an article on the matter?

Do you think there is something inherent in the culture of climatology science that introduces these anti-Bayesian biases? Why is climate science subject to this when other sciences are not?

Are you saying the field is systemically politically driven from the top down?

6mattnewport13y
Have you followed the climategate email leak story at all? One of the more damning themes in the leaked emails is the discussion of ways to keep dissenting views out of the peer reviewed journals. One of the stronger arguments used against AGW skeptics was that there were not more papers supporting their claims in peer reviewed journals. Given the prevalence of this argument, clear evidence of efforts to keep 'dissenting' opinions out of the main peer reviewed journals is a big problem for the credibility of climate science. For example [http://www.climatechangefraud.com/climate-reports/5642-climategate-emails-provide-unwanted-scrutiny-of-climate-scientists] : And this [http://www.eastangliaemails.com/emails.php?eid=295&filename=1047388489.txt] comment is also rather damning:
6SilasBarta13y
For the same reason I haven't personally solved every injustice: a) time constraints, and b) others are currently raising awareness of this problem. Other sciences are affected by anti-Bayesian biases too, and the tendency grows in proportion to the difficulty of finding solid evidence that your theory is wrong. Which is why I claim e.g. sociology and literature are mostly a waste of time. Generally speaking, science is in some ways too strict and in some ways not strict enough. Eliezer_Yudkowsky has actually pointed out before the general failure [http://lesswrong.com/lw/qe/do_scientists_already_know_this_stuff/] to appropriately teach rationality in the classroom, so scientists in general aren't aware of this problem. Politics, of course, does play a part. When it's not just about "who's right" but about "who gets to control resources", the biases go into hyperdrive. People aren't just pointing out problems with your research, they're fighting for the other team! The goal then becomes proving them wrong, not stopping to check whether your theory is correct in the first place. ("Ask whether, not why. [http://lesswrong.com/lw/vk/back_up_and_ask_whether_not_why/]")
You Be the Jury: Survey on a Current Event

In response to the cartwheel part - here's a possible explanation. It's from a pretty clearly biased source, but it does sound reasonable. http://perugia-shock.blogspot.com/2009/03/amanda-knox-finally-admits.html

At the very least I doubt she was leaping around exuberantly and spontaneously.

3AnlamK13y
From the link you give: Thanks for this - one more mystery solved.
You Be the Jury: Survey on a Current Event

Is this an arithmetic mean or a geometric mean?

Which is the correct mean to use for averaging probabilities, anyway?

0TraderJoe10y
[comment deleted]
6Douglas_Knight13y
The arithmetic mean of the log odds [https://en.wikipedia.org/wiki/Logit] is pretty natural. It is 27%, but the median looks like 30% to me.
2jimmy13y
Neither really, but there's no easy way to do it. The mean has the problem that if a lot of people claim near ignorance (like I did), then that counts against Knox, when really, it doesn't mean anything. The problem with the geometric mean is that it is biased towards the low end of the spectrum, so it depends on if the statement is negated or not. (GM(.001,.999)<4%) The median is probably better than both, but it's still not the right way to do it. Ideally you'd try to count up how much evidence each person saw and add those, but it is no easy task to estimate how correlated the evidence is (though it's probably a worthwhile subject to put thought into, since thats how you determine how much an additional persons belief is worth [http://lesswrong.com/lw/es/how_to_use_philosophical_majoritarianism/]) Even with a large degree of overlap, this is probably one of the cases where sharing beliefs should make everyones beliefs more extreme. I'd sorta like to see what it'd look like on round two.
No Individual Particles

I believe the idea here is that because particle A and B are indistinguishable, the probability assigned to the case where particle A is "before" B can be equivalently assigned to the opposite case.

In the same manner that a train schedule with cities across the top and side needs only one entry per cityA / cityB pairing.
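The train-schedule analogy can be made concrete with a toy count (illustrative combinatorics only, not the physics itself): distinguishable labels give one entry per ordered pair, while indistinguishable ones collapse each A/B and B/A pairing into a single entry.

```python
from itertools import combinations, permutations

cities = ["A", "B", "C", "D"]

# If (A, B) and (B, A) were distinct, the schedule would need an
# entry for every ordered pair:
ordered = list(permutations(cities, 2))

# With one entry per cityA/cityB pairing, each unordered pair
# appears only once:
unordered = list(combinations(cities, 2))

print(len(ordered))    # 12
print(len(unordered))  # 6
```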

0Liron13y
I see