Kindly

When lacking evidence, the testing process is difficult, weird and lengthy - and in light of the 'saturation' mentioned in [5.1] - I claim that, in most cases, the cost-benefit analysis will result in the decision to ignore the claim.
And I think that this is inarguably the correct thing to do, unless you have some way of filtering out the false claims.
From the point of view of someone who has a true claim but doesn't have evidence for it and can't easily convince someone else, you're right that this approach is frustrating. But if I were to relax my standards, the odds are that I wouldn't start with your true claim, but...
Rational assessment can be misleading when dealing with experiential knowledge that is not yet scientifically proven, has no obvious external function but is, nevertheless, experientially accessible.
So, uh, is the typical claim that has an equal lack of scientific evidence true, or false? (Maybe if we condition on how difficult it is to prove.)
If true - then the rational assessment would be to believe such claims, and not wait for them to be scientifically proven.
If false - then the rational assessment would be to disbelieve such claims. But for most such claims, this is the right thing to do! It's true that person A has actually got hold of a true claim that there's no evidence for. But there's many more people making false claims with equal evidence; why should B believe A, and not believe those other people?
(More precisely, we'd want to do a cost-benefit analysis of believing/disbelieving a true claim vs. a comparably difficult-to-test false claim.)
I think that in the interests of being fair to the creators of the video, you should link to http://www.nottingham.ac.uk/~ppzap4/response.html, the explanation written by (at least one of) the creators of the video, which addresses some of the complaints.
In particular, let me quote the final paragraph:
There is an enduring debate about how far we should deviate from the rigorous academic approach in order to engage the wider public. From what I can tell, our video has engaged huge numbers of people, with and without mathematical backgrounds, and got them debating divergent sums in internet forums and in the office. That cannot be a bad thing and I'm sure the simplicity of the...
No, I think I meant what I said. I think that this song lyric can in fact only make a difference given a large pre-existing weight, and I think the distribution of being weirded out by Solstices is bimodal: there aren't people who are moderately weirded out, but not quite enough to leave.
It's extremely unlikely that anyone exists who isn't weirded out by Solstices in general, but for whom one song lyric is the straw that breaks the camel's back.
Not quite. I outlined the things that have to be going on for me to be making a decision.
In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.
If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's problem, and so on; if, despite all that, the box I choose has been determined since my birth, then none of these things (none of the things that make up me!) are a factor at all. Either my reasoning process is overridden in one specific case, or it is irreparably flawed to begin with.
Let's assume that every test has the same probability of returning the correct result, regardless of what it is (e.g., if + is correct, then Pr[A returns +] = 12/20, and if - is correct, then Pr[A returns +] = 8/20).
The key statistic for each test is the ratio Pr[X is positive|disease] : Pr[X is positive|healthy]. This ratio is 3:2 for test A, 4:1 for test B, and 5:3 for test C. If we assume independence, we can multiply these together, getting a ratio of 10:1.
If your prior is Pr[disease]=1/20, then Pr[disease] : Pr[healthy] = 1:19, so your posterior odds are 10:19. This means that Pr[disease|+++] = 10/29, just over 1/3.
You may have obtained 1/2 by a double confusion between odds and probabilities. If your prior had been Pr[disease]=1/21, then we'd have prior odds of 1:20 and posterior odds of 1:2 (which is a probability of 1/3, not of 1/2).
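The odds calculation above is easy to check mechanically. Here is a minimal sketch in Python, using exact fractions, with the likelihood ratios and prior taken from the example (the variable names are my own):

```python
from fractions import Fraction

# Likelihood ratio Pr[+ | disease] : Pr[+ | healthy] for each test,
# as given in the example: 3:2 for A, 4:1 for B, 5:3 for C.
likelihood_ratios = [Fraction(3, 2), Fraction(4, 1), Fraction(5, 3)]

# Prior Pr[disease] = 1/20, i.e. prior odds of 1:19.
posterior_odds = Fraction(1, 19)

# Assuming the tests are independent, multiply the ratios into the odds.
for lr in likelihood_ratios:
    posterior_odds *= lr

# Convert odds back to a probability: odds o means probability o / (1 + o).
posterior_prob = posterior_odds / (1 + posterior_odds)

print(posterior_odds)  # 10/19
print(posterior_prob)  # 10/29, just over 1/3
```

Using `Fraction` rather than floats keeps the odds exact, so the 10:19 and 10/29 figures fall out directly rather than as rounded decimals.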
If you're looking for high-risk activities that pay well, why are you limiting yourself to legal options?
Is that a bad thing?
Because lotteries cost more to play than the chance of winning is worth, someone who understands basic probability will not buy lottery tickets. That puts them at a disadvantage for winning the lottery. But it gives them an overall advantage in having more money, so I don't see it as a problem.
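The "costs more than the chance of winning is worth" claim is just an expected-value comparison. A toy sketch, with made-up numbers (the ticket price, odds, and jackpot here are illustrative, not from any real lottery):

```python
from fractions import Fraction

# Hypothetical lottery: a $2 ticket, a 1-in-10,000,000 chance
# of winning a $10,000,000 jackpot.
ticket_price = 2
p_win = Fraction(1, 10_000_000)
jackpot = 10_000_000

# Expected winnings per ticket: probability of winning times the payout.
expected_winnings = p_win * jackpot      # $1

# Net expected value of buying one ticket.
expected_value = expected_winnings - ticket_price

print(expected_value)  # -1: on average you lose $1 per ticket
```

Since the expected value is negative, declining to play is the money-maximizing choice even though it guarantees you never win.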
The situation you're describing is similar. If you dismiss beliefs that have no evidence from a reference class of mostly-false beliefs, you're at a disadvantage in knowing about unlikely-but-true facts that have yet to become mainstream. But you're also not paying the opportunity...