I think Zvi calls this a hostile epistemic environment, since there are actors that try very hard to produce convincing propaganda. Maybe a helpful heuristic is this: are there checks and balances for the media? As far as I know, this is hardly the case in Russia right now, since independent media outlets have been shut down and you can be jailed for expressing your sincere opinion. This is a very bad sign. (If there were some form of freedom of speech, more people would be scrutinizing important claims, so that not hearing such critics would be evidence for the truthfulness of these claims, I guess.) Unfortunately, the EU has also started blocking Russian state media outlets, thereby complicating the situation; still, you don't have to worry about being jailed for expressing a contrarian opinion.

Besides these quick thoughts, I want to propose a framing of the problem. Assume there's a coin in the world and everybody has high stakes in whether it is fair or biased. Now, different news outlets report what they found when they flipped the coin themselves. So some report that they got "1000x tails" and others state that their experiments suggest the coin is fair. Maybe both are, technically speaking, correct in their statements but ignored some coin flips that did not fit their narrative. [Disclaimer: This doesn't capture everything about real-world news, but it gives a feeling for the more complex topics where you build your opinion from lots of tiny pieces of evidence.]

The bottom line is that in a no-trust environment (which exists when people with disjoint trusted sources try to communicate), it's not possible to settle whether the coin is fair.
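To make the selective-reporting framing concrete, here is a minimal simulation. The outlet behavior and the flip counts are my own illustrative assumptions: one outlet silently discards heads until it can truthfully announce "1000x tails", while the other reports every flip it made. Both statements are literally true, yet they suggest opposite conclusions about the same fair coin.

```python
import random

random.seed(0)

def flip_fair_coin(n):
    """n flips of a genuinely fair coin; True means tails."""
    return [random.random() < 0.5 for _ in range(n)]

# Outlet A keeps flipping and silently drops every heads result
# until it can truthfully report "we observed 1000x tails".
tails_seen = 0
total_flips_a = 0
while tails_seen < 1000:
    batch = flip_fair_coin(100)
    tails_seen += sum(batch)
    total_flips_a += len(batch)

# Outlet B reports all of its flips, so its tails fraction lands near 0.5.
flips_b = flip_fair_coin(2000)
tails_fraction_b = sum(flips_b) / len(flips_b)

print(f"Outlet A: '1000x tails!' (out of {total_flips_a} flips it never mentions)")
print(f"Outlet B: tails fraction {tails_fraction_b:.2f} -> 'the coin looks fair'")
```

Neither outlet prints a false number; the disagreement comes entirely from which flips each one chose to report.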

A solution that I find especially exciting, at least theoretically, is adversarial collaboration. You team up with a person of the opposite opinion and devise some kind of experiment (or active observation) that helps settle the disagreement. In the above framing: flip the coin several times in the presence of the other person and follow a previously agreed protocol for determining which side is supported by the evidence.

In practice, this is hard. Most of us cannot just go to Ukraine (if we're not already there) to observe what really happens. But what if we think bigger? Imagine thousands of people with diverse opinions on the topic joining forces. They would have a lot more resources for active observations to reconcile their differing opinions. For example, as a large group they have better chances to interview important people. If they are honest players, they might agree on a small group of people to travel and make observations together. It is also easier for a large group to gather answers to unsettled questions and make them prominent. The precondition is the honest will to engage with the other side and truthfully settle the disputes.

Unfortunately, this is just a theoretical idea I haven't been able to test in practice yet -- and it seems hard to imagine founding such an organization in a state where one can be punished for critical inquiry.

Thanks for your scrutiny :) (and sorry for the long-winded response...)
Let me try to clarify the bottom line of the post:

This post clarifies some subtle points about the ways in which confidence intervals are useful. As a confidence interval is defined mathematically (as far as I understand it), without any further assumptions, it does not give many guarantees. As a side note, the NIH claim seems to be simply wrong (and is not what I take to be the standard definition the rest of the article is about); there isn't any method of attaching confidence intervals that can live up to their claim.

It's not that we shouldn't use confidence intervals in any form. But when practical consequences are drawn conditional on a confidence interval, one has to be aware that there will be some error. In many situations, confidence intervals might be sufficiently "nice" that these errors are negligible and the conclusions still point in the right direction. But there will be some error, at least in how strong the evidence is taken to be. (The exception is if you don't just use the definition of a confidence interval but take the narrowness of the interval as an intuitive indicator of the strength of evidence, if that's possible with your given method of attaching confidence intervals; but then you aren't really using the fact that it's a confidence interval.)

Here's an example of a maliciously constructed confidence interval for the scenario in the post. If more than, say, 90 or fewer than 10 people from the sample prefer sandwiches, output  as the confidence interval. If exactly 50 people prefer sandwiches, output . Otherwise, output the interval centered at the sample mean, with its width adjusted to account for the standard deviation. Note that it's rare for exactly 50 people to prefer sandwiches (a bound independent of q is 8%), so this trick doesn't worsen the confidence level of the interval too much. But if one plans to act only upon clear-cut intervals such as , one will almost always lose when these intervals occur (50:50 will be obtained most of the time when q is near 0.5).
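The exact intervals did not survive in this write-up, so the following simulation fills the gaps with placeholder values that are purely my assumptions: (0, 1) for the extreme counts, a deliberately wrong "clear-cut" interval (0.90, 0.95) for the exactly-50 case, a sample size of 100, and a standard 1.96-sigma normal-approximation interval otherwise. With a fair population (q = 0.5), the rule keeps decent overall coverage, yet every time the clear-cut interval appears it fails to contain the true q.

```python
import math
import random

random.seed(1)

N = 100          # sample size, assumed
TRUE_Q = 0.5     # the population is actually split 50:50

def malicious_interval(k):
    """Hypothetical malicious CI rule; the specific intervals are assumptions."""
    if k > 90 or k < 10:
        return (0.0, 1.0)            # uninformative, but always "covers"
    if k == 50:
        return (0.90, 0.95)          # clear-cut and confidently wrong
    p = k / N                        # ordinary normal-approximation interval
    half = 1.96 * math.sqrt(p * (1 - p) / N)
    return (p - half, p + half)

trials = 5_000
covered = clear_cut = clear_cut_wrong = 0
for _ in range(trials):
    k = sum(random.random() < TRUE_Q for _ in range(N))
    lo, hi = malicious_interval(k)
    covered += lo <= TRUE_Q <= hi
    if (lo, hi) == (0.90, 0.95):
        clear_cut += 1
        clear_cut_wrong += not (lo <= TRUE_Q <= hi)

print(f"overall coverage:           {covered / trials:.1%}")
print(f"clear-cut intervals issued: {clear_cut / trials:.1%}")
print(f"clear-cut intervals wrong:  {clear_cut_wrong == clear_cut}")
```

The clear-cut interval shows up in roughly 8% of samples (matching the bound above), so overall coverage only drops by about that much -- yet anyone who acts precisely on those confident-looking intervals is misled every single time.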

Will something similarly bad, but less drastic, happen in reality when the confidence interval method is not constructed maliciously? When it's only about rough estimates, probably not -- but I don't know yet.

I should probably give the article a question as its title. The current title seems a bit too harsh and overshadows my conclusion: confidence intervals seem handy, but I don't understand when they are safe to use in practice. In view of the frequent use of confidence intervals in science (and their relevance for calibrated predictions), I'd like to understand how much I can infer from them in which situations. Do you know any good heuristics for this?

Nice list! :)

A little side note: I think

The risk of a reporting error by the CDC

might also count as a factor that could lead to "this question being a NO".

Nice discovery! I will look into it.
In my naive understanding, I imagine that each strain only infects a small fraction of all cells, so two strains should rarely infect the same cell. On the other hand, the abstract explicitly mentions competition between strains, suggesting that there must be a connection to multiple infections of the same cell.

So could I summarize this as follows? The MPG asserts in the linked article that the rapid evolution might arise from pre-existing immunity in a population because of some "increasing [...] selection pressure". You, on the other hand, argue that the new variants did not just change superficially to evade recognition but seem to have adapted to the human host, which is not what one would expect if the main driving force were immune evasion.

Thanks for your response -- if you have any thoughts on this proposed summary, I'd be very interested.

Regarding logic and methods of knowing, I agree that logic might not be the only useful way of producing knowledge, but why shouldn't you have it in your toolbox? I'm just trying to argue that there's no reason for anyone to neglect logical arguments when they yield new knowledge.

I agree that "prior" is a vastly better word choice than "axiom" because it allows us to refine the prior later.

The "planetary consciousness" thing also appears to me to be a misunderstanding: I don't want to propose that every piece of information about the world should be retrieved and processed, in the same way that, even in my direct environment, what my neighbour does in his house is none of my business.

How do you differentiate between "Truth" and "truth"? I would really appreciate some clarification regarding these two words because it would help me to understand your comment better. Thanks :)

I'm very grateful that you bring up these points. Sorry for the long response, but I like your comment and would like to write down some thoughts on each part of it.

One doesn't need to assume an objective reality if one wants to be agentic. One can believe that 1) stuff you do influences your prosperity, and 2) it is possible to select for more prosperous influences.

First of all, I think choosing the term "objective" in my post was too strong, and not quite well-defined. (My post also seems at risk of circular reasoning because it somehow tries to argue for rationality using rationality.)
I really should have thought more about this paragraph. You proposed an alternative to the assumption of an objective reality. While this still requires assuming that there are some "real" rules determining which of one's actions cause which effects on one's sensations, now or in the future -- and thus some form of reality -- this reality could indeed be purely subjective, in the sense that other "sentient beings" (if there are any) might not experience the same "reality", the same rules.

The use of the concept of "effective" is a bit wonky there, and the word seems to carry a lot of the meaning. What I know of as an "effective method" is a measure of what a computer or mathematician is able to unambiguously specify. I find it hard to imagine fairly judging a method to be ineffective.

What I mean by effectiveness is a "measure of completeness": if some method for obtaining knowledge does not obtain any knowledge at all, it is not effective at all; if it were able to derive all true statements about the world, it would be very effective. Logic is a tool that consists of putting existing statements together and yields new statements that are guaranteed to be true, given that the hypotheses are correct. So I'd argue that not having logic in one's toolbox is never an advantage with respect to effectiveness.

Just because you need to have a starting point doesn't mean that your approach needs to be axiomatic.

This is not clear to me. What do you think is the difference between an axiom and a starting point in epistemology?

It is unclear why planetary consciousness would be desirable. If you admit that you can't know to a great degree what happens on the other side of the planet, you don't have to rely on unreliable data mediums. Typically your life happens here and not there. And even if "there" is relevant to your life, it usually has an intermediary through which it affects stuff "here".

This is also a very good point, and I'll try to clarify. Consider an asteroid that is going to collide with Earth. At some point in the future we will know about the existence of the asteroid, even if only for a short time, depending on how deadly it is. But it can be hard to know the asteroid's position (or even its existence) in advance, although this would be much more useful.

So, in a nutshell, I'm also interested in parts of reality that do not yet strongly interact with my environment but might interact with it in the future. (Another reason might be ethical: we should know when, somewhere in the world, someone commits genocide, so that we can use our influence to do something about it.)

So maybe the problem is a lag in feedback, or hidden complexity in the causal chain between one's own actions and the feedback -- complexity that requires a deep understanding of something one cannot observe directly.