Thanks for your scrutiny :) (and sorry for the long-winded response...)

Let me try to clarify the bottom line of the post:

This post clarifies some subtle points about the ways in which confidence intervals are useful. As far as I understand the mathematical definition of a confidence interval, it does not, without further axioms, provide many guarantees. As a side note, the NIH claim seems to be just wrong (and is *not* what I take to be the standard definition the rest of the article is about), and there isn't any method of attaching confi...


Gotcha - thanks for clarifying and providing the example - it helps!
Everything I know is from the Bayesian way of doing things, so I'm going to talk
about uncertainty intervals, which I think are mostly the same as confidence
intervals; the main difference, as far as I can tell, is philosophy. (People
also call uncertainty intervals "credible intervals" or "credibility
intervals".)
With regard to evaluating the dependability of a given interval, I think it's
important to think about the underlying distribution the interval is being drawn
over. I've drawn 3 examples in this image [https://ibb.co/4Yy6VQs]: I think
you're worried about situations like the third case (#C). In #C, when q doesn't
fall in the interval, it probably is far from the interval, because the rest of
the probability is concentrated in the left & right bounds of the range.
I'm gonna come out strong and say that this can never happen in the
tomato-sandwich case, when you use the correct calculations to build the
interval. The correct calculations are:
1. Specify a Beta distribution, B(1, 1) as your prior. (The 1's can be other
numbers; doesn't change my broader argument).
2. Because the tomato-sandwich question is isomorphic to a coin flip, the data
distribution is most naturally modeled as a Bernoulli. So treat your data as
being drawn from a Bernoulli distribution.
3. Then the posterior distribution is Beta(1 + # tomato, 1 + # sandwich).
[Since the Beta and Bernoulli are conjugate, this is always the form of the
posterior].
4. Use either the equal-tails or highest-probability-density method to
construct the interval.
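As a sketch of steps 1-4 (with made-up counts, say 7 "tomato" and 3 "sandwich" answers, and SciPy's `beta` standing in for the math), the equal-tails interval could be computed like this:

```python
# Sketch of the Beta-Bernoulli recipe above. The counts (7 tomato,
# 3 sandwich) are hypothetical, purely for illustration.
from scipy.stats import beta

prior_a, prior_b = 1, 1          # step 1: Beta(1, 1) prior
n_tomato, n_sandwich = 7, 3      # step 2: Bernoulli data, as counts

# Step 3: conjugacy means the posterior is Beta(1 + #tomato, 1 + #sandwich)
post = beta(prior_a + n_tomato, prior_b + n_sandwich)

# Step 4: equal-tails 95% uncertainty interval (2.5% in each tail)
lo, hi = post.ppf(0.025), post.ppf(0.975)
print(f"95% interval: ({lo:.3f}, {hi:.3f})")
```

Because the posterior here is a single-humped Beta, the interval is one contiguous chunk of probability, which is the point being made about case #C below.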
Since the posterior distribution is a Beta, and a Beta with a few data points
always has exactly one hump, C won't happen.[1] So if you know a calculation
was done correctly, and that it is modeling a Bernoulli[2] situation, you're
safe - the risks of C won't be there. (You can play with different Beta
distributions

Nice list! :)

A little side note: I think

The risk of a reporting error by the CDC

might also count as a factor that could lead to "this question being a NO".


Variance only increases the chance of Yes here. If cases spike and we're averaging
over 100k, reporting errors won't matter. If we're averaging 75k, a state
dumping extra cases could plausibly push it over 100k.

Nice discovery! I will look into it.

In my naive understanding, I imagine that each strain only infects a small fraction of all cells, so that two strains should rarely infect the same cell. On the other hand, the abstract explicitly mentions competition between strains, suggesting that there must be a connection to multiple infection of cells.

So could I summarize this as follows? The MPG asserts in the linked article that the rapid evolution might arise from pre-existing immunity in a population because of some "increasing [...] selection pressure". On the other hand, you argue that the new variants did not just change superficially to evade being recognized but seem to have adapted to the human host, and this is not what one would expect if the main driving force were immune evasion.

Thanks for your response -- if you have any thoughts on this proposed summary, I'd be very interested.

Regarding logic and methods of knowing, I agree that logic might not be the only useful way of producing knowledge, but why shouldn't you have it in your toolbox? I'm just trying to argue that there's no reason for anyone to neglect logical arguments if they yield new knowledge.

I agree that "prior" is a vastly better word choice than "axiom" because it allows us to refine the prior later.

The "planetary consciousness" thing also appears to me to be a misunderstanding: I don't want to propose that all information about the world should be retrieved and processed, in the same way that even in my direct environment, what my neighbour does in his house is none of my business.

How do you differentiate between "Truth" and "truth"? I would really appreciate some clarification regarding these two words because it would help me to understand your comment better. Thanks :)

I'm very grateful that you bring up these points. Sorry for the long response, but I like your comment and would like to write down some thoughts on each part of it.

One doesn't need to assume an objective reality if one wants to be agentic. One can believe that 1) stuff you do influences your prosperity, and 2) it is possible to select for more prosperous influences.

First of all, I think choosing the term "objective" in my post was too strong and not quite well-defined. (My post also seems at risk of circular reasoning because it somehow tries to argue for rati...


With effectiveness, my doubt is that you miss kinds of knowledge in your
definition, and that logic might be less than effective in the grander scheme of
things. For example, the knowledge of how to ride a bike is hard to get into the
scope of logic; in that respect logic is incomplete, i.e. it leaves a bit of
knowledge out. There is the issue of Mary's room and whether color experience
counts as knowledge: we can grant her all the math textbooks and science books,
but we can still doubt whether we have caught all knowledge. Even in the context
of "effective method", Turing suspected that mathematicians use a kind of
"insight" - that coming up with a proof is a different kind of process than
following a proof. A universal Turing machine captures "effective method", which
encompasses all of the formal mathematics a person could write down. But still,
doubt lingers whether that covers all the interesting kinds of processes.
One could also be worried about a method of knowing that encapsulates logic.
Divine revelation could be posited to give vast amounts of knowledge, maybe
enough that further knowledge-production work ceases to be viable. There is
also the "trivial theory of arithmetic", where we just assume all arithmetic
truths as axioms. In such a system there are no theorems; there is only a check
of whether or not a thing is an axiom. Such a system could be all-encompassing
and avoid the use of logical inference.
A starting point is a bit undefined; the axiomatic approach is way more defined.
Sure, we don't have a super-certain "boot system" for how we get going. But it
doesn't feature the characteristics of an axiomatic system. In the axiomatic
style you can go "Assume X. Is it the case that X?" and you can definitely say
"yes, X is the case". If you tried to shoehorn the sensory reliance into
axiomatic terms it would go something like "Assume X. Now it turns out that X
isn't the case", which is nonsense in proof terms. Sure, there is appeal to
absurdity: "Entertain subthought: [Assume X. X lea

I think Zvi calls this a hostile epistemic environment, since there are actors that try really hard to produce convincing propaganda. Maybe a helpful heuristic is this: are there checks and balances for the media? As far as I know, this is hardly the case in Russia right now, since independent media outlets have been shut down and you can be jailed for expressing your sincere opinion. This is a very bad sign. (If there were some kind of freedom of speech, more people would be scrutinizing important claims, so that not hearing these critics would be evidence...