Right, it depends on the vegan diet. Grain and legume proteins are complementary: grains are deficient in lysine, legumes in the sulfur-containing amino acids, if I recall correctly. I think it's an easy failure mode of a vegan diet to be all legume protein, and the gluten-free trend has made this even worse, but that's a rant for another day. The point here is that when it comes to dietary protein, all that matters is the amino acid composition. Every protein, including collagen, is broken down in digestion into its component amino acids. And that's why collagen supplements are a scam. While I am broadly sympathetic to the message of this post, I think rationalists should do better.
I eat oysters but am otherwise vegan. The reason I didn't just go with standard veganism is something like the more general arguments in this post. I had my reasons for nitpicking the details of this post; rationalists should learn some science and thereby be less wrong than the rest of the cultic milieu. But I want to comment again to focus on the positive: this post was a great reminder that I'm not a real vegan and why, and I've been making more of an effort to get oysters since reading it.
Wait, you think people need to eat collagen? Collagen is just a kind of protein; it'll get broken down into its component amino acids in digestion. There can be issues with a vegan diet not providing complete protein (that is, being low in one or more essential amino acids), but there's nothing special about collagen specifically.
I'm surprised at how hard it is for me to think of counterexamples.
I thought surely whale populations would be one, given their slow generation times, but it looks like humpback whale populations have already recovered from whaling, and blue whales will get there before long.
Thinking again: in my baseball example, gravity is pulling the ball back into the domain of applicability of the constant acceleration model.
Maybe what's special about the exponential growth model is that it implies escape from its own domain of applicability, in a time that grows only slowly (logarithmically) with the threshold.
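Spelling that out (notation mine): if the model is $x(t) = x_0 e^{rt}$ and it stops applying above some threshold $T$, the escape time satisfies

$$x_0 e^{rt} = T \quad\Longrightarrow\quad t = \frac{1}{r}\ln\frac{T}{x_0},$$

so even raising the threshold a hundredfold only buys $\ln(100)/r$ more time inside the model's domain.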
I remember this by analogy to Curry's paradox.
Where the sentence from Curry's paradox says "If this statement is true, then $X$", $L$ says "if this statement is provable, then $X$"; that is, $L \leftrightarrow (\Box L \to X)$.
In Curry's paradox, if the sentence is true, that would indeed imply that $X$ is true. And with $L$, the situation is analogous, but with truth replaced by provability: if $L$ is provable, then $X$ is provable. That is, $\Box L \to \Box X$.
But, unlike in Curry's paradox, this is not what $L$ itself says! Replacing truth with provability has attenuated the sentence, destroyed its ability to cause paradox.
If only $\Box X \to X$, then we would have our paradox back... and that's Löb's theorem.
This is all about $L \to (\Box L \to X)$, just one direction of the biimplication, whereas the post proves not just that but also the converse, $(\Box L \to X) \to L$. It seems that only the forward direction is used in the proof at the end of the post, though.
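To fill in why "$L$ provable implies $X$ provable" holds, here is the standard derivation using the Hilbert-Bernays provability conditions (necessitation, distribution, and $\Box \phi \to \Box\Box \phi$); the numbering is mine:

$$\begin{aligned} &1.\ L \to (\Box L \to X) && \text{construction of } L\\ &2.\ \Box\big(L \to (\Box L \to X)\big) && \text{necessitation on 1}\\ &3.\ \Box L \to \Box(\Box L \to X) && \text{distribution on 2}\\ &4.\ \Box L \to (\Box\Box L \to \Box X) && \text{distribution on 3}\\ &5.\ \Box L \to \Box\Box L && \text{internal necessitation}\\ &6.\ \Box L \to \Box X && \text{combining 4 and 5} \end{aligned}$$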
You say "if we are to accurately model the world"...
If I am modelling the path of a baseball, and I write "F = mg", would you "correct" me that it's actually inverse square, that the Earth's gravitation cannot stay at this strength to arbitrary heights? If you did, I would remind you that we are talking about a baseball game, and not shooting it into orbit—or conclude that you had an agenda other than determining where the ball lands.
What if I'm sampling from a population, and you catch me multiplying probabilities together, as if my draws were independent, as if the population were infinite? Yes, there is an end to the population, but as long as it's far away, the dependence induced by sampling without replacement is negligible.
Well, that's the question: whether to include an effect in the model, or whether it's negligible. An effect like finite population size, diminishing gravity, or the "crowding" effects that turn an exponential growth model logistic.
And the question cannot be escaped just by noting that the effect is important eventually.
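To put a number on the sampling example (population size, sample size, and frequencies here are invented for illustration): the exact without-replacement probability is hypergeometric, the independence approximation is binomial, and for a large population they nearly coincide.

```python
# Compare sampling without replacement (hypergeometric, exact) against
# the independence approximation (binomial) for a large population.
from scipy.stats import binom, hypergeom

M = 1_000_000  # population size (illustrative)
K = 300_000    # "successes" in the population (30%)
N = 100        # sample size
k = 35         # successes observed in the sample

p_approx = binom(N, K / M).pmf(k)    # draws treated as independent
p_exact = hypergeom(M, K, N).pmf(k)  # exact, without replacement

# The two agree to a tiny relative error; the without-replacement
# correction only matters when N is an appreciable fraction of M.
print(p_approx, p_exact)
```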
Eliezer in 2008, in When (Not) To Use Probabilities, wrote:
To be specific, I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities. Numbers should come from numbers.
Yeah... well, I thought of the $Y$ because it sounds like we're getting the probabilities of $X$ from some experiment. So $Y$ is the result of the experiment, which in this case is a vector of frequencies. When I put it like that, it sounds like $Y$ is just a rhetorical device for saying that we have been given probabilities of $X$.
But I still seem to need $Y$ for my dictionary. I have $P(X \mid Y)$. What is it? It is some kind of updated probability of $X$, right? Like we went from one probability to the other by doing an experiment. If I didn't write $P(X \mid Y)$, I'd need something like $P_{\text{old}}(X)$ and $P_{\text{new}}(X)$.
Reading again, it seems like this is exactly Jeffrey conditionalization. So whether you include some extra variable just depends on what you think of Jeffrey conditionalization.
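For reference, the rule in its standard form (notation mine): given new probabilities $Q(Y_i)$ over a partition $\{Y_i\}$ of the possible experimental results, Jeffrey conditionalization says

$$P_{\text{new}}(X) = \sum_i P(X \mid Y_i)\, Q(Y_i),$$

which reduces to ordinary conditioning when some $Q(Y_i) = 1$.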
I feel like I'm missing something, though, about what this experiment is and means. For example, I'm not totally clear on whether we have one state $X$ and a collection of replicates of state $Y$, or a collection of replicates of $(X, Y)$ pairs.
Looking at the paper, I see the connection to Jeffrey conditionalization is made explicitly. And it mentions Pearl's "virtual evidence method"; is this what he calls introducing this $Y$? But there's no clarity on exactly what this experiment is. It just says:
"But how should the above be generalized to the situation where the new information does not come in the form of a definite value for $Y$, but as "soft evidence," i.e., a probability distribution $Q(Y)$?"
By the way, regarding your coin toss example, I can at least say how this is handled in Bayesian statistics. There are separate random variables for each coin toss: $Y_1$ is the first, $Y_2$ is the second, etc. If you have $n$ coin tosses, then your sample is a vector $Y$ containing $Y_1$ to $Y_n$. Then the posterior probability is $P(X \mid Y_1, \ldots, Y_n)$. This will be covered in any Bayesian statistics textbook as "the Bernoulli model". My class used Hoff's book, which provides a quick start.
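(For concreteness, the standard result under an illustrative uniform prior on the coin's bias, here written $\theta$:

$$p(\theta \mid y_1, \ldots, y_n) \propto \theta^{\sum_i y_i}(1-\theta)^{\,n-\sum_i y_i},$$

that is, a $\mathrm{Beta}\big(1+\sum_i y_i,\ 1+n-\sum_i y_i\big)$ posterior.)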
I guess this example suggests a single unknown $X$ (whether the coin is loaded or not) and $n$ replicates of $Y$.
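A minimal sketch of that two-hypothesis version in Python (the prior and the loaded coin's bias are invented for illustration):

```python
# Posterior probability that the coin is loaded, given tosses Y_1..Y_n.
import numpy as np

p_fair, p_loaded = 0.5, 0.8  # assumed bias of a loaded coin (illustrative)
prior_loaded = 0.1           # assumed prior P(X = loaded)

tosses = np.array([1, 1, 0, 1, 1, 1, 0, 1])  # 1 = heads, 0 = tails

def likelihood(p, ys):
    # P(Y_1..Y_n | bias p), tosses conditionally independent given the bias
    return float(np.prod(np.where(ys == 1, p, 1 - p)))

num = likelihood(p_loaded, tosses) * prior_loaded
den = num + likelihood(p_fair, tosses) * (1 - prior_loaded)
print(num / den)  # posterior P(X = loaded | Y_1..Y_n)
```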
That's interesting: if it's broken down not into single amino acids but into a mixture of single amino acids, dipeptides, and tripeptides, that still fits with how I understand the system to work; we're breaking it down into pieces, but not reliably into single units, sometimes two or three. And collagen consists of distinctive tripeptide repeats, so the tripeptides you get from collagen are a distinct mixture rather than just random 3-mers; I hadn't thought of that. That these tripeptides actually do something is surprising if true, but why not.
I guess what I was thinking was that when you eat collagen, it doesn't become your collagen. Which seems to be true: your collagen is made at the ribosome from single amino acids, not assembled from the kind of dipeptides and tripeptides discussed in the paper. So it's not like you get collagen by eating collagen, the way you get vitamin B12 by eating vitamin B12. But if there's some totally separate biological effect... well, I can't rule it out.