I haven't played this, but I've watched a video of Japanese comedians playing it, which actually does give a sense of how it works.
There's an (IMO very obvious) algorithm for winning this with literally zero communication: play card N after N seconds have elapsed. I don't know how easy it is to precisely count double-digit-second intervals, but it doesn't seem that interesting to find out. It seems pretty clear that steelmanning the rules means not counting seconds.
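The counting strategy can be sketched as a toy simulation (my own construction; the deck size and hand sizes are assumptions). Since every card N is scheduled at exactly N seconds, the merged order of plays across all players is just all cards in ascending order, which is precisely the winning condition:

```python
import random

# Toy model of the zero-communication strategy for The Mind:
# each player plays card N exactly N seconds into the round.

def deal(num_players, cards_per_player, deck_size=100):
    """Deal distinct cards from a 1..deck_size deck."""
    deck = random.sample(range(1, deck_size + 1), num_players * cards_per_player)
    return [deck[i::num_players] for i in range(num_players)]

def play_round(hands):
    # Card N is played at time N, so merging all players' schedules
    # yields all cards sorted ascending -- exactly what the rules require.
    return sorted(card for hand in hands for card in hand)

hands = deal(num_players=4, cards_per_player=3)
plays = play_round(hands)
assert plays == sorted(plays)  # never out of order, regardless of the deal
```

The point of the sketch is that the guarantee holds for any deal, which is why the strategy trivializes the game unless the rules forbid timekeeping.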
So what you end up with is a game of reading precise System-2 information (numbers), translating it into nebulous System-1 body language, which the other players then need to process back into a precise number.
Re: cells defecting by becoming gametes, I think you were maybe a bit too terse. I believe I've figured out what's going on, but let me run it by you:
*Within the organism*, there's no selection pressure for cells to become gametes--mutations are random variations, not strategic actors, so a leaf is no more likely to 'decide' to become a flower than the reverse (which would also be harmful overall). The organism *does* have an incentive to keep the random mutation rate down, but no reason to *specifically* combat cells 'defecting' in this way.
And actually, if flowers are especially costly, the organism might evolve specific "no accidental flowers" adaptations--but for reasons unrelated to coordination problems.
Meanwhile, on a species level, there might be a bias in favor of the flower-instead-of-leaf mutations appearing in the gene pool, since these can show up via gamete mutations or leaf mutations, whereas most mutations can only appear via gamete mutations. Intuitively this seems unlikely to be a big deal, but I do wonder if tweaking the parameters could make it significant enough to make a specific adaptation to fight it worthwhile.
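To make the suggested bias concrete, here is a back-of-envelope sketch (all numbers are hypothetical, chosen only to illustrate the shape of the effect). An ordinary mutation enters the gene pool only through germline copying errors; a flower-instead-of-leaf mutation has a second route, arising somatically in a leaf lineage whose rogue flower then produces gametes:

```python
# Hypothetical parameters -- none of these figures come from the thread.
mu_germ = 1e-8             # germline mutation rate per site per generation
mu_soma = 1e-7             # somatic rate (more cell divisions, so plausibly higher)
p_rogue_reproduces = 1e-3  # chance a somatic rogue flower actually sires offspring

# Ordinary mutations have one entry route; flower mutations have two.
ordinary_entry_rate = mu_germ
flower_entry_rate = mu_germ + mu_soma * p_rogue_reproduces

enrichment = flower_entry_rate / ordinary_entry_rate
print(f"relative enrichment: {enrichment:.3f}x")
```

With these numbers the enrichment is only about 1%, matching the intuition that it's unlikely to be a big deal; but cranking up the somatic rate or the rogue flower's reproductive success is exactly the parameter-tweaking that could make a counter-adaptation worthwhile.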
This makes a lot more sense with some background on what a ribozyme is, which I lacked before reading this. AIUI certain sequences of RNA fold up in a way that makes them act as enzymes.
Though the real point isn't about biology, but rather generic coordination mechanisms...
FWIW I first read this post before this comment was written, then happened to think about it again today and had this idea, and came here to post it.
I do think it's a dangerous fallacy to assume mutually-altruistic equilibria are optimal--'I take care of me, you take care of you' is sometimes more efficient than 'you take care of me, I take care of you'.
Maybe someone needs to study whether Western countries ever exhibit "antisocial cooperation," that is, an equilibrium of enforced public contributions in an "inefficient public goods game" where each of four players gets 20% of the central pool. Might be more likely if you structure it as tokens starting out in the center and players have the option to take them? (Call it the 'enclosure game', perhaps)
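The payoff structure of the inefficient variant can be spelled out in a few lines (the 20%-to-each-of-four-players rule is from the comment above; the endowment size and framing are my own toy assumptions). Each contributed token pays 20% to each of the four players, so it returns only 0.8 tokens in total: contributing destroys value, which is what would make an enforced-contribution equilibrium "antisocial cooperation":

```python
# Toy payoff calculator for the inefficient public goods game.
# Each token in the pool pays `share` back to EVERY player.

def payoffs(contributions, endowment=10, share=0.2):
    pool = sum(contributions)
    return [endowment - c + share * pool for c in contributions]

nobody = payoffs([0, 0, 0, 0])      # everyone keeps their endowment of 10
everybody = payoffs([10, 10, 10, 10])  # everyone ends with 0.2 * 40 = 8
print(nobody, everybody)
```

Note that full contribution leaves every player strictly worse off than full defection, unlike the standard public goods game where the multiplier exceeds 1.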
So the big question here is, why are zetetic explanations good? Why do we need or want them when civilization will happily supply us with finished bread, or industrial yeast, or rote instructions for how to make sourdough from scratch? The paragraph beginning "Zetetic explanations are empowering" starts to answer, but a little bit vaguely for my tastes. Here's my list of possible answers:
1) Subjective reasons. They're fun or aesthetically pleasing. This feels like a throwaway reason, and doesn't get listed explicitly in the OP unless 'empowering' unpacks to 'subjectively pleasing', but I wouldn't throw it away so fast--if enough people find them fun, that alone could justify a campaign to put more zetetic explanations in the world.
2) They let you test what you're told. This is one of the reasons given in OP. Unfortunately, not every subject is amenable to zetetic explanation, and as long as I have to make up my mind about lots of science without zetetic understanding, I don't see zetetic explanation being an important part of my fake science filter.
3) They let you discover new things, whereas following rote instructions will only let you do what's been done before. This is true, but I think it usually takes a large base of zetetic understanding to do new useful things. If I tried to create new fermented foods based solely on having read this post, I probably wouldn't achieve anything useful. But if I did want to create novel fermented foods, I'd want to load up on lots more zetetic knowledge.
4) General increased wisdom? Maybe a zetetic understanding of bread ripples through your knowledge, leading you to a slightly better understanding of biology, the process of innovation, nutrition, and a variety of related fields, and if you keep amassing zetetic understandings of things it'll add up and you'll be smarter about everything. It's a nice story, but I'm not convinced it's true.
I think what we need is some notion of mediation. That is, a way to recognize that your liver's effects on your bank account are mediated by its effects on your health, and that the liver is therefore better thought of as a health optimizer.
This has to be counteracted by some kind of complexity penalty, though, or else you can only ever call a thing a [its-specific-physical-effects-on-the-world]-maximizer.
I wonder if we might define this complexity penalty relative to our own ontology. That is, to me, a description of what specifically the liver does requires lots of new information, so it makes sense to just think of it as a health optimizer. But to a medical scientist, the "detoxifies..." description is still pretty simple and obviously superior to my crude 'health optimizer' designation.
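A minimal sketch of what an ontology-relative penalty might look like (entirely my own construction, not from the thread): charge a model description one unit for each concept the observer doesn't already have. The example terms and vocabularies below are illustrative placeholders:

```python
# Toy ontology-relative complexity: cost = number of unfamiliar concepts.

def complexity(description, ontology):
    terms = set(description.split())
    return len(terms - ontology)

liver_detailed = "detoxifies metabolites synthesizes bile regulates glycogen"
liver_crude = "health optimizer"

layperson = {"health", "optimizer"}
scientist = {"health", "optimizer", "detoxifies", "metabolites",
             "synthesizes", "bile", "regulates", "glycogen"}

# To the layperson the detailed model costs 6 new concepts, so "health
# optimizer" wins; to the scientist the detailed model is free, so its
# extra accuracy makes it obviously superior.
print(complexity(liver_detailed, layperson))  # 6
print(complexity(liver_detailed, scientist))  # 0
print(complexity(liver_crude, layperson))     # 0
```

This is of course far too crude to be a real description-length measure, but it captures the asymmetry: the same description carries different penalties for different observers.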
> if there's a sufficiently large amount of sufficiently precise data, then the physically-correct model's high accuracy is going to swamp the complexity penalty
I don't think that's necessarily true?
Probutility of winning = 1 USD
Perceived chemical-ness is a very rough heuristic for the degree of optimization a food has undergone for being sold in a modern economy (see http://slatestarcodex.com/2017/04/25/book-review-the-hungry-brain/ for why this might be something you want to avoid). Very, very rough--you could no doubt list examples of 'non-chemicals' that are more optimized than 'chemicals' all day, as well as optimizations that are almost certainly not harmful. And yet I'd wager the correlation is there.
Okay, I think I see where you're coming from. I've definitely updated towards considering the OP proposal scarier. Thanks for spelling things out.