While, if successful, such an epistemic technology would be incredibly valuable, I think that the possibility of failure should give us pause. In the worst case, this effectively has the same properties as arbitrary censorship: one side "wins" and afterwards gets to decide what is legitimate and what counts towards changing the consensus, perhaps by manipulating the definitions of success or testability. Unlike in sports, where the thing being evaluated and the thing doing the evaluating are generally separate (the success or failure of athletes doesn't impede the abilities of statisticians, and vice versa), here the system risks being both the subject of evaluation and its controller.
I do think "[a]bility to contribute to the thought process seems under-valued" is very relevant here. A prediction-tracking system captures one...layer[^1], I suppose, of intellectuals: the layer concerned with making frequent, specific, testable predictions about imminent events. Those whose theories are more vague, have more complex outcomes, or yield less frequent predictions[^2][^3], while perhaps instrumental to the frequent, specific, testable predictors, would not be recognized, unless there were some sort of complex system compelling the assignment of credit to the vague contributors (and presumably to their vague contributors, et cetera, across the entire intellectual lineage or at least some maximum feasible depth).
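To make the credit-assignment idea concrete, here is a minimal sketch of one possible scheme (not a proposal; the names, the influence graph, and the decay factor are all invented for illustration): each successful prediction passes a decaying share of its credit up the chain of intellectual influences.

```python
# Hypothetical sketch: propagate prediction credit up an influence graph.
# The names, graph, and decay factor are invented for illustration.

# influences[x] lists the thinkers whose work x drew on.
influences = {
    "predictor": ["theorist"],
    "theorist": ["grand_theorist"],
    "grand_theorist": [],
}

def assign_credit(scores, influences, decay=0.5, max_depth=3):
    """Give each upstream influence a decaying share of downstream predictive success."""
    credit = dict(scores)  # direct credit for one's own predictions
    for person, score in scores.items():
        frontier = influences.get(person, [])
        share = score
        for _ in range(max_depth):
            if not frontier:
                break
            share *= decay
            next_frontier = []
            for upstream in frontier:
                credit[upstream] = credit.get(upstream, 0.0) + share / len(frontier)
                next_frontier.extend(influences.get(upstream, []))
            frontier = next_frontier
    return credit

# The predictor earns 1.0 directly; half flows to the theorist,
# and a quarter flows one level further up the lineage.
print(assign_credit({"predictor": 1.0}, influences))
# → {'predictor': 1.0, 'theorist': 0.5, 'grand_theorist': 0.25}
```

Even this toy version shows why the problem is nontrivial: the decay rate, the depth cutoff, and the shape of the influence graph are all contestable, and each choice redistributes recognition.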
This would be useful to help the lay public understand outcomes of events, but not necessarily useful in helping them learn about the actual models behind them; it leaves them with models like "trust Alice, Bob, and Carol, but not Dan, Eve, or Frank" rather than "Alice, Bob, and Carol all subscribe to George's microeconomic theory which says that wages are determined by the House of Mars, and Dan, Eve, and Frank's failure to predict changes in household income using Helena's theory that wage increases are caused by three-ghost visitations to CEOs' dreams substantially discredits it". Intellectuals could declare that their successes or failures, or those of their peers, were due to adherence to a specific theory, or the lay public could try to infer as much, but this is another layer of intellectual analysis that is nontrivial unless everyone wears jerseys declaring what theoretical school of thought they follow (useful if there are a few major schools of thought in a field and the main conflict is between them, in which case we really ought to be ranking those instead of individuals; not terribly useful otherwise).
[^1]: I do not mean to imply here that such intellectuals are above or below other sorts. I use layer here in the same way that it is used in neural networks, denoting that its elements are posterior to other layers and closer to a human-readable/human-valued result.
[^2]: For example, someone who predicts the weather will have much more opportunity to be trusted than someone who predicts elections. Perhaps this is how it should be; while elections are less frequent, their predictions will likely have a wider spread, and if our overall confidence in election-predicting intellectuals is lower than our confidence in weather-predicting intellectuals, that might just be the right response to a field with relatively fewer data points: less confidence in any specific prediction or source of knowledge.
[^3]: On the other hand, these intellectuals may be less applied not because of the nature of their field, but the nature of their specialization; a grand and abstract genius could produce incredibly detailed models of the world, and the several people who run the numbers on those models would be the ones rewarded with a track record of successful predictions.
Why _haven't_ they already switched? Presumably, these companies are full of people with at least vague incentives pointing at maximizing efficacy, yet they're leaving a "clearly superior" product on the table. The answer may be that this is some sort of systemic, widespread failure of decision-making, or a decision-making success under different criteria (lower tolerance for the risk of change, perhaps, than these same systems have now), rather than a reflection of some inadequacy of RT-LAMP. But "the folks with the expertise and incentive to get it right are all getting it wrong and leaving money on the table" sounds like a more complex explanation than "there are shortcomings to RT-LAMP that I haven't considered", and I'd like to see some further evidence in favor of it.
You may be familiar with the term "Technological Singularity" as used to describe what happens in the wake of the development of superintelligent AGI; this term is not mere grandiosity, but refers to the belief that what follows such a development would be incredibly and unpredictably transformative, subject to new phenomena and patterns of which we may not yet be able to conceive.
I don't believe it would be smart to invest with such a scenario in mind; we have little reason to believe that how much pre-Singularity wealth one has would matter post-Singularity in such a way that it would be wise to include such a term in one's expected value and decision-making. It would be not entirely unlike buying stock based on which companies would most benefit from the announcement of an incoming Earth-shattering asteroid. The development of superintelligent AGI is an existential threat to just about every institution, including the stock market and our current conception of the economy in general. A rational, entirely selfish actor or aggregate thereof does not make plans for what happens after its death.
However, I must admit that I have no data on the subject, and while I would not guess that there is much relevant data available, I imagine there is some - did the U.S. stock market account for what companies might be most successful in the case of a Soviet conquest of the U.S.? Is the potential profitability of a company in a world transformed by a global Communist revolution accounted for in its current stock price? I do not know, but I would be very surprised to learn that the stock market priced scenarios in which it and the institutions on which it depends are unlikely to continue to exist in recognizable forms.
The example of the pile of sand sounds a lot like the Chinese Room thought experiment, because at some point, the function for translating between states of the "computer" and the mental states which it represents must begin to (subjectively, at least, but also with some sort of information-theoretic similarity) resemble a giant look-up table. Perhaps it would be accurate to say that a pile of sand with an associated translation function is somewhere on a continuum between an unambiguously conscious (if anything can be said to be conscious) mind (such as a natural human mind) and a Chinese Room. In such a case, the issue raised by this post is an extension of the Chinese Room problem, and may not require a separate answer, but does do the notable service of illustrating a continuum along which the Chinese Room lies, rather than a binary.
Not entirely true; low sperm counts are associated with low male fertility in part because sperm carry enzymes which clear the way for other sperm - so a single sperm isn't going to get very far.
In addition to enjoying the content, I liked the illustrations, which I did not find necessary for understanding but which did break up the text nicely. I encourage you to continue using them.
1) Historical counter-examples are valid. Counter-examples of the form "if you had followed this premise at that time, with the information available in that circumstance, you would have come to a conclusion we now recognize as incorrect" are valid and, in my opinion, quite good. Alternately, this other person's argument is weak on its own terms; just ask about other things which tend to be correlated with what we consider "advanced", such as low infant mortality rates (does that mean human value lies entirely in surviving to age five?) or taller buildings (is the United Arab Emirates the objectively best country?).
2) "Does life have meaning" is a confused question. Define what "meaning" means in whatever context it is being used before engaging in any further debate, otherwise you will be arguing over definitions indefinitely and never know it. Your argument does sound suspiciously similar to Pascal's Wager, which I suspect other commenters are more qualified to dissect than I am.
I agree that growth shouldn't be a major marker of success (at least at this point), but even if it's not a metric on which we place high terminal value, it can still be a very instrumentally valuable metric - for example, if our insight rate per person is very expensive to increase, and growth is our most effective way to increase total insight.
So while growth should be sacrificed when it harms other metrics - for example, if growth has a strong negative impact on the insight rate per person - I would say it's still reasonable to assume it's valuable until proven otherwise.
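A toy illustration of that trade-off (all numbers invented): if total insight is just membership times per-person insight rate, growth can raise the total even while mildly diluting the per-person rate, but a steep enough dilution reverses the conclusion.

```python
# Invented numbers, purely for illustration:
# total insight = members × insight rate per member.
def total_insight(members, insight_per_member):
    return members * insight_per_member

baseline = total_insight(100, 1.0)        # 100 members at the current rate
mild_dilution = total_insight(200, 0.8)   # doubling size, mild per-person decline
steep_dilution = total_insight(200, 0.4)  # doubling size, steep per-person decline

print(baseline, mild_dilution, steep_dilution)
# → 100.0 160.0 80.0 — growth helps only in the mild-dilution case
```

So whether growth is instrumentally valuable hinges on how steeply it dilutes the per-person rate, which is an empirical question.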