I'm confused. It seems to me that if we already have a discrete/combinatorial mess on our hands, sprinkling some chance of failure on each little gear won't summon the analysis fairy and make the mess easier to reason about. Or at least we need to be clever about the way randomness is added, to make the mess settle into something analytical. But von Neumann's article sounds more optimistic about this approach. Does anyone understand why?
I think what he's saying is that the existence of noise in computing hardware means that any computation done on this hardware must be (essentially) invariant to this noise, which leads the methods away from the precise, all-or-nothing logic of discrete math and into the fuzzier, smoother logic of probability distributions and the real line. This makes me think of analog computing, which is often done in environments with high noise and can indeed produce computations that are mostly invariant to it.
But, of course, analog computing is a niche field dwarfed by digital computing, which makes this prediction of von Neumann's comically wrong: the solution people went with wasn't to re-imagine all computations in a noise-invariant way; it was to improve the hardware to the point that the noise became negligible. But I don't want to sound harsh here at all. The prediction was so wrong only because, at least as far as I know, there was no reasonable way to predict the effect that transistors would have on computing, since they had only just been invented when this was written. From that vantage point it seems reasonable to expect creative improvements in mathematical methods before a several-orders-of-magnitude improvement in hardware accuracy.
I don't think he's saying that error is good. He's saying that ignoring error is bad. And there's a valley of bad rationality in which the error level is low enough that people ignore it, but it still matters. Specifically, he says that black-box models that don't include error are bad. You should model the error of components and how it propagates, and design circuits that are robust to it. I'm not sure what this would mean in practice, though. Today we design systems that recover from computers crashing, but recovering from wrong computation is harder.
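To make the "model error and error propagation" part concrete, here's a toy sketch (my own example, not anything from the paper): assume each gate independently flips its output with some small probability, then estimate how often a small circuit built from such gates gives the wrong answer.

```python
# Toy model (my example, not von Neumann's): every NAND gate independently
# reports the wrong value with probability EPS, and we estimate by Monte Carlo
# how often a 4-gate XOR circuit built from such gates gives the wrong answer.
import random

EPS = 0.01  # assumed per-gate failure probability

def noisy_nand(a, b, eps=EPS):
    out = not (a and b)
    return (not out) if random.random() < eps else out  # gate fails with prob eps

def xor_from_nands(a, b):
    # XOR built from four NAND gates; every gate is one more chance to fail
    c = noisy_nand(a, b)
    return noisy_nand(noisy_nand(a, c), noisy_nand(b, c))

def circuit_error_rate(trials=100_000):
    wrong = 0
    for _ in range(trials):
        a, b = random.random() < 0.5, random.random() < 0.5
        if xor_from_nands(a, b) != (a ^ b):
            wrong += 1
    return wrong / trials

print(circuit_error_rate())  # ~0.03: a few times EPS, since per-gate errors compound
```

The point is just that per-gate errors compound with circuit size, so the composite error rate ends up several times the per-gate rate; a black-box model that treats the circuit as exact hides that.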
But this is 1948. In 1956, he showed how to take digital gates with known error rates and build from them digital gates with lower error rates. This opens up the possibility of getting the error low enough that you can ignore it and embrace digital logic. I'm guessing that he changed his mind and walked back the earlier paper, but it would be nice to have an explicit statement.
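For reference, the 1956 paper is "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components". Here's the simplest version of the idea (a toy majority-vote sketch, not his actual multiplexed construction, and it assumes the vote itself is reliable, which his paper does not): triplicate a noisy gate and vote, so the composite gate fails only when at least two copies fail, roughly 3·eps² instead of eps.

```python
# Toy illustration of the 1956 idea (not the actual multiplexing construction):
# a gate that fails with probability EPS, triplicated and combined by a
# majority vote that is assumed reliable here for simplicity.
import random

EPS = 0.05  # assumed per-gate failure probability

def noisy_and(a, b, eps=EPS):
    out = a and b
    return (not out) if random.random() < eps else out  # gate fails with prob eps

def majority_and(a, b):
    votes = [noisy_and(a, b) for _ in range(3)]
    return sum(votes) >= 2  # reliable majority vote (an idealization)

def failure_rate(gate, trials=200_000):
    wrong = 0
    for _ in range(trials):
        a, b = random.random() < 0.5, random.random() < 0.5
        if gate(a, b) != (a and b):
            wrong += 1
    return wrong / trials

print(failure_rate(noisy_and))     # ~0.05  (= EPS)
print(failure_rate(majority_and))  # ~0.007 (about 3 * EPS**2)
```

Iterating this kind of construction drives the error down geometrically, at the cost of multiplying the hardware, which is what makes "get the error low enough to ignore" possible in principle.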
#1 (at the end) sounds like complexity theory.
Some of what von Neumann says makes it sound like he's interested in a mathematical foundation for analog computing, which I think has been done by now.
Quote from The General and Logical Theory of Automata. Corrected some typos using this version. H/T Hacker News.