Today MIRI released a new technical report by visiting researcher Katja Grace called "Algorithmic Progress in Six Domains." The report summarizes data on algorithmic progress – that is, better performance per fixed amount of computing hardware – in six domains:
- SAT solvers,
- Chess and Go programs,
- Physics simulations,
- Factoring,
- Mixed integer programming, and
- Some forms of machine learning.
MIRI's purpose for collecting these data was to shed light on the question of intelligence explosion microeconomics, though we suspect the report will be of broad interest within the software industry and computer science academia.
One finding from the report was previously discussed by Robin Hanson here. (Robin saw an early draft on the intelligence explosion microeconomics mailing list.)
This is the preferred page for discussing the report in general.
Summary:
In recent boolean satisfiability (SAT) competitions, SAT solver performance has increased 5–15% per year, depending on the type of problem. However, these gains have been driven by widely varying improvements on particular problems. Retrospective surveys of SAT performance (on problems chosen after the fact) display significantly faster progress.
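To make the compounding concrete, here is a minimal sketch of what a 5–15% annual improvement amounts to over time. The ten-year horizon is an illustrative assumption, not a figure from the report:

```python
# Illustrative compounding of annual SAT solver improvements.
# The rates come from the 5-15%/year range above; the 10-year
# horizon is an arbitrary choice for illustration.
for rate in (0.05, 0.15):
    total = (1 + rate) ** 10
    print(f"{rate:.0%}/year for 10 years -> {total:.1f}x overall speedup")
```

So even the low end of the range compounds to a roughly 1.6x speedup over a decade, while the high end compounds to about 4x.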
Chess programs have improved by around 50 Elo points per year over the last four decades. Estimates of the contribution of hardware are very noisy, but they are consistent with hardware improvements being responsible for approximately half of progress. Progress has been smooth on the scale of years since the 1960s, except over the past five years. Go programs have improved by about one stone per year for the last three decades. Hardware doublings produce diminishing Elo gains, on a scale consistent with hardware accounting for around half of progress.
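As a rough consistency check (not a calculation from the report), one can combine the 50 Elo/year figure with an assumed hardware doubling period to see what "half of progress from hardware" implies per doubling. The 18-month doubling period below is a stock Moore's-law assumption, not a number from the report:

```python
# Back-of-the-envelope: if half of the ~50 Elo/year in chess comes
# from hardware, how many Elo must one hardware doubling buy?
# ASSUMPTION: hardware doubles every 1.5 years (a stock Moore's-law
# figure, not a number from the report).
elo_per_year = 50
hardware_share = 0.5          # "approximately half of progress"
doubling_period_years = 1.5   # assumed

elo_from_hardware_per_year = elo_per_year * hardware_share
elo_per_doubling = elo_from_hardware_per_year * doubling_period_years
print(f"~{elo_per_doubling:.0f} Elo per hardware doubling")  # ~38
```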
Improvements in a variety of physics simulations (selected after the fact to exhibit performance increases due to software) appear to be roughly half due to hardware progress.
The largest number factored to date has grown by about 5.5 digits per year for the last two decades. Computing power increased 10,000-fold over this period, and it is unclear how much of the progress in factoring is due to hardware rather than to algorithmic improvements.
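For scale, the stated 10,000-fold growth in computing power over two decades corresponds to a doubling time of roughly eighteen months. The short sketch below just does that conversion; both input figures come from the paragraph above:

```python
import math

# How fast did computing power grow, given a 10,000-fold increase
# over roughly 20 years? (Both figures from the paragraph above;
# the conversion to a doubling time is just arithmetic.)
growth_factor = 10_000
years = 20

doublings = math.log2(growth_factor)   # ~13.3 doublings
doubling_time = years / doublings      # ~1.5 years each
print(f"{doublings:.1f} doublings -> one every {doubling_time:.1f} years")
```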
Some mixed integer programming (MIP) algorithms, run on modern MIP instances with modern hardware, have roughly doubled in speed each year. MIP is an important optimization problem, but one that was singled out after the fact because of its performance improvements. Other optimization problems have seen more inconsistent (and harder to measure) improvements.
Various forms of machine learning have had steeply diminishing progress in percentage accuracy over recent decades. Some vision tasks have recently seen faster progress.
Yes, C is the one counterexample I had in mind, but I vaguely recall compatibility issues much larger than would be acceptable for JS. I might be wrong.
I remember porting code between GCC and ICC being basically painless, but I don't remember the details. Are the three compilers expected to run code tuned on the others with zero complaints? How prevalent are compiler-specific settings? (I do remember needing to write inline assembly in a special way, I think.)
OK, got some answers. It seems GCC leads by adding non-standard features, and ICC/Clang follow to stay compatible. This doesn't seem to be the case in the JS world, where the competitors seem to be on a more equal footing (though V8 has the mindshare lead, with node.js etc.). Whether this is enough to support the "first ever" description is, I guess, a matter of personal taste at this point. All in all, thanks for the counter, Gwern.