[this is a linkpost to Analysis of World Records in Speedrunning]

TL;DR: I have scraped a database of world record improvements for fastest videogame completion across several videogames, noted down some observations about the trends of improvement and attempted to model them with some simple regressions. Reach out if you'd be interested in researching this topic!

### Key points

- I argue that researching speedrunning can help us understand scientific discovery, AI alignment and extremal distributions.
- I’ve scraped a dataset on world record improvements in videogame speedrunning. It spans 15 games, 22 categories and 1462 runs.
- Most world record progressions I studied follow a pattern of diminishing returns. Some exhibit successive cascades of improvements, with continuous phases of diminishing returns periodically interrupted by (presumably) sudden discoveries that speed up the rate of progress.
- Simple linear extrapolation techniques could not improve on just guessing that the world record will not change in the near future.
- Possible next steps include trying different extrapolation techniques, modelling the discontinuities in the data and curating a dataset of world record improvements in tool-assisted speedruns.

The script to scrape the data and extrapolate it is available __here__. A snapshot of the data as of 30/07/2021 is available __here__.

Feedback on the project would be appreciated. I am especially keen on discussion about:

- Differences and commonalities to expect between speedrunning and technological improvement in different fields.
- Discussion on how to mathematically model the discontinuities in the data.
- Ideas on which techniques to prioritize to extrapolate the observed trends.

This is cool! I like speedrunning! There's definitely a connection between speed-running and AI optimization/misalignment (see When Bots Teach Themselves to Cheat, for example). Some specific suggestions:

Plot Speed-Run-Time-on-Game-Release divided by Speed-Run-Time at time t, vs time. Some benefits of this include:

- Intuitive meaning: This ratio tells you how many optimal speed-runs at time t could be accomplished over the course of a single speed-run at game release.
- Partially addresses diminishing returns: Say the game's first speed-run completes the game in 60 seconds and the second speed-run completes it in 15 seconds (a 45-second improvement). No matter how much you work at the game, it's not possible to reduce the speed-run time by more than another 15 seconds (less than the previous 45-second improvement), so diminishing returns are baked in. In contrast, if you look at the ratio, the first run has a ratio of 1 (60 seconds / 60 seconds), the second has a ratio of 4 (60 seconds / 15 seconds), and a third one-second speed-run has a ratio of 60 (60 seconds / 1 second). Between the second and third speed-run, we've gone from a value of 4 to a value of 60 (a 15x increase!). Diminishing returns are no longer inevitable!

- Easier to visualize: By normalizing by the initial speed-run time, all games start out with the same value regardless of how long they objectively take. This will allow you to more easily identify similarities between the trends.
- More comparable to tech progress: Since diminishing returns aren't inevitable by construction, this looks more like tech progress, where diminishing returns also aren't inevitable by construction. Note that they still can show up in practice, however.

Plot time relative to when the first speed-run was registered. That is, set the date of the first speed-run to t=0. This should help you identify trends.

I don't think speed-running can be particularly predictive of the tech advances

Use functions that expect asymptotes (e.g. logistic equations). Combinations of logistic equations can probably capture the cascading L-curves you notice in your write-up. It may also be worth doing some basic analysis like counting the number of inflections in each speed-run trend (do this by plotting derivatives and counting the number of peaks). If you don't transform for whatever reason, try exponential decay.

Have fun out there!
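The ratio transformation suggested above can be sketched in a few lines (the record times here are the made-up 60 s / 15 s / 1 s example, not real data):

```python
def release_time_ratio(wr_times):
    """Normalise a series of world-record times by the first record.

    Returns initial_time / current_time for each record, so the first
    entry is always 1.0 and later entries can grow without bound as the
    record improves -- diminishing returns are not baked in.
    """
    initial = wr_times[0]
    return [initial / t for t in wr_times]

# The 60 s -> 15 s -> 1 s example from above:
print(release_time_ratio([60, 15, 1]))  # [1.0, 4.0, 60.0]
```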

Those are good suggestions!

Here is what happens when we align the start dates and plot the improvements relative to the time of the first run.

I am slightly nervous about using the first run as the reference, since early data in a category is quite unreliable and basically reflects the time of the first person who thought to submit a run. But I think it should not create any problems.

Interestingly, plotting the relative improvement reveals some S-curve patterns, with phases of increasing returns followed by phases of diminishing returns.

I did not manage to beat the baseline by extrapolating the relative improvement times either. Interestingly, using a grid to count non-improvements as observations made the extrapolation worse, so this time the best fit was achieved with a log-linear regression over the last 8 weeks of data in each category.
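A minimal sketch of that log-linear baseline, on synthetic data (the actual analysis lives in the linked script; the window and decay rate here are made up for illustration):

```python
import numpy as np

def loglinear_forecast(days, times, horizon_days, window_days=56):
    """Fit log(WR time) ~ date on the most recent window and extrapolate.

    days:  days since the first run in the category, sorted ascending
    times: world-record times (seconds) on those days
    """
    days, times = np.asarray(days, float), np.asarray(times, float)
    recent = days >= days[-1] - window_days      # last 8 weeks by default
    slope, intercept = np.polyfit(days[recent], np.log(times[recent]), 1)
    return float(np.exp(intercept + slope * (days[-1] + horizon_days)))

# Synthetic record series: exponential decay, one run per week.
d = np.arange(0, 200, 7.0)
wr = 3600 * np.exp(-0.007 * d)
pred = loglinear_forecast(d, wr, horizon_days=30)
```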

As before, the code to replicate my analysis is available here.

I haven't had time yet to include logistic models or to analyze the derivatives of the improvements - if you feel so inclined, feel free to reuse my code to perform the analysis yourself, and if you share the results here we can comment on them!
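For anyone picking this up, a rough sketch of what fitting a combination of logistic curves could look like (synthetic data and made-up parameters, not the repo's actual code):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Single S-curve: cumulative improvement saturating at L."""
    return L / (1.0 + np.exp(-k * (t - t0)))

def double_logistic(t, L1, k1, t01, L2, k2, t02):
    """Two stacked S-curves: a second discovery restarts progress."""
    return logistic(t, L1, k1, t01) + logistic(t, L2, k2, t02)

# Noise-free synthetic "cascade": two improvement waves.
t = np.linspace(0, 100, 200)
true_params = (30, 0.3, 20, 50, 0.2, 70)
y = double_logistic(t, *true_params)

# Fit needs a sensible initial guess, as stacked logistics are
# prone to local minima.
popt, _ = curve_fit(double_logistic, t, y, p0=(28, 0.25, 18, 45, 0.15, 65))
```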

PS: there is a sentence missing an ending in your comment

Ah yes, the bottle glitch...

I stuck this on Twitter already, but normalised, these shake out to a consistent-ish set of curves

code

This is so cool!

It seems like the learning curves are reasonably close to the diagonal.

On the other hand, despite all curves being close to the diagonal, they seem to mostly undershoot it. This might imply that the rate of improvement is slightly decreasing over time.

One thing that tripped me up about this graph, and may trip other readers: the relative attempt is relative to the number of WR improvements. That means that if there are 100 WRs, the point with relative attempt = 0.5 is the 50th WR improvement, not the one whose date is closest to the midpoint between the dates of the first and last attempt.

So this graph is giving information about "conditional on you putting enough effort to beat the record, by how much should you expect to beat it?" rather than on "conditional on spending X amount of effort on the margin, by how much should you expect to improve the record?".

Here is the plot that would correspond to the other question, where the x axis value is not proportional to the ordinal index of WR improvement but to the date when the WR was submitted.
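Concretely, the two x-axes differ as follows (the dates here are made up to illustrate the gap between ordinal position and calendar position):

```python
def relative_axes(dates):
    """For a sorted list of WR dates (as day numbers), return
    (ordinal_fraction, time_fraction) for each record.

    ordinal_fraction: i / (n - 1)        -- position in the WR sequence
    time_fraction: (d - d0) / (dN - d0)  -- position in calendar time
    """
    n = len(dates)
    d0, dN = dates[0], dates[-1]
    ordinal = [i / (n - 1) for i in range(n)]
    temporal = [(d - d0) / (dN - d0) for d in dates]
    return ordinal, temporal

# Four records: three in quick succession, then a long gap.
ordinal, temporal = relative_axes([0, 10, 20, 100])
# The third record sits at 2/3 on the ordinal axis but only 0.2
# of the way through calendar time.
```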

It shows a far weaker correlation. This suggests that a) the best predictor of new WRs is the overall number of runs being put into the game, and b) the number of new WRs around a given time is a good estimate of the overall number of runs being put into the game.

This has made me update a bit against plotting WR vs time, and in favor of plotting WR vs cumulative number of runs. Here are some suggestions about how one could go about estimating the number of runs being put into the game, if somebody wants to look into this!

PS: the code for the graph above, and code to replicate Andy's graph, is now here

Update: I tried regressing on the ordinal position of the world records and found a much better fit, and better (above baseline!) forecasts of the last WR of each category.
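A minimal sketch of that ordinal regression, with a made-up geometric WR sequence standing in for the real data:

```python
import numpy as np

def ordinal_fit(times):
    """Regress log(WR time) on the ordinal index of each record.

    Returns a function mapping record index -> predicted WR time,
    usable to forecast the next record in the sequence.
    """
    idx = np.arange(len(times))
    slope, intercept = np.polyfit(idx, np.log(times), 1)
    return lambda n: float(np.exp(intercept + slope * n))

# Toy sequence: each new record shaves ~5% off the previous one.
times = [100 * 0.95 ** n for n in range(12)]
predict = ordinal_fit(times)
next_wr = predict(12)   # forecast for the 13th record
```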

This makes me update further towards the hypothesis that `date` is a bad predictive variable. Sadly this would mean that we really need to track whatever the index in WR is correlated with (presumably the cumulative number of runs overall by the speedrunning community).

I strongly suggest looking at world records in TrackMania; it should be an absolute treasure trove of data for this purpose: 15+ years of history over dozens of tracks, with loads of incremental improvements and breakthrough exploits alike.

Here's an example of one such incredible history:

Apparently many records have been subjected to cheating:

This doesn't seem particularly relevant for the purpose of understanding trends; the underlying dynamics aren't changed by slowing down time.

Is there any way to estimate how many cumulative runs speedrunners have completed at a given point? It is intuitive that progress should be related to the amount of effort put in, and that the more people play a game, the further they can push the limits. This may explain a lot of the apparent heterogeneity, even if all games have a similar experience-curve exponent.

It's also interesting because the form might suggest that each attempt has an equal chance of setting a record (the equal-odds rule; see "On the distribution of time-to-proof of mathematical conjectures", Hisano & Sornette 2012, for math proof attempts, and the counting argument in "Scaling Scaling Laws with Board Games", Jones 2021), which would show how progress comes from brute force.
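Under that i.i.d. record model, the expected number of records after n attempts is the harmonic number H_n ≈ ln(n) + 0.577, which a quick simulation confirms (attempt times drawn i.i.d. for illustration; real attempts are of course not independent):

```python
import random

def count_records(attempts):
    """Count how many attempts set a new best (lower is better)."""
    best, records = float("inf"), 0
    for t in attempts:
        if t < best:
            best, records = t, records + 1
    return records

random.seed(0)
n, trials = 1000, 2000
mean_records = sum(
    count_records([random.random() for _ in range(n)]) for _ in range(trials)
) / trials
harmonic = sum(1 / k for k in range(1, n + 1))   # H_1000 ~ 7.49
# mean_records should land close to harmonic.
```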

One should be able to use the speedrun.com API to query the number of runs submitted by a certain date, as a proxy for cumulative attempts (though it will not reflect all attempts since, AFAIK, many runners only submit their personal bests to speedrun.com).
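The counting step could look something like this, assuming run objects carrying an ISO `submitted` timestamp as the v1 API returns them (the sample data below is made up):

```python
from datetime import date

def runs_submitted_by(runs, cutoff):
    """Count runs submitted on or before `cutoff`, skipping runs
    that have no recorded submission date."""
    total = 0
    for run in runs:
        submitted = run.get("submitted")        # e.g. "2021-07-30T12:00:00Z"
        if submitted and date.fromisoformat(submitted[:10]) <= cutoff:
            total += 1
    return total

# Made-up sample in the API's rough shape:
sample = [
    {"submitted": "2020-01-15T09:30:00Z"},
    {"submitted": "2021-06-01T18:00:00Z"},
    {"submitted": None},                        # some runs lack the field
]
print(runs_submitted_by(sample, date(2020, 12, 31)))  # 1
```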

Additionally, speedrun.com provides some stats on the amount of runs and players for each game, for example the current stats for Super Metroid can be found here: https://www.speedrun.com/supermetroid/gamestats

There are some problems with this approach too.

I'd be excited to learn about the results of either approach if anybody ends up scraping this data!