Our investigation into last week's LW downtime is complete: here (Google Docs).
We failed to update our AWS configuration after changes on Amazon's side, which caused a cycle of servers being spawned and then killed before they could finish booting. Our automated testing should have notified us of this failure immediately, but it contained a predictable failure mode (one we identified last year but never fixed). I became aware of the downtime when I checked my email, and I worked on it until it was resolved.
I personally feel very bad about our multiple failures leading to this incident.
For reference, the last time I did this to you: http://lesswrong.com/lw/29v/lesswrong_downtime_20100511_and_other_recent/
- We have reconfigured AWS and the tools we use to communicate with it to avoid this failure in the future.
- Improvements to our automated site-testing system (Nagios) are underway, expected to be live before 2012-04-13. The new tests will alert on greater-than-X-failures-out-of-Y-trials, rather than the current zero-successes-out-of-Z-trials.
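The difference between the two alerting rules can be sketched as follows. This is an illustrative Python snippet, not our actual Nagios configuration: the function names, the trial data, and the threshold value are all assumptions made for the example.

```python
# Hypothetical sketch of the alerting-rule change; not real Nagios config.

def old_check(results):
    """Old rule: alert only if zero successes out of Z trials."""
    return not any(results)

def new_check(results, max_failures):
    """New rule: alert if more than max_failures (X) out of Y trials failed."""
    failures = sum(1 for ok in results if not ok)
    return failures > max_failures

# Example: 3 failures out of 5 trials (illustrative data).
trials = [True, False, False, False, True]
print(old_check(trials))                   # False - some successes, so no alert
print(new_check(trials, max_failures=2))   # True  - 3 failures > 2, so alert
```

The point of the change: a service that is mostly broken but occasionally succeeds never trips the old zero-successes rule, while the new threshold rule catches it.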
- We have changed our staffing, in part in recognition that some systems (including this one) had been allowed to fall out of date, and have allocated a developer to review our system-administration project planning.
Further actions - site speed:
We're unhappy with the site's speed. We plan on spending some time next week doing what we can to improve it.
(If you upvote this post, please downvote my "Karma sink" comment below - I would prefer not to earn karma from an event like this.)