One of my long-standing interests is writing content that will age gracefully. But as a child of the Internet, I am addicted to linking, and linkrot is profoundly threatening to me, so another interest of mine is archiving URLs. My current methodology is a combination of archiving my browsing, both in public archives like the Internet Archive and locally, and proactively archiving entire sites (a sketch of the submission step follows the list below). Sites I have previously archived in part or in total include:

  1. LessWrong (I may've caused some downtime here, sorry about that)
  2. OvercomingBias
  3. SL4
  4. Chronopause.com
  5. Yudkowsky.net (in progress)
  6. Singinst.org
  7. PredictionBook.com (for obvious reasons)
  8. LongBets.org & LongNow.org
  9. Intrade.com
  10. Commonsenseatheism.com
  11. finney.org
  12. nickbostrom.com
  13. unenumerated.blogspot.com & http://szabo.best.vwh.net/
  14. weidai.com
  15. mattmahoney.net
  16. aibeliefs.blogspot.com
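
For concreteness, the submission step of that methodology amounts to little more than this (a minimal sketch, not the bot's actual code: it assumes the Wayback Machine's standard web.archive.org/save endpoint, and 'urls.txt' is a hypothetical file of URLs pulled from browser history; the WebCite half is analogous):

    # Sketch: submit each URL to the Wayback Machine's save-page endpoint.
    # 'urls.txt' stands in for the URL list extracted from browser history.
    while read -r url; do
        curl --silent --output /dev/null "https://web.archive.org/save/$url"
        sleep 20    # archives throttle rapid submissions, so pace the requests
    done < urls.txt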

Having recently added WikiWix to my archival bot, I was thinking of re-running various sites, and I'd like to know: what other LW-related websites would people like to still be able to access 30 or 40 years from now?

(This is an important long-term issue, and I don't want to miss any important sites, so I am posting this as an Article rather than the usual Discussion. I already regret not archiving Robert Bradbury's full personal website - having only his Matrioshka Brains article - and do not wish to repeat the mistake.)

That would already be covered by my own reading of it, my browser history being the main source of URLs for archiver-bot.

I think it's like backups - you don't appreciate the need until the data is gone, and then it's too late. And to be fair, I don't think I would get much value out of an archive of my web browsing history from ages 10-16, say.

You're making a permanent backup of everything you ever read on the internet? That's... that's... well I suppose data storage is cheap these days. It makes perfect sense. Reading your scripting instructions now.

Not everything; I filter out things I am sure I won't want in the future, as well as things I strongly expect to remain available & which would take up a lot of space (Wikipedia in particular), and the bot is rate-limited by the IA/WebCite submissions. More and more material is becoming difficult to archive as sites load content via JS. But much of what I read, yes.
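
The filtering stage is equally simple in outline (a sketch with illustrative patterns, not my real blacklist):

    # Sketch of the filter: drop URLs I'm sure I won't want (e.g. search
    # results) and big sites expected to remain available anyway (Wikipedia
    # especially) before the rate-limited submission loop sees them.
    grep -E -v 'wikipedia\.org|google\.[a-z]+/search' raw-urls.txt > queue.txt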

I'm working on a rationality blog aggregator, and should be ready to make it public in the next few days. Would you like to know when it is released?

Can you post a link in the discussion section when it's done? I'd be interested in it, and I suspect many others on this site would be as well.

Does archive.org plan to implement a download feature and domain archive coverage indicator? (I assume they don't have that, otherwise you'd probably mention it. It would also make sense to publish such incremental archives as distributed version control access points.)

Edit: From the FAQ:

Can people download sites from the Wayback?

Our terms of use specify that users of the Wayback Machine are not to copy data from the collection. If there are special circumstances that you think the Archive should consider, please contact info at archive dot org.

(No explanation is given for why, though.)

But... the only way to view the 'data' is by copying it to my computer! That's how the Internet works!

I think that legally, the copy in your browser doesn't count somehow, the same way that the copy of a painting that you make by holding a mirror near it doesn't count. I'm guessing the criterion is whether the copy is ephemeral or persistent.

This is a place where copyright law and theory still haven't quite caught up, though there are numerous attempts to make laws about these things while just ignoring facts like "To use software one must often copy a significant part of it into memory".

ETA: There's usually a provision allowing copies of software to be made if copying "is an essential step in the utilization of the computer program", which is arguably an extension of the "transitory duration" clause (which would cover the 'mirror' case).

Eliezer's homepage and any papers on the SingInst site?

(I had another suggestion, but it became redundant when I saw who wrote the post.)

Eliezer I covered already, and I've added singinst.org to the queue. (Singinst.org yielded 4343 filtered URLs, on-site and off-site, to be archived.)

I have finished another spider & populated my queue with links from the following sites:

  • sl4.org
  • chronopause.com
  • yudkowsky.net
  • intelligence.org
  • www.predictionbook.com
  • longbets.org
  • longnow.org
  • www.intrade.com
  • slatestarcodex.com
  • squid314.livejournal.com
  • aibeliefs.blogspot.com
  • mattmahoney.net
  • www.weidai.com
  • unenumerated.blogspot.com
  • szabo.best.vwh.net
  • nickbostrom.com
  • commonsenseatheism.com
  • rationality.org
  • www.acceleratingfuture.com

(Note that if you use linkchecker, you will want >4GB of RAM to spider all those domains.)
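
For anyone wanting to reproduce the spidering, the invocation is along these lines (a sketch; flag spellings can vary between linkchecker versions, so check your local manpage):

    # Spider one domain and dump every link found as CSV, one row per URL;
    # a negative recursion level means unlimited depth, which is what
    # drives the >4GB memory footprint across all the domains above.
    linkchecker --recursion-level=-1 -ocsv http://sl4.org/ > sl4-links.csv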

I've done all those except delicious.com, because I don't know how to confine my spidering to just that tag.

I wasn't suggesting you spider everything associated with that tag, just look through it for more blogs. I guess maybe that's too much work?

At this point, yeah. I now have 56k URLs in the queue, and at 20 seconds a URL, that works out to roughly 13 days of nonstop archiving. Pareto is the idea here: what are the main sites worth preserving?

Does anyone know if one could convince the Archive Team to archive them? Or does the Archive Team tend to consist of more difficult personalities?

BTW, technology lock-in aside, I highly recommend things like OfflinePages for iPhone/iPad, as they preserve the full look and feel of sites (very useful for LW, to see threaded comments). If there were similar solutions that were more open, I'd recommend them even more.

Sounds like ReadItLater. As far as preservation goes, does that do anything that 'wget --page-requisites' would not?

Similar to ReadItLater, but it has scraping capabilities (up to 3 levels, I think) and the result looks exactly like the page. I haven't used wget in a while; it might be the same as --page-requisites. From previous usage I remember wget-copied sites not looking quite right afterwards, but it might well have been my fault.
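
(For reference, the usual cure for wget copies not looking quite right is to pair '--page-requisites' with the link-rewriting flags; a sketch using standard GNU wget options:)

    # Save one page so it renders correctly offline: --page-requisites pulls
    # images/CSS/JS, --convert-links rewrites references to point at the
    # local copies, --adjust-extension appends .html where needed, and
    # --span-hosts permits requisites hosted on other domains (e.g. CDNs).
    wget --page-requisites --convert-links --adjust-extension --span-hosts \
         'http://example.com/page.html'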

u_ suggests yudkowsky.net, which my history says I haven't archived, so I'm adding it to the archive queue.

Be careful with yudkowsky.net - the last few times I visited, I was greeted by an error message from the DNS provider. I don't know if Eliezer has fixed that permanently or not.

Yudkowsky.net's up now; looking over the list of URLs output by the spider, it seemed to be accurate in general.