I saw this post and I was curious what was out there.
https://neuromatch.social/@jonny/113444325077647843
I'd like to put my lab servers to work archiving US federal data that's likely to get pulled - climate and biomed data seem most likely. The most obvious strategy to me seems like setting up mirror torrents on academictorrents. Anyone compiling a list of at-risk data yet?
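For the mirror-torrent idea in that post, I imagine the first step would look roughly like this; a minimal sketch assuming the torf Python library, with the dataset path and the academictorrents announce URL as placeholders to double-check against the tracker's own instructions:

```python
# Minimal sketch: build a .torrent for a local copy of a dataset so it can be
# seeded and listed on academictorrents. Assumes the third-party `torf` library
# (pip install torf); the dataset path and announce URL are placeholders.
import torf

DATASET_DIR = "/data/mirrors/noaa-gsod"                   # hypothetical local mirror
ANNOUNCE = "https://academictorrents.com/announce.php"    # verify the current announce URL

torrent = torf.Torrent(
    path=DATASET_DIR,
    trackers=[ANNOUNCE],
    comment="Mirror of a NOAA dataset",
)
torrent.generate()                  # hash all pieces (slow for large datasets)
torrent.write("noaa-gsod.torrent")  # upload this file to academictorrents
```

The .torrent then gets uploaded to academictorrents and the lab servers just keep seeding the data directory.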
One option that I've heard of in the past is ArchiveBox:
ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.
I am using ArchiveBox; it is pretty straightforward to self-host and use.
However, it is very difficult to archive most news sites with it, and many other sites as well. Cookie-consent and similar pop-ups will often render the archived page unusable, and frequently archiving won't work at all because some bot protection (Cloudflare etc.) kicks in when ArchiveBox tries to access a site.
If anyone else has more success using it, please let me know if I am doing something wrong…
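For context, here is roughly the flow I'm using, scripted from Python so I can feed it a URL list; only the archivebox init / archivebox add commands are ArchiveBox's own, the directory and URLs are placeholders:

```python
# Minimal sketch of the basic ArchiveBox flow, wrapped in Python for scripting.
# `archivebox init` and `archivebox add` are the standard ArchiveBox CLI commands;
# the data directory and URL list here are just placeholders.
import subprocess

DATA_DIR = "/srv/archivebox"                               # hypothetical collection directory
URLS = ["https://www.noaa.gov/", "https://www.nih.gov/"]   # example targets

subprocess.run(["archivebox", "init"], cwd=DATA_DIR, check=True)  # one-time setup
for url in URLS:
    # Each add fetches the page with several extractors (wget, screenshot, etc.);
    # this is the step where bot protection and cookie banners tend to break things.
    subprocess.run(["archivebox", "add", url], cwd=DATA_DIR, check=True)
```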
Monolith has the same problem here. I think the best fix might be some sort of browser-plugin-based solution where you could say "archive this" and have it push the result somewhere.
I wonder if I could combine a dumb plugin with Monolith to do that… A weekend project perhaps.
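Something like this could be the receiving end; a minimal sketch assuming monolith is installed and on PATH, with the port, output directory, and naming entirely made up. The plugin would just POST the current tab's URL to it:

```python
# Minimal sketch: a tiny local HTTP endpoint a browser plugin could POST a URL to,
# which then shells out to monolith to save the page as a single HTML file.
import subprocess
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

ARCHIVE_DIR = "/srv/page-archive"  # hypothetical output directory (must exist)

class ArchiveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        url = self.rfile.read(length).decode().strip()   # plugin sends the raw URL as the body
        outfile = f"{ARCHIVE_DIR}/{int(time.time())}.html"
        # monolith bundles the page and its assets into one HTML file (-o = output path)
        result = subprocess.run(["monolith", url, "-o", outfile])
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8765), ArchiveHandler).serve_forever()
```

One caveat: fetching the page server-side like this still runs into the same bot protection, so the smarter version of the plugin would probably push the rendered HTML out of the browser instead of just the URL.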
I don’t self-host it, I just use archive.org. That makes it available to others too.
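Saving can be scripted too; a minimal sketch hitting the public Wayback Machine "Save Page Now" endpoint (no API key needed, though archive.org rate-limits it, so pace bulk runs):

```python
# Minimal sketch: ask the Wayback Machine to snapshot a URL via the public
# "Save Page Now" endpoint. archive.org rate-limits this, so don't hammer it.
import urllib.request

def save_to_wayback(url: str) -> int:
    req = urllib.request.Request(
        "https://web.archive.org/save/" + url,
        headers={"User-Agent": "data-archiving-script"},  # arbitrary UA string
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status   # 200 means the snapshot request went through

if __name__ == "__main__":
    print(save_to_wayback("https://www.noaa.gov/"))
```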
Yes. This isn’t something you want your own machines to be doing if something else is already doing it.
I guess they back each other up. For example, archive.is is able to take archives from archive.org, but the saved page reflects the original URL and the original archiving time from the Wayback Machine (though it also notes the Wayback URL it pulled from and when that Wayback snapshot was taken).
NOAA is at risk, I think.
Flash drives and periodic transfers.
I use M-Discs for long-term archival.