ArchiveBot is an IRC bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, records it in a WARC, and then uploads that WARC to ArchiveTeam servers for eventual injection into the Internet Archive (or other archive sites).
To use ArchiveBot, drop by #archivebot on EFnet. You interact with ArchiveBot by issuing commands in the channel. Note that you will need channel operator (@) or voice (+) permissions in order to issue archiving jobs; if you don't have them, ask for assistance or leave a message describing the website you want to archive.
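As a concrete illustration, a session might look like the following. The command names and syntax shown here are assumptions based on typical usage and may have changed; check the bot's current command reference before relying on them.

```
<you> !archive https://example.com/            (recursively archive everything under this URL)
<you> !archiveonly https://example.com/page    (archive just this one page)
<you> !status                                  (ask the bot to report job status)
<you> !d <job-ident> 500 1500                  (adjust the delay range between requests)
<you> !ig <job-ident> ^https?://example\.com/calendar/   (add an ignore pattern to a running job)
```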
- The bot listens for commands and reports status back on the IRC channel. You can ask it to archive a website or webpage, check whether a URL has been saved, change the delay time between requests, or add ignore rules. The IRC interface is collaborative: anyone with permission can adjust the parameters of any job. Note that the bot isn't a chat bot, so it will ignore you if it doesn't understand a command.
- The dashboard displays the URLs being downloaded. Each URL line is categorized as a success, warning, or error, and is highlighted in yellow or red accordingly. The dashboard also provides RSS feeds.
- The backend contains the database of jobs and runs several maintenance tasks, such as trimming logs and posting tweets to Twitter. The backend is the centralized portion of ArchiveBot.
- The crawler spiders the website and downloads it into WARC files. The crawler is the distributed portion of ArchiveBot: volunteers run nodes connected to the backend, and the backend tells the nodes which jobs to run. Once a node has finished a job, it reports back to the backend and uploads the WARC files to the staging server. This process is handled by a supervisor script called a pipeline.
- The staging server is where all the WARC files are temporarily uploaded. Once the current batch has been approved, it is uploaded to the Internet Archive for consumption by the Wayback Machine.
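The claim/crawl/upload/report cycle that a pipeline node runs can be sketched roughly as below. This is a hypothetical illustration of the workflow described above, not ArchiveBot's actual code; all names (`claim_job`, `report_done`, the job format) are assumptions, and the crawl and upload steps are stubbed out.

```python
# Hypothetical sketch of one pipeline (supervisor) iteration: claim a job
# from the backend, crawl it into a WARC, upload to staging, report back.
from dataclasses import dataclass, field


@dataclass
class Backend:
    """Stand-in for the centralized backend's job queue."""
    queue: list = field(default_factory=list)
    finished: list = field(default_factory=list)

    def claim_job(self):
        return self.queue.pop(0) if self.queue else None

    def report_done(self, job_id):
        self.finished.append(job_id)


def crawl_to_warc(url):
    # A real node would run the actual crawler here; we just derive a
    # WARC filename from the URL to stand in for the download.
    return url.replace("https://", "").replace("/", "_") + ".warc.gz"


def pipeline_step(backend, staging):
    """One supervisor iteration; returns False when no jobs remain."""
    job = backend.claim_job()
    if job is None:
        return False
    warc = crawl_to_warc(job["url"])
    staging.append(warc)          # upload to the staging server
    backend.report_done(job["id"])
    return True


backend = Backend(queue=[{"id": 1, "url": "https://example.com"}])
staging = []
while pipeline_step(backend, staging):
    pass
print(staging)           # ['example.com.warc.gz']
print(backend.finished)  # [1]
```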
Volunteer a Node
Note: New nodes are not being accepted right now. (as of July 2016)
If you have a machine with
- lots of disk space (40 GB minimum / 200 GB recommended / 500 GB atypical)
- 512 MB RAM (2 GB recommended, 2 GB swap recommended)
- 10 Mbps upload/download speed (100 Mbps recommended)
- long-term availability (2 months minimum)
- unrestricted internet access (absolutely no firewall/proxies/censorship/ISP-injected ads/DNS redirection/free-cafe wifi)
then you may be able to volunteer a node. Be aware, however, that installing ArchiveBot can be difficult.
The setup used for testing works, so in principle it could be converted into an installer script; no such script exists yet.
- Everything is provided on a best-effort basis; nothing is guaranteed to work. (We're volunteers, not a support team.)
- We can decide to stop a job or ban a user if a job is deemed unnecessary. (We don't want to run up operators' bandwidth bills or waste Internet Archive donations on costs.)
- We're not Internet Archive. (We do what we want.)
- We're not the Wayback Machine. Specifically, we are not archive.org_bot. (We don't run crawlers on behalf of other crawlers.)
Occasionally, we have had to ban blocks of IP addresses from the channel. If you believe a ban does not apply to you but you cannot join #archivebot, please join the main #archiveteam channel instead.
If you are a website operator and you notice ArchiveBot misbehaving, please contact us on #archivebot or #archiveteam on EFnet (see top of page for links).
ArchiveBot fetches and parses robots.txt (please read the article) but does not obey any of its directives. It uses the file only to discover additional links, such as sitemaps.
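The behavior described above can be sketched as follows. This is an illustrative example, not ArchiveBot's actual implementation: it scans robots.txt, ignores Allow/Disallow directives entirely, and keeps only the `Sitemap:` links for further crawling.

```python
# Extract Sitemap URLs from a robots.txt body, ignoring all directives.
def sitemap_links(robots_txt: str) -> list:
    links = []
    for line in robots_txt.splitlines():
        # robots.txt lines are "key: value"; split on the first colon only,
        # so the "https://" in the URL value stays intact.
        key, _, value = line.partition(":")
        if key.strip().lower() == "sitemap":
            links.append(value.strip())
    return links


robots = """User-agent: *
Disallow: /private/
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/news-sitemap.xml
"""
print(sitemap_links(robots))
# ['https://example.com/sitemap.xml', 'https://example.com/news-sitemap.xml']
```

The `Disallow` line is read but has no effect on the result, mirroring how the bot treats directives.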
Also, please remember that we are not the Internet Archive.
- Formerly known as @ATArchiveBot