ArchiveBot

From Archiveteam
Revision as of 05:45, 8 September 2014 by Chfoo (talk | contribs) (→‎Details: update about new pipeline monitor page)
Imagine Motoko Kusanagi as an archivist.

ArchiveBot is an IRC bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, records it in a WARC, and then uploads that WARC to ArchiveTeam servers for eventual injection into the Internet Archive (or other archive sites).

Details

To use ArchiveBot, drop by #archivebot on EFNet. To interact with ArchiveBot, you issue commands by typing them into the channel. Note that you will need channel operator (@) or voice (+) permissions in order to issue archiving jobs; if you don't have them, please ask for assistance or leave a message describing the website you want to archive.

The dashboard shows the sites being downloaded currently. The pipeline monitor station shows the status of deployed instances of crawlers.

Follow @ArchiveBot on Twitter![1]

Components

IRC interface

The bot listens for commands and reports status back on the IRC channel. You can ask it to archive a website or webpage, check whether a URL has been saved, change the delay time between requests, or add ignore rules. The IRC interface is collaborative, meaning anyone with permission can adjust the parameters of a job. Note that the bot isn't a chat bot, so it will ignore you if it doesn't understand a command.
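A typical exchange in the channel looks something like the sketch below. The command names and the job identifier `abc123` are illustrative; check the bot's current documentation for the exact syntax before using it.

```
<user> !archive http://example.com/              start a recursive grab of everything under the URL
<user> !archiveonly http://example.com/page.html grab just that one page
<user> !ig abc123 forums/.*                      add an ignore pattern to job abc123
<user> !abort abc123                             stop job abc123
```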

Dashboard

The dashboard displays the URLs being downloaded. Each URL line on the dashboard is categorized as a success, warning, or error; lines with warnings or errors are highlighted in yellow or red. The dashboard also provides RSS feeds.

Backend

The backend contains the database of jobs and several maintenance tasks such as trimming logs and posting Tweets on Twitter. The backend is the centralized portion of ArchiveBot.

Crawler

The crawler spiders the website and downloads it into WARC files. The crawler is the distributed portion of ArchiveBot: volunteers run nodes connected to the backend, and the backend tells the nodes what jobs to run. Once a node has finished a job, it reports back to the backend and uploads the WARC files to the staging server. This process is handled by a supervisor script called a pipeline.
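The hand-off described above can be sketched as a simple supervisor loop. This is an illustrative model only, not ArchiveBot's actual pipeline code; all function names and the job fields are hypothetical stand-ins for the real backend API.

```python
# Hypothetical sketch of one pipeline cycle: fetch a job from the
# backend, crawl it into WARCs, upload them, and report completion.
# None of these names come from the real ArchiveBot codebase.

def run_pipeline_cycle(fetch_job, crawl, upload, report):
    """Run one job through the pipeline; return the job, or None if idle."""
    job = fetch_job()           # ask the backend for the next job
    if job is None:
        return None             # nothing queued; the real pipeline would poll again
    warcs = crawl(job)          # spider the site, writing WARC files to disk
    upload(warcs)               # push finished WARCs to the staging server
    report(job, len(warcs))     # tell the backend the job is done
    return job

# Stub callables stand in for the network calls:
done = run_pipeline_cycle(
    fetch_job=lambda: {"ident": "abc123", "url": "http://example.com/"},
    crawl=lambda job: [job["ident"] + "-00000.warc.gz"],
    upload=lambda warcs: None,
    report=lambda job, n: None,
)
```

The real pipeline also handles aborts, ignore-rule updates, and retries, which this sketch omits.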

Staging server

The staging server is the place where all the WARC files are temporarily uploaded. Once the current batch has been approved, it is uploaded to the Internet Archive for consumption by the Wayback Machine.

ArchiveBot's source code can be found at https://github.com/ArchiveTeam/ArchiveBot. Contributions are welcome! Any issues or feature requests may be filed at the issue tracker.

People

The IRC bot, backend, and dashboard are operated by yipdw. The staging server is operated by SketchCow. The crawlers are operated by various people.

Volunteer a Node

If you have a machine with

  • lots of disk space (40 GB minimum / 200 GB recommended / 500 GB atypical)
  • 512 MB RAM (2 GB recommended, 2 GB swap recommended)
  • 10 Mbps upload/download speeds (100 Mbps recommended)
  • long-term availability (2 months minimum)
  • unrestricted internet access (no firewall/proxies/censorship)
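As a rough pre-flight check, you can verify the disk-space requirement from the list above before volunteering. Only the 40 GB figure comes from this page; the function name and everything else in this sketch are illustrative.

```python
# Quick check that a prospective node meets the 40 GB free-disk minimum.
import shutil

GB = 1024 ** 3

def enough_disk(path="/", minimum=40 * GB):
    """Return True if the filesystem at `path` has at least `minimum` bytes free."""
    return shutil.disk_usage(path).free >= minimum

if enough_disk():
    print("disk: OK for the 40 GB minimum")
else:
    print("disk: below the 40 GB minimum")
```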

and would like to volunteer, please review the Pipeline Install instructions and contact yipdw.

More

Like ArchiveBot? Check out our homepage and other projects!

Notes

  1. Formerly known as @ATArchiveBot