[Screenshot of the GitHub home page, taken 2015-11-08]
Archiving status: Not saved yet
- See also GitHub Downloads
GitHub is a software repository hosting service powered by Git. The site does not seem to have any notable reliability issues and its uptime is consistently good (see site status). Things look pretty sunny at the moment, but if disaster strikes, the private repositories would be a problem to archive.
As of 12 August 2012, 1,963,652 people were hosting over 3,460,582 repositories, and 1,117,147 of the public repositories were forks, which greatly reduces the amount of data required to archive the site. As of 22 November 2015, there were 32,000,000 repositories, with a similar fork ratio. Back-of-the-envelope calculations suggest about 120 TB of data in git repositories.
Acquisition by Microsoft
A discussion of the feasibility of archiving GitHub has commenced.
- Recalling Microsoft's "embrace, extend, extinguish" schemes of the 1990s and 2000s, many users in the FOSS community called for a move to rival GitLab in the wake of the news.
- LinkedIn shows how user content can be gradually taken away (by means of paywalls and login walls).
Archiving Methods
git clone is the simplest method (and it also works outside of GitHub, obviously). However, it does not fetch project data that is not stored in git, including issue reports, comments, and pull requests.
When cloning a repository for archival, it is best to use the --mirror option. The mirror will include all branches and even the code associated with pull requests. (Note, however, that the PR code will eventually be purged by Git's garbage collection when you create a clone from this mirror, as the PR commits aren't referenced by any branches; this can be solved by adding a line like fetch = +refs/pull/*/head:refs/remotes/origin/pr/* to the repository config file.)
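A minimal sketch of the mirror-and-refspec approach described above; owner/repository is a placeholder for the repository being archived:

```
# Mirror-clone: fetches all refs, including GitHub's refs/pull/*.
git clone --mirror https://github.com/owner/repository.git repository.git

# In a working clone made from the mirror, keep the PR heads
# referenced so git gc cannot prune the associated commits:
git clone repository.git repository-workdir
cd repository-workdir
git config --add remote.origin.fetch '+refs/pull/*/head:refs/remotes/origin/pr/*'
git fetch origin
```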
To pack a clone/mirror into a single, easily handleable file, use git bundle create FILE --all inside the clone/mirror.
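For example, continuing from the mirror in the sketch above:

```
# Pack every ref into a single bundle file.
cd repository.git
git bundle create ../repository.bundle --all

# Bundles can be verified and cloned from like any other remote.
git bundle verify ../repository.bundle
git clone ../repository.bundle restored-repository
```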
github-backup runs in a git repository and chases down that information, committing it to a "github" branch. It also chases down the forks and efficiently downloads them as well.
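Typical github-backup usage, assuming the tool is installed (it is packaged in several distributions and on Hackage); exact behavior may vary by version:

```
# Run inside a clone of a GitHub repository: downloads forks,
# issues, comments etc. and commits them to a "github" branch.
cd repository
github-backup

# Alternatively, back up all repositories belonging to a user:
github-backup username
```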
See also Software Heritage.
GitHub Replacement Engines
If we ever have to archive the data out of GitHub, the data will need to be exportable to a GitHub-style engine.
Currently[when?], the best GitHub-style engine that offers a wiki, issues, and Git repository hosting, and is free and open source, is GitLab. GitLab is used and paid for by many major organizations, so it is likely to live on in a stable way. Other popular FOSS alternatives to GitHub include Gitea and Gogs.
We will need a complete migration system to move a git repository and all of its related GitHub service information to GitLab.
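GitLab already ships a GitHub importer that covers part of this. A hedged sketch of driving it over the GitLab v4 API (the hostname, namespace, tokens, and repository ID are placeholders; check the GitLab API documentation for the current parameter set):

```
# Ask a GitLab instance to import one GitHub repository.
# PRIVATE-TOKEN authenticates against GitLab; the GitHub
# personal_access_token lets GitLab read the source repository.
curl --request POST "https://gitlab.example.com/api/v4/import/github" \
  --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "personal_access_token": "GITHUB_TOKEN",
    "repo_id": 12345,
    "target_namespace": "archive"
  }'
```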
Things to Scrape
In case of emergency, these are the items we need to grab.
- Git Repository - Accomplished by github-backup
- Forked Repositories - Accomplished by github-backup
- Notes on Commits/Lines of Code - Not supported by github-backup yet. GitHub API support exists since ca. 2011.
- GitHub Gollum Wiki - No tool yet, but just clone the whole thing, and then push it to GitLab.
- The wiki is a full-blown git repository, though only a few of its features are exposed in the user interface (e.g. no branches). The clone URL is shown on wiki pages and follows the pattern https://github.com/owner/repository.wiki.git (see the sketch after this list).
- Releases - Tags on GitHub can have binaries attached. These are of high priority to archive (see the sketch after this list).
- Issues + Comments - Accomplished by github-backup
- Milestones - Not archived by github-backup yet.
- Labels - Not archived by github-backup yet.
- Hooks - Needs some kind of tool to archive GitHub Hooks
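Hedged sketches for two of the items above: the wiki is just another git repository, and release binaries can be enumerated through the public API (owner/repository is a placeholder; jq is used for JSON parsing):

```
# Clone the Gollum wiki; the URL is the repository URL plus ".wiki".
git clone https://github.com/owner/repository.wiki.git

# Enumerate releases and download every attached binary.
curl -s "https://api.github.com/repos/owner/repository/releases" \
  | jq -r '.[].assets[].browser_download_url' \
  | wget --input-file=- --directory-prefix=release-assets/
```

Note that the API paginates and rate-limits unauthenticated requests, so a real scraper would authenticate and follow the Link headers.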
List of Repositories
A list of repositories built from GitHub API data is maintained by an Archive Team member at za3k.com. The scraper runs continuously, and public downloads are updated once a day. This list does not include gists.
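For reference, such a list can be built from the public /repositories endpoint, which pages through all public repositories by ID. A minimal sketch (unauthenticated requests are limited to 60 per hour, so a real scraper would authenticate):

```
# Walk the global public repository list one page at a time.
since=0
while true; do
  page=$(curl -s "https://api.github.com/repositories?since=$since")
  echo "$page" >> repositories.json
  # The next page starts after the highest repository ID seen so far.
  since=$(echo "$page" | jq -r '.[-1].id // empty')
  [ -z "$since" ] && break
done
```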
The public event metadata generated by the GitHub API is archived to Google BigQuery every hour by GithubArchive.
It obviously doesn't include events from before 2011, so a targeted repository scrape may still be ideal.
At the very least, it should be possible to grab all the event info about a single repository using Google BigQuery's free tier, since such a query processes relatively little data. However, an export script for this would still need to be written when the time comes.
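A hedged sketch of such a query using the bq command-line tool, assuming the public githubarchive dataset layout documented by GithubArchive (the repository name is a placeholder):

```
# Pull all 2015 events for one repository from the public
# githubarchive dataset (standard SQL).
bq query --use_legacy_sql=false '
  SELECT type, created_at, actor.login, payload
  FROM `githubarchive.year.2015`
  WHERE repo.name = "owner/repository"
  ORDER BY created_at'
```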