Reddit home page as seen on December 14, 2019

Archiving status: Partially saved
IRC channel: (on hackint)
Reddit is a content aggregator and social bookmarking service similar to Digg. Users can submit links, text posts, images, and videos, and can vote and comment on submissions in communities called "subreddits". It received considerable attention for its twelve-hour SOPA blackout in early January 2012.
Reddit "quarantines" some controversial subreddits. Many quarantined subreddits have since been deleted, and to date no quarantined subreddit has emerged unscathed, so it is important to make backups of them. Here is a list of quarantined subreddits.
It contains some subreddits devoted to goals similar to ArchiveTeam's, including /r/AbandonedWebsites, /r/ForgottenWebsites, and /r/DataHoarder, which are worth checking for material to be added to ArchiveBot or to otherwise benefit from the team's attention.
Appears stable, though the site's small-to-medium-sized team is a concern.
- 2015-10-06: The admins banned several subreddits, claiming they were harassing people; the most notable was /r/fatpeoplehate. This instilled fear, uncertainty, and doubt in part of the userbase, with a few users claiming that Reddit will soon become what Digg is now: nearly dead.
Extremely endangered: many subreddits picketed the firing of a Reddit employee named Victoria by turning themselves private or restricting submissions.
- 'Caution' - Reddit seems to have calmed down and returned to normal functionality after Ellen Pao's firing, and the Reddit team is making serious reforms (reducing shadowbanning, more mod tools). However, the revolt left unresolved issues and sour grapes within the community, and it seems Reddit was only saved by the lack of a practical alternative (Voat.co was crushed and went offline due to floods of refugees). It would be wise to preemptively archive the site before another crisis occurs.
- On July 3rd, 2015, Jason Baumgartner completed his 14-month effort to archive Reddit's entire publicly available textual content, just in time before the onset of the Reddit revolt. The archive is still updated monthly. The files are available here. However, images and videos hosted by Reddit are not archived.
- In 2017-2018, Reddit banned several subreddits, including r/incels and r/maleforeveralone, which had tens of thousands of subscribers each. Other subreddits, including r/Braincels, r/foreveralone, r/TheRedPill, and r/MGTOW, are endangered. Discussions and petitions about banning those subreddits are currently taking place.
- In 2018, a new, redesigned website became the default version of Reddit. This redesigned version has numerous usability issues. It heavily relies on JS and is essentially uncrawlable without dedicated code. The pre-redesign version of Reddit continues to be available at old.reddit.com.
- In March 2019, /r/watchpeopledie, /r/Gore, and some other subs were banned after the Christchurch shooting – this was clearly not due to the video recording of that shooting getting shared (that was forbidden on WPD at least) but due to the negative press coverage, just like for previous bans.
- Also in March 2019, /r/Piracy got threatened by Reddit's legal team with a ban due to the mods allegedly doing too little against copyright infringement.
- Reddit has quarantined manosphere subreddits including /r/Braincels and /r/TheRedPill. /r/Braincels was banned on October 30, 2019.
- Users began to spot in December 2019 that comment threads, at least on the "new" version of the site, were being locked behind a registration wall in an apparent A/B test.
Textual Archive (Without Images or Videos)
On July 3rd, 2015, Jason Baumgartner completed his 14-month effort to archive Reddit's entire publicly available textual content, just in time before the onset of the Reddit revolt. The archive is still being updated monthly. The files are available here.
- Does not include images and videos hosted by Reddit
- Reddit JSON API output. Posts are archived incrementally in real-time.
- Some comments not accessible due to private subreddits or comment deletion or other API issues
- Reddit /r/datasets - I have every publicly available Reddit comment for research. ~ 1.7 billion comments @ 250 GB compressed. Any interest in this?
- Google BigQuery Analysis of Reddit
The scripts used to generate this API dump were not made public, but they likely used PRAW, and it would probably be better to rewrite them from scratch.
Also, this preserves only textual submissions and comments. Images and videos hosted on Reddit are not archived, and sidebar, wiki, and live thread data are not retrieved, so these should be scraped in an expansion pack.
Jason Baumgartner also provides an API for accessing Reddit's textual archive available here. The archive is updated in real-time. This API does not have the limitations of Reddit's API. For example, it does not impose limits on the number of submissions or comments that are retrieved.
To search for submissions of a subreddit (500 limit):
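The original example was not preserved on this page; the following is a minimal sketch, assuming the Pushshift endpoint and parameter names (`subreddit`, `size`, `before`) from Pushshift's public documentation, with 500 as the per-request cap:

```python
# Sketch: fetch up to 500 submissions from a subreddit via the Pushshift API.
# Endpoint and parameter names are assumptions based on Pushshift's docs;
# verify against the current API before relying on them.
import json
import urllib.parse
import urllib.request

PUSHSHIFT_SUBMISSION_SEARCH = "https://api.pushshift.io/reddit/search/submission/"

def submission_search_url(subreddit, size=500, before=None):
    """Build a Pushshift submission-search URL (500 is the per-request cap)."""
    params = {"subreddit": subreddit, "size": size,
              "sort": "desc", "sort_type": "created_utc"}
    if before is not None:
        params["before"] = before  # UNIX timestamp; page backwards through history
    return PUSHSHIFT_SUBMISSION_SEARCH + "?" + urllib.parse.urlencode(params)

def fetch_submissions(subreddit, before=None):
    """Fetch one page of submissions (newest first)."""
    with urllib.request.urlopen(submission_search_url(subreddit, before=before)) as resp:
        return json.load(resp)["data"]
```

To walk further back than 500 posts, take the smallest `created_utc` from each page and pass it as `before` on the next request.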
To retrieve all comments for a submission (with tens of thousands of comments):
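Again the original example is missing here; as a sketch, assuming Pushshift's documented two-step pattern for very large threads (fetch all comment IDs for a submission, then fetch the comment bodies in batches):

```python
# Sketch: retrieve every comment for one submission via the Pushshift API.
# The two endpoints below are assumptions based on Pushshift's documentation.
import json
import urllib.request

def comment_ids_url(submission_id):
    # submission_id is the base-36 ID from the submission URL, e.g. "9xf2rq"
    return f"https://api.pushshift.io/reddit/submission/comment_ids/{submission_id}"

def comment_batch_url(ids):
    return "https://api.pushshift.io/reddit/search/comment/?ids=" + ",".join(ids)

def fetch_all_comments(submission_id, batch_size=500):
    """Fetch all comment IDs, then the comments themselves in batches."""
    with urllib.request.urlopen(comment_ids_url(submission_id)) as resp:
        ids = json.load(resp)["data"]
    comments = []
    for i in range(0, len(ids), batch_size):
        with urllib.request.urlopen(comment_batch_url(ids[i:i + batch_size])) as resp:
            comments.extend(json.load(resp)["data"])
    return comments
```

Unlike Reddit's own API, this returns every comment in the thread, no matter how deeply nested or how large the thread is.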
Note that posts are archived in real-time after they are created. Newer versions of edited posts are not archived. One may have to re-fetch the content on Reddit's site to get the latest revision of an edited post.
Also, one may have to fetch images and videos separately, as they are not archived by the API.
As of March 26, 2013, users can only see up to 1,000 posts and comments on a profile page. However, admin "spladug" stated that older comments and posts are still in the database. spladug also stated that the team is in favor of providing dumps of a user's data, but that the task would be taxing on the servers.
Since that comment was posted, there appears to have been no progress on a dump system. If things do wind up FUBAR in the future, archiving full user histories the old-fashioned way (plain HTML crawling, e.g. with wget) would be nearly impossible because of this limitation.
Instead, any archival methods should scrape from the Reddit API (which would have to run over several months). The API returns all nested comments, including ones not visible in the HTML. In addition, it significantly reduces server load.
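As a minimal sketch of such scraping, assuming Reddit's public JSON listings (append `.json` to most old.reddit.com URLs) and its documented `after` pagination cursor; the User-Agent string below is a placeholder:

```python
# Sketch: page through a subreddit listing via Reddit's public JSON API.
# Reddit expects a descriptive User-Agent and rate-limits unauthenticated
# clients, so a real archiver must identify and throttle itself.
import json
import time
import urllib.request

USER_AGENT = "archiveteam-reddit-sketch/0.1"  # placeholder; use your own contact info

def listing_url(subreddit, after=None, limit=100):
    url = f"https://old.reddit.com/r/{subreddit}/new.json?limit={limit}"
    if after:
        url += f"&after={after}"  # fullname of the last item, e.g. "t3_abc123"
    return url

def fetch_listing_page(subreddit, after=None):
    """Return one page of posts plus the cursor for the next page."""
    req = urllib.request.Request(listing_url(subreddit, after),
                                 headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)["data"]
    time.sleep(1)  # stay well under the unauthenticated rate limit
    return data["children"], data["after"]
```

Note that these listings are still subject to the roughly 1,000-item cap described above, which is why the full-history archives rely on incremental real-time collection instead.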
Because of EU GDPR, progress was forcibly made to be compliant and the site now has a request form. Users can specify that they want a copy of all of their data, or data from specific date ranges. The site says requests may take up to 30 days to be processed.