Digital and technological autonomy

Code and ideas for a distributed internet

Linkoteca. Digital archive


On Monday, the blogging platform Tumblr announced it would be removing all adult content after child pornography was discovered on some blogs hosted on the site. Given that an estimated one-quarter of blogs on the platform hosted at least some not safe for work (NSFW) content, this is a major content purge. Although there are ways to export NSFW content from a Tumblr page, Tumblr’s purge will inevitably result in the loss of a lot of adult content.

Unless, of course, Reddit’s data hoarding community has anything to say about it.

On Wednesday afternoon, the redditor u/itdnhr posted a list of 67,000 NSFW Tumblrs to the r/Datasets subreddit. Shortly thereafter, they posted an updated list of 43,000 NSFW Tumblrs (excluding those that were no longer working) to the r/Datahoarders subreddit, a group of self-described digital librarians dedicated to preserving data of all types.

The Tumblr preservation effort, however, poses some unique challenges. The biggest concern, based on the conversations occurring on the subreddit, is that a mass download of these Tumblrs is liable to also contain some child porn. This would put whoever stores these Tumblrs at serious legal risk.

Still, some data hoarders are congregating on Internet Relay Chat (IRC) channels to strategize about how to pull and store the content on these Tumblrs. At this point, it’s unclear how much data that would represent, but one data hoarder estimated it to be as much as 600 terabytes.
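
As a minimal sketch of what such a pull might look like (the hoarders’ actual tooling isn’t documented here, and the file name tumblrs.txt is hypothetical), a shell loop around wget could mirror each blog in turn:

# Minimal sketch, not the hoarders’ actual tooling.
# Assumes a hypothetical file tumblrs.txt with one blog URL per line.
while read -r blog; do
    wget --recursive --page-requisites --convert-links --no-parent --wait=1 "$blog"
done < tumblrs.txt

wget drops each site into a directory named after its host, so the blogs stay separated on disk; at the hundreds-of-terabytes scale estimated above, the harder problem is sharding that output across many machines.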

Trying to preserve the blogosphere’s favorite nude repository is a noble effort, but it doesn’t change the fact that Tumblr’s move to ban adult content will deal a serious blow to sex workers around the world. Indeed, the entire debacle is just another example of how giant tech companies like Apple continue to homogenize the internet and act as the ultimate arbiters of what can and cannot be posted online.

Hello and welcome! We (Matt Stempeck, Micah Sifry of Civic Hall, and Erin Simpson, previously of Civic Hall Labs and now at the Oxford Internet Institute) put this sheet together to try to organize the civic tech field by compiling hundreds of civic technologies and grouping them to see what patterns emerge. We started doing this because we think that a widely used definition and field guide would help us share knowledge with one another, attract more participation in the field, study and measure impact, and move resources in productive directions. Many of these tools and social processes overlap: our categories are neither mutually exclusive nor collectively exhaustive.

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --domains website.org --no-parent www.website.org/tutorials/html/

This command downloads the Web site www.website.org/tutorials/html/.

The options are:

--recursive: download the entire Web site.
--domains website.org: don’t follow links outside website.org.
--no-parent: don’t follow links outside the directory tutorials/html/.
--page-requisites: get all the elements that compose the page (images, CSS and so on).
--html-extension: save files with the .html extension.
--convert-links: convert links so that they work locally, off-line.
--no-clobber: don’t overwrite any existing files (used in case the download is interrupted and resumed).
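
For a big mirror job like the ones discussed above, a gentler variant (my assumption, not part of the original recipe) adds throttling so the target server isn’t hammered:

# Same mirror, throttled: pause a randomized 0.5 to 1.5 seconds
# between requests and cap bandwidth at 200 KB/s.
wget --recursive --no-clobber --page-requisites --html-extension \
     --convert-links --domains website.org --no-parent \
     --wait=1 --random-wait --limit-rate=200k \
     www.website.org/tutorials/html/

Because of --no-clobber, re-running the same command after an interruption simply skips the files that are already on disk.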