Automated Discovery of Internet Censorship by Web Crawling
Censorship of the Internet is widespread around the world. As access to the web becomes increasingly ubiquitous, filtering of this resource becomes more pervasive. Transparency about the specific content that citizens are denied access to is atypical. To counter this, numerous techniques for maintaining URL filter lists have been proposed by individuals and organisations that aim to provide empirical data on censorship for the benefit of the public and the wider censorship research community. We present a new approach for discovering filtered domains in different countries. This method is fully automated and requires no human interaction. The system uses web crawling techniques to traverse between filtered sites and implements a robust method for determining whether a domain is filtered. We demonstrate the effectiveness of the approach by running experiments to search for filtered content in four different censorship regimes. Our results show that we perform better than the current state of the art and have built domain filter lists an order of magnitude larger than the most widely available public lists as of Jan 2018. Further, we build a dataset mapping the interlinking nature of blocked content between domains and exhibit the tightly networked nature of censored web resources.
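The crawl described in the abstract can be pictured as a breadth-first traversal that expands the frontier only from domains that test as filtered. The sketch below is a minimal illustration, not the paper's implementation: `LINK_GRAPH`, `BLOCKED`, and `is_filtered` are hypothetical stand-ins (the real system fetches live pages and applies a robust filter test, e.g. comparing responses observed inside and outside the censored network).

```python
from collections import deque

# Hypothetical link graph standing in for real crawled outlinks
# (the actual system fetches live pages; this stub keeps the sketch runnable).
LINK_GRAPH = {
    "seed.example": ["a.example", "b.example"],
    "a.example":    ["c.example"],
    "b.example":    ["a.example", "d.example"],
    "c.example":    [],
    "d.example":    ["e.example"],
    "e.example":    [],
}

# Hypothetical ground truth used by the placeholder filter test below.
BLOCKED = {"seed.example", "a.example", "c.example", "d.example"}

def is_filtered(domain):
    # Placeholder for the paper's robust filter-detection step.
    return domain in BLOCKED

def discover_filtered(seeds, max_domains=1000):
    """BFS over outlinks, expanding the frontier only from filtered domains."""
    seen = set(seeds)
    found = set()
    frontier = deque(seeds)
    while frontier and len(seen) < max_domains:
        domain = frontier.popleft()
        if not is_filtered(domain):
            continue  # unfiltered pages do not contribute new candidates
        found.add(domain)
        for link in LINK_GRAPH.get(domain, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return found

print(sorted(discover_filtered(["seed.example"])))
# → ['a.example', 'c.example', 'seed.example']
```

Note that `d.example` is never discovered here: its only inlink comes from an unfiltered domain, which the crawl does not expand. This mirrors why seed choice matters when traversing the tightly networked graph of censored resources.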
Darer, A., Farnan, O., & Wright, J. (2018, May). Automated Discovery of Internet Censorship by Web Crawling. In Proceedings of the 10th ACM Conference on Web Science (pp. 195-204). ACM. DOI: 10.1145/3201064.3201091
Published: Apr 2018 | Categories: Research Articles