
It will be a long article, so I added a table of contents 👇 Fancy, right?

This tutorial relies on a package called Rcrawler, by Salim Khalil. It’s a very handy crawler with some nice native functionalities.

Once R is installed and RStudio launched, we’ll install and load our package, same as always:
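In the console, that looks like this:

```r
# Install once, then load the package at the start of every session
install.packages("Rcrawler")
library(Rcrawler)
```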

Crawl an entire website with Rcrawler

To launch a simple website analysis, you only need this line of code:
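Something like the line below, using a placeholder domain:

```r
# Crawl a whole site (replace the example domain with the one you want to analyse)
Rcrawler(Website = "https://www.example.com/")
```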

It will crawl the entire website and provide you with the data

Less than 30s to crawl a small website

Once the crawl is done, you’ll have access to:

The INDEX variable

It’s a data frame. If you don’t know what a data frame is, think of it as an Excel file. Please note that it will be overwritten on every crawl, so export it if you want to keep it!

To take a look at it, just run
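Straight from the console:

```r
View(INDEX)
```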

INDEX data frame

Most of the columns are self-explanatory. Usually, the most interesting ones are ‘Http Resp’ and ‘Level’.

The Level is what SEOs call “crawl depth” or “page depth”. With it, you can easily check how far from the homepage some webpages are.

Quick example with the BrightonSEO website: let’s do a quick ggplot and we’ll be able to see the page distribution by level.
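A minimal sketch with ggplot2, using the Level column shown in INDEX (it may come back as character, hence the coercion):

```r
library(ggplot2)

# Bar chart of pages per crawl depth
ggplot(INDEX, aes(x = as.factor(Level))) +
  geom_bar() +
  labs(x = "Level (crawl depth)", y = "Number of pages")
```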


HTML Files

By default, the Rcrawler function also stores HTML files in your ‘working directory’. You can update the location by running the setwd() function.
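For example (the folder path is just an illustration):

```r
getwd()                    # check where the HTML files are currently stored
setwd("~/crawl-projects")  # hypothetical folder: point it wherever you like
```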

Each file is named after its crawl order, so the homepage should be 1.html.

Let’s go deeper into the options by answering the most common questions:

So, how do you extract metadata while crawling?

It’s possible to extract any element from webpages using a CSS or XPath selector. We’ll have to use two new parameters:

  • PatternsNames to name the extracted elements
  • ExtractXpathPat or ExtractCSSPat to define where to grab them in the web page

Let’s take an example:
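Here is a sketch that grabs the title and the h1 of every page (the XPath selectors are generic examples):

```r
Rcrawler(Website = "https://www.example.com/",
         PatternsNames = c("title", "h1"),
         ExtractXpathPat = c("//title", "//h1"))
```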

You can access the scraped data in two ways:

  • option 1 = DATA – it’s an environment variable that you can access directly from the console. A small warning: it’s a ‘list’, which is a little less easy to read.
View(DATA) will display something like this:

If you want to convert it to a data frame, which is easier to deal with, the code is shown right after this list:

  • option 2 = extracted_data.csv

    It’s a CSV file that has been saved inside your working directory along with the HTML files.
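A minimal way to flatten the DATA list into a data frame:

```r
# DATA is a list of lists; stack it into a data frame
NEWDATA <- data.frame(do.call("rbind", DATA))
```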

It might be useful to merge the INDEX and NEWDATA data frames; here’s the code:
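A simple approach, assuming INDEX and NEWDATA have kept the same crawl order and row count:

```r
# Bind the scraped columns onto the crawl index, row by row
FULLDATA <- cbind(INDEX, NEWDATA)
```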

As an example, let’s try to collect the webpage type using the scraped body class.
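A sketch of that crawl, scraping the class attribute of the body tag into a field I’ve called bodyclass:

```r
Rcrawler(Website = "https://www.example.com/",
         PatternsNames = c("bodyclass"),
         ExtractXpathPat = c("//body/@class"))
```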

It seems that the first word is the page type.

Let’s extract the first word and feed it into a new column.

A little bit of cleaning to make the labels easier to read:
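Roughly like this; the cleaning rule is only an example, so adapt it to whatever your theme puts in the body class:

```r
# Rebuild the scraped data frame so this snippet stands on its own
NEWDATA <- data.frame(do.call("rbind", DATA))

# Step 1: keep only the first word of the body class as the page type
NEWDATA$Pagetype <- sub("\\s.*$", "", as.character(NEWDATA$bodyclass))

# Step 2: a bit of cleaning so the labels read nicely (hypothetical rule)
NEWDATA$Pagetype <- gsub("-template-default$", "", NEWDATA$Pagetype)
```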

The 3 steps displayed

And then a quick ggplot
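Something along these lines, merging the crawl index with the cleaned page types first:

```r
# Merge the crawl index with the scraped data (same crawl order assumed)
FULLDATA <- cbind(INDEX, NEWDATA)

ggplot(FULLDATA, aes(x = as.factor(Level), fill = Pagetype)) +
  geom_bar() +
  labs(x = "Level (crawl depth)", y = "Count of pages", fill = "Page type")
```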

Count of Pagetype per level

Want to see something even cooler?

An interactive graph

This is a static HTML file that can be stored anywhere, even on my shared hosting.
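I can’t say which library the original chart used, but plotly plus htmlwidgets gives the same kind of self-contained interactive HTML file:

```r
library(plotly)       # also attaches ggplot2
library(htmlwidgets)

p <- ggplot(FULLDATA, aes(x = as.factor(Level), fill = Pagetype)) +
  geom_bar()

# ggplotly() makes the chart interactive; saveWidget() writes a standalone HTML file
saveWidget(ggplotly(p), "pagetype-by-level.html", selfcontained = TRUE)
```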

Explore Crawled Data with rpivottable

This creates a drag & drop pivot explorer:
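A sketch:

```r
library(rpivotTable)

# Drag & drop pivot explorer on the crawl index
rpivotTable(INDEX)
```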


It’s also possible to make some quick data viz:
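For example, a pre-configured bar chart (column names assumed from the INDEX screenshot earlier):

```r
rpivotTable(INDEX,
            rows = "Level",
            cols = "Http Resp",
            aggregatorName = "Count",
            rendererName = "Bar Chart")
```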


Full DEMO – see for yourself

Extract more data without having to recrawl

All the HTML files are on your hard drive, so if you need more data extracted, it’s entirely possible.

You can list your recent crawls by using the ListProjects() function:
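Like this:

```r
ListProjects()
```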

it displays 2 recent crawling projects

First, we’re going to load the crawling project HTML files:
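Something like this; the project name is an example of what ListProjects() returns:

```r
# Load the stored HTML pages of a previous crawl back into memory
HTMLlist <- LoadHTMLFiles("example.com-261020094353", type = "vector")
```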

Let’s say you forgot to grab h2s and h3s; you can extract them using the ContentScraper() function, also included in the Rcrawler package.
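A sketch with ContentScraper(); depending on your package version you may prefer to loop over the files rather than pass the whole vector:

```r
DATA2 <- ContentScraper(HTmlText = HTMLlist,
                        XpathPatterns = c("//h2", "//h3"),
                        PatternsName = c("h2", "h3"),
                        ManyPerPattern = TRUE)
```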

et voilaaa

Categorize URLs using Regex

For those not afraid of regex, here is a complementary script to categorize URLs. Be careful: the regex order is important, as some values can overwrite others. Usually, it’s a good idea to place the homepage last.
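A sketch with base R and grepl(); the patterns are hypothetical, so adapt them to your own URL structure. Later rules overwrite earlier ones, which is why the homepage comes last:

```r
INDEX$Category <- "other"
INDEX$Category[grepl("/category/", INDEX$Url)] <- "category"
INDEX$Category[grepl("/product/",  INDEX$Url)] <- "product"
INDEX$Category[grepl("/blog/",     INDEX$Url)] <- "blog"
INDEX$Category[grepl("^https?://[^/]+/?$", INDEX$Url)] <- "homepage"

table(INDEX$Category)
```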

What if I want to follow robots.txt rules?

Just add the Obeyrobots parameter:
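Like this:

```r
Rcrawler(Website = "https://www.example.com/", Obeyrobots = TRUE)
```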

What if I want to limit crawling speed?

By default, this crawler is rather quick and can grab a lot of webpages in no time. Every advantage has its inconvenience: it’s fairly easy to get wrongly detected as a DoS attack. To limit the risk, I suggest you use the RequestsDelay parameter. It’s the time interval between each round of parallel HTTP requests, in seconds. Example:
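Waiting two seconds between rounds, on a placeholder domain:

```r
Rcrawler(Website = "https://www.example.com/", RequestsDelay = 2)
```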

Other interesting limitation options:

no_cores: specifies the number of clusters (logical CPUs) used for parallel crawling; by default, it’s the number of available cores.

no_conn: the number of simultaneous connections per core; by default, it takes the same value as no_cores.
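For instance, to crawl as gently as possible:

```r
# One core, one connection: slow but considerate
Rcrawler(Website = "https://www.example.com/", no_cores = 1, no_conn = 1)
```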

What if I want to crawl only a subfolder?

Two parameters help you do that: crawlUrlfilter will limit the crawl, while dataUrlfilter tells Rcrawler from which URLs data should be extracted.
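A sketch, assuming the subfolder is /blog/:

```r
Rcrawler(Website = "https://www.example.com/",
         crawlUrlfilter = "/blog/",
         dataUrlfilter  = "/blog/")
```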

How to change user-agent?
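Pass whatever string you like to the Useragent parameter, for example:

```r
Rcrawler(Website = "https://www.example.com/",
         Useragent = "Mozilla/5.0 (compatible; MyRBot/0.1)")
```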

What if my IP is banned?

Option 1: use a VPN on your computer

Option 2: use a proxy

Use the httr package to set up a proxy and use it
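A sketch with a made-up IP and port:

```r
library(httr)

# Hypothetical proxy address and port
proxy <- use_proxy("190.90.100.205", 41000)

Rcrawler(Website = "https://www.example.com/", use_proxy = proxy)
```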

Where to find a proxy? It’s been a while since I last needed one, so I don’t know.

By default, Rcrawler doesn’t save internal links; you have to ask for them explicitly by using the NetworkData option, like this:
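Like this:

```r
Rcrawler(Website = "https://www.example.com/", NetworkData = TRUE)
```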

Then you’ll have two new variables available at the end of the crawling:

  • NetwIndex var, which is simply all the webpage URLs. The row numbers are the same as the locally stored HTML files, so
    row n°1 = homepage = 1.html
NetwIndex data frame
  • NetwEdges with all the links. It’s a bit confusing so let me explain:
NetwEdges data frame

Each row is a link. The From and To columns indicate “from” which page “to” which page each link goes.

On the image above:
row n°1 is a link from homepage (page n°1) to homepage
row n°2 is a link from the homepage to webpage n°2. According to the NetwIndex variable, page n°2 is the article about rvest.
etc…

Weight is the depth level at which the link was discovered. All the first rows are from the homepage, so Level 0.

Type is either 1 for internal hyperlinks or 2 for external hyperlinks

I guess you guys are interested in counting links. Here is the code to do it. I won’t go into too many explanations, it would be too long. If you are interested (and motivated), go check out the dplyr package and specifically its data wrangling functions.
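A dplyr sketch counting outbound internal links per page:

```r
library(dplyr)

NetwEdges %>%
  filter(Type == 1) %>%    # internal links only
  distinct(From, To) %>%   # count each link once
  group_by(From) %>%
  summarise(outbound_links = n())
```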

the homepage (n°1) has 13 outbound links

To make it more readable, let’s replace page IDs with URLs:
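Since NetwIndex maps page IDs to URLs, a simple lookup does it:

```r
NetwEdges %>%
  filter(Type == 1) %>%
  distinct(From, To) %>%
  group_by(From) %>%
  summarise(outbound_links = n()) %>%
  mutate(Url = NetwIndex[From])
```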

using website URLs

The same thing, but the other way around:
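Just group by the To column instead:

```r
NetwEdges %>%
  filter(Type == 1) %>%
  distinct(From, To) %>%
  group_by(To) %>%
  summarise(inbound_links = n())
```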

count of inbound links

Again, to make it more readable:
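Same lookup as before, on the To column:

```r
NetwEdges %>%
  filter(Type == 1) %>%
  distinct(From, To) %>%
  group_by(To) %>%
  summarise(inbound_links = n()) %>%
  mutate(Url = NetwIndex[To])
```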

using website URLs

So the useless ‘author page‘ has 14 links pointing at it, as many as the homepage… Maybe I should fix this one day.

Compute ‘Internal Page Rank’

Many SEOs I spoke to seem to be very interested in this, so I might as well add the tutorial here. It is very much an adaptation of Paul Shapiro’s awesome script.

But instead of using a Screaming Frog export file, we will use the previously extracted links.
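A sketch with the igraph package, fed with the NetwEdges links:

```r
library(dplyr)
library(igraph)

# Build a directed graph from the internal links and compute PageRank
links <- NetwEdges %>%
  filter(Type == 1) %>%
  select(From, To) %>%
  distinct()

g  <- graph_from_data_frame(links, directed = TRUE)
pr <- page_rank(g)

pagerank <- data.frame(Id       = as.integer(names(pr$vector)),
                       PageRank = pr$vector,
                       row.names = NULL)
```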

Internal Page Rank calculation

Let’s make it more readable: we’re going to put the numbers on a ten-point scale, just like when toolbar PageRank was a thing.
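For example:

```r
# Scale PageRank to a 0-10 range, like the old toolbar score
pagerank$PageRank10 <- round(pagerank$PageRank / max(pagerank$PageRank) * 10, 2)
```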


On a 15-page website, it’s not very impressive, but I encourage you to try it on a bigger website.

What if a website is using a JavaScript framework like React or Angular?

Rcrawler handily includes PhantomJS, the classic headless browser.
Here is how to use it:
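Something like:

```r
# Download PhantomJS (one-off), then start a headless browser session
install_browser()
br <- run_browser()
```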

After that, reference it as an option:
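Passing the session to the Browser argument (and closing it when done):

```r
Rcrawler(Website = "https://www.example.com/", Browser = br)

# Close the headless session once the crawl is finished
stop_browser(br)
```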

It’s entirely possible to run two crawls, one with and one without JavaScript rendering, and compare the data afterwards.

This Browser option can also be used with the other Rcrawler functions.

⚠️ Rendering webpages means every JavaScript file will be run, including web analytics tags. If you don’t take the necessary precautions, it’ll skew your web analytics data.

So what’s the catch?

Rcrawler is a great tool, but it’s far from perfect. SEOs will definitely miss a couple of things: there is no internal dead links report, it doesn’t grab nofollow attributes on links, and there are always a couple of bugs here and there. But overall, it’s a great tool to have.

Another concern is the Git repo, which is quite inactive.

That’s it. I hope you found this article useful. Reach out to me for (slow) support, bug reports/corrections, or ideas for new articles. Take care.

ref:
Khalil, S., & Fakir, M. (2017). RCrawler: An R package for parallel web crawling and scraping. SoftwareX, 6, 98-106.
