by_row vs rowwise iteration - r

Does anyone know what the difference is between by_row and rowwise? I am trying to scrape 3 simple websites, and I can't seem to get either approach to work, so I'm not sure if I am just using purrr/dplyr wrong.
Data:
structure(list(beer_brewerid = c("8481", "3228", "10325"),
               link = c("https://www.ratebeer.com/beer/8481/",
                        "https://www.ratebeer.com/beer/3228/",
                        "https://www.ratebeer.com/beer/10325/"),
               scrapedname = c("", "", "")),
          .Names = c("beer_brewerid", "link", "scrapedname"),
          row.names = c(NA, 3L), class = "data.frame")
For every URL (or row), I would like to scrape the webpage using the following approaches.
dplyr approach:
table %>%
  rowwise() %>%
  read_html() %>%
  extract2(2) %>%
  html_nodes("#_brand4 span") %>%
  html_text()
purrr approach:
# Apply function to each row
table %>%
  by_row(..f = parserows(), collate = c("rows"), .to = "scrapedname")

# Takes in row
parserows = function(){
  read_html() %>%
    extract2(., 2) %>%
    html_nodes("#_brand4 span") %>%
    html_text()
}
In the purrr approach I keep getting an error saying x is missing with no default. Shouldn't the value be coming from the row number? Otherwise I'd be writing a for loop specifying which index the row number is located at.
Using this magrittr piping, I keep getting timeout errors with my code. So:
How do I avoid timeout errors when using purrr/dplyr to iterate over all the elements in my df? Should I be looking at using tryCatch, or some sort of error-handling mechanism, to capture errors when they occur?
Is rowwise/by_row really meant for this task? I think these functions are meant to iterate over every element within a row, which is not exactly what I am trying to solve with this problem at hand. Thanks.
Another attempt:
output = table$link %>%
  extract() %>%
  map(read_html) %>%
  html_nodes(row, "#_brand4 span") %>%
  html_text(row)

Here is what @Thomas K's suggestions could look like:
First with purrr only:
library(purrr)
library(dplyr)
library(httr)
library(xml2)
library(rvest)
table$link %>%
  purrr::set_names() %>%
  map(read_html) %>%
  map(html_node, "#_brand4 span") %>%
  map(html_text)
# $`https://www.ratebeer.com/beer/8481/`
# [1] "Föroya Bjór"
#
# $`https://www.ratebeer.com/beer/3228/`
# [1] "King Brewing Company"
#
# $`https://www.ratebeer.com/beer/10325/`
# [1] "Bavik-De Brabandere"
(Note there is no need for html_nodes (plural); html_node (singular) is enough here.)
A mixed dplyr/purrr alternative, which lets you keep each html doc in a tidy dataframe, if you need to reuse them:
res <- table %>%
  mutate(html = map(link, read_html),
         brand_node = map(html, html_node, "#_brand4 span"),
         scrapedname = map_chr(brand_node, html_text))
The html and brand_node columns are stored as external pointers and are not very print-friendly, so here is the resulting dataframe without them:
select(res, -html, -brand_node)
# beer_brewerid link scrapedname
# 1 8481 https://www.ratebeer.com/beer/8481/ Föroya Bjór
# 2 3228 https://www.ratebeer.com/beer/3228/ King Brewing Company
# 3 10325 https://www.ratebeer.com/beer/10325/ Bavik-De Brabandere
glimpse(res)
# Observations: 3
# Variables: 5
# $ beer_brewerid <chr> "8481", "3228", "10325"
# $ link <chr> "https://www.ratebeer.com/beer/8481/", "https://www.ratebeer.com/beer/3228/", "https://www.ratebeer.com/beer/10325/"
# $ scrapedname <chr> "Föroya Bjór", "King Brewing Company", "Bavik-De Brabandere"
# $ html <list> [<html lang="en">, <html lang="en">, <html lang="en">]
# $ brand_node <list> [<span itemprop="name">, <span itemprop="name">, <span itemprop="name">]
For the timeout issue, you could, also per @Thomas K's comment, simply wrap read_html in safely() or possibly() (which are indeed alternatives to tryCatch):
safe_read_html <- possibly(read_html, otherwise = read_html("<html></html>"))
But to address the (possible) real issue that you're going too hard on the server, I would suggest httr::RETRY() that lets you, well, retry, with "exponential backoff times":
safe_retry_read_html <- possibly(~ read_html(RETRY("GET", url = .x)), otherwise = read_html("<html></html>"))
A good practice when scraping is to go real gentle on the server, so you could even manually add an offset time before each request, with Sys.sleep(1 + runif(1)) for instance.
table$link %>%
  c("https://www.wrong-url.foobar") %>%
  purrr::set_names() %>%
  map(~ {
    Sys.sleep(1 + runif(1))
    safe_retry_read_html(.x)
  }) %>%
  map(html_node, "#_brand4 span") %>%
  map_chr(html_text)
# https://www.ratebeer.com/beer/8481/ https://www.ratebeer.com/beer/3228/
# "Föroya Bjór" "King Brewing Company"
# https://www.ratebeer.com/beer/10325/ https://www.wrong-url.foobar
# "Bavik-De Brabandere" NA
Lastly, there is your separate question about by_row()/rowwise().
First, note that by_row has been removed from the development version of purrr and moved to a separate package, purrrlyr, where it is deprecated anyway; the recommendation is to "use a combination of: tidyr::nest(); dplyr::mutate(); purrr::map()".
From help("rowwise"), rowwise is mostly meant to be "used for the results of do() when you create list-variables".
So, no, neither is "really meant for this task"; they would be superfluous.
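For completeness, here is a minimal sketch (my own illustration, not part of the original answer) of what that recommended nest()/mutate()/map() combination could look like for this table, assuming the same "#_brand4 span" selector:

library(tidyr)
library(dplyr)
library(purrr)
library(rvest)

table %>%
  nest(data = c(link)) %>%                      # one nested one-row tibble per beer
  mutate(scrapedname = map_chr(data, ~ read_html(.x$link) %>%
                                 html_node("#_brand4 span") %>%
                                 html_text())) %>%
  unnest(data)

In practice the mixed dplyr/purrr version shown above is simpler for this particular table, since there is no real grouping to nest over.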

Related

How do I put xml-nodesets (created with rvest) into a tibble using purrr's map-function in R?

I want to scrape a large number of websites. For this, I first read in the websites' html scripts and store them as xml_nodesets. As I only need the websites' contents, I then extract each website's contents from the xml_nodesets. To achieve this, I have written the following code:
# required packages
library(purrr)
library(dplyr)
library(xml2)
library(rvest)

# urls of the example sources
test_files <- c("https://en.wikipedia.org/wiki/Web_scraping",
                "https://en.wikipedia.org/wiki/Data_scraping")

# reading in the html sources, storing them as xml_nodesets
test <- test_files %>%
  map(., ~ xml2::read_html(.x, encoding = "UTF-8"))

# extracting selected nodes (contents)
test_tbl <- test %>%
  map(., ~ tibble(
    # scrape contents
    test_html = rvest::html_nodes(.x, xpath = '//*[(@id = "toc")]')
  ))
Unfortunately, this produces following error:
Error: All columns in a tibble must be vectors.
x Column `test_html` is a `xml_nodeset` object.
I think I understand the substance of this error, but I can't find a way around it. It's also a bit strange, because I was able to run this code smoothly in January and suddenly it is not working anymore. I suspected package updates to be the reason, but installing older versions of xml2, rvest or tibble didn't help either. Also, scraping only a single website doesn't produce any errors:
test <- read_html("https://en.wikipedia.org/wiki/Web_scraping", encoding = "UTF-8") %>%
  rvest::html_nodes(xpath = '//*[(@id = "toc")]')
Do you have any suggestions on how to solve this issue? Thank you very much!
EDIT: I removed %>% html_text from ...
test_tbl <- test %>%
  map(., ~ tibble(
    # scrape contents
    test_html = rvest::html_nodes(.x, xpath = '//*[(@id = "toc")]')
  ))
... as this doesn't produce this error. The edited code does, though.
You need to store the objects in a list.
test %>%
  purrr::map(~ tibble(
    # scrape contents
    test_html = list(rvest::html_nodes(.x, xpath = '//*[(@id = "toc")]'))
  ))
#[[1]]
# A tibble: 1 x 1
# test_html
# <list>
#1 <xml_ndst>
#[[2]]
# A tibble: 1 x 1
# test_html
# <list>
#1 <xml_ndst>
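If the goal is ultimately the text of those nodes, a hedged follow-up sketch (my own addition, not part of the original answer) would extract the text directly; html_text() returns a plain character vector, so no list() wrapping is needed:

test %>%
  purrr::map(~ tibble(
    # one row per matched node, holding its text content
    test_text = rvest::html_nodes(.x, xpath = '//*[(@id = "toc")]') %>%
      rvest::html_text()
  ))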

Trouble mapping a function to a list of scraped links using rvest

I am trying to apply a function that extracts a table from a list of scraped links. I am at the final stage, where I am applying the get_injury_data function to the links, but I have been having issues executing this successfully. I get the following error:
Error in matrix(unlist(values), ncol = width, byrow = TRUE) :
'data' must be of a vector type, was 'NULL'
I wonder if anyone can help me spot where I am going wrong. The code is as follows:
library(tidyverse)
library(rvest)

# create a function to grab the team links
get_team_links <- function(url){
  url %>%
    read_html %>%
    html_nodes('td.hauptlink a') %>%
    html_attr('href') %>%
    .[. != '#'] %>% # remove rows with # string
    paste0('https://www.transfermarkt.com', .) %>% # paste the website link onto the url strings
    unique() %>% # keep only unique links
    as_tibble() %>% # turn strings into a tibble dataset
    rename("links" = "value") %>% # rename the value column
    filter(!grepl('profil', links)) %>% # remove links to player pages
    filter(!grepl('spielplan', links)) %>% # remove links to additional team pages
    mutate(links = gsub("startseite", "kader", links)) # change link to go to the detailed page
}
# create a function to grab the player links
get_player_links <- function(url){
  url %>%
    read_html %>%
    html_nodes('td.hauptlink a') %>%
    html_attr('href') %>%
    .[. != '#'] %>% # remove rows with # string
    paste0('https://www.transfermarkt.com', .) %>% # paste the website link onto the url strings
    unique() %>% # keep only unique links
    as_tibble() %>% # turn strings into a tibble dataset
    rename("links" = "value") %>% # rename the value column
    filter(grepl('profil', links)) %>% # keep only player profile links
    mutate(links = gsub("profil", "verletzungen", links)) # change link to go to the injury page
}
# create a function to get the injury dataset
get_injury_data <- function(url){
  url %>%
    read_html() %>%
    html_nodes('#yw1') %>%
    html_table()
}
# get team links and save them as team_links
team_links <- get_team_links('https://www.transfermarkt.com/premier-league/startseite/wettbewerb/GB1')

# get player links by mapping the function onto the team_links dataset
# and then unnest the list of lists into a long list
player_injury_links <- team_links %>%
  mutate(links = map(team_links$links, get_player_links)) %>%
  unnest(links)

# using the player_injury_links list, create a dataset by web scraping the player injury pages
player_injury_data <- map(player_injury_links$links, get_injury_data)
Solution
So the issue I was having was that some of the links I was scraping did not have any data.
To overcome this issue, I used the possibly function from the purrr package. This helped me create a new, error-free function.
The line of code that was giving me trouble now looks as follows:
player_injury_data <- player_injury_links %>%
  purrr::map(., purrr::possibly(get_injury_data, otherwise = NULL, quiet = TRUE))
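A possible follow-up (my own sketch, not part of the original answer, reusing the objects defined above): wrap get_injury_data once with possibly(), map it over the links column itself, and drop the empty results afterwards:

safe_get_injury_data <- purrr::possibly(get_injury_data, otherwise = NULL, quiet = TRUE)

player_injury_data <- player_injury_links$links %>%
  map(safe_get_injury_data) %>%
  compact()   # drop the links that returned no table

The remaining elements (each a list of data frames from html_table()) could then be flattened and bound together with dplyr::bind_rows(), assuming the injury tables share the same columns.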

Map a tbl of hyperlinks into read_html

I have a tibble containing one column which stores hyperlinks in each row. Now I want to map over these links using map_dfr, passing the links one after another through read_html(.x[.x]) %>%
html_node(".body-copy-lg") %>% html_text. If I do so, I always end up with the error:
Error in doc_parse_file(con, encoding = encoding, as_html = as_html, options = options) :
  Expecting a single string value: [type=character; extent=3].
This tells me that read_html is basically saying: "Hey, stop throwing more than one string at me at a time."
So did I make a mistake in the mapper? Is this a bug? I really can't see why the mapper function does not grab each element one after another.
What I tried so far:
target_regex <- "(xtm)|((k|K)(i|I|1|11)(d|D)(n|N).)|(Ar<e)\\s(you)\\s(in)|
  (LOAN)|(AR(\\s|\\S)[0-9])|((B|b)(i|1|l)tc.)|(Coupon)|(Plastic.King)|(organs)|(SILI)|(Electric.Cigarette.Machine)"

adverts <- function(df) df[!grepl(target_regex, df$...1, perl = T), ]

bribe <- read_html(paste("http://ipaidabribe.com/reports/paid?page", 10, sep = "="))

report <- map(".read-more", ~html_nodes(bribe, .x) %>%
                html_attr(.x[[1]][[1]][[1]], name = "href"))[[1]] %>%
  as_tibble(.name_repair = "unique") %>%
  bind_rows() %>%
  rename(...1 = value) %>%
  adverts() %>%
  map_dfr(~read_html(.x[.x]) %>%
            html_node(".body-copy-lg") %>%
            html_text)
Do not mind the call to rename(), which is just something that needed to be done to make adverts() usable in this case.
You're forgetting that most functions in R are vectorized, and that using map or apply functions is often unnecessary. In your case, it is only needed in the final step of getting the html text.
The syntax you are using in map is also puzzling, and I think you should review ?map to get a better handle on it. For instance, you use multiple .x or extracted values where you should just be using .x to refer to the sub-element of the object you are iterating over.
library(tidyverse)
library(rvest)

target_regex <- "(xtm)|((k|K)(i|I|1|11)(d|D)(n|N).)|(Ar<e)\\s(you)\\s(in)|
  (LOAN)|(AR(\\s|\\S)[0-9])|((B|b)(i|1|l)tc.)|(Coupon)|(Plastic.King)|(organs)|(SILI)|(Electric.Cigarette.Machine)"

adverts <- function(df) df[!grepl(target_regex, df$...1, perl = T), ]

bribe <- read_html(paste("http://ipaidabribe.com/reports/paid?page", 10, sep = "="))

report <- html_nodes(bribe, ".read-more") %>%
  html_attr("href") %>%
  as_tibble(.name_repair = "unique") %>%
  filter(str_detect(value, target_regex, negate = TRUE)) %>%
  mutate(text = map_chr(value, ~read_html(.x) %>%
                          html_node(".body-copy-lg") %>%
                          html_text))
report
# A tibble: 3 x 2
value text
<chr> <chr>
1 http://ipaidabribe.com/reports/paid/paid-bribe-to-settle-matter… "\r\n Place: Nelamangala Police Station, Bangalore\nDate of incident: 5th Jan 2020, 3PM…
2 http://ipaidabribe.com/reports/paid/paid-500-rs-bribe-at-nizamu… "\r\n My Brother Mahesh Prasad travelling on PNR number 4822171124 train no 12721 Ni…
3 http://ipaidabribe.com/reports/paid/drone-air-follow-focus-wire… "\r\n This new Silencer Air+ is a tremendously versatile and resourceful follow focus, z…
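If some of the scraped pages lack a .body-copy-lg node or fail to load, a hedged variant (my own addition, reusing the possibly() idea from earlier in this thread) could wrap the per-link step:

safe_text <- possibly(~ read_html(.x) %>%
                        html_node(".body-copy-lg") %>%
                        html_text(),
                      otherwise = NA_character_)

report <- html_nodes(bribe, ".read-more") %>%
  html_attr("href") %>%
  as_tibble(.name_repair = "unique") %>%
  filter(str_detect(value, target_regex, negate = TRUE)) %>%
  mutate(text = map_chr(value, safe_text))   # failures become NA instead of stopping the pipeline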

rvest: for loop/map to pull multiple tables using html_node & html_table

I'm trying to programmatically pull all of the box scores for a given day from NBA Reference (I used January 4th, 2020, which has multiple games). I started by creating a list of integers to denote the number of box scores to pull:
games <- c(1:3)
Then I used developer tools from my browser to determine what each table contains (you can use selector gadget):
#content > div.game_summaries > div:nth-child(1) > table.team
Then I used purrr::map to create a list of the tables to pull, using games:
map_list <- map(.x = '', paste, '#content > div.game_summaries > div:nth-child(', games, ') > table.teams',
                sep = "")

# check map_list
map_list
Then I tried to run this list through a for loop to generate three tables, using tidyverse and rvest, which delivered an error:
for (i in map_list){
  read_html('https://www.basketball-reference.com/boxscores/') %>%
    html_node(map_list[[1]][i]) %>%
    html_table() %>%
    glimpse()
}
Error in selectr::css_to_xpath(css, prefix = ".//") :
Zero length character vector found for the following argument: selector
In addition: Warning message:
In selectr::css_to_xpath(css, prefix = ".//") :
NA values were found in the 'selector' argument, they have been removed
For reference, if I explicitly specify the selector or call the exact item from map_list, the code works as intended:
read_html('https://www.basketball-reference.com/boxscores/') %>%
  html_node('#content > div.game_summaries > div:nth-child(1) > table.teams') %>%
  html_table() %>%
  glimpse()

read_html('https://www.basketball-reference.com/boxscores/') %>%
  html_node(map_list[[1]][1]) %>%
  html_table() %>%
  glimpse()
How do I make this work with a list? I have looked at other threads but even though they use the same site, they're not the same issue.
Using your current map_list, if you want to use a for loop, this is what you should use:
library(rvest)

for (i in seq_along(map_list[[1]])){
  read_html('https://www.basketball-reference.com/boxscores/') %>%
    html_node(map_list[[1]][i]) %>%
    html_table() %>%
    glimpse()
}
but I think this is simpler, as you don't need to use map to create map_list since paste is vectorized:
map_list <- paste0('#content > div.game_summaries > div:nth-child(', games, ') > table.teams')
url <- 'https://www.basketball-reference.com/boxscores/'
webpage <- url %>% read_html()

purrr::map(map_list, ~webpage %>% html_node(.x) %>% html_table)
#[[1]]
# X1 X2 X3
#1 Indiana 111 Final
#2 Atlanta 116
#[[2]]
# X1 X2 X3
#1 Toronto 121 Final
#2 Brooklyn 102
#[[3]]
# X1 X2 X3
#1 Boston 111 Final
#2 Chicago 104
This page is reasonably straightforward to scrape. Here is a possible solution: first scrape the game summary nodes (each div with class=game_summary). This provides a list of all of the games played. It also allows the use of the html_node function, which guarantees a return value, thus keeping the list sizes equal.
Each game summary is made up of three subtables; the first and third can be scraped directly. The second table does not have a class assigned, making it trickier to retrieve.
library(rvest)

page <- read_html('https://www.basketball-reference.com/boxscores/')

# find all of the game summaries on the page
games <- page %>% html_nodes("div.game_summary")

# Each game summary has 3 sub tables:
#   the game score is table 1, class=teams
#   the stats are table 3, class=stats
#   the quarterly score is the second table and does not have a class defined
table1 <- games %>% html_node("table.teams") %>% html_table()
stats  <- games %>% html_node("table.stats") %>% html_table()
quarter <- sapply(games, function(g){
  g %>% html_nodes("table") %>% .[2] %>% html_table()
})
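Since the rest of this thread leans on purrr, here is an equivalent sketch for the unclassed quarterly table (my own addition, assuming the games nodeset defined above), using map() instead of sapply():

library(purrr)

# each element is the second <table> of one game summary, parsed as a data frame
quarter <- games %>%
  map(~ .x %>% html_nodes("table") %>% .[[2]] %>% html_table())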

How to resolve 'Don't know how to pluck from a closure' error in R

The code below works if I remove the Sys.sleep() from within the map() function. I tried to research the error ('Don't know how to pluck from a closure') but I haven't found much on that topic.
Does anyone know where I can find documentation on this error, and any help on why it is happening and how to prevent it?
library(rvest)
library(tidyverse)
library(stringr)

# lets assume 3 pages only to do it quickly
page <- (0:18)

# no need to create a list. Just a vector
urls <- paste0("https://www.mlssoccer.com/players?page=", page)

# define this function that collects the player's name from a url
get_the_names <- function(url){
  url %>%
    read_html() %>%
    html_nodes("a.name_link") %>%
    html_text()
}

# map the urls to the function that gets the names
players <- map(urls, get_the_names) %>%
  # turn into a single character vector
  unlist() %>%
  # make lower case
  tolower() %>%
  # replace spaces with hyphens
  str_replace_all(" ", "-")
# Now create a vector of player urls
player_urls <- paste0("https://www.mlssoccer.com/players/", players)

# define a function that reads the 3rd table of the url
get_the_summary_stats <- function(url){
  url %>%
    read_html() %>%
    html_nodes("table") %>%
    html_table() %>% .[[3]]
}

# lets read a few players only to speed things up
# [otherwise it takes a significant amount of time to run...]
a_few_players <- player_urls[1:5]
# get the stats
tables <- a_few_players %>%
  # important step so I can name the rows I get in the table
  set_names() %>%
  # map the player urls to the function that reads the 3rd table
  # note the `safely` wrap around the get_the_summary_stats function,
  # since there are players with no stats, which causes an error (e.g. brenden-aaronson)
  # the output will be a list of lists [result and error]
  map(., ~{ Sys.sleep(5)
    safely(get_the_summary_stats) }) %>%
  # collect only the `result` output (the table) INTO A DATA FRAME
  # There is also an `error` output
  # also, name each row with the player's name
  map_df("result", .id = "player") %>%
  # keep only the player name (remove the www.mls.... part)
  mutate(player = str_replace(player, "https://www.mlssoccer.com/players/", "")) %>%
  as_tibble()

tables <- tables %>% separate(Match, c("awayTeam", "homeTeam"), extra = "drop", fill = "right")
purrr::safely(...) returns a function, so your map(., ~{ Sys.sleep(5); safely(get_the_summary_stats) }) is returning functions, not any data. In R, a "closure" is a function together with its enclosing environment.
Tilde notation is a tidyverse-specific way of writing more terse anonymous functions. Typically (e.g., with lapply) one would use lapply(mydata, function(x) get_the_summary_stats(x)). In tilde notation, the same thing is written as map(mydata, ~ get_the_summary_stats(.)).
So, re-write to:
... %>% map(~ { Sys.sleep(5); safely(get_the_summary_stats)(.); })
From comments by @r2evans
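An equivalent sketch (my own restatement of that fix, reusing the objects defined in the question) is to build the safe wrapper once and then call it inside map():

safe_get_the_summary_stats <- safely(get_the_summary_stats)

tables <- a_few_players %>%
  set_names() %>%
  map(~ {
    Sys.sleep(5)                        # pause between requests
    safe_get_the_summary_stats(.x)      # call the wrapped function: returns list(result, error)
  }) %>%
  map_df("result", .id = "player")      # keep only the successful results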
