Receiving NAs when scraping links (href) with rvest

I have seen some similar questions, but none of the solutions are working for me. I'm trying to get the URLs of the links for each of the nodes, but the result is only NA values.
beer <- read_html("https://www.beeradvocate.com/lists/top/")
beerLink <- beer %>%
  html_nodes(".hr_bottom_light a b") %>%
  html_attr('href') %>%
  as.data.frame()
Any help would be appreciated.

b is a descendant node, but a is the element that carries the links you want. You can search around for descendant-selection patterns (I am only familiar with the xpath versions, and it seems you prefer CSS), but this alternative gets the links you want without that:
# using a stub to facilitate accessing the URLs later with
#   an absolute address
stub = 'https://www.beeradvocate.com'
beer <- read_html(paste0(stub, '/lists/top/'))
lnx = beer %>% html_nodes('a') %>% html_attr('href') %>%
  # this pattern matches beer profile links --
  #   the first .* is a brewery ID, the second .*
  #   is a beer ID within that brewery
  grep('profile/.*/.*/', ., value = TRUE) %>%
  paste0(stub, .)
head(lnx)
# [1] "https://www.beeradvocate.com/beer/profile/23222/78820/"
# [2] "https://www.beeradvocate.com/beer/profile/28743/136936/"
# [3] "https://www.beeradvocate.com/beer/profile/28743/146770/"
# [4] "https://www.beeradvocate.com/beer/profile/28743/87846/"
# [5] "https://www.beeradvocate.com/beer/profile/863/21690/"
# [6] "https://www.beeradvocate.com/beer/profile/17981/110635/"
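For completeness, the original selector approach also works if you target the a elements directly, since the href attribute sits on a rather than on the b inside it (a sketch, assuming the .hr_bottom_light cells from your selector wrap the profile links):
# take href from the <a> elements themselves; the <b> children carry no attributes
beerLink <- beer %>%
  html_nodes(".hr_bottom_light a") %>%
  html_attr('href') %>%
  as.data.frame()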
Also, Abraxas is an amazing beer and Santana album

Related

How to get rvest or sapply to skip NA values?

I am using rvest to (try to) scrape all the author affiliation data from a database of academic publications called RePEc. I have the authors' short IDs (author_reg), which I'm using to scrape affiliation data. However, I have several columns indicating multiple authors (each of whom I need the affiliation data for). When there aren't multiple authors, the cell has an NA value. Some of the columns are mostly NA values, so how do I alter my code so that it skips the NA values but doesn't delete them?
Here is the code I'm using:
library(rvest)
library(purrr)
df <- data.frame(author_reg_1 = c("paa6", "paa2", "paa1", "paa8", "pve266", "pya500", "NA", "NA"),
                 stringsAsFactors = FALSE)
http1 <- "https://ideas.repec.org/e/"
http2 <- "https://ideas.repec.org/f/"
df$affiliation_author_1 <- sapply(df$author_reg_1, function(x) {
  links = c(paste0(http1, x, ".html"), paste0(http2, x, ".html"))
  # here we try both links and store under attempts
  attempts = links %>% map(function(i){
    try(read_html(i) %>% html_nodes("#affiliation h3") %>% html_text())
  })
  # the good ones will have "character" class, the failed ones, try-error
  gdlink = which(sapply(attempts, class) != "try-error")
  if(length(gdlink) > 0){
    return(attempts[[gdlink[1]]])
  } else {
    return("True 404 error")
  }
})
Thanks in advance for your help!
Looking at the target links, you can try the following approach. First, scrape all the links from https://ideas.repec.org/e/ and build the complete URLs. Then, check whether each link exists. (There are about 26,000 links for this URL and I do not have time to check them all, so I only used 100 URLs in the demonstration below.) Finally, extract the links that exist.
library(rvest)
library(httr)
library(tidyverse)
# Get all possible links from this webpage. There are 26665 links.
read_html("https://ideas.repec.org/e/") %>%
html_nodes("td") %>%
html_nodes("a") %>%
html_attr("href") %>%
.[grepl(x = ., pattern = "html")] -> x
# Create complete URLs.
mylinks1 <- paste("https://ideas.repec.org/e/", x, sep = "")
# For this demonstration I created a subset.
mylinks_samples <- mylinks1[1:100]
# Check if each URL exists or not. If FALSE, a link exists.
foo <- sapply(mylinks_sample, http_error)
# Using the logical vector, foo, extract existing links.
urls <- mylinks_samples[!foo]
Then, for each link, I tried to extract the affiliation information. There are several places with h3 tags, so I specifically targeted the h3 elements that sit under the node with id = "affiliation" using xpath. If there is no affiliation information, R returns character(0). When enframe() is applied, these elements are removed; for instance, pab127 does not have any affiliation information, so there is no entry for that link.
lapply(urls, function(x){
  read_html(x, encoding = "UTF-8") %>%
    html_nodes(xpath = '//*[@id="affiliation"]') %>%
    html_nodes("h3") %>%
    html_text() %>%
    trimws() -> foo
  return(foo)}) -> mylist
Then, I assigned names to mylist with the links and created a data frame.
names(mylist) <- sub(x = basename(urls), pattern = ".html", replacement = "")
enframe(mylist) %>%
unnest(value)
name value
<chr> <chr>
1 paa1 "(80%) Institutt for ØkonomiUniversitetet i Bergen"
2 paa1 "(20%) Gruppe for trygdeøkonomiInstitutt for ØkonomiUniversitetet i Bergen"
3 paa2 "Department of EconomicsCollege of BusinessUniversity of Wyoming"
4 paa6 "Statistisk SentralbyråGovernment of Norway"
5 paa8 "Centraal Planbureau (CPB)Government of the Netherlands"
6 paa9 "(79%) Economic StudiesBrookings Institution"
7 paa9 "(21%) Brookings Institution"
8 paa10 "Helseøkonomisk Forskningsprogram (HERO) (Health Economics Research Programme)\nUniversitetet i Oslo (Unive~
9 paa10 "Institutt for Helseledelse og Helseökonomi (Institute of Health Management and Health Economics)\nUniversi~
10 paa11 "\"Carlo F. Dondena\" Centre for Research on Social Dynamics (DONDENA)\nUniversità Commerciale Luigi Boccon~
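If you also want to keep the IDs for which no affiliation was found (rather than having them disappear from the result), one possible sketch is to keep empty elements when unnesting; this assumes a tidyr version that supports the keep_empty argument (>= 1.0.0):
# keep links that returned character(0) as rows with NA instead of dropping them
enframe(mylist) %>%
  unnest(value, keep_empty = TRUE)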

rvest: for loop/map to pull multiple tables using html_node & html_table

I'm trying to programmatically pull all of the box scores for a given day from Basketball Reference (I used January 4th, 2020, which has multiple games). I started by creating a vector of integers to denote the number of box scores to pull:
games <- c(1:3)
Then I used developer tools from my browser to determine what each table contains (you can use selector gadget):
#content > div.game_summaries > div:nth-child(1) > table.team
Then I used purrr::map to create a list of the tables to pull, using games:
map_list <- map(.x = '', paste, '#content > div.game_summaries > div:nth-child(', games, ') > table.teams',
                sep = "")
# check map_list
map_list
Then I tried to run this list through a for loop to generate three tables, using tidyverse and rvest, which delivered an error:
for (i in map_list){
  read_html('https://www.basketball-reference.com/boxscores/') %>%
    html_node(map_list[[1]][i]) %>%
    html_table() %>%
    glimpse()
}
Error in selectr::css_to_xpath(css, prefix = ".//") :
Zero length character vector found for the following argument: selector
In addition: Warning message:
In selectr::css_to_xpath(css, prefix = ".//") :
NA values were found in the 'selector' argument, they have been removed
For reference, if I explicitly denote the html or call the exact item from map_list, the code works as intended (run below items for reference):
read_html('https://www.basketball-reference.com/boxscores/') %>%
  html_node('#content > div.game_summaries > div:nth-child(1) > table.teams') %>%
  html_table() %>%
  glimpse()
read_html('https://www.basketball-reference.com/boxscores/') %>%
  html_node(map_list[[1]][1]) %>%
  html_table() %>%
  glimpse()
How do I make this work with a list? I have looked at other threads but even though they use the same site, they're not the same issue.
Using your current map_list, if you want to use a for loop, this is what you should use:
library(rvest)
for (i in seq_along(map_list[[1]])){
  read_html('https://www.basketball-reference.com/boxscores/') %>%
    html_node(map_list[[1]][i]) %>%
    html_table() %>%
    glimpse()
}
but I think this is simpler, as you don't need map to create map_list since paste is vectorized:
map_list<- paste0('#content > div.game_summaries > div:nth-child(', games, ') > table.teams')
url <- 'https://www.basketball-reference.com/boxscores/'
webpage <- url %>% read_html()
purrr::map(map_list, ~webpage %>% html_node(.x) %>% html_table)
#[[1]]
# X1 X2 X3
#1 Indiana 111 Final
#2 Atlanta 116
#[[2]]
# X1 X2 X3
#1 Toronto 121 Final
#2 Brooklyn 102
#[[3]]
# X1 X2 X3
#1 Boston 111 Final
#2 Chicago 104
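If you would rather end up with a single data frame instead of a list, a possible sketch is to bind the results as you map; the tables all share the same X1/X2/X3 columns, and .id records which selector (game) each row came from:
# same idea as above, but rows are bound into one data frame
purrr::map_dfr(map_list, ~ webpage %>% html_node(.x) %>% html_table(), .id = "game")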
This page is reasonably straightforward to scrape. Here is a possible solution: first scrape the game summary nodes (the div elements with class="game_summary"), which gives a list of all of the games played. This also allows the use of the html_node function, which guarantees one return value per node and so keeps the list sizes equal.
Each game summary is made up of three subtables. The first and third tables can be scraped directly, but the second table has no class assigned, which makes it trickier to retrieve.
library(rvest)
page <- read_html('https://www.basketball-reference.com/boxscores/')

# find all of the game summaries on the page
games <- page %>% html_nodes("div.game_summary")

# Each game summary has 3 sub tables:
#   the game score is table 1 (class=teams)
#   the stats are in table 3 (class=stats)
#   the quarterly score is the second table, which has no class defined
table1 <- games %>% html_node("table.teams") %>% html_table()
stats  <- games %>% html_node("table.stats") %>% html_table()
quarter <- sapply(games, function(g){
  g %>% html_nodes("table") %>% .[2] %>% html_table()
})
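As a usage sketch (my addition, not part of the original answer), the per-game score tables can then be combined into one data frame, with an id column recording which game each row came from:
library(dplyr)

# table1 is a list of data frames, one per game summary
scores <- bind_rows(table1, .id = "game")
head(scores)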

Web Scraping Notable Names

I'm trying to get the
Gender
Race or Ethnicity
Sexual orientation
Occupation
Nationality
from each site listed here: https://www.nndb.com/lists/494/000063305/
Here's an individual site so viewers can see the single page.
I'm trying to model my R code after this site but it's difficult because on the individual sites there aren't headings for Gender, for example. Can someone assist?
library(purrr)
library(rvest)
url_base <- "https://www.nndb.com/lists/494/000063305/"
b_dataset <- map_df(1:91, function(i) {
  page <- read_html(sprintf(url_base, i))
  data.frame(ICOname = html_text(html_nodes(page, ".name")))
})
I'll take you halfway there: it's not too difficult to figure out the rest from here.
library(purrr)
library(rvest)
url_base <- "https://www.nndb.com/lists/494/000063305/"
First, the following will generate a list of the A-Z surname index URLs, and then each person's profile URL.
## Gets A-Z links
all_surname_urls <- read_html(url_base) %>%
  html_nodes(".newslink") %>%
  html_attrs() %>%
  map(pluck(1, 1))

all_ppl_urls <- map(
  all_surname_urls,
  function(x) read_html(x) %>%
    html_nodes("a") %>%
    html_attrs() %>%
    map(pluck(1, 1))
) %>%
  unlist()

all_ppl_urls <- setdiff(
  all_ppl_urls[!duplicated(all_ppl_urls)],
  c(all_surname_urls, "http://www.nndb.com/")
)
You are correct: there are no separate headings for gender or any of the other fields, really. You'll just have to use tools such as SelectorGadget to see which elements contain what you need. In this case it's simply p.
all_ppl_urls[1] %>%
  read_html() %>%
  html_nodes("p") %>%
  html_text()
The output will be
[1] "AKA Lee William Aaker"
[2] "Born: 25-Sep-1943Birthplace: Los Angeles, CA"
[3] "Gender: MaleRace or Ethnicity: WhiteOccupation: Actor"
[4] "Nationality: United StatesExecutive summary: The Adventures of Rin Tin Tin"
...
Although the output is not clean, things rarely are when web scraping; this is actually a relatively easy case. You can use a series of grepl and map calls to subset the contents that you need and make a data frame out of them.
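As a rough sketch of that last step (my own illustration, not part of the original answer; it assumes each profile has a single paragraph formatted like line [3] of the output above), you could pull out the demographic paragraph and split it into fields with regular expressions:
library(stringr)

profile_text <- all_ppl_urls[1] %>%
  read_html() %>%
  html_nodes("p") %>%
  html_text()

# keep the paragraph that carries the demographic fields
demo <- profile_text[grepl("Gender:", profile_text)][1]

data.frame(
  gender     = str_match(demo, "Gender: (.*?)Race or Ethnicity:")[, 2],
  race       = str_match(demo, "Race or Ethnicity: (.*?)(Sexual orientation:|Occupation:)")[, 2],
  occupation = str_match(demo, "Occupation: (.*)$")[, 2],
  stringsAsFactors = FALSE
)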

webscraping loop over url and html node in R rvest

I have a data frame pubs with two columns: url and html.node. I want to write a loop that reads each url, retrieves the html contents, extracts the information indicated by the html.node column, and accumulates it in a data frame or list.
All the URLs are different and all the html nodes are different. My code so far is:
score <- vector()
k <- 1
for (r in 1:nrow(pubs)){
  art.url <- pubs[r, 1]   # column 1 contains the URL
  art.node <- pubs[r, 2]  # column 2 contains the html nodes as characters
  art.contents <- read_html(art.url)
  score <- art.contents %>% html_nodes(art.node) %>% html_text()
  k <- k + 1
  print(score)
}
I appreciate your help.
First of all, be sure that each site you're going to scrape allows you to scrape its data; you can run into legal issues if you break the rules.
(Note: I've used only http://toscrape.com/, a sandbox site for scraping, since you did not provide your data.)
After that, you can proceed with this; hope it helps:
# first, your data; I think it's similar to this
pubs <- data.frame(site = c("http://quotes.toscrape.com/",
                            "http://quotes.toscrape.com/"),
                   html.node = c(".text", ".author"), stringsAsFactors = F)
Then the loop you required:
library(rvest)
# an empty list, to fill with the scraped data
empty_list <- list()

# here you are going to fill the list with the scraped data
for (i in 1:nrow(pubs)){
  art.url <- pubs[i, 1]   # choose the site as you did
  art.node <- pubs[i, 2]  # choose the node as you did
  # scrape it!
  empty_list[[i]] <- read_html(art.url) %>% html_nodes(art.node) %>% html_text()
}
The result is now a list. With:
names(empty_list) <- pubs$site
you add the name of the site to each element of the list, giving this result:
$`http://quotes.toscrape.com/`
[1] "“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”"
[2] "“It is our choices, Harry, that show what we truly are, far more than our abilities.”"
[3] "“There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.”"
[4] "“The person, be it gentleman or lady, who has not pleasure in a good novel, must be intolerably stupid.”"
[5] "“Imperfection is beauty, madness is genius and it's better to be absolutely ridiculous than absolutely boring.”"
[6] "“Try not to become a man of success. Rather become a man of value.”"
[7] "“It is better to be hated for what you are than to be loved for what you are not.”"
[8] "“I have not failed. I've just found 10,000 ways that won't work.”"
[9] "“A woman is like a tea bag; you never know how strong it is until it's in hot water.”"
[10] "“A day without sunshine is like, you know, night.”"
$`http://quotes.toscrape.com/`
[1] "Albert Einstein" "J.K. Rowling" "Albert Einstein" "Jane Austen" "Marilyn Monroe" "Albert Einstein" "André Gide"
[8] "Thomas A. Edison" "Eleanor Roosevelt" "Steve Martin"
Clearly it should work with different sites, and different nodes.
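If you prefer to accumulate the results in a single data frame, as mentioned in the question, a possible sketch (my addition, using tibble::enframe on the named list built above) is:
library(tibble)
library(tidyr)

# one row per scraped text element, keeping the site it came from
enframe(empty_list, name = "site", value = "text") %>%
  unnest(text)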
You could also use map from the purrr package instead of a loop:
library(tidyverse)
library(rvest)

expand.grid(c("http://quotes.toscrape.com/", "http://quotes.toscrape.com/tag/inspirational/"),  # vector of urls
            c(".text", ".author"),                                                              # vector of nodes
            stringsAsFactors = FALSE) %>%  # assumes the same nodes are relevant for all urls; otherwise you would need something like a join
  as_tibble() %>%
  set_names(c("url", "node")) %>%
  mutate(out = map2(url, node, ~ read_html(.x) %>% html_nodes(.y) %>% html_text())) %>%
  unnest()
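Note that expand.grid crosses every url with every node. If your pubs data frame already pairs each url with its own node row by row, as described in the question, you can map over the rows directly instead (a sketch, assuming the column names from the pubs example above):
library(tidyverse)
library(rvest)

pubs %>%
  mutate(out = map2(site, html.node, ~ read_html(.x) %>% html_nodes(.y) %>% html_text())) %>%
  unnest(out)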

Scraping from one URL to another URL in R

My question is about R being able to read a URL link. The example that I use is solely for illustration purposes. Say that I have the following webpage that I want to read (chosen at random):
https://www.mcdb.ucla.edu/faculty
It has a list of professor names, each with a URL link. I am trying to build a script which can read a webpage like this, access each URL link, and search for certain keywords regarding their publications.
I currently have a script that scans an individual page for certain keywords, which I post below.
library(rvest)
library(dplyr)
library(tidyverse)
library(stringr)

prof <- readLines("https://www.mcdb.ucla.edu/faculty/jsadams")
text_df <- data_frame(text = prof)
text_df <- as.data.frame.table(text_df)
keywords <- c("nonskeletal", "antimicrobial response")
text_df %>%
  filter(str_detect(text, keywords[1]) | str_detect(text, keywords[2]))
This should return publications 1, 2 and 4 under the section "Selected Publications" on the professor's webpage.
Now I am trying to get R to read each professor's page from the faculty link (https://www.mcdb.ucla.edu/faculty) and see if each professor has publications containing the keywords listed above.
Read https://www.mcdb.ucla.edu/faculty
Access each link and read each faculty member's page
Check whether the "keywords" are present
List the professor's publications or the text that contains the "keywords"
I have already been able to do this for each individual page, but I would prefer a loop or function so I do not have to copy and paste each professor's page URL each time.
Just a slight disclaimer: I have no connection with UCLA or the professor on that website; the professor URL I chose just happened to be the first one listed on the faculty page.
I'd approach this as follows. This is "quick and dirty" code, but hopefully provides a basis for something better.
First, you need the correct selectors to get the faculty names and the links to their pages. Create a data frame with that information:
library(dplyr)
library(rvest)
library(tidytext)
page <- read_html("https://www.mcdb.ucla.edu/faculty")
table1 <- page %>%
html_nodes(xpath = "///table[1]/tr/td/a")
names <- table1 %>%
html_text() %>%
unlist(use.names = FALSE)
links <- table1 %>%
html_attrs() %>%
unlist(use.names = FALSE)
data1 <- data.frame(name = names, href = links)
head(data1)
name href
1 John Adams /faculty/jsadams
2 Utpal Banerjee /faculty/banerjee
3 Siobhan Braybrook /faculty/siobhanb
4 Jau-Nian Chen /faculty/chenjn
5 Amander Clark /faculty/clarka
6 Daniel Cohn /faculty/dcohn
Next, you need a function that takes the values in the href column, fetches the staff page and looks for keywords. I took a different approach to yours, using tidytext to break all of the publications down into individual words and then counting rows where any of the keywords occur. This means that "antimicrobial response" is treated as two separate words, so you may want to do that differently (see the sketch at the end of this answer).
The function returns a count which is > 0 if any of the keywords were present.
get_pubs <- function(href) {
  page <- read_html(paste0("https://www.mcdb.ucla.edu", href))
  pubs <- data.frame(text = page %>%
                       html_nodes("div.mcdb-faculty-pubs p") %>%
                       html_text(),
                     stringsAsFactors = FALSE)
  pubs <- pubs %>%
    unnest_tokens(word, text)
  pubs %>%
    filter(word %in% c("nonskeletal", "antimicrobial", "response")) %>%
    nrow()
}
Now you can apply the function to each href:
data1 <- data1 %>%
mutate(count = sapply(href, function(x) get_pubs(x)))
Which faculty had at least one keyword in their publications?
data1 %>%
filter(count > 0)
name href count
1 John Adams /faculty/jsadams 9
2 Arjun Deb /faculty/adeb 1
3 Tracy Johnson /faculty/tljohnson 1
4 Chentao Lin /faculty/clin 1
5 Jeffrey Long /faculty/jeffalong 1
6 Matteo Pellegrini /faculty/matteop 1
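If you want to match "antimicrobial response" as an exact phrase (as in the original question) rather than as two separate tokens, a variant of get_pubs using stringr::str_detect on the full publication text might look like this (a sketch reusing the same selector; get_pubs_phrase is a name I've made up):
library(stringr)

get_pubs_phrase <- function(href) {
  page <- read_html(paste0("https://www.mcdb.ucla.edu", href))
  pubs <- page %>%
    html_nodes("div.mcdb-faculty-pubs p") %>%
    html_text()
  # count publications containing either keyword/phrase
  sum(str_detect(pubs, regex("nonskeletal|antimicrobial response", ignore_case = TRUE)))
}

# applied the same way as get_pubs:
# data1 <- data1 %>% mutate(count = sapply(href, get_pubs_phrase))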
