I have a data frame pubs with two columns: url and html.node. I want to write a loop that reads each url, retrieves the HTML contents, extracts the information indicated by the html.node column, and accumulates it in a data frame or list.
All URLs are different, and all html nodes are different. My code so far is:
library(rvest)

score <- vector()
k <- 1
for (r in 1:nrow(pubs)) {
  art.url <- pubs[r, 1]   # column 1 contains the URL
  art.node <- pubs[r, 2]  # column 2 contains the html nodes as characters
  art.contents <- read_html(art.url)
  score <- art.contents %>% html_nodes(art.node) %>% html_text()
  k <- k + 1
  print(score)
}
I appreciate your help.
First of all, be sure that each site you're going to scrape allows you to scrape its data; you can run into legal issues if you break the rules.
(Note: I've used only http://toscrape.com/, a sandbox site for scraping practice, since you did not provide your data.)
After that, you can proceed as follows; hope it helps:
# first, your data; I assume they look something like this
pubs <- data.frame(site = c("http://quotes.toscrape.com/",
                            "http://quotes.toscrape.com/"),
                   html.node = c(".text", ".author"),
                   stringsAsFactors = FALSE)
Then the loop you asked for:
library(rvest)

# an empty list, to be filled with the scraped data
empty_list <- list()

# fill the list with the scraped data
for (i in 1:nrow(pubs)) {
  art.url <- pubs[i, 1]   # choose the site, as you did
  art.node <- pubs[i, 2]  # choose the node, as you did
  # scrape it!
  empty_list[[i]] <- read_html(art.url) %>% html_nodes(art.node) %>% html_text()
}
The result is a list. With:
names(empty_list) <- pubs$site
you attach the name of the site to each element of the list, giving:
$`http://quotes.toscrape.com/`
[1] "“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”"
[2] "“It is our choices, Harry, that show what we truly are, far more than our abilities.”"
[3] "“There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.”"
[4] "“The person, be it gentleman or lady, who has not pleasure in a good novel, must be intolerably stupid.”"
[5] "“Imperfection is beauty, madness is genius and it's better to be absolutely ridiculous than absolutely boring.”"
[6] "“Try not to become a man of success. Rather become a man of value.”"
[7] "“It is better to be hated for what you are than to be loved for what you are not.”"
[8] "“I have not failed. I've just found 10,000 ways that won't work.”"
[9] "“A woman is like a tea bag; you never know how strong it is until it's in hot water.”"
[10] "“A day without sunshine is like, you know, night.”"
$`http://quotes.toscrape.com/`
[1] "Albert Einstein" "J.K. Rowling" "Albert Einstein" "Jane Austen" "Marilyn Monroe" "Albert Einstein" "André Gide"
[8] "Thomas A. Edison" "Eleanor Roosevelt" "Steve Martin"
It should, of course, work with different sites and different nodes.
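If you would rather end up with a data frame than a list (as your question mentions), a minimal sketch using the pubs and empty_list objects from above is to stack the scraped vectors together with their source site and node:

# a sketch: turn the named list into one long data frame,
# one row per scraped text element
scraped_df <- do.call(rbind, lapply(seq_along(empty_list), function(i) {
  data.frame(site = pubs$site[i],
             node = pubs$html.node[i],
             text = empty_list[[i]],
             stringsAsFactors = FALSE)
}))

head(scraped_df)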
You could also use map from the purrr package instead of a loop:
library(rvest)
library(tidyverse)  # purrr, dplyr, tidyr, tibble

expand.grid(c("http://quotes.toscrape.com/", "http://quotes.toscrape.com/tag/inspirational/"), # vector of urls
            c(".text", ".author"),                                                             # vector of nodes
            stringsAsFactors = FALSE) %>% # assuming the same nodes are relevant for all urls; otherwise you would need something like a join
  as_tibble() %>%
  set_names(c("url", "node")) %>%
  mutate(out = map2(url, node, ~ read_html(.x) %>% html_nodes(.y) %>% html_text())) %>%
  unnest(out)
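One caveat with both versions: a single URL that fails to load will abort the whole run. If that is a concern, you could wrap the scraping step in purrr::possibly, roughly as in the sketch below (scrape_one is just an illustrative helper name):

library(rvest)
library(purrr)

# returns character(0) instead of throwing an error when a URL or node fails
scrape_one <- possibly(
  function(url, node) read_html(url) %>% html_nodes(node) %>% html_text(),
  otherwise = character(0)
)

# drop-in replacement for the map2() call in the pipeline above:
#   mutate(out = map2(url, node, scrape_one))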
I'm trying to scrape some data from websites with rvest. I have a tibble of thousands of URLs, and I need to extract one piece of data from each URL. In order to not be blocked by the main site I'm visiting, I need to pause about 2 minutes after every 200 URLs I visit (learned this via trial and error). I'm wondering how I can use Sys.sleep to do this.
My current code is below. I visit each URL in url_tibble and pull the data I need (the ".verified" node).
# Function to extract data
get_data <- function(x) {
  read_html(x) %>%
    html_nodes(".verified") %>%
    html_attr("href")
}

# Extract data
data_I_need <- url_tibble %>%
  mutate(profile = map(url, ~ get_data(.x)))
This code works for a limited number of URLs, until I get blocked for trying to scrape the site too quickly. To avoid being blocked, I'd like to pause for 2 minutes after every 200 URLs using Sys.sleep. Can you help me figure out how to do this?
The best recommendation I found was the solution posted on Recommendation when using Sys.sleep() in R with rvest, but I can't figure out how to integrate it with my code. That solution uses a loop instead of map. I tried something like this:
output <- vector(length = length(url_tibble$url))
for (i in 1:length(url_tibble$url)) {
  data_I_need <- read_html(url_tibble$url[i]) %>%
    html_nodes(".verified") %>%
    html_attr("href")
  output[i] <- data_I_need
  if ((i %% 200) == 0) {
    Sys.sleep(160)
  }
}
However, this does not work either, and I receive an error message.
We can use lapply in lieu of a loop. Also, I have added https:// to each URL so that read_html recognises them as links rather than local files. You should replace 2 with 200 for the actual data.
lapply(1:length(url_tibble$url), function(x) {
  if (x %% 2 == 0) {
    print(paste0("Sleeping at ", x))
    Sys.sleep(20)
  }
  read_html(paste0("https://", url_tibble$url[x])) %>%
    html_nodes(".verified") %>%
    html_attr("href")
})
Output (truncated)
[1] "Sleeping at 2"
[1] "Sleeping at 4"
[1] "Sleeping at 6"
[[1]]
[1] "https://www.psychologytoday.com/us/therapists/aak-bright-start-rego-park-ny/936718"
[2] "https://www.psychologytoday.com/us/therapists/leslie-aaron-new-york-ny/148793"
[3] "https://www.psychologytoday.com/us/therapists/lindsay-aaron-frieman-new-york-ny/761657"
[4] "https://www.psychologytoday.com/us/therapists/fay-m-aaronson-brooklyn-ny/840861"
[5] "https://www.psychologytoday.com/us/therapists/anita-aasen-staten-island-ny/291614"
[6] "https://www.psychologytoday.com/us/therapists/aask-therapeutic-services-fishkill-ny/185423"
[7] "https://www.psychologytoday.com/us/therapists/amanda-abady-brooklyn-ny/935849"
[8] "https://www.psychologytoday.com/us/therapists/denise-abatemarco-new-york-ny/143678"
[9] "https://www.psychologytoday.com/us/therapists/raya-abat-robinson-new-york-ny/810730"
I am using rvest to (try to) scrape all the author affiliation data from a database of academic publications called RePEc. I have the authors' short IDs (author_reg), which I'm using to scrape the affiliation data. However, I have several columns indicating multiple authors (each of which I need the affiliation data for). When there aren't multiple authors, the cell has an NA value. Some of the columns are mostly NA values, so how do I alter my code so that it skips the NA values but doesn't delete them?
Here is the code I'm using:
library(rvest)
library(purrr)

df$author_reg <- c("paa6", "paa2", "paa1", "paa8", "pve266", "pya500", "NA", "NA")

http1 <- "https://ideas.repec.org/e/"
http2 <- "https://ideas.repec.org/f/"

df$affiliation_author_1 <- sapply(df$author_reg_1, function(x) {
  links = c(paste0(http1, x, ".html"), paste0(http2, x, ".html"))
  # here we try both links and store the results under attempts
  attempts = links %>% map(function(i) {
    try(read_html(i) %>% html_nodes("#affiliation h3") %>% html_text())
  })
  # the good ones will have class "character"; the failed ones, "try-error"
  gdlink = which(sapply(attempts, class) != "try-error")
  if (length(gdlink) > 0) {
    return(attempts[[gdlink[1]]])
  } else {
    return("True 404 error")
  }
})
Thanks in advance for your help!
Looking at the target links, you can try the following approach. First, scrape all links from https://ideas.repec.org/e/ and build the complete URLs. Then, check whether each link exists. (There are about 26,000 links under this URL, and I did not have time to check them all, so I used only 100 URLs in the demonstration below.) Finally, extract all existing links.
library(rvest)
library(httr)
library(tidyverse)

# Get all possible links from this webpage. There are 26665 links.
read_html("https://ideas.repec.org/e/") %>%
  html_nodes("td") %>%
  html_nodes("a") %>%
  html_attr("href") %>%
  .[grepl(x = ., pattern = "html")] -> x

# Create complete URLs.
mylinks1 <- paste("https://ideas.repec.org/e/", x, sep = "")

# For this demonstration I created a subset.
mylinks_samples <- mylinks1[1:100]

# Check if each URL exists or not. If FALSE, the link exists.
foo <- sapply(mylinks_samples, http_error)

# Using the logical vector, foo, extract the existing links.
urls <- mylinks_samples[!foo]
Then, for each link, I tried to extract the affiliation information. There are several spots with h3, so I specifically targeted the h3 elements that sit under the node with id = "affiliation" (via xpath). If there is no affiliation information, R returns character(0). When enframe() is applied, these elements are dropped. For instance, pab127 does not have any affiliation information, so there is no entry for that link.
lapply(urls, function(x) {
  read_html(x, encoding = "UTF-8") %>%
    html_nodes(xpath = '//*[@id="affiliation"]') %>%
    html_nodes("h3") %>%
    html_text() %>%
    trimws() -> foo
  return(foo)
}) -> mylist
Then, I assigned names to mylist with the links and created a data frame.
names(mylist) <- sub(x = basename(urls), pattern = ".html", replacement = "")
enframe(mylist) %>%
  unnest(value)
name value
<chr> <chr>
1 paa1 "(80%) Institutt for ØkonomiUniversitetet i Bergen"
2 paa1 "(20%) Gruppe for trygdeøkonomiInstitutt for ØkonomiUniversitetet i Bergen"
3 paa2 "Department of EconomicsCollege of BusinessUniversity of Wyoming"
4 paa6 "Statistisk SentralbyråGovernment of Norway"
5 paa8 "Centraal Planbureau (CPB)Government of the Netherlands"
6 paa9 "(79%) Economic StudiesBrookings Institution"
7 paa9 "(21%) Brookings Institution"
8 paa10 "Helseøkonomisk Forskningsprogram (HERO) (Health Economics Research Programme)\nUniversitetet i Oslo (Unive~
9 paa10 "Institutt for Helseledelse og Helseökonomi (Institute of Health Management and Health Economics)\nUniversi~
10 paa11 "\"Carlo F. Dondena\" Centre for Research on Social Dynamics (DONDENA)\nUniversità Commerciale Luigi Boccon~
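Coming back to the NA part of your question: you can keep those rows and simply skip the lookup by short-circuiting inside the function you apply. A rough sketch along the lines of your own sapply is below (safe_affiliation is a hypothetical helper; for brevity it only tries the /e/ prefix and collapses multiple affiliations into one string):

library(rvest)

safe_affiliation <- function(x) {
  # skip missing IDs (real NA or the literal string "NA") without dropping the row
  if (is.na(x) || x == "NA") return(NA_character_)
  read_html(paste0("https://ideas.repec.org/e/", x, ".html"), encoding = "UTF-8") %>%
    html_nodes(xpath = '//*[@id="affiliation"]') %>%
    html_nodes("h3") %>%
    html_text() %>%
    trimws() %>%
    paste(collapse = "; ")
}

df$affiliation_author_1 <- sapply(df$author_reg_1, safe_affiliation)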
I'm trying to get the
Gender
Race or Ethnicity
Sexual orientation
Occupation
Nationality
from each site listed here: https://www.nndb.com/lists/494/000063305/
Here's an individual site so viewers can see the single page.
I'm trying to model my R code after this site but it's difficult because on the individual sites there aren't headings for Gender, for example. Can someone assist?
library(purrr)
library(rvest)
url_base <- "https://www.nndb.com/lists/494/000063305/"
b_dataset <- map_df(1:91, function(i) {
  page <- read_html(sprintf(url_base, i))
  data.frame(ICOname = html_text(html_nodes(page, ".name")))
})
I'll take you halfway there: it's not too difficult to figure out from here.
library(purrr)
library(rvest)
url_base <- "https://www.nndb.com/lists/494/000063305/"
First, the following will generate a list of A-Z surname list URLs, and then consequently each person's profile URLs.
## Gets A-Z links
all_surname_urls <- read_html(url_base) %>%
  html_nodes(".newslink") %>%
  html_attrs() %>%
  map(pluck(1, 1))

all_ppl_urls <- map(
  all_surname_urls,
  function(x) read_html(x) %>%
    html_nodes("a") %>%
    html_attrs() %>%
    map(pluck(1, 1))
) %>%
  unlist()

all_ppl_urls <- setdiff(
  all_ppl_urls[!duplicated(all_ppl_urls)],
  c(all_surname_urls, "http://www.nndb.com/")
)
You are correct: there are no separate headings for gender or, really, any of the other fields. You'll just have to use tools such as SelectorGadget to see which elements contain what you need. In this case it's simply p.
all_ppl_urls[1] %>%
  read_html() %>%
  html_nodes("p") %>%
  html_text()
The output will be
[1] "AKA Lee William Aaker"
[2] "Born: 25-Sep-1943Birthplace: Los Angeles, CA"
[3] "Gender: MaleRace or Ethnicity: WhiteOccupation: Actor"
[4] "Nationality: United StatesExecutive summary: The Adventures of Rin Tin Tin"
...
Although the output is not clean, things rarely are when web scraping; this is actually a relatively easy case. You can use a series of grepl and map calls to subset the contents you need and make a data frame out of them, as sketched below.
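For example, here is a rough sketch of that idea. parse_profile is a hypothetical helper; the label list comes from your question, and because the labels run together in the page text, the trick is to inject a separator before each known label and then split:

library(rvest)
library(purrr)

labels <- c("Gender", "Race or Ethnicity", "Sexual orientation",
            "Occupation", "Nationality", "Executive summary")

parse_profile <- function(p_text) {
  # keep only the paragraphs carrying the labelled fields
  txt <- paste(p_text[grepl("Gender:|Nationality:", p_text)], collapse = " ")
  # put a separator in front of every known label, then split into "Label: value" pieces
  for (lb in labels) txt <- gsub(paste0(lb, ":"), paste0("|", lb, ":"), txt, fixed = TRUE)
  pieces <- trimws(strsplit(txt, "|", fixed = TRUE)[[1]])
  pieces <- pieces[pieces != ""]
  out <- sub("^[^:]+: *", "", pieces)    # the values
  names(out) <- sub(":.*$", "", pieces)  # the labels
  out[intersect(labels, names(out))]
}

# e.g. for the first few people:
# map_df(all_ppl_urls[1:5], function(u) {
#   as.list(parse_profile(read_html(u) %>% html_nodes("p") %>% html_text()))
# })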
I have seen some similar questions, but none of the solutions are working for me. I'm trying to get the URLs of the links in each of the nodes, but the resulting list contains only missing values.
library(rvest)

beer <- read_html("https://www.beeradvocate.com/lists/top/")

beerLink <- beer %>%
  html_nodes(".hr_bottom_light a b") %>%
  html_attr('href') %>%
  as.data.frame()
Any help would be appreciated.
b is a descendant node; it's the a tag that carries the links you want. You could look up descendant selector patterns (I am only familiar with the xpath versions, and it seems you prefer CSS; there's a sketch of that after the output below), but this alternative gets the links you want without them:
library(rvest)

# using a stub to facilitate accessing the URLs later with
# an absolute address
stub = 'https://www.beeradvocate.com'
beer <- read_html(paste0(stub, '/lists/top/'))

lnx = beer %>% html_nodes('a') %>% html_attr('href') %>%
  # this pattern matches beer profile links --
  # the first .* matches a brewery ID, the second .*
  # a beer ID within that brewery
  grep('profile/.*/.*/', ., value = TRUE) %>%
  paste0(stub, .)

head(lnx)
# [1] "https://www.beeradvocate.com/beer/profile/23222/78820/"
# [2] "https://www.beeradvocate.com/beer/profile/28743/136936/"
# [3] "https://www.beeradvocate.com/beer/profile/28743/146770/"
# [4] "https://www.beeradvocate.com/beer/profile/28743/87846/"
# [5] "https://www.beeradvocate.com/beer/profile/863/21690/"
# [6] "https://www.beeradvocate.com/beer/profile/17981/110635/"
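For reference, an xpath version of the descendant idea mentioned above might look roughly like this (a sketch; whether the class sits on the cell exactly like this depends on the page markup, so you may still want the grep filter):

library(rvest)

beer <- read_html("https://www.beeradvocate.com/lists/top/")

# pick <a> elements that contain a <b> child inside the .hr_bottom_light cells,
# then take their href attribute
beer %>%
  html_nodes(xpath = "//*[contains(@class, 'hr_bottom_light')]//a[b]") %>%
  html_attr("href") %>%
  head()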
Also, Abraxas is an amazing beer and Santana album
I want to download Amazon book review counts, but I have one problem.
I tried the following:
library(rvest)
url <- paste0("http://www.amazon.com/s/ref=lp_4_nr_p_72_3?",
              "fst=as%3Aoff&rh=n%3A283155%2Cn%3A%211000%2C",
              "n%3A4%2Cp_72%3A1250224011&bbn=4&ie=UTF8&qid",
              "=1440446201&rnid=1250219011")
html <- html(url)

Reviews <- try({html_nodes(html, "#s-results-list-atf .a-text-normal:nth-child(2)") %>%
  html_text()}, silent = TRUE)
But I only get 4 review counts in my R console instead of 12 (the selector comes from SelectorGadget). What did I do wrong?
When I tried to download the books' names I didn't have the same problem; it happens only with the review counts.
Book <- try({html_nodes(html, ".s-access-title") %>%
  html_text()}, silent = TRUE)
Page link: Amazon Page
This is probably not the canonical approach, but here's what I did that works:
# via Inspect Element in Chrome, the relevant info is
# in an <a> tag with class 'a-size-small a-link-normal a-text-normal',
# but this does not uniquely identify the review counts
# (e.g., the "$12.00 Buy used & new..." bit is also there),
# so we take a step up and find that both the rating
# and the review count are stored in a <div> tag
# with class 'a-row a-spacing-mini'
x <- html(url) %>% html_nodes("div.a-row.a-spacing-mini") %>%
  html_nodes("a.a-size-small.a-link-normal.a-text-normal") %>%
  html_text
# upon inspection of x, we can see that the relevant numbers
# always appear by themselves, thus:
> x[!is.na(as.integer(gsub(",","",x)))]
[1] "168" "232" "1,607" "2,226" "1,060" "25" "731" "2,374" "345" "7,205"
[11] "1,134" "1,137"