I'm trying to scrape the hotel reviews for a specific hotel on TripAdvisor, using rvest. The script has to scrape multiple pages.
When I execute the script in a loop, rvest sometimes returns vectors with empty values. This appears to be completely random. Does anyone have a fix for this?
I tried manually stepping through the script. When I go through it slowly it works most of the time, but it still occasionally pulls empty data.
# Web scraping
df <- data.frame()
x <- 0
for (i in 1:250) {
  # Build the URL for the current page of reviews (the offset increases by 5 per page)
  url <- paste0("https://www.tripadvisor.com/Hotel_Review-g295424-d7760386-Reviews-or", x,
                "-Hyatt_Regency_Dubai_Creek_Heights-Dubai_Emirate_of_Dubai.html")
  x <- x + 5

  # Review text
  reviews <- url %>%
    read_html() %>%
    html_nodes('.common-text-ReadMore__content--2X4LR') %>%
    html_node('.hotels-hotel-review-community-content-review-list-parts-ExpandableReview__reviewText--2OVqJ span') %>%
    html_text()

  # The rating is encoded in the span's class name; the 4th underscore-separated token is the score
  rating <- url %>%
    read_html() %>%
    html_nodes(".hotels-hotel-review-community-content-review-list-parts-RatingLine__bubbles--3d2Be span") %>%
    html_attr("class")
  rating <- sapply(strsplit(rating, "_"), `[`, 4) %>%
    as.numeric()

  # Append this page's results to the data frame
  if (nrow(df) == 0) {
    df <- data.frame(reviews[!is.na(reviews)], rating, stringsAsFactors = FALSE)
  } else {
    df <- rbind(df, data.frame(reviews[!is.na(reviews)], rating, stringsAsFactors = FALSE))
  }
}
I expect to scrape all the reviews until my for loop stops; I should end up with a data frame of at least 100 reviews.
I've found a workaround: I put the review scrape inside a repeat loop and keep retrying as long as the vector hasn't been filled.
The code takes a bit longer to execute, but it gets the job done.
repeat {
  Review <- url %>%
    read_html() %>%
    html_nodes('.common-text-ReadMore__content--2X4LR') %>%
    html_node('.hotels-hotel-review-community-content-review-list-parts-ExpandableReview__reviewText--2OVqJ span') %>%
    html_text()
  if (length(Review) >= 1) {
    break
  }
}
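If the empty results come from the page occasionally being served before the review markup is present, a bounded retry with a short pause may be safer than an open-ended repeat, which can hang forever if the selector never matches. Here is a rough sketch along those lines (the helper name, retry count, and pause length are my own assumptions, not part of the original code):
library(rvest)

# Hypothetical helper: retry the review scrape a limited number of times,
# pausing between attempts instead of looping indefinitely
scrape_reviews <- function(url, max_tries = 5, pause = 2) {
  for (attempt in seq_len(max_tries)) {
    result <- url %>%
      read_html() %>%
      html_nodes('.common-text-ReadMore__content--2X4LR') %>%
      html_node('.hotels-hotel-review-community-content-review-list-parts-ExpandableReview__reviewText--2OVqJ span') %>%
      html_text()
    if (length(result) >= 1) return(result)  # got data, stop retrying
    Sys.sleep(pause)                         # wait before the next attempt
  }
  character(0)  # give up after max_tries
}
Inside the main loop this would be called as reviews <- scrape_reviews(url), keeping the rest of the handling the same.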
I'm trying to scrape the government release calendar (https://www.gov.uk/government/statistics) and use rvest's follow_link functionality to go to each publication link and scrape text from the next page. I have this working for a single page of results (40 publications are displayed per page), but I can't get a loop to work so that I can run the code over all the publications listed.
This is the code I run first to get the list of publications (just from the first 10 pages of results):
#Loading the packages
library('rvest')
library('dplyr')
library('tm')

#######PUBLISHED RELEASES################
###function to add a number after 'page=' in the url to loop over all pages of published releases (only 40 publications per page)
###check the site and see how many pages you want to scrape, to cover the months of interest

##Titles of publications - creates a list
publishedtitles <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
  function(url_base){
    url_base %>% read_html() %>%
      html_nodes('h3 a') %>%
      html_text()
  })

##Dates of publications
publisheddates <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
  function(url_base){
    url_base %>% read_html() %>%
      html_nodes('.public_timestamp') %>%
      html_text()
  })

##Organisations
publishedorgs <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
  function(url_base){
    url_base %>% read_html() %>%
      html_nodes('.organisations') %>%
      html_text()
  })

##Links to publications
publishedpartial_links <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
  function(url_base){
    url_base %>% read_html() %>%
      html_nodes('h3 a') %>%
      html_attr('href')
  })

#Check all lists are the same length - if not, have to deal with missings before the next step
# length(publishedtitles)
# length(publisheddates)
# length(publishedorgs)
# length(publishedpartial_links)
#str(publishedorgs)

#Combining all the lists to form a data frame
published <- data.frame(Title = unlist(publishedtitles),
                        Date = unlist(publisheddates),
                        Organisation = unlist(publishedorgs),
                        PartLinks = unlist(publishedpartial_links))

#Adding prefix to partial links, to turn them into full URLs
published$Links <- paste0("https://www.gov.uk", published$PartLinks)

#Drop the partial links column
keeps <- c("Title", "Date", "Organisation", "Links")
published <- published[keeps]
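As a quick way to do the length check mentioned in the comments, something like this would compare how many items each list produced per results page (an illustrative snippet, not part of the original code):
# Each row is one results page; all four counts should be 40
data.frame(titles = lengths(publishedtitles),
           dates  = lengths(publisheddates),
           orgs   = lengths(publishedorgs),
           links  = lengths(publishedpartial_links))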
Then I want to run something like the below, but over all pages of results. I've run this code manually, changing the parameters for each page, so I know it works.
session1 <- html_session("https://www.gov.uk/government/statistics?page=1")

list1 <- list()

for(i in published$Title[1:40]){
  nextpage1 <- session1 %>% follow_link(i) %>% read_html()
  list1[[i]] <- nextpage1 %>%
    html_nodes(".grid-row") %>% html_text()
  df1 <- data.frame(text = list1)
  df1 <- as.data.frame(t(df1))
}
So the above would need page=1 in the html_session call to change, as well as published$Title[1:40]. I'm struggling to create a function or loop that includes both variables.
I think I should be able to do this using lapply:
df <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
  function(url_base){
    for(i in published$Title[1:40]){
      nextpage1 <- url_base %>% follow_link(i) %>% read_html()
      list1[[i]] <- nextpage1 %>%
        html_nodes(".grid-row") %>% html_text()
    }
  }
)
But I get the error:
Error in follow_link(., i) : is.session(x) is not TRUE
I've also tried other methods of looping and turning it into a function but didn't want to make this post too long!
Thanks in advance for any suggestions and guidance :)
It looks like you may just need to start a session inside the lapply function. In the last chunk of code, url_base is simply a text string that gives the base URL. Would something like this work:
df <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
  function(url_base){
    list1 <- list()
    for(i in published$Title[1:40]){
      # start a session on the listing page, then follow the link for this title
      tmpSession <- html_session(url_base)
      nextpage1 <- tmpSession %>% follow_link(i) %>% read_html()
      list1[[i]] <- nextpage1 %>%
        html_nodes(".grid-row") %>% html_text()
    }
    list1  # return this page's results so lapply collects them
  }
)
To change published$Title[1:40] for each iteration of the lapply function, you could make objects that hold the lower and upper bounds of the indices:
lowers <- cumsum(c(1, rep(40, 9)))  # 1, 41, 81, ..., 361
uppers <- cumsum(rep(40, 10))       # 40, 80, 120, ..., 400
Then, you could include those in the call to lapply:
df <- lapply(1:10, function(j){
  url_base <- paste0('https://www.gov.uk/government/statistics?page=', j)
  list1 <- list()
  for(i in published$Title[lowers[j]:uppers[j]]){
    tmpSession <- html_session(url_base)
    nextpage1 <- tmpSession %>% follow_link(i) %>% read_html()
    list1[[i]] <- nextpage1 %>%
      html_nodes(".grid-row") %>% html_text()
  }
  list1  # return this page's results
})
Not sure if this is what you want or not; I might have misunderstood the things that are supposed to be changing.
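If that does return what you expect, df will be a list with one element per results page, each holding a named list of scraped text per publication title. A possible way to collapse it into a single data frame (just a sketch; the column names are my own):
# Flatten the per-page lists into one Title / Text data frame
flat <- unlist(df, recursive = FALSE)   # one element per publication
scraped <- data.frame(Title = names(flat),
                      Text  = vapply(flat, paste, character(1), collapse = " "),
                      stringsAsFactors = FALSE)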
So I'm trying to scrape data from a site that lists the clubs at my school. I've got a good script going that scrapes the surface-level data from the site; however, I can get more data by clicking the "more information" link at each club, which leads to the club's profile page. I would like to scrape the data from that page (specifically the Facebook link).
Below you'll see my current attempt at this.
url <- 'https://uws-community.symplicity.com/index.php?s=student_group'

page <- html_session(url)

get_table <- function(page, count) {
  # find group names
  name_text <- html_nodes(page, ".grpl-name a") %>% html_text()
  df <- data.frame(name_text, stringsAsFactors = FALSE)

  # find text description
  desc_text <- html_nodes(page, ".grpl-purpose") %>% html_text()
  df$desc_text <- trimws(desc_text)

  # find emails:
  # find the parent nodes with html_nodes,
  # then find the contact information from each parent using html_node
  email_nodes <- html_nodes(page, "div.grpl-grp") %>% html_node(".grpl-contact a") %>% html_text()
  df$emails <- email_nodes

  category_nodes <- html_nodes(page, "div.grpl-grp") %>% html_node(".grpl-type") %>% html_text()
  df$category <- category_nodes

  pic_nodes <- html_nodes(page, "div.grpl-grp") %>% html_node(".grpl-logo img") %>% html_attr("src")
  df$logo <- paste0("https://uws-community.symplicity.com/", pic_nodes)

  more_info_nodes <- html_nodes(page, ".grpl-moreinfo a") %>% html_attr("href")
  df$more_info <- paste0("https://uws-community.symplicity.com/", more_info_nodes)

  sub_page <- page %>% follow_link(css = ".grpl-moreinfo a")
  df$fb <- html_node(sub_page, xpath = '//*[@id="dnf_class_values_student_group__facebook__widget"]') %>% html_text()

  if(count != 44) {
    return(rbind(df, get_table(page %>% follow_link(css = ".paging_nav a:last-child"), count + 1)))
  } else {
    return(df)
  }
}

RSO_data <- get_table(page, 0)
The current error I'm getting is:
Error in `$<-.data.frame`(`*tmp*`, "logo", value = "https://uws-community.symplicity.com/") :
replacement has 1 row, data has 0
I know I need to make a function that goes through each element and follows the link, then mapply that function over the data frame df. However, I don't know how to go about writing that function so that it works correctly.
Your error says that you are trying to combine two different dimensions: the replacement value has one row, while your data frame has zero rows. Add page <- html_session(url) inside your function.
This is a reproducible example of your error message:
x <- data.frame()
x[1] <- c(1)
I haven't checked your code in detail, but the error is in there; you have to go through it step by step. You will find the place where you've created an empty data.frame and then tried to assign a value to it.
Good luck
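On the second part of the question, visiting each club's profile page for the Facebook link could be done with a small helper applied over the more_info column, instead of a single follow_link per page. A rough, untested sketch (the helper name is my own; the XPath is taken from the question):
library(rvest)

# Hypothetical helper: read one club profile page and return its Facebook field
get_fb <- function(more_info_url) {
  more_info_url %>%
    read_html() %>%
    html_node(xpath = '//*[@id="dnf_class_values_student_group__facebook__widget"]') %>%
    html_text()
}

# Apply it to every more-info link collected in the data frame
df$fb <- vapply(df$more_info, get_fb, character(1))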
I am trying to scrape the reviews for a product in R using the URL below. When I run the code below, I am able to scrape a single review.
comment <- read_html("https://www.influenster.com/reviews/chobani-greek-yogurt")

comment %>% html_node(".content-item-text") %>% html_text()
comment %>% html_node(".date") %>% html_text()
However, when I use the below code for scraping multiple comments on multiple pages, it returns NULL.
reviews <- lapply(paste0('https://www.influenster.com/reviews/chobani-greek-yogurt?review_page=2', 2:50),
  function(url){
    url %>% read_html() %>%
      html_nodes(".content-item-text review-text") %>%
      html_nodes(".date") %>%
      html_text()
  })
Does the following code achieve what you are looking for?
comment <- read_html("https://www.influenster.com/reviews/chobani-greek-yogurt")

reviews <- c()
dates <- c()

# first page
for(i in 1:10){
  reviews <- c(reviews,
               comment %>%
                 html_node(paste0(".review-item:nth-child(", i, ") .review-text")) %>%
                 html_text())
  dates <- c(dates,
             comment %>%
               html_node(paste0(".review-item:nth-child(", i, ") .date")) %>%
               html_text())
}

# pages 2 to 50
for(j in 2:50){
  comment <- read_html(paste0("https://www.influenster.com/reviews/chobani-greek-yogurt?review_page=", j))
  for(i in 1:10){
    reviews <- c(reviews,
                 comment %>%
                   html_node(paste0(".review-item:nth-child(", i, ") .review-text")) %>%
                   html_text())
    dates <- c(dates,
               comment %>%
                 html_node(paste0(".review-item:nth-child(", i, ") .date")) %>%
                 html_text())
  }
}
Just note that I am in the UK, and the extracted dates seem to be adjusted (6 hours behind what is stated on the site).
Furthermore, apologies for the multiple loops; I am not yet very quick at translating loops into the apply functions :)
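For what it's worth, the nested loops above could be translated to the apply style along these lines (a sketch using the same selectors and page counts as the code above; not independently tested):
library(rvest)

# Scrape the ten visible reviews and dates from one parsed page
scrape_page <- function(doc) {
  data.frame(
    review = sapply(1:10, function(i)
      doc %>% html_node(paste0(".review-item:nth-child(", i, ") .review-text")) %>% html_text()),
    date = sapply(1:10, function(i)
      doc %>% html_node(paste0(".review-item:nth-child(", i, ") .date")) %>% html_text()),
    stringsAsFactors = FALSE
  )
}

# Page 1 has its own URL; pages 2-50 take a review_page query parameter
page_urls <- c("https://www.influenster.com/reviews/chobani-greek-yogurt",
               paste0("https://www.influenster.com/reviews/chobani-greek-yogurt?review_page=", 2:50))

all_reviews <- do.call(rbind, lapply(page_urls, function(u) scrape_page(read_html(u))))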
I am trying to scrape the results of the 2012-2016 Stockholm Marathon races. I am able to do so using the code outlined below, but every time I've scraped the results for one year I have to manually change the URL to capture the next year.
This bothers me, as the only thing that needs to change is the year (the 2012 part) in http://results.marathon.se/2012/?content=list&event=STHM&num_results=250&page=1&pid=list&search[sex]=M&lang=SE.
How can I modify the code below so that it scrapes the results from each year, outputting everything into a single data frame that also includes a column indicating the year to which each observation belongs?
library(dplyr)
library(rvest)
library(tidyverse)

# Find the total number of pages to scrape
tot_pages <- read_html('http://results.marathon.se/2012/?content=list&event=STHM&num_results=250&page=1&pid=list&search[sex]=M&lang=EN') %>%
  html_nodes('a:nth-child(6)') %>% html_text() %>% as.numeric()

# Store the URLs in a vector
URLs <- sprintf('http://results.marathon.se/2012/?content=list&event=STHM&num_results=250&page=%s&pid=list&search[sex]=M&lang=EN', 1:tot_pages)

# Create a progress bar
pb <- progress_estimated(tot_pages, min = 0)

# Create a function to scrape the name and finishing time from each page
getdata <- function(URL) {
  pb$tick()$print()
  pg <- read_html(URL)
  html_nodes(pg, 'tbody td:nth-child(3)') %>% html_text() %>% as_tibble() %>% set_names(c('Name')) %>%
    mutate(finish_time = html_nodes(pg, 'tbody .right') %>% html_text())
}

# Map everything into a data frame
map_df(URLs, getdata) -> results
You can use lapply to do this:
library(dplyr)
library(rvest)
library(tidyverse)

# make a vector of the years you want
years <- seq(2012, 2016)

# now use lapply to iterate your code over those years
Results.list <- lapply(years, function(x) {

  # make a target url with the relevant year
  link <- sprintf('http://results.marathon.se/%s/?content=list&event=STHM&num_results=250&page=1&pid=list&search[sex]=M&lang=EN', x)

  # Find the total number of pages to scrape
  tot_pages <- read_html(link) %>%
    html_nodes('a:nth-child(6)') %>% html_text() %>% as.numeric()

  # Store the URLs in a vector
  URLs <- sprintf('http://results.marathon.se/%s/?content=list&event=STHM&num_results=250&page=%s&pid=list&search[sex]=M&lang=EN', x, 1:tot_pages)

  # Create a progress bar
  pb <- progress_estimated(tot_pages, min = 0)

  # Create a function to scrape the name and finishing time from each page
  getdata <- function(URL) {
    pb$tick()$print()
    pg <- read_html(URL)
    html_nodes(pg, 'tbody td:nth-child(3)') %>% html_text() %>% as_tibble() %>% set_names(c('Name')) %>%
      mutate(finish_time = html_nodes(pg, 'tbody .right') %>% html_text())
  }

  # Map everything into a data frame
  map_df(URLs, getdata) -> results

  # add an id column indicating the year
  results$year <- x

  return(results)
})

# now collapse the resulting list into one tidy df
Results <- bind_rows(Results.list)
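As a quick sanity check after the scrape (assuming it runs through), the number of captured finishers per year can be tallied with dplyr, which is already loaded:
# How many rows were captured for each year?
Results %>% count(year)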
I am using rvest to scrape a website. It works, but it is highly inefficient, and I can't figure out how to make it work better.
url is a list of over 10,000 URLs.
number <- sapply(url, function(x)
  read_html(x) %>%
    html_nodes(".js-product-artnr") %>%
    html_text())

price_new <- sapply(url, function(x)
  read_html(x) %>%
    html_nodes(".product-page__price__new") %>%
    html_text())

price_old <- sapply(url, function(x)
  read_html(x) %>%
    html_nodes(".product-page__price__old") %>%
    html_text())
The problem with the above is that rvest visits the 10,000 URLs to get the first node set, ".js-product-artnr", then visits the same 10,000 URLs again for the second node set, and so on. In the end I expect to need about 10 different nodes from these 10,000 pages. Getting them one by one and combining them into a data frame later on takes way too long; there must be a better way.
I am looking for something like the below, to get all the information in one pass:
info <- sapply(url, function(x)
  read_html(x) %>%
    html_nodes(".js-product-artnr") %>%
    html_nodes(".product-page__price__new") %>%
    html_nodes(".product-page__price__old") %>%
    html_text())
This works for me.
func <- function(url){
  # read the page once...
  sample <- read_html(url)
  # ...then pull each node set from the same parsed document
  scrape1 <- html_nodes(sample, ".js-product-artnr") %>%
    html_text()
  scrape2 <- html_nodes(sample, ".product-page__price__new") %>%
    html_text()
  scrape3 <- html_nodes(sample, ".product-page__price__old") %>%
    html_text()
  df <- cbind(scrape1, scrape2, scrape3)
  final_df <- as.data.frame(df)
  return(final_df)
}

data <- lapply(urls_all, func)
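To end up with a single table, the list of per-URL data frames returned by lapply can then be stacked, for example (this assumes every page yields the same number of rows for all three selectors, which the cbind above already requires):
# Stack the per-URL data frames into one
all_products <- do.call(rbind, data)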