R code for logging in to an online newspaper while scraping URLs?

I have a list of URLs, and I am trying to scrape their content for my research in R. I scrape all content with read_html in a for loop. The problem is that I need to log in to the newspaper before I can access the articles, so I am trying to log in with my ID and password in order to scrape the news content and date for each URL found in the search results.
Can I somehow add my login information to the for loop so that it can access the content of the news articles?
library(rvest)
library(stringr)
library(purrr)
library(readbulk)
library(dplyr)

# Read URLs
urls <- read_bulk("C:/Users/XXXX", extension = ".csv") %>%
  dplyr::distinct(link)  # remove duplicates

# For loop
titles <- c()
text <- c()
url <- c()
date <- c()

for (i in 1:nrow(urls)) {
  # read the page once per URL
  data <- read_html(urls$link[i])

  # article body: all <p> tags collapsed into one string
  body <- data %>%
    html_nodes("p") %>%
    html_text() %>%
    str_c(collapse = " ")
  text <- append(text, body)

  # headline
  header <- data %>%
    html_node("title") %>%
    html_text()
  titles <- append(titles, rep(header, each = length(body)))

  # publication date (see the HTML source for the <time> tag)
  time <- data %>%
    html_nodes("time") %>%
    html_text() %>%
    str_c(collapse = " ")
  date <- append(date, rep(time, each = length(time)))

  url <- append(url, rep(urls$link[i], each = length(body)))
  print(i)
}

data <- data.frame(Headline = titles, Body = text, Date = date, Url = url)  # as dataframe
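One way to handle this, assuming the newspaper uses a standard HTML login form (the login URL, the form index, and the field names "username"/"password" below are placeholders you would have to adapt to the actual page), is to open a session with rvest, submit the form once, and then reuse the authenticated session inside the loop instead of calling read_html() directly:

library(rvest)
library(stringr)

# open a session on the login page (placeholder URL)
login_url <- "https://www.example-newspaper.com/login"
s <- session(login_url)

# pull the login form and fill in the credentials
# (form index and field names are assumptions; inspect the real form first)
login_form <- html_form(s)[[1]]
filled_form <- html_form_set(login_form, username = "my_id", password = "my_password")

# submitting returns a session that carries the login cookies
s <- session_submit(s, filled_form)

# inside the loop, navigate with the authenticated session
for (i in 1:nrow(urls)) {
  page <- session_jump_to(s, urls$link[i])
  body <- page %>%
    html_elements("p") %>%
    html_text() %>%
    str_c(collapse = " ")
  # ... collect titles, dates and urls as in the original loop
}

This only works when the login really is a plain form post; if the site authenticates through JavaScript, a browser-driven approach such as RSelenium would be needed instead.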

Related

Rvest Pulls Empty Tables

The site I use to scrape data has changed and I'm having issues pulling the data into table format. I tried the two different approaches below to get the tables, but both return blanks instead of tables.
I'm a novice when it comes to scraping and would appreciate the expertise of the group. Should I look for other solutions in rvest, or try to learn a package like RSelenium?
https://www.pgatour.com/stats/detail/02675
Scrape for Multiple Links
library("dplyr")
library("purr")
library("rvest")
df23 <- expand.grid(
stat_id = c("02568","02674", "02567", "02564", "101")
) %>%
mutate(
links = paste0(
'https://www.pgatour.com/stats/detail/',
stat_id
)
) %>%
as_tibble()
#replaced tournament_id with stat_id
get_info <- function(link, stat_id){
data <- link %>%
read_html() %>%
html_table() %>%
.[[2]]
}
test_main_stats <- df23 %>%
mutate(tables = map2(links, stat_id, possibly(get_info, otherwise = tibble())))
test_main_stats <- test_main_stats %>%
unnest(everything())
Alternative Code
url <- read_html("https://www.pgatour.com/stats/detail/02568")
test1 <- url %>%
  html_nodes(".css-8atqhb") %>%
  html_table()
This page uses JavaScript to create the table, so rvest will not work directly on it. But if you examine the page's source code, all of the data is stored in JSON format inside a <script> node.
The code below finds that node and converts the JSON into a list. The answer variable holds the main table, but there is a wealth of other information contained in the JSON data structure.
# read page
library(rvest)
page <- read_html("https://www.pgatour.com/stats/detail/02675")

# find the script with the correct id attribute and strip the HTML code
datascript <- page %>%
  html_elements(xpath = ".//script[@id='__NEXT_DATA__']") %>%
  html_text()

# convert from JSON
output <- jsonlite::fromJSON(datascript)

# explore the output
str(output)

# get the main table
answer <- output$props$pageProps$statDetails$rows
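To tie this back to the multi-link setup in the question, the same steps can be wrapped in a function and mapped over the generated links; a sketch, assuming the JSON path to the rows is the same for every stat page:

library(rvest)
library(purrr)
library(jsonlite)

get_stat_rows <- function(link) {
  page <- read_html(link)
  # grab the embedded JSON payload
  datascript <- page %>%
    html_elements(xpath = ".//script[@id='__NEXT_DATA__']") %>%
    html_text()
  output <- fromJSON(datascript)
  # same path as above; assumed to be identical across stat pages
  output$props$pageProps$statDetails$rows
}

stat_ids <- c("02568", "02674", "02567", "02564", "101")
links <- paste0("https://www.pgatour.com/stats/detail/", stat_ids)

# possibly() keeps a single failing page from stopping the whole run
all_stats <- map(links, possibly(get_stat_rows, otherwise = NULL))
names(all_stats) <- stat_ids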

R webscraper is not outputting pdf text in one row

I've been web scraping articles in R from the Oxford journals and want to grab the full text of specific articles. All articles have a pdf link, so I've been trying to pull the pdf link and scrape the entire text into a csv. The full text should fit into 1 row, but the output in the csv file shows one article spread over 11 rows. How can I fix this issue?
The code is below:
####install.packages("rvest")
library(rvest)
library(RCurl)
library(XML)
library(stringr)
#for Fulltext to read pdf
####install.packages("pdftools")
library(pdftools)
fullText <- function(parsedDocument){
endLink <- parsedDocument %>%
html_node('.article-pdfLink') %>% html_attr('href')
frontLink <- "https://academic.oup.com"
#link of pdf
pdfLink <- paste(frontLink,endLink,sep = "")
#extract full text from pdfLink
pdfFullText <- pdf_text(pdfLink)
fulltext <- paste(pdfFullText, sep = "\n")
return(fulltext)
}
#############################################
#main function with input as parameter year
testFullText <- function(DOIurl){
parsedDocument <- read_html(DOIurl)
DNAresearch <- data.frame()
allData <- data.frame("Full Text" = fullText(parsedDocument), stringsAsFactors = FALSE)
DNAresearch <- rbind(DNAresearch, allData)
write.csv(DNAresearch, "DNAresearch.csv", row.names = FALSE)
}
testFullText("https://doi.org/10.1093/dnares/dsm026")
Looking at your last function, if I understand correctly, you want to take the url, scrape all the text into a data frame/tibble, and then export it to a csv. Here is how you can do it with just 1 article, and you should be able to loop through several links with a little manipulation (apologies if I am misunderstanding):
library(tidyverse)
library(rvest)

# read in the html link
document_link <- read_html("https://doi.org/10.1093/dnares/dsm026")

# get the text and put it into a tibble with only 1 row
text_tibble <- document_link %>%
  html_nodes('.chapter-para') %>%
  html_text() %>%
  as_tibble() %>%
  summarize(full_text = paste(value, collapse = " "))  # this collapses to 1 row

# now write to csv
## write_csv(text_tibble, file = "")
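As a side note on the original fullText() function: the 11 rows come from paste(pdfFullText, sep = "\n"). pdf_text() returns one character element per pdf page, and sep only separates multiple arguments to paste(), while collapse is what joins the elements of a single vector. If you want to keep the pdf route, one line fixes it:

# collapse (not sep) joins the per-page elements into a single string
fulltext <- paste(pdfFullText, collapse = "\n")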

R: How can I open a list of links to scrape the homepage of a news website?

I'm trying to build a web scraper in R to scrape articles published on www.20min.ch, a news website. Their API is openly accessible, so I could create a dataframe containing titles, URLs, descriptions, and timestamps with rvest. The next step would be to access every single link, create a list of article texts, and combine it with my dataframe. However, I don't know how to automate access to those articles. Ideally, I would like to read_html link 1, copy the text with html_nodes, and then proceed to link 2...
This is what I have written so far:
library(rvest)
library(xml2)  # for read_xml()

site20min <- read_xml("https://api.20min.ch/rss/view/1")
site20min

url_list <- site20min %>% html_nodes('link') %>% html_text()

df20min <- data.frame(Title = character(),
                      Zeit = character(),
                      Lead = character(),
                      Text = character())

for (i in 1:length(url_list)) {
  myLink <- url_list[i]
  site20min <- read_html(myLink)
  titel20min <- site20min %>% html_nodes('h1 span') %>% html_text()
  zeit20min <- site20min %>% html_nodes('#story_content .clearfix span') %>% html_text()
  lead20min <- site20min %>% html_nodes('#story_content h3') %>% html_text()
  text20min <- site20min %>% html_nodes('.story_text') %>% html_text()
  df20min_a <- data.frame(Title = titel20min)
  df20min_b <- data.frame(Zeit = zeit20min)
  df20min_c <- data.frame(Lead = lead20min)
  df20min_d <- data.frame(Text = text20min)
}
What I need is R to open every single link and extract some information:
site20min_1 <- read_html("https://www.20min.ch/schweiz/news/story/-Es-liegen-auch-Junge-auf-der-Intensivstation--14630453")
titel20min_1 <- site20min_1 %>% html_nodes('h1 span') %>% html_text()
zeit20min_1 <- site20min_1 %>% html_nodes('#story_content .clearfix span') %>% html_text()
lead20min_1 <- site20min_1 %>% html_nodes('#story_content h3') %>% html_text()
text20min_1 <- site20min_1 %>% html_nodes('.story_text') %>% html_text()
It should not be too much of a problem to rbind this to a dataframe, but at the moment some of my results turn out empty.
Thanks for your help!
You're on the right track with setting up a dataframe. You can loop through each link and rbind the result to your existing dataframe structure.
First, you can set up a vector of URLs to be looped through. Based on the edit, here is such a vector:
url_list <- c("http://www.20min.ch/ausland/news/story/14618481",
              "http://www.20min.ch/schweiz/news/story/18901454",
              "http://www.20min.ch/finance/news/story/21796077",
              "http://www.20min.ch/schweiz/news/story/25363072",
              "http://www.20min.ch/schweiz/news/story/19113494",
              "http://www.20min.ch/community/social_promo/story/20407354",
              "https://cp.20min.ch/de/stories/635-stressfrei-durch-den-verkehr-so-sieht-der-alltag-von-busfahrer-claudio-aus")
Next, you can set up a dataframe structure that includes everything you're looking to gather.
# Set up the dataframe first
df20min <- data.frame(Title = character(),
                      Link = character(),
                      Lead = character(),
                      Zeit = character())
Finally, you can loop through each url in your list and add the relevant info to your dataframe.
# Go through a loop
for (i in 1:length(url_list)) {
  myLink <- url_list[i]
  site20min <- read_xml(myLink)

  # Extract the info
  titel20min <- site20min %>% html_nodes('title') %>% html_text()
  link20min <- site20min %>% html_nodes('link') %>% html_text()
  zeit20min <- site20min %>% html_nodes('pubDate') %>% html_text()
  lead20min <- site20min %>% html_nodes('description') %>% html_text()

  # Structure into dataframe
  df20min_a <- data.frame(Title = titel20min, Link = link20min, Lead = lead20min)
  df20min_b <- df20min_a[-(1:2), ]
  df20min_c <- data.frame(Zeit = zeit20min)

  # Insert into final dataframe
  df20min <- rbind(df20min, cbind(df20min_b, df20min_c))
}
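If you also need the full article body (the Text column from your original dataframe), the RSS feed only carries titles, links, and descriptions, so each article page still has to be read individually. A minimal sketch that reuses the .story_text selector from your own code (the site's markup may have changed since, so treat the selector as an assumption):

library(rvest)

texts <- character(length(url_list))
for (i in 1:length(url_list)) {
  article <- read_html(url_list[i])
  texts[i] <- article %>%
    html_nodes('.story_text') %>%   # selector taken from the question
    html_text() %>%
    paste(collapse = " ")
}
# cbind() or mutate() this onto df20min as long as the row order matches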

R scraping reviews from multiple pages on TripAdvisor

I'm trying to pull out a few pages of reviews from TripAdvisor for an academic project.
Here's my attempt using R:
# Load libraries
library(rvest)
library(RSelenium)

# main url for stadium
urlmainlist <- c(
  hampdenpark = "http://www.tripadvisor.com.ph/Attraction_Review-g186534-d214132-Reviews-Hampden_Park-Glasgow_Scotland.html"
)

# Specify how many search pages and counter
morepglist <- list(
  hampdenpark = seq(10, 360, 10)
)

#----------------------------------------------------------------------------------------------------------
# create pickstadium variable
pickstadium <- "hampdenpark"

# get list of url links corresponding to different pages
# url link for first search page
urllinkmain <- urlmainlist[pickstadium]
# counter for additional pages
morepg <- as.numeric(morepglist[[pickstadium]])

urllinkpre <- paste(strsplit(urllinkmain, "Reviews-")[[1]][1], "Reviews", sep = "")
urllinkpost <- strsplit(urllinkmain, "Reviews-")[[1]][2]

urllink <- rep(NA, length(morepg) + 1)
urllink[1] <- urllinkmain
for (i in 1:length(morepg)) {
  urllink[i + 1] <- paste(urllinkpre, "-or", morepg[i], "-", urllinkpost, sep = "")
}
head(urllink)
write.csv(urllink, 'urllink.csv')

##########
#SCRAPING#
##########
library(rvest)
library(RSelenium)
#install.packages('RSelenium')

testurl <- read.csv("urllink.csv", header = FALSE, quote = "'", stringsAsFactors = F)
testurl <- testurl[-1, ]
testurl <- testurl[, -1]
testurl <- as.data.frame(testurl)
testurl <- gsub('"', "", testurl$testurl)
list <- unlist(testurl)
tripadvisor <- NULL

# Scrape
for (i in 1:length(list)) {
  reviews <- list[i] %>%
    read_html() %>%
    html_nodes("#REVIEWS .innerBubble")

  id <- reviews %>%
    html_node(".quote a") %>%
    html_attr("id")

  rating <- reviews %>%
    html_node(".rating .rating_s_fill") %>%
    html_attr("alt") %>%
    gsub(" of 5 stars", "", .) %>%
    as.integer()

  date <- reviews %>%
    html_node(".rating .ratingDate") %>%
    html_attr("title") %>%
    strptime("%b %d, %Y") %>%
    as.POSIXct()

  review <- reviews %>%
    html_node(".entry .partial_entry") %>%
    html_text() %>%
    as.character()

  rowthing <- data.frame(id, review, rating, date, stringsAsFactors = FALSE)
  tripadvisor <- rbind(rowthing, tripadvisor)
}
However, this results in an empty tripadvisor dataframe. Any help fixing this would be appreciated.
Additional Question
I'd like to capture the full reviews, as my code currently only captures partial entries. For each review, I'd like to automatically click on the 'More' link and then extract the full review.
Here too, any help would be greatly appreciated.
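No answer was posted here, but two hedged pointers. An empty dataframe usually means the "#REVIEWS .innerBubble" nodes are simply no longer present in the raw HTML (TripAdvisor renders much of the page with JavaScript and changes its markup frequently), which you can check by inspecting read_html(list[1]) directly. Clicking the 'More' links requires a real browser, so RSelenium is the usual route; a rough sketch, in which the CSS selectors are assumptions that have to be checked against the current page source:

library(RSelenium)
library(rvest)

# start a browser session (needs a working Selenium/driver setup)
driver <- rsDriver(browser = "firefox", port = 4545L)
remDr <- driver$client

remDr$navigate(urllink[1])

# click every 'More' toggle so the partial reviews expand
# (".ulBlueLinks" is an assumed selector for the 'More' link)
more_links <- remDr$findElements(using = "css selector", ".ulBlueLinks")
for (el in more_links) {
  el$clickElement()
}
Sys.sleep(2)  # give the page time to expand the reviews

# hand the rendered page back to rvest and parse as before
page <- read_html(remDr$getPageSource()[[1]])
reviews <- page %>% html_nodes("#REVIEWS .innerBubble")

remDr$close()
driver$server$stop()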

extract all the possible text from a webpage in R

I used this script to extract the text from a webpage
url <- "http://www.dlink.com/it/it"
doc <- getURL(url)
#get the text from the body
html <- htmlTreeParse(doc, useInternal = TRUE)
txt <- xpathApply(html, "//body//text()[not(ancestor::script)][not(ancestor::style)][not(ancestor::noscript)]", xmlValue)
txt<-toString(txt)
but the problem is that it takes just the words in the first page, how can I extend it to the whole website?
I'd go with rvest to scrape the links and purrr to iterate:
library(rvest)
library(purrr)

url <- "http://www.dlink.com/it/it"

r <- read_html(url) %>%
  html_nodes('a') %>%
  html_attr('href') %>%
  Filter(function(f) !is.na(f) & !grepl(x = f, pattern = '#|facebook|linkedin|twitter|youtube'), .) %>%
  map(~{
    print(.x)
    html_session(url) %>%
      jump_to(.x) %>%
      read_html() %>%
      html_nodes('body') %>%
      html_text() %>%
      toString()
  })
I filtered social networks and dead links out of the list of links; some more tuning might be in order.
Be advised that you will be scraping a lot of garbage. Some more targeting on what to scrape inside each page might also be needed (i.e., something more specific than the whole body tag).
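As one example of that tuning, you can make the hrefs absolute and keep only the ones that stay on the same domain before iterating (the grepl() check on the host is a simple assumption based on this particular site):

library(rvest)
library(purrr)
library(xml2)

url <- "http://www.dlink.com/it/it"

links <- read_html(url) %>%
  html_nodes('a') %>%
  html_attr('href') %>%
  discard(is.na) %>%
  url_absolute(base = url) %>%          # turn relative hrefs into absolute URLs
  keep(~ grepl("dlink\\.com", .x)) %>%  # keep only same-domain pages
  unique()

The resulting vector can then be fed to the map() call above in place of the raw href list.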
