I have been trying to scrape a page, but after a few requests the site blocks my access for an hour.
I read about the R package {polite}, which might solve my problem.
But I'm failing to use the package's functions in my code.
I did:
# connection
url_ini <- paste0("https://www.instagram.com/instagram/?__a=1&__d=11")
document_ini <- jsonlite::fromJSON(txt = url_ini)
#extracting information
id <- document_ini$graphql$user$id
end_cursor <- document_ini$graphql$user$edge_owner_to_timeline_media$page_info$end_cursor
n1 <- 'https://www.instagram.com/graphql/query/?query_hash=e769aa130647d2354c40ea6a439bfc08&variables={%22id%22:%22'
n2 <- '%22,%22first%22:12,%22after%22:%22'
n3 <- "%22}"
url <- noquote(paste0(n1, id, n2, end_cursor, n3))
document <- jsonlite::fromJSON(txt = url)
There is more code, but I think if I can get this part working, I will be able to do the rest.
I tried, without success, things like this:
url_ini <- paste0("https://www.instagram.com/instagram/?__a=1&__d=11")
session <- polite::bow(url_ini)
document_ini <- jsonlite::fromJSON(txt = session) # doesn't work
responses <- map(session, ~polite::scrape(session,jsonlite::fromJSON)) # doesn't work
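For reference, here is a minimal sketch of how {polite} is usually wired up: bow() once per host to declare yourself and pick up the crawl delay, then scrape() for each request. Passing the query string as a list and asking for JSON via accept = "json" are assumptions on my part, and scrape() may refuse the request if the site's robots.txt disallows the path:
library(polite)
# Introduce yourself to the host once; bow() reads robots.txt and stores the crawl delay
session <- polite::bow("https://www.instagram.com/instagram/",
                       user_agent = "my-research-project (me@example.com)",  # placeholder contact
                       delay = 10)
# scrape() waits out the delay between calls and returns the parsed response
document_ini <- polite::scrape(session,
                               query = list(`__a` = 1, `__d` = 11),
                               accept = "json")
If polite refuses the request, a plain fallback is to keep using jsonlite::fromJSON() and simply Sys.sleep() a few seconds between calls.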
I am trying to scrape a website using the following:
industryurl <- "https://finance.yahoo.com/industries"
library(rvest)
read <- read_html(industryurl) %>%
  html_table()
library(plyr)
industries <- ldply(read, data.frame)
industries = industries[-1,]
read <- read_html(industryurl)
industryurls <- html_attr(html_nodes(read, "a"), "href")
links <- industryurls[grep("/industry/", industryurls)]
industryurl <- "https://finance.yahoo.com"
links <- paste0(industryurl, links)
links
##############################################################################################
store <- NULL
tbl <- NULL
for(i in links){
  store[[i]] = read_html(i)
  tbl[[i]] = html_table(store[[i]])
}
#################################################################################################
I am mostly interested in the code between the ########## markers. I want to apply a function instead of a for loop, since I am running into timeout issues with Yahoo, and I want to make the extraction more human-like (it is not too much data).
My question is: how can I take links, apply a function, and set a sort of delay timer to read in the contents of the for loop?
I can paste my own version of the for loop, which does not work.
This is the function I came up with
## The first argument is the link you need
## The second argument is the total time for Sys.sleep
extract_function <- function(define_link, define_time){
  print(paste0("The system will stop for: ", define_time, " seconds"))
  Sys.sleep(define_time)
  first <- read_html(define_link)
  print(paste0("It will now return the table for link ", define_link))
  return(html_table(first))
}
## I added the following tryCatch function
link_try_catch <- function(define_link, define_time){
  out <- tryCatch(extract_function(define_link, define_time),
                  error = function(e) NA)
  return(out)
}
## You can now retrieve the data using the links vector in two ways
## Picking the first ten, so it should not crash on link 5
p <- lapply(1:10, function(i) link_try_catch(links[i], 1))
## OR (I subset the vector just for demo purposes)
p2 <- lapply(links[1:10], function(i) extract_function(i, 1))
Hope it helps
I am new to web scraping. The URL I am working with is https://tsmc.tripura.gov.in/doc_list. At present, I am able to extract data from the first page. Since the URL does not change, I don't have an identifier for the other pages to use in a loop for data table extraction.
Here is my code:
install.packages("XML")
install.packages("RCurl")
install.packages("rlist")
install.packages("bitops")
library(bitops)
library(XML)
library(RCurl)
url1<- getURL("https://tsmc.tripura.gov.in/doc_list",.opts =
list(ssl.verifypeer = FALSE))
table1<- readHTMLTable(url1)
table1<- list.clean(table1, fun = is.null, recursive = FALSE)
n.rows <- unlist(lapply(table1, function(t) dim(t)[1]))
table1[[which.max(n.rows)]]
View(table1)
table11= table1[["NULL"]]
Please help. Thanks!
Perhaps try this solution:
url <- "https://tsmc.tripura.gov.in/doc_list?page="
sq <- seq(1, 30) # There appear to be 30 pages, so create the sequence 1:30
links <- paste0(url, sq) # Paste the sequence after the "page=" parameter in the url
store <- NULL
tbl <- NULL
library(rvest) #extract the tables
for(i in links){
  store[[i]] = read_html(i)
  tbl[[i]] = html_table(store[[i]])
}
library(plyr)
df <- ldply(tbl, data.frame) #combine the list of data frames into one large data frame
df$`.id` <- gsub("https://tsmc.tripura.gov.in/doc_list?page=", " ", df$`.id`, fixed = TRUE)
Which gives 846 observations across 8 variables.
EDIT: I found that the first page's URL does not take a page number. In order to add the first page and rbind it with the rest of the data, use the following:
firsturl <- "https://tsmc.tripura.gov.in/doc_list"
first_store = read_html(firsturl)
first_tbl = html_table(first_store)
first_df <- as.data.frame(first_tbl)
first_df$`.id` <- 0
df2 <- rbind(first_df, df)
I am new to web scraping and have tried several methods to scrape multiple pages with rvest. Somehow it is still not working, and I only get 15 results instead of the 207 products listed in this category. What am I doing wrong?
library(rvest)
all_df <- 0
library(data.table)
for(i in 1:5){
  url_fonq <- paste0("https://www.fonq.nl/producten/categorie-lichtbronnen/?p=", i, sep = "")
  webpage_fonq <- read_html(url_fonq)
  head(webpage_fonq)

  product_title_data_html <- html_nodes(webpage_fonq, '.product-title')
  product_title_data <- html_text(product_title_data_html)
  head(product_title_data)
  product_title_data <- gsub("\n", "", product_title_data)
  product_title_data <- gsub(" ", "", product_title_data)
  head(product_title_data)
  length(product_title_data)

  product_price_data_html <- html_nodes(webpage_fonq, '.product-price')
  product_price_data <- html_text(product_price_data_html)
  head(product_price_data)
  product_price_data <- gsub("\n", "", product_price_data)
  product_price_data <- gsub(" ", "", product_price_data)
  head(product_price_data)
  product_price_data
  length(product_price_data)

  fonq.df <- data.frame(Procuct_title = product_title_data, Price = product_price_data)
  all_df <- list(fonq.df)
}
final2<-rbindlist(all_df,fill = TRUE)
View(final2)
The problem is that you keep only the data scraped from the last page of the website, so only that page's 15 products are stored.
So instead of overwriting the all_df variable in every iteration
all_df <- list(fonq.df)
append the fonq.df data frame to the end of all_df:
all_df <- bind_rows(all_df, fonq.df)
Here is my complete solution:
library(rvest)
all_df <- list()
library(dplyr)
for(i in 1:5){
  url_fonq <- paste0("https://www.fonq.nl/producten/categorie-lichtbronnen/?p=", i, sep = "")
  webpage_fonq <- read_html(url_fonq)
  head(webpage_fonq)

  product_title_data_html <- html_nodes(webpage_fonq, '.product-title')
  product_title_data <- html_text(product_title_data_html)
  head(product_title_data)
  product_title_data <- gsub("\n", "", product_title_data)
  product_title_data <- gsub(" ", "", product_title_data)
  head(product_title_data)
  length(product_title_data)

  product_price_data_html <- html_nodes(webpage_fonq, '.product-price')
  product_price_data <- html_text(product_price_data_html)
  head(product_price_data)
  product_price_data <- gsub("\n", "", product_price_data)
  product_price_data <- gsub(" ", "", product_price_data)
  head(product_price_data)
  product_price_data
  length(product_price_data)

  fonq.df <- data.frame(Procuct_title = product_title_data, Price = product_price_data)
  all_df <- bind_rows(all_df, fonq.df)
}
View(all_df)
Is anyone experienced in scraping data from the Yahoo! Finance key statistics page with R? I am familiar with scraping data directly from HTML using read_html(), html_nodes(), and html_text() from the rvest package. However, this web page (MSFT key stats) is a bit more complicated, and I am not sure whether all the stats are kept in XHR, JS, or Doc. I am guessing the data is stored as JSON. If anyone knows a good way to extract and parse the data from this web page with R, kindly answer my question, many thanks in advance!
Or if there is a more convenient way to extract these metrics via quantmod or Quandl, kindly let me know, that would be an extremely good solution!
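On the quantmod side, here is a minimal sketch using getQuote() with yahooQF(); the exact field names accepted by yahooQF() are an assumption on my part, and only a subset of the key-statistics values is exposed this way:
library(quantmod)
# Request a few quote-level statistics for MSFT; see ?yahooQF for the full field list
stats <- getQuote("MSFT",
                  what = yahooQF(c("Market Capitalization",   # assumed field names
                                   "P/E Ratio",
                                   "Earnings/Share")))
stats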
I know this is an older thread, but I used it to scrape the Yahoo Analysts tables, so I figured I would share.
# Yahoo webscrape Analysts
library(XML)
symbol = "HD"
url <- paste0('https://finance.yahoo.com/quote/', symbol, '/analysts?p=', symbol)
webpage <- readLines(url)
html <- htmlTreeParse(webpage, useInternalNodes = TRUE, asText = TRUE)
tableNodes <- getNodeSet(html, "//table")
earningEstimates <- readHTMLTable(tableNodes[[1]])
revenueEstimates <- readHTMLTable(tableNodes[[2]])
earningHistory <- readHTMLTable(tableNodes[[3]])
epsTrend <- readHTMLTable(tableNodes[[4]])
epsRevisions <- readHTMLTable(tableNodes[[5]])
growthEst <- readHTMLTable(tableNodes[[6]])
Cheers,
Sody
I gave up on Excel a long time ago. R is definitely the way to go for things like this.
library(XML)
stocks <- c("AXP","BA","CAT","CSCO")
for (s in stocks) {
  url <- paste0("http://finviz.com/quote.ashx?t=", s)
  webpage <- readLines(url)
  html <- htmlTreeParse(webpage, useInternalNodes = TRUE, asText = TRUE)
  tableNodes <- getNodeSet(html, "//table")
  # ASSIGN TO STOCK NAMED DFS
  assign(s, readHTMLTable(tableNodes[[9]],
                          header = c("data1", "data2", "data3", "data4", "data5", "data6",
                                     "data7", "data8", "data9", "data10", "data11", "data12")))
  # ADD COLUMN TO IDENTIFY STOCK
  df <- get(s)
  df['stock'] <- s
  assign(s, df)
}
# COMBINE ALL STOCK DATA
stockdatalist <- cbind(mget(stocks))
stockdata <- do.call(rbind, stockdatalist)
# MOVE STOCK ID TO FIRST COLUMN
stockdata <- stockdata[, c(ncol(stockdata), 1:(ncol(stockdata) - 1))]
# SAVE TO CSV
write.table(stockdata, "C:/Users/your_path_here/Desktop/MyData.csv", sep=",",
row.names=FALSE, col.names=FALSE)
# REMOVE TEMP OBJECTS
rm(df, stockdatalist)
When I use the methods shown here with the XML library, I get a warning:
Warning in readLines(page) : incomplete final line found on
'https://finance.yahoo.com/quote/DIS/key-statistics?p=DIS'
We can use rvest and xml2 for a cleaner approach. This example demonstrates how to pull a key statistic from the Yahoo! Finance key-statistics page. Here I want to obtain the float of an equity. I don't believe float is available from quantmod, but some of the key stats values are; you'll have to reference the list.
library(xml2)
library(rvest)
getFloat <- function(stock){
  url <- paste0("https://finance.yahoo.com/quote/", stock, "/key-statistics?p=", stock)
  tables <- read_html(url) %>%
    html_nodes("table") %>%
    html_table()
  float <- as.vector(tables[[3]][4, 2])
  last <- substr(float, nchar(float), nchar(float))
  float <- gsub("[a-zA-Z]", "", float)
  float <- as.numeric(as.character(float))
  if(last == "k"){
    float <- float * 1000
  } else if (last == "M") {
    float <- float * 1000000
  } else if (last == "B") {
    float <- float * 1000000000
  }
  return(float)
}
getFloat("DIS")
[1] 1.81e+09
That's a lot of shares of Disney available.
Is it possible to get the publication date of CRAN packages from within R? I would like to get a list of the k most recently published CRAN packages, or alternatively all packages published after date dd-mm-yy. Similar to the information on the available_packages_by_date.html?
The available.packages() command has a "fields" argument, but this only extracts fields from the DESCRIPTION, and the Date field in a package's DESCRIPTION is not always up to date.
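For illustration, this is roughly what that looks like; the Date column comes straight from the DESCRIPTION and is frequently missing or stale, which is the problem just described:
ap <- available.packages(fields = "Date")
head(ap[, c("Package", "Version", "Date")])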
I can get it with a smart regex from the HTML page, but I am not sure how reliable and up-to-date that HTML file is... At some point Kurt might decide to give the layout a makeover, which would break the script. An alternative is to use timestamps from the CRAN FTP, but I am also not sure how good that solution is. Is there a formally structured file somewhere with publication dates? I assume the HTML page is automatically generated from some DB.
Turns out there is an undocumented file, "packages.rds", which contains the publication dates (not times) of all packages. I suppose these data are used to recreate the HTML file every day.
Below is a simple function that extracts publication dates from this file:
recent.packages.rds <- function(){
  mytemp <- tempfile();
  download.file("http://cran.r-project.org/web/packages/packages.rds", mytemp);
  mydata <- as.data.frame(readRDS(mytemp), row.names=NA);
  mydata$Published <- as.Date(mydata[["Published"]]);
  # sort and get the fields you like:
  mydata <- mydata[order(mydata$Published), c("Package", "Version", "Published")];
  return(mydata);
}
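Since the result above is sorted ascending by publication date, the most recent packages sit at the bottom, so a quick way to get the k newest (here k = 10) is:
pkgs <- recent.packages.rds()
tail(pkgs, 10)  # the 10 most recently published packages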
The best approach is to take advantage of the fact that the package DESCRIPTION is published on the CRAN mirror, and since the DESCRIPTION comes from the built package, it contains information about exactly when it was packaged:
pkgs <- unname(available.packages()[, 1])[1:20]
desc_urls <- paste("http://cran.r-project.org/web/packages/", pkgs, "/DESCRIPTION", sep = "")
desc <- lapply(desc_urls, function(x) read.dcf(url(x)))
sapply(desc, function(x) x[, "Packaged"])
sapply(desc, function(x) x[, "Date/Publication"])
(I'm restricting it to the first 20 packages here to illustrate the basic idea)
Here is a function that uses the HTML and regular expressions. I would still rather get the information from a more formal place, though, in case the HTML layout ever changes.
recent.packages <- function(number=10){
  # html is malformed
  maxlines <- number*2 + 11
  mytemp <- tempfile()
  if(getOption("repos") == "#CRAN#"){
    repo <- "http://cran.r-project.org"
  } else {
    repo <- getOption("repos");
  }
  newurl <- paste(repo, "/web/packages/available_packages_by_date.html", sep="");
  download.file(newurl, mytemp);
  datastring <- readLines(mytemp, n=maxlines)[12:maxlines];
  # we only find packages from after 2010-01-01
  myexpr1 <- '201[0-9]-[0-9]{2}-[0-9]{2} </td> <td> <a href="../../web/packages/[a-zA-Z0-9\\.]{2,}/'
  myexpr2 <- '^201[0-9]-[0-9]{2}-[0-9]{2}'
  myexpr3 <- '[a-zA-Z0-9\\.]{2,}/$'
  newpackages <- unlist(regmatches(datastring, gregexpr(myexpr1, datastring)));
  newdates <- unlist(regmatches(newpackages, gregexpr(myexpr2, newpackages)));
  newnames <- unlist(regmatches(newpackages, gregexpr(myexpr3, newpackages)));
  newdates <- as.Date(newdates);
  newnames <- substring(newnames, 1, nchar(newnames)-1);
  returndata <- data.frame(name=newnames, date=newdates);
  return(head(returndata, number));
}
So here is a solution that uses the directory listing from the FTP. It is a little tricky because the FTP gives the date in Linux format with either a timestamp or a year. Other than that it does its job. I'm still not convinced this is reliable, though. If packages are copied over to another server, all timestamps might be reset.
recent.packages.ftp <- function(){
  setwd(tempdir())
  download.file("ftp://cran.r-project.org/pub/R/src/contrib/", destfile=tempfile(), method="wget", extra="--no-htmlify");
  # because of --no-htmlify the destfile argument does not work
  datastring <- readLines(".listing");
  unlink(".listing");
  myexpr1 <- "(?<date>[A-Z][a-z]{2} [0-9]{2} [0-9]{2}:[0-9]{2}) (?<name>[a-zA-Z0-9\\.]{2,})_(?<version>[0-9\\.-]*).tar.gz$"
  matches <- gregexpr(myexpr1, datastring, perl=TRUE);
  packagelines <- as.logical(sapply(regmatches(datastring, matches), length));
  # subset proper lines
  matches <- matches[packagelines];
  datastring <- datastring[packagelines];
  N <- length(matches)
  # from the ?regexpr manual
  parse.one <- function(res, result) {
    m <- do.call(rbind, lapply(seq_along(res), function(i) {
      if(result[i] == -1) return("")
      st <- attr(result, "capture.start")[i, ]
      substring(res[i], st, st + attr(result, "capture.length")[i, ] - 1)
    }))
    colnames(m) <- attr(result, "capture.names")
    m
  }
  # parse all records
  mydf <- data.frame(date=rep(NA, N), name=rep(NA, N), version=rep(NA, N))
  for(i in 1:N){
    mydf[i,] <- parse.one(datastring[i], matches[[i]]);
  }
  row.names(mydf) <- NULL;
  # convert dates
  mydf$date <- strptime(mydf$date, format="%b %d %H:%M");
  # The listing only shows a timestamp (no year) for packages less than six months old.
  # strptime assumes the current year for those entries, so dates that end up in the
  # future must belong to last year: subtract a year, with some margin for timezones.
  infuture <- (mydf$date > Sys.time() + 31*24*60*60);
  mydf$date[infuture] <- mydf$date[infuture] - 365*24*60*60;
  # sort and return
  mydf <- mydf[order(mydf$date),];
  row.names(mydf) <- NULL;
  return(mydf);
}
You could process the page http://cran.r-project.org/src/contrib/, and split the fields by whitespace in order to obtain the fully specified package source filename, which includes the version # and a .gz suffix.
There are a few other items in the list that are not package files, such as the .rds files, various subdirectories, and so on.
Barring changes in how the directory structure is presented or the locations of the files, I can't think of anything more authoritative than this.
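A minimal sketch of that idea with rvest, assuming the directory index at http://cran.r-project.org/src/contrib/ is served as an HTML table whose columns include "Name" and "Last modified" (those column names, and the date format, are assumptions about the server's listing):
library(rvest)
recent.packages.contrib <- function(n = 10){
  # Read the directory index and pull out its listing table
  listing <- read_html("http://cran.r-project.org/src/contrib/")
  tbl <- html_table(listing)[[1]]
  # Keep only package source files, i.e. names of the form pkg_version.tar.gz
  tbl <- tbl[grepl("_.+\\.tar\\.gz$", tbl$Name), ]
  out <- data.frame(
    name    = sub("_.*$", "", tbl$Name),
    version = sub("^.*_(.*)\\.tar\\.gz$", "\\1", tbl$Name),
    date    = as.Date(tbl[["Last modified"]])
  )
  # Most recently modified source files first
  head(out[order(out$date, decreasing = TRUE), ], n)
}
recent.packages.contrib()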