Trouble Scraping Whole chart from HTML - r

I'm trying to scrape the entire chart from this website:
http://stats.ncaa.org/team/stats/12021?org_id=749&sport_year_ctl_id=12021
But when I run this code:
library(XML)
library(gsubfn)
URL = 'http://stats.ncaa.org/team/stats?org_id=381&sport_year_ctl_id=12021'
Player_Stats = readHTMLTable(URL, header = T, stringsAsFactors = F)
Player_Stats
Player_Stats only returns the data for the individual players, up to but not including the Total line.
What I want is the Team Totals and Opponent Totals.
Thanks

That information is in a <tfoot> element at the bottom of the table, which is why readHTMLTable() isn't picking up on it. You can extract the <tfoot> bit separately using getNodeSet() as follows. I've bound the two bits of the table together at the end, but you may want to keep the different kinds of information apart for your application.
library(XML)
library(gsubfn)

URL = 'http://stats.ncaa.org/team/stats?org_id=381&sport_year_ctl_id=12021'

# Player rows (everything except the <tfoot>)
Player_Stats = readHTMLTable(URL, header = T, stringsAsFactors = F)
stats <- Player_Stats$stat_grid

# Parse the page again and pull the <tfoot> node that holds the totals
doc  <- htmlTreeParse(URL, useInternalNodes = T)
foot <- getNodeSet(doc, "//tfoot")

# Read the footer rows as a table and give them the same column names
totals <- readHTMLTable(unlist(foot)[[1]])
colnames(totals) <- colnames(stats)

# Bind player rows and totals back together
fulltable <- rbind(stats, totals)
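If you would rather keep the footer information separate, a minimal sketch follows (assuming the <tfoot> holds the Team Totals row first and the Opponent Totals row second; check the page to confirm the order):
# Split the two footer rows instead of binding them onto the player table
team_totals     <- totals[1, ]
opponent_totals <- totals[2, ]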

Related

How can I extract this specific table from this web page using R?

I am trying to extract a table from a specific webpage, but I am not getting any results from my code.
My code is as follows:
library(rvest)
library(dplyr)
url1<-"https://finance.yahoo.com/quote/SKLZ/cash-flow?p=SKLZ"
url_page<- read_html(url1)
listings <- html_nodes(url_page, css = '.Pos')
The table I am interested in extracting falls under the element I identified after doing an "Inspect" in Chrome (a screenshot of the table is not reproduced here).
Any help would be appreciated.
QHarr has answered the same question in this post; I've copied the relevant code from their answer.
library(rvest)
library(stringr)
library(magrittr)

page  <- read_html("https://finance.yahoo.com/quote/SKLZ/cash-flow?p=SKLZ")
nodes <- page %>% html_nodes(".fi-row")

# Build the table one row at a time: each .fi-row node holds the row label
# ([title]) and the figures ([data-test='fin-col'])
df <- NULL
for (i in nodes) {
  r  <- list(i %>% html_nodes("[title],[data-test='fin-col']") %>% html_text())
  df <- rbind(df, as.data.frame(matrix(r[[1]], ncol = length(r[[1]]), byrow = TRUE),
                                stringsAsFactors = FALSE))
}
df
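The values come back as character strings. A hedged cleanup sketch, assuming every column after the first holds numbers with thousands separators and that missing values appear as "-":
# Strip commas, turn "-" into NA, and convert columns 2..n to numeric
df[-1] <- lapply(df[-1], function(x) {
  x[x == "-"] <- NA
  as.numeric(gsub(",", "", x))
})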

Web scraping in R: Unable to read the main table

I am new to web scraping. I am trying to scrape a table with the following code, but I am unable to get it. The source of the data is
https://www.investing.com/stock-screener/?sp=country::6|sector::a|industry::a|equityType::a|exchange::a%3Ceq_market_cap;1
library(XML)

url <- "https://www.investing.com/stock-screener/?sp=country::6|sector::a|industry::a|equityType::a|exchange::a%3Ceq_market_cap;1"
webpage <- readLines(url)
html <- htmlTreeParse(webpage, useInternalNodes = TRUE, asText = TRUE)
tableNodes <- getNodeSet(html, "//table")
Tab <- readHTMLTable(tableNodes[[1]])
I copied this approach from the question "Web scraping of key stats in Yahoo! Finance with R" (included below), where it is applied to Yahoo Finance data.
I believe the table I want should be tableNodes[[12]], but when I pass tableNodes[[12]] it always gives me this error:
Error in do.call(data.frame, c(x, alis)) :
variable names are limited to 10000 bytes
Please suggest a way to extract the table and to combine the data from the other tabs as well (Fundamental, Technical, and Performance).
This data is returned dynamically as JSON. In R (which behaves differently from Python's requests here) you get HTML back, from which you can extract a given page's results as JSON. Each page includes the info for all of the tabs and 50 records. The first page gives you the total record count, so you can calculate the total number of pages to loop over to get all results, combining them into a final dataframe during the loop; for each new POST request you alter the pn parameter of the XHR POST body to the desired page number. Two request headers are required.
It is probably a good idea to write a function that accepts a page number and returns that page's JSON as a dataframe, then apply it via a tidyverse function (e.g. purrr::map()) to handle the looping and to combine the results into the final dataframe.
library(httr)
library(jsonlite)
library(magrittr)
library(rvest)
library(stringr)
headers = c(
'User-Agent' = 'Mozilla/5.0',
'X-Requested-With' = 'XMLHttpRequest'
)
data = list(
'country[]' = '6',
'sector' = '7,5,12,3,8,9,1,6,2,4,10,11',
'industry' = '81,56,59,41,68,67,88,51,72,47,12,8,50,2,71,9,69,45,46,13,94,102,95,58,100,101,87,31,6,38,79,30,77,28,5,60,18,26,44,35,53,48,49,55,78,7,86,10,1,34,3,11,62,16,24,20,54,33,83,29,76,37,90,85,82,22,14,17,19,43,89,96,57,84,93,27,74,97,4,73,36,42,98,65,70,40,99,39,92,75,66,63,21,25,64,61,32,91,52,23,15,80',
'equityType' = 'ORD,DRC,Preferred,Unit,ClosedEnd,REIT,ELKS,OpenEnd,Right,ParticipationShare,CapitalSecurity,PerpetualCapitalSecurity,GuaranteeCertificate,IGC,Warrant,SeniorNote,Debenture,ETF,ADR,ETC,ETN',
'exchange[]' = '109',
'exchange[]' = '127',
'exchange[]' = '51',
'exchange[]' = '108',
'pn' = '1', # this is page number and should be altered in a loop over all pages. 50 results per page i.e. rows
'order[col]' = 'eq_market_cap',
'order[dir]' = 'd'
)
r <- httr::POST(url = 'https://www.investing.com/stock-screener/Service/SearchStocks', httr::add_headers(.headers=headers), body = data)
s <- r %>%read_html()%>%html_node('p')%>% html_text()
page1_data <- jsonlite::fromJSON(str_match(s, '(\\[.*\\])' )[1,2])
total_rows <- str_match(s, '"totalCount\":(\\d+),' )[1,2]%>%as.integer()
num_pages <- ceiling(total_rows/50)
Below is my current attempt at combining the results, on which I would welcome feedback. It returns all of the columns, for all pages, and has to handle missing columns, differing column orderings, and one column that is itself a data.frame. Since the number of returned columns is far greater than the number visible on the page, you could simply subset the returned columns with a mask for just the columns present in the tabs (see the sketch after the code).
library(httr)
library(jsonlite)
library(magrittr)
library(rvest)
library(stringr)
library(tidyverse)
library(data.table)
headers = c(
'User-Agent' = 'Mozilla/5.0',
'X-Requested-With' = 'XMLHttpRequest'
)
data = list(
'country[]' = '6',
'sector' = '7,5,12,3,8,9,1,6,2,4,10,11',
'industry' = '81,56,59,41,68,67,88,51,72,47,12,8,50,2,71,9,69,45,46,13,94,102,95,58,100,101,87,31,6,38,79,30,77,28,5,60,18,26,44,35,53,48,49,55,78,7,86,10,1,34,3,11,62,16,24,20,54,33,83,29,76,37,90,85,82,22,14,17,19,43,89,96,57,84,93,27,74,97,4,73,36,42,98,65,70,40,99,39,92,75,66,63,21,25,64,61,32,91,52,23,15,80',
'equityType' = 'ORD,DRC,Preferred,Unit,ClosedEnd,REIT,ELKS,OpenEnd,Right,ParticipationShare,CapitalSecurity,PerpetualCapitalSecurity,GuaranteeCertificate,IGC,Warrant,SeniorNote,Debenture,ETF,ADR,ETC,ETN',
'exchange[]' = '109',
'exchange[]' = '127',
'exchange[]' = '51',
'exchange[]' = '108',
'pn' = '1', # this is page number and should be altered in a loop over all pages. 50 results per page i.e. rows
'order[col]' = 'eq_market_cap',
'order[dir]' = 'd'
)
get_data <- function(page_number) {
  data['pn'] <- page_number
  r <- httr::POST(url = 'https://www.investing.com/stock-screener/Service/SearchStocks',
                  httr::add_headers(.headers = headers), body = data)
  s <- r %>% read_html() %>% html_node('p') %>% html_text()
  if (page_number == 1) {
    return(s)
  } else {
    return(data.frame(jsonlite::fromJSON(str_match(s, '(\\[.*\\])')[1, 2])))
  }
}

clean_df <- function(df) {
  interim  <- df['viewData']
  df_minus <- subset(df, select = -c(viewData))
  df_clean <- cbind.data.frame(c(interim, df_minus))
  return(df_clean)
}

initial_data <- get_data(1)
df <- clean_df(data.frame(jsonlite::fromJSON(str_match(initial_data, '(\\[.*\\])')[1, 2])))
total_rows <- str_match(initial_data, '"totalCount\":(\\d+),')[1, 2] %>% as.integer()
num_pages <- ceiling(total_rows / 50)

dfs <- map(.x = 2:num_pages,
           .f = ~clean_df(get_data(.)))

r <- rbindlist(c(list(df), dfs), use.names = TRUE, fill = TRUE)
write_csv(r, 'data.csv')
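As for the masking idea mentioned above, a sketch is below; the column names are hypothetical placeholders, so inspect names(r) for the ones that actually correspond to the Fundamental, Technical, and Performance tabs.
# Hypothetical column mask -- replace with names you actually see in names(r)
wanted <- c("name", "symbol", "last", "eq_market_cap")
mask   <- intersect(wanted, names(r))
r_sub  <- r[, ..mask]   # data.table: select the columns listed in a character vector
write_csv(r_sub, 'data_subset.csv')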

Web scraping of key stats in Yahoo! Finance with R

Is anyone experienced in scraping data from the Yahoo! Finance key statistics page with R? I am familiar with scraping data directly from HTML using read_html(), html_nodes(), and html_text() from the rvest package. However, this web page (MSFT key stats) is a bit complicated, and I am not sure whether all the stats are kept in XHR, JS, or Doc; I am guessing the data is stored as JSON. If anyone knows a good way to extract and parse the data from this web page with R, kindly answer my question. Great thanks in advance!
Or, if there is a more convenient way to extract these metrics via quantmod or Quandl, kindly let me know; that would be an extremely good solution!
I know this is an older thread, but I used it to scrape Yahoo Analyst tables, so I figured I would share.
# Yahoo webscrape Analysts
library(XML)

symbol <- "HD"
# Build the URL from the symbol (the original hard-coded HD in the path)
url <- paste0('https://finance.yahoo.com/quote/', symbol, '/analysts?p=', symbol)

webpage <- readLines(url)
html <- htmlTreeParse(webpage, useInternalNodes = TRUE, asText = TRUE)
tableNodes <- getNodeSet(html, "//table")

earningEstimates <- readHTMLTable(tableNodes[[1]])
revenueEstimates <- readHTMLTable(tableNodes[[2]])
earningHistory   <- readHTMLTable(tableNodes[[3]])
epsTrend         <- readHTMLTable(tableNodes[[4]])
epsRevisions     <- readHTMLTable(tableNodes[[5]])
growthEst        <- readHTMLTable(tableNodes[[6]])
Cheers,
Sody
I gave up on Excel a long time ago. R is definitely the way to go for things like this.
library(XML)

stocks <- c("AXP","BA","CAT","CSCO")

for (s in stocks) {
  url <- paste0("http://finviz.com/quote.ashx?t=", s)
  webpage <- readLines(url)
  html <- htmlTreeParse(webpage, useInternalNodes = TRUE, asText = TRUE)
  tableNodes <- getNodeSet(html, "//table")

  # ASSIGN TO STOCK NAMED DFS
  assign(s, readHTMLTable(tableNodes[[9]],
                          header = c("data1", "data2", "data3", "data4", "data5", "data6",
                                     "data7", "data8", "data9", "data10", "data11", "data12")))

  # ADD COLUMN TO IDENTIFY STOCK
  df <- get(s)
  df['stock'] <- s
  assign(s, df)
}

# COMBINE ALL STOCK DATA
stockdatalist <- cbind(mget(stocks))
stockdata <- do.call(rbind, stockdatalist)

# MOVE STOCK ID TO FIRST COLUMN (note the parentheses around ncol(stockdata) - 1)
stockdata <- stockdata[, c(ncol(stockdata), 1:(ncol(stockdata) - 1))]

# SAVE TO CSV
write.table(stockdata, "C:/Users/your_path_here/Desktop/MyData.csv", sep = ",",
            row.names = FALSE, col.names = FALSE)

# REMOVE TEMP OBJECTS
rm(df, stockdatalist)
When I use the methods shown here with the XML library, I get a warning:
Warning in readLines(page) : incomplete final line found on
'https://finance.yahoo.com/quote/DIS/key-statistics?p=DIS'
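That warning is harmless; it just means the response body does not end with a newline. It can be silenced with readLines()'s warn argument:
# Suppress the "incomplete final line" warning; the content is read either way
webpage <- readLines("https://finance.yahoo.com/quote/DIS/key-statistics?p=DIS",
                     warn = FALSE)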
We can use rvest and xml2 for a cleaner approach. This example demonstrates how to pull a key statistic from the Yahoo! Finance key-statistics page; here I want to obtain the float of an equity. I don't believe float is available from quantmod, but some of the key stats values are; you'll have to reference the list (see the sketch after this example).
library(xml2)
library(rvest)

getFloat <- function(stock){
  url <- paste0("https://finance.yahoo.com/quote/", stock, "/key-statistics?p=", stock)
  tables <- read_html(url) %>%
    html_nodes("table") %>%
    html_table()

  # The float sits in the third table; grab the cell, note its unit suffix,
  # strip the letters, and scale accordingly
  float <- as.vector(tables[[3]][4, 2])
  last  <- substr(float, nchar(float), nchar(float))
  float <- gsub("[a-zA-Z]", "", float)
  float <- as.numeric(as.character(float))
  if (last == "k") {
    float <- float * 1000
  } else if (last == "M") {
    float <- float * 1000000
  } else if (last == "B") {
    float <- float * 1000000000
  }
  return(float)
}

getFloat("DIS")
[1] 1.81e+09
That's a lot of shares of Disney available.
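On the quantmod side mentioned above, a hedged sketch is below; quantmod's Yahoo endpoints and field names change occasionally, so treat the field list as an assumption and check yahooQF() for what is currently available.
library(quantmod)
# getQuote() can pull selected key-stat style fields; yahooQF() maps friendly
# field names to Yahoo's codes (availability depends on Yahoo's current API)
getQuote("MSFT", what = yahooQF(c("Market Capitalization",
                                  "P/E Ratio",
                                  "Earnings/Share")))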

Keep the architecture of table with multiple elements in one cell while crawling in R

The web page contains a table in which some cells hold more than one element. I can crawl the content of the table with the following code, but I cannot bind the elements back together according to the structure they have on the page. Is there a way to combine these elements properly, or should I use a different approach to get each element?
library(XML)

dataissued <- "http://www.irgrid.ac.cn/handle/1471x/294320/browse?type=dateissued"
ec_parsed <- htmlTreeParse(dataissued, encoding = "UTF-8", useInternalNodes = TRUE)

# gather content in the table and build the data frame
# title and introduction link of IR resource
item_title <- xpathSApply(ec_parsed, '//td[@headers="t1"]//a', xmlValue)
item_hrefs <- xpathSApply(ec_parsed, '//td[@headers="t1"]//a/@href')
# author and introduction link of IR resource
auth_name <- xpathSApply(ec_parsed, '//td[@headers="t2"]//a', xmlValue)
auth_hrefs <- xpathSApply(ec_parsed, '//td[@headers="t2"]//@href')
# publish date of IR resource
pub_date <- xpathSApply(ec_parsed, '//td[@headers="t3"]', xmlValue)
# whole content link of IR resource
con_link <- xpathSApply(ec_parsed, '//td[@headers="t3"]//a[@href]', xmlValue)

item_table <- cbind(item_title, item_hrefs, auth_name, auth_hrefs, pub_date, con_link)
colnames(item_table) <- c("t1", "href1", "t2", "href2", "t3", "t4")  # six columns bound above
I have tried many times but still cannot organise the result the way it appears on the page: one paper may have several authors, and all of the authors and their links should be kept in one "row", but at the moment each author ends up in its own row and the paper title is repeated. That makes the result messy.
This is one way to make a long data frame from that table:
library(rvest)
library(purrr)
library(tibble)

pg <- read_html("http://www.irgrid.ac.cn/handle/1471x/294320/browse?type=dateissued")

# extract the columns
col1 <- html_nodes(pg, "td[headers='t1']")
col2 <- html_nodes(pg, "td[headers='t2']")
col3 <- html_nodes(pg, "td[headers='t3']")

# this is the way to get the full text column
col4 <- html_nodes(pg, "td[headers='t3'] + td")

# now, iterate over the rows; map_df() will bind all our data.frame's together
map_df(1:length(col1), function(i) {

  # extract the links
  a1 <- xml_nodes(col1[i], "a")
  a2 <- xml_nodes(col2[i], "a")
  a4 <- xml_nodes(col4[i], "a")

  # put the row into a long data.frame for the row
  data_frame(title = html_text(a1, trim = TRUE),
             title_link = html_attr(a1, "href"),
             author = html_text(a2, trim = TRUE),
             author_link = html_attr(a2, "href"),
             issue_date = html_text(col3[i], trim = TRUE),
             full_text = html_attr(a4, "href"))
})
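If you would rather have one row per paper, with multiple authors collapsed into a single cell, a follow-up sketch (assuming the map_df() result above has been assigned to an object called papers):
library(dplyr)

# Collapse the long table back to one row per paper
papers %>%
  group_by(title, title_link, issue_date, full_text) %>%
  summarise(authors      = paste(author, collapse = "; "),
            author_links = paste(author_link, collapse = "; "),
            .groups = "drop")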
The biggest problem when using the "rvest" package is garbled characters. Even when the "encoding" parameter is supplied, the result still contains garbled text, although the web page encoding is UTF-8. For example:
library(rvest)
pg <- read_html("http://www.irgrid.ac.cn/handle/1471x/294320/browse?type=dateissued", encoding = "UTF-8")
In my tests the best results came from "XML": when I use the getNodeSet() function the text comes out correctly, with no garbled characters at all. However, I only get the whole node, and I could not gather each row of the table while keeping its structure.
library(XML)

pg <- "http://www.irgrid.ac.cn/handle/1471x/294320/browse?type=dateissued"
pg_tables <- getNodeSet(htmlParse(pg), "//table[@summary='This table browse all dspace content']")

# gather the nodes of the whole table
papernode <- getNodeSet(pg_tables[[1]], "//td[@headers='t1']")
paper_hrefs <- xpathSApply(papernode[[1]], '//a/@href')
paper_name <- xpathSApply(papernode[[1]], '//a', xmlValue)

# gather authors in the table
authnode <- getNodeSet(pg_tables[[1]], "//td[@headers='t2']")

# gather dates in the table
datenode <- getNodeSet(pg_tables[[1]], "//td[@headers='t3']")
With this program I can get these "nodes" separately. However, crawling the headers and their links seems to get harder, because the result class of getNodeSet() is not the same as that of html_nodes(). How can we read the result generated by getNodeSet() automatically and extract the headers and their links from these nodes in an exact way?
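One way to keep the row structure with XML is to iterate over the td nodes and use a relative XPath; note the leading dot, because without it "//a" searches the whole document rather than just the current node. A sketch, reusing papernode from the code above:
# For each t1 cell, pull only the titles and links inside that cell
paper_titles <- lapply(papernode, function(nd) xpathSApply(nd, ".//a", xmlValue))
paper_links  <- lapply(papernode, function(nd) xpathSApply(nd, ".//a/@href"))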

Web scraping techniques to obtain links that the website of interest contains

I am working with the following website:
http://www.crowdrise.com/skollsechallenge
Specifically, on this page there are 57 crowdfunding campaigns. Each of those campaigns has text detailing why they want to raise money, the total money raised so far, and the team members; some of the campaigns also specify the fundraising goal. I want to write some R code that will scrape and organize this information from each of the 57 sites.
For now, I am trying to scrape the 57 links that lead to the 57 different campaign pages.
Below is the code I tried:
library("RCurl")
library("XML")
library("stringr")
url <- "http://www.crowdrise.com/skollSEchallenge"
cat("URL:", url)
url.data <- readLines(url)
doc <- htmlTreeParse(url.data, useInternalNodes=TRUE)
xp_exp <- "//a[#href]"
links <- xpathSApply(doc, xp_exp,xmlValue)
The variable links, however, does not contain the links to the 57 campaign pages. I am a little confused; can someone help me?
Thanks.
Using this, for example:
xpathApply(doc, '//*[@id="teams-results"]/div/div/div/h4/a',
           xmlGetAttr, 'href')
you will get the 16 links on the first page. But you still have the problem of triggering the JavaScript behind the "SHOW MORE TEAMS" button to see the rest of the links (a hedged RSelenium sketch for that follows below).
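If driving the JavaScript is acceptable, a hedged RSelenium sketch is below. It assumes a working Selenium/chromedriver setup, and the "SHOW MORE TEAMS" link text and the CSS selector are guesses taken from the page, so confirm them with an "Inspect" before relying on this.
library(RSelenium)
library(rvest)

# Start a browser session (assumes chromedriver / Selenium is installed locally)
rd    <- rsDriver(browser = "chrome", verbose = FALSE)
remDr <- rd$client
remDr$navigate("http://www.crowdrise.com/skollSEchallenge")

# Keep clicking "SHOW MORE TEAMS" until the button can no longer be found
repeat {
  btn <- tryCatch(remDr$findElement("link text", "SHOW MORE TEAMS"),
                  error = function(e) NULL)
  if (is.null(btn)) break
  btn$clickElement()
  Sys.sleep(2)  # give the newly loaded teams time to render
}

# Hand the fully rendered page to rvest and pull the campaign links
doc   <- read_html(remDr$getPageSource()[[1]])
links <- html_attr(html_nodes(doc, "#teams-results h4 a"), "href")

remDr$close()
rd$server$stop()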
Alternatively, this very ugly and very verbose solution gets 32 of them without needing to evaluate any JavaScript:
library(httr)
x <- as.character(GET("http://www.crowdrise.com/skollSEchallenge"))
x <- unlist(strsplit(x, split = "\n", fixed = TRUE))
x <- gsub("\t", "", grep('class="profile">', x, value = TRUE, fixed = TRUE))
x <- unlist(strsplit(x, split = 'class="profile">', fixed = TRUE))[-1]
x <- gsub("\r<div class=\"content\">\r<a href=\"/", "", x, fixed = TRUE)
x <- substr(x, 1, as.integer(regexpr('\"><img', x)) - 1)
x <- paste("www.crowdrise.com/", x, sep = '')
