Ran into another problem. I have a for loop that iterates over URLs to scrape batting information from a table with the id batting_gamelogs. If that id does not exist on the page, I want to move on to the next URL; otherwise, scrape the table.
I think it should be something like this below, but I can't get it to work.
if xpathSApply(batting, '//*[@id != "batting_gamelogs"]')[[1]] next
else
{
tableNode <- xpathSApply(batting, '//*[@id="batting_gamelogs"]')[[1]]
data <- readHTMLTable(tableNode, stringsAsFactors = FALSE)
data # select the first table
total <- cbind(id,year,data)
batlist <- rbind(batlist, total)
}
I have attached sample code.
#SCRAPE BATTING STATS
data = NULL
batlist = NULL
battingURLs <- paste("http://www.baseball-reference.com",yplist[,c("hrefs")],sep="")
for(thisbattingURL in battingURLs){
batting <- htmlParse(thisbattingURL)
fstampid <- regexpr("&", thisbattingURL, fixed=TRUE)-1
fstampyr <- regexpr("year=", thisbattingURL, fixed=TRUE)+5
id <- substr(thisbattingURL, 53, fstampid)
year <- substr(thisbattingURL, fstampyr, 75)
tableNode <- xpathSApply(batting, '//*[@id="batting_gamelogs"]')[[1]]
data <- readHTMLTable(tableNode, stringsAsFactors = FALSE)
data # select the first table
total <- cbind(id,year,data)
batlist <- rbind(batlist, total)
}
batlist
Any help is much appreciated!
I can't get it to work.
This phrase should always be a reminder to tell what actually happened (and how it differs from what you expected to happen). I suspect what happened was that it skipped too often (vs. not skipping when it should have). But you could tell us that, instead of leaving us to figure it out.
if xpathSApply(batting, '//*[@id != "batting_gamelogs"]')[[1]] next
The "not" is in the wrong place. Here, you're saying, skip this iteration if there is an element on the page that has an id attribute whose value is not batting_gamelogs. Instead you want to skip this iteration if there is no element on the page that has an id attribute whose value is batting_gamelogs.
So, use this for your XPath expression:
'//*[@id = "batting_gamelogs"]'
and put the "not" outside of xpathSApply(), by testing whether the length of the result list is zero (thanks to the answer at https://stackoverflow.com/a/25553805/423105):
if (length(xpathSApply(batting, '//*[@id = "batting_gamelogs"]')) == 0) next
I took out the [[1]] because you just want to test whether any values are returned; you don't care about extracting the first result.
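Putting that guard back into the loop from the question, a minimal sketch (assuming the same setup as the question: XML loaded, battingURLs built, and the id/year extraction kept as-is) could look like this:

```r
# Sketch: skip any page that lacks a batting_gamelogs table,
# otherwise scrape it and append the result
library(XML)

batlist <- NULL
for (thisbattingURL in battingURLs) {
  batting <- htmlParse(thisbattingURL)
  # skip this URL if no element with id="batting_gamelogs" exists
  if (length(xpathSApply(batting, '//*[@id = "batting_gamelogs"]')) == 0) next
  tableNode <- xpathSApply(batting, '//*[@id = "batting_gamelogs"]')[[1]]
  data <- readHTMLTable(tableNode, stringsAsFactors = FALSE)
  batlist <- rbind(batlist, cbind(id, year, data))
}
```

The id and year substr() extraction from the question is omitted here for brevity but would sit just before the guard, as in the original loop.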
Related
Looking to scrape a table from this site:
https://dr16.sdss.org/optical/spectrum/search?id=285009
(based on https://labrtorian.com/2017/08/14/parse-an-online-table-into-an-r-dataframe-westgards-biological-variation-database/ and https://www.r-bloggers.com/2015/01/using-rvest-to-scrape-an-html-table/).
All the data is numeric.
The following libraries were employed -
library(httr)
library(rvest)
library(xlsx)
astro <- session("https://dr16.sdss.org/optical/spectrum/search?id=285009")
xpaths <- paste0('//*[@id="results"]/div[1]/div[2][', 1:1, ']')
spectable = data.frame(matrix(NA, 0, 10))
#loop over the 1 html table
for (j in 1:1){
subtable <- astro %>%
read_html() %>%
html_nodes(xpath = xpaths[j]) %>%
html_table(., fill = TRUE)
subtable <- subtable[[1]]
spectable <- rbind(spectable, subtable)
}
## Error in subtable[[1]] : subscript out of bounds
table.header <- c("Plate", "MJD" ,"FibreID", "specobj_id", "RA", "Dec", "z","zerr", "S/N", "class")
names(spectable) <- table.header
There is only 1 table; the various fields are populated by the user completing certain fields, e.g. say Plate = 400. There is an option to set the number of rows in the table, say 10.
I am unable to figure out how to define the table from within the webpage using XPaths; the Elements panel does not clearly show the table. The best I could come up with was
xpaths <- paste0("//*[@id="results"]/div[1]/div[2][", 1:1, "]")
which returned the error
"unexpected symbol in ..."
A second issue, presumably related to the first, is defining the row names. The script uses this
row.names(spectable) <- 1:nrow(spectable)
where nrow = number of rows in the table.
The returned error is
Error in .rowNamesDF<-(x, value = value) : invalid 'row.names' length
Would appreciate any enlightenment. Thank you.
I want to extract data from the OECD website, particularly the dataset "REGION_ECONOM" with the dimensions "GDP" (GDP of the respective regions) and "POP_AVG" (the average population of the respective regions).
This is the first time I am doing this:
I picked all the required dimensions on the OECD website and copied the SDMX (XML) link.
I tried to load them into R and convert them to a data frame with the following code:
(in the link I replaced the list of all regions with "ALL" as otherwise the link would have been six pages long)
if (!require(rsdmx)) install.packages('rsdmx'); library(rsdmx)
url2 <- "https://stats.oecd.org/restsdmx/sdmx.ashx/GetData/REGION_ECONOM/1+2.ALL.SNA_2008.GDP+POP_AVG.REAL_PPP.ALL.1990+1991+1992+1993+1994+1995+1996+1997+1998+1999+2000+2001+2002+2003+2004+2005+2006+2007+2008+2009+2010+2011+2012+2013+2014+2015+2016+2017+2018/all?"
sdmx2 <- readSDMX(url2)
stats2 <- as.data.frame(sdmx2)
head(stats2)
Unfortunately, this returns a "400 Bad request" error.
When just selecting a couple of regions the error does not appear:
if (!require(rsdmx)) install.packages('rsdmx'); library(rsdmx)
url1 <- "https://stats.oecd.org/restsdmx/sdmx.ashx/GetData/REGION_ECONOM/1+2.AUS+AU1+AU101+AU103+AU104+AU105.SNA_2008.GDP+POP_AVG.REAL_PPP.ALL.1990+1991+1992+1993+1994+1995+1996+1997+1998+1999+2000+2001+2002+2003+2004+2005+2006+2007+2008+2009+2010+2011+2012+2013+2014+2015+2016+2017+2018/all?"
sdmx1 <- readSDMX(url1)
stats1 <- as.data.frame(sdmx1)
head(stats1)
I also tried to use the "OECD" package to get the data. There I had the same problem. ("400 Bad Request")
if (!require(OECD)) install.packages('OECD'); library(OECD)
df1<-get_dataset("REGION_ECONOM", filter = "GDP+POP_AVG",
start_time = 2008, end_time = 2009, pre_formatted = TRUE)
However, when I use the package for other data sets it does work:
df <- get_dataset("FTPTC_D", filter = "FRA+USA", pre_formatted = TRUE)
Does anyone know where my mistake could lie?
The SDMX-ML API does not seem to work as explained (using the "ALL" parameter), whereas the JSON API works just fine. The following query returns the values for all countries as JSON; I simply replaced "ALL" with an empty field.
query <- "https://stats.oecd.org/sdmx-json/data/REGION_ECONOM/1+2..SNA_2008.GDP+POP_AVG.REAL_PPP.ALL.1990+1991+1992+1993+1994+1995+1996+1997+1998+1999+2000+2001+2002+2003+2004+2005+2006+2007+2008+2009+2010+2011+2012+2013+2014+2015+2016+2017+2018/all?"
Transforming it to a readable format is not so trivial. I played around a bit to find the following work-around:
# send a GET request using httr; parse_json() below comes from jsonlite
library(httr)
library(jsonlite)
query <- "https://stats.oecd.org/sdmx-json/data/REGION_ECONOM/1+2..SNA_2008.GDP+POP_AVG.REAL_PPP.ALL.1990+1991+1992+1993+1994+1995+1996+1997+1998+1999+2000+2001+2002+2003+2004+2005+2006+2007+2008+2009+2010+2011+2012+2013+2014+2015+2016+2017+2018/all?"
dat_raw <- GET(query)
dat_parsed <- parse_json(content(dat_raw, "text")) # parse the content
Next, access the observations from the nested list and transform them to a matrix. Also extract the features from the keys:
dat_obs <- dat_parsed[["dataSets"]][[1]][["observations"]]
dat0 <- do.call(rbind, dat_obs) # get a matrix
new_features <- matrix(as.numeric(do.call(rbind, strsplit(rownames(dat0), ":"))), nrow = nrow(dat0))
dat1 <- cbind(new_features, dat0) # add feature columns
dat1_df <- as.data.frame(dat1) # optionally transform to data frame
Finally, you want to find out about the keys. Those are hidden in the "structure" element, which you also need to parse correctly, so I wrote a function to make it easier to extract the values and ids:
## Get keys of features
keys <- dat_parsed[["structure"]][["dimensions"]][["observation"]]
for (i in 1:length(keys)) print(paste("id position:", i, "is feature", keys[[i]]$id))
# apply keys
get_features <- function(data_input, keys_input, feature_index, value = FALSE) {
keys_temp <- keys_input[[feature_index]]$values
keys_temp_matrix <- do.call(rbind, keys_temp)
keys_temp_out <- keys_temp_matrix[, value + 1][unlist(data_input[, feature_index])+1] # column 1 is id, 2 is value
return(unlist(keys_temp_out))
}
head(get_features(dat1_df, keys, 7))
head(get_features(dat1_df, keys, 2, value = FALSE))
head(get_features(dat1_df, keys, 2, value = TRUE))
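To tie it together, you could decode every key column into its label and attach the observation values. This is an untested sketch: it assumes, as in the code above, that the first length(keys) columns of dat1_df hold the key indices and that the column right after them holds the observation value.

```r
# Sketch: build a labeled data frame from the decoded key columns
decoded <- data.frame(lapply(seq_along(keys),
                             function(i) get_features(dat1_df, keys, i)))
names(decoded) <- sapply(keys, function(k) k$id)  # dimension ids as headers
decoded$value <- unlist(dat1_df[[length(keys) + 1]])  # first observation column
head(decoded)
```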
I hope that helps you in your project.
Best, Tobias
My code below scrapes data from IMDB across multiple pages. However, when I try to combine the data into one data frame, I get an error about the differing numbers of rows for gross and metascore. How would I go about inserting NA values in the empty places so the vectors are equal in length? (Note: I had to remove some links because I need a certain rep to post more links.)
urls <- c("https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=51&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=101&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=151&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=201&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=251&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=301&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=351&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=401&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=451&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=501&ref_=adv_nxt",
"https://www.imdb.com/search/title?title_type=feature&release_date=2010-01-01,2017-12-31&start=551&ref_=adv_nxt",
"https://www.imdb.com/search/title?
)
results_list <- list()
for(.page in seq_along(urls)){
webpage <- read_html(urls[[.page]])
titlehtml <- html_nodes(webpage,'.lister-item-header a')
title <- html_text(titlehtml)
runtimehtml <- html_nodes(webpage,'.text-muted .runtime')
runtime <- html_text(runtimehtml)
runtime <- gsub(" min","",runtime)
ratinghtml <- html_nodes(webpage,'.ratings-imdb-rating strong')
rating<- html_text(ratinghtml)
voteshtml <- html_nodes(webpage,'.sort-num_votes-visible span:nth-child(2)')
votes <- html_text(voteshtml)
votes<-gsub(",","",votes)#removing commas
metascorehtml <- html_nodes(webpage,'.metascore')
metascore <- html_text(metascorehtml)
metascore<-gsub(" ","",metascore)#removing extra space in metascore
grosshtml <- html_nodes(webpage,'.ghost~ .text-muted+ span')
gross <- html_text(grosshtml)
gross<-gsub("M","",gross)#removing 'M' sign
gross<-substring(gross,2,6)#dropping the leading '$'
results_list[[.page]] <- data.frame(Title = title,
Runtime = as.numeric(runtime),
Rating = as.numeric(rating),
Metascore = as.numeric(metascore),
Votes = as.numeric(votes),
Gross_Earning_in_Mil = as.numeric(unlist(gross))
)
}
final_results <- plyr::ldply(results_list)
Error in data.frame(Title = title, Runtime = as.numeric(runtime), Rating = as.numeric(rating), :
arguments imply differing number of rows: 50, 49, 48
You need to know where your data is missing, so you need to know which items belong together. Right now you just have separate vectors of values, so you don't know which belong together.
Looking at the page, it looks like they are neatly organized into "lister-item-content" nodes, so the clean thing to do is to first extract those nodes, and only then pull more info out of each unit separately. Something like this works for me:
items <- html_nodes(webpage,'.lister-item-content')
gross <- sapply(items, function(i) {html_text(html_node(i, '.ghost~ .text-muted+ span'))})
It inserts NA at every place where an item in 'items' does not contain the node you're looking for.
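Applied to the loop in the question, a sketch of this per-item approach (keeping the question's own CSS selectors, and relying on html_node() returning a missing node, which html_text() turns into NA) would be:

```r
# Sketch: one value per lister item, so missing fields become NA
# instead of silently shortening the vector
library(rvest)

items <- html_nodes(webpage, '.lister-item-content')
get_field <- function(css) {
  sapply(items, function(i) html_text(html_node(i, css)))
}

title     <- get_field('.lister-item-header a')
metascore <- as.numeric(gsub(" ", "", get_field('.metascore')))
gross     <- get_field('.ghost~ .text-muted+ span')
gross     <- as.numeric(substring(gsub("M", "", gross), 2, 6))
# every vector now has length(items) entries, NA where a field is absent
```

With equal-length vectors, the data.frame() call at the end of the loop no longer fails with "arguments imply differing number of rows".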
I have the following .csv file:
https://drive.google.com/open?id=0Bydt25g6hdY-RDJ4WG41VFpyX1k
And I would like to be able to take the date and agent name (pasting its constituent parts together) and append them as columns to the right of the table, repeating them until a different name and date is found, then doing the same for the remaining name and date items, to get the following result:
The only thing I have been able to do with the dplyr package is the following:
library(dplyr)
library(stringr)
report <- read.csv(file ="test15.csv", head=TRUE, sep=",")
date_pattern <- "(\\d+/\\d+/\\d+)"
date <- str_extract(report[,2], date_pattern)
report <- mutate(report, date = date)
Which gives me the following result:
The difficulty I am finding is probably using conditionals in order to make the script get the appropriate string and append it as a column at the end of the table.
This might be crude, but I think it illustrates several things: a) setting stringsAsFactors=F; b) "pre-allocating" the columns in the data frame; and c) using the column name instead of column number to set the value.
report<-read.csv('test15.csv', header=T, stringsAsFactors=F)
# first, allocate the two additional columns (with NAs)
report$date <- rep(NA, nrow(report))
report$agent <- rep(NA, nrow(report))
# step through the rows
for (i in 1:nrow(report)) {
# grab current name and date if "Agent:"
if (report[i,1] == 'Agent:') {
currDate <- report[i+1,2]
currName=paste(report[i,2:5], collapse=' ')
# otherwise append the name/date
} else {
report[i,'date'] <- currDate
report[i,'agent'] <- currName
}
}
write.csv(report, 'test15a.csv')
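As an alternative, the carry-forward logic can be expressed without an explicit loop using tidyr::fill(). This is an untested sketch against the unseen CSV; it assumes the same layout as the loop above (an "Agent:" marker in column 1, the name in columns 2:5 of that row, and the date in column 2 of the following row):

```r
# Sketch: derive date/agent on the rows where they appear,
# carry them down with fill(), then drop the "Agent:" header rows
library(dplyr)
library(tidyr)

report <- read.csv("test15.csv", header = TRUE, stringsAsFactors = FALSE)
is_agent <- report[[1]] == "Agent:"
report$agent <- ifelse(is_agent,
                       apply(report[, 2:5], 1, paste, collapse = " "), NA)
report$date  <- ifelse(c(FALSE, head(is_agent, -1)), report[[2]], NA)
report <- report %>%
  fill(agent, date, .direction = "down") %>%
  filter(!is_agent)
write.csv(report, "test15a.csv")
```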
This code will be used to count the number of links in my tweets collection. The collection was gathered from 10 accounts. The question is: how could I loop through the ten accounts in one piece of code and put the output in a table or graph? "Unames" holds the names of the accounts. Thanks in advance.
mydata <- read.csv("tweets.csv",sep=",", header=TRUE)
head(mydata)
dim(mydata)
colnames(mydata)
#tweets for each university
table(mydata$University)
Unames<- unique(mydata$University)
mystring <- function(Uname, string){
mydata_temp <- subset(mydata,University==Uname)
mymatch <- rep(NA,dim(mydata_temp)[1])
for(i in 1:dim(mydata_temp)[1]){
mymatch[i] <- length(grep(string, mydata_temp[i,2]))
}
return(mymatch)
}
#web link, e.g. (here I would like to see the total links for all universities in a table or graph; the code below only gives me the output one account at a time)
mylink <- mystring(Unames[1],"http://")
So my suspicions are wrong, and you do have a body of data for which this command produces the desired results:
mylink <- mystring(Unames[1],"http://")
In that case, you should just do this:
links_list <- lapply(Unames, mystring, "http://")
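To get from that list of per-tweet match vectors to a table or graph, you could then sum each element. A short sketch (assuming mystring() and Unames as defined in the question):

```r
# Sketch: total link count per account, as a named vector,
# then a simple bar chart of the totals
links_list  <- lapply(Unames, mystring, "http://")
link_totals <- sapply(links_list, sum)
names(link_totals) <- Unames

link_totals                    # table of totals, one per university
barplot(link_totals, las = 2)  # bar chart, labels perpendicular to axis
```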