Subsetting Puts from getOptionChain in Quantmod - r

The result of running getOptionChain for each symbol is a list that includes both calls and puts.
I would like to subset the data and create a dataset that will include only the Puts.
This is the code I'm running to get the option chain. Now I need to subset and create a new dataset only for Puts.
library(quantmod)
Symbols <- c("AA","AAL","AAOI","ABBV","ABC","ABNB")
Options.20221111 <- lapply(Symbols, getOptionChain)
names(Options.20221111) <- Symbols
What is the best approach to get the Puts alone?

When working with lists, lapply is your friend.
only_puts_list <- lapply(Options.20221111, function(x) x$puts)
This will create a list with only the puts in there.
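If you then want all the puts in a single data frame rather than a list, one way is to stack the list elements (a sketch; the Symbol column is just an illustrative addition, not part of what getOptionChain returns):
# Stack the per-symbol put tables into one data frame,
# adding a column to record which underlying each row came from
only_puts_df <- do.call(rbind, Map(function(sym, puts) {
  puts$Symbol <- sym
  puts
}, names(only_puts_list), only_puts_list))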

Related

R: Doing the same steps on many data frames with their names stored in a vector

I have several .RData files, each of which has letters and numbers in its name, e.g. m22.RData. Each of these contains a single data.frame object with the same name as the file, e.g. m22.RData contains a data.frame object named "m22".
I can generate the file names easily enough with something like datanames <- paste0(c("m","n"),seq(1,100)) and then use load() on those, which will leave me with a few hundred data.frame objects named m1, m2, etc. What I am not sure of is how to do the next step -- prepare and merge each of these dataframes without having to type out all their names.
I can make a function that accepts a data frame as input and does all the processing. But if I pass it datanames[22] as input, I am passing it the string "m22", not the data frame object named m22.
My end goal is to repeatedly do the same steps on a bunch of different data frames without manually typing out "prepdata(m1) prepdata(m2) ... prepdata(n100)". I can think of two ways to do it, but I don't know how to implement either of them:
Get from a vector of the names of the data frames to a list containing the actual data frames.
Modify my "prepdata" function so that it can accept the name of the data frame, but then still somehow be able to do things to the data frame itself (possibly by way of "assign"? But the last step of the function will be to merge the prepared data to a bigger data frame, and I'm not sure if there's a method that uses "assign" that can do that...)
Can anybody advise on how to implement either of the above methods, or another way to make this work?
See this answer and the corresponding R FAQ
Basically:
temp1 <- c(1,2,3)
save(temp1, file = "temp1.RData")
x <- c()
x[1] <- load("temp1.RData")
get(x[1])
#> [1] 1 2 3
Assuming all your data exists in the same folder, you can create an R object with all the paths, then create a function that takes a path to an .RData file, reads it, and calls "prepdata". Finally, using the purrr package you can apply the same function to an input vector.
Something like this should work:
library(purrr)
rdata_paths <- list.files(path = "path/to/your/files", full.names = TRUE)
read_rdata <- function(path) {
  object_name <- load(path)  # load() returns the *name* of the loaded object, not the data
  return(get(object_name))
}
prepdata <- function(data) {
  ### your prepdata implementation
}
master_function <- function(path) {
  data <- read_rdata(path)
  result <- prepdata(data)
  return(result)
}
merged_rdatas <- map_df(rdata_paths, master_function) # This creates one dataset, merging everything together

Using Purrr to export a list

I have a list which contains sub-tables. I want to be able to use purrr to export the tables individually, named after the corresponding item in the list. In the case below I would get three files, one per species, each named with today's date.
library('purrr')
library('tidyverse')
mytest <- iris
mylist <- split(mytest,f = mytest$Species)
names(mylist)
# basically pseudo code for explanation purposes
write_excel_csv(mylist[1], names(mylist[1]))
I'm only learning how to use it effectively at the moment, so any help with the explanation and why you did it this way would be great.
I get that I could write a for loop to just iterate through the list, but I want to use this as a learning experience to start into purrr.
Thank you for your time.
Map from base R will work fine for something like this:
Map(write.csv, mylist, sprintf("%s-%s.csv", names(mylist), Sys.Date()))
list.files(pattern = "*.csv")
# [1] "setosa-2017-02-13.csv" "versicolor-2017-02-13.csv" "virginica-2017-02-13.csv"
Alternatively, walk2 (and probably several other functions in purrr) could be used in this manner.
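For instance, a walk2 version along the same lines might look like this (a sketch; readr's write_csv is used here as the writer, but write.csv would work the same way):
library(purrr)
library(readr)
# walk2 iterates over the tables and the file names in parallel,
# calling write_csv purely for its side effect of writing each file
walk2(mylist,
      sprintf("%s-%s.csv", names(mylist), Sys.Date()),
      write_csv)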

Using a loop to create multiple data frames in R

I have this function that returns a data frame of JSON data from the NBA stats website. The function takes in the game ID of a certain game and returns a data frame of the halftime box score for that game.
getstats <- function(game = x){
  for(i in game){
    url <- paste0("http://stats.nba.com/stats/boxscoretraditionalv2?EndPeriod=10",
                  "&EndRange=14400&GameID=", i,
                  "&RangeType=2&Season=2015-16&SeasonType=Regular+Season",
                  "&StartPeriod=1&StartRange=0000")
    json_data <- fromJSON(paste(readLines(url), collapse = ""))
    df <- data.frame(json_data$resultSets[1, "rowSet"])
    names(df) <- unlist(json_data$resultSets[1, "headers"])
  }
  return(df)
}
So what I would like to do with this function is take a vector of several game ID's and create a separate data frame for each one. For example:
gameids<- as.character(c(0021500580:0021500593))
I would want to take the vector "gameids", and create fourteen data frames. If anyone knew how I would go about doing this it would be greatly appreciated! Thanks!
You can save your data.frames into a list by setting up the function as follows:
getstats <- function(games){
  listofdfs <- list() # Create a list in which you intend to save your df's.
  for(i in 1:length(games)){ # Loop through the numbers of ID's instead of the ID's
    # You are going to use games[i] instead of i to get the ID
    url <- paste0("http://stats.nba.com/stats/boxscoretraditionalv2?EndPeriod=10",
                  "&EndRange=14400&GameID=", games[i],
                  "&RangeType=2&Season=2015-16&SeasonType=Regular+Season",
                  "&StartPeriod=1&StartRange=0000")
    json_data <- fromJSON(paste(readLines(url), collapse = ""))
    df <- data.frame(json_data$resultSets[1, "rowSet"])
    names(df) <- unlist(json_data$resultSets[1, "headers"])
    listofdfs[[i]] <- df # save your dataframes into the list
  }
  return(listofdfs) # Return the list of dataframes.
}
gameids<- as.character(c(0021500580:0021500593))
getstats(games = gameids)
Please note that I could not test this because the URLs do not seem to be working properly. I get the connection error below:
Error in file(con, "r") : cannot open the connection
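As a small follow-up, you could also label each element of the returned list with its game ID, which makes individual games easier to look up later (a sketch, assuming the call above succeeds):
results <- getstats(games = gameids)
names(results) <- gameids # e.g. results[[gameids[1]]] is the first game's box score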
Adding to Abdou's answer, you could create dynamic data frames to hold results from each gameID using the assign() function
for(i in 1:length(games)){ # Loop through the numbers of ID's instead of the ID's
  # You are going to use games[i] instead of i to get the ID
  url <- paste0("http://stats.nba.com/stats/boxscoretraditionalv2?EndPeriod=10",
                "&EndRange=14400&GameID=", games[i],
                "&RangeType=2&Season=2015-16&SeasonType=Regular+Season",
                "&StartPeriod=1&StartRange=0000")
  json_data <- fromJSON(paste(readLines(url), collapse = ""))
  df <- data.frame(json_data$resultSets[1, "rowSet"])
  names(df) <- unlist(json_data$resultSets[1, "headers"])
  # create a data frame for this game's results under a dynamic name
  assign(paste('X', i, sep = ''), df)
}
The assign function will create as many data frames as there are game IDs. They will be labelled X1, X2, X3, ..., Xn. Hope this helps.
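If you later want those individually assigned data frames gathered back into one list (for binding or further processing), mget can collect them by name; a small sketch based on the X1, X2, ... naming above:
# Collect the dynamically created data frames back into a named list
df_names <- paste0("X", seq_along(games))
all_games <- mget(df_names)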
Use lapply (or sapply) to apply a function to a list and get the results back as a list. So if you have a vector of several game IDs and a function that does what you want, you can use lapply to get a list of data frames (since your function returns df).
I haven't been able to test your code (I got an error with the function you provided), but something like this should work:
library(RJSONIO)

getstats <- function(game = x){
  url <- paste0("http://stats.nba.com/stats/boxscoretraditionalv2?EndPeriod=10&EndRange=14400&GameID=",
                game,
                "&RangeType=2&Season=2015-16&SeasonType=Regular+Season&StartPeriod=1&StartRange=0000")
  json_data <- fromJSON(paste(readLines(url), collapse = ""))
  df <- data.frame(json_data$resultSets[1, "rowSet"])
  names(df) <- unlist(json_data$resultSets[1, "headers"])
  return(df)
}

gameids <- as.character(c(0021500580:0021500593))
df_list <- lapply(gameids, getstats)
df_list will contain one dataframe per ID you provided in gameids.
Just use lapply again for additional data processing, including saving the dataframes to disk.
data.table is a nice package if you have to deal with a ton of data. In particular, rbindlist allows you to rbind all the data.tables (or data frames) contained in a list into a single one if needed (split will do the reverse).
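As a minimal sketch of that last point, assuming df_list is the list produced above:
library(data.table)
# Stack all per-game data frames into one data.table;
# idcol records which list element (game) each row came from
all_games_dt <- rbindlist(df_list, idcol = "game")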

R Apply function to list and create new dataframe

I am wanting to retrieve data from several webpages that is in the same place on all the pages and put it all in one data frame.
I have the following code attempt:
library(XML)
library(plyr)
## the urls
raceyears <- list(url2013, url2012, url2011)

## function that is not producing what I want
raceyearfunction <- function(x){
  page <- readLines(x)
  stats <- page[10:19]
  y <- read.table(textConnection(stats))
  run <- data.frame(y$V1, y$V2)
  colnames(run) <- c("Country","Participants")
  rbind.fill(run)
}
data<-llply(raceyears,raceyearfunction)
This places all the data in multiple columns (two columns for each webpage), but I want all the data in just two columns (Country, Participants) in a single data frame, not many columns in one data frame.
I haven't found a question quite like this already on the site, but I am open to following a link. Thank you in advance.
You need to do the row binding outside of raceyearfunction: let it return(run) without the rbind.fill(run) call.
You can then use ldply instead of llply, which returns an already-bound data.frame.
library(XML)
library(plyr)
raceyears <- list(url2013,url2012,url2011)
raceyearfunction <- function(x)
{
  page <- readLines(x)
  stats <- page[10:19]
  y <- read.table(textConnection(stats))
  run <- data.frame(y$V1, y$V2)
  colnames(run) <- c("Country","Participants")
  return(run)
}
data<-ldply(raceyears, raceyearfunction)

R extract variable from multiple dataframe in loop

I have a lot of results from a parametric study to analyze. Fortunately there is an index file where the locations of the output files are saved, so I need to read in the file names. I used this routine:
IndexJobs <- read.csv("C:/Users/.../File versione7.1/IndexJobs.csv",
                      sep = ",", header = TRUE, stringsAsFactors = FALSE)
dir <- IndexJobs$WORKDIR
Dir <- gsub("\\\\", "/", dir)
Dir1 <- gsub(" C", "C", Dir)
Now I use a for loop in order to read each CSV and create a different data frame:
for(i in Dir1){
  filepath <- file.path(paste(i, "eplusout.csv", sep = ""))
  dat <- NULL
  dat <- read.table(filepath, header = TRUE, sep = ",")
  filenames <- substr(filepath, 117, 150)
  names <- substr(filenames, 1, 21)
  assign(names, dat)
}
Now I want to extract selected variables from each data frame, and put each variable from every data frame together into a separate data frame. I would also like to combine the variable name with the data frame name so I have a clear dataset for analysis. I tried to make something, but with bad results.
I tried to add some extra rows inside the for loop:
for(i in Dir1){
  filepath <- file.path(paste(i, "eplusout.csv", sep = ""))
  dat <- NULL
  dat <- read.table(filepath, header = TRUE, sep = ",")
  filenames <- substr(filepath, 117, 150)
  names <- substr(filenames, 1, 21)
  assign(names, dat)
  datTest <- dat$X5EC132.Surface.Outside.Face.Temperature..C..TimeStep.
  nameTest <- paste(names, "_Test", sep = "")
  assign(nameTest, datTest)
  DFtest=c[,nameTest]
}
But for each i, DFtest is overwritten and only the last data frame's column remains.
Any suggestions? Thanks.
Maybe it will work if you replace DFtest=c[,nameTest] with
DFtest[nameTest] <- get(nameTest)
or, alternatively,
DFtest[nameTest] <- datTest
This procedure assumes the object DFtest exists before you run the loop.
An alternative way is to create an empty list before running the loop:
DFtest <- list()
In the loop, you can use the following command:
DFtest[[nameTest]] <- datTest
After the loop, all values in the list DFtest can be combined using
do.call("cbind", DFtest)
Note that this will only work if all vectors in the list DFtest have the same length.
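Putting that list-based variant together with the loop from the question, the whole thing might look roughly like this (a sketch that reuses the question's variable names and column name):
DFtest <- list()
for(i in Dir1){
  filepath <- file.path(paste(i, "eplusout.csv", sep = ""))
  dat <- read.table(filepath, header = TRUE, sep = ",")
  names <- substr(substr(filepath, 117, 150), 1, 21)  # same name extraction as in the question
  nameTest <- paste(names, "_Test", sep = "")
  # store the extracted column in the list under its name
  DFtest[[nameTest]] <- dat$X5EC132.Surface.Outside.Face.Temperature..C..TimeStep.
}
# combine all the columns (they must all have the same length)
DFtest_combined <- do.call("cbind", DFtest)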
