Rows not being added to data frame in R

So, I've been trying to do a simulation study of an SIR model.
I have the following code (tried to clean it up):
# Initial parameters
N <- 1E6   # Total population
I <- 1     # Number of Infectious at time 0
S <- N - 1 # Number of Susceptibles at time 0
R <- 0     # Number of Recovered at time 0
# Vector to store observations in
Df1 <- data.frame("final_size" = as.numeric(), "peak_size" = as.numeric())
# Setting a seed for reproducibility
set.seed(1996)
n_sim <- 100
# Setting different values for R0, the basic reproduction number
R0 <- seq(0.5, 2.5, 1)
for (values in R0) {
  if (values == 0.5) {
    # Transmission parameters
    R0 <- values  # Basic Reproduction number
    nu <- 1/6     # Recovery rate (in days)
    b <- nu*R0/N  # Infection rate (in days)
    for (sim in n_sim) {
      temp <- NULL
      # Binomial model
      #----------------
      # Initial states
      Sold = S # Number of Susceptibles at time t=0
      Iold = I # Number of Infectious at time t=0
      Rold = R # Number of Recovered at time t=0
      # Output vectors
      Svec = Sold; Ivec = Iold; Rvec = Rold
      stop = FALSE
      # Loop - continue until stop=TRUE
      while (!stop) {
        Ih = rbinom(1, Sold, (1 - exp(-b*Iold)))
        Rh = rbinom(1, Iold, (1 - exp(-nu)))
        Rh = nu*Iold
        Sold = Sold - Ih
        Iold = Iold + Ih - Rh
        Rold = Rold + Rh
        Svec = c(Svec, Sold)
        Ivec = c(Ivec, Iold)
        Rvec = c(Rvec, Rold)
        if (Iold <= 2e-5) {stop = T}
      }
      peak_size_df <- max(Ivec)
      final_size_df <- Rvec[length(Rvec)]/N
      temp <- rbind(temp, c(final_size_df, peak_size_df))
      colnames(temp) <- c("final_size", "peak_size")
      Df1 <- rbind(Df1, temp)
    }
  }
}
I'm looking to store data in Df1. However, at the end of the loop only one iteration has been stored, I assume the last one, which I don't properly understand. I've recoded it a few times, and in those cases I ended up with 99 NA rows for final size and peak size. In this version I only end up with one row (with values, however). I plan on expanding the loop for the different values of R0 in the R0 sequence, but since it doesn't work for the first value I haven't expanded it yet.
Any suggestions? Improvements?
After comments from Gregory and Cath, I made the following adjustments:
# Setting a seed for reproducibility
#set.seed(1996)
n_sim <- 100
# Initial parameters
N <- 1E6   # Total population
I <- 1     # Number of Infectious at time 0
S <- N - 1 # Number of Susceptibles at time 0
R <- 0     # Number of Recovered at time 0
# Vector to store observations in
Df1 <- data.frame("final_size" = rep(NA, n_sim), "peak_size" = rep(NA, n_sim))
Df2 <- NULL
Df3 <- NULL
# Setting different values for R0, the basic reproduction number
R0 <- seq(0.5, 2.5, 1)
#plot(Svec, type = "l", ylim = c(0, 1000000), col = "red")
#lines(Rvec, type = "l", col = "blue")
#lines(Ivec, type = "l")
#max(Ivec)
for (values in R0) {
  if (values == 0.5) {
    # Transmission parameters
    R0_value <- values  # Basic Reproduction number
    nu <- 1/6           # Recovery rate (in days)
    b <- nu*R0_value/N  # Infection rate (in days)
    for (sim in n_sim) {
      # Binomial model
      #----------------
      # Initial states
      Sold = S # Number of Susceptibles at time t=0
      Iold = I # Number of Infectious at time t=0
      Rold = R # Number of Recovered at time t=0
      # Output vectors
      Svec = Sold; Ivec = Iold; Rvec = Rold
      stop = FALSE
      # Loop - continue until stop=TRUE
      while (!stop) {
        Ih = rbinom(1, Sold, (1 - exp(-b*Iold)))
        Rh = rbinom(1, Iold, (1 - exp(-nu)))
        Rh = nu*Iold
        Sold = Sold - Ih
        Iold = Iold + Ih - Rh
        Rold = Rold + Rh
        Svec = c(Svec, Sold)
        Ivec = c(Ivec, Iold)
        Rvec = c(Rvec, Rold)
        if (Iold <= 2e-5) {stop = T}
      }
      peak_size_df <- max(Ivec)
      final_size_df <- Rvec[length(Rvec)]/N
      Df1[sim, "final_size"] <- final_size_df
      Df1[sim, "peak_size"] <- peak_size_df
    }
  }
}
This gives me 99 rows of NA, with row 100 holding the values from the last simulation.
Any idea what is causing the NAs?
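For what it's worth, the symptom matches the inner loop header: in R, for (sim in n_sim) iterates over the values contained in n_sim, which here is the single value 100, so the body runs once with sim == 100 and only row 100 is ever filled. A minimal sketch of that mechanic, with placeholder draws standing in for a full epidemic simulation:
n_sim <- 100
Df1 <- data.frame(final_size = rep(NA_real_, n_sim),
                  peak_size  = rep(NA_real_, n_sim))
# seq_len(n_sim) yields 1, 2, ..., 100; bare n_sim yields just the value 100,
# which is why only the last row was being written
for (sim in seq_len(n_sim)) {
  Df1[sim, "final_size"] <- runif(1)       # placeholder for final_size_df
  Df1[sim, "peak_size"]  <- rpois(1, 1000) # placeholder for peak_size_df
}
sum(is.na(Df1)) # 0: every row is filled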

Related

R Expanding Window RandomForest, Accuracy Not Dropping Off With Increases in Lag

I am trying to make a binary prediction (predicting QQQ states) using 16 input variables. My data set covers 2001-2022; the original post included a snapshot of it (the target is X0, the state 5 days ahead).
First, I use cross-validation with an 80-20 train/test split on the 2001-2017 data in order to test the accuracy of a potential model.
However, since I want the model making forward predictions, I train it on the 2001-2017 data set and make a chronological prediction for the 2018-2022 data set. Understandably, the accuracy drops off.
To improve the accuracy, I run an expanding-window prediction model, where I keep retraining the model on all prior available observations in order to predict the next state in the data set. For each model I increment the training set by one date. The output is a 2018-2022 prediction of states where the state for each date was predicted using a different training set. This should ideally also help the model train on new market conditions/phases. The accuracy improves.
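As an aside, here is a minimal, self-contained sketch of that expanding-window scheme, using synthetic data and a plain logistic regression as a stand-in for the RandomForest (all names here are illustrative, not from the original code):
set.seed(1)
n   <- 300
dat <- data.frame(x = rnorm(n))
dat$y <- factor(rbinom(n, 1, plogis(dat$x)))
start <- 200 # initial training window (analogous to 2001-2017)
pred  <- factor(rep(NA, n - start), levels = levels(dat$y))
for (a in start:(n - 1)) {
  fit <- glm(y ~ x, family = binomial, data = dat[1:a, ]) # refit on everything up to day a
  p   <- predict(fit, newdata = dat[a + 1, , drop = FALSE], type = "response")
  pred[a - start + 1] <- if (p > 0.5) "1" else "0"        # predict day a + 1 only
}
mean(pred == dat$y[(start + 1):n]) # walk-forward accuracy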
However, when I change the lags, I notice that the accuracy does not drop off with increased lags…
The code has been checked extensively and the lags for each data set appear legitimate. This leads to the question: what is wrong with my model? Might there be a model better suited for our purposes? It also makes me wonder why there is such variability in the Sharpe ratio across models; is the 15th lag having the highest Sharpe purely coincidental? One theory was that the training set is quite comprehensive, so the model is good at making predictions in the near term regardless of lag. However, when I took the lags to an extreme, the accuracy still did not drop off.
Should I try a different model? Any advice or guidance would be greatly appreciated. Please see my code below (the commented-out loop is the expanding-window RandomForest application).
library(ggplot2)
library(BatchGetSymbols)
library(data.table)
library(plyr)
library(quantmod)
library(PerformanceAnalytics)
defaultW <- getOption("warn")
options(warn = -1)
library(dplyr)
library(caret)
library(ranger)
### Data Import ######
states_full <- read.csv(file = "rolling_qqq_hidden_states_full_five_back.csv")
states_full$formatted_date <- as.Date(states_full$formatted_date)
states_full <- states_full[!duplicated(states_full$formatted_date),]
tickers <- c("QQQ", "^VXN")
l.out <- BatchGetSymbols(tickers = tickers,
first.date = states_full$formatted_date[1],
last.date = states_full$formatted_date[nrow(states_full)]+1, do.cache=FALSE, be.quiet = TRUE)
price_data <- data.frame(l.out$df.tickers$price.adjusted,l.out$df.tickers$ret.adjusted.prices, l.out$df.tickers$ref.date, l.out$df.tickers$ticker)
colnames(price_data) <- c("Value", "Daily Return", "Date", "Label")
QQQ_full <- price_data[which(price_data$Label == "QQQ"),]
# Make sure dates match
mylist <- c()
for (i in 1:nrow(QQQ_full)) {
  if (sum(QQQ_full$Date[i] == states_full$formatted_date) != 1) {
    mylist <- c(mylist, i)
  }
}
if (length(mylist) > 0) {
  QQQ_full <- QQQ_full[-mylist, ]
}
mylist <- c()
for (i in 1:nrow(states_full)) {
  if (sum(states_full$formatted_date[i] == QQQ_full$Date) != 1) {
    mylist <- c(mylist, i)
  }
}
if (length(mylist) > 0) {
  states_full <- states_full[-mylist, ]
}
# split the data into 2001-2017, 2018-2022
states_01_17 <- states_full[1:which(states_full$formatted_date == "2017-12-29"),]
states_17_22 <- states_full[(nrow(states_01_17)+1):nrow(states_full),]
QQQ_01_17<- QQQ_full[1:which(QQQ_full$Date == "2017-12-29"),]
QQQ_17_22 <- QQQ_full[(which(QQQ_full$Date == "2017-12-29")+1):nrow(QQQ_full),]
# build QQQ portfolio
QQQ_portfolio <- as.data.frame(matrix(nrow = nrow(QQQ_17_22) , ncol = 3))
colnames(QQQ_portfolio) <- c("Value", "Date", "Label")
QQQ_portfolio$Value <- 100
QQQ_portfolio$Label <- "QQQ Portfolio"
QQQ_portfolio$Date <- QQQ_17_22$Date
for (m in 2:nrow(QQQ_portfolio)) {
  QQQ_portfolio$Value[m] <- QQQ_portfolio$Value[m-1] * (1 + QQQ_17_22$`Daily Return`[m])
}
# build non-lagged states portfolio
states_portfolio <- as.data.frame(matrix(nrow = nrow(QQQ_17_22) , ncol = 3))
colnames(states_portfolio) <- c("Value", "Date", "Label")
states_portfolio$Value <- 100
states_portfolio$Label <- "0 Lag RandomForest Prediction of MSDR"
states_portfolio$Date <- QQQ_17_22$Date
for (i in 2:nrow(states_portfolio)) {
  if (states_17_22$X0[i-1] == 1) {
    states_portfolio$Value[i] <- states_portfolio$Value[i-1] * (1 + QQQ_17_22$`Daily Return`[i])
  } else {
    states_portfolio$Value[i] <- states_portfolio$Value[i-1]
  }
}
# Calculate non-lagged sharpe as benchmark
#states_portfolio_returns <- data.frame(Delt(states_portfolio$Value)[-1])
#states_portfolio_returns_xts <- xts(states_portfolio_returns,states_portfolio$Date[-1])
#as.numeric(SharpeRatio.annualized(states_portfolio_returns_xts))
# bind portfolios together for plotting
port_comp <- rbind(QQQ_portfolio,states_portfolio)
# data set that will hold performance metrics
loop_output <- as.data.frame(matrix(0, nrow = 22, ncol = 8))
colnames(loop_output) <- c("Lag", "Cross Validation Accuracy 01-17","Forward Accuracy 18-22","Sharpe", "Average 1YR Rolling Sharpe",
"Median 1YR Rolling Sharpe","Min 1YR Rolling Sharpe","Max 1YR Rolling Sharpe")
# read macro data (do it each time because)
macro_full <- data.frame(read.csv("macroindicators3.csv"))
for (j in 2:ncol(macro_full)) {
  macro_full[j] <- as.numeric(nafill(macro_full[,j], type = "locf"))
}
macro_full$Date <- as.Date(macro_full[,1], "%m/%d/%Y")
macro_full <- macro_full[,-1]
macro_full <- macro_full[-1,]
# Remove NA columns, can try with more columns values later...
macro_no_na_full <- macro_full[,colSums(is.na(macro_full))==0]
# make sure dates match
mylist <- c()
for (k in 1:nrow(states_full)) {
  if (sum(states_full$formatted_date[k] == macro_full$Date) != 1) {
    mylist <- c(mylist, k)
  }
}
if (length(mylist) > 0) {
  states_full <- states_full[-mylist, ]
}
mylist <- c()
for (l in 1:nrow(macro_full)) {
  if (sum(macro_full$Date[l] == states_full$formatted_date) != 1) {
    mylist <- c(mylist, l)
  }
}
if (length(mylist) > 0) {
  macro_full <- macro_full[-mylist, ]
}
# states are a factor
states_full$X0 <- as.factor(states_full$X0)
set.seed(42)
for (i in 1:50) {
  if (i <= 8) {
    lag = i*5              # increment lag by 5 until 40
  } else if (i <= 14) {
    lag = 40 + (i-8)*10    # increment lag by 10 until 100
  } else {
    lag = 100 + (i-14)*100 # increment lag by 100 from there on
  }
  print(lag)
  # save lag
  loop_output$Lag[i] <- lag
  # create a lagged data frame
  full <- cbind(macro_no_na_full[1:(nrow(macro_no_na_full)-lag), ],
                states_full[(lag+1):nrow(states_full), ])
  full_01_17 <- full[1:(which(full$Date == "2017-12-29")-lag), ]
  full_17_22 <- full[-(1:nrow(full_01_17)), ]
  # save versions with dates to verify lags later
  full_w_dates <- full
  full_01_17_w_dates <- full_01_17
  full_17_22_w_dates <- full_17_22
  # remove dates for ml
  full <- full[, -c(17,18)]
  full_01_17 <- full_01_17[, -c(17,18)]
  full_17_22 <- full_17_22[, -c(17,18)]
  # this is just for the cross validation model
  x_01_17 <- data.frame(full_01_17[, -ncol(full_01_17)])
  y_01_17 <- full_01_17$X0
  # run cross validation model
  train = sample(nrow(full_01_17), nrow(full_01_17)*.8, replace = FALSE) # train/test
  rf.reg = ranger(y = y_01_17[train], x = x_01_17[train, ],
                  mtry = round(sqrt(ncol(x_01_17))), num.trees = 200, importance = "impurity")
  y.rf.pred = predict(rf.reg, x_01_17[-train, ])$predictions # predict with bagging
  # cross validation model accuracy
  rf.acc = mean(y.rf.pred == y_01_17[-train]) # directly compute the accuracy
  #rf.acc
  #table(y.rf.pred, y_01_17[-train])
  loop_output$`Cross Validation Accuracy 01-17`[i] <- rf.acc
  # Expanding window models - takes a while
  # prediction <- as.data.frame(matrix(0, nrow = nrow(full_17_22), ncol = 2)) # data set to store predictions
  # prediction$V1 <- as.factor(c(0,1))[1] # store predictions as a factor
  # previous = 0 # progress bar
  # for (a in nrow(full_01_17):(nrow(full)-1)) { # expanding window starts with 2001-2017, next iteration is 2001-2017 + 1 day
  #   progress = (a-nrow(full_01_17))/(nrow(full)-1-nrow(full_01_17)) # progress bar
  #   progress = round_any(progress, 0.01) # progress bar
  #   if (progress != previous) { # progress bar
  #     print(progress) # progress bar
  #   }
  #   previous = progress # progress bar
  #   rf.reg = ranger(full$X0[1:a]~., data = full[1:a,], mtry = round(sqrt(ncol(x_01_17))), num.tree = 800, importance = 'impurity') # ranger model
  #   y.rf.pred = predict(rf.reg, full[a+1,])$prediction # make the prediction on the a+1 observation
  #   prediction$V1[a-nrow(full_01_17)+1] <- y.rf.pred # save the prediction
  #   prediction$V2 <- as.Date(prediction$V2) # save the date so we can verify lags
  #   prediction$V2[a-nrow(full_01_17)+1] <- as.Date(full_w_dates$formatted_date[a+1])
  #   if (a == nrow(full)-1) message("Done!") # gives a status update
  # }
  #
  # write.csv(prediction, paste(lag, "lagprediction.csv", sep = "")) # save the prediction so we don't have to rerun
  ####
  ### to read in results from already completed backtests
  prediction <- read.csv(paste(lag, "lagprediction.csv", sep = ""))[2]
  ###
  full_17_22_w_pred <- full_17_22_w_dates
  full_17_22_w_pred$prediction <- prediction$V1
  # evaluate the accuracy
  rf.acc = mean(full_17_22_w_pred$prediction == full_17_22_w_pred$X0)
  loop_output$`Forward Accuracy 18-22`[i] <- rf.acc
  # build a portfolio out of the predicted states
  portfolio <- as.data.frame(matrix(0, nrow = nrow(full_17_22), ncol = 3))
  colnames(portfolio) <- c("Value", "Date", "Label")
  portfolio$Date <- full_17_22_w_pred$formatted_date
  portfolio$Value <- 100
  portfolio$Label <- paste(lag, "Lag RandomForest Prediction of MSDR", sep = " ")
  for (b in 2:nrow(portfolio)) {
    if (full_17_22_w_pred$prediction[b-1] == 1) {
      portfolio$Value[b] <- portfolio$Value[b-1] * (1 + QQQ_17_22$`Daily Return`[b])
    } else {
      portfolio$Value[b] <- portfolio$Value[b-1]
    }
  }
  # save it to the data set containing the portfolios
  port_comp <- rbind(port_comp, portfolio)
  # calculate Sharpe
  portfolio_returns <- data.frame(Delt(portfolio$Value)[-1])
  portfolio_returns_xts <- xts(portfolio_returns, portfolio$Date[-1])
  loop_output$Sharpe[i] <- as.numeric(SharpeRatio.annualized(portfolio_returns_xts))
  # rolling Sharpe
  mylist <- c()
  for (z in 1:(nrow(portfolio_returns)-252)) {
    portfolio_xts_rolling <- portfolio_returns_xts[z:(z+252)]
    mylist <- c(mylist, as.numeric(SharpeRatio.annualized(portfolio_xts_rolling)))
  }
  loop_output$`Average 1YR Rolling Sharpe`[i] <- mean(mylist)
  loop_output$`Median 1YR Rolling Sharpe`[i] <- median(mylist)
  loop_output$`Min 1YR Rolling Sharpe`[i] <- min(mylist)
  loop_output$`Max 1YR Rolling Sharpe`[i] <- max(mylist)
}
options(warn = defaultW)
# plot output
ggplot(port_comp, aes(x = Date, y = Value, color = Label, group = Label)) + geom_line()
#loop_output_v1 <- rbind(loop_output_v1, loop_output)
loop_output_v1

How to get a random observation point at a specific time over multiple trials in R?

I am working on spike trains, and my code to generate a spike train for 20 trials is written below (the figure in the original post showed 5 representative trials):
fr = 100
dt = 1/1000   # dt in milliseconds
duration = 2  # duration in s
nBins = 2000  # number of bins in the spike train
nTrials = 20  # number of simulations
MyPoissonSpikeTrain = function(p, fr = 100) {
  p = runif(nBins)
  q = ifelse(p < fr*dt, 1, 0)
  return(q)
}
set.seed(1)
SpikeMat <- t(replicate(nTrials, MyPoissonSpikeTrain()))
plot(x = -1, y = -1, xlab = "time (s)", ylab = "Trial",
     main = "Spike trains",
     ylim = c(0.5, nTrials + 1), xlim = c(0, duration))
for (i in 1:nTrials) {
  clip(x1 = 0, x2 = duration, y1 = (i-0.2), y2 = (i+0.4))
  abline(h = i, lwd = 1/4)
  abline(v = dt*which(SpikeMat[i, ] == 1))
}
Each trial has spikes occurring at random time points. What I am now working towards is picking one random sample time point that applies to all 20 trials, and getting the vector of the lengths of the intervals this point falls into, one per trial. The code to get the time vector of the points where the spikes occur is:
ISI <- function(i) {
  spike_times <- dt * which(SpikeMat[i, ] == 1)
  ISI1vec <- diff(spike_times)
  return(ISI1vec)
}
Then you call ISI(i) for whichever trial you wish to see the inter-spike interval vector for. (The original post included a figure illustrating the goal.)
I want to get a vector that has the lengths of the intervals this point falls into, for each trial. I want to figure out its distribution as well, but that's for later. Can anybody help me figure out how to code my way to this? Any help is appreciated, even if it's just a pointer on how to start or where to look.
Your data
set.seed(1)
SpikeMat <- t(replicate(nTrials, MyPoissonSpikeTrain()))
I suggest transforming your sparse matrix data into a list of indices where spikes occur:
L <- lapply(seq_len(nrow(SpikeMat)), function(i) {
  idx <- which(SpikeMat[i, ] == 1)
  setNames(idx, seq_along(idx))
})
Grab random timepoint
set.seed(1)
RT <- round(runif(1) * ncol(SpikeMat))
# 531
Result
distances contains the distances to the two nearest spikes: each element of the list is a named vector where the values are the distances (to RT) and the names are the positions of those spikes in the trial's spike vector. nearest_columns shows the original time point (column number) of each spike in SpikeMat.
bookend_values <- function(vec) {
  lower_val <- head(sort(vec[sign(vec) == 1]), 1)
  upper_val <- head(sort(abs(vec[sign(vec) == -1])), 1)
  return(c(lower_val, upper_val))
}
distances <- lapply(L, function(i) bookend_values(RT - i))
nearest_columns <- lapply(seq_along(distances), function(i) L[[i]][names(distances[[i]])])
Note that the inter-spike interval of the two nearest spikes that bookend RT can be obtained with
sapply(distances, sum)
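To sanity-check one trial, the two bookend distances should sum to the length of the interval RT falls into (a quick sketch, assuming RT falls strictly between two spikes in trial 1):
d1 <- distances[[1]]
d1            # distance back to the previous spike, distance forward to the next
sum(d1)       # length, in bins, of the inter-spike interval containing RT
sum(d1) * dt  # the same interval expressed in seconds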

Optimizing a raster::calc function - function 1 vs 2

I am working on calculating a new (output) raster based on two input rasters and a 'stratum' raster. The stratum raster values (1 to 4) refer to the rows in the bias and weight data frames. Stratum value 4 was used to fill any NA in the stratum raster, otherwise the function would crash. The following input is required.
# load library
library(raster)
# reproducing the bias and weight data.frames
bias <- data.frame(
ras_1 = c(56,-7,-30,0),
ras_2 = c(29,18,-52,0),
ras_3 = c(44,4,-15,0)
)
rownames(bias) <- c("Strat 1","Strat 2","Strat 3","Strat 4")
weight <- data.frame(
ras_1 = c(0.56,0.66,0.23,0.33),
ras_2 = c(0.03,0.18,0.5,0.33),
ras_3 = c(0.41,0.16,0.22,0.34)
)
rownames(weight) <- c("Strat 1","Strat 2","Strat 3","Strat 4")
The following function (fusion) adds a 'bias' value to the input rasters. After the bias has been added, the two corrected input raster cell values are multiplied by a weight value, depending on which stratum they belong to.
The two weighted values are then summed and returned using calc.
## Create raster data for input
# create 2 rasters
r1 <- raster(ncol = 10, nrow = 10)
r2 <- raster(ncol = 10, nrow = 10)
r1[] <- sample(seq(from = 1, to = 500, by = 1), size = 100, replace = TRUE)
r2[] <- sample(seq(from = 1, to = 500, by = 1), size = 100, replace = TRUE)
r2[1:2] <- NA # include NA in input maps for example purposes
# create strata raster (4 strata)
r3 <- raster(ncol = 10, nrow = 10)
r3[] <- sample(seq(from = 1, to = 4, by = 1), size = 100, replace = TRUE)
Strata.n <- 4 # number of strata values in this example
fusion <- function(x) {
  result <- matrix(NA, dim(x)[1], 1)
  for (n in 1:Strata.n) {
    ok <- !is.na(x[,3]) & x[,3] == n
    a <- x[ok,1] + bias[n,1] # add bias to first input raster value
    b <- x[ok,2] + bias[n,2] # add bias to second input raster value
    result[ok] <- a * weight[n,1] + b * weight[n,2] # multiply values by weight
  }
  return(result)
}
s <- stack(r1, r2, r3)
Fused.map <- calc(s, fun = fusion, progress = 'text')
The problems with the above function are that:
It is only suited for 2 rasters
If one raster has NA, then the result will be NA for that cell
is.na(Fused.map@data@values) # check for NA in the fused map
What I would like to have is:
A function that takes any number of input rasters
A function that can work with NA values (ignores NA values in the rasters)
A function that re-adjusts the 'weight' if a raster has an NA value, so that the remaining weight values add up to 1
EDIT
The following function does what I need, but it is significantly slower than the function above on large rasters: fusion does it in 10 seconds, while the fusion2 function below needs 8 hours on large rasters...
fusion2 <- function(x) {
  m <- matrix(x, nrow = 1, ncol = 3)           # create matrix per stack of cells
  n <- m[,3]                                   # get the stratum
  g <- m[1:(Strata.n-1)] + as.matrix(bias[n,]) # add bias to raster values
  g[g < 0] <- 0                                # set values below 0 to 0
  w <- weight[n, 1:(Strata.n-1)]               # get correct strata weight values
  w[is.na(g)] <- NA                            # set weight to NA if (g) raster values are NA
  p <- sum(w, na.rm = T)                       # calculate sum of weight values
  pp <- w/p                                    # divide weight values by sum so the proportions == 1
  pp <- as.numeric(pp)
  result <- as.integer(round(sum(pp*g, na.rm = T))) # return raster value
  return(result)
}
Fused.map <- calc(s, fun = fusion2, progress = 'text')
Is there any way to optimize the fusion2 function so that it approaches the speed of fusion?
> sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Thank you for your time!
There seem to be a lot of unnecessary format conversions going on, and using the simplest data structures available is fastest. The value that calc passes to your function is a numeric vector, so you can use numeric vectors everywhere. Also, rounding and then casting to integer is redundant.
fusion3 <- function(x) {
  n <- x[3]                                     # get the stratum
  g <- x[1:(Strata.n-1)] + as.numeric(bias[n,]) # add bias to raster values
  g[g < 0] <- 0                                 # set values below 0 to 0
  w <- as.numeric(weight[n, 1:(Strata.n-1)])    # get correct strata weight values
  w[is.na(g)] <- NA                             # set weight to NA if (g) raster values are NA
  p <- sum(w, na.rm = T)                        # calculate sum of weight values
  pp <- w/p                                     # divide weight values by sum so the proportions == 1
  result <- as.integer(sum(pp*g, na.rm = T) + 0.5) # return raster value
  return(result)
}
On a 100x100 raster, your original functions take:
system.time(Fused.map <- calc(s, fun = fusion, progress = 'text'))
user system elapsed
0.015 0.000 0.015
system.time(Fused.map <- calc(s, fun = fusion2, progress = 'text'))
user system elapsed
8.270 0.078 8.312
The modified function is already about four times faster:
system.time(Fused.map3 <- calc(s, fun = fusion3, progress = 'text'))
user system elapsed
1.970 0.026 1.987
Next, precompute matrices from the data frames so you don't need to do that for each pixel:
bias_matrix = as.matrix(bias)
weight_matrix = as.matrix(weight)
fusion3 <- function(x) {
  n <- x[3]                               # get the stratum
  g <- x[1:(Strata.n-1)] + bias_matrix[n,] # add bias to raster values
  g[g < 0] <- 0                           # set values below 0 to 0
  w <- weight_matrix[n, 1:(Strata.n-1)]   # get correct strata weight values
  w[is.na(g)] <- NA                       # set weight to NA if (g) raster values are NA
  p <- sum(w, na.rm = T)                  # calculate sum of weight values
  pp <- w/p                               # divide weight values by sum so the proportions == 1
  result <- as.integer(sum(pp*g, na.rm = T) + 0.5) # return raster value
  return(result)
}
We get:
system.time(Fused.map3 <- calc(s, fun = fusion3, progress = 'text'))
user system elapsed
0.312 0.008 0.318
And finally, also precompute 1:(Strata.n-1):
bias_matrix = as.matrix(bias)
weight_matrix = as.matrix(weight)
Strata.minus1 = 1:(Strata.n-1)
fusion3 <- function(x) {
  n <- x[3]                              # get the stratum
  g <- x[Strata.minus1] + bias_matrix[n,] # add bias to raster values
  g[g < 0] <- 0                          # set values below 0 to 0
  w <- weight_matrix[n, Strata.minus1]   # get correct strata weight values
  w[is.na(g)] <- NA                      # set weight to NA if (g) raster values are NA
  p <- sum(w, na.rm = T)                 # calculate sum of weight values
  pp <- w/p                              # divide weight values by sum so the proportions == 1
  result <- as.integer(sum(pp*g, na.rm = T) + 0.5) # return raster value
  return(result)
}
We get:
system.time(Fused.map3 <- calc(s, fun = fusion3, progress = 'text'))
user system elapsed
0.252 0.011 0.262
That's not quite 0.015 yet, but you also have to take into account that your original function does not output integers, does not set values below 0 to 0, does not make the proportions sum to 1, and, as you mentioned, does not deal with NAs.
Mind you, this function still works with only two rasters, because you hardcode the stratum as layer 3. You should instead use raster::overlay with two parameters, the stratum raster and the layers themselves (or use calc with the stratum raster as layer 1, but that's not what calc is designed for).
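For the generalization the question originally asked for (any number of input rasters, NA-aware, weights renormalized to 1), here is a rough sketch along the same lines, reusing bias_matrix and weight_matrix from above and putting the stratum raster first in the stack; fusion_n and s2 are names made up for illustration:
# stratum is layer 1; any number of input layers follow, matched by position
# to the columns of bias_matrix and weight_matrix
fusion_n <- function(x) {
  n <- x[1]                     # stratum of this cell
  vals <- x[-1]                 # all input layer values
  k <- seq_along(vals)
  g <- vals + bias_matrix[n, k] # bias-correct every input at once
  g[g < 0] <- 0
  w <- weight_matrix[n, k]
  w[is.na(g)] <- NA             # drop weights where the input is NA
  w <- w / sum(w, na.rm = TRUE) # remaining weights sum to 1 again
  as.integer(sum(w * g, na.rm = TRUE) + 0.5)
}
s2 <- stack(r3, r1, r2)         # stratum first, then the inputs
Fused.map_n <- calc(s2, fun = fusion_n, progress = 'text')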

R: simulating 2-level model

I am trying to simulate unequal sample sizes in a multilevel model. I have four groups, with sample sizes of 100, 200, 300, and 400, respectively, so the total sample size is 1000. The variables w, u0, and u1 are at level 2; x and r0 are at level 1. y is the outcome.
nSubWithinGroup <- c(100, 200, 300, 400) ### the sample size in each group
nGroup <- 4 ## 4 groups
gamma00 <- 1
gamma01 <- 1 ## b0 = gamma00+gamma01*w+u0
gamma10 <- 1 ## b1 = gamma10+gamma11*w+u1
gamma11 <- 1
dataLevel1 <- mat.or.vec(sum(nSubWithinGroup), 4)
colnames(dataLevel1) <- c("Group", "X", "W", "Y")
rowIndex <- 0
for (group in 1:nGroup) {
  u0 <- rnorm(1, mean = 0, sd = 1)
  u1 <- rnorm(1, mean = 0, sd = 1)
  w <- rnorm(1, mean = 0, sd = 1)
  for (i in 1:length(nSubWithinGroup)) {
    for (j in 1:nSubWithinGroup[i]) {
      r0 <- rnorm(1, mean = 0, sd = 1)
      x <- rnorm(1, mean = 0, sd = 1)
      y <- (gamma00 + gamma01*w + u0) + (gamma10 + gamma11*w + u1)*x + r0
      rowIndex <- rowIndex + 1
      dataLevel1[rowIndex, ] <- c(group, x, w, y)
    }
  }
}
I ran the code, and every value in the "Group" column is 1; there are no 2s, 3s, or 4s. It also throws an error:
Error in `[<-`(`*tmp*`, rowIndex, , value = c(2, -1.94476463667851, -0.153516782293473, :
  subscript out of bounds
Your original issue was a bit hard to find with all the for-loops, but you were looping twice on your grouping level (once over 1:nGroup and then again over 1:length(nSubWithinGroup)). This led to more combinations than you had allowed for in your matrix, and thus your error. (If you want to check, run your loop without assigning to dataLevel1 and see what value rowIndex has at the end.)
However, generating data like this in R can be notoriously slow, and every function you use with n = 1 can just as easily generate nTotal numbers. I have rewritten your code to something that's (hopefully) more readable, but also more vectorized.
# set seed; you can never reproduce your result if you don't do this
set.seed(289457)

# set constants
gamma00 <- 1
gamma01 <- 1 ## b0 = gamma00+gamma01*w+u0
gamma10 <- 1 ## b1 = gamma10+gamma11*w+u1
gamma11 <- 1

# set size parameters
nSubWithinGroup <- c(100, 200, 300, 400) ### the sample size in each group
nGroup <- 4
nTotal <- sum(nSubWithinGroup)

# simulate group-level data
level2_data <- data.frame(group = 1:nGroup,
                          size = nSubWithinGroup, # not really necessary here, but I like to have everything documented/accessible
                          u0 = rnorm(nGroup, mean = 0, sd = 1),
                          u1 = rnorm(nGroup, mean = 0, sd = 1),
                          w = rnorm(nGroup, mean = 0, sd = 1)
)

# simulate individual-level data (in the example code, x and r0 were generated the same way for each individual)
level1_data <- data.frame(id = 1:nTotal,
                          group = rep(1:nGroup, nSubWithinGroup),
                          r0 = rnorm(nTotal, mean = 0, sd = 1),
                          x = rnorm(nTotal, mean = 0, sd = 1)
)

# several possibilities here: you can merge the two data frames together, or reference level2_data when calculating the outcome
# merging generates more data, but is also readable
combined_data <- merge(level1_data, level2_data, by = "group", all.x = TRUE)

# calculate the outcome. This can be shortened, for instance by calculating some linear parts before
# merging, but I wanted to stay as close to the original code as possible.
combined_data$y <- (gamma00 + gamma01*combined_data$w + combined_data$u0) +
  (gamma10 + gamma11*combined_data$w + combined_data$u1)*combined_data$x + combined_data$r0
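A quick check that the group sizes came out as intended:
table(combined_data$group)
#   1   2   3   4
# 100 200 300 400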

How to vectorize a function in R

I need some help vectorizing the following code because I believe it will become more efficient. However, I do not know where to begin... I created a loop that goes through z. z has 3 columns and 112847 rows, which might be one reason it takes a long time. The 3 columns contain numbers that are used in the MACD() function.
library(quantmod)
library(TTR)
# get stock data
getSymbols('LUNA')
# choose the Adjusted Close of a Symbol
stock <- Ad(LUNA)
# create matrix for returns only
y <- stock
# create a "MATRIX" by choosing the Adjusted Close
Nudata3 <- stock
# Sharpe Ratio matrix
SR1 <- matrix(NA, nrow = 1)
# I want to create a table with all possible combinations from the ranges below
i = c(2:50)
k = c(4:50)
j = c(2:50)
# stores possible combinations into z
z <- expand.grid(i, k, j)
colnames(z) <- c("one", "two", "three")
n = 1
stretches <- length(z[,1])
while (n < stretches) {
  # go through all the values in "z"
  Nuw <- MACD((stock), nFast = z[n,1], nSlow = z[n,2], nSig = z[n,3], maType = "EMA")
  colnames(Nuw) <- c("MACD", "Signal") # change the col names to create signals
  x <- na.omit(merge((stock), Nuw))
  x$sig <- NA
  # create trading signals
  sig1 <- Lag(ifelse((x$MACD <= x$Signal), -1, 0)) # short when MACD < SIGNAL
  sig2 <- Lag(ifelse((x$MACD >= x$Signal), 1, 0))  # long when MACD > SIGNAL
  x$sig <- sig1 + sig2
  # calculate returns
  ret <- na.omit(ROC(Ad(x)) * x$sig)
  colnames(ret) <- c(paste(z[n,1], z[n,2], z[n,3], sep = ","))
  x <- merge(ret, x)
  y <- merge(y, ret) # this creates a MATRIX with RETURNS ONLY
  Nudata3 <- merge(Nudata3, x)
  ANNUAL <- (mean(ret)/sd(ret)) * sqrt(252) # creates a ratio
  Shrat <- ANNUAL          # stores the ratio into Shrat
  SR1 <- cbind(SR1, Shrat) # binds all ratios as it loops
  n <- n + 1
}
I would like to know how to vectorize the MACD() step to speed up the process, since stretches is approximately 112847 and it takes my computer quite some time to get through the loop.
First and foremost, a case-specific optimization: remove the cases where nFast > nSlow, as they don't make sense technically.
Secondly, you are creating objects and copying them over and over again. This is very expensive.
Thirdly, you can code this better by creating a matrix of signals in one loop and doing the rest of the operations in a vectorized manner.
I would code what you are doing something like this.
Please read the help pages of mapply, do.call, merge, and sapply if you don't understand them.
require(quantmod)
getSymbols("LUNA")
# choose the Adjusted Close of a Symbol
stock <- Ad(LUNA)
# I want to create a table with all possible combinations from the ranges below
i = c(2:50)
k = c(4:50)
j = c(2:50)
# stores possible combinations into z
z <- expand.grid(i, k, j)
IMO, this is where your first optimization should be: remove the cases where i >= k.
z <- z[z[,1] < z[,2], ]
This reduces the number of cases from 112847 to 57575.
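Those two counts are easy to verify (a quick sketch):
nrow(expand.grid(2:50, 4:50, 2:50))        # 112847 combinations before filtering
sum(outer(2:50, 4:50, `<`)) * length(2:50) # 57575 remain once nFast < nSlow is required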
# Calculate only once. No need to calculate this in every iteration.
stockret <- ROC(stock)

getStratRet <- function(nFast, nSlow, nSig, stock, stockret) {
  x <- MACD((stock), nFast = nFast, nSlow = nSlow, nSig = nSig, maType = "EMA")
  x <- na.omit(x)
  sig <- Lag(ifelse((x$macd <= x$signal), -1, 0)) + Lag(ifelse((x$macd >= x$signal), 1, 0))
  return(na.omit(stockret * sig))
}

RETURNSLIST <- do.call(merge, mapply(FUN = getStratRet,
                                     nFast = z[,1], nSlow = z[,2], nSig = z[,3],
                                     MoreArgs = list(stock = stock, stockret = stockret),
                                     SIMPLIFY = TRUE))

getAnnualSharpe <- function(ret) {
  ret <- na.omit(ret)
  return((mean(ret)/sd(ret)) * sqrt(252))
}
SHARPELIST <- sapply(RETURNSLIST, FUN = getAnnualSharpe)
Results will be as below. Which column belongs to which combo of i, j, k is trivial.
head(RETURNSLIST[, 1:3])
## LUNA.Adjusted LUNA.Adjusted.1 LUNA.Adjusted.2
## 2007-01-10 0.012739026 -0.012739026 0
## 2007-01-11 -0.051959739 0.051959739 0
## 2007-01-12 -0.007968170 -0.007968170 0
## 2007-01-16 -0.007905180 -0.007905180 0
## 2007-01-17 -0.005235614 -0.005235614 0
## 2007-01-18 0.028315920 -0.028315920 0
SHARPELIST
## LUNA.Adjusted LUNA.Adjusted.1 LUNA.Adjusted.2 LUNA.Adjusted.3 LUNA.Adjusted.4 LUNA.Adjusted.5 LUNA.Adjusted.6
## 0.04939150 -0.07428392 NaN 0.02626382 -0.06789803 -0.22584987 -0.07305477
## LUNA.Adjusted.7 LUNA.Adjusted.8 LUNA.Adjusted.9
## -0.05831643 -0.08864845 -0.08221986
system.time({
  RETURNSLIST <- do.call(merge, mapply(FUN = getStratRet,
                                       nFast = z[1:100,1], nSlow = z[1:100,2], nSig = z[1:100,3],
                                       MoreArgs = list(stock = stock, stockret = stockret),
                                       SIMPLIFY = TRUE))
  SHARPELIST <- sapply(RETURNSLIST, FUN = getAnnualSharpe)
})
##    user  system elapsed
##    2.28    0.00    2.29
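If you want the column-to-combination mapping to be explicit, a small optional tweak (this assumes RETURNSLIST was built from every row of the filtered z, so the columns line up one-to-one with rows of z):
colnames(RETURNSLIST) <- paste(z[,1], z[,2], z[,3], sep = ",") # label each column with its (nFast, nSlow, nSig)
names(SHARPELIST) <- colnames(RETURNSLIST)
head(sort(SHARPELIST, decreasing = TRUE)) # best parameter combos first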
