User defined function input to loop every row of data frame - r

I'm trying to create my very first project in R but have hit a roadblock.
I have a data frame such as the one below, where every row represents the data of one financial option.
type <- c("C", "C")
marketV <- c(1.1166, 1.911)
S <- c(20, 60)
K <- c(20, 56)
T <- c(0.333, 0.5)
df <- data.frame(type, marketV, S, K, T)
I made a user-defined function that takes this data frame as input, and it works great when the data frame is one row long. However, I'm not sure how to have my function iterate through all the data frame rows and produce a result for each of them.
I'm new to R, so I'm unsure whether I should wrap a 'for' loop around it, play around with lapply, or whether there's a simple syntax answer. I simply want the function to take the df as input but repeat its calculation for n rows and produce n results. Thank you for the help in advance.
My current function code for a df with 1 row is below as reference:

This is a corrected version of your program:
df <- data.frame(type=c("C", "C"), marketV=c(1.1166, 1.911), S=c(20, 60), K=c(20, 56), T=c(0.333, 0.5))

IV <- function(df) {
  # check if df has more than 1 row:
  if (nrow(df) > 1) { message("!! nrow(df)>1 !!"); return(NA) }
  # initialize variables
  r <- 0
  sigma <- 0.3
  sigma_down <- 0.001
  sigma_up <- 1
  count <- 0
  type <- df$type; marketV <- df$marketV; S <- df$S; K <- df$K; T <- df$T
  d1 <- (log(S/K) + (sigma^2/2)*T)/(sigma*sqrt(T))
  d2 <- (log(S/K) - (sigma^2/2)*T)/(sigma*sqrt(T))
  if (type == "C") {
    V <- exp(-r*T)*(S*pnorm(d1) - K*pnorm(d2))
  } else {
    V <- exp(-r*T)*(K*pnorm(-d2) - S*pnorm(-d1))
  }
  difference <- V - marketV
  # root finding of sigma by the bisection method
  while (abs(difference) > 0.001 && count < 1000) {
    if (difference < 0) {
      sigma_down <- sigma
      sigma <- (sigma_up + sigma)/2
    } else {
      sigma_up <- sigma
      sigma <- (sigma_down + sigma)/2
    }
    d1 <- (log(S/K) + (sigma^2/2)*T)/(sigma*sqrt(T))
    d2 <- d1 - sigma*sqrt(T)
    if (type == "C") {
      V <- exp(-r*T)*(S*pnorm(d1) - K*pnorm(d2))
    } else {
      V <- exp(-r*T)*(K*pnorm(-d2) - S*pnorm(-d1))
    }
    difference <- V - marketV
    count <- count + 1
  }
  if (count == 1000) {
    return(NA)  # if no sigma satisfying the Black76 price can be found
  } else {
    return(sigma)
  }
}

sapply(split(df, seq(nrow(df))), IV)
The main thing is to run row by row through the dataframe. This is done by
sapply(split(df, seq(nrow(df))), IV)
Your original function had many errors; the biggest was the way S, K, and the other inputs were accessed. You may have thought you were taking the values from the data frame df, but in fact you were taking them from the workspace! I corrected this by redefining:
type <- df$type; marketV <- df$marketV; S <- df$S; K <- df$K; T <- df$T
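To see why this matters, here is a tiny hypothetical illustration of the scoping pitfall (the function f is made up for this example): a bare S inside a function body is looked up in the global environment (the workspace), not in df.
S <- c(20, 60)            # the global vector from the workspace
f <- function(df) S * 2   # no df$S, so this reads the global S
f(df[1, ])                # returns 40 120, i.e. both values, not just row 1's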
I inserted a test for the number of rows in df, so you will get:
> IV(df)
!! nrow(df)>1 !!
[1] NA
Here is a cleaned-up version of your program:
df <- data.frame(type=c("C", "C"), marketV=c(1.1166, 1.911), S=c(20, 60), K=c(20, 56), T=c(0.333, 0.5))
IV2 <- function(type, marketV, S, K, T) {
  r <- 0; sigma <- 0.3
  sigma_down <- 0.001; sigma_up <- 1
  count <- 0
  if (type == "C") {
    f.sig <- function(sigma) {
      d1 <- (log(S/K) + (sigma^2/2)*T)/(sigma*sqrt(T))
      d2 <- d1 - sigma*sqrt(T)
      exp(-r*T)*(S*pnorm(d1) - K*pnorm(d2)) - marketV
    }
  } else {
    f.sig <- function(sigma) {
      d1 <- (log(S/K) + (sigma^2/2)*T)/(sigma*sqrt(T))
      d2 <- d1 - sigma*sqrt(T)
      exp(-r*T)*(K*pnorm(-d2) - S*pnorm(-d1)) - marketV
    }
  }
  ifelse(f.sig(sigma_down)*f.sig(sigma_up) < 0, uniroot(f.sig, c(sigma_down, sigma_up))$root, NA)  # sigma
}

sapply(split(df, seq(nrow(df))), do.call, what="IV2")
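As a further sketch (my own variation, not part of the original answer): because IV2 takes the row fields as separate arguments, you can also vectorise over the columns directly with mapply and skip the split() step.
# same result as the sapply/split line above, assuming the column
# names of df match IV2's argument names as they do here
mapply(IV2, df$type, df$marketV, df$S, df$K, df$T)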

Related

I am trying to create a script that runs consecutive prop.test() calls for consecutive values using a for loop.

Script:
a <- c(10, 20)
b <- c(100, 200)
c <- c(50, 1000)
d <- c(3000, 4300)

for (i in c(a, b, c, d)) {
  print(prop.test(a, b))
}
So essentially I want every 2 objects to be paired up. I hope I am somewhat clear.
You can put the vectors in a list and use a for loop as follows -
list_data <- list(a, b, c, d)
result <- vector('list', length(list_data)/2)

for (i in seq_along(result)) {
  n <- (i - 1) * 2 + 1
  result[[i]] <- prop.test(list_data[[n]], list_data[[n + 1]])
  print(result[[i]])
}
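If you prefer to avoid the index arithmetic, here is an equivalent sketch (my own variation, not part of the answer above) that uses Map to pair the odd- and even-positioned vectors:
odd  <- list_data[c(TRUE, FALSE)]   # elements 1, 3, ... (here: a, c)
even <- list_data[c(FALSE, TRUE)]   # elements 2, 4, ... (here: b, d)
result <- Map(prop.test, odd, even) # prop.test(a, b), then prop.test(c, d)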

Speeding up linear model fitting on complete pairwise observations in large sparse matrix in R

I have a numeric data.frame df with 134946 rows x 1938 columns.
99.82% of the data are NA.
For each pair of (distinct) columns "P1" and "P2", I need to find which rows have non-NA values for both and then do some operations on those rows (linear model).
I wrote a script that does this, but it seems quite slow.
This post seems to discuss a related task, but I can't immediately see if or how it can be adapted to my case.
Borrowing the example from that post:
set.seed(54321)
nr = 1000;
nc = 900;
dat = matrix(runif(nr*nc), nrow=nr)
rownames(dat) = paste(1:nr)
colnames(dat) = paste("time", 1:nc)
dat[sample(nr*nc, nr*nc*0.9)] = NA
df <- as.data.frame(dat)
df_ps <- names(df)
N_ps <- length(df_ps)
My script is:
tic = proc.time()
out <- do.call(rbind, sapply(1:(N_ps-1), function(i) {
  if (i/10 == floor(i/10)) {
    cat("\ni = ", i, "\n")
    toc = proc.time()
    show(toc - tic)
  }
  do.call(rbind, sapply((i+1):N_ps, function(j) {
    w <- which(complete.cases(df[,i], df[,j]))
    N <- length(w)
    if (N >= 5) {
      xw <- df[w,i]
      yw <- df[w,j]
      if ((diff(range(xw)) != 0) & (diff(range(yw)) != 0)) {
        s <- summary(lm(yw ~ xw))
        o <- c(i, j, N, s$adj.r.squared,
               s$coefficients[2], s$coefficients[4], s$coefficients[8],
               s$coefficients[1], s$coefficients[3], s$coefficients[7])
      } else {
        o <- c(i, j, N, rep(NA,7))
      }
    } else {o <- NULL}
    return(o)
  }, simplify=F))
}, simplify=F))
toc = proc.time()
show(toc - tic)
This takes about 10 minutes on my machine.
You can imagine what happens when I need to handle a much larger (although more sparse) data matrix. I never managed to finish the calculation.
Question: do you think this could be done more efficiently?
The thing is, I don't know which operations take the most time (subsetting df, in which case I would avoid repeating it? Appending matrix data, in which case I would fill a flat vector and convert it to a matrix at the end? ...).
Thanks!
EDIT following up from minem's post
As shown by minem, the speed of this calculation strongly depended on the way linear regression parameters were calculated. Therefore changing that part was the single most important thing to do.
My own further trials showed that: 1) it was essential to use sapply in combination with do.call(rbind, ...), rather than any flat vector, to store the data (I am still not sure why; I might make a separate post about this); 2) on the original matrix I am working on, which is much sparser and has a much larger nrow/ncol ratio than the one in this example, using the information on the x vector available at the start of each i iteration to shrink the y vector at the start of each j iteration increased the speed by several orders of magnitude, even compared with minem's original script, which was already much faster than mine above.
I suppose the advantage comes from filtering out many rows a priori, thus avoiding costly xna & yna operations on very long vectors.
The modified script is the following:
set.seed(54321)
nr = 1000;
nc = 900;
dat = matrix(runif(nr*nc), nrow = nr)
rownames(dat) = paste(1:nr)
colnames(dat) = paste("time", 1:nc)
dat[sample(nr*nc, nr*nc*0.90)] = NA
df <- as.data.frame(dat)
df_ps <- names(df)
N_ps <- length(df_ps)
tic = proc.time()
naIds <- lapply(df, function(x) !is.na(x))
dl <- as.list(df)
rl <- sapply(1:(N_ps - 1), function(i) {
  if ((i-1)/10 == floor((i-1)/10)) {
    cat("\ni = ", i, "\n")
    toc = proc.time()
    show(toc - tic)
  }
  x <- dl[[i]]
  xna <- which(naIds[[i]])
  rl2 <- sapply((i + 1):N_ps, function(j) {
    y <- dl[[j]][xna]
    yna <- which(naIds[[j]][xna])
    w <- xna[yna]
    N <- length(w)
    if (N >= 5) {
      xw <- x[w]
      yw <- y[yna]
      if ((min(xw) != max(xw)) && (min(yw) != max(yw))) {
        # extracts from the lm/lm.fit/summary.lm functions
        X <- cbind(1L, xw)
        m <- .lm.fit(X, yw)
        # calculate adj.r.squared
        fitted <- yw - m$residuals
        rss <- sum(m$residuals^2)
        mss <- sum((fitted - mean(fitted))^2)
        n <- length(m$residuals)
        rdf <- n - m$rank
        # rdf <- df.residual
        r.squared <- mss/(mss + rss)
        adj.r.squared <- 1 - (1 - r.squared) * ((n - 1L)/rdf)
        # calculate standard errors & p-values
        p1 <- 1L:m$rank
        Qr <- m$qr
        R <- chol2inv(Qr[p1, p1, drop = FALSE])
        resvar <- rss/rdf
        se <- sqrt(diag(R) * resvar)
        est <- m$coefficients[m$pivot[p1]]
        tval <- est/se
        pvals <- 2 * pt(abs(tval), rdf, lower.tail = FALSE)
        res <- c(m$coefficients[2], se[2], pvals[2],
                 m$coefficients[1], se[1], pvals[1])
        o <- c(i, j, N, adj.r.squared, res)
      } else {
        o <- c(i, j, N, rep(NA,7))
      }
    } else {o <- NULL}
    return(o)
  }, simplify = F)
  do.call(rbind, rl2)
}, simplify = F)
out2 <- do.call(rbind, rl)
toc = proc.time()
show(toc - tic)
E.g. try with nr=100000; nc=100.
I should probably mention that I tried using indices, i.e.:
naIds <- lapply(df, function(x) which(!is.na(x)))
and then obviously generating w by intersection:
w <- intersect(xna,yna)
N <- length(w)
This however is slower than the above.
The largest bottleneck is the lm function, because it performs a lot of checks and additional calculations that you do not necessarily need. So I extracted only the needed parts.
I got this to run in +/- 18 seconds.
set.seed(54321)
nr = 1000;
nc = 900;
dat = matrix(runif(nr*nc), nrow = nr)
rownames(dat) = paste(1:nr)
colnames(dat) = paste("time", 1:nc)
dat[sample(nr*nc, nr*nc*0.9)] = NA
df <- as.data.frame(dat)
df_ps <- names(df)
N_ps <- length(df_ps)
tic = proc.time()
naIds <- lapply(df, function(x) !is.na(x)) # computed once, outside the loops
dl <- as.list(df) # subsetting list elements is faster than subsetting data.frame columns
rl <- sapply(1:(N_ps - 1), function(i) {
  x <- dl[[i]]
  xna <- naIds[[i]] # logical vector marking the non-NA entries of column i
  rl2 <- sapply((i + 1):N_ps, function(j) {
    y <- dl[[j]]
    yna <- naIds[[j]]
    w <- xna & yna
    N <- sum(w)
    if (N >= 5) {
      xw <- x[w]
      yw <- y[w]
      if ((min(xw) != max(xw)) && (min(yw) != max(yw))) { # faster than diff(range(.)) != 0
        # extracts from the lm/lm.fit/summary.lm functions
        X <- cbind(1L, xw)
        m <- .lm.fit(X, yw)
        # calculate adj.r.squared
        fitted <- yw - m$residuals
        rss <- sum(m$residuals^2)
        mss <- sum((fitted - mean(fitted))^2)
        n <- length(m$residuals)
        rdf <- n - m$rank
        # rdf <- df.residual
        r.squared <- mss/(mss + rss)
        adj.r.squared <- 1 - (1 - r.squared) * ((n - 1L)/rdf)
        # calculate standard errors & p-values
        p1 <- 1L:m$rank
        Qr <- m$qr
        R <- chol2inv(Qr[p1, p1, drop = FALSE])
        resvar <- rss/rdf
        se <- sqrt(diag(R) * resvar)
        est <- m$coefficients[m$pivot[p1]]
        tval <- est/se
        pvals <- 2 * pt(abs(tval), rdf, lower.tail = FALSE)
        res <- c(m$coefficients[2], se[2], pvals[2],
                 m$coefficients[1], se[1], pvals[1])
        o <- c(i, j, N, adj.r.squared, res)
      } else {
        o <- c(i, j, N, rep(NA,7)) # 7 NAs so the row width matches the full case
      }
    } else {o <- NULL}
    return(o)
  }, simplify = F)
  do.call(rbind, rl2)
}, simplify = F)
out2 <- do.call(rbind, rl)
toc = proc.time()
show(toc - tic)
#  user  system elapsed
# 17.94    0.11   18.44
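As a sanity check (my own addition, assuming the slow lm-based script and this one were both run on the same seed and data), the two result matrices should agree numerically:
all.equal(out, out2, check.attributes = FALSE)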

speed problems with odesolver in R

I have a differential equation model in R that uses the ode solver from the deSolve package. However, at the moment the model runs very slowly. I think this might have something to do with the function that I feed to the solver being poorly written, but I can't figure out what exactly is slowing it down or how I might speed it up. Does anyone have any ideas?
I've made an example that works in a similar way to mine:
library(data.table)
library(deSolve)
matrix_1 <- matrix(runif(100), 10, 10)
matrix_1[which(matrix_1 > 0.5)] <- 1
matrix_1[which(matrix_1 < 0.5)] <- 0
matrix_2 <- matrix(runif(100), 10, 10)
matrix_2[which(matrix_2 > 0.5)] <- 1
matrix_2[which(matrix_2 < 0.5)] <- 0
group_ID <- rep(c(1, 2), 5)
N <- runif(10, 0, 100000)

# function describing the differential equations
Nchange <- function(t, N, parameters) {
  with(as.list(c(N, parameters)), {
    N_per_1 <- matrix_1 * N_per_connection
    N_per_1[is.na(N_per_1)] <- 0
    total_N_2 <- as.vector(N_per_1)
    if (nrow(as.matrix(N_per_1)) > 1) {
      total_N_2 <- colSums(N_per_1[drop = FALSE])
    }
    N_per_1_cost <- N_per_1
    for (i in possible_competition) {
      column <- as.vector(N_per_1[, i])
      if (sum(column) > 0) {
        active_groups <- unique(group_ID[column > 0])
        if (length(active_groups) > 1) {
          group_ID_dets <- data.table("group_ID" = group_ID, "column" = column, "n_IDS" = 1:length(group_ID))
          group_ID_dets$portions <- ave(group_ID_dets$column, group_ID_dets$group_ID, FUN = function(x) x / sum(x))
          group_ID_dets[is.na(group_ID_dets)] <- 0
          totals <- as.vector(unlist(tapply(group_ID_dets$column, group_ID_dets$group_ID, function(x) sum(x))))
          totals[is.na(totals)] <- 0
          totals <- totals*2 - sum(totals)
          totals[totals < 0] <- 0
          group_ID_totals <- data.table("group_ID" = unique(group_ID), "totals" = as.vector(totals))
          group_ID_dets$totals <- group_ID_totals$totals[match(group_ID_dets$group_ID, group_ID_totals$group_ID)]
          N_per_1[, i] <- group_ID_dets$totals * group_ID_dets$portions
        }
      }
    }
    res_per_1 <- N_per_1 * 0.1
    N_per_2 <- matrix_2 * N_per_connection
    N_per_2[is.na(N_per_2)] <- 0
    res_per_2 <- N_per_2 * 0.1
    dN <- rowSums(res_per_1) - rowSums(N_per_1_cost * 0.00003) +
      rowSums(res_per_2) - rowSums(N_per_2 * 0.00003) - N*0.03
    list(c(dN))
  })
}

N_per_connection <- N/(rowSums(matrix_1) + rowSums(matrix_2))
possible_competition <- which(colSums(matrix_1 != 0) > 1)
times <- seq(0, 100, by = 1)
out <- ode(y = N, times = times, func = Nchange, parms = NULL)
A good way to identify the bottleneck is with a profiler, and the profvis package provides a good way of drilling down into the results. Wrapping your code in p <- profvis({YourCodeInHere}) and then viewing the results with print(p) gives the following insights:
The lines that are taking the most time are (in descending order of time taken):
group_ID_totals <- data.table("group_ID" = unique(group_ID), "totals" = as.vector(totals))
group_ID_dets$portions <- ave(group_ID_dets$column, group_ID_dets$group_ID, FUN = function(x) x / sum(x))
group_ID_dets <- data.table("group_ID" = group_ID, "column"= column, "n_IDS" = 1:length(group_ID))
totals <- as.vector(unlist(tapply(group_ID_dets$column, group_ID_dets$group_ID, function(x) sum(x))))
group_ID_dets$totals <- group_ID_totals$totals[match(group_ID_dets$group_ID, group_ID_totals$group_ID)]
I'm not familiar with the details of your ODE, but you should focus on optimising these tasks. I think the larger issue is that you're running these commands in a loop. Often, you'll hear that loops are slow in R, but a more nuanced discussion of this issue is found in the answers here. Some tips there might help you restructure your code/loop. Good luck!
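For reference, here is a minimal sketch of the profvis wrapping described above (my own illustration; it assumes the setup code from the question has already been run):
library(profvis)
p <- profvis({
  out <- ode(y = N, times = times, func = Nchange, parms = NULL)
})
print(p)  # opens profvis's interactive view of time spent per line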

Extracting row and column in R

I have to write a one-sample proportion Z-test function in R. I need the sample proportion to be the proportion of data in the first factor level.
For example,
data <- factor(c(NA, rep("a", 60), rep("b", 40)))
table(data)
 a  b
60 40
And I need the sample proportion to be 60/100. Here is a portion of my code; it returns an error saying unexpected symbol in mtab <- addmargins(table(data)).
hyp_test <- function(data, hyp_val=NULL, alpha, alternative="two-sided", graph=FALSE) {
  n <- sum(!is.na(data))
  ifelse(is.factor(data),
  mtab <- addmargins(table(data))
  phat <- mtab[1]/mtab[3]
  qhat <- 1 - phat
  if (length(hyp_val) > 0) {
    q <- 1 - hyp_val
    SE.phat <- sqrt((hyp_val*q)/n)
    ts.z <- (phat - hyp_val)/SE.phat
    p.val <- pnorm(ts.z)*2
    if (alternative=="less") {
      p.val <- pnorm(ts.z)
    }
    if (alternative=="greater") {
      p.val <- 1 - p.val
    }
  }
Any help would be much appreciated. Basically, I need to figure out how to compute the sample proportion.
In addition to what r2evans states, you should review if statements and pnorm. This is a guesstimate of what you are trying to accomplish since the code is cut off.
hyp_test <- function(data, hyp_val=NULL, alpha, alternative="two-sided", graph=FALSE) {
  n <- sum(!is.na(data))
  mtab <- addmargins(table(data))
  phat <- mtab[1]/mtab[3]
  qhat <- 1 - phat
  q <- 1 - hyp_val
  SE.phat <- sqrt((hyp_val*q)/n)
  ts.z <- (phat - hyp_val)/SE.phat
  p.val <- ifelse(alternative=="two-sided", 2*pnorm(-abs(ts.z)),
                  ifelse(alternative=="less", pnorm(ts.z), 1 - pnorm(ts.z)))
  if (graph==TRUE) {plot(...)}
  return(p.val)
}
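A hypothetical usage sketch with the factor from the question (the hyp_val and alpha values here are illustrative, not from the original post):
data <- factor(c(NA, rep("a", 60), rep("b", 40)))
hyp_test(data, hyp_val = 0.5, alpha = 0.05)  # two-sided; phat = 0.6, p ~ 0.0455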

R: creating a matrix with unknown number of rows

I have written the code below to generate a matrix containing what is, to me, a fairly complex pattern. In this case I determined that there are 136 rows in the finished matrix by trial and error.
I could write a function to calculate the number of matrix rows in advance, but the function would be a little complex. In this example the number of rows in the matrix = ((4 * 3 + 1) + (3 * 3 + 1) + (2 * 3 + 1) + (1 * 3 + 1)) * 4.
Is there an easy and efficient way to create matrices in R without hard-wiring the number of rows in the matrix statement? In other words, is there an easy way to let R simply add a row to a matrix as needed when using for-loops?
I have presented one solution that employs rbind at each pass through the loops, but that seems a little convoluted and I was wondering if there might be a much easier solution.
Sorry if this question is redundant with an earlier question. I could not locate a similar question using the search feature on this site or using an internet search engine today, although I think I have found a similar question somewhere in the past.
Below are 2 sets of example code, one using rbind and the other where I used trial and error to set nrow=136 in advance.
Thanks for any suggestions.
v1 <- 5
v2 <- 2
v3 <- 2
v4 <- (v1-1)

my.matrix <- matrix(0, nrow=136, ncol=(v1+4))

i = 1
for (a in 1:v2) {
  for (b in 1:v3) {
    for (c in 1:v4) {
      for (d in (c+1):v1) {
        if (d == (c+1)) l.s = 4 else l.s = 3
        for (e in 1:l.s) {
          my.matrix[i,c] = 1
          if (d == (c+1)) my.matrix[i,d] = (e-1) else my.matrix[i,d] = e
          my.matrix[i,(v1+1)] = a
          my.matrix[i,(v1+2)] = b
          my.matrix[i,(v1+3)] = c
          my.matrix[i,(v1+4)] = d
          i <- i + 1
        }
      }
    }
  }
}
my.matrix2 <- matrix(0, nrow=1, ncol=(v1+4))
my.matrix3 <- matrix(0, nrow=1, ncol=(v1+4))

i = 1
for (a in 1:v2) {
  for (b in 1:v3) {
    for (c in 1:v4) {
      for (d in (c+1):v1) {
        if (d == (c+1)) l.s = 4 else l.s = 3
        for (e in 1:l.s) {
          my.matrix2[1,c] = 1
          if (d == (c+1)) my.matrix2[1,d] = (e-1) else my.matrix2[1,d] = e
          my.matrix2[1,(v1+1)] = a
          my.matrix2[1,(v1+2)] = b
          my.matrix2[1,(v1+3)] = c
          my.matrix2[1,(v1+4)] = d
          i <- i + 1
          if (i == 2) my.matrix3 <- my.matrix2 else my.matrix3 <- rbind(my.matrix3, my.matrix2)
          my.matrix2 <- matrix(0, nrow=1, ncol=(v1+4))
        }
      }
    }
  }
}

all.equal(my.matrix, my.matrix3)
If you have some upper bound on the size of the matrix, you can create a matrix large enough to hold all the data
my.matrix <- matrix(0, nrow=v1*v2*v3*v4*4, ncol=(v1+4))
and truncate it at the end:
my.matrix <- my.matrix[1:(i-1),]
This is the generic form; you can adapt it to your problem (a runnable sketch follows below):
matrix <- NULL
for (...) {
  ...
  matrix <- rbind(matrix, vector)
}
where vector contains the row elements.
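A runnable instance of this pattern (a made-up toy loop; rbind() on NULL simply starts the matrix with the first row):
out <- NULL
for (i in 1:3) {
  new.row <- c(i, i^2)       # stands in for whatever the loop computes
  out <- rbind(out, new.row)
}
out                          # a 3 x 2 matrix, nrow never specified in advance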
I stumbled upon this solution today: convert the matrix to a data.frame. As new rows are needed by the for-loop, those rows are automatically added to the data.frame. Then you can convert the data.frame back to a matrix at the end if you want. I am not sure whether this constitutes something similar to iterative use of rbind; perhaps it becomes very slow with large data.frames. I do not know.
my.data <- matrix(0, ncol = 3, nrow = 2)
my.data <- as.data.frame(my.data)

j <- 1
for (i1 in 0:2) {
  for (i2 in 0:2) {
    for (i3 in 0:2) {
      my.data[j,1] <- i1
      my.data[j,2] <- i2
      my.data[j,3] <- i3
      j <- j + 1
    }
  }
}

my.data
my.data <- as.matrix(my.data)
dim(my.data)
class(my.data)
EDIT: July 27, 2015
You can also delete the first matrix statement, create an empty data.frame, and then convert the data.frame to a matrix at the end:
my.data <- data.frame(NULL, NULL, NULL)

j <- 1
for (i1 in 0:2) {
  for (i2 in 0:2) {
    for (i3 in 0:2) {
      my.data[j,1] <- i1
      my.data[j,2] <- i2
      my.data[j,3] <- i3
      j <- j + 1
    }
  }
}

my.data
my.data <- as.matrix(my.data)
dim(my.data)
class(my.data)
