Say we have a data.frame where the columns represent the quantiles for a given set of probabilities. Each row represents a different subject, and the quantiles vary by subject. The goal is to take n_draws draws for each subject.
n <- 1e5
alphas <- c(.05, .25, .5, .75, .95)
n_draws <- 100
dt <- data.frame(quantile_05 = runif(n),
                 quantile_25 = runif(n, min = 10, max = 20),
                 quantile_5  = runif(n, min = 30, max = 40),
                 quantile_75 = runif(n, min = 50, max = 60),
                 quantile_95 = runif(n, min = 70, max = 80))
R has stats::approx. The issue is that it can only be applied to one row at a time.
draws <- apply(X = dt, MARGIN = 1, function(q){
  stats::approx(x = alphas, y = q, yleft = 0, rule = 2, method = "linear",
                xout = seq(.0001, .99999, length.out = n_draws))$y
})
Naturally, one way to speed this up is parallelization:
library(parallel)
cl <- makeCluster(8)
clusterExport(cl, c('alphas', 'n_draws'))
draws <- parApply(cl = cl, X = dt, MARGIN = 1, function(q){
  stats::approx(x = alphas, y = q, yleft = 0, rule = 2, method = "linear",
                xout = seq(.0001, .99999, length.out = n_draws))$y
})
stopCluster(cl)
This parallel code is several times faster on my machine. I am curious if anyone has suggestions for further speed ups.
To be a little more clear, this is just a simplification of what I'm using in real life. I have more than 5 quantiles. I want to use this concept to estimate not only other quantiles, but also the mean. Further, I'd like to use the draws to model other quantities like max(y - 10, 0) (or y minus any arbitrary value).
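(For concreteness, an illustrative sketch of that post-processing, using the draws matrix computed above, which has one column per subject:)
subject_means  <- colMeans(draws)       # approximate mean for each subject
excess         <- pmax(draws - 10, 0)   # max(y - 10, 0) applied to every draw
subject_excess <- colMeans(excess)      # expected excess over 10, per subject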
Your proposed method is rather inefficient for the desired goal. You end up storing 100 points for each desired ECDF, and you still need to write a function to extract a desired draw. Instead, I suggest you consider approxfun. It returns a much more compact representation: one function per row, each with an associated environment that contains the knots for later evaluation. The actual calculation is done with a C call accessed through the internal helper function .approxfun.
Demonstrating the internals (up to a point) that I'm suggesting:
out <- approxfun(y = dt[1, ], x = alphas, yleft = 0, rule = 2, method = "linear")
# So out is now a single instance using the knots in the first row
out
#function (v)
#.approxfun(x, y, v, method, yleft, yright, f, na.rm)
#<bytecode: 0x558366535968>
#<environment: 0x5583690a04f8>
ls(environment(out))
#[1] "f" "method" "na.rm" "x" "y" "yleft" "yright"
environment(out)$x
#[1] 0.05 0.25 0.50 0.75 0.95
environment(out)$y
#[1] 0.4038727 17.7069735 33.4438595 57.2753257 77.2024894
If you wanted the estimated 55th percentile for the first case, you could get it with:
out(55/100)
#[1] 38.21015
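(To make that concrete, here is a minimal sketch, added for illustration, of building one such function per row with the dt and alphas from the question; this is the list of functions referred to below.)
## one interpolating function per subject; each closure stores only its knots
interp_fns <- lapply(seq_len(nrow(dt)), function(i) {
  approxfun(x = alphas, y = unlist(dt[i, ]), yleft = 0, rule = 2, method = "linear")
})
interp_fns[[1]](55/100)  # estimated 55th percentile for the first subject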
And now that I've suggested a way to speed up your creation of this list of functions, I'm not even sure it's worth it. I think you could just leave that dt dataframe in place and call approxfun when needed. But that's your call.
Note: This is essentially the method used by the ecdf function:
ecdf
function (x)
{
    x <- sort(x)
    n <- length(x)
    if (n < 1)
        stop("'x' must have 1 or more non-missing values")
    vals <- unique(x)
    rval <- approxfun(vals, cumsum(tabulate(match(x, vals)))/n,
        method = "constant", yleft = 0, yright = 1, f = 0, ties = "ordered")
    class(rval) <- c("ecdf", "stepfun", class(rval))
    assign("nobs", n, envir = environment(rval))
    attr(rval, "call") <- sys.call()
    rval
}
<bytecode: 0x558364a0f360>
<environment: namespace:stats>
And it's possible that you might want to use the ecdf function because it has some class-associated methods (plot, quantile, summary).
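For example (a small illustration of those methods, added here rather than part of the original answer):
e <- ecdf(rnorm(100))
quantile(e, 0.55)  # quantile() has an ecdf method
plot(e)            # as do plot() and summary()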
I have been trying to use an R function called ipsi, which takes the arguments (a, y, id, time, x.trt, x.out, delta.seq, nsplits). Originally, the components of these arguments were in one data frame (except for delta.seq and nsplits, which are coded later), but my understanding is that I needed to put them in separate lists and, in the case of x.trt and x.out, matrices. The function is easy to run on a single set of arguments, but since I multiply imputed the data frame 30 times before splitting it up into the elements that ipsi takes, I now want to iterate over the set of elements 30 times, as if there were 30 data frames. Additionally, I want to parallelize to make the most of my computing power.
I have just expanded the npcausal example:
n <- 500
T <- 4
time <- rep(1:T, n)
time <- rep(list(time), 30)
id <- rep(1:n, rep(T, n))
id <- rep(list(id), 30)
x.trt <- matrix(rnorm(n * T * 5), nrow = n * T)
x.trt <- rep(list(x.trt), 30)
x.out <- matrix(rnorm(n * T * 5), nrow = n * T)
x.out <- rep(list(x.out), 30)
a <- rbinom(n * T, 1, .5)
a <- rep(list(a), 30)
y <- rnorm(mean = 1, n)
y <- rep(list(y), 30)
d.seq <- seq(0.1, 5, length.out = 10)
d.seq <- rep(list(d.seq), 30)
set.seed(500, kind = "L'Ecuyer-CMRG")
numcores <- future::availableCores()
cl <- parallel::makeCluster(numcores)
parallel::clusterEvalQ(cl, library(dplyr))
parallel::clusterEvalQ(cl, library(npcausal))
parallel::clusterExport(cl, "d.seq", envir = environment())
parallel::clusterEvalQ(cl, d.seq <- d.seq)
new_element <- parallel::parLapply(cl = cl, for(i in 1:30){
npcausal::ipsi(a = a[[i]],
y = y[[i]],
id = id[[i]],
time = time[[i]],
x.out = x.out[[i]],
x.trt = x.trt[[i]],
delta.seq = d.seq[[i]],
nsplits = 10)
})
This actually runs, but at the end of the process it gives me an error saying that FUN is missing. I knew that already, but I have no FUN to supply besides ipsi. Thanks for any help you can provide.
My suggestion is to first figure out how to do it with a regular base-R *apply function, without worrying about parallelization. I suspect you can use mapply() for this, so something like (not confirmed):
res <- mapply(
  a, y, id, time, x.out, x.trt, d.seq,
  FUN = function(a_i, y_i, id_i, time_i, x.out_i, x.trt_i, d.seq_i) {
    npcausal::ipsi(a = a_i, y = y_i, id = id_i, time = time_i,
                   x.out = x.out_i, x.trt = x.trt_i, delta.seq = d.seq_i,
                   nsplits = 10)
  }
)
Once you've figured that part out, you can start thinking about parallelization.
(Disclaimer: I'm the author.) If you get an mapply() solution to work, then the simplest approach is to replace it as-is with future_mapply() from the future.apply package. That will parallelize on your local machine if you set plan(multisession).
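For illustration (my sketch, assuming the mapply() call above works as intended), the parallel version would look roughly like this:
library(future.apply)
future::plan("multisession")  # parallelize on the local machine
res <- future_mapply(
  a, y, id, time, x.out, x.trt, d.seq,
  FUN = function(a_i, y_i, id_i, time_i, x.out_i, x.trt_i, d.seq_i) {
    npcausal::ipsi(a = a_i, y = y_i, id = id_i, time = time_i,
                   x.out = x.out_i, x.trt = x.trt_i, delta.seq = d.seq_i,
                   nsplits = 10)
  },
  future.seed = TRUE  # statistically sound parallel random numbers
)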
I've tried to optimize a function that I wrote a few weeks ago.
It got better, but it is still slow. So I used Rprof() and found that split() takes most of the time, which makes me think this function can be a lot better.
Can it be done?
normDist_V2 <- function(size = 1e5, precision = 1, ...)
{
  data <- rnorm(size)
  roundedData <- round(data, precision)
  framedData <- data.frame(cbind(data, roundedData))
  factoredData <- split(framedData$data, framedData$roundedData)
  actualsize <- size / 10^precision
  X <- names(factoredData)
  Probability <- sapply(factoredData, length) / actualsize
  plot(X, Probability, ...)
}
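(For reference, a minimal sketch of the profiling step mentioned above; the file name "prof.out" is arbitrary:)
Rprof("prof.out")
normDist_V2(size = 1e7, precision = 2)
Rprof(NULL)
head(summaryRprof("prof.out")$by.total)  # in my runs, split() dominates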
Current speed:
system.time(normDist_V2(size=1e7, precision = 2)) #11.14 sec
normDist_V2 <- function(size = 1e5, precision = 1, ...) {
  require(data.table)
  data <- rnorm(size)
  roundedData <- round(data, precision)
  framedData <- data.table(data, roundedData)
  actualsize <- size / 10^precision
  dt <- framedData[, .N, keyby = roundedData]
  X <- dt$roundedData
  Probability <- dt$N / actualsize
  plot(X, Probability, ...)
}
system.time(normDist_V2(size=1e7, precision = 2)) # 1.26 sec
I have been struggling with this for a long time. I have a logistic growth function where the growth parameter r is a matrix. The model is constructed so that I get two N values as output, N1 and N2.
I would like to be able to change the r parameter over time. When time < 50 I would like r = r1, where
r1 = matrix(c(2, 3), nrow = 1, ncol = 2)
When time >= 50 I would like r = r2, where
r2 = matrix(c(1, 2), nrow = 1, ncol = 2)
Here is my function. Any help is highly appreciated.
rm(list = ls())
library(deSolve)
model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    N = y[paste("N", 1:2, sep = "")]
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}
r = matrix(c(4, 5), nrow = 1, ncol = 2)
K=100
params <- list(r = r, K = K)
y<- c(N1=0.1, N2=0.2)
times <- seq(0,100,1)
out <- ode(y, times, model, params)
plot(out)
Ideally I would like something like this, but it does not work:
model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    N = y[paste("N", 1:2, sep = "")]
    r = ifelse(times < 10, matrix(c(1, 3), nrow = 1, ncol = 2),
          ifelse(times > 10, matrix(c(1, 4), nrow = 1, ncol = 2),
                 matrix(c(1, 2), nrow = 1, ncol = 2)))
    print(r)
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}
Thank you for your time.
Here is a generic approach that uses an extended version of the approx function. Note also some further simplifications of the model function and the additional plot of the parameter values.
Edit: changed according to the suggestion of Lewis Carter to make the parameter change at t = 3, so that the effect can be seen.
library(simecol) # contains approxTime, a vector version of approx
model <- function(time, N, params) {
  r <- approxTime(params$signal, time, rule = 2, f = 0, method = "constant")[-1]
  K <- params$K
  dN <- r * N * (1 - N / K)
  return(list(c(dN), r))
}
signal <- matrix(
  # time, r[1], r[2]
  c(  0, 2, 3,
      3, 1, 2,
    100, 1, 2), ncol = 3, byrow = TRUE
)
## test of the interpolation
approxTime(signal, c(1, 2.9, 3, 100), rule = 2, f=0, method="constant")
params <- list(signal = signal, K = 100)
y <- c(N1=0.1, N2=0.2)
times <- seq(0, 10, 0.1)
out <- ode(y, times, model, params)
plot(out)
For a small number of state variables like in the example, separate signals with approxfun from package stats will look less generic but may be slightly faster.
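(A minimal sketch of that approxfun variant, added here for illustration, using the same switch at t = 3:)
## one constant-interpolation function per parameter, built with stats::approxfun
r1_fun <- approxfun(x = c(0, 3), y = c(2, 1), method = "constant", rule = 2)
r2_fun <- approxfun(x = c(0, 3), y = c(3, 2), method = "constant", rule = 2)
model2 <- function(time, N, params) {
  r <- c(r1_fun(time), r2_fun(time))
  dN <- r * N * (1 - N / params$K)
  list(c(dN), r)
}
out2 <- ode(y, times, model2, list(K = 100))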
As a further improvement, one may consider replacing the "hard" transitions with smoother ones. These can then be formulated directly as a function, without the need for approx, approxfun or approxTime.
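(For example, one hypothetical smooth alternative is a logistic switch between the two parameter sets; the steepness k below is an arbitrary assumption:)
## smooth transition from r_before to r_after around t0
r_smooth <- function(time, r_before = c(2, 3), r_after = c(1, 2), t0 = 3, k = 10) {
  w <- 1 / (1 + exp(-k * (time - t0)))   # weight: ~0 before t0, ~1 after t0
  (1 - w) * r_before + w * r_after
}
model3 <- function(time, N, params) {
  r <- r_smooth(time)
  dN <- r * N * (1 - N / params$K)
  list(c(dN), r)
}
out3 <- ode(y, times, model3, list(K = 100))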
Edit 2:
Package simecol imports deSolve, and we need only a small function from it. So instead of loading simecol it is also possible to include the approxTime function explicitly in the code. The conversion from data frame to matrix improves performance, but a matrix is preferred anyway in such cases.
approxTime <- function(x, xout, ...) {
  if (is.data.frame(x)) {x <- as.matrix(x); wasdf <- TRUE} else wasdf <- FALSE
  if (!is.matrix(x)) stop("x must be a matrix or data frame")
  m <- ncol(x)
  y <- matrix(0, nrow = length(xout), ncol = m)
  y[, 1] <- xout
  for (i in 2:m) {
    y[, i] <- as.vector(approx(x[, 1], x[, i], xout, ...)$y)
  }
  if (wasdf) y <- as.data.frame(y)
  names(y) <- dimnames(x)[[2]]
  y
}
If you want to pass a matrix parameter, you should pass a list of parameters, and you can modify r inside the model when your time limit is exceeded (in the example below you don't even have to pass the r matrix to the model function):
library(deSolve)
model <- function(time, y, params) {
  with(as.list(c(y, params)), {
    if (time < 3) r = matrix(c(2, 3), nrow = 1, ncol = 2)
    else r = matrix(c(1, 3), nrow = 1, ncol = 2)
    N = y[paste("N", 1:2, sep = "")]
    dN <- r * N * (1 - N / K)
    return(list(c(dN)))
  })
}
y <- c(N1=0.1, N2=0.2)
params <- list(r = matrix(c(0,0), nrow = 1, ncol = 2), K=100)
times <- seq(0,10,0.1)
out <- ode(y, times, model, params)
plot(out)
You can see examples of this approach, for instance, with delay differential equations (?dede).
This is homework.
I am new to R.
I have two data frames, each containing two columns of data. I have to write a function that normalizes the first data frame to a mean of 0 and a variance of 1, for both columns. Then I want to apply that function to the second data frame.
I have tried this:
my_scale_test <- function(x, y) {
  apply(y, 2, function(x, y) {
    (y - mean(x)) / sd(x)
  })
}
where x is the first data frame and y is the data frame to be normalized.
Can someone help me?
Edit:
I have now tried this as well, but it is not working either:
scale_func <- function(x, y) {
  xmean <- mean(x)
  xstd <- sd(x)
  yout <- y
  for (i in 1:length(x[1, ]))
    yout[, i] <- yout[, i] - xmean[i]
  for (i in 1:length(x[1, ]))
    yout[, i] <- yout[, i] / xsd[i]
  invisible(yout)
}
Edit 2:
I found this working MATLAB function (which I tried to translate in Edit 1):
function [ Xout ] = scale( Xbase, Xin )
    Xmean = mean(Xbase);
    Xstd = std(Xbase);
    Xout = Xin;
    for i = 1:length(Xbase(1,:))
        Xout(:,i) = Xout(:,i) - Xmean(i);
    end
    for i = 1:length(Xbase(1,:))
        Xout(:,i) = Xout(:,i) / Xstd(i);
    end
end
Can someone help me translate it?
Since you are new to R, let's try something really basic.
my_scale_test <- function(x, y) {
  y.nrow <- nrow(y)
  x.mean <- data.frame(t(apply(x, 2, mean)))
  x.sd <- data.frame(t(apply(x, 2, sd)))
  # To let x.mean and x.sd have the same dimension as y, let's repeat the rows.
  x.mean <- x.mean[rep(1, y.nrow), ]
  x.sd <- x.sd[rep(1, y.nrow), ]
  (y - x.mean) / x.sd
}
To test, try
set.seed(1)
x <- data.frame(matrix(rnorm(10), nrow = 5))
y <- x
result <- my_scale_test(x, y)
apply(result, 2, mean)
apply(result, 2, sd)
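As a side note (not part of the original answer), base R's scale() accepts vectors for center and scale, so the same normalization can be written more compactly:
my_scale_test2 <- function(x, y) {
  # center and scale y by the column means and standard deviations of x
  as.data.frame(scale(y, center = colMeans(x), scale = apply(x, 2, sd)))
}
result2 <- my_scale_test2(x, y)
apply(result2, 2, mean)  # approximately 0
apply(result2, 2, sd)    # approximately 1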