I'm trying to move some Fortran code to R for finite differences related to chemical kinetics.
Sample Fortran loop:
DOUBLE PRECISION, DIMENSION (2000,2) :: data=0.0
DOUBLE PRECISION :: k1=5.0, k2=20.0, dt=0.0005
DO i=2, 2000
data(i,1) = data(i-1,1) + data(i-1,1)*(-k1)*dt
data(i,2) = data(i-1,2) + ( data(i-1,1)*k1*dt - data(i-1,2)*k2*dt )
...
END DO
The analogous R code:
k1=5
k2=20
dt=0.0005
data=data.frame(cbind(c(500,rep(0,1999)),rep(0,2000)))
a.fun=function(y){
y2=y-k1*y*dt
return(y2)
}
apply(data,2,a.fun)
This overwrites the first value in my data frame and leaves zeros elsewhere. I'd like to run this vectorized, without a for loop, since loops are so slow in R. Also, my function only calculates the first column so far; I can't get the second column working until the syntax is right on the first.
It's not necessarily true that R is bad at loops; it very much depends on what you are doing. Using k1, k2, dt and data from the question (i.e. the four lines beginning with k1=5) and formulating the problem in terms of an iterated matrix, the loop in the last line below returns nearly instantaneously on my PC:
z <- as.matrix(data)
m <- matrix(c(1-k1*dt, k1*dt, 0, 1-k2*dt), 2)
for(i in 2:nrow(z)) z[i, ] <- m %*% z[i-1, ]
(You could also try storing the vectors in columns of z rather than rows since R stores matrices by column.)
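For instance, a minimal sketch of that column-wise variant (assuming the same m and initial state as above):
zc <- matrix(0, nrow = 2, ncol = 2000)  # states live in the columns now
zc[, 1] <- c(500, 0)
for (i in 2:ncol(zc)) zc[, i] <- m %*% zc[, i - 1]
all.equal(t(zc), unname(z))  # same numbers as the row-wise z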
Here is the first bit of the row-wise result z:
> head(z)
X1 X2
[1,] 500.0000 0.000000
[2,] 498.7500 1.250000
[3,] 497.5031 2.484375
[4,] 496.2594 3.703289
[5,] 495.0187 4.906905
[6,] 493.7812 6.095382
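And if you really want to avoid the loop altogether, note that the recursion is linear, so z[i, ] = m^(i-1) %*% z[1, ]. Here is a hedged sketch using an eigendecomposition (safe here because m is lower-triangular with two distinct diagonal entries, hence diagonalizable); treat it as an illustration rather than a necessary optimization, since the loop above is already fast:
e <- eigen(m)                        # eigenvalues 0.9975 and 0.99
coef <- solve(e$vectors, c(500, 0))  # initial state expressed in the eigenbasis
P <- sweep(outer(0:1999, e$values, `^`), 2, coef, `*`)
z2 <- P %*% t(e$vectors)             # row i is the state after i-1 steps
all.equal(unname(z), z2)             # agrees with the loop, up to rounding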
Maybe this can help.
I think you also need the initial condition for data[1,2]. I assumed data[1,1] = 500 and data[1,2] = 0 as the initial conditions.
The code goes like this:
> ## Define two vectors x and y, pre-allocated with zeros
> x <- numeric(2000)
> y <- numeric(2000)
>
> ## Constants
> k1 = 5.0
> dt = 0.0005
> k2 = 20.0
>
> ## Initialize x[1]=500 and y[1]=0
> x[1]=500
> y[1] = 0
>
> for (i in 2:2000){
+ x[i] = x[i-1] + x[i-1]*(-k1)*dt
+ y[i] = y[i-1] + x[i-1]*k1*dt - y[i-1]*k2*dt
+ }
>
> finaldata <- data.frame(x,y)
> head(finaldata)
x y
1 500.0000 0.000000
2 498.7500 1.250000
3 497.5031 2.484375
4 496.2594 3.703289
5 495.0187 4.906905
6 493.7812 6.095382
I hope this helps.
I have two large sparse matrices (about 41,000 x 55,000 in size). The density of nonzero elements is around 10%. They both have the same row index and column index for nonzero elements.
I now want to modify the values in the first sparse matrix if values in the second matrix are below a certain threshold.
library(Matrix)
# Generating the example matrices.
set.seed(42)
# Rows with values.
i <- sample(1:41000, 227000000, replace = TRUE)
# Columns with values.
j <- sample(1:55000, 227000000, replace = TRUE)
# Values for the first matrix.
x1 <- runif(227000000)
# Values for the second matrix.
x2 <- sample(1:3, 227000000, replace = TRUE)
# Constructing the matrices.
m1 <- sparseMatrix(i = i, j = j, x = x1)
m2 <- sparseMatrix(i = i, j = j, x = x2)
I now get the rows, columns and values from the first matrix in a new matrix. This way, I can simply subset them and only the ones I am interested in remain.
# Getting the positions and values from the matrices.
position_matrix_from_m1 <- rbind(i = m1@i, j = summary(m1)$j, x = m1@x)
position_matrix_from_m2 <- rbind(i = m2@i, j = summary(m2)$j, x = m2@x)
# Subsetting to get the elements of interest.
position_matrix_from_m1 <- position_matrix_from_m1[,position_matrix_from_m1[3,] > 0 & position_matrix_from_m1[3,] < 0.05]
# We add 1 to the values, since the sparse matrix is 0-based.
position_matrix_from_m1[1,] <- position_matrix_from_m1[1,] + 1
position_matrix_from_m1[2,] <- position_matrix_from_m1[2,] + 1
Now I am getting into trouble. Overwriting the values in the second matrix takes too long. I let it run for several hours and it did not finish.
# This takes hours.
m2[position_matrix_from_m1[1,], position_matrix_from_m1[2,]] <- 1
m1[position_matrix_from_m1[1,], position_matrix_from_m1[2,]] <- 0
I thought about pasting the row and column information together. Then I have a unique identifier for each value. This also takes too long and is probably just very bad practice.
# We would get the unique identifiers after the subsetting.
m1_identifiers <- paste0(position_matrix_from_m1[1,], "_", position_matrix_from_m1[2,])
m2_identifiers <- paste0(position_matrix_from_m2[1,], "_", position_matrix_from_m2[2,])
# Now, I could use which and get the position of the values I want to change.
# This also uses too much memory.
m2_identifiers_of_interest <- which(m2_identifiers %in% m1_identifiers)
# Then I would modify the x values in the position_matrix_from_m2 matrix and overwrite m2#x in the sparse matrix object.
Is there a fundamental error in my approach? What should I do to run this efficiently?
Is there a fundamental error in my approach?
Yes. Here it is.
# This takes hours.
m2[position_matrix_from_m1[1,], position_matrix_from_m1[2,]] <- 1
m1[position_matrix_from_m1[1,], position_matrix_from_m1[2,]] <- 0
Syntax like mat[rn, cn] (whether mat is a dense or sparse matrix) selects all rows in rn and all columns in cn, so you get a length(rn) x length(cn) matrix. Here is a small example:
A <- matrix(1:9, 3, 3)
# [,1] [,2] [,3]
#[1,] 1 4 7
#[2,] 2 5 8
#[3,] 3 6 9
rn <- 1:2
cn <- 2:3
A[rn, cn]
# [,1] [,2]
#[1,] 4 7
#[2,] 5 8
What you intend to do is to select only (rn[1], cn[1]), (rn[2], cn[2]), .... The correct syntax is then mat[cbind(rn, cn)]. Here is a demo:
A[cbind(rn, cn)]
#[1] 4 8
So you need to fix your code to:
m2[cbind(position_matrix_from_m1[1,], position_matrix_from_m1[2,])] <- 1
m1[cbind(position_matrix_from_m1[1,], position_matrix_from_m1[2,])] <- 0
Oh wait... Based on your construction of position_matrix_from_m1, this is just
ij <- t(position_matrix_from_m1[1:2, ])
m2[ij] <- 1
m1[ij] <- 0
Now, let me explain how you can do better. You have underused summary(). It returns a 3-column data frame giving the (i, j, x) triplets, where both i and j are indices starting from 1. You could have worked with this nice output directly, as follows:
# Getting (i, j, x) triplet (stored as a data.frame) for both `m1` and `m2`
position_matrix_from_m1 <- summary(m1)
# you never seem to use `position_matrix_from_m2` so I skip it
# Subsetting to get the elements of interest.
position_matrix_from_m1 <- subset(position_matrix_from_m1, x > 0 & x < 0.05)
Now you can do:
ij <- as.matrix(position_matrix_from_m1[, 1:2])
m2[ij] <- 1
m1[ij] <- 0
Is there an even better solution? Yes! Note that the nonzero elements in m1 and m2 are located in the same positions, so basically you just need to change m2@x according to m1@x.
ind <- m1@x > 0 & m1@x < 0.05
m2@x[ind] <- 1
m1@x[ind] <- 0
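If you want a cheap safety check before touching the slots directly (my own suggestion, not part of the original answer), you can assert that the two matrices really do share a sparsity pattern; a dgCMatrix stores that pattern in its @i (row indices) and @p (column pointers) slots:
stopifnot(identical(m1@i, m2@i), identical(m1@p, m2@p))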
A complete R session
I don't have enough RAM to create your large matrix, so I reduced your problem size a little bit for testing. Everything worked smoothly.
library(Matrix)
# Generating the example matrices.
set.seed(42)
## reduce problem size to what my laptop can handle
squeeze <- 0.1
# Rows with values.
i <- sample(1:(41000 * squeeze), 227000000 * squeeze ^ 2, replace = TRUE)
# Columns with values.
j <- sample(1:(55000 * squeeze), 227000000 * squeeze ^ 2, replace = TRUE)
# Values for the first matrix.
x1 <- runif(227000000 * squeeze ^ 2)
# Values for the second matrix.
x2 <- sample(1:3, 227000000 * squeeze ^ 2, replace = TRUE)
# Constructing the matrices.
m1 <- sparseMatrix(i = i, j = j, x = x1)
m2 <- sparseMatrix(i = i, j = j, x = x2)
## give me more usable RAM
rm(i, j, x1, x2)
##
## fix to your code
##
m1a <- m1
m2a <- m2
# Getting (i, j, x) triplet (stored as a data.frame) for both `m1` and `m2`
position_matrix_from_m1 <- summary(m1)
# Subsetting to get the elements of interest.
position_matrix_from_m1 <- subset(position_matrix_from_m1, x > 0 & x < 0.05)
ij <- as.matrix(position_matrix_from_m1[, 1:2])
m2a[ij] <- 1
m1a[ij] <- 0
##
## the best solution
##
m1b <- m1
m2b <- m2
ind <- m1@x > 0 & m1@x < 0.05
m2b@x[ind] <- 1
m1b@x[ind] <- 0
##
## they are identical
##
all.equal(m1a, m1b)
#[1] TRUE
all.equal(m2a, m2b)
#[1] TRUE
Caveat:
I know that some people may propose
m1c <- m1
m2c <- m2
logi <- m1 > 0 & m1 < 0.05
m2c[logi] <- 1
m1c[logi] <- 0
It looks completely natural in R's syntax, but trust me, it is extremely slow for large matrices. One reason: m1 < 0.05 is TRUE for every structural zero as well, so the resulting logical matrix is almost entirely non-default and effectively dense.
function(q,b,Data1,Data2){
x<-sum(
ifelse(Data1[13+q,b]/Data1[12+q,b]>Data2[13+q,1]/Data2[12+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[11+q,b]>Data2[13+q,1]/Data2[11+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[10+q,b]>Data2[13+q,1]/Data2[10+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[9+q,b]>Data2[13+q,1]/Data2[9+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[8+q,b]>Data2[13+q,1]/Data2[8+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[7+q,b]>Data2[13+q,1]/Data2[7+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[6+q,b]>Data2[13+q,1]/Data2[6+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[5+q,b]>Data2[13+q,1]/Data2[5+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[4+q,b]>Data2[13+q,1]/Data2[4+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[3+q,b]>Data2[13+q,1]/Data2[3+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[2+q,b]>Data2[13+q,1]/Data2[2+q,1],1,0)+
ifelse(Data1[13+q,b]/Data1[1+q,b]>Data2[13+q,1]/Data2[1+q,1],1,0)
)/12
}
Is there a way to simplify this? (no characters, only numbers in the data sets)
Thank you
Two pieces of knowledge you can combine to improve your code:
Firstly, you can divide a single number by a vector and R will return a vector with elementwise divisions. For example:
5 / c(1,2,3,4,5,6)
# [1] 5.0000000 2.5000000 1.6666667 1.2500000 1.0000000 0.8333333
The numerators on both sides of the inequality are the same every time, so you can use this. Instead of explicitly writing out the division for every inequality, you can perform it once against the whole vector of denominators.
Secondly, an expression with TRUE or FALSE will be coerced to 1 and 0 when you try to perform arithmetic operations (in your case division, or calculating a mean). Inequalities return TRUE or FALSE values. Explicitly telling R to convert them to 0 and 1 is wasted energy, because R will automatically do it in your last step.
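A tiny illustration of this coercion:
sum(c(TRUE, FALSE, TRUE, TRUE))  # 3: each TRUE counts as 1
mean(c(5, 2, 7) > c(4, 6, 1))    # 0.6666667: the fraction of TRUEs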
Putting this together in a simplified function:
function(q, b, Data1, Data2){
  qseq <- (1:12) + q     # Replaces all "q+1", "q+2", ..., "q+12"
  dat1 <- Data1[qseq, b] # Replaces all "Data1[q+1, b]", ..., "Data1[q+12, b]"
  dat2 <- Data2[qseq, 1] # Replaces all "Data2[q+1, 1]", ..., "Data2[q+12, 1]"
  mean( Data1[13+q, b]/dat1 > Data2[13+q, 1]/dat2 )
}
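As a quick sanity check, you can give the function a name and feed it small made-up matrices (the name ratio_frac and the data below are hypothetical, purely for illustration):
ratio_frac <- function(q, b, Data1, Data2){
  qseq <- (1:12) + q
  mean(Data1[13+q, b]/Data1[qseq, b] > Data2[13+q, 1]/Data2[qseq, 1])
}
set.seed(1)
Data1 <- matrix(runif(60, 1, 10), nrow = 20)  # 20 x 3 matrix of fake data
Data2 <- matrix(runif(20, 1, 10), nrow = 20)  # 20 x 1 matrix of fake data
ratio_frac(q = 1, b = 2, Data1, Data2)        # fraction of the 12 comparisons that hold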
This simplifies it a bit:
function(q, b, Data1, Data2){
  data1_num <- Data1[13+q, b]
  data2_num <- Data2[13+q, 1]
  x <- 0
  for (i in 1:12) {
    x <- x + ((data1_num/Data1[i+q, b]) > (data2_num/Data2[i+q, 1]))
  }
  x / 12  # last expression is returned; a bare assignment would return invisibly
}
But if you provide example data and the output you're expecting, I'm sure there is a way to simplify it further.
In Excel, it's easy to perform a calculation on a previous cell by referencing that earlier cell. For example, starting from an initial value of 100 (step = 0), each next step would be 0.9 * previous + 9 simply by dragging the formula bar down from the first cell (step = 1). The next 10 steps would look like:
step value
[1,] 0 100.00000
[2,] 1 99.00000
[3,] 2 98.10000
[4,] 3 97.29000
[5,] 4 96.56100
[6,] 5 95.90490
[7,] 6 95.31441
[8,] 7 94.78297
[9,] 8 94.30467
[10,] 9 93.87420
[11,] 10 93.48678
I've looked around the web and StackOverflow, and the best I could come up with is a for loop (below). Are there more efficient ways to do this? Is it possible to avoid a for loop? It seems like most functions in R (such as cumsum, diff, apply, etc) work on existing vectors instead of calculating new values on the fly from previous ones.
#for loop. This works
value <- 100 #Initial value
for(i in 2:11) {
current <- 0.9 * value[i-1] + 9
value <- append(value, current)
}
cbind(step = 0:10, value) #Prints the example output shown above
It seems like you're looking for a way to do recursive calculations in R. Base R has two ways of doing this which differ by the form of the function used to do the recursion. Both methods could be used for your example.
Reduce can be used with recursion equations of the form v[i+1] = function(v[i], x[i]) where v is the calculated vector and x an input vector; i.e. where the i+1 output depends only the i-th values of the calculated and input vectors and the calculation performed by function(v, x) may be nonlinear. For you case, this would be
value <- 100
nout <- 10
# v[i+1] = function(v[i], x[i])
v <- Reduce(function(v, x) .9*v + 9, x=numeric(nout), init=value, accumulate=TRUE)
cbind(step = 0:nout, v)
filter is used with recursion equations of the form y[i] = x[i] + filter[1]*y[i-1] + ... + filter[p]*y[i-p] where y is the calculated vector and x an input vector; i.e. where the output can depend linearly upon lagged values of the calculated vector as well as the i-th value of the input vector. For your case, this would be:
value <- 100
nout <- 10
# y[i] = x[i] + filter[1]*y[i-1] + ... + filter[p]*y[i-p]
y <- c(value, stats::filter(x=rep(9, nout), filter=.9, method="recursive", sides=1, init=value))
cbind(step = 0:nout, y)
For both functions, the length of the output is given by the length of the input vector x.
Both of these approaches give your result.
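A quick check that the two approaches agree (reusing v and y from above; as.numeric() also drops the time-series attributes that stats::filter attaches):
all.equal(as.numeric(v), as.numeric(y))
# [1] TRUE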
Use our knowledge of the geometric series: unrolling the recursion v[i] = 0.9*v[i-1] + 9 gives the closed form v[i] = 0.9^i * 100 + 9*(0.9^i - 1)/(0.9 - 1), which can be evaluated for all steps at once.
i <- 0:10
0.9 ^ i * 100 + 9 * (0.9 ^ i - 1) / (0.9 - 1)
#[1] 100.00000 99.00000 98.10000 97.29000 96.56100 95.90490 95.31441 94.78297 94.30467 93.87420 93.48678
You could also use purrr::accumulate (note that .init supplies the starting value, so iterating over ten more elements reproduces steps 0 through 10):
data.frame(value = purrr::accumulate(1:10, ~ .x * .9 + 9, .init = 100))
value
1 100.00000
2 99.00000
3 98.10000
4 97.29000
5 96.56100
6 95.90490
7 95.31441
8 94.78297
9 94.30467
10 93.87420
11 93.48678
.init is the initial value, and there is also the argument .dir if you want to control the direction ("forward" is the default).
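For instance (my own small example), .dir = "backward" accumulates from the right instead of the left:
purrr::accumulate(1:5, `+`, .dir = "backward")
# [1] 15 14 12 9 5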
I am trying to generate n random numbers whose sum is less than 1.
So I can't just run runif(3). But I can condition each iteration on the sum of all values generated up to that point.
The idea is to start with an empty vector v and set up a loop such that on each iteration i a runif() value is generated; before it is accepted as an element of v (i.e. v[i] <- runif()), the test sum(v) + candidate < 1 is carried out. If TRUE, the candidate is accepted as v[i]; if FALSE, that is, the sum would reach 1, the candidate is tossed out and iteration i is repeated.
I am far from implementing this idea, but I would like to resolve it along the lines of something similar to what follows. It's not so much a practical problem, but more of an exercise to understand the syntax of loops in general:
n <- 4
v <- 0
for (i in 1:n){
rdom <- runif(1)
if((sum(v) + rdom) < 1) v[i] <- rdom
}
# keep trying before moving on to iteration i + 1???? i <- stays i?????
I have looked into while (I even put while in the title); however, I need the vector to have n elements, so I get stuck if I try something that basically tells R to keep adding random uniform realizations as elements of v while sum(v) < 1, because I can end up with fewer than n elements in v.
Here's a possible solution. It was originally written with the more generic repeat; using while saves a couple of lines.
set.seed(0)
n <- 4
v <- numeric(n)
i <- 0
while (i < n) {
ith <- runif(1)
if (sum(c(v, ith)) < 1) {
i <- i+1
v[i] <- ith
}
}
v
# [1] 0.89669720 0.06178627 0.01339033 0.02333120
Using a repeat block you must check the stopping condition yourself anyway, but, again avoiding the vector-growing problem, it looks very similar:
set.seed(0)
n <- 4
v <- numeric(n)
i <- 0
repeat {
ith <- runif(1)
if (sum(c(v, ith)) < 1) {
i <- i+1
v[i] <- ith
}
if (i == n) break
}
If you really want to keep exactly the same procedure that you have posted (aka iteratively sample the n values one at a time from the standard uniform distribution, rejecting any samples that cause your sum to exceed 1), then the following code is mathematically equivalent, shorter, and more efficient:
samp <- function(n) {
v <- rep(0, n)
for (i in 1:n) {
v[i] <- runif(1, 0, 1-sum(v))
}
v
}
Basically, this code uses the mathematical fact that if the sum of the vector is currently sum(v), then sampling from the standard uniform distribution until you get a value no greater than 1-sum(v) is exactly equivalent to sampling in the uniform distribution from 0 to 1-sum(v). The advantage of using the latter approach is that it's much more efficient -- we don't need to keep rejecting samples and trying again, and can instead just sample once for each element.
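A quick usage example:
set.seed(144)
v <- samp(4)
v        # four draws
sum(v)   # strictly less than 1 by construction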
To get a sense of the runtime differences, consider sampling 100 observations with n=10, comparing to a working implementation of the code from your post (copied from my other answer to this question):
OP <- function(n) {
v <- rep(0, n)
for (i in 1:n){
rdom <- runif(1)
while (sum(v) + rdom > 1) rdom <- runif(1)
v[i] <- rdom
}
v
}
set.seed(144)
system.time(samples.OP <- replicate(100, OP(10)))
# user system elapsed
# 261.937 1.641 265.805
system.time(samples.josliber <- replicate(100, samp(10)))
# user system elapsed
# 0.004 0.001 0.004
In this case, the new approach is roughly 66,000 times faster (265.8 s vs 0.004 s).
It sounds like you're trying to uniformly sample from a space of n variables where the following constraints hold:
x_1 + x_2 + ... + x_n <= 1
x_1 >= 0
x_2 >= 0
...
x_n >= 0
The "hit and run" algorithm is the mathematical machinery that enables you to do exactly this. In 2-dimensional space, the algorithm will sample uniformly from the following triangle, with each location in the shaded area being equally likely to be selected:
The algorithm is provided in R through the hitandrun package, which requires you to specify the linear inequalities that define the space through a constraint matrix, direction vector, and right-hand side vector:
library(hitandrun)
n <- 3
constr <- list(constr = rbind(rep(1, n), -diag(n)),
dir = c(rep("<=", n+1)),
rhs = c(1, rep(0, n)))
set.seed(144)
samples <- hitandrun(constr, n.samples=1000)
head(samples, 10)
# [,1] [,2] [,3]
# [1,] 0.28914690 0.01620488 0.42663224
# [2,] 0.65489979 0.28455231 0.00199671
# [3,] 0.23215115 0.00661661 0.63597912
# [4,] 0.29644234 0.06398131 0.60707269
# [5,] 0.58335047 0.13891392 0.06151205
# [6,] 0.09442808 0.30287832 0.55118290
# [7,] 0.51462261 0.44094683 0.02641638
# [8,] 0.38847794 0.15501252 0.31572793
# [9,] 0.52155055 0.09921046 0.13304728
# [10,] 0.70503030 0.03770875 0.14299089
Breaking down this code a bit, we generated the following constraint matrix:
constr
# $constr
# [,1] [,2] [,3]
# [1,] 1 1 1
# [2,] -1 0 0
# [3,] 0 -1 0
# [4,] 0 0 -1
#
# $dir
# [1] "<=" "<=" "<=" "<="
#
# $rhs
# [1] 1 0 0 0
Reading across the first line of constr$constr we have 1, 1, 1 which indicates "1*x1 + 1*x2 + 1*x3". The first element of constr$dir is <=, and the first element of constr$rhs is 1; putting it together we have x1 + x2 + x3 <= 1. From the second row of constr$constr we read -1, 0, 0 which indicates "-1*x1 + 0*x2 + 0*x3". The second element of constr$dir is <= and the second element of constr$rhs is 0; putting it together we have -x1 <= 0 which is the same as saying x1 >= 0. The similar non-negativity constraints follow in the remaining rows.
Note that the hit and run algorithm has the nice property of having the exact same distribution for each of the variables:
hist(samples[,1])
hist(samples[,2])
hist(samples[,3])
Meanwhile, the distribution of the samples from your procedure will be highly uneven, and as n increases this problem will get worse and worse.
OP <- function(n) {
v <- rep(0, n)
for (i in 1:n){
rdom <- runif(1)
while (sum(v) + rdom > 1) rdom <- runif(1)
v[i] <- rdom
}
v
}
samples.OP <- t(replicate(1000, OP(3)))
hist(samples.OP[,1])
hist(samples.OP[,2])
hist(samples.OP[,3])
An added advantage is that the hit-and-run algorithm appears faster -- I generated these 1000 replicates in 0.006 seconds on my computer with hit-and-run and it took 0.3 seconds using the modified code from the OP.
Here's how I would do it, without any loop, if or while:
set.seed(123)
x <- runif(1) # start with the sum that you want to obtain
n <- 4 # number of generated random numbers, can be chosen arbitrarily
y <- sort(runif(n-1,0,x)) # choose n-1 random points to cut the range [0:x]
z <- c(y[1],diff(y),x-y[n-1]) # result: determine the length of the segments
#> z
#[1] 0.11761257 0.10908627 0.02723712 0.03364156
#> sum(z)
#[1] 0.2875775
#> all.equal(sum(z),x)
#[1] TRUE
The advantage here is that you can determine exactly which sum you want to obtain and how many numbers n you want to generate for this. If you set, e.g., x <- 1 in the second line, the n random numbers stored in the vector z will add up to one.
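Wrapped up as a small function (the name rand_partition is mine, and n should be at least 2 here):
rand_partition <- function(n, x = runif(1)) {
  y <- sort(runif(n - 1, 0, x))   # n-1 random cut points in [0, x]
  c(y[1], diff(y), x - y[n - 1])  # lengths of the resulting n segments
}
set.seed(123)
z <- rand_partition(4, x = 1)
sum(z)  # 1, up to floating-point rounding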
I am trying to construct a summed area table or integral image given an image matrix. For those of you who don't know what it is, from Wikipedia:
A summed area table (also known as an integral image) is a data structure and algorithm for quickly and efficiently generating the sum of values in a rectangular subset of a grid
In other words, it's used to sum up the values of any rectangular region of the image/matrix in constant time.
I am trying to implement this in R. However, my code seems to take too long to run.
Here is the pseudocode from this link; in is the input matrix or image, and intImg is what's returned.
for i=0 to w do
sum←0
for j=0 to h do
sum ← sum + in[i, j]
if i = 0 then
intImg[i, j] ← sum
else
intImg[i, j] ← intImg[i − 1, j] + sum
end if
end for
end for
And here is my implementation
w = ncol(im)
h = nrow(im)
intImg = c(NA)
length(intImg) = w*h
for(i in 1:w){ #x
sum = 0;
for(j in 1:h){ #y
ind = ((j-1)*w)+ (i-1) + 1 #index
sum = sum + im[ind]
if(i == 1){
intImg[ind] = sum
}else{
intImg[ind] = intImg[ind-1]+sum
}
}
}
intImg = matrix(intImg, h, w, byrow=T)
Example of input and output matrix: [images omitted from the original post]
However, on a 480x640 matrix, this takes ~ 4 seconds. The paper describes it as taking on the order of milliseconds for those dimensions.
Am I doing something inefficient in my loops or indexing?
I considered writing it in C++ and wrapping it in R, but I am not very familiar with C++.
Thank you
You could try to use apply (it isn't faster than your for loops if you pre-allocate the memory):
areaTable <- function(x) {
return(apply(apply(x, 1, cumsum), 1, cumsum))
}
# Example input, assumed here so the snippet runs (chosen to match the output below)
m <- matrix(c(4, 1, 2, 2,
              0, 4, 1, 3,
              3, 1, 0, 4,
              2, 1, 3, 2), nrow = 4, byrow = TRUE)
areaTable(m)
# [,1] [,2] [,3] [,4]
# [1,] 4 5 7 9
# [2,] 4 9 12 17
# [3,] 7 13 16 25
# [4,] 9 16 22 33
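To see the constant-time lookup in action, here is a hedged sketch (the helper name rectSum is mine) using the standard inclusion-exclusion identity on the table:
rectSum <- function(I, r1, r2, c1, c2) {
  total <- I[r2, c2]                          # everything up to (r2, c2)
  if (r1 > 1) total <- total - I[r1 - 1, c2]  # strip rows above
  if (c1 > 1) total <- total - I[r2, c1 - 1]  # strip columns to the left
  if (r1 > 1 && c1 > 1) total <- total + I[r1 - 1, c1 - 1]  # add back the overlap
  total
}
I <- areaTable(m)
rectSum(I, 2, 3, 2, 4)  # 13, identical to sum(m[2:3, 2:4])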