Transforming rows in a PCA context using dudi.pca

I have a huge matrix of genetic data (1e7 rows representing individuals x 5,000 columns representing markers) on which I would like to perform a PCA in order to keep c. 20 components. However, due to memory issues, I cannot perform PCA with either dudi.pca or big.PCA on R 3.1.2 on an 8 GB, 64-bit machine.
An alternative is to compute an approximation of the principal-axis coordinates on a row subset of the matrix and then project the whole matrix onto these approximate axes by a linear combination.
I am facing a simple PCA-related problem with dudi.pca: how can I get the row coordinates from the original matrix and the matrix of column coordinates (= principal axes)?
Here is a simple example, let's take a random matrix M (3 rows and 4 columns) such as:
M=
1 9 10 13
20 13 20 7
18 19 17 10
Doing dudi.pca(M, center=T, scale=T) and keeping only one PC, dudi.pca outputs the following $c1 matrix (column normed scores, i.e. principal axes):
c1 =
-0.547
-0.395
-0.539
0.504
To compute the row coordinates of the data on the first principal axis, I thought of taking the inner product:
r =
-0.547*1 + -0.395*9 + -0.539*10 + 0.504*13
-0.547*20 + -0.395*13 + -0.539*20 + 0.504*7
-0.547*18 + -0.395*19 + -0.539*17 + 0.504*10
i.e.
r =
-2.944
-23.331
-21.481
But if I look at the $li matrix (row coordinates, i.e. principal components) natively computed by dudi.pca on the same dataset, I read:
r' =
2.565
-1.559
-1.005
Am I doing something wrong when computing the row coordinates from the dudi.pca $c1 matrix?
Many thanks for your help,
Quaerens.
Code :
> M=matrix(c(1,9,10,13,20,13,20,7,18,19,17,10), ncol=4, byrow=T)
> M
[,1] [,2] [,3] [,4]
[1,] 1 9 10 13
[2,] 20 13 20 7
[3,] 18 19 17 10
> N=dudi.pca(M, center=T, scale=T, scannf=F, nf=1)
> N$c1
CS1
V1 -0.5468634
V2 -0.3955638
V3 -0.5389504
V4 0.5039863
> r=c( M[1,] %*% N$c1[,1], M[2,] %*% N$c1[,1], M[3,] %*% N$c1[,1] )
> r
[1] -2.94462 -23.33070 -21.48155
> N$li
Axis1
1 2.565165
2 -1.559546
3 -1.005619

If this is still of interest...
ade4 works on the duality diagram, so when p is greater than n the decomposition is effectively carried out on the n x n symmetric matrix (see the statistical triplet below).
library(ade4)
M=matrix(c(1,9,10,13,20,13,20,7,18,19,17,10), ncol=4, byrow=T)
M
## [,1] [,2] [,3] [,4]
## [1,] 1 9 10 13
## [2,] 20 13 20 7
## [3,] 18 19 17 10
N=dudi.pca(M, center=T, scale=T, scannf=F, nf=1)
#dimensions of M
n=3
p=4
X=scalewt(M,center=T,scale=T)
#this could be done in two ways. Singular Value Decomposition or Duality Diagrams.
#Consider the singular value decomposition of X: X = U D V', where U holds the left singular vectors, V the right singular vectors, and D is the diagonal matrix of singular values
svd=svd(X)
#These are equivalent
N$c1
svd$v[,1]
#Equivalent: the dudi.pca eigenvalues are the squared singular values of X divided by n
N$eig
## [1] 3.341175 0.658825
svd$d[1:2]^2/n
## [1] 3.341175 0.658825
#Diagonal matrix of eigen values
lambda=diag(svd$d^2/n)
#N$lw gives the row weights
N$lw
#0.3333333 0.3333333 0.3333333
#invert the square root of the diagonal matrix of row weights; this is the normalization part
K=solve(sqrt(diag(N$lw,n)))%*%svd$u
#These are equivalent
head(K[,1])
## [1] 1.4033490 -0.8531958 -0.5501532
head(N$l1)
## RS1
## 1 1.4033490
## 2 -0.8531958
## 3 -0.5501532
#Find Principal Components
pc=K%*%sqrt(lambda)
#These are equivalent
head(pc)
## [,1] [,2]
## [1,] 2.565165 -0.1420130
## [2,] -1.559546 -0.9154578
## [3,] -1.005619 1.0574707
head(N$li)
## Axis1
## 1 2.565165
## 2 -1.559546
## 3 -1.005619
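Incidentally, this answers the original question directly: the row coordinates in $li come from multiplying the centred and scaled table (X above; it is also stored in the dudi object, as N$tab if I remember the slot correctly) by the column normed scores, not from multiplying the raw matrix M:
#row coordinates = centred/scaled data times the principal axes
as.vector(X%*%N$c1[,1])
## [1]  2.565165 -1.559546 -1.005619
#identical to N$li shown above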
This could also be done using the duality diagram implemented in ade4
look here for references on the duality diagram implemented in ade4: http://projecteuclid.org/euclid.aoas/1324399594
Q<-diag(p)
D<-diag(1/n, n)
rk<-qr(X)
rank=rk$rank
#Statistical Triplets
V<-t(X)%*%D%*%X
W<-X%*%Q%*%t(X)
#Compute the eigen values and vectors of the statistical triplet
example.eigen=eigen(W%*%D)
#Equivalent
N$eig
## [1] 3.341175 0.658825
example.eigen$values[1:rank]
## [1] 3.341175 0.658825
#Diagonal matrix of eigen values
lambda=diag(example.eigen$values[1:rank])
#invert the square root of the diagonal matrix of row weights; this is the normalizing part
Binv<-solve(sqrt(D))
K=Binv%*%example.eigen$vectors[,1:rank]
#These are equivalent
head(K[,1])
## [1] 1.4033490 -0.8531958 -0.5501532
head(N$l1)
## RS1
## 1 1.4033490
## 2 -0.8531958
## 3 -0.5501532
#Find Principal Components
pc=K%*%sqrt(lambda)
#These are equivalent
head(pc)
## [,1] [,2]
## [1,] 2.565165 -0.1420130
## [2,] -1.559546 -0.9154578
## [3,] -1.005619 1.0574707
head(N$li)
## Axis1
## 1 2.565165
## 2 -1.559546
## 3 -1.005619
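Coming back to the memory problem in the question: once the axes have been estimated on a row subset, the remaining rows only need the centring/scaling values and one matrix product, so the full matrix can be projected block by block. A rough sketch (bigM and the block size are placeholders; N is the dudi.pca fitted on the row subset, and I am assuming its $cent and $norm components hold the column means and standard deviations it used):
#sketch: project rows of the big matrix onto axes estimated on a subset
project_rows <- function(M_block, dudi) {
  X_block <- sweep(M_block, 2, dudi$cent, "-")   #centre with the subset means
  X_block <- sweep(X_block, 2, dudi$norm, "/")   #scale with the subset sds
  X_block %*% as.matrix(dudi$c1)                 #coordinates on the kept axes
}
#e.g. process the (hypothetical) big matrix one block at a time
#scores_block1 <- project_rows(bigM[1:100000, ], N)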

Related

How does the equation for the SpatRaster roughness index (terrain, v = "roughness") work?

The terra package offers and describes the following terrain indices:
x <- terrain(x, v="roughness")
x <- terrain(x, v="TPI")
x <- terrain(x, v="TRI")
I am confused about how this is calculated based on the package description of roughness as "the difference between the maximum and the minimum value of a cell and its 8 surrounding cells" (Hijmans et al. 2023). How does this work for edge and corner cells? I am assuming that the calculation reduces to a cell and its 5 or 3 surrounding cells in these cases?
The ruggedness (TRI) index is described as "the mean of the absolute differences between the value of a cell and the value of its 8 surrounding cells". The following is a graphic illustration of how I envision the calculation of these indices from the description provided.
Does this provide a correct interpretation of these indices?
If this interpretation is incorrect, I am hoping someone can point me in the right direction (a reference) or explain it here. I am interested in coding a 16° slope criterion from a DSM together with an elevational difference of 1.3 m, but I think a terrain index would give a better indicator of the 1.3 m criterion for this habitat model.
## > 16° slope
habitat_slope_mat <- matrix(nrow = 2, ncol = 3)
habitat_slope_mat[1, ] <- c(0,16,0) # from,to = 0 absent
habitat_slope_mat[2, ] <- c(16,minmax(x)[2],1) # from,to = 1 present
habitat_slope <- classify(x, habitat_slope_mat, include.lowest=TRUE)
I looked at the cited references and was expecting to find the formula for this to help me think of the best way to treat the 1.3 m criterion. I have been unable to locate a written / published description that further explains the method. This paper is listed in the citations for the terrain function description:
Jones, K.H., 1998. A comparison of algorithms used to compute hill (sic) *terrain* as a property of the DEM. Computers & Geosciences 24: 315-323.
The correct title for the article (DOI: 10.1016/S0098-3004(98)00032-6) is: "A comparison of algorithms used to compute hill *slope* as a property of the DEM". I cannot locate the formula for roughness in that paper and was interested in reading more on this topic.
I am not sure if this question is appropriate here, as you do not seem to be asking a coding question.
The manual points to Wilson et al (2007) for terrain indices. It also shows how you can use focal instead of terrain to compute them.
You can see for yourself what happens with small examples like this:
library(terra)
x <- rast(nrow=3, ncol=3, vals=c(1,2,3,1,2,1,1,2,8), ext=ext(0,1,0,1), crs="local")
as.matrix(x, wide=T)
# [,1] [,2] [,3]
#[1,] 1 2 3
#[2,] 1 2 1
#[3,] 1 2 8
terrain(x, "roughness") |> as.matrix(wide=TRUE)
# [,1] [,2] [,3]
#[1,] NaN NaN NaN
#[2,] NaN 7 NaN
#[3,] NaN NaN NaN
focal(x, w=3, fun=\(x) {max(x) - min(x)}) |> as.matrix(wide=T)
# [,1] [,2] [,3]
#[1,] NA NA NA
#[2,] NA 7 NA
#[3,] NA NA NA
terrain(x, "TRI") |> as.matrix(wide=TRUE)
# [,1] [,2] [,3]
#[1,] NaN NaN NaN
#[2,] NaN 1.375 NaN
#[3,] NaN NaN NaN
focal(x, w=3, fun=\(x) sum(abs(x[-5]-x[5]))/8) |> as.matrix(wide=T)
# [,1] [,2] [,3]
#[1,] NA NA NA
#[2,] NA 1.375 NA
#[3,] NA NA NA
So the edge cells become missing (you could handle them differently via focal).
Or look at the source code.
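For example, if you do want values for the edge and corner cells (computed from the 5 or 3 available neighbours, as the question assumes), one option is to let the window function ignore the NA padding. A sketch, assuming the edge windows are padded with NA as the output above suggests; note this is not what terrain() itself does:
#roughness over whatever neighbours exist, edge cells included
focal(x, w=3, fun=\(v) max(v, na.rm=TRUE) - min(v, na.rm=TRUE)) |> as.matrix(wide=TRUE)
#TRI-like index: mean absolute difference to the available neighbours
#(divides by 5 or 3 at the edges and corners, not by 8)
focal(x, w=3, fun=\(v) mean(abs(v[-5] - v[5]), na.rm=TRUE)) |> as.matrix(wide=TRUE)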

Reverse indexing of a matrix in R

I am trying to revert the indexing of a matrix in R. The following example illustrates my problem:
#sample data:
set.seed(21)
m <- matrix(sample(100,size = 100),10,10)
# sorting:
t(apply(m,1,order))
# new exemplary order after sorting:
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 3 7 10 6 5 9 2 4 1 8
[2,] 1 6 4 7 3 9 5 8 2 10
[3,] 2 5 8 10 4 7 9 1 3 6
[4,] 8 1 9 2 7 3 4 6 10 5
[5,] 6 9 5 2 7 3 10 4 8 1
[6,] 2 7 4 8 6 9 3 10 1 5
[7,] 1 6 4 10 3 2 7 8 9 5
[8,] 1 2 6 9 3 10 5 7 4 8
[9,] 9 4 5 7 10 2 8 3 1 6
[10,] 6 8 4 3 2 1 5 10 7 9
# we can create m2 with the above sorting. We also add 1000 to all values
m2 <- t(apply(m,1,function(x){
x[order(x)]
})) + 1000
# the next step would be to obtain the original arrangement of columns again, as described below.
After the sorting of my data we have the following situation: In row 1, the 3rd column (of matrix m2) is mapped to the original first column (of matrix m), the 7th column is mapped to the original second column, the 10th column to the original 3rd column, and so on.
My question is as follows: Can I somehow revert this mapping in R? What I mean by this is again for row 1, move the 1st column (of m2) to the position of the 3rd column (of m), then move the 2nd column to the position of the 7th, move the 3rd to the position of the 10th, and so on.
In the end, what I am trying to achieve is to sort my data but save the existing arrangement of the columns somehow, so that later, after some transformations of my data, I can rearrange them into the original ordering again. When I use the usual sorting algorithms in R, I lose the old positions of my columns. Of course, most of the time you would not need those anymore, but at the moment I do need them.
Background
I think it will help to examine the effect of the order() and rank() functions on a simple vector. Consider:
x <- c('c','b','d','b','a');
seq_along(x);
## [1] 1 2 3 4 5
order(x);
## [1] 5 2 4 1 3
rank(x); ## default is ties.method='average'
## [1] 4.0 2.5 5.0 2.5 1.0
rank(x,ties.method='first');
## [1] 4 2 5 3 1
rank(x,ties.method='last'); ## available from 3.3.0
## [1] 4 3 5 2 1
rank(x,ties.method='random'); ## we can ignore this one, obviously
## [1] 4 2 5 3 1
rank(x,ties.method='max');
## [1] 4 3 5 3 1
rank(x,ties.method='min');
## [1] 4 2 5 2 1
(I used character values to demonstrate that these principles and algorithms can apply to any (comparable) data type, not just numeric types. But obviously this includes numeric types.)
The order() function returns a vector that is the same length as the input vector. The order values represent a reordering of the input indexes (shown above courtesy of seq_along()) such that indexing the input vector with the order vector sorts it. The sort method, unless explicitly overridden by the method argument, is a radix sort for integer, logical, and factor inputs and a shell sort otherwise; for character values the shell sort takes the collation order of the current locale into account. In other words, each element of the result vector gives the input index of the element that should be moved to that position in order to sort the input vector.
To try to put it even more plainly, an element of the order vector basically says "place the input vector element with this index in my position". Or, in a slightly more generic way (which will dovetail with the parallel description of rank()):
order element: the input vector element with this index sorts into my position.
In a sense, rank() does the inverse of what order() does. Its elements correspond to the elements of the input vector by index, and its values give a representation of the sort order of the corresponding input element (with tiebreaking behavior depending on the ties.method argument; this contrasts with order(), which always preserves the incoming order of ties, equivalent to ties.method='first' for rank()).
To use the same language structure that I just used for order(), which is the plainest manner of expression I can think of:
rank element: the input vector element in my position sorts into this index.
Of course, this description is only perfectly accurate for ties.method='first'. For the others, the destination index for ties will actually be the reverse of the incoming order (for 'last'), the lowest index of the duplicate set (for 'min'), the highest (for 'max'), the average (for 'average', which is actually the default), or random (for 'random'). But for our purposes, since we need to mirror the proper sort order as per order() (and therefore sort(), which uses order() internally), let's ignore the other cases from this point forward.
I've thought of one final way to articulate the behaviors of the order() and rank() functions: order() defines how to pull elements of the input vector into a sorted order, while rank() defines how to push elements of the input vector into a sorted order.
This is why indexing the input vector with the results of order() is the correct way to sort it. Indexing a vector is inherently a pulling operation. Each respective index vector element effectively pulls the input vector element that is stored at the index given by that index vector element into the position occupied by that index vector element in the index vector.
Of course, the "push vector" produced by rank() cannot be used in the same way as the "pull vector" produced by order() to directly sort the input vector, since indexing is a pull operation. But we can ask, is it in any way possible to use the push vector to sort the input vector? Yes, I've thought of how this can be done. The solution is index-assigning, which is inherently a push operation. Specifically, we can index the input vector with the push vector as the (lvalue) LHS and assign the input vector itself as the RHS.
So, here are the three methods you can use to sort a vector:
x[order(x)];
[1] "a" "b" "b" "c" "d"
sort(x); ## uses order() internally
[1] "a" "b" "b" "c" "d"
y <- x; y[rank(y,ties.method='first')] <- y; y; ## (copied to protect x, but not necessary)
[1] "a" "b" "b" "c" "d"
An interesting property of the rank() function with ties.method='first' is that it is idempotent. This is because, once you've produced a rank vector, ranking it again will not change the result. Think about it: say the first element ranks 4th. Then the first call will produce a 4 in that position. Running rank() again will again find that it ranks 4th. You don't even need to specify ties.method anymore for the subsequent calls to rank, because the values will have become distinct on the first call's (potential) tiebreaking.
rank(x,ties.method='first');
## [1] 4 2 5 3 1
rank(rank(x,ties.method='first'));
## [1] 4 2 5 3 1
rank(rank(rank(x,ties.method='first')));
## [1] 4 2 5 3 1
y <- rank(x,ties.method='first'); for (i in seq_len(1e3L)) y <- rank(y); y;
## [1] 4 2 5 3 1
On the other hand, order() is not idempotent. Repeatedly calling order() has the interesting effect of alternating between the push and pull vectors.
order(x);
## [1] 5 2 4 1 3
order(order(x));
## [1] 4 2 5 3 1
order(order(order(x)));
## [1] 5 2 4 1 3
Think about it: if the last element sorts 1st, then the first call to order() will pull it into the 1st position by placing its index (which is largest of all indexes) into the 1st position. The second call to order() will identify that the element in the 1st position is largest in the entire vector, and thus will pull index 1 into the last position, which is equivalent to ranking the last element with its rank of 1.
Solutions
Based on all of the above, we can devise 3 solutions to your problem of "desorting", if you will.
For input, let's assume that we have (1) the input vector x, (2) its sort order o, and (3) the sorted and possibly transformed vector xs. For output we need to produce the same vector xs but desorted according to o.
Common input:
x <- c('c','b','d','b','a'); ## input vector
o <- order(x); ## order vector
xs <- x[o]; ## sorted vector
xs <- paste0(xs,seq_along(xs)); ## somewhat arbitrary transformation
x;
## [1] "c" "b" "d" "b" "a"
o;
## [1] 5 2 4 1 3
xs;
## [1] "a1" "b2" "b3" "c4" "d5"
Method 1: pull rank()
Since the order and rank vectors are effectively inverses of each other (i.e. pull and push vectors), one solution is to compute the rank vector in addition to the order vector o, and use it to desort xs.
xs[rank(x,ties.method='first')];
## [1] "c4" "b2" "d5" "b3" "a1"
Method 2: pull repeated order()
Alternatively, instead of computing rank(), we can simply use a repeated order() call on o to generate the same push vector, and use it as above.
xs[order(o)];
## [1] "c4" "b2" "d5" "b3" "a1"
Method 3: push order()
I was thinking to myself that, since we already have the order vector o, we really shouldn't have to go to the trouble of computing another order or rank vector. Eventually I realized that the best solution is to use the pull vector o as a push vector. This accomplishes the desorting objective with the least work.
xs[o] <- xs;
xs;
## [1] "c4" "b2" "d5" "b3" "a1"
Benchmarking
library(microbenchmark);
desort.rank <- function(x,o,xs) xs[rank(x,ties.method='first')];
desort.2order <- function(x,o,xs) xs[order(o)];
desort.assign <- function(x,o,xs) { xs[o] <- xs; xs; };
## simple test case
x <- c('c','b','d','b','a');
o <- order(x);
xs <- x[o];
xs <- paste0(xs,seq_along(xs));
ex <- desort.rank(x,o,xs);
identical(ex,desort.2order(x,o,xs));
## [1] TRUE
identical(ex,desort.assign(x,o,xs));
## [1] TRUE
microbenchmark(desort.rank(x,o,xs),desort.2order(x,o,xs),desort.assign(x,o,xs));
## Unit: microseconds
## expr min lq mean median uq max neval
## desort.rank(x, o, xs) 106.487 122.523 132.15393 129.366 139.843 253.171 100
## desort.2order(x, o, xs) 9.837 12.403 15.66990 13.686 16.251 76.122 100
## desort.assign(x, o, xs) 1.711 2.567 3.99916 3.421 4.277 17.535 100
## scale test case
set.seed(1L);
NN <- 1e4; NE <- 1e5; x <- sample(seq_len(NN),NE,T);
o <- order(x);
xs <- x[o];
xs <- xs+seq(0L,NE-1L)/NE;
ex <- desort.rank(x,o,xs);
identical(ex,desort.2order(x,o,xs));
## [1] TRUE
identical(ex,desort.assign(x,o,xs));
## [1] TRUE
microbenchmark(desort.rank(x,o,xs),desort.2order(x,o,xs),desort.assign(x,o,xs));
## Unit: milliseconds
## expr min lq mean median uq max neval
## desort.rank(x, o, xs) 36.488185 37.486967 39.89157 38.613191 39.145405 85.849143 100
## desort.2order(x, o, xs) 16.764414 17.262630 18.10341 17.443527 19.014296 28.338835 100
## desort.assign(x, o, xs) 1.457014 1.498495 1.82893 1.527363 1.592151 4.255573 100
So, clearly the index-assignment solution is the best.
Demo
Below is a demonstration of how this solution can be used for your sample input.
I honestly think that a simple for-loop over the rows is preferable to an apply() call in this case, since you can modify the matrix in-place. If you need to preserve the sorted intermediate matrix, you can copy it before applying this desorting operation.
## generate input matrix
set.seed(21L); m <- matrix(sample(seq_len(100L)),10L); m;
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,] 79 61 1 66 40 39 2 86 44 26
## [2,] 25 84 49 35 67 32 36 70 50 100
## [3,] 69 6 90 51 30 92 65 34 68 42
## [4,] 18 54 72 73 85 75 55 15 27 77
## [5,] 93 16 23 58 9 7 19 64 8 46
## [6,] 88 4 60 13 98 47 5 29 56 80
## [7,] 10 45 43 14 95 11 74 76 83 38
## [8,] 17 24 57 82 63 28 71 87 53 59
## [9,] 91 41 81 21 22 94 33 62 12 37
## [10,] 78 52 48 31 89 3 97 20 99 96
## sort each row, capturing sort order in rowwise order matrix
o <- matrix(NA_integer_,nrow(m),ncol(m)); ## preallocate
for (ri in seq_len(nrow(m))) m[ri,] <- m[ri,o[ri,] <- order(m[ri,],decreasing=T)];
## whole-matrix transformation
## embed row index as tenth digit, column index as hundredth (arbitrary)
m <- m+(row(m)-1L)/nrow(m)+(col(m)-1L)/ncol(m)/10;
## desort
for (ri in seq_len(nrow(m))) m[ri,o[ri,]] <- m[ri,]; m;
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,] 79.01 61.03 1.09 66.02 40.05 39.06 2.08 86.00 44.04 26.07
## [2,] 25.19 84.11 49.15 35.17 67.13 32.18 36.16 70.12 50.14 100.10
## [3,] 69.22 6.29 90.21 51.25 30.28 92.20 65.24 34.27 68.23 42.26
## [4,] 18.38 54.36 72.34 73.33 85.30 75.32 55.35 15.39 27.37 77.31
## [5,] 93.40 16.46 23.44 58.42 9.47 7.49 19.45 64.41 8.48 46.43
## [6,] 88.51 4.59 60.53 13.57 98.50 47.55 5.58 29.56 56.54 80.52
## [7,] 10.69 45.64 43.65 14.67 95.60 11.68 74.63 76.62 83.61 38.66
## [8,] 17.79 24.78 57.75 82.71 63.73 28.77 71.72 87.70 53.76 59.74
## [9,] 91.81 41.84 81.82 21.88 22.87 94.80 33.86 62.83 12.89 37.85
## [10,] 78.94 52.95 48.96 31.97 89.93 3.99 97.91 20.98 99.90 96.92
rank() is the complement of order(). Save the original rank() output and you can use it to get back to the original ordering after rearranging with order().
I think your example is overcomplicated (far from minimal!) by putting things in a matrix and doing extra stuff. Because you are applying functions at the row-level you just need to solve it for a vector. An example:
set.seed(47)
x = rnorm(10)
xo = order(x)
xr = rank(x)
x[xo][xr] == x
# [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
In your case, you can perform whatever transformations you want on the ordered vector x[xo], then index the result by [xr] to get back to the original ordering.
sorted_result = x[xo] + c(1, diff(x[xo])) # some order-dependent transformation
final_result = sorted_result[xr] # back to original ordering
If there's a possibility of ties, you'll want to use ties.method = 'first' in the rank() call.
Taking this back to the matrix example:
m3 = t(apply(m, 1, function(x) {
xo = order(x)
xr = rank(x, ties.method = 'first')
(x[xo] + 1000)[xr] # add 1000 to sorted matrix and then "unsort"
}))
# check that it worked
all(m3 == (m + 1000))
# [1] TRUE
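A small illustration (not part of the original example) of why ties.method = 'first' matters: with the default ties.method = 'average', rank() can return fractional values, and fractional indices are silently truncated when used for subsetting.
x = c('b', 'a', 'b')
xo = order(x)
xs = paste0(x[xo], seq_along(x))     # transform the sorted copy: "a1" "b2" "b3"
xs[rank(x)]                          # default ranks are 2.5 1 2.5, truncated to 2 1 2
# [1] "b2" "a1" "b2"   <- the third element is wrong
xs[rank(x, ties.method = 'first')]   # ranks 2 1 3 exactly invert order()
# [1] "b2" "a1" "b3"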

Simple linear equation using qr.solve gives very off the mark results

I'm trying to figure out how to solve a system of linear equations which are approximations (i.e. there is error in the solution, and I want it minimized).
To understand/verify the process, I came up with a simple example: I generate a set of values of 5x + 4x^2 + 3x^3 (for x = 1, ..., 100) and add a 0-5% error to each answer.
> a
[,1] [,2] [,3]
[1,] 1 1 1
[2,] 2 4 8
[3,] 3 9 27
[...]
[98,] 98 9604 941192
[99,] 99 9801 970299
[100,] 100 10000 1000000
> b
[1] 12.04 48.17 130.02 269.93 505.75 838.44
[7] 1202.04 1911.69 2590.51 3381.00 4538.80 5846.19
...
[97] 2824722.45 2826700.98 3012558.52 2920400.25
When I try to solve this using qr.solve,
> qr.solve(a,b)
[1] 85.2896286 -0.8924785 3.0482766
the results are completely off (I want 5, 4, 3). I'm sure I'm missing something obvious. Or perhaps my experiment with polynomials is inherently bad? (If so, why?)
I cannot reproduce this problem with an additive error:
a <- cbind(1:100, (1:100)^2, (1:100)^3)
set.seed(42)
b <- a %*% (5:3) + rnorm(100, sd = 0.1)
qr.solve(a, b)
# [,1]
#[1,] 4.998209
#[2,] 4.000056
#[3,] 3.000000
I can reproduce it with a relative error, but that's not really surprising, since the error is then dominated by the magnitude of the third degree summand:
a <- cbind(1:100, (1:100)^2, (1:100)^3)
set.seed(42)
b <- a %*% (5:3) * rnorm(100, mean = 1, sd = 0.1)
qr.solve(a, b)
# [,1]
#[1,] -1686.611970
#[2,] 68.693368
#[3,] 2.481742
Note that the third coefficient is about what you expect (even more so in your non-reproducible example).
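If the goal is to recover the coefficients despite a relative error, one possible remedy (a sketch, not the only option) is weighted least squares: give each equation a weight of roughly 1/b^2 so that every observation contributes its squared relative error rather than its absolute one. For example, with lm():
a <- cbind(1:100, (1:100)^2, (1:100)^3)
set.seed(42)
b <- as.vector(a %*% (5:3) * rnorm(100, mean = 1, sd = 0.1))
# weight by 1/b^2 so the large-magnitude rows no longer dominate the fit;
# this should land much closer to c(5, 4, 3) than qr.solve(a, b)
coef(lm(b ~ 0 + a, weights = 1/b^2))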

R while loop with vector condition

I want to vectorize a function that uses a while-loop.
The original function is
getParamsLeadtime <- function(leadtimeMean, in_tolerance, tolerance){
searchShape=0
quantil=0
# increase the shape parameter until the probability mass within the tolerance interval reaches in_tolerance
while (quantil < in_tolerance){
searchShape = searchShape+1
quantil <- pgamma(leadtimeMean+tolerance,shape=searchShape,rate=searchShape/leadtimeMean) -
pgamma(leadtimeMean-tolerance,shape=searchShape,rate=searchShape/leadtimeMean)
}
leadtimeShape <- searchShape
leadtimeRate <- searchShape/leadtimeMean
return(c(leadtimeShape, leadtimeRate))
}
I would like to have a vectorized call to this function to apply it to a data frame. Currently I am looping through it:
leadtimes <- data.frame()
for (a in seq(92:103)) {
leadtimes <- rbind(leadtimes, getParamsLeadtime(a, .85,2))
}
When I tried to vectorize the function, while did not seem to accept a vector as the condition. The following warning occurred:
Warning message:
In while (input["U"] < rep(tolerance, dim(input)[1])) { :
the condition has length > 1 and only the first element will be used
This led me to suppose that while does not accept vector conditions. Can you tell me how to vectorize the function?
On a side note, I wonder why the column names of the resulting leadtimes data frame appear to be values:
> leadtimes
X1 X1.1
1 1 1.000000
2 1 0.500000
3 4 1.333333
4 8 2.000000
5 13 2.600000
6 19 3.166667
7 25 3.571429
8 33 4.125000
9 42 4.666667
10 52 5.200000
11 63 5.727273
12 74 6.166667
Here's an option that is pretty performant.
We vectorize the calculation of pgamma for a given mean lead time, for both the +tol and the -tol case, over a sufficiently large sequence of shp. We calculate a (vectorized) difference, and compare to in_tol. The index (minus 1, since we start our sequence at 0) of the first element of the vector that is greater than in_tol is the lowest value of shp that leads to a pgamma of greater than in_tol.
f <- function(lead, in_tol, tol) {
shp <- which(!(pgamma(lead + tol, 0:10000, (0:10000)/lead) -
pgamma(lead - tol, 0:10000, (0:10000)/lead))
< in_tol)[1] - 1
rate <- shp/lead
c(shp, rate)
}
We can then sapply this over a range of mean lead times.
t(sapply(1:12, f, 0.85, 2))
## [,1] [,2]
## [1,] 1 1.000000
## [2,] 1 0.500000
## [3,] 4 1.333333
## [4,] 8 2.000000
## [5,] 13 2.600000
## [6,] 19 3.166667
## [7,] 25 3.571429
## [8,] 33 4.125000
## [9,] 42 4.666667
## [10,] 52 5.200000
## [11,] 63 5.727273
## [12,] 74 6.166667
system.time(leadtimes <- sapply(1:103, f, 0.85, 2))
## user system elapsed
## 1.28 0.00 1.30
You just need to make sure you choose a sensible upper ceiling for the shape parameter (here I've chosen 10000, which was more than generous). Note that if you don't choose an upper limit that is high enough, some return values will be NA.
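To get a data frame like the leadtimes one in the question, but with readable column names, you can wrap the sapply() result explicitly (a small sketch; the odd X1 / X1.1 names in the question apparently come from rbind()-ing unnamed vectors onto an empty data frame, which makes R derive names from the first row's values):
leadtimes <- as.data.frame(t(sapply(1:12, f, 0.85, 2)))
names(leadtimes) <- c("shape", "rate")   # explicit names instead of X1 / X1.1
head(leadtimes)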

R: Finding the begin of a (exponential?) decay?

How can I find the index indicated by the red vertical line in the following example?
# Get the data as "tmpData"
source("http://pastie.org/pastes/9350691/download")
# Plot
plot(tmpData,type="l")
abline(v=49,col="red")
The following approach is promising, but how to find the peak maximum?
library(RcppRoll)
n <- 10
smoothedTmpData <- roll_mean(tmpData,n)
plot(-diff(smoothedTmpData),type="l")
abline(v=49,col="red")
which.max(-diff(smoothedTmpData)) gives you the index of the maximum.
http://www.inside-r.org/r-doc/base/which.max
I'm unsure if this is your actual question...
Where there is a single peak in the gradient, as in your example dataset, gwieshammer is correct: you can just use which.max to find it.
For the case where there are multiple possible peaks, you need a more sophisticated approach. R has lots of peak finding functions (of varying quality). One that works for this data is wavCWTPeaks in wmtsa.
library(RcppRoll)
library(wmtsa)
source("http://pastie.org/pastes/9350691/download")
n <- 10
smoothedTmpData <- roll_mean(tmpData, n)
gradient <- -diff(smoothedTmpData)
cwt <- wavCWT(gradient)
tree <- wavCWTTree(cwt)
(peaks <- wavCWTPeaks(tree))
## $x
## [1] 4 52
##
## $y
## [1] 302.6718 5844.3172
##
## attr(,"peaks")
## branch itime iscale time scale extrema iendtime
## 1 1 5 2 5 2 16620.58 4
## 2 2 57 26 57 30 20064.64 52
## attr(,"snr.min")
## [1] 3
## attr(,"scale.range")
## [1] 1 28
## attr(,"length.min")
## [1] 10
## attr(,"noise.span")
## [1] 5
## attr(,"noise.fun")
## [1] "quantile"
## attr(,"noise.min")
## 5%
## 4.121621
So the main peak close to 50 is correctly found, and the routine picks up another smaller peak at the start.
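If you want to stay in base R instead of wmtsa, a rough sketch of the same idea is to take the local maxima of the smoothed gradient and keep only those that clear a noise threshold; the 95% quantile used here is an arbitrary choice:
# local maxima of the smoothed gradient computed above
candidates <- which(diff(sign(diff(gradient))) == -2) + 1
# keep only the maxima above a crude noise threshold (assumption: 95% quantile)
candidates[gradient[candidates] > quantile(gradient, 0.95)]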
