I have a list of three two-dimensional arrays that contain the x, y and z coordinates of some points (to draw a surface from them, I store the coordinates in two-dimensional arrays, like the surface plots in MATLAB).
Example:
points <- list(x = matrix(c(1, 2, 3, 4), nrow = 2),
               y = matrix(c(5, 6, 1, 4), nrow = 2),
               z = matrix(c(1, 9, 2, 3), nrow = 2))
This is a representation of points with coordinates (1, 5, 1), (2, 6, 9) and so on (4 points total).
Now I have to multiply every (x, y, z) point by some fixed matrix C (to rotate my surface) and return the result in the same form, as a list of two-dimensional matrices.
I can do it this way with loops:
apply_matrix <- function(C, points) {
  x <- points$x
  y <- points$y
  z <- points$z
  n <- nrow(x)
  m <- ncol(x)
  outx <- matrix(0, nrow = n, ncol = m)
  outy <- matrix(0, nrow = n, ncol = m)
  outz <- matrix(0, nrow = n, ncol = m)
  for (i in 1:nrow(x)) {
    for (j in 1:ncol(x)) {
      out <- C %*% c(x[i, j], y[i, j], z[i, j])
      outx[i, j] <- out[1, ]
      outy[i, j] <- out[2, ]
      outz[i, j] <- out[3, ]
    }
  }
  list(x = outx, y = outy, z = outz)
}
However, I'm looking for a more efficient, loop-free solution.
I believe it is possible to convert the list to a three-dimensional array and then ask R to multiply my matrix C by this array along the appropriate dimension, but I cannot figure out how to do it.
Here I first convert the list to a three-dimensional array, and the result is returned as one as well:
C <- matrix(rnorm(3 * 3), 3)
ar <- array(unlist(points), c(dim(points[[1]]), 3))
aperm(apply(ar, 1:2, `%*%`, x = t(C)), c(2, 3, 1))
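For what it's worth, here is a sketch of an equivalent loop-free variant (my own, not part of the answer above; the function name apply_matrix_flat is made up). It flattens the coordinates into a single (n*m) x 3 matrix, applies C with one matrix multiplication, and reshapes the result back into the original list form:
apply_matrix_flat <- function(C, points) {
  dims <- dim(points$x)
  P <- cbind(c(points$x), c(points$y), c(points$z))  # one point per row
  out <- P %*% t(C)                                  # row i is t(C %*% point_i)
  list(x = matrix(out[, 1], dims[1], dims[2]),
       y = matrix(out[, 2], dims[1], dims[2]),
       z = matrix(out[, 3], dims[1], dims[2]))
}
# should agree with the looped version up to floating-point error:
# all.equal(apply_matrix_flat(C, points), apply_matrix(C, points))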
I am running the code below to evaluate a function at each value of r.
For each element of r, the function calculates the sum of the elements of a matrix product. Before doing this, the values of M are adjusted based on a kernel function.
# (1) set-up with toy data
r <- seq(0, 10, 1)
bw <- 25
M <- matrix(data = c(0, 1, 2,
                     1, 0, 1,
                     2, 1, 0), nrow = 3, ncol = 3)
X <- matrix(rep(1, 9), 3, 3)
#
# (2) computation
res <- c()
# loop, calculating the sum with an Epanechnikov kernel
for (i in seq_along(r)) {
  res[i] <- sum(
    # Epanechnikov kernel
    ifelse(-bw < (M - r[i]) & (M - r[i]) < bw,
           3 * (1 - ((M - r[i])^2 / bw^2)) / (4*bw),
           0) * X,
    na.rm = TRUE
  )
}
# result
res
I am looking for recommendations to speed this up using base R. Thank you!
Using outer:
Mr <- outer(c(M), r, "-")
colSums(3*(1 - Mr^2/bw^2)/4/bw*(abs(Mr) < bw)*c(X))
#> [1] 0.269424 0.269760 0.269232 0.267840 0.265584 0.262464 0.258480 0.253632 0.247920 0.241344 0.233904
I'll also note that the original for loop solution can be sped up by pre-allocating res (e.g., res <- numeric(length(r))) prior to the for loop.
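For completeness, a small sketch (using only the objects defined above; the name epan_sum is my own) that wraps the outer approach in a function and checks it against the loop result:
epan_sum <- function(r, M, X, bw) {
  Mr <- outer(c(M), r, "-")   # all differences M - r[i] at once
  colSums(3 * (1 - Mr^2 / bw^2) / (4 * bw) * (abs(Mr) < bw) * c(X))
}
all.equal(epan_sum(r, M, X, bw), res)   # should be TRUE for the toy data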
Two matrices A and B, each with ncol = 2 and nrow = 2.
A = [a_11 a_12
     a_21 a_22]
B = [b_11 b_12
     b_21 b_22]
(sorry, I didn't know how to typeset the matrices here)
I want to multiply these two matrices to obtain a new result matrix:
c = [a_11*b_11 a_11*b_12 a_12*b_11 a_12*b_12
     a_21*b_21 a_21*b_22 a_22*b_21 a_22*b_22]
Obviously, it could be done with a loop, but I assume simpler methods exist:
C <- matrix(NA, nrow = nrow(A), ncol = ncol(A)*ncol(B))
for (m in 1:nrow(C)) {
  for (k in 1:ncol(A)) {
    C[m, (ncol(B)*(k-1)+1):(k*ncol(B))] <- A[m, k] * B[m, ]
  }
}
You can do:
cbind(A[, 1]*B, A[,2]*B) # or
matrix(apply(A, 2, function(x) x*B), 2)
data
A <- matrix(1:4, 2)
B <- matrix(11:14, 2)
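To illustrate, running the first one-liner on this data should give the block layout described in the question (values worked out by hand from A and B above, so treat them as a sanity check):
cbind(A[, 1]*B, A[, 2]*B)
#>      [,1] [,2] [,3] [,4]
#> [1,]   11   13   33   39
#> [2,]   24   28   48   56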
Consider the Togliatti implicit surface. I want to clip it to the ball centered at the origin with radius 4.8. One solution, with the misc3d package, is to use the mask argument of the computeContour3d function, which allows using only the points satisfying x^2+y^2+z^2 < 4.8^2:
library(misc3d)
# Togliatti surface equation: f(x,y,z) = 0
f <- function(x, y, z) {
  w <- 1
  64*(x-w)*
    (x^4-4*x^3*w-10*x^2*y^2-4*x^2*w^2+16*x*w^3-20*x*y^2*w+5*y^4+16*w^4-20*y^2*w^2) -
    5*sqrt(5-sqrt(5))*(2*z-sqrt(5-sqrt(5))*w)*(4*(x^2+y^2-z^2)+(1+3*sqrt(5))*w^2)^2
}
# make grid
nx <- 220; ny <- 220; nz <- 220
x <- seq(-5, 5, length=nx)
y <- seq(-5, 5, length=ny)
z <- seq(-4, 4, length=nz)
g <- expand.grid(x=x, y=y, z=z)
# calculate voxel
voxel <- array(with(g, f(x,y,z)), dim = c(nx,ny,nz))
# mask: keep points satisfying x^2+y^2+z^2 < 4.8^2, in order to
# clip the surface to the ball of radius 4.8
mask <- array(with(g, x^2+y^2+z^2 < 4.8^2), dim = c(nx,ny,nz))
# compute isosurface
surf <- computeContour3d(voxel, maxvol=max(voxel), level=0, mask=mask, x=x, y=y, z=z)
# draw isosurface
drawScene.rgl(makeTriangles(surf, smooth=TRUE))
But the borders of the resulting surface are irregular:
How to get regular, smooth borders?
The solution I found resorts to spherical coordinates. It consists in defining the function f in terms of spherical coordinates (ρ, θ, ϕ), computing the isosurface with ρ running from 0 to the desired radius, and then transforming the result to Cartesian coordinates:
# Togliatti surface equation with spherical coordinates
f <- function(ρ, θ, ϕ) {
  w <- 1
  x <- ρ*cos(θ)*sin(ϕ)
  y <- ρ*sin(θ)*sin(ϕ)
  z <- ρ*cos(ϕ)
  64*(x-w)*
    (x^4-4*x^3*w-10*x^2*y^2-4*x^2*w^2+16*x*w^3-20*x*y^2*w+5*y^4+16*w^4-20*y^2*w^2) -
    5*sqrt(5-sqrt(5))*(2*z-sqrt(5-sqrt(5))*w)*(4*(x^2+y^2-z^2)+(1+3*sqrt(5))*w^2)^2
}
# make grid
nρ <- 300; nθ <- 400; nϕ <- 300
ρ <- seq(0, 4.8, length = nρ) # ρ runs from 0 to the desired radius
θ <- seq(0, 2*pi, length = nθ)
ϕ <- seq(0, pi, length = nϕ)
g <- expand.grid(ρ=ρ, θ=θ, ϕ=ϕ)
# calculate voxel
voxel <- array(with(g, f(ρ,θ,ϕ)), dim = c(nρ,nθ,nϕ))
# calculate isosurface
surf <- computeContour3d(voxel, maxvol=max(voxel), level=0, x=ρ, y=θ, z=ϕ)
# transform to Cartesian coordinates
surf <- t(apply(surf, 1, function(rtp) {
  ρ <- rtp[1]; θ <- rtp[2]; ϕ <- rtp[3]
  c(
    ρ*cos(θ)*sin(ϕ),
    ρ*sin(θ)*sin(ϕ),
    ρ*cos(ϕ)
  )
}))
# draw isosurface
drawScene.rgl(makeTriangles(surf, smooth=TRUE, color = "violetred"))
Now the resulting surface has regular, smooth borders:
Your solution is excellent for the problem you stated, because spherical coordinates are so natural for that boundary. However, here is a more general solution that would work for other smooth boundaries.
The idea is to allow input of a boundary function, and to cull points when they are too large or too small. In your case the boundary function would be the squared distance from the origin, and you would want to cull points where its value is bigger than 4.8^2. But sometimes the triangles being drawn to make the smooth surface should only be partially culled: one point would be kept and two deleted, or two kept and one deleted. Culling the whole triangle in those cases is what leads to the jagged edges in your original plot.
To fix this, the points can be modified. If only one point is supposed to be kept, the other two can be shrunk towards it until they lie on an approximation to the boundary. If two are supposed to be kept, the clipped shape is a quadrilateral, so you would build it out of two triangles.
This function does that, assuming the input surf is the output of computeContour3d:
boundSurface <- function(surf, boundFn, bound = 0, greater = TRUE) {
  # surf is n x 3: each row is a point, consecutive triplets of rows are triangles
  values <- matrix(boundFn(surf) - bound, 3)
  # values is 3 x m (with m = n/3): each column holds the boundFn values
  # at the three vertices of one triangle
  if (!greater)
    values <- -values
  keep <- values >= 0
  # counts is an m-vector counting the number of points to keep in each triangle
  counts <- apply(keep, 2, sum)
  # result is initialized to an empty matrix
  result <- matrix(nrow = 0, ncol = 3)
  # singles is set to all the rows of surf where exactly one
  # point in the triangle is kept, say s x 3
  singles <- surf[rep(counts == 1, each = 3), ]
  if (length(singles)) {
    # singleValues is the subset of values where only one vertex is kept
    singleValues <- values[, counts == 1]
    singleIndex <- 3*col(singleValues) + 1:3 - 3
    # good is the index of the vertex to keep, bad are those to fix
    good <- apply(singleValues, 2, function(col) which(col >= 0))
    bad <- apply(singleValues, 2, function(col) which(col < 0))
    for (j in 1:ncol(singleValues)) {
      goodval <- singleValues[good[j], j]
      for (i in 1:2) {
        badval <- singleValues[bad[i, j], j]
        alpha <- goodval/(goodval - badval)
        singles[singleIndex[bad[i, j], j], ] <-
          (1 - alpha)*singles[singleIndex[good[j], j], ] +
               alpha *singles[singleIndex[bad[i, j], j], ]
      }
    }
    result <- rbind(result, singles)
  }
  doubles <- surf[rep(counts == 2, each = 3), ]
  if (length(doubles)) {
    # doubleValues is the subset of values where two vertices are kept
    doubleValues <- values[, counts == 2]
    doubleIndex <- 3*col(doubleValues) + 1:3 - 3
    doubles2 <- doubles
    # good holds the indices of the two vertices to keep, bad the one to fix
    good <- apply(doubleValues, 2, function(col) which(col >= 0))
    bad <- apply(doubleValues, 2, function(col) which(col < 0))
    newvert <- matrix(NA, 2, 3)
    for (j in 1:ncol(doubleValues)) {
      badval <- doubleValues[bad[j], j]
      for (i in 1:2) {
        goodval <- doubleValues[good[i, j], j]
        alpha <- goodval/(goodval - badval)
        newvert[i, ] <-
          (1 - alpha)*doubles[doubleIndex[good[i, j], j], ] +
               alpha *doubles[doubleIndex[bad[j], j], ]
      }
      doubles[doubleIndex[bad[j], j], ] <- newvert[1, ]
      doubles2[doubleIndex[good[1, j], j], ] <- newvert[1, ]
      doubles2[doubleIndex[bad[j], j], ] <- newvert[2, ]
    }
    result <- rbind(result, doubles, doubles2)
  }
  # Finally add all the rows of surf where the whole triangle is kept
  rbind(result, surf[rep(counts == 3, each = 3), ])
}
You would use it after computeContour3d and before makeTriangles, e.g.
fn <- function(x) {
  apply(x^2, 1, sum)
}
drawScene.rgl(makeTriangles(boundSurface(surf, fn, bound = 4.8^2,
                                         greater = FALSE),
                            smooth = TRUE))
Here's the output I see:
It's not quite as good as yours, but it would work for many different boundary functions.
Edited to add: Version 0.100.26 of rgl now has a function clipMesh3d which incorporates these ideas.
I have a 240 x 2 matrix and my aim is to partition it into 40 groups of 6 x 2 matrices and compute the determinants of all the covariance matrices. I have accomplished the task with this code:
mat <- matrix(0, 240, 2)
m <- numeric()
n <- 0
for (i in 1:40) {
  m[i] <- det(cov(mat[(n+1):(n+6), ]))
  n <- n + 6
}
Is there a better way to get these determinants and the 40 different covariance matrices?
1) apply/array. Reshape into a 3-dimensional array and use apply:
apply(array(mat, c(6, 40, 2)), 2, function(x) det(cov(x)))
2) rollapply
library(zoo)
rollapply(mat, 6, by = 6, function(x) det(cov(x)), by.column = FALSE)
3) tapply
tapply(1:240, gl(40, 6), function(ix) det(cov(mat[ix, ])))
4) sapply
sapply(seq(1, 240, 6), function(i) det(cov(mat[i + 0:5, ])))
5) for
m <- numeric(40)
for (i in 1:40) m[i] <- det(cov(mat[(i - 1) * 6 + 1:6, ]))
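As a quick sanity check that these variants agree, here is a sketch using random example data instead of the all-zero matrix from the question (whose determinants are all 0); the object names d_apply and d_sapply are my own:
set.seed(42)
mat <- matrix(rnorm(480), 240, 2)
d_apply  <- apply(array(mat, c(6, 40, 2)), 2, function(x) det(cov(x)))
d_sapply <- sapply(seq(1, 240, 6), function(i) det(cov(mat[i + 0:5, ])))
all.equal(d_apply, d_sapply)   # should be TRUE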
Suppose I have the following code:
X <- model.matrix(~factor(1:2))
beta <- c(1, 2)
I then draw 70 and 40 values from two multivariate normal distributions:
library(MASS)
S1 <- mvrnorm(70, mu = c(0,0), Sigma = matrix(c(10, 3, 3, 2), ncol = 2))
S2 <- mvrnorm(40, mu = c(0,0), Sigma = matrix(c(10, 4, 4, 2), ncol = 2))
As can easily be seen, S1 is a 70x2 matrix and S2 a 40x2 matrix.
Now I build a for loop in R:
z <- list()
for (i in 1:dim(S2)[1]) {
  z[[i]] <- X %*% beta + X %*% S1[1, ] + X %*% S2[i, ] + rnorm(2, mean = 0, sd = 0.45)
  Y <- do.call(rbind, z)
}
This gives me a matrix that contains all combinations of the 40 rows of S2 with the 1st row of S1. What I want is to completely cross the two matrices S1 and S2. That is, I want the for loop to pick out S1[1,] first, then iterate completely through S2[i,] (e.g. in an inner loop) and store the results in a matrix; then pick out S1[2,], iterate again through S2[i,], store the results in a matrix; and so on. If I had to give a name to what I am looking for, I would call it "crossed for loops". I find it incredibly hard to come up with R code that will allow me to do this. Any hints would be appreciated.
Maybe the idea will get clearer with this example:
My idea is equivalent to constructing 70 for loops, one for each row S1[i,], and binding the results into a (70*40*2) x 1 matrix:
for (i in 1:dim(S2)[1]) {
  z[[i]] <- X %*% beta + X %*% S1[1, ] + X %*% S2[i, ] + rnorm(2, mean = 0, sd = sigma)
  Y1 <- unname(do.call(rbind, z))
}
for (i in 1:dim(S2)[1]) {
  z[[i]] <- X %*% beta + X %*% S1[2, ] + X %*% S2[i, ] + rnorm(2, mean = 0, sd = sigma)
  Y2 <- unname(do.call(rbind, z))
}
for (i in 1:dim(S2)[1]) {
  z[[i]] <- X %*% beta + X %*% S1[3, ] + X %*% S2[i, ] + rnorm(2, mean = 0, sd = sigma)
  Y3 <- unname(do.call(rbind, z))
}
.
.
.
for (i in 1:dim(S2)[1]) {
  z[[i]] <- X %*% beta + X %*% S1[70, ] + X %*% S2[i, ] + rnorm(2, mean = 0, sd = sigma)
  Y70 <- unname(do.call(rbind, z))
}
Y <- rbind(Y1, Y2, Y3, …, Y70)
Ideally, I would like to do this with for loops or any other flexible way that can handle different dimensions for S1 and S2.
OK. I might do a few things to make this as efficient as possible. First, we can pre-calculate all the matrix multiplications:
Xb <- X %*% beta
XS1 <- X %*% t(S1)
XS2 <- X %*% t(S2)
Then we can calculate all the combinations of the S1/S2 indices with expand.grid:
idx <- unname(c(expand.grid(A=1:ncol(XS1), B=1:ncol(XS2))))
Then we can define the transformation
fx <- function(a, b) {
  t(Xb + XS1[, a, drop = FALSE] + XS2[, b, drop = FALSE] + rnorm(2, mean = 0, sd = 0.45))
}
Here fx assumes it is passed an index into S1 and an index into S2, and combines the data as in your formula. Finally, we use this helper function and the indices with a pair of do.call calls:
xx <- do.call(rbind, do.call(Map,c(list(fx), idx)))
First we use Map to calculate all the combinations, then we use rbind to merge all the results. This produces a 2800 x 2 matrix: 70*40 rows and 2 columns. The rows are ordered with the S1 index moving fastest, then the S2 index.
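If you prefer to avoid Map entirely, here is a loop-free sketch of the same crossing done by index recycling (my own variant, not part of the answer above); it reproduces the same ordering, with the S1 index varying fastest:
a <- rep(1:ncol(XS1), times = ncol(XS2))   # S1 index, varying fastest
b <- rep(1:ncol(XS2), each  = ncol(XS1))   # S2 index
big <- c(Xb) + XS1[, a] + XS2[, b]         # 2 x 2800 matrix of means
noise <- matrix(rnorm(length(big), mean = 0, sd = 0.45), nrow = nrow(big))
Y <- t(big + noise)                        # 2800 x 2, like the Map/rbind result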
I realised that this was not a problem with for-loops but with the way I stored the variables. The solution to what I want is:
library(MASS)
z <- list()
y <- list()
for (j in 1:dim(S1)[1]) {
  for (i in 1:dim(S2)[1]) {
    z[[i]] <- X %*% beta + X %*% S1[j, ] + X %*% S2[i, ] + rnorm(2, mean = 0, sd = sigma)
    Z <- unname(do.call(rbind, z))
  }
  y[[j]] <- Z
  Y <- unname(do.call(rbind, y))
}
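A quick check of the shape (per the question, the result should be a (70*40*2) x 1 matrix):
dim(Y)   # 5600 1, i.e. 70 * 40 * 2 rows and one column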