Does such a kernel exist in the (transposed) convolution operation?

A 2x2 input is passed through a transposed convolution with a 3x3 kernel, producing a 4x4 output. The four cells in the middle of the output matrix are required to be 0. Does such a kernel exist?
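To make the question concrete (my sketch, assuming stride 1 and zero-based indices): each of the four middle cells of the 4x4 output O is a linear combination of kernel entries k weighted by the input entries x:

O_{1,1} = x_{00} k_{11} + x_{01} k_{10} + x_{10} k_{01} + x_{11} k_{00}
O_{1,2} = x_{00} k_{12} + x_{01} k_{11} + x_{10} k_{02} + x_{11} k_{01}
O_{2,1} = x_{00} k_{21} + x_{01} k_{20} + x_{10} k_{11} + x_{11} k_{10}
O_{2,2} = x_{00} k_{22} + x_{01} k_{21} + x_{10} k_{12} + x_{11} k_{11}

Setting all four to zero gives four homogeneous linear equations in the nine kernel entries, so for any fixed input a nonzero kernel exists (the solution space has dimension at least five); requiring it for every input, however, forces all nine entries to zero.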

Related

Create a Joint Kernel Density Function in R

I have two vectors, A and B. Can I create a joint kernel density function empirically in R, such that I have some function f(x,y) that comes from vectors A and B? The following example wouldn't work, as it isn't a joint probability distribution:
z <- cbind(A,B)
approxfun(density(z))
MASS::kde2d does two-dimensional kernel density estimation given two vectors (of x and y coordinates). Rather than returning a function that can be evaluated at an arbitrary point (newx, newy), though, it returns the function evaluated on a square grid.
Once you've done the fussy bits like selecting a bandwidth, the actual computations for kernel density estimation at a single point (x0, y0) aren't that hard. I think it would be something like
sum(dnorm((x0 - x)/h) * dnorm((y0 - y)/h))
MASS::kde2d does clever stuff with outer() and tcrossprod() to compute the distances from all the data points to all of the points on the evaluation grid, and all of the sums, in a small number of top-level operations, but I think what I have above is the crux of it.
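For reference, the snippet above is the unnormalized version; with a Gaussian kernel and a common bandwidth h in both coordinates, the full bivariate product-kernel estimate at (x_0, y_0) is

\hat f(x_0, y_0) = \frac{1}{n h^2} \sum_{i=1}^{n} \phi\!\left(\frac{x_0 - x_i}{h}\right) \phi\!\left(\frac{y_0 - y_i}{h}\right),

where \phi is the standard normal density and n is the number of (x, y) pairs; with separate bandwidths h_x and h_y, replace h^2 by h_x h_y.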

Rotational matrix in R

I want to implement an algorithm in R. I cannot start with the code because I am having trouble stating the problem clearly. The problem is related to a rotational matrix, which is actually pretty challenging.
The problem is as follows:
The historical data of monthly flows X is transformed into Y by the transformation matrix R, where
Y = RX (3)
The procedure for obtaining the transformation matrix is described in detail in the appendix of Tarboton et al. (1998); here we summarize from their description. The transformation matrix is developed from a standard basis (basis vectors aligned with the coordinate axes), which is orthonormal but does not have a basis vector perpendicular to the conditioning plane defined by (3). One of the standard basis vectors is replaced by a vector perpendicular to the conditioning plane. Operationally this amounts to starting with an identity matrix and replacing the last column with that perpendicular vector. Clearly the basis set is no longer orthonormal. The Gram-Schmidt orthonormalization procedure is applied to the remaining n - 1 standard basis vectors to obtain an orthonormal basis that now includes a vector perpendicular to the conditioning plane.
Because the basis is orthonormal, the R matrix has the property R^T = R^(-1). The conditioning fixes the last component of the vector Y, so only its first n - 1 components are free. Hence, the simulation involves re-sampling from the conditional PDF of those first n - 1 components given the fixed last component.
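As a sketch of that construction (my notation, not the paper's: u is the unit vector perpendicular to the conditioning plane, e_1, ..., e_{n-1} the retained standard basis vectors), Gram-Schmidt produces

q_0 = u, \quad v_k = e_k - \sum_{j<k} (e_k \cdot q_j)\, q_j, \quad q_k = v_k / \|v_k\| \quad (k = 1, ..., n-1),

and the orthonormal vectors q_0, ..., q_{n-1} form R, which is what makes R^T = R^(-1).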

Perform sum of vectors in CUDA/thrust

So I'm trying to implement stochastic gradient descent in CUDA, and my idea is to parallelize it similarly to the way described in the paper Optimal Distributed Online Prediction Using Mini-Batches.
That implementation is aimed at MapReduce distributed environments, so I'm not sure whether it's optimal when using GPUs.
In short the idea is: at each iteration, calculate the error gradients for each data point in a batch (map), take their average by sum/reducing the gradients, and finally perform the gradient step updating the weights according to the average gradient. The next iteration starts with the updated weights.
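In symbols (my notation, not the paper's): with batch size b, weights w_t, learning rate \eta, and per-point gradients g_i = \nabla_w \ell(w_t; x_i), one iteration of the averaged step described above is

w_{t+1} = w_t - \eta \cdot \frac{1}{b} \sum_{i=1}^{b} g_i.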
The Thrust library allows me to perform a reduction on a vector, for example to sum all of its elements.
My question is: How can I sum/reduce an array of vectors in CUDA/thrust?
The input would be an array of vectors and the output would be a vector that is the sum of all the vectors in the array (or, ideally, their average).
Converting my comment into this answer:
Let's say each vector has length m and the array has size n.
An "array of vectors" is then the same as a matrix of size n x m.
If you change your storage format from this "array of vectors" to a single vector of size n * m, you can use thrust::reduce_by_key to sum each row of this matrix separately.
The sum_rows example shows how to do this.
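A minimal sketch of that approach (my own example, not the sum_rows sample itself; it stores the data component-major, so the m row sums are exactly the elementwise vector sum):

#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/iterator/discard_iterator.h>

// Maps a linear index into the m x n storage to its row index.
struct row_index
{
    int n; // number of vectors in the array
    __host__ __device__ int operator()(int i) const { return i / n; }
};

int main()
{
    const int n = 4; // number of vectors
    const int m = 3; // length of each vector
    // data[r * n + c] = component r of vector c (component-major layout)
    thrust::device_vector<float> data(m * n, 1.0f);
    thrust::device_vector<float> sums(m); // elementwise sum of the n vectors

    row_index to_row{n};
    thrust::reduce_by_key(
        thrust::make_transform_iterator(thrust::counting_iterator<int>(0), to_row),
        thrust::make_transform_iterator(thrust::counting_iterator<int>(m * n), to_row),
        data.begin(),
        thrust::make_discard_iterator(), // the keys themselves aren't needed
        sums.begin());
    // For the average, divide each element of sums by n afterwards,
    // e.g. with thrust::transform and thrust::placeholders::_1 / n.
    return 0;
}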

Normalizing a matrix with respect to a constraint

I am doing a project which requires me to normalize a sparse NxN matrix. I read somewhere that we can normalize a matrix A so that its eigenvalues lie in [-1,1] by multiplying it on both sides with a diagonal matrix D, forming D^{-1/2}*A*D^{-1/2}.
But I am not sure what D is here. Also, is there a function in Matlab that can do this normalization for sparse matrices?
It's possible that I am misunderstanding your question, but as it reads it makes no sense to me.
A matrix is just a representation of a linear transformation. Given that a matrix A corresponds to a linear transformation T, any matrix of the form B^{-1} A B (called the conjugate of A by B) for an invertible matrix B corresponds to the same transformation, represented in a different basis. In particular, the eigenvalues of a matrix correspond to the eigenvalues of the linear transformation, so conjugating by an invertible matrix cannot change the eigenvalues.
It's possible that you meant that you want to scale the eigenvectors so that each has unit length. This is a common thing to do, since then the eigenvalues tell you how far a vector of unit length is magnified by the transformation.
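For what it's worth, the construction the question seems to have in mind comes from spectral graph theory (my reading; the question doesn't say so). There D is the diagonal matrix of row sums, d_i = \sum_j a_{ij}, and

D^{-1/2} A D^{-1/2} = D^{1/2} (D^{-1} A) D^{-1/2},

so the normalized matrix is a conjugate of D^{-1} A, not of A (that would require D^{+1/2} on one side); its eigenvalues therefore generally differ from A's, which is consistent with the answer above. For symmetric A with nonnegative entries and positive row sums, D^{-1} A is row-stochastic, so the eigenvalues of D^{-1/2} A D^{-1/2} do lie in [-1, 1]. In Matlab, spdiags can build the sparse D^{-1/2} factor.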

Split a transform matrix into orthogonal matrix and scale matrix

I have a matrix built from scale, translate, and rotate transforms, and I want to split this matrix into two matrices: one a rotation+translation matrix, the other a scale matrix.
I want this split because, to transform surface normal vectors correctly, I only need the orthogonal matrix.
Any ideas?
I'm assuming this matrix you are talking about is a 4x4 matrix that is widely used by some, widely despised by others, with the fourth row being 0,0,0,1.
I'll call these two operations "scale" and "rotate+translate". Note well: these operations are not commutative. Scaling a 3-vector and then rotating/translating the scaled vector yields a different result than you would get by reversing the order of operations.
Case 1, operation is "rotate+translate", then "scale".
Let SR = S*R, where S is a 3x3 diagonal matrix with positive diagonal elements (a scaling matrix) and R is a 3x3 orthonormal rotation matrix. The rows of matrix SR will be orthogonal to one another, but the columns will not be orthogonal. The scale factors are the Euclidean lengths (vector norms) of the rows of the matrix SR.
Algorithm:
Given 4x4 matrix A, produce 4x4 scaling matrix S, 4x4 rotation+translation matrix T
A = [ SR(3x3) Sx(3x1) ]
[ 0(1x3) 1 ]
Partition A into a 3x3 matrix SR and a 3 vector Sx as depicted above.
Construct the scaling matrix S. The first three diagonal elements are the vector norms of the rows of matrix SR; the last diagonal element is 1.
Construct the 4x4 rotation+translation matrix T by dividing each row of A by the corresponding scale factor.
Case 2, operation is "scale", then "rotate+translate".
Now consider the case RS = R*S. Here the columns of A will be orthogonal to one another, but the rows will not be orthogonal. In this case the scale factors are the Euclidean lengths (vector norms) of the columns of the matrix RS.
Algorithm:
Given 4x4 matrix A, produce 4x4 rotation+translation matrix T, 4x4 scaling matrix S
A = [ RS(3x3) x(3x1) ]
[ 0(1x3) 1 ]
Partition A into a 3x3 matrix RS and a 3 vector x as depicted above.
Construct the scaling matrix S. The first three diagonal elements are the vector norms of the columns of matrix RS; the last diagonal element is 1.
Construct the 4x4 rotation+translation matrix T by dividing each of the first three columns of A by the corresponding scale factor.
If the scaling is not uniform (e.g., scale x by 2, y by 4, z by 1/2), you can tell the order of operations by looking at the inner products of the rows and columns of the upper 3x3 matrix with one another. Scaling last (my case 1) means the row inner products will be very close to zero but the column inner products will be nonzero. Scaling first (my case 2) reverses the situation. If the scaling is uniform there is no way to tell which case is which; you need to know beforehand.
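A minimal sketch of Case 1 in code (my own illustration: plain row-major 4x4 arrays, no graphics library assumed, nonzero scale factors):

#include <cmath>

// Case 1 (scale applied last): A = S * T, where S = diag(s0, s1, s2, 1)
// and T is the rotation+translation matrix with fourth row 0,0,0,1.
void decompose_scale_last(const double A[4][4], double S[4][4], double T[4][4])
{
    double s[3];
    for (int i = 0; i < 3; ++i) {
        // Scale factor = Euclidean length of row i of the upper-left 3x3 block.
        s[i] = std::sqrt(A[i][0]*A[i][0] + A[i][1]*A[i][1] + A[i][2]*A[i][2]);
    }
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            S[i][j] = 0.0;
            // Divide the first three rows (including the translation entry)
            // by their scale factor; the fourth row is copied unchanged.
            T[i][j] = (i < 3) ? A[i][j] / s[i] : A[i][j];
        }
    }
    S[0][0] = s[0]; S[1][1] = s[1]; S[2][2] = s[2]; S[3][3] = 1.0;
}

Case 2 is the mirror image: take the Euclidean lengths of the first three columns and divide those columns (not the rows) by their scale factors.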
Just an idea (it assumes the scaling is uniform):
Multiply the matrix by the unit vector (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)),
check the length of the vector after the multiplication,
and scale the matrix by the reciprocal of that value. Now you have an orthogonal matrix.
Create a new scale matrix with the scale you found.
Remove the translation to get a 3x3 matrix
Perform the polar decomposition via SVD.
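A sketch of that last suggestion, assuming the Eigen library (my choice; any SVD routine works). From M = U * Sigma * V^T, the orthogonal polar factor is Q = U * V^T and the symmetric stretch factor is P = V * Sigma * V^T, so that M = Q * P:

#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Upper-left 3x3 block of the transform, translation already removed.
    // Here: rotation of 90 degrees about x combined with scale (2, 3, 3).
    Eigen::Matrix3d M;
    M << 2, 0,  0,
         0, 0, -3,
         0, 3,  0;

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(M, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d Q = svd.matrixU() * svd.matrixV().transpose();        // orthogonal
    Eigen::Matrix3d P = svd.matrixV() * svd.singularValues().asDiagonal()
                      * svd.matrixV().transpose();                        // stretch

    // If det(M) > 0 (no reflection), Q is a proper rotation.
    std::cout << "Q =\n" << Q << "\nP =\n" << P << "\n";
    return 0;
}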
