Square a matrix in R

I've searched and am surprised not to find an answer to the following fairly basic question:
How do you square a matrix in R? That is to say, for a square matrix $A$, how do I compute $A^{2}$? Of course I could take the matrix product of $A$ with itself, but if the matrix I want to square has a very long expression, this seems undesirable, and it only gets worse if I want a higher power.
I have seen the expm package mentioned in connection with a similar problem and I suppose I could try it, but that discussion mentioned a bug that I'm not certain has been fixed. It's also surprising that you'd need to load a package for such a basic matrix operation, but maybe that's just how it is.
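For reference, here is a minimal sketch of the usual options; the expm details are from memory and worth verifying against the package documentation:

# assign the long expression to a name once, then work with the name
A <- matrix(c(2, 0, 1, 3), nrow = 2)     # small example matrix
A %*% A                                   # A^2 by ordinary matrix multiplication
Reduce(`%*%`, rep(list(A), 5))            # A^5 without retyping the expression
library(expm)                             # expm also provides a matrix-power operator
A %^% 3                                   # A^3 via expm's %^%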

Related

8-point algorithm for estimating the Fundamental Matrix

I'm watching a lecture about estimating the fundamental matrix for use in stereo vision using the 8 point algorithm. I understand that once we recover the fundamental matrix between two cameras we can compute the epipolar line on one camera given a point on the other. To my understanding this epipolar line (after it's been rectified) makes it easy to find feature correspondences, because we are simply matching features along a 1D line.
The confusion comes from the fact that the 8-point algorithm itself requires at least 8 feature correspondences to estimate the fundamental matrix.
So, we are finding point correspondences to recover a matrix that is used to find point correspondences?
This seems like a chicken-and-egg problem, so I guess I'm misunderstanding something.
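For reference (standard epipolar geometry, not taken from the lecture itself): if $x$ and $x'$ are corresponding points in homogeneous coordinates, the fundamental matrix satisfies $x'^{\top} F x = 0$, so given a point $x$ in the first image, $l' = F x$ is the epipolar line in the second image along which its match must lie.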
The fundamental matrix can be precomputed. This leads to two advantages:
You can use a nice environment in which features can be matched easily (like using a chessboard) to compute the fundamental matrix.
You can use more computationally expensive operations like a sequence of SIFT, FLANN and RANSAC across the entire image since you only need to do that once.
Once you have the fundamental matrix, you can find correspondences in a noisy environment far more efficiently than by rerunning the expensive matching procedure you used to estimate it.

Lapack Orthonormalization Function for Rectangular Matrix

I was wondering if there is a function in LAPACK for orthonormalizing the columns of a very tall and skinny matrix. A similar question was asked previously, presumably in the context of a square matrix. My setting is as follows: I have an M by N matrix A whose columns I am trying to orthonormalize.
So, my first thought was to do a QR decomposition. The functions for doing a QR decomposition in LAPACK seem to be dgeqrf and dormqr. Great. However, my problem is as follows: my matrix A is so tall that I don't want to actually compute all of Q, because it is M by M. In fact, I can't afford to instantiate an M by M matrix at all during any of my computation (it would not fit in memory). I would rather compute just the matrix that Wikipedia calls Q1. However, I can't seem to find a way to make this work.
The weird thing is that I think it is possible. NumPy, in particular, has a function numpy.linalg.qr that appears to do just this. However, even after reading its source code, I can't figure out how it uses LAPACK calls to get this to work.
Do folks have ideas? I would strongly prefer a solution that only uses LAPACK functions, because I am hoping to port this code to cuSOLVER, which implements several LAPACK routines (including dgeqrf and dormqr) on the GPU.
You want the "thin" or "economy size" version of QR. In MATLAB, you can do this with:
[Q,R] = qr(A,0);
I haven't used LAPACK directly, but I would imagine there's a corresponding call there. It appears that you can do this in Python with:
numpy.linalg.qr(a, mode='reduced')
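If I recall the LAPACK interface correctly, the routine that forms the thin factor explicitly is dorgqr (applied after dgeqrf and asked for only the first N columns of Q), rather than dormqr; please verify against the LAPACK documentation. Since the rest of this thread is R-centric, here is a hedged sketch of the same "economy size" factorization in base R; the matrix sizes are made up for illustration:

A   <- matrix(rnorm(100000 * 20), nrow = 100000, ncol = 20)  # tall, skinny example
qrA <- qr(A)            # compact QR; the full M-by-M Q is never formed
Q1  <- qr.Q(qrA)        # thin Q: M by N (what Wikipedia calls Q1)
R1  <- qr.R(qrA)        # N by N upper-triangular factor
max(abs(crossprod(Q1) - diag(ncol(A))))   # columns are orthonormal up to rounding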

R equivalent to matlab griddata, scatteredInterpolant, and/or TriScatteredInterp

We do a lot of full field 3D numerical simulations (CFD, FEA, etc.). The solutions take a long time to run. We often interpolate from solutions rather than rerun every case. We also interpolate between multiple solutions, which leads to even higher dimensional interpolation (like adding time, so x,y,z,t,v).
MATLAB does a great job of reading data V at an irregular grid of X,Y,Z coordinates and interpolating from V using griddata, scatteredInterpolant, and/or TriScatteredInterp. For a variety of reasons, I've switched to R. This remains one key area where I've not been able to find an equally good R equivalent. 'akima' only does x,y,V (not x,y,z,V, much less even higher dimensions like x,y,z,t,v).
The next best thing I've found has been 'kriging'. But kriging behaves more like model fitting and projection, and often does not behave well between irregular grid points. So it's not nearly as robust as simple direct linear interpolation.
MATLAB has had griddata for several decades. It's hard to believe R doesn't have an equivalent out there. Any suggestions? Or is there at least a way to use kriging to yield effectively the same result as a direct linear interpolation?
Jonathan
You might start by looking at the package "tripack" to do Delaunay triangulation, which gives you the first step in duplicating scatteredInterpolant().
interpp() in R (from the akima package) is the closest equivalent to MATLAB's scatteredInterpolant().
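As a hedged sketch of that route using akima::interpp (argument names from memory, and note this covers only the x,y,V case the question already mentions, not higher dimensions):

library(akima)                        # provides interpp() for scattered 2-D data
x <- runif(200); y <- runif(200)      # scattered sample locations
v <- sin(pi * x) * cos(pi * y)        # values observed at those locations
xq <- runif(50); yq <- runif(50)      # query points
out <- interpp(x, y, v, xo = xq, yo = yq, linear = TRUE)
out$z                                 # linearly interpolated values at (xq, yq)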

Handling extremely small numbers

I need to find a way to handle extremely small numbers in R, particularly in order to take the log of extremely small numbers. According to the R manual, “on a typical R platform the smallest positive double is about 5e-324.” Well, I need to deal with numbers even smaller (at least as small as 10^-350). If R is incapable of doing this, I was wondering whether there is a way to call a program that can (such as Matlab or Mathematica) from R.
Specifically, I am computing a matrix of probabilities, and some of these probabilities are so small that R does not distinguish them from 0. The reason I know this is because each probability is the product of two other probabilities; so I’ll have p(x)=10^-300, p(y)=10^-50, and then p(x)*p(y)=0. I’d like to be able to do these computations, take the log of the resultant very small number (-805.905 for my example, according to Mathematica), and then continue working with the log values in R.
To be more specific, I have a matrix of values for p(x) and a matrix of values for p(y), both computed using dnorm, and I’m computing the product. In many cases, R is capable of evaluating p(x) and p(y), but the product p(x)*p(y) is too small. In a few cases, though, even the p(x) or p(y) value itself is too small and simply comes out as 0 in R.
I’ve seen that there is stuff out there for calling R from Mathematica, but not much pertaining to calling Mathematica from R. I’d honestly prefer to do the latter than the former here. So if any one either knows how to do this (either employing Mathematica or Matlab or something else in R) or has another solution to this issue, I’d greatly appreciate it.
Note that I realize there are a few other threads on this topic, discussing such things as using the Brobdingnag package to deal with small numbers, but these do not appear applicable here.
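One standard workaround that needs no external program is to stay on the log scale throughout: dnorm() can return log-densities directly, and the log of a product is the sum of the logs. A minimal sketch with made-up inputs of roughly the magnitudes described:

lpx <- dnorm(37.1, log = TRUE)   # log p(x); dnorm(37.1) itself is on the order of 10^-300
lpy <- dnorm(15.2, log = TRUE)   # log p(y); dnorm(15.2) is on the order of 10^-51
lpx + lpy                        # log( p(x) * p(y) ), with no underflow to -Inf

Since p(x) and p(y) here are whole matrices computed with dnorm, computing both with log = TRUE and adding the matrices gives the matrix of log-products directly; if sums of probabilities are needed later on the log scale, the usual log-sum-exp trick applies.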

R function to solve large dense linear systems of equations?

Sorry, maybe I am blind, but I couldn't find anything specific for a rather common problem:
I want to run solve(A, b) with A being a large square matrix, in the sense that the command above uses all my memory and throws an error (b is a vector of corresponding length). The matrix I have is not sparse in the sense that there would be large blocks of zeros, etc.
There must be some function out there which implements a stepwise iterative scheme such that a solution can be found even with limited memory available.
I found several posts on sparse matrices and, of course, the Matrix package, but could not identify a function that does what I need. I have also seen this post, but biglm produces a complete linear model fit. All I need is a simple solve. I will have to repeat that step several times, so it would be great to keep it as slim as possible.
I already worry about the "duplication of an old issue" and "look here" comments, but I would be really grateful for some help.
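For what it's worth, here is a hedged sketch of a matrix-free conjugate gradient in plain R; it assumes A is symmetric positive definite (which the question does not state; for general matrices something like GMRES or BiCGSTAB would be needed instead), and the function name and defaults are my own:

cg_solve <- function(matvec, b, tol = 1e-8, maxit = 1000) {
  # Conjugate gradient: only ever needs products A %*% v, never a factorization of A
  x <- numeric(length(b))
  r <- b                              # residual b - A x for the starting guess x = 0
  p <- r
  rs_old <- sum(r * r)
  for (i in seq_len(maxit)) {
    Ap <- as.numeric(matvec(p))
    alpha <- rs_old / sum(p * Ap)
    x <- x + alpha * p
    r <- r - alpha * Ap
    rs_new <- sum(r * r)
    if (sqrt(rs_new) < tol) break
    p <- r + (rs_new / rs_old) * p
    rs_old <- rs_new
  }
  x
}
# usage: x <- cg_solve(function(v) as.numeric(A %*% v), b)
# matvec() can also stream A from disk in blocks (e.g. a file-backed matrix),
# which is what keeps the memory footprint small.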
