Does a symmetrical NxN matrix still require N² for computer storage?

Consider a symmetrical covariance matrix, which only requires N(N+1)/2 parameters.
To implement this, each symmetrical pair of values could be stored in a single location, with a pointer used to refer to that address from both positions. However, the actual implementation would then require:
A pointer
The stored value
which is still two memory cells per pair of symmetrical values, and so totals N² storage rather than N(N+1)/2.
Is there a better treatment of symmetrical matrices?
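One common remedy is packed (triangular) storage, which LAPACK also offers for symmetric matrices: keep only the lower triangle in a flat array of length N(N+1)/2 and turn (i, j) into an offset by arithmetic, so no pointers are needed at all. A minimal Python sketch (the class and method names are just illustrative):

```python
import numpy as np

# Packed (lower-triangular) storage for a symmetric N x N matrix:
# only N*(N+1)/2 values are kept, and (i, j) is mapped to a flat
# index by arithmetic, so no per-entry pointers are required.
class PackedSymmetric:
    def __init__(self, n):
        self.n = n
        self.data = np.zeros(n * (n + 1) // 2)

    def _index(self, i, j):
        # Store only the lower triangle: swap so that i >= j.
        if i < j:
            i, j = j, i
        # Row i of the lower triangle starts after i*(i+1)/2 packed entries.
        return i * (i + 1) // 2 + j

    def get(self, i, j):
        return self.data[self._index(i, j)]

    def set(self, i, j, value):
        self.data[self._index(i, j)] = value

# Usage: a 4x4 symmetric matrix stored in 10 floats instead of 16.
m = PackedSymmetric(4)
m.set(1, 3, 2.5)
assert m.get(3, 1) == 2.5
```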

Related

What is the most efficient way to store a set of points (embeddings) such that queries for closest points are computed quickly

Given a set of embeddings, i.e. a set of [name, vector representation] pairs,
how should I store them so that queries for the closest points are computed quickly? For example, given 100 embeddings in 2-D space, if I query the data structure for the 5 closest points to (10,12), it returns { [a,(9,11.5)] , [b,(12,14)], ... }
The trivial approach is to calculate all distances, sort, and return the top-k points. Alternatively, one might think of storing the points in a 2-D array in blocks/units of m×n space to cover the range of the embedding space. I don't think this is extensible to higher dimensions, but I'm willing to be corrected.
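For reference, a minimal numpy sketch of that brute-force approach (the names and coordinates below are made up):

```python
import numpy as np

# Brute force: compute all distances and take the k smallest.
# np.argpartition avoids fully sorting all the distances.
names = np.array(["a", "b", "c"])                        # illustrative labels
points = np.array([[9.0, 11.5], [12.0, 14.0], [50.0, 3.0]])

def k_nearest(query, k):
    d = np.linalg.norm(points - np.asarray(query), axis=1)
    idx = np.argpartition(d, min(k, len(d) - 1))[:k]     # k smallest, unordered
    idx = idx[np.argsort(d[idx])]                        # order those k by distance
    return list(zip(names[idx], points[idx]))

print(k_nearest((10, 12), 2))   # -> [('a', ...), ('b', ...)]
```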
There are standard approximate nearest-neighbor libraries such as faiss, flann, java-lsh, etc. (which are either LSH- or product-quantization-based), which you may use.
The quickest solution (which I found useful) is to transform a vector of, say, 100 dimensions into a long variable (64 bits) using the Johnson–Lindenstrauss transform. You can then use Hamming similarity (i.e. 64 minus the number of bits set in a XOR b) to compute the similarity between bit vectors a and b. You could use the POPCOUNT machine instruction to this effect (which is very fast).
In effect, if you use POPCOUNT in C, even a complete iteration over the whole set of binary transformed vectors (64-bit long variables) will still be very fast.
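A rough Python sketch of the binary-signature idea; note that it uses random hyperplane projections as a stand-in for the Johnson–Lindenstrauss-style transform mentioned above, and `bin(...).count("1")` in place of a native POPCOUNT instruction:

```python
import numpy as np

# Project each vector onto 64 random hyperplanes, keep only the signs,
# and pack them into a 64-bit integer. Similarity between two vectors is
# then 64 minus popcount(a XOR b).
rng = np.random.default_rng(0)
planes = rng.standard_normal((64, 100))   # 64 projections of 100-d vectors

def signature(v):
    bits = planes @ v > 0
    sig = 0
    for b in bits:
        sig = (sig << 1) | int(b)
    return sig

def hamming_similarity(a, b):
    # popcount via bin(); Python 3.10+ also offers int.bit_count()
    return 64 - bin(a ^ b).count("1")

v1, v2 = rng.standard_normal(100), rng.standard_normal(100)
print(hamming_similarity(signature(v1), signature(v2)))
```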

What is the difference between Floyd-Warshall and matrix multiplication graph algorithms?

I have to solve the following problem: write a program that, given a directed graph with costs and two vertices, finds a lowest-cost walk between the given vertices, or prints a message if there are negative-cost cycles in the graph. The program shall use the matrix multiplication algorithm.
I implemented the matrix multiplication algorithm as it is defined: a pseudo matrix multiplication, where addition is replaced by minimization and multiplication by addition. But by doing this, I ended up with the Floyd-Warshall algorithm. Also, I can't easily determine the existence of a negative-cost cycle this way.
I assume there is a major difference between my algorithm and the real matrix-multiplication graph algorithm, but what is it exactly?
You can determine the existence of negative cycles with Floyd-Warshall:
https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm#Behavior_with_negative_cycles
Nevertheless, if there are negative cycles, the Floyd–Warshall
algorithm can be used to detect them. The intuition is as follows:
The Floyd–Warshall algorithm iteratively revises path lengths between all pairs of vertices (i,j), including where i=j;
Initially, the length of the path (i,i) is zero;
A path [i,k, ... ,i] can only improve upon this if it has length less than zero, i.e. denotes a negative cycle;
Thus, after the algorithm, (i,i) will be negative if there exists a negative-length path from i back to i.
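A minimal plain-Python sketch of that diagonal check (the graph is assumed to be given as an adjacency matrix with float('inf') marking missing edges):

```python
INF = float("inf")

# Floyd-Warshall on an adjacency matrix `cost` (cost[i][j] = INF if there is
# no edge). A negative value on the diagonal afterwards signals a negative
# cycle reachable from that vertex, as described above.
def floyd_warshall(cost):
    n = len(cost)
    d = [row[:] for row in cost]
    for i in range(n):
        d[i][i] = min(d[i][i], 0)          # path (i, i) starts at length zero
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    has_negative_cycle = any(d[i][i] < 0 for i in range(n))
    return d, has_negative_cycle
```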
Some differences between the two algorithms:
The matrix algorithm can find minimal paths with a specific number of edges (for example, minimal paths between all pairs of vertices using at most k edges); FW cannot.
The matrix multiplication algorithm requires O(n^2) additional space; Floyd-Warshall can be used in-place.
The matrix multiplication algorithm has O(n^3*log(n)) complexity with repeated squaring, or O(n^4) with the simple implementation; Floyd-Warshall's complexity is O(n^3). (A repeated-squaring sketch follows this list.)
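For comparison, a minimal sketch of the min-plus product with repeated squaring (same adjacency-matrix convention as above; negative-cycle handling omitted):

```python
INF = float("inf")

# "Matrix multiplication" shortest paths: a min-plus product, repeatedly
# squared so that after t squarings D covers walks of up to 2^t edges.
def min_plus(a, b):
    n = len(a)
    return [[min(a[i][k] + b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shortest_paths(cost):
    n = len(cost)
    d = [[0 if i == j else cost[i][j] for j in range(n)] for i in range(n)]
    steps = 1
    while steps < n - 1:        # O(log n) squarings suffice for n-1 edges
        d = min_plus(d, d)
        steps *= 2
    return d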

Efficient Calculation of an N-Dimensional Cross Product?

As per the title, is the best way to calculate the n-dimensional cross product just to use the determinant definition and the LU decomposition method of computing it, or can you suggest a better one?
Thanks
Edit: for clarity, I mean http://en.wikipedia.org/wiki/Cross_product and not the Cartesian product
Edit: It also seems that using the Leibniz formula might help - though I don't know how that compares to LU decomposition at the moment.
From your comment, it seems like you are looking for an operation which takes n−1 vectors as input and computes a single vector as its result, which will be orthogonal to all the input vectors and perhaps have a well-defined length as well.
With defined length
You can characterize the 3-dimensional cross product v = a × b using the identity v ∙ w = det(a, b, w). In other words, taking the cross product of the input vectors and then computing the dot product with any other vector w is the same as plugging the input vectors and that other vector into a matrix and computing its determinant.
This definition can be generalized to arbitrary dimensions. Due to the way a determinant can be computed using Laplace expansion along the last column, the resulting coordinates of that cross product will be the values of all (n−1)×(n−1) sub-determinants you can form from the input vectors, with alternating signs. So yes, Leibniz might be useful in theory, although it is hardly suitable for real-world computations. In practice, you'll soon have to figure out ways to avoid repeating computations while computing these n determinants. But wait for the last section of this answer…
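In the meantime, here is a small numpy sketch of that sub-determinant construction (the sign convention is chosen so that the 3-D case reproduces the usual cross product; note that it recomputes each minor from scratch, which is exactly the redundancy just mentioned):

```python
import numpy as np

# Generalized cross product: given n-1 input vectors of length n (the rows
# of `vecs`), coordinate i is the (n-1)x(n-1) sub-determinant obtained by
# deleting column i, with alternating sign.
def generalized_cross(vecs):
    vecs = np.asarray(vecs, dtype=float)
    k, n = vecs.shape
    assert k == n - 1, "need n-1 vectors of dimension n"
    result = np.empty(n)
    for i in range(n):
        minor = np.delete(vecs, i, axis=1)       # drop column i
        result[i] = (-1) ** i * np.linalg.det(minor)
    return result

# In 3-D this reproduces the ordinary cross product:
print(generalized_cross([[1, 0, 0], [0, 1, 0]]))   # -> [0. 0. 1.]
```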
Just the direction
Most applications, however, can do with a weaker requirement. They don't care about the length of the resulting vector, but only about its direction. In that case, what you are asking for is the kernel of the (n−1)×n matrix you can form by taking the input vectors as rows. Any element of that kernel will be orthogonal to the input vectors, and since computing kernels is a common task, you can build on a lot of existing implementations, e.g. Lapack. Details might depend on the language you are using.
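For example, with scipy (whose null_space is built on a LAPACK SVD) the kernel is a single call:

```python
import numpy as np
from scipy.linalg import null_space

# The direction orthogonal to n-1 input vectors is the kernel of the
# (n-1) x n matrix whose rows are those vectors.
vecs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
direction = null_space(vecs)       # shape (3, 1); any scalar multiple works
print(direction.ravel())           # parallel to [0, 0, 1]
```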
Combining these
You can even combine the two approaches above: compute one element of the kernel, and for a non-zero entry of that vector, also compute the corresponding (n−1)×(n−1) determinant which would give you that single coordinate using the first approach. You can then simply scale the vector so that the selected coordinate reaches the computed value, and all the other coordinates will match that one.
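A small sketch combining the two (the helper name is hypothetical, and the input vectors are assumed to be linearly independent):

```python
import numpy as np
from scipy.linalg import null_space

# Combined approach: take a kernel vector for the direction, then fix its
# scale using a single (n-1)x(n-1) sub-determinant.
def cross_with_length(vecs):
    vecs = np.asarray(vecs, dtype=float)
    v = null_space(vecs)[:, 0]                   # direction only
    i = int(np.argmax(np.abs(v)))                # a safely non-zero coordinate
    minor = np.delete(vecs, i, axis=1)
    target = (-1) ** i * np.linalg.det(minor)    # what coordinate i should be
    return v * (target / v[i])

print(cross_with_length([[1, 0, 0], [0, 1, 0]]))   # -> [0. 0. 1.]
```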

how to reduce dimensionality of vector

I have a set of vectors. I'm working on ways to reduce an n-dimensional vector to a single (1-D) value, say
(x1,x2,....,xn) ------> y
This single value needs to be a characteristic value of the vector: each unique vector produces a unique output value. Which of the following methods is appropriate?
1- the norm of the vector - the square root of the sum of squares, which measures Euclidean distance from the origin
2- compute a hash of F, using some hashing technique that avoids collisions
3- use linear regression to compute y = w1*x1 + w2*x2 + ... + wn*xn - unlikely to be good if the output does not depend well on the input values
4- a feature-extraction technique like PCA that assigns weights to each of x1,x2,...,xn based on the set of input vectors
It's unclear from the methods you list what properties you need this transform to have, so I'm guessing that you don't need it to preserve anything other than uniqueness, and possibly invertibility.
None of the techniques you suggest can in general avoid collisions:
Norm - two vectors pointing in opposite directions have the same norm.
Hash - if the input isn't known a priori: what is generally meant by a hash function has a finite image, and you have an infinite number of possible vectors - no good.
It's easy to find two vectors which give the same result for any linear regression (think about it).
PCA is a specific kind of linear transformation - hence the same problem as with linear regression.
So - if you're just looking for uniqueness, you could "stringify" your vectors. One way to do it is to write them down as text strings, with the different coordinates separated by a special character (an underscore, for example). Then take the binary value of this string as your representation.
If space is important and you need a more efficient representation, you could use a tighter bit encoding: each character in the set 0,1,...,9,'.','_' can be represented by 4 bits - a hexadecimal digit (map '.' to A and '_' to B). Now encode this string as a hexadecimal number, saving half the space.
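A minimal Python sketch of that encoding (it assumes coordinates whose textual form uses only digits and '.'; extend the character set if you need signs or exponents):

```python
# Stringify encoding: join the coordinates with an underscore, then pack
# each character into 4 bits ('.' -> 0xA, '_' -> 0xB). Distinct vectors give
# distinct strings, hence distinct integers, and the mapping is invertible.
DIGITS = {str(d): d for d in range(10)}
DIGITS.update({".": 0xA, "_": 0xB})
REVERSE = {v: k for k, v in DIGITS.items()}

def encode(vector):
    s = "_".join(str(x) for x in vector)
    value = 1                       # leading sentinel so leading zeros survive
    for ch in s:
        value = (value << 4) | DIGITS[ch]
    return value

def decode(value):
    chars = []
    while value > 1:                # stop at the sentinel
        chars.append(REVERSE[value & 0xF])
        value >>= 4
    return tuple(float(x) for x in "".join(reversed(chars)).split("_"))

v = (1.5, 20.25, 3.0)
assert decode(encode(v)) == v
```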

How to store Sparse matrix for a matrix-vector multiply when some boundary condition values are known?

I have a sparse matrix that represents a 3D rectangular space. Along some of the boundaries, I know what the value is going to be (it's a constant). The other boundaries may be reflective, differential, etc.
Should I just set the problem up as if all the boundaries were say, differential, and then go back and set the nodes in the solution vector b to be the constants?
Thanks!
In the finite element method you treat Dirichlet (value) constraints and Neumann (derivative) constraints differently. Usually you assemble the matrix without consideration for boundary conditions first, then apply the boundary conditions, then do an LU decomposition to solve.
You apply boundary conditions by modifying both the assembled matrix and the RHS vector. I'd have to know more details to tell you exactly what you need to do.
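For illustration only (the details depend on your discretization), here is one common way to impose known-value (Dirichlet) nodes on an already-assembled scipy sparse system; the names `A`, `b`, and `dirichlet` are placeholders:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Impose Dirichlet nodes on A x = b: zero out the constrained rows and
# columns, put 1 on the diagonal, and move the known values to the RHS.
def apply_dirichlet(A, b, dirichlet):
    """dirichlet: dict {node_index: prescribed_value}"""
    A = A.tolil()                                    # LIL is convenient for row/column edits
    for node, value in dirichlet.items():
        b -= A[:, node].toarray().ravel() * value    # move known column to RHS
        A[node, :] = 0.0
        A[:, node] = 0.0
        A[node, node] = 1.0
        b[node] = value
    return A.tocsr(), b

# Tiny 1-D Laplacian example with u(0) = 1 fixed:
n = 5
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.zeros(n)
A, b = apply_dirichlet(A, b, {0: 1.0})
print(spsolve(A, b))
```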
