I am solving a programming problem where, given a matrix and two positions, I need to find the elements in between, including the two given elements.
For example, in a matrix with 1000 rows and 1000 columns, with initial position [499,499] and final position [500,500], the number of elements is 4.
I wanted to know if there is a mathematical formula that can be applied to any matrix.
Well, to get the number of elements it would be (500-499+1)*(500-499+1), or (x2-x1+1)*(y2-y1+1) in general, which could be used for memory allocation depending on the programming language you are using. Then, to access the elements of the matrix, you can create a matrix of the size calculated from the provided values, copy the elements into it, and return it.
Matrix getSubMatrix(Matrix matrix, int x1, int y1, int x2, int y2) {
    // This is assuming matrices can be created and indexed this way.
    // x2-x1+1 and y2-y1+1 give the correct dimensions for the values
    // to be extracted from the provided matrix.
    Matrix subMatrix = new Matrix(x2 - x1 + 1, y2 - y1 + 1);
    // Now iterate through both dimensions of the original matrix
    // and the new matrix.
    for (int i = 0; i < x2 - x1 + 1; i++) {
        for (int j = 0; j < y2 - y1 + 1; j++) {
            // The new matrix is accessed with i and j, but the original matrix
            // requires the offset of x1 and y1.
            subMatrix[i][j] = matrix[i + x1][j + y1];
        }
    }
    return subMatrix;
}
Note that you could also use arrays instead of objects for the input parameters and return value, as matt did in his answer.
As SergGr pointed out, this assumes x1 < x2 and y1 < y2. To also handle the case where x1 > x2 or y1 > y2, replace x1 in the method with min(x1, x2), x2 with max(x1, x2), and do the same for y1 and y2.
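For instance, a sketch of that normalization, using the same hypothetical Matrix type and element access as above:

Matrix getSubMatrix(Matrix matrix, int x1, int y1, int x2, int y2) {
    // Normalize so the start corner always has the smaller coordinates.
    int startX = Math.min(x1, x2);
    int endX = Math.max(x1, x2);
    int startY = Math.min(y1, y2);
    int endY = Math.max(y1, y2);
    Matrix subMatrix = new Matrix(endX - startX + 1, endY - startY + 1);
    for (int i = 0; i <= endX - startX; i++) {
        for (int j = 0; j <= endY - startY; j++) {
            subMatrix[i][j] = matrix[i + startX][j + startY];
        }
    }
    return subMatrix;
}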
Sure, just do it with two for loops:
int[][] matrix = new int[1000][1000];
populateMatrix(matrix); // populate the matrix with some values, somehow

int pos_1_X = 499;
int pos_1_Y = 499;
int pos_2_X = 500;
int pos_2_Y = 500;

int numElements = 0;
for (int x = pos_1_X; x <= pos_2_X; x++) {
    for (int y = pos_1_Y; y <= pos_2_Y; y++) {
        numElements++; // increment the counter
        System.out.printf("matrix[%d][%d] = %d%n", x, y, matrix[x][y]); // print the element
    }
}
System.out.println("Number of elements: " + numElements);
vector<vector<int>> arr{
    {1, 2},
    {2, 3},
    {4, 5},
    {1, 5}
};
vector<vector<int>> adj(m, vector<int>(m, 0));
for (int i = 0; i < arr.size(); i++) {
    // Find X and Y of Edges
    int x = arr[i][0];
    int y = arr[i][1];
    // Update value to 1
    adj[x][y] = 1;
    adj[y][x] = 1;
}
I am trying to update the values of the vector of vectors using the above code, but I am getting a segmentation fault. How can I change the value of the 2D vector at a particular location?
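A likely cause of a segmentation fault in code like this is that adj is smaller than the largest vertex label used in arr (here the labels go up to 5, so m must be at least 6). A minimal sketch, assuming 0-based vertex labels and sizing adj from the data itself:

#include <algorithm>
#include <vector>
using namespace std;

int main() {
    vector<vector<int>> arr{{1, 2}, {2, 3}, {4, 5}, {1, 5}};
    // Find the largest vertex label so every index fits in the matrix.
    int maxVertex = 0;
    for (const auto& edge : arr)
        maxVertex = max({maxVertex, edge[0], edge[1]});
    int m = maxVertex + 1;
    // m x m adjacency matrix, zero-initialized.
    vector<vector<int>> adj(m, vector<int>(m, 0));
    for (const auto& edge : arr) {
        adj[edge[0]][edge[1]] = 1; // adj[x][y] is now guaranteed to be in bounds
        adj[edge[1]][edge[0]] = 1;
    }
    return 0;
}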
I am creating the game "Snakes and Ladders". I am using a GridPane to represent the game board, and obviously I want to move through the board in a "snake" way, just like this: http://prntscr.com/k5lcaq .
When the dice is rolled I want to move 'dice_num' squares forward from my current position, so I calculate the new index as a 1D array index and convert it to 2D coordinates (reverse row-major order).
gameGrid.add(pieceImage, newIndex % ROWS, newIndex / ROWS);
where gameGrid is the ID of my grid pane, newIndex % ROWS is the column coordinate, and newIndex / ROWS is the row coordinate.
PROBLEM 1: The grid pane iterates in its own way, just like this: https://prnt.sc/k5lhjx.
Obviously, when the 2D array reaches coordinates [0,9], the next position is [1,0], but what I actually want as the next position is [1,9] (going from 91 to 90).
PROBLEM 2: I want to start counting from the bottom of the grid pane (from number 1, see the screenshots) and go all the way up to 100. But how am I supposed to reverse-iterate through a 2D array?
You can easily turn the coordinate system upside down with the following conversion:
y' = maxY - y
To get the "snake order", you simply need to check, if the row the index difference is odd or even. For even cases increasing the index should increase the x coordinate
x' = x
For odd rows, you need to apply a transformation similar to the y transformation above:
x' = xMax - x
The following methods allow you to convert between (x, y) and 1D-index. Note that the index is 0-based:
private static final int ROWS = 10;
private static final int COLUMNS = 10;

// Converts board coordinates (column, row) to the 0-based 1D index in snake order.
public static int getIndex(int column, int row) {
    int offsetY = ROWS - 1 - row; // row counted from the bottom
    int offsetX = ((offsetY & 1) == 0) ? column : COLUMNS - 1 - column; // reverse odd rows
    return offsetY * COLUMNS + offsetX;
}

// Converts a 0-based 1D index in snake order back to board coordinates {column, row}.
public static int[] getPosition(int index) {
    int offsetY = index / COLUMNS;
    int dx = index % COLUMNS;
    int offsetX = ((offsetY & 1) == 0) ? dx : COLUMNS - 1 - dx;
    return new int[] { offsetX, ROWS - 1 - offsetY };
}
for (int y = 0; y < ROWS; y++) {
    for (int x = 0; x < COLUMNS; x++) {
        System.out.print('\t' + Integer.toString(getIndex(x, y)));
    }
    System.out.println();
}
System.out.println();

for (int j = 0; j < COLUMNS * ROWS; j++) {
    int[] pos = getPosition(j);
    System.out.format("%d: (%d, %d)\n", j, pos[0], pos[1]);
}
This should allow you to easily modify the position:
int[] nextPos = getPosition(steps + getIndex(currentX, currentY));
int nextX = nextPos[0];
int nextY = nextPos[1];
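From there, placing the piece would be something along these lines (a sketch; it assumes pieceImage is already a child of gameGrid and uses the column-first ordering from your own gameGrid.add call):

GridPane.setColumnIndex(pieceImage, nextX);
GridPane.setRowIndex(pieceImage, nextY);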
I have a big matrix and am interested in computing the correlation between the rows of the matrix. Since the cor method computes correlation between the columns of a matrix, I am transposing the matrix before calling cor. But since the matrix is big, transposing it is expensive and is slowing down my program. Is there a way to compute the correlations among the rows without having to take transpose?
EDIT: Thanks for the responses. I thought I'd share some findings. My input matrix is 16 rows by 239766 columns and comes from a .mat file. I wrote C# code to do the same thing using the csmatio library; it looks like this:
foreach (var file in Directory.GetFiles(path, interictal_pattern))
{
    var reader = new MatFileReader(file);
    var mla = reader.Data[0] as MLStructure;
    convert(mla.AllFields[0] as MLNumericArray<double>, data);
    double sum = 0;
    for (var i = 0; i < 16; i++)
    {
        for (var j = i + 1; j < 16; j++)
        {
            sum += cor(data, i, j);
        }
    }
    var avg = sum / 120; // 120 = 16 * 15 / 2 distinct row pairs
    if (++count == 10)
    {
        var t2 = DateTime.Now;
        var t = t2 - t1;
        Console.WriteLine(t.TotalSeconds);
        break;
    }
}
static double[][] createArray(int rows, int cols)
{
    var ans = new double[rows][];
    for (var row = 0; row < rows; row++)
    {
        ans[row] = new double[cols];
    }
    return ans;
}

static void convert(MLNumericArray<double> mla, double[][] M)
{
    var rows = M.Length;
    var cols = M[0].Length;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            M[i][j] = mla.Get(i, j);
}

static double cor(double[][] M, int i, int j)
{
    var count = M[0].Length;
    double sum1 = 0, sum2 = 0;
    for (int ctr = 0; ctr < count; ctr++)
    {
        sum1 += M[i][ctr];
        sum2 += M[j][ctr];
    }
    var mu1 = sum1 / count;
    var mu2 = sum2 / count;
    double numerator = 0, sumOfSquares1 = 0, sumOfSquares2 = 0;
    for (int ctr = 0; ctr < count; ctr++)
    {
        var x = M[i][ctr] - mu1;
        var y = M[j][ctr] - mu2;
        numerator += x * y;
        sumOfSquares1 += x * x;
        sumOfSquares2 += y * y;
    }
    return numerator / Math.Sqrt(sumOfSquares1 * sumOfSquares2);
}
This gave a throughput of 22.22 s for 10 files, or 2.22 s/file.
Then I profiled my R code:
ptm = proc.time()
for (file in files)
{
    i = i + 1
    mat = readMat(paste(path, file, sep=""))
    a = t(mat[[1]][[1]])
    C = cor(a)
    correlations[i] = mean(C[lower.tri(C)])
}
print(proc.time() - ptm)
To my surprise, it runs faster than the C# code, giving a throughput of 5.7 s per 10 files, or 0.6 s/file (an improvement of almost 4x!). The bottleneck in the C# version is the methods inside the csmatio library that parse double values from the input stream.
And if I do not convert the csmatio classes into a double[][], the C# code runs extremely slowly (an order of magnitude slower, ~20-30 s/file).
Seeing that this problem arises from a data-input issue whose details are not stated (and only hinted at in a comment), I will assume it is a comma-delimited file of unquoted numbers with the number of columns equal to Ncol. This does the transposition on input:
in.mat <- matrix(scan("path/to/the_file/fil.txt", what = numeric(0), sep = ","),
                 ncol = Ncol, byrow = TRUE)
cor(in.mat)
One dirty work-around would be to apply the cor function row-wise and build the correlation matrix from the results. You could try whether this is any more efficient (which I doubt, though you could fine-tune it by not computing everything twice and by skipping the redundant diagonal cases):
# Apply 2-fold nested row-wise functions
set.seed(1)
dat <- matrix(rnorm(1000), nrow=10)
cormat <- apply(dat, MARGIN=1, FUN=function(z) apply(dat, MARGIN=1, FUN=function(y) cor(z, y)))
cormat[1:3,1:3] # Show few first
# [,1] [,2] [,3]
#[1,] 1.000000000 0.002175792 0.1559263
#[2,] 0.002175792 1.000000000 -0.1870054
#[3,] 0.155926259 -0.187005418 1.0000000
Though, generally, I would expect the transpose to have a really efficient implementation, so it's hard to imagine when that would be the bottleneck. But you could also dig through the implementation of the 'cor' function and call the underlying C function directly, after first making sure your rows are in a suitable form. Type 'cor' at the R prompt to see the implementation, which is mostly a wrapper that makes the input suitable for the C function:
# Row with C-call from the implementation of 'cor':
# if (method == "pearson")
# .Call(C_cor, x, y, na.method, FALSE)
You can use outer:
outer(seq(nrow(mat)), seq(nrow(mat)),
Vectorize(function(x, y) cor(mat[x , ], mat[y , ])))
where mat is the name of your matrix.
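A quick sanity check of that approach on a small random matrix (my own example, comparing against the transpose route):

set.seed(1)
mat <- matrix(rnorm(50), nrow = 5) # 5 rows, 10 columns
rowcor <- outer(seq(nrow(mat)), seq(nrow(mat)),
                Vectorize(function(x, y) cor(mat[x, ], mat[y, ])))
all.equal(rowcor, cor(t(mat)), check.attributes = FALSE) # should be TRUE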
I am attaching the code for solving a sparse linear system, but it seems to be taking too much time to solve the equation. Please tell me if I am doing something wrong.
Code 1:
// given to the below function:
// int[] row_inds, int[] col_inds, double[] vals, double[] x, double[] b;
int value = 49152;
Matrix A = new CCSMatrix(value, value);
for (int i = 0; i < value; i++) {
    A.set(row_inds[i], col_inds[i], vals[i]);
}
Vector B = new BasicVector(b);
GaussianSolver solver = new GaussianSolver(A);
Vector Y = solver.solve(B, LinearAlgebra.SPARSE_FACTORY);
for (int i = 0; i < Y.length(); i++) {
    x[i] = Y.get(i);
}
Code 2:
// given to the below function:
// int[] row_inds, int[] col_inds, double[] vals, double[] x, double[] b;
int value = 49152;
Matrix A = new CCSMatrix(value, value);
for (int i = 0; i < value; i++) {
    A.set(row_inds[i], col_inds[i], vals[i]);
}
Vector B = new BasicVector(b);
LinearSystemSolver solver = A.withSolver(LinearAlgebra.FORWARD_BACK_SUBSTITUTION);
Vector X = solver.solve(B, LinearAlgebra.SPARSE_FACTORY);
System.out.println("solved for vector X");
for (int i = 0; i < X.length(); i++) {
    x[i] = X.get(i);
}
There was a performance bug in la4j. It has been fixed in the latest release, so it should work fine now (especially for sparse matrices with low density).
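If you want to verify the improvement on your own data after upgrading, a simple timing check around the solve call (plain Java, reusing the names from Code 2 above) could look like this:

long start = System.currentTimeMillis();
Vector X = solver.solve(B, LinearAlgebra.SPARSE_FACTORY);
System.out.println("solve took " + (System.currentTimeMillis() - start) + " ms");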
I must be missing something really simple here. I've got some JS code that creates simple linear systems (I'm trying to create the shortest line between two skew lines). I've gotten to the point where I have Ax = b, and need to solve for x. A is a 3 x 2 matrix, b is 3 x 1.
I have:
function build_equation_system(v1, v2, b) {
    var a = [ [v1.x, v2.x], [v1.y, v2.y], [v1.z, v2.z] ];
    var b = [ [b.x], [b.y], [b.z] ];
    return numeric.solve(a, b);
}
Numeric returns a 1 x 3 matrix of NaNs, even when there is a solution.
Using numeric you can do the following:
Create a function which computes the pseudoinverse of your A matrix:
function pinv(A) {
    return numeric.dot(numeric.inv(numeric.dot(numeric.transpose(A), A)), numeric.transpose(A));
}
Use that function to solve your linear least squares equation to get the coefficients.
var p = numeric.dot(pinv(a),b);
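For context, this is just the standard least-squares solution via the normal equations, x = (A^T A)^(-1) A^T b, which minimizes ||Ax - b||; that is exactly what numeric.dot(pinv(a), b) computes above.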
I tried your initial method of using numeric.solve and could not get it to work either so I'd be interested to know what the problem is.
A simple test...
var x = new Array(10);
var y = new Array(10);
for (var i = 0; i < 10; ++i) {
    x[i] = i;
    y[i] = i;
}

// Solve for the first order equation representing this data
var n = 1;

// Construct Vandermonde matrix.
var A = numeric.rep([x.length, n + 1], 1);
for (var i = 0; i < x.length; ++i) {
    for (var j = n - 1; j >= 0; --j) {
        A[i][j] = x[i] * A[i][j + 1];
    }
}

// Solves the system Ap = y
var p = numeric.dot(pinv(A), y);
p = [1, 2.55351295663786e-15]
I've used this method to recreate MATLAB's polyfit for Javascript use.
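If it helps, a rough polyfit-style wrapper along those lines might look like this (a sketch under the same assumptions, using the pinv function defined above; it only covers the basic fit, not everything MATLAB's polyfit does):

// Hypothetical helper: fits a degree-n polynomial to x/y data with numeric.js.
function polyfit(x, y, n) {
    // Vandermonde matrix: column j holds x^(n-j), last column is all ones.
    var A = numeric.rep([x.length, n + 1], 1);
    for (var i = 0; i < x.length; ++i) {
        for (var j = n - 1; j >= 0; --j) {
            A[i][j] = x[i] * A[i][j + 1];
        }
    }
    // Least-squares coefficients, highest power first (like MATLAB's polyfit).
    return numeric.dot(pinv(A), y);
}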