Is this subset closed under addition or multiplication and why? - linear-algebra

V is a subset of R^3 and consists of the vectors a{1,1,0} + b{0,1,1}, where a and b are real numbers.
I am confused as to how to determine if V is closed under addition and scalar multiplication. I understand that V is closed if the sum of any two of its vectors, and any scalar multiple of one of its vectors, stay within V, but the introduction of the scalars a and b has confused me.
Thanks!

Another way of writing a{1,1,0} would be {a, a, 0}, and b{0,1,1} would be {0, b, b}.
So V is the set of all vectors of the form {a, a, 0} + {0, b, b} = {a, a+b, b}. Is it possible to add two vectors from this set, or multiply one of them by a scalar, and still remain in the set?
Try it with concrete numbers, say a = 1 and b = 2, and see what happens.
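For concreteness, here is the general computation that hint points at, written out (the names a1, b1, a2, b2 and c are mine):
(a1{1,1,0} + b1{0,1,1}) + (a2{1,1,0} + b2{0,1,1}) = (a1 + a2){1,1,0} + (b1 + b2){0,1,1}
c * (a{1,1,0} + b{0,1,1}) = (c*a){1,1,0} + (c*b){0,1,1}
Ask yourself whether each result is again of the form a{1,1,0} + b{0,1,1} for some real a and b.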

Related

Find paths of length 4, starting from an adjacency matrix of a directed graph, considering only distinct edges?

Given an EREW-PRAM model, which allows me to use an arbitrary number of processors in parallel without conflicts in either read or write access, I need to find the number of paths of length 4, considering that I have an input node-node adjacency matrix A representing a directed graph and that I need to exclude paths that don't use distinct edges (e.g. (a,b),(b,a),(a,b),(b,a) is not a valid path).
I have a function that uses n^3 processors and calculates the matrix multiplication of two given matrices in time O(logn):
mult-matrix(A, A, n) => B --> gives me the paths of length 2.
mult-matrix(B, B, n) => C --> gives me the paths of length 4, but I think it considers paths that run across the same edges.
I tried subtracting 1 from elements of C that have a node u communicating with a node v in both directions, but I'm not sure it works.
How could I solve the problem considering that I just need to exclude some paths from the resulting matrix C?
Any working solution is appreciated, considering that the number of processors is constrained to n^3 and time must be O(logn) in the worst case. The exercises must be solved using a pseudo-pascal language, but given a working solution, I should be able to write the pseudocode by myself.
I think I found a solution in https://www.perlmonks.org/?node_id=522270
Given an input matrix A, I am able to calculate the adjacency matrix for paths of length 2, 3 and 4 with the provided function.
A2 is the adjacency matrix obtained by multiplying A*A and contains paths of length 2
A3 is obtained by multiplying A2*A and contains paths of length 3
A4 is obtained by multiplying A3*A and contains paths of length 4
In order to exclude the repeated edges, I have to compute the matrix C, obtained by doing an element-wise subtraction among the calculated matrices.
C[i,j] = A4[i,j] - A3[i,j] - A2[i,j] - A[i,j]
C contains the final result.
The following pseudocode solves the problem with an EREW-PRAM using O(n^3) processors and in time O(logn).
procedure paths_length_4(A, n)                      // Work = O(n^3 logn)
begin
    A2 := mult_matrix(A, A, n)                      // T = O(logn), P = O(n^3)
    A3 := mult_matrix(A2, A, n)                     // T = O(logn), P = O(n^3)
    A4 := mult_matrix(A3, A, n)                     // T = O(logn), P = O(n^3)
    for all i,j where 1 ≤ i ≤ n, 1 ≤ j ≤ n pardo    // P = O(n^2)
        C[i,j] := A4[i,j] - A3[i,j] - A2[i,j] - A[i,j]
end
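For anyone who wants to sanity-check the subtraction step numerically, here is a sequential sketch of the same element-wise formula (this is not the EREW-PRAM algorithm; NumPy and the function name are my own choices):

import numpy as np

def paths_length_4_counts(A):
    # Sequential sketch of C = A4 - A3 - A2 - A, element-wise,
    # mirroring the formula used in the pseudocode above.
    A = np.asarray(A)
    A2 = A @ A      # walks of length 2
    A3 = A2 @ A     # walks of length 3
    A4 = A3 @ A     # walks of length 4
    return A4 - A3 - A2 - A

# Example: a small directed graph as a 0/1 adjacency matrix.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
print(paths_length_4_counts(A))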

Get Nth combination of a set of elements with loose conditions

Given:
A list of symbols of size M
The desired size of combination L
Symbols may occur any number of times in a combination
All permutations of any combination of the symbols must be taken into the account
Example: for a list of symbols (a, b, c) and L=4, all of the combinations (a, a, a, a), (a, b, a, c), (a, c, b, b) and so on are valid. For lack of a better term, I called these "loose combinations".
The particular ordering of the combinations is not important. Given the combination index N, the algorithm should return a unique combination from the set of possible combinations that satisfy the conditions. My guess is that the most natural order would be to consider the combinations as numbers of radix M and length L, so that the normal number order would apply, but that is not strictly necessary to follow.
What is the algorithm to find the Nth combination?
I'm not sure how to find the answer myself, and I have searched for an answer to this particular set of conditions elsewhere, but did not find one. All the questions I did find either are not interested in combinations with repeated elements like (a, a, b, b), or treat rearrangements such as (a, a, b, c), (a, b, c, a) and (a, c, a, b) as the same combination.
As you figured out already, you are essentially interested in enumerating the numbers with up to L digits in base M, i.e. the integers 0 to M^L - 1 written as L-digit base-M strings.
So, a solution might look like this:
Define a bijection {0, …, M-1} -> Symbols, i.e. enumerate your symbols.
For any non-negative integer N < M^L, determine its base M representation.
Easily done by repeated modulo M and rounded down division by M.
Without loss of generality, this has length L, by adding leading zeroes as needed.
Use your bijection to convert this list of digits 0 to M-1 to a loose combination of symbols.
So, let's go into detail on this part:
Easily done by repeated modulo M and rounded down division by M.
Pseudocode:
int a[L];
for int i from 0 to L-1 do
    a[i] = N % M; // Should go from 0 to M-1
    N = N / M;    // Rounded down, of course
done
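A compact Python version of the whole mapping, as a sketch (the function name and the digit ordering, with the first symbol corresponding to the least significant digit, are my own choices):

def nth_loose_combination(symbols, L, N):
    # Return the N-th length-L tuple over symbols, ordered as base-M numbers,
    # where M = len(symbols) and valid N are 0 .. M**L - 1.
    M = len(symbols)
    if not 0 <= N < M ** L:
        raise ValueError("N out of range")
    digits = []
    for _ in range(L):
        digits.append(N % M)   # next base-M digit
        N //= M                # rounded-down division
    return tuple(symbols[d] for d in digits)

# Example from the question: symbols (a, b, c), L = 4.
print(nth_loose_combination("abc", 4, 0))    # ('a', 'a', 'a', 'a')
print(nth_loose_combination("abc", 4, 80))   # ('c', 'c', 'c', 'c')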

Are these functions column-major or row-major?

I'm comparing two different linear math libraries for 3D graphics using matrices. Here are two similar Translate functions from the two libraries:
static Matrix4<T> Translate(T x, T y, T z)
{
Matrix4 m;
m.x.x = 1; m.x.y = 0; m.x.z = 0; m.x.w = 0;
m.y.x = 0; m.y.y = 1; m.y.z = 0; m.y.w = 0;
m.z.x = 0; m.z.y = 0; m.z.z = 1; m.z.w = 0;
m.w.x = x; m.w.y = y; m.w.z = z; m.w.w = 1;
return m;
}
(c++ library from SO user prideout)
static inline void mat4x4_translate(mat4x4 T, float x, float y, float z)
{
mat4x4_identity(T);
T[3][0] = x;
T[3][1] = y;
T[3][2] = z;
}
(linmath c library from SO user datenwolf)
I'm new to this stuff but I know that the order of matrix multiplication depends a lot on whether you are using a column-major or row-major format.
To my eyes, these two are using the same format, in that in both the first index is treated as the row, the second index is the column. That is, in both the x y z are applied to the same first index. This would imply to me row-major, and thus matrix multiplication is left associative (for example, you'd typically do a rotate * translate in that order).
I have used the first example many times in a left associative context and it has been working as expected. While I have not used the second, the author says it is right-associative, yet I'm having trouble seeing the difference between the formats of the two.
To my eyes, these two are using the same format, in that in both the first index is treated as the row, the second index is the column.
Looks may be deceiving, but in fact the first index in linmath.h is the column index. C and C++ specify that in a multidimensional array defined like this
sometype a[n][m];
there are n times m elements of sometype in succession. Whether that is row-major or column-major order depends solely on how you interpret the indices. Now OpenGL defines 4×4 matrices to be indexed in the following linear scheme
0 4 8 c
1 5 9 d
2 6 a e
3 7 b f
If you apply the rules of C++ multidimensional arrays, you'd add the following column/row designation
----> n
| 0 4 8 c
| 1 5 9 d
V 2 6 a e
m 3 7 b f
This remaps the linear indices into the 2-tuples
0 -> 0,0
1 -> 0,1
2 -> 0,2
3 -> 0,3
4 -> 1,0
5 -> 1,1
6 -> 1,2
7 -> 1,3
8 -> 2,0
9 -> 2,1
a -> 2,2
b -> 2,3
c -> 3,0
d -> 3,1
e -> 3,2
f -> 3,3
Okay, OpenGL and some math libraries use column-major ordering, fine. But why do it this way and break with the usual mathematical convention that in M_ij the index i designates the row and j the column? Because it makes things look nicer. You see, a matrix is just a bunch of vectors: vectors that can, and usually do, form a coordinate base system.
Think of a picture of a standard coordinate frame. The axes X, Y and Z are essentially vectors. They are defined as
X = (1,0,0)
Y = (0,1,0)
Z = (0,0,1)
Wait a moment, doesn't that look like an identity matrix? Indeed it does, and in fact it is!
However, written as it is, the matrix has been formed by stacking row vectors. And the rules for matrix multiplication essentially tell you that a matrix formed by row vectors transforms row vectors into row vectors by left-associative multiplication, while column-major matrices transform column vectors into column vectors by right-associative multiplication.
Now this is not really a problem, because a left-associative scheme can do the same things a right-associative one can; you just have to swap rows for columns (i.e. transpose everything) and reverse the order of operands. In the end, left vs. right and row vs. column are just notational conventions in which we write things.
And the typical mathematical notation is (for example)
v_clip = P · V · M · v_local
This notation makes it intuitively visible what's going on. Furthermore, in programming the character = usually designates assignment from right to left. Some programming languages are more mathematically influenced, like Pascal or Delphi, and write it :=. Anyway, with row-major ordering we'd have to write it
v_clip = v_local · M · V · P
and to the majority of mathematical folks this looks unnatural, because technically M, V and P are linear operators (yes, they're also matrices and linear transforms), and operators conventionally go between the assignment and the variable they act on.
So that's why we use the column-major format: it looks nicer. Technically it could be done using a row-major format as well. And what does this have to do with the memory layout of matrices? Well, when you use column-major notation, you want direct access to the base vectors of the transformation matrices, without having to extract them element by element. By storing the numbers in column-major order, all it takes to access a certain base vector of a matrix is a simple offset into linear memory.
I can't speak for the code example of the other library, but I'd strongly assume that it treats the first index as the slower-incrementing index as well, which makes it work as column-major if subjected to the notation of OpenGL. Remember: column major & right associativity == row major & left associativity.
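A small, plain-Python check of the memory-layout point (my own illustration, not code from either library): build the translation matrix exactly the way both snippets fill it, flatten it in C order, and look at where x, y, z land in linear memory.

def translate(x, y, z):
    m = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0],
         [  x,   y,   z, 1.0]]   # m[3][0..2] = x, y, z, as in both snippets
    return m

m = translate(2.0, 3.0, 4.0)
flat = [value for row in m for value in row]   # C order: first index slowest
print(flat[12:15])                             # [2.0, 3.0, 4.0]

Linear offsets 12, 13 and 14 are exactly the fourth base vector (c, d, e) in the OpenGL indexing scheme shown above, i.e. the translation column sits contiguously in memory.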
The fragments posted are not enough to answer the question. They could be row-major matrices stored in row order, or column-major matrices stored in column order.
It may be more obvious if you look at how a vector is treated when multiplied with an appropriate matrix. In a row-major system, you would expect the vector to be treated as a single row matrix, whereas in a column-major system it would similarly be a single column matrix. That then dictates how a vector and a matrix may be multiplied. You can only multiply a vector with a matrix as either a single column on the right, or a single row on the left.
The GL convention is column-major, so a vector is multiplied to the right.
D3D is row-major, so vectors are rows and are multiplied to the left.
This needs to be taken into account when concatenating transforms, so that they are applied in the correct order.
i.e.:
GL:
V' = CAMERA * WORLD * LOCAL * V
D3D:
V' = V * LOCAL * WORLD * CAMERA
However, they choose to store their matrices such that the in-memory representations are actually the same (until we get into shaders and some representations need to be transposed...)
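As a quick illustration that the two conventions express the same transform (my own NumPy sketch; the matrices here are written in mathematical row/column layout, and the memory-storage question discussed above is separate):

import numpy as np

# Column-vector (GL-style) convention: v' = M * v
M_gl = np.array([[1, 0, 0, 2],
                 [0, 1, 0, 3],
                 [0, 0, 1, 4],
                 [0, 0, 0, 1]], dtype=float)   # translation by (2, 3, 4)
v = np.array([1, 1, 1, 1], dtype=float)
print(M_gl @ v)        # [3. 4. 5. 1.]

# Row-vector (D3D-style) convention: v' = v * M, with the transposed matrix
M_d3d = M_gl.T
print(v @ M_d3d)       # same components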

Mathematica Index Equation (basic algebra)

I am currently working on a Mathematica project to calculate Riemann's sums and put them in a table. I am having trouble printing the row numbers (intervals). (The row numbers are also parameters to the secondary functions). I don't know of any way to just access the index of the iterator in a Mathematica Table, so I am trying to compute them using the function parameters.
Here is an example of what I'd like to print, for the integral of x^2 over the range {0, 1}, with 10 subdivisions.
tableRiemannSums[#^2 &, {0, 1}, 10]
I need to figure out what the index of each iteration is, based on the value of the current
subdivision k, the range of the integral {a, b}, and the number of subdivisions, n. Below is the main piece of code.
tableRiemannSums[fct_, {a_, b_}, n_] := Table[{'insert index here',
leftRiemannSum[fct, {a, b}, 'insert index here'],
rightRiemannSum[fct, {a, b}, 'insert index here']},
{k, a, b - (N[b - a]/n), N[b - a]/n}]
In the above equation, the line
{k, a, b - (N[b - a]/n), N[b - a]/n}]
means the range of the table is k as k goes from 'a' to 'b - ((b - a)/n)' in steps of size '(b - a)/n'.
In each of the places where my code says 'insert index here,' I need to put the same equation. Right now, I am using 'n * k + 1' to calculate the index, which is working for positive ranges, but breaks when I have a range like {a,b} = {-1, 1}.
I think this is a fairly straightforward algebra problem, but I have been racking my brain for hours and can't find a general equation.
(I apologize if this is a duplicate question - I tried searching through the Stack overflow archives, but had a hard time summarizing my question into a few key words.)
I finally figured out how to solve this. I was overthinking the range, rather than relying on the inner functions to control it. I rewrote the function as:
tableRiemannSums[fct_, {a_, b_}, n_] := Table[{k,
    leftRiemannSum[fct, {a, b}, k],
    rightRiemannSum[fct, {a, b}, k]},
   {k, 1, n}]
For reference, here are the left and right sums (for anyone interested!):
leftRiemannSum[fct_, {a_, b_}, n_] :=
N[b - a]/n* Apply[Plus, Map[fct, Range[a, b - N[b - a] / n, N[b - a]/n]]]
rightRiemannSum[fct_, {a_, b_}, n_] :=
N[b - a]/n* Apply[Plus, Map[fct, Range[a + (N[b - a]/n), b, N[b - a]/n]]]
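For readers outside Mathematica, here is a rough Python transcription of the same idea (the function names and the row format are my own):

def left_riemann_sum(f, a, b, n):
    # Left Riemann sum of f over [a, b] with n equal subdivisions.
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def right_riemann_sum(f, a, b, n):
    # Right Riemann sum of f over [a, b] with n equal subdivisions.
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(1, n + 1))

def table_riemann_sums(f, a, b, n):
    # Rows of (k, left sum with k subdivisions, right sum with k subdivisions).
    return [(k, left_riemann_sum(f, a, b, k), right_riemann_sum(f, a, b, k))
            for k in range(1, n + 1)]

# Example from the question: integral of x^2 over {0, 1} with 10 subdivisions.
for row in table_riemann_sums(lambda x: x * x, 0.0, 1.0, 10):
    print(row)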
What you may want to consider is creating a function to make each line of the table. One argument to this function would be the row number.
Execute this function using MapIndexed, which will provide you a way to traverse your range as required while providing an incrementing row number.
(Create a list with the range of values, then apply your MapIndexed function to this list.)
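In Python terms (the thread itself is about Mathematica), that indexing idea is essentially what enumerate provides: a running row number alongside each value as you traverse the list. A tiny hypothetical illustration:

values = [0.1 * k for k in range(10)]          # some list of partition values
rows = [(index, value) for index, value in enumerate(values, start=1)]
print(rows[:3])                                # [(1, 0.0), (2, 0.1), (3, 0.2)]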

A more generalized expand.grid function?

expand.grid(a,b,c) produces all the combinations of the values in a, b and c in a matrix - essentially filling the volume of a three-dimensional cube. What I want is a way of getting slices or lines out of that cube (or a higher-dimensional structure), centred on the cube's centre point.
So assume a, b and c are all odd-length vectors (so they have a centre); in this case let's say they are of length 5. My hypothetical slice.grid function:
slice.grid(a,b,c,dimension=1)
returns a matrix of the coordinates of points along the three central lines. Almost equivalent to:
rbind(expand.grid(a[3],b,c[3]),
expand.grid(a,b[3],c[3]),
expand.grid(a[3],b[3],c))
almost, because it has the centre point repeated three times. Furthermore:
slice.grid(a,b,c,dimension=2)
should return a matrix equivalent to:
rbind(expand.grid(a,b,c[3]), expand.grid(a,b[3],c), expand.grid(a[3],b,c))
which is the three intersecting axis-aligned planes (with repeated points in the matrix at the intersections).
And then:
slice.grid(a,b,c,dimension=3)
is the same as expand.grid(a,b,c).
This isn't so bad with three parameters, but ideally I'd like to do this with N parameters passed to the function, expand.grid(a,b,c,d,e,f,dimension=4) - it's unlikely I'd ever want dimension greater than 3 though.
It could be done by doing expand.grid and then extracting those points that are required, but I'm not sure how to build that criterion. And I always have the feeling that this function exists tucked in some package somewhere...
[Edit] Right, I think I have the criterion figured out now - it's to do with how many times the central value appears in each row. If it's less than or equal to your dimension+1...
But generating the full matrix gets big quickly. It'll do for now.
Assuming a, b and c each have length 3 (and if there are 4 variables then they each have length 4, and so on), try this. It works by using 1:3 in place of each of a, b and c and then counting how many 3's are in each row. If there are four variables then it uses 1:4 and counts how many 4's are in each row, etc. It uses this as the index to select out the appropriate rows from expand.grid(a, b, c):
slice.expand <- function(..., dimension = 1) {
    L <- lapply(list(...), seq_along)
    n <- length(L)
    ix <- rowSums(do.call(expand.grid, L) == n) >= (n - dimension)
    expand.grid(...)[ix, ]
}
# test
a <- b <- c <- LETTERS[1:3]
slice.expand(a, b, c, dimension = 1)
slice.expand(a, b, c, dimension = 2)
slice.expand(a, b, c, dimension = 3)
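For comparison, here is a rough Python analogue of the selection idea (my own sketch with itertools, not a transcription of the R function): keep a grid point when at least n - dimension of its coordinates sit at the centre index of their axis, which is one way to formalise the counting criterion discussed above.

from itertools import product

def slice_grid(*vectors, dimension=1):
    # Keep points of the full grid with at least (len(vectors) - dimension)
    # coordinates at the centre index of their (odd-length) axis.
    centers = [len(v) // 2 for v in vectors]
    n = len(vectors)
    kept = []
    for idx in product(*(range(len(v)) for v in vectors)):
        at_center = sum(i == c for i, c in zip(idx, centers))
        if at_center >= n - dimension:
            kept.append(tuple(v[i] for v, i in zip(vectors, idx)))
    return kept

# Example analogous to the R test: three length-3 axes.
a = b = c = ["A", "B", "C"]
print(slice_grid(a, b, c, dimension=1))   # the three central lines (7 points; the centre appears once)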
