Cross product in TinyOS (nesC)?

How can I compute the cross product of two vectors in TinyOS (nesC)?
I have position = p1 + x*ex + y*ey, where ex and ey are vectors.
I have searched for this but couldn't find anything helpful. Is there a good way to do it? The definition of the cross product I found involves an angle, which I don't know how to get from the two vectors.

If you have two vectors a and b, with a = (a1, a2, a3) and b = (b1, b2, b3), then the cross product can be computed with the following formula:
a x b = (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
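For example, in plain C (nesC modules use C syntax), this could look like the sketch below; the struct and function names are purely illustrative, not part of any TinyOS interface:

typedef struct { float x, y, z; } Vec3;

Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r;
    r.x = a.y * b.z - a.z * b.y;
    r.y = a.z * b.x - a.x * b.z;
    r.z = a.x * b.y - a.y * b.x;
    return r;
}

No angle is needed: the component formula above is all the definition you have to implement.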

Related

How to approximately match bowtie graphs using igraph?

I'm trying to learn how match_vertices works in igraph so that I can use it to join node and attribute information from graphs generated by a few different black box computer programs. I tried looking at the single provided example, but it was too complicated for me to understand. So, I tried to throw together an even simpler toy example which I am trying to understand.
library(igraph)
bow = make_graph(~ A - B - C - A - D - E - A)
tie = make_graph(~ a - e - d - a - c - b - a)
isomorphic(bow, tie)
It looks to me like the code should be:
A = get.adjacency(bow)
B = get.adjacency(tie)
P0 = diag(nrow(A))
corr = match_vertices(A, B,
                      start = P0,
                      m = 0,
                      iteration = 30)
corr$P
The permutation matrix didn't seem to change: the resulting permutation matrix is the same as the starting permutation matrix. Why is that? I defined a random permutation matrix and repeated the exercise to see if that was always the case. It was!
random_permutation = function(n, ...) {
  P = diag(n)
  i = sample(1:n, ...)
  P[i, , drop = FALSE]
}
Can anyone recommend some simple toy examples of using match_vertices that demonstrate its main features? In particular, examples that:
- show how to approximately match two graphs
- show how to approximately match two graphs when some vertices are known
- show how to approximately match two graphs when they have different numbers of vertices
Also, are there any ways of matching graphs when the seeds aren't known a priori but the nodes have maximally consistent attributes?

Shortest keyboard distance typing

Can anyone help me with this problem?
We have a grid of MxN characters from some specific alphabet, for example S = {A, B, C, D}.
The cursor starts at position (1,1), and we can move it with the arrow keys (up, down, left, right) and press Enter to select a character (just like entering a nickname in old games). Given an input string over alphabet S, what is the minimum number of operations, where every operation is weighted the same (e.g. moving right costs as much as selecting a character)? The same character can also occur multiple times in the matrix.
Example:
alphabet S={A,B,C,D}
matrix :
ABDC
CADB
ABAA
and input string ADCABDA.
My incomplete solution would be:
Construct a directed grid graph and find the shortest path from (1,1) to the end character, with the in-between characters treated like towns in TSP, and then build the optimal final path from optimal subpaths using dynamic programming. The problem is that you could end up with many possible end characters, and I have no idea how to construct a longer optimal path from smaller optimal subpaths.
You should construct a graph with nodes something like this:
      A1          A1          A1
      A2  D1  C1  A2  B1  D1  A2
Start A3  D2  C2  A3  B2  D2  A3  End
      A4          A4  B3      A4
      A5          A5          A5
where there are edges connecting each node in a column to each node in the next column. Start is (1,1) and End is wherever. The edge weights are the "taxicab" distances between each pair of keys.
Now it's a fairly straightforward dynamic programming problem. You can start at either end; it's probably conceptually simpler to start at Start. Keep track of the minimum cost so far to reach each node.
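As a rough illustration of that column-by-column approach, here is a minimal sketch in plain C (names and signatures are illustrative; the grid is passed as a flat M*N character array, positions are zero-based, and each Enter press is counted as one operation):

#include <limits.h>
#include <stdlib.h>
#include <string.h>

static long manhattan(int r1, int c1, int r2, int c2) {
    return labs(r1 - r2) + labs(c1 - c2);   /* "taxicab" distance between keys */
}

/* Minimum number of moves + Enter presses to type `word`, starting at (0,0).
   Returns LONG_MAX if some letter of `word` never appears in the grid. */
long min_typing_cost(int M, int N, const char *grid, const char *word)
{
    int L = (int)strlen(word);
    long *prev = malloc(sizeof(long) * M * N);
    long *cur  = malloc(sizeof(long) * M * N);

    /* First column: walk from (0,0) to an occurrence of word[0], then press Enter. */
    for (int r = 0; r < M; r++)
        for (int c = 0; c < N; c++)
            prev[r*N + c] = (grid[r*N + c] == word[0])
                          ? manhattan(0, 0, r, c) + 1 : LONG_MAX;

    /* Each later column: best cost over all nodes of the previous column. */
    for (int l = 1; l < L; l++) {
        for (int r = 0; r < M; r++)
            for (int c = 0; c < N; c++) {
                long best = LONG_MAX;
                if (grid[r*N + c] == word[l]) {
                    for (int pr = 0; pr < M; pr++)
                        for (int pc = 0; pc < N; pc++)
                            if (prev[pr*N + pc] != LONG_MAX) {
                                long cost = prev[pr*N + pc]
                                          + manhattan(pr, pc, r, c) + 1;
                                if (cost < best) best = cost;
                            }
                }
                cur[r*N + c] = best;
            }
        memcpy(prev, cur, sizeof(long) * M * N);
    }

    long answer = LONG_MAX;   /* "End" node: take the best over the last column */
    for (int i = 0; i < M * N; i++)
        if (prev[i] < answer) answer = prev[i];
    free(prev);
    free(cur);
    return answer;
}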
You could use 3D dynamic programming, where each state is (x, y, l) - (x, y) representing current position and l representing what letter you are at.
To explain further: you start at state (0, 0, 0). The first letter is "A". You can try all A's, and the distance to each one is the Manhattan distance (http://en.wikipedia.org/wiki/Taxicab_geometry). The solution for (0, 0, 0) is the minimum over all those possibilities.
At each step, repeat the above process. Note the importance of memoising each state. In the sample code below, memo acts as a function; in reality you would use an array.
Here is sample pseudo-code:
f(x, y, l):
    if memo(x, y, l) != -1:
        return memo(x, y, l)          # Check if already calculated.
    if l == length(word):
        return memo(x, y, l) = 0      # Ending condition.
    memo(x, y, l) = inf
    next_letter = word[l]
    for each (x2, y2) in grid that contains next_letter:
        distance = |x2 - x| + |y2 - y|
        next_calc = f(x2, y2, l+1)
        memo(x, y, l) = min(memo(x, y, l), distance + next_calc)
    return memo(x, y, l)
Set all memo entries to -1 initially, so we know that no states have been calculated yet.
Solution is f(0, 0, 0).
Let me know which steps I need to clarify further.

How to determine if a vector is between two other vectors?

I am looking for a fast and effective way to determine whether vector B lies within the smaller angle between vector A and vector C. Normally I would use the perpendicular dot product to determine which side of each line B lies on, but in this case it is not so simple, because of the following:
None of the vectors can be assumed to be normalized and normalizing them is an extra step I would prefer to avoid.
I have no clear notion as to which side is the smallest angle so it is hard to say which side of the line is good or not.
It is possible for A and B to be co-linear or exactly 180 degrees apart in which case I want to return false.
While I am working in a 3D environment, it is easy for me to simplify this to 2D if that makes things easier and, more importantly, faster. This test will be used in an algorithm that needs to run as fast as possible.
If there were some easy and efficient method to determine which direction my perpendicular vectors should both point, I could use the two dot products for my test.
Another approach I have been considering, without much success so far, is using a matrix. In theory, from what I understand of matrix transforms, I should be able to use A and C as basis vectors. Then, multiplying B by the matrix, I should be able to test which quadrant B lies in by checking whether X and Y are both positive. If I could get this approach to work it would likely be the best, since one matrix multiplication should be faster than two dot products, and I would not have to worry about which side has the smaller angle.
The problem is that, from my tests, I cannot simply use A and C as bases, multiply normally, and get correct behavior. I am really not sure what I am doing wrong here. I have run across the term "vector spaces" a few times, which as near as I can figure is a very similar concept to matrix transforms without any requirement for orthogonal or orthonormal bases. Is it the same thing as a matrix transform? If not, might there be a better approach, and how would I use it?
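Concretely, the matrix idea described above amounts to expressing B in the (A, C) basis, i.e. solving B = u*A + v*C by multiplying B with the inverse of the 2x2 matrix whose columns are A and C; B then lies inside the angle spanned by A and C exactly when u and v are both non-negative. A minimal 2D sketch in plain C (illustrative names, assuming A and C are not collinear):

int inside_cone(double ax, double ay, double bx, double by, double cx, double cy) {
    double det = ax * cy - ay * cx;        /* zero when A and C are collinear */
    if (det == 0.0) return 0;
    double u = (cy * bx - cx * by) / det;  /* coefficient of A */
    double v = (ax * by - ay * bx) / det;  /* coefficient of C */
    return u >= 0.0 && v >= 0.0;
}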
Just to give a more visual explanation of what I am talking about:
@Aki Suihkonen
I can't seem to get it working, so I coded up a mock case I could run through to see if I can figure something out.
For this case using
Ax 2.9579773 Ay 3.315979
Cx 2.5879822 Cy 5.1630249
For B I rotated around the four quadrants the vectors divide the space up into.
The signs I got:
- For Q1 --
- For Q2 +-
- For Q3 +-
- For Q4 --
Assuming I rotated around the environment in the same direction as in the image, which I am fairly sure I did.
I think Aki's solution is close, but there are cases where it doesn't work:
From his solution:
return (ay * bx - ax * by) * (ay * cx - ax * cy) < 0;
This is equivalent to checking whether the cross product between B and A has the same sign as the cross product between C and A.
The sign of the cross product (U x V) tells you whether V lies on one side of U or the other (out of the board, into the board). In most coordinate systems, if U needs to rotate counter-clockwise to reach V (the cross product points out of the board), then the sign will be positive.
So Aki's solution checks to see if B needs to rotate in one direction to get to A, while C needs to rotate in the other direction. If this is the case, B is not within A and C. This solution doesn't work when you don't know the 'order' of A and C, as follows:
To know for certain whether B is within A and C you need to check both ways. That is, the rotation direction from A to B should be the same as from A to C, and the rotation direction from C to B should be the same as from C to A.
This reduces to:
if (AxB * AxC >= 0 && CxB * CxA >= 0)
// then B is definitely inside A and C
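As a minimal sketch (plain C, illustrative names), that two-sided test can be written with the z-component of the 2D cross product:

static double cross2d(double ux, double uy, double vx, double vy) {
    return ux * vy - uy * vx;               /* z-component of U x V */
}

/* Nonzero when B lies inside the angle spanned by A and C. */
int between(double ax, double ay, double bx, double by, double cx, double cy) {
    double axb = cross2d(ax, ay, bx, by);   /* A x B */
    double axc = cross2d(ax, ay, cx, cy);   /* A x C */
    double cxb = cross2d(cx, cy, bx, by);   /* C x B */
    double cxa = cross2d(cx, cy, ax, ay);   /* C x A */
    return axb * axc >= 0 && cxb * cxa >= 0;
}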
One method to think about this is to regard all these vectors A, B, C as complex numbers.
Multiplying both A and C by B*, the complex conjugate of B, rotates the two resulting vectors in the complex plane so that the reference axis (B * Conj(B)) is now the real axis (y = 0) -- and that axis doesn't need to be calculated. One then only has to check whether the signs of the 'y' (imaginary) components differ. Both resulting vectors are also scaled by the same length |B|.
`return sign(Imag(A * Conj(B))) != sign(Imag(C * Conj(B)));`
A = ax + i * ay; B = bx + i * by; C = cx + i * cy;
Conj(B) = bx - i * by;
A * B = (ax * bx - ay * by) + i * (ax * by + ay * bx);
I think this equation leads to even better performance, as only the Imaginary component of the multiplication is needed.
As a full solution, this converts to:
return (ay * bx - ax * by) * (ay * cx - ax * cy) < 0;
Multiplying the two terms is a shortcut for:
return Sign(ay * bx - ax * by) != Sign(ay * cx - ax * cy);
Without complex numbers, the problem can also be seen as vector B being { R*cos(beta), R*sin(beta) }, which can be represented as a scaled rotation matrix.
    [ cb  -sb ]   [ bx  -by ]
R * [ sb   cb ] = [ by   bx ],   where cb = cos(beta), sb = sin(beta),
                                 cos(-beta) = cos(beta), sin(-beta) = -sin(beta)
Multiplying [ax,ay], [cx,cy] with the transpose of the scaled matrix [bx by, -by bx] affects the lengths of [ax, ay] * rotMatrix(-beta), [cx, cy] * rotMatrix(-beta) in exactly the same way.
In polar coordinates, you would just be asking if θA < θB < θC. So transform to polar first:
a_theta = ax ? atan(ay / ax) : sign(ay) * pi / 2

How to calculate vec4 cross product with glm?

Why does this throw a compilation error: no matching function for call to ‘cross(glm::vec4&, glm::vec4&)’
glm::vec4 a;
glm::vec4 b;
glm::vec4 c = glm::cross(a, b);
but it works fine for vec3?
There is no such thing as a 4D vector cross-product; the operation is only defined for 3D vectors. Well, technically, there is a seven-dimensional vector cross-product, but somehow I don't think you're looking for that.
Since 4D vector cross-products aren't mathematically reasonable, GLM doesn't offer a function to compute it.
What do your vec4's represent? Like Nicol said, cross products are only defined for 3D vectors. The cross product operation is used to find a vector that is orthogonal to the two input vectors. So if your vec4's represent 3D homogeneous vectors in the form {x, y, z, w}, then the w-component doesn't matter to you; you can simply ignore it.
A workaround could go as follows:
vec4 crossVec4(vec4 _v1, vec4 _v2){
    vec3 vec1 = vec3(_v1[0], _v1[1], _v1[2]);
    vec3 vec2 = vec3(_v2[0], _v2[1], _v2[2]);
    vec3 res = cross(vec1, vec2);
    return vec4(res[0], res[1], res[2], 1);
}
Simply turn your vec4's into vec3's, perform the cross product, then add a w-component of 1 back into it.
The generalization of the cross product is the wedge product, and the wedge product of two vectors is a 2-form, also known as a bivector.
In 3-space, the 2-form kinda looks like a vector, but it behaves quite differently. Suppose we have two non-collinear vectors tangent to a surface (aka tangent vectors). By taking the cross product of these vectors, we have a 2-form that represents the tangent plane. We can also represent that tangent plane by the vector normal to that plane (aka the normal vector). But the tangent and normal vectors are transformed differently, i.e. the normal vector is transformed by the inverse transpose of the matrix used to transform the tangent vectors.
In 4-space, the 2-form resulting from the wedge product of two vectors also represents the plane that contains the two vectors (this is also true in N-space). Similarly to the case in 3-space, we can have an alternate interpretation of that plane, but in 4-space, the complement to a plane is not a 4-vector, but another plane, both of which are represented with 6 components, not 4.
c1 * e1^e2 + c2 * e1^e3 + c3 * e1^e4 + c4 * e2^e3 + c5 * e2^e4 + c6 * e3^e4
Since glm doesn't provide the API for wedge products, you will have to roll your own. You can easily work out the algebra for the wedge product with two simple rules:
(1) ei ^ ei = 0
(2) ei ^ ej = -ej ^ ei
where the ei and ej are the component vectors (bases) of the vector space, e.g.
[a b c d] --> a * e1 + b * e2 + c * e3 + d * e4
The 7-dimensional vector referred to in a previous post is the geometric product of two vectors, which uses ei^ei=1 instead of rule (1) above, and is like a meld of the dot and cross products (or complex multiplication), which is more than what you want.
For more information, https://en.wikipedia.org/wiki/Exterior_algebra or https://en.wikipedia.org/wiki/Geometric_algebra .
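As a minimal sketch of rolling your own (plain C-style code with illustrative names, components listed in the basis order e1^e2, e1^e3, e1^e4, e2^e3, e2^e4, e3^e4 used above):

typedef struct { double x, y, z, w; } Vec4;

/* Wedge product a ^ b of two 4-vectors: the six bivector components
   (a_i*b_j - a_j*b_i) for i < j. */
void wedge4(Vec4 a, Vec4 b, double out[6]) {
    out[0] = a.x * b.y - a.y * b.x;   /* e1^e2 */
    out[1] = a.x * b.z - a.z * b.x;   /* e1^e3 */
    out[2] = a.x * b.w - a.w * b.x;   /* e1^e4 */
    out[3] = a.y * b.z - a.z * b.y;   /* e2^e3 */
    out[4] = a.y * b.w - a.w * b.y;   /* e2^e4 */
    out[5] = a.z * b.w - a.w * b.z;   /* e3^e4 */
}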
There is a shorter way to calculate the cross product using glm's GLM_SWIZZLE.
Just #define GLM_SWIZZLE before including any glm header. It's also helpful for lots of other tricks.
glm::vec4 a;
glm::vec4 b;
glm::vec4 c = glm::vec4( glm::cross( glm::vec3( a.xyz ), glm::vec3( b.xyz ) ), 0 );

How to find the two opposite normals of two segments?

I have two segments, AB and CD (in red). These two segments face each other. They are not completely parallel, but they will never be perpendicular to each other either.
From that, I need to find the two normals of these segments (in blue) that oppose each other (i.e. the two normals are outside ABCD). I know how to calculate the normals of the segments, but obviously each segment has two normals, and I cannot figure out how to programmatically select the ones I need. Any suggestion?
Calculate the vector v between the midpoints of the two segments, pointing from AB to CD. Now the projection of the desired normal to AB onto v must be negative and the projection of the desired normal to CD onto v must be positive. So just calculate the normals, check against v, and negate the normals if needed to make them satisfy the condition.
Here it is in Python:
# use complex numbers to define a minimal 2d vector datatype
def vec2d(x, y): return complex(x, y)
def rot90(v): return 1j * v
def inner_prod(u, v): return (u * v.conjugate()).real

def outward_normals(a, b, c, d):
    n1 = rot90(b - a)
    n2 = rot90(d - c)
    mid = (c + d - a - b) / 2
    if inner_prod(n1, mid) > 0:
        n1 = -n1
    if inner_prod(n2, mid) < 0:
        n2 = -n2
    return n1, n2
Note that I assume the endpoints define lines meeting the conditions in the problem. I also don't check for the edge case where the segments have the same midpoint; the notion of "outside" doesn't apply in that case.
I think there are two cases to consider:
Case 1: Intersection between lines occurs outside the endpoints of either segment.
In this case the midpoint method suggested by @Michael J. Barber will work for sure. So form a vector between the midpoints of the segments, compute the dot product of your normal vectors with this midpoint vector, and check the sign.
If you're computing the normal for lineA, the dot product of the normal with the vector midB -> midA should be +ve.
Case 2: Intersection between lines occurs inside the endpoints of one segment.
In this case form a vector between either one of the endpoints of the segment that does not enclose the intersection point and the intersection point itself.
The dot product of the normal for the segment that does enclose the intersection point and this new vector should be +ve.
You can find the outward normal for the other segment by requiring that the dot product between the two normals is -ve (which would only be ambiguous in the case of perpendicular segments).
I've assumed that the segments are not co-linear or actually intersecting.
Hope this helps.
You can reduce the four combinations for the signs as follows:
1. Calculate the dot product of the two normals; a negative sign indicates that either both point outside or both point inside. Assuming your normals have unit length, you can also detect parallelism: a dot product of magnitude one means the normals are parallel, with a positive value indicating that they point in the same direction and a negative value indicating opposite directions.
2. If the normals are not parallel: parametrize the lines as x(t) = x0 + t * n for a normal n and calculate the t for which the two lines intersect. A negative t indicates that both point outside. It is enough to do this for one of the normals, since you reduced the combinations from 4 to 2 in step 1.
3. If the normals are parallel: calculate the t at which each normal hits the midpoint between your segments. As in step 2, it is enough to do this for one of the normals, since you reduced the combinations from 4 to 2 in step 1.
