Julia's map and comprehension syntax make it easy to map over all elements of a multidimensional array.
Is there a similar support for mapping over slices of an array?
As a silly example, given a 3x3x100 array, I might want to map over all 100 of its 3x3 slices. I might, say, take the determinant of each 3x3 slice and end up with a 1x1x100 array of determinants.
Look at mapslices. For the question's example, suppose size(A) == (3,3,100). Calculating the 100 determinants of the 3x3 slices can be done with: mapslices(det, A, (1,2)).
Note the resulting array is still 3-dimensional, and squeeze can be used to get rid of the size-1 dimensions. In the example:
squeeze(mapslices(det, A, (1,2)), (1,2))
This is more of a mathematical question.
I have a list of 2D coordinates of length N. (Nx2 list)
The coordinates are rounded (integer) numbers and form a region. The following is an example:
[image: the given points forming a region]
What I would like is to have a border around these points. Like the following:
[image: the same points with a border around them]
One option to do this is to:
go through the list, and for each coordinate i,
check each of the 8 possible neighbours j to see
whether j doesn't overlap with one of the given coordinates k, and
whether j doesn't overlap with an already-found border coordinate.
This works well, but needs N*N*8 calculations. For my N=1000 points: 8 million!
Does anyone know how this could be done more efficiently?
Best regards,
Martin
If the size of the grid is constrained and is on the order of N as well, you could do better and get to O(N) by making a 2-D array of ints the size of the grid.
Initialize the grid to zeros.
For each point in the list of points, set the point itself to negative in the grid array and set each neighbor that isn't negative to positive.
When you're done, each point in the 2-D grid array that's positive is the border.
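A minimal C++ sketch of these steps (it marks all points first and then their neighbors, which amounts to the same thing; findBorder, W, H, and the use of std::pair for coordinates are illustrative choices, and coordinates are assumed non-negative and inside the grid):

    #include <utility>
    #include <vector>

    // O(N) border marking on a W x H grid.
    // grid values: 0 = empty, -1 = input point, +1 = border.
    std::vector<std::pair<int,int>> findBorder(
        const std::vector<std::pair<int,int>>& points, int W, int H)
    {
        std::vector<int> grid(W * H, 0);
        auto at = [&](int x, int y) -> int& { return grid[y * W + x]; };

        for (auto [x, y] : points)          // the points themselves can never be border
            at(x, y) = -1;

        for (auto [x, y] : points)          // mark every non-point neighbor as border
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                    if (at(nx, ny) != -1)
                        at(nx, ny) = 1;
                }

        std::vector<std::pair<int,int>> border;
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (at(x, y) == 1)
                    border.emplace_back(x, y);
        return border;
    }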
Make a sorted container that enforces uniqueness (in the C++ STL, that's a std::set) of coordinates. Go through the points, adding each point's eight neighbors to the set if they aren't already in there. Then go through the points a second time, subtracting them from the set if they are in the set. The points that remain are the border. That's O(N*log(N)). In general that's the best you can do. But see my other answer for a better algorithm if additional criteria exist.
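And a sketch of this set-based variant (same illustrative coordinate type as above):

    #include <set>
    #include <utility>
    #include <vector>

    // O(N log N) border via a sorted, unique set of coordinates.
    // Works for arbitrary (also negative) integer coordinates.
    std::set<std::pair<int,int>> findBorderSet(
        const std::vector<std::pair<int,int>>& points)
    {
        std::set<std::pair<int,int>> border;

        for (auto [x, y] : points)              // pass 1: add all 8 neighbors
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (dx != 0 || dy != 0)
                        border.insert({ x + dx, y + dy });

        for (const auto& p : points)            // pass 2: remove the points
            border.erase(p);                    // themselves; the rest is border

        return border;
    }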
I'm trying to achieve smooth shading of triangles in my graphics program; however, I'm currently stuck on how to do it exactly. I've got two options.
Option 1: (per vertex)
Create a "zero" Vector.
Add the non-normalized normal of every incident triangle to the created vector.
Scale the resulting vector by 1 / incidentTriangleCount.
Return the normalized version of the resulting vector.
Option 2: (per vertex)
Create a "zero" Vector.
Add the normalized normal of every incident triangle to the created vector.
Scale the resulting vector by 1 / incidentTriangleCount.
Return the non-normalized version of the resulting vector.
Both approaches are giving me different results and I don't really know which one to take. Can anyone give me advice on this?
Always work with normalized normals. Then your two options merge into a single one :)
Besides, you have to be careful when using "every" incident triangle, because in this case your entire model will be smoothed, which is not good. E.g. a model of a pencil that actually has edges will look like a rounded one. Implement a threshold, i.e. only consider triangles whose normals have a relatively small angle between them.
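As a rough illustration of this advice, here is a minimal sketch, assuming normalized face normals and a per-vertex threshold test; Vec3 and every function name here are made up for the example:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  normalize(Vec3 v) {
        float len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // Smooth normal for one vertex from the normalized normals of its
    // incident triangles. 'reference' is the normal of the face the vertex
    // is being shaded for; faces whose normal deviates from it by more
    // than maxAngle (radians) are excluded, so hard edges stay hard.
    Vec3 vertexNormal(const Vec3* faceNormals, int count,
                      Vec3 reference, float maxAngle)
    {
        float cosThreshold = std::cos(maxAngle);
        Vec3 sum = { 0, 0, 0 };
        for (int i = 0; i < count; ++i)
            if (dot(faceNormals[i], reference) >= cosThreshold)
                sum = add(sum, faceNormals[i]);
        // The 1/count scaling cancels in the final normalization.
        return normalize(sum);
    }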
I have a series of transformations that take my object and put it somewhere else. I am manually multiplying these transformations for the programmable pipeline in GL/ES. I'm rotating around distant arbitrary points and also translating, and while I have no trouble getting my object to its final position, I'd like to know how I can extract the final 3D coordinate of that position after these transformations.
One option, suggested by this question, is to simply multiply your starting position by the final matrix and keep that result vector as the final coordinate. If so, what is the vector I use to represent my object's origin before these transformations? Because multiplying a matrix by my origin (0,0,0) simply results in a vector of zeroes.
The solution is surprisingly simple.
If I have a matrix M that is the final transformation created by all the matrix multiplications, then I can find the center of an object transformed by M by simply:
M * vector(0,0,0,1) // creates a 4D vector, whose first three components x, y, z are the coordinates
This is easily done manually in the code.
The key piece the question was missing was the exact vector to use for this multiplication.
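For an affine transformation, multiplying M by the homogeneous origin (0,0,0,1) simply picks out M's translation column. A minimal sketch, assuming a column-major float[16] in the OpenGL convention (Vec3 and the function name are illustrative):

    struct Vec3 { float x, y, z; };

    // Equivalent to M * (0,0,0,1) for an affine matrix: the transformed
    // origin is just the fourth (translation) column.
    Vec3 transformedOrigin(const float m[16])
    {
        return { m[12], m[13], m[14] };
    }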
I originally stored an object's origin and orientation in 3D space using 3 vectors representing the object's origin, forward, and up directions.
To apply the correct transformation to the modelview matrix stack, I compose the affine transformation matrix using these three vectors.
Translation is trivial; rotation, however, is applied by constructing the appropriate rotation matrix (depending on the angle and axis of rotation) and applying it to these three vectors.
I'm using this method for a very large number of objects, and the rotation/affine matrix composition is causing a performance bottleneck.
I'm wondering if there is a more sensible/efficient way to store the orientation?
I originally stored an object's origin and orientation in 3D space using 3 vectors representing the object's origin, forward and up directions.
Or in other words: you're storing a 3×3 matrix. The matrices OpenGL uses are just the same, though they're 4×4; the only differences to yours are that element (4,4) is always 1, the remaining elements of the fourth row are all 0 (the fourth column holds the translation, i.e. your origin vector), and the first column is the cross product of forward and up, usually called right.
This is actually the most concise and directly accessible way to represent an object's placement in 3D space. You can then apply any kind of transformation by performing a single matrix multiplication.
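A small sketch of composing such a matrix from the stored vectors, assuming the column-major OpenGL layout described above (Vec3 and composeMatrix are illustrative names):

    struct Vec3 { float x, y, z; };

    Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    // Columns: right, up, forward, origin (column-major float[16]).
    void composeMatrix(Vec3 origin, Vec3 forward, Vec3 up, float out[16])
    {
        Vec3 right = cross(forward, up);
        out[0] = right.x; out[4] = up.x; out[8]  = forward.x; out[12] = origin.x;
        out[1] = right.y; out[5] = up.y; out[9]  = forward.y; out[13] = origin.y;
        out[2] = right.z; out[6] = up.z; out[10] = forward.z; out[14] = origin.z;
        out[3] = 0;       out[7] = 0;    out[11] = 0;         out[15] = 1;
    }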
Another method is using a quaternion together with an offset vector. But quaternions must be turned into a matrix if you want your objects to be translatable (or you could chain up a lot of translation/rotation pairings for a transformation hierarchy, but using a matrix actually causes less overhead).
Other than memory, what's stopping you from storing the whole 4x4 affine matrix?
Better yet, ISTR that if the matrix is normalised the bottom row is usually [0, 0, 0, 1], so you only need to store the top three rows.
A quaternion is probably the most natural/efficient way to store an orientation (it should be better in all respects than forward/up vectors). A quaternion will take 4 values to store, and takes 10 multiplications and 15 additions to convert to a 3x3 rotation matrix -- no divisions or transcendental functions are required.
If you are especially pressed for space, you can probably get by with only 3 values, as you can generate the first element of a unit quaternion robustly from the remaining three. This will take an additional 3 multiplications, 3 additions, and a square root (it is also slightly tricky, since you need to make sure this first element is nonnegative...)
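A hedged sketch of both points, assuming the scalar part w was kept nonnegative when the quaternion was stored (Quat and the function names are illustrative; the matrix formula is the standard unit-quaternion conversion):

    #include <algorithm>
    #include <cmath>

    struct Quat { float w, x, y, z; };   // unit quaternion, w = scalar part

    // Rebuild w from the stored (x, y, z). Since q and -q encode the same
    // rotation, w can always be stored as nonnegative.
    Quat fromXYZ(float x, float y, float z)
    {
        float t = 1.0f - (x * x + y * y + z * z);
        float w = std::sqrt(std::max(t, 0.0f));   // clamp guards rounding error
        return { w, x, y, z };
    }

    // Unit quaternion to 3x3 rotation matrix (row-major).
    void toMatrix(Quat q, float m[9])
    {
        float xx = q.x * q.x, yy = q.y * q.y, zz = q.z * q.z;
        float xy = q.x * q.y, xz = q.x * q.z, yz = q.y * q.z;
        float wx = q.w * q.x, wy = q.w * q.y, wz = q.w * q.z;

        m[0] = 1 - 2 * (yy + zz); m[1] = 2 * (xy - wz);     m[2] = 2 * (xz + wy);
        m[3] = 2 * (xy + wz);     m[4] = 1 - 2 * (xx + zz); m[5] = 2 * (yz - wx);
        m[6] = 2 * (xz - wy);     m[7] = 2 * (yz + wx);     m[8] = 1 - 2 * (xx + yy);
    }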
Quaternion transformations may be faster than matrices on current GPGPUs, where access to global memory is much slower than to local memory.
According to these tables, you compute about twice as much but need to fetch less than half the memory.
For rendering, where the vertex shader uniforms are stored in local memory, the only reasonable choice is to use matrices.
Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
If they are different, then how do I calculate a vector given a 3d point?
The difference is that a vector is an algebraic object that may or may not be given as the set of coordinates in some space. (thanks to bungalobill for correcting my sloppiness).
A point is just a point given by coordinates. Generally, one can conflate the two. If you are given a set of coordinates, and told that they constitute a 'point' with no further information (choice of basis, etc), then you can just hand that set of numbers back and legitimately claim to have produced a vector.
The largest difference between the two is that it makes no sense to do things to one that you can do to the other. For example,
You can add vectors: <1 2 3> + <3 2 1> = <4 4 4>
You can multiply (or scale) a vector by a number (generally called a scalar)
2 * <1 1 1> = <2 2 2>
You can ask how far apart two points are: d((1, 2, 3), (3, 2, 1)) = sqrt((1 - 3)² + (2 - 2)² + (3 - 1)²) = sqrt(8) ≈ 2.83
A good intuitive way to think about the association between a vector and a point is that a vector tells you how to get from the origin (that one point in space to which we assign the coordinates (0, 0, 0)) to its associated point.
If you translate your coordinate system, then you get a new vector for the same point, although the coordinates that make up the point will undergo the same translation, so it's a pretty easy conflation to make between the two.
Likewise, if you rotate the coordinate system or apply some other transformation (e.g. a shear), then the coordinates and the vector associated to the point will also change.
It's also possible for a vector to be something else entirely: for example, a bounded function on the interval [0, 1] is a vector, because you can multiply it by a real number and add it to another function on the interval, and it will satisfy certain requirements (namely the axioms of a vector space). In this case one thinks of having one coordinate for each real number, x, in [0, 1], where the value of that coordinate is just f(x). So that's the easiest example of an infinite-dimensional vector space.
There are all sorts of vector spaces and the notion that a vector is a 'point and a direction' (or whatever it's supposed to be) is actually pretty vacuous.
A vector represents a change from one state to another. To create one, you need two states (in this case, points), and then you subtract the initial state from the final state in order to get the resultant vector.
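As a tiny sketch (Vec3 and vectorBetween are illustrative names; note that a point and a vector end up sharing the same numeric representation):

    struct Vec3 { float x, y, z; };

    // The vector carrying point a to point b: final state minus initial state.
    Vec3 vectorBetween(Vec3 a, Vec3 b)
    {
        return { b.x - a.x, b.y - a.y, b.z - a.z };
    }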
Vectors are a more general idea than a point in 3D space.
Vectors can have 2, 3, or n dimensions. They represent many quantities in the physical world (e.g., velocity, force, acceleration) besides position.
A mathematician would say that a vector is a first order tensor that transforms according to this rule:
u(i) = A(i, j) v(j), with an implied sum over the repeated index j
You need both point and vector because they are different. A point in 3D space denoting position is a vector, but not every vector is a point in 3D space.
Then there's the computer science notion of a vector as a container - it's an abstraction for an array of values or references. This is a different concept from a mathematician's idea of a vector, because every vector container need not obey the first order tensor transformation law (e.g. a Vector of OrderItems). That's yet another separate idea.
It's important to keep all these in mind when talking about vectors and points.
Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
Traditionally, a vector means a direction and a speed. A point could be considered a vector from the world origin over one time step (even though it may not be considered mathematically pure).
If they are different, then how do I calculate a vector given a 3d point?
target - tower (the point you want minus the point you're at) is the common mnemonic.
Be careful with your usage of this: the resulting vector is really the unit direction scaled by the distance between the points. If you want to turn it into something useful in a game application, you will need to normalize the vector first.
Example: Joe is at (10,0,0) and he wants to go to (10,10,0)
Target-Tower: (10,10,0)-(10,0,0)=(0,10,0)
Normalize the resulting vector: (0,1,0)
Apply "physics": (0,1,0) * speed*elapsed_time < speed = 3 and we'll say that the computer froze for a whole 2 seconds between the last step and this one for ease of computation >
=(0,6,0)
Add the resulting vector to Joes current point in space to get his next point in space: ... =(10,6,0)
Normalized vector = vector / sqrt(x*x + y*y + z*z)
...I think I have everything here
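Putting the steps of the worked example together, a minimal sketch (Vec3 and the helper names are made up for illustration):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
    Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // One movement step: Joe at (10,0,0), target (10,10,0),
    // speed 3, elapsed time 2 seconds -> new position (10,6,0).
    Vec3 step(Vec3 position, Vec3 target, float speed, float elapsed)
    {
        Vec3 direction = normalize(sub(target, position)); // (0,1,0)
        return add(position, scale(direction, speed * elapsed));
    }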
A vector is the change between states; a point is a static point. Two vectors can be parallel or perpendicular. You can take the (cross) product of two vectors, which is a third vector. You can multiply a vector by a constant. You can add two vectors.
All these operations are not allowed on a point. So, program-wise, if you think of both as C++ classes, there will be many such methods in the vector class but probably only Get and Set for the point.
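A rough sketch of that distinction as C++ classes (all names and the exact set of operations are illustrative):

    // A Vector3 supports algebra; a Point3 mostly just holds coordinates.
    struct Vector3 {
        float x, y, z;
        Vector3 operator+(const Vector3& v) const { return { x + v.x, y + v.y, z + v.z }; }
        Vector3 operator*(float s) const          { return { x * s, y * s, z * s }; }
        float   dot(const Vector3& v) const       { return x * v.x + y * v.y + z * v.z; }
        Vector3 cross(const Vector3& v) const {
            return { y * v.z - z * v.y, z * v.x - x * v.z, x * v.y - y * v.x };
        }
    };

    struct Point3 {
        float x, y, z;   // just coordinates: effectively get/set and nothing else
        Vector3 operator-(const Point3& p) const {  // the one algebraic exception:
            return { x - p.x, y - p.y, z - p.z };   // point minus point is a vector
        }
    };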
In the context of game mathematics there is no difference.
Points are elements of an affine space.† Vectors are elements of a vector (aka linear) space. When you choose an origin in an affine space, it automatically induces a linear structure on that affine space. The converse is also true: if you have a vector space, it already satisfies all the axioms of an affine space.
The fact is that when it comes to computation, the only way to represent an affine space numerically is to use tuples of numbers, which also form a vector space.
Each object in a game always has an origin, and it is crucial to know where it is. That origin is set relative to the origin of the world, which is set relative to the origin of the camera/viewport. The vertices of the object are represented as vectors -- offsets from the object origin. You use matrix multiplication to transform the objects -- that, too, is a purely vector-space operation (you cannot multiply an affine point by a matrix without specifying the origin first). Etc, etc... As we see, all those triplets of numbers that we might think of as 'points' are actually vectors in the local coordinate system.
So is there any reason to distinguish between the two outside the study of algebra? It is an unnecessary abstraction, and unnecessary abstractions are harmful (KISS). So my answer is no, just go with a single vector type.
† Or any topological space outside the context of game development.
A vector can be pictured as a directed line segment: a sequence of points that can be represented by just two of them, the starting and the ending point.
If you take the origin as the starting point, then you can describe your vector by giving only the ending point.