Trying to understand vectors a bit more.
What is the need for normalizing a vector?
If I have a vector, N = (x, y, z)
What do you actually get when you normalize it? I get the idea that you divide each component by the length: x/|N|, y/|N| and z/|N|. My question is, why do we do this - what do we get out of this equation?
What is the meaning or underlying purpose of doing this?
A bit of a maths question, I apologize, but I am really not clear in this topic.
For any vector V = (x, y, z), |V| = sqrt(x*x + y*y + z*z) gives the length of the vector.
When we normalize a vector, we actually calculate V/|V| = (x/|V|, y/|V|, z/|V|).
It is easy to see that a normalized vector has length 1. This is because:
| V/|V| | = sqrt((x/|V|)*(x/|V|) + (y/|V|)*(y/|V|) + (z/|V|)*(z/|V|))
= sqrt(x*x + y*y + z*z) / |V|
= |V| / |V|
= 1
Hence, normalized vectors are known as unit vectors (i.e. vectors with unit length).
Normalizing a vector only changes its magnitude, not its direction. Also, every vector pointing in the same direction gets normalized to the same vector (since magnitude and direction uniquely define a vector). Hence, unit vectors are extremely useful for providing directions.
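In code, that is all normalization is; here is a minimal Python sketch of the same computation (the function names are just mine, for illustration):
import math

def magnitude(v):
    # |V| = sqrt(x*x + y*y + z*z)
    x, y, z = v
    return math.sqrt(x*x + y*y + z*z)

def normalize(v):
    # V/|V| = (x/|V|, y/|V|, z/|V|); undefined for the zero vector
    m = magnitude(v)
    x, y, z = v
    return (x / m, y / m, z / m)

# e.g. normalize((3.0, 0.0, 4.0)) -> (0.6, 0.0, 0.8), which has length 1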
Note however, that all the above discussion was for 3 dimensional Cartesian coordinates (x, y, z). But what do we really mean by Cartesian coordinates?
Turns out, to define a vector in 3D space, we need some reference directions. These reference directions are canonically called i, j, k (or i, j, k with little caps on them - referred to as "i cap", "j cap" and "k cap"). Any vector we think of as V = (x, y, z) can actually then be written as V = xi + yj + zk. (Note: I will no longer call them by caps, I'll just call them i, j, k). i, j, and k are unit vectors in the X, Y and Z directions and they form a set of mutually orthogonal unit vectors. They are the basis of all Cartesian coordinate geometry.
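As a tiny illustration of that decomposition (plain Python, entirely my own example):
# Cartesian basis vectors
i = (1, 0, 0)
j = (0, 1, 0)
k = (0, 0, 1)

def compose(x, y, z):
    # V = x*i + y*j + z*k, computed component by component
    return tuple(x*ic + y*jc + z*kc for ic, jc, kc in zip(i, j, k))

# compose(2, 3, 5) == (2, 3, 5): the coordinates are just the weights on i, j, k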
There are other coordinate systems (such as cylindrical and spherical coordinates), and while their coordinates are not as straightforward to read as (x, y, z), they too are built on a set of 3 mutually orthogonal unit vectors which form the basis: the 3 coordinates are multiplied by these basis vectors and summed to produce a vector.
So, the above discussion clearly says that we need unit vectors to define other vectors, but why should you care?
Because sometimes, only the magnitude matters. That's when you use a "regular" number (something like 4 or 1/3 or 3.141592653 - nope, for all you OCD freaks, I am NOT going to put Pi there - that shall stay a terminating decimal, just because I am evil incarnate). You would not want to throw in a pesky direction, would you? I mean, does it really make sense to say that I want 4 kilograms of watermelons facing West? Unless you are some crazy fanatic, of course.
Other times, only the direction matters. You just don't care for the magnitude, or the magnitude is just too large to fathom (something like infinity, only that no one really knows what infinity really is - All Hail The Great Infinite, for He has Infinite Infinities... Sorry, got a bit carried away there). In such cases, we use normalization of vectors. For example, it doesn't mean anything to say that we have a line facing 4 km North. It makes more sense to say we have a line facing North. So what do you do then? You get rid of the 4 km. You destroy the magnitude. All you have remaining is the North (and Winter is Coming). Do this often enough, and you will have to give a name and notation to what you are doing. You can't just call it "ignoring the magnitude". That is too crass. You're a mathematician, and so you call it "normalization", and you give it the notation of the "cap" (probably because you wanted to go to a party instead of being stuck with vectors).
Reading the Godot Game Engine documentation about unit vectors, normalization, and the dot product really makes a lot of sense. Here is the article:
Unit vectors
Ok, so we know what a vector is. It has a direction and a magnitude. We also know how to use them in Godot. The next step is learning about unit vectors. Any vector with magnitude of length 1 is considered a unit vector. In 2D, imagine drawing a circle of radius one. That circle contains all unit vectors in existence for 2 dimensions:
So, what is so special about unit vectors? Unit vectors are amazing. In other words, unit vectors have several, very useful properties.
Can’t wait to know more about the fantastic properties of unit vectors, but one step at a time. So, how is a unit vector created from a regular vector?
Normalization
Taking any vector and reducing its magnitude to 1.0 while keeping its direction is called normalization. Normalization is performed by dividing the x and y (and z in 3D) components of a vector by its magnitude:
var a = Vector2(2,4)
var m = sqrt(a.x*a.x + a.y*a.y)
a.x /= m
a.y /= m
As you might have guessed, if the vector has magnitude 0 (meaning it's not really a vector but the origin, also called the null vector), a division by zero occurs and the universe goes through a second big bang, except in reverse polarity and then back. As a result, humanity is safe but Godot will print an error. Remember! Vector(0,0) can't be normalized!
Of course, Vector2 and Vector3 already provide a method to do this:
a = a.normalized()
Dot product
OK, the dot product is the most important part of vector math. Without the dot product, Quake would have never been made. This is the most important section of the tutorial, so make sure to grasp it properly. Most people trying to understand vector math give up here because, despite how simple it is, they can't make heads or tails of it. Why? Here's why, it's because...
The dot product takes two vectors and returns a scalar:
var s = a.x*b.x + a.y*b.y
Yes, pretty much that. Multiply x from vector a by x from vector b. Do the same with y and add it together. In 3D it’s pretty much the same:
var s = a.x*b.x + a.y*b.y + a.z*b.z
I know, it’s totally meaningless! You can even do it with a built-in function:
var s = a.dot(b)
The order of two vectors does not matter, a.dot(b) returns the same value as b.dot(a).
This is where despair begins and books and tutorials show you this formula:
A ⋅ B = ∥A∥ ∥B∥ cos(θ)
And you realize it’s time to give up making 3D games or complex 2D games. How can something so simple be so complex? Someone else will have to make the next Zelda or Call of Duty. Top down RPGs don’t look so bad after all. Yeah, I hear someone did pretty well with one of those on Steam...
So this is your moment, this is your time to shine. DO NOT GIVE UP! At this point, this tutorial will take a sharp turn and focus on what makes the dot product useful. That is, why it is useful. We will focus, one by one, on the use cases for the dot product, with real-life applications. No more formulas that don’t make any sense. Formulas will make sense once you learn what they are useful for.
Siding
The first useful and most important property of the dot product is to check what side stuff is looking at. Let’s imagine we have any two vectors, a and b. Any direction or magnitude (as long as neither is the null vector). It does not matter what they are, but let’s imagine we compute the dot product between them.
var s = a.dot(b)
The operation will return a single floating point number (but since we are in vector world, we call it a scalar, so we will keep using that term from now on). This number will tell us the following:
If the number is greater than zero, both are looking towards the same direction (the angle between them is less than 90°).
If the number is less than zero, both are looking towards opposite directions (the angle between them is more than 90°).
If the number is zero, the vectors are shaped in an L (the angle between them is 90°).
So let’s think of a real use-case scenario. Imagine Snake is going through a forest, and there is an enemy nearby. How can we quickly tell if the enemy has discovered Snake? In order to discover him, the enemy must be able to see Snake. Let’s say, then, that:
Snake is in position A.
The enemy is in position B.
The enemy is facing towards direction vector F.
So, let’s create a new vector BA that goes from the guard (B) to Snake (A), by subtracting the two:
var BA = A - B
Ideally, if the guard was looking straight towards Snake, making eye-to-eye contact, he would be looking in the same direction as vector BA.
If the dot product between F and BA is greater than 0, then Snake will be discovered. This happens because we will be able to tell that the guard is facing towards him:
if (BA.dot(F) > 0):
    print("!")
Seems Snake is safe so far.
Siding with unit vectors
Ok, so now we know that dot product between two vectors will let us know if they are looking towards the same side, opposite sides or are just perpendicular to each other.
This works the same with all vectors, no matter the magnitude, so unit vectors are no exception. However, using the same property with unit vectors yields an even more interesting result, as an extra property is added:
If both vectors are facing towards the exact same direction (parallel to each other, angle between them is 0°), the resulting scalar is 1.
If both vectors are facing towards the exact opposite direction (parallel to each other, but angle between them is 180°), the resulting scalar is -1.
This means that the dot product between unit vectors is always in the range of -1 to 1. So, again...
If their angle is 0° dot product is 1.
If their angle is 90°, then dot product is 0.
If their angle is 180°, then dot product is -1.
Uh.. this is oddly familiar... seen this before... where?
Let’s take two unit vectors. The first one is pointing up, the second too, but we will rotate it all the way from up (0°) to down (180°)...
While plotting the resulting scalar!
Aha! It all makes sense now, this is a Cosine function!
We can say that, then, as a rule...
The dot product between two unit vectors is the cosine of the angle between those two vectors. So, to obtain the angle between two vectors, we must do:
var angle_in_radians = acos( a.dot(b) )
What is this useful for? Well, obtaining the angle directly is probably not as useful, but just being able to tell the angle is useful for reference. One example is in the Kinematic Character demo: when the character moves in a certain direction and then hits an object, how can we tell if what we hit is the floor?
By comparing the normal of the collision point with a previously computed angle.
The beauty of this is that the same code works exactly the same and without modification in 3D. Vector math is, to a great degree, dimension-independent, so adding or removing an axis only adds very little complexity.
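The docs don't include that check here, but a minimal sketch of the idea in Python (the names UP, MAX_FLOOR_ANGLE and is_floor are my own, not Godot API) might look like:
import math

UP = (0.0, 1.0, 0.0)                # assumed "up" direction
MAX_FLOOR_ANGLE = math.radians(45)  # hypothetical slope limit

def dot(a, b):
    return sum(ac * bc for ac, bc in zip(a, b))

def is_floor(collision_normal):
    # Both vectors are unit length, so the dot product is cos(angle between them).
    # The surface counts as floor if that angle to "up" is small enough.
    return dot(collision_normal, UP) > math.cos(MAX_FLOOR_ANGLE)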
That's a bit like asking why we multiply numbers. It comes up all the time.
The Cartesian coordinate system that we use is an orthonormal basis: it consists of vectors of length 1 that are orthogonal to each other, and "basis" means that any vector can be represented by a unique combination of these vectors. When you want to rotate your basis (which occurs in video game mechanics when you look around), you use matrices whose rows and columns are orthonormal vectors.
As soon as you start playing around with matrices in linear algebra enough you will want orthonormal vectors. There are too many examples to just name them.
At the end of the day we don't need normalized vectors (in the same way as we don't need hamburgers; we could live without them, but who is going to?), but the pattern v / |v| comes up so often that people decided to give it a name and a special notation (a ^ over a vector means it's a normalized vector) as a shortcut.
Normalized vectors (also known as unit vectors) are, basically, a fact of life.
You are making its length 1 - finding the unit vector that points in the same direction.
This is useful for various purposes, for example, if you take the dot product of a vector with a unit vector you have the length of the component of that vector in the direction of the unit vector.
The normals are supposed to be used as a direction vector only. They are used for lighting computation, which requires normalized normal vectors.
This post is very old, but there still isn't a very clear answer as to why we normalize. One reason is to find the exact length of a vector's projection onto another vector.
Example: the projection of vector a onto b has length |a|·cos(θ).
However, the dot product of two vectors a and b is |a|·|b|·cos(θ), that is, the projection length times |b|. So we divide by |b| (i.e. we normalize b) to recover the exact length of the projection, which is |a|·cos(θ).
Hope that's clear.
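In code that boils down to a couple of lines; here is a small Python sketch (my own example) of the scalar projection of a onto b:
import math

def scalar_projection(a, b):
    # length of the component of a along b: |a|*cos(theta) = (a . b) / |b|
    d = sum(ac * bc for ac, bc in zip(a, b))
    return d / math.sqrt(sum(bc * bc for bc in b))

# scalar_projection((3, 4), (1, 0)) -> 3.0, the x-component of (3, 4)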
Related
I'm currently having quite a lot of trouble producing a forward vector based on my accessible transform components. I'm working within SparkAR, where objects are built around a transform class with position, rotation (euler) and scale, as expected; however, it doesn't provide any built-in methods for retrieving general information such as the relative forward and up vector of the object, a transform matrix, etc.
This leaves me with the task of trying to put this together myself from the available data. The end result is simply to be able to calculate the angle between the forward vector of 2 objects, which is mapped to a 0-1 range and used as a value of texture blending. The goal is for it to simulate fake dynamic lighting through texture blends based on how similar the local object's forward vector is to the forward vector of the directional light.
I must have read 20+ different StackOverflow results by this point covering similar questions, however I've been unable to get correct results based on my testing using their suggestions, and at this point I feel like I need some more direct feedback to my method before I tear my eyes out.
My current process is as follows:
1) Retrieve euler rotation from the animated joint I want to compare against (it's in radians)
2) Convert radian values to degrees
3) Attempt to convert the euler values to a forward vector using the following calculation sample:
x = cos(yaw)cos(pitch)
y = sin(yaw)cos(pitch)
z = sin(pitch)
I tried a couple of other variations on that for different axis orders, which didn't provide much of a change at all.
Some small notes before continuing, X is pitch, Y is yaw and Z is roll. Positive Z is towards the screen, positive X is to the right, positive Y is up.
4) Normalise the vector (although it's unnecessary given the results I get).
5) Perform a dot product operation on the vector, against a set direction vector, in this case for testing purposes, I'm simply using (0,0,1).
6) Compare the resulting value against the expected result - incorrect.
Within a single 90 degree rotation, the value retrieved from the dot product, which by my understanding should be 1 when facing the same direction and -1 when facing the inverse, oscillates between these two range endpoints 15-20 times.
Another thing worth noting here, is that I did have to swap the Y and Z components of the calculation to produce a non-0 result, so the current forward vector calculation from euler is as follows:
x = cos(yaw)cos(pitch)
y = sin(pitch)
z = sin(yaw)cos(pitch)
I'm really not sure where to go about moving on from here to produce the result I'm looking for, as I simply don't understand what in the current calculations is going wrong, so I can't begin fixing it.
The passed in vector, pre-dot product at the rotations 0, 90, 180 and 270 are as follows:
( 1 , 0, -0.00002)
(-0.44812, 0, -0.89397)
( 1 , 0, 0.00002)
(-0.44812, 0, 0.89397)
I can see that the euler angles going into the calculations are definitely correct, so the RadToDeg conversion isn't screwing the input up.
Is the calculation I'm using for trying to produce a forward vector from euler rotations incorrect? Is there something I should be using instead?
Any advice on moving forward with this issue would be much appreciated.
Thanks.
Given A: a point, B: A point known to exist on a plane P, C: the normal of plane P. Can I determine if A lies on P by the result of a dot product between (A - B) and C being zero? (or within a certain level of precision, I'll probably use 0.0001f)
I might just be missing some obvious mathematical flaw, but this seems to be much simpler and faster than transforming the point to a triangle's coordinate space, à la the answer to Check if a point is inside a plane segment.
So secondly I guess; if this is a valid check, would it be computationally faster than using matrix transformations if all I want is to see if the point is on the plane? (and not whether it lies within a polygon on said plane, I'll probably keep using matrix transforms for that)
You are correct: A lies on the plane through B with normal C if and only if dotProduct(A - B, C) = 0.
To estimate speed for this sort of thing, you can pretty much just count the multiplications. This formula has just three multiplications, so it's going to be faster than pretty much anything to do with matrices.
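As a concrete sketch of that test in Python (using the question's labels A, B, C; the epsilon is whatever tolerance you pick):
def point_on_plane(a, b, c, eps=1e-4):
    # True if point a lies on the plane through b with normal c (within eps)
    d = tuple(ac - bc for ac, bc in zip(a, b))               # the vector A - B
    return abs(sum(dc * cc for dc, cc in zip(d, c))) < eps   # |dot(A - B, C)| ~ 0
Note that if C is not unit length, the tolerance is effectively scaled by |C|, so normalize C first if you want eps to behave like a distance.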
The above answers are close to a proof, but not sufficient on their own. It should be intuitive that a single in-plane vector is not enough: for a point P above the plane, the perpendicular drop from P onto the plane gives a zero dot product with any single vector lying in the plane, just as it would for a point P on the plane. The necessary and sufficient condition is that, if two non-parallel vectors u and v can be found in the plane, then the plane is represented unambiguously by their cross product, i.e.
w = u × v. By definition, w is the area vector, which is always perpendicular to the plane.
Then, for the point P in question, construct a third vector s from P to the base point of either u or v, and test it against w with the dot product, s.t.
w · s = |w||s|cos(90°) = 0 implies that the point P lies on the plane described by w, which is in turn described by the vectors u and v.
I have an input device that gives me 3 angles -- rotation around x,y,z axes.
Now I need to use these angles to rotate the 3D space, without gimbal lock. I thought I could convert to Quaternions, but apparently since I'm getting the data as 3 angles this won't help?
If that's the case, just how can I correctly rotate the space, keeping in mind that my input data simply is x,y,z axes rotation angles, so I can't just "avoid" that. Similarly, moving around the order of axes rotations won't help -- all axes will be used anyway, so shuffling the order around won't accomplish anything. But surely there must be a way to do this?
If it helps, the problem can pretty much be reduced to implementing this function:
void generateVectorsFromAngles(double &lastXRotation,
                               double &lastYRotation,
                               double &lastZRotation,
                               JD::Vector &up,
                               JD::Vector &viewing) {
    JD::Vector yaxis = JD::Vector(0,0,1);
    JD::Vector zaxis = JD::Vector(0,1,0);
    JD::Vector xaxis = JD::Vector(1,0,0);

    up.rotate(xaxis, lastXRotation);
    up.rotate(yaxis, lastYRotation);
    up.rotate(zaxis, lastZRotation);

    viewing.rotate(xaxis, lastXRotation);
    viewing.rotate(yaxis, lastYRotation);
    viewing.rotate(zaxis, lastZRotation);
}
in a way that avoids gimbal lock.
If your device is giving you absolute X/Y/Z angles (which implies something like actual gimbals), it will have some specific sequence to describe what order the rotations occur in.
Since you say that "the order doesn't matter", this suggests your device is something like (almost certainly?) a 3-axis rate gyro, and you're getting differential angles. In this case, you want to combine your 3 differential angles into a rotation vector, and use this to update an orientation quaternion, as follows:
given differential angles (in radians):
dXrot, dYrot, dZrot
and current orientation quaternion Q such that:
{r=0, ijk=rot(v)} = Q {r=0, ijk=v} Q*
construct an update quaternion:
dQ = {r=1, i=dXrot/2, j=dYrot/2, k=dZrot/2}
and update your orientation:
Q' = normalize( quaternion_multiply(dQ, Q) )
Note that dQ is only a crude approximation of a unit quaternion (which makes the normalize() operation more important than usual). However, if your differential angles are not large, it is actually quite a good approximation. Even if your differential angles are large, this simple approximation makes less nonsense than many other things you could do. If you have problems with large differential angles, you might try adding a quadratic correction to improve your accuracy (as described in the third section).
However, a more likely problem is that any kind of repeated update like this tends to drift, simply from accumulated arithmetic error if nothing else. Also, your physical sensors will have bias -- e.g., your rate gyros will have offsets which, if not corrected for, will cause your orientation estimate Q to precess slowly. If this kind of drift matters to your application, you will need some way to detect/correct it if you want to maintain a stable system.
If you do have a problem with large differential angles, there is a trigonometric formula for computing an exact update quaternion dQ. The assumption is that the total rotation angle should be linearly proportional to the magnitude of the input vector; given this, you can compute an exact update quaternion as follows:
given differential half-angle vector (in radians):
dV = (dXrot, dYrot, dZrot)/2
compute the magnitude of the half-angle vector:
theta = |dV| = 0.5 * sqrt(dXrot^2 + dYrot^2 + dZrot^2)
then the update quaternion, as used above, is:
dQ = {r=cos(theta), ijk=dV*sin(theta)/theta}
= {r=cos(theta), ijk=normalize(dV)*sin(theta)}
Note that directly computing either sin(theta)/theta or normalize(dV) is singular near zero, but the limit value of the vector part ijk near zero is simply ijk = dV = (dXrot, dYrot, dZrot)/2, as in the approximation from the first section. If you do compute your update quaternion this way, the straightforward method is to check for this, and use the approximation for small theta (for which it is an extremely good approximation!).
Finally, another approach is to use a Taylor expansion for cos(theta) and sin(theta)/theta. This is an intermediate approach -- an improved approximation that increases the range of accuracy:
cos(x) ~ 1 - x^2/2 + x^4/24 - x^6/720 ...
sin(x)/x ~ 1 - x^2/6 + x^4/120 - x^6/5040 ...
So, the "quadratic correction" mentioned in the first section is:
dQ = {r=1-theta*theta*(1.0/2), ijk=dV*(1-theta*theta*(1.0/6))}
Q' = normalize( quaternion_multiply(dQ, Q) )
Additional terms will extend the accurate range of the approximation, but if you need more than +/-90 degrees per update, you should probably use the exact trig functions described in the second section. You could also use a Taylor expansion in combination with the exact trigonometric solution -- it may be helpful by allowing you to switch seamlessly between the approximation and the exact formula.
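To make that concrete, here is a minimal Python sketch of the update described above (the (r, i, j, k) layout and the helper names are my own; it uses the exact trigonometric form, falling back to the small-angle approximation near theta = 0):
import math

def quat_multiply(q1, q2):
    # Hamilton product; quaternions stored as (r, i, j, k)
    r1, i1, j1, k1 = q1
    r2, i2, j2, k2 = q2
    return (r1*r2 - i1*i2 - j1*j2 - k1*k2,
            r1*i2 + i1*r2 + j1*k2 - k1*j2,
            r1*j2 - i1*k2 + j1*r2 + k1*i2,
            r1*k2 + i1*j2 - j1*i2 + k1*r2)

def quat_normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def update_orientation(q, dXrot, dYrot, dZrot):
    # One update step from differential angles (radians) applied to orientation q
    hx, hy, hz = dXrot / 2.0, dYrot / 2.0, dZrot / 2.0   # half-angle vector dV
    theta = math.sqrt(hx*hx + hy*hy + hz*hz)
    if theta < 1e-6:
        dq = (1.0, hx, hy, hz)                           # crude approximation: {1, dV}
    else:
        s = math.sin(theta) / theta
        dq = (math.cos(theta), hx*s, hy*s, hz*s)         # exact: {cos(theta), dV*sin(theta)/theta}
    return quat_normalize(quat_multiply(dq, q))          # Q' = normalize(dQ * Q)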
I think that the 'gimbal lock' is not a problem of computations/mathematics but rather a problem of some physical devices.
Given that you can represent any orientation with XYZ rotations, then even at the 'gimbal lock point' there is an XYZ representation for any imaginable orientation change. Your physical gimbal may not be able to rotate this way, but the mathematics still works :).
The only problem here is your input device - if it's gimbal then it can lock, but you didn't give any details on that.
EDIT: OK, so after you added the function I think I see what you need. The function is perfectly correct. But sadly, you just can't get a nice, easy, continuous way of editing an orientation using XYZ axis rotations. I haven't seen such a solution even in professional 3D packages.
The only thing that comes to my mind is to treat your input like steering an aeroplane - you just have some initial orientation and you can rotate it around the X, Y or Z axis by some amount. Then you store the new orientation and clear your inputs. Rotations in 3ds Max/Maya/Blender are done the same way.
If you give us more info about real-world usage you want to achieve we may get some better ideas.
I'm programming a physics game. It seems I can use 2 systems for storing a character's movement data:
A) x & y components (Cartesian coordinates)
B) speed and direction components (polar coordinates)
It seems I need to ultimately decide on one of these 2 systems because:
A) They both represent the same information about a vector
B) It seems redundant and inefficient to maintain both
Most game programming resources I've found use Cartesian. To my understanding, all transformations like friction, rotation, acceleration, etc. are combined into each vector via multiplication, division, etc. But to me, polar feels more modular and, therefore, more malleable, because each vector is composed of, and can be broken down into, its two elements (direction and magnitude). If I want to modify one of these independently, I can set its value without needing to deconstruct it into separate parts.
I'm guessing that different models are suitable for different types of games. But...
What trade-offs affect the decision to use Cartesian versus polar?
When does one model become cumbersome or verbose?
Or am I way off?
The premise of your question is a bit odd. Magnitude plus angle and sum of 2 basis components are both ways to specify a vector in 2-space. In either case, you record 2 scalars (i.e. you do not have a separate variable to represent the x unit vector). The choice of rectangular vs polar coordinates doesn't change the nature of something from a vector to a scalar or vice versa.
However, different representations certainly have their uses. As you mention, breaking down into orthogonal components has a ready advantage for addition of two vectors and other operations. In addition, most displays use a x-y coordinate system, so rendering is easier because you don't have to do a coordinate transform.
If your game was based on a polar coordinate system (say a ship that always faces the center of a circle), you might actually want to represent it using polar coordinates. Other than that, rectangular coordinates are generally easier to use.
Either way, sin and cos will probably become your friends. Just remember that most graphical coordinate systems treat y as increasing downward.
You are confused about the difference between vectors and scalars.
The speed along the x-axis is a scalar.
The speed along the y-axis is also a scalar.
When you combine those two numbers into a single mathematical object, that object is the velocity vector. Think of it like a 2-element array: [x, y]
Similarly,
Thrust is a scalar.
Angle is a scalar.
The combination of these two numbers is a different kind of velocity vector [thrust, angle].
Any velocity that is expressible in your [x, y] system can also be expressed in your [thrust, angle] system.
You might be getting confused with "basis vectors." In your first coordinate system, a basis vector is a vector that is one unit long and which points along the x or y axis. So [1, 0] would be a basis vector that is one-unit along the x axis, and [0, 1] would be a basis vector that is one unit along the y axis. The thing that is interesting about basis vectors is that any vector at all can be expressed as a linear combination of basis vectors.
So if i = [1, 0], and j = [0, 1] then
(34.5 i + -4.45 j) is a vector,
(4.65 i + 23.3 j) is a vector,
etc. (if you're not familiar with vector addition, just google it, it's easy)
Now you might think that when you take your 2-dimensional space and use a different coordinate system (like polar coordinates, which is really what your thrust/angle coordinates are) you are getting away from basis vectors, but in fact you are not. So, for your thrust and angle coordinate system, your basis vectors are:
i = 1 unit of positive thrust, or radius
and
j = 1 degree (or radian) of positive angle
Any possible velocity is still a combination of i and j, your basis vectors.
The two representations are mathematically equivalent. Additionally, converting one to the other is a simple O(1) operation. So be aware that it's probably not a make-or-break decision. That said, in terms of ease-of-use:
You're probably right that it depends on the circumstance as to which is more appropriate, so whichever you can foresee yourself using more often, then go with that, and convert to the other form when necessary.
Use language features to help you abstract the specific type of implementation. E.g. if you're using Java, have an IPoint interface with the relevant methods. That way you can choose an implementation, or even more than one, to suit your needs. You can even have certain parts of the program work with one implementation, and other parts with others. Proper architecture will make these things seamless.
Depending on certain calculations you might prefer the representation that gives you more accuracy. If you're doing floating point arithmetic with vastly different magnitudes you might suffer precision loss. In that case it may, for example, be easier to use the angle and length representation, because angles will have persistent accuracy, and lengths might be of similar magnitude, whereas there is no such guarantee in the x and y representation. Although granted, this is a slightly less pressing issue if your values are reasonable and calculations nominal.
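For reference, the O(1) conversion mentioned at the start of this answer is just a couple of trig calls (a small Python sketch; names are my own):
import math

def cartesian_to_polar(x, y):
    # (x, y) -> (magnitude, angle in radians)
    return math.hypot(x, y), math.atan2(y, x)

def polar_to_cartesian(magnitude, angle):
    # (magnitude, angle in radians) -> (x, y)
    return magnitude * math.cos(angle), magnitude * math.sin(angle)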
What you're calling "scalar quantities" is really just a polar vector, right? So your question isn't so much about vectors vs scalars as it is about Cartesian vs polar coordinate systems. [x, y] and [theta, r] are both vectors.
I haven't done a whole lot of physics programming, but the last time I did and it started to get complicated (modeling fish swimming in a three-dimensional space), I was much more comfortable dealing with polar coordinates. I was working from scratch implementing a boids-like algorithm, and I found it much more straightforward to think in terms of polar vectors, especially when working in 3 dimensions. I also found using trigonometric functions (acos(), asin(), etc.) cleaner than using the pythagorean formulae you'd use in a cartesian system.
But are you actually coding things from such a low level?
The dynamics of a system are usually easier to describe in the (point, velocity) framework. Indeed, the "fundamental" ODE is usually described in this system:
d (mv) / dt = force(x)
and hence is also easier to plug into a black-box Runge-Kutta solver.
However, any system will do, thanks to canonical transformations.
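As an illustration of that framework (my own toy example, not from the answer): a single explicit Euler step integrating d(mv)/dt = force(x) with Cartesian components:
def euler_step(x, v, mass, force, dt):
    # x and v are (x, y) tuples; force(x) returns an (fx, fy) tuple
    fx, fy = force(x)
    vx = v[0] + fx / mass * dt   # dv = (F/m) * dt
    vy = v[1] + fy / mass * dt
    return (x[0] + vx * dt, x[1] + vy * dt), (vx, vy)   # dx = v * dt

# e.g. gravity: euler_step((0, 0), (10, 10), 1.0, lambda p: (0, -9.8), 0.1)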
Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
If they are different, then how do I calculate a vector given a 3d point?
The difference is that a vector is an algebraic object that may or may not be given as the set of coordinates in some space. (thanks to bungalobill for correcting my sloppiness).
A point is just a point given by coordinates. Generally, one can conflate the two. If you are given a set of coordinates, and told that they constitute a 'point' with no further information (choice of basis, etc), then you can just hand that set of numbers back and legitimately claim to have produced a vector.
The largest difference between the two is that it makes no sense to do things to one that you can do to the other. For example,
You can add vectors: <1 2 3> + <3 2 1> = <4 4 4>
You can multiply (or scale) a vector by a number (generally called a scalar)
2 * <1 1 1> = <2 2 2>
You can ask how far apart two points are: d((1, 2, 3), (3, 2, 1)) = sqrt((1 - 3)^2 + (2 - 2)^2 + (3 - 1)^2) = sqrt(8) ≈ 2.83
A good intuitive way to think about the association between a vector and a point is that a vector tells you how to get from the origin (that one point in space to which we assign the coordinates (0, 0, 0)) to its associated point.
If you translate your coordinate system, then you get a new vector for the same point. Although the coordinates that make up the point will undergo the same translation so it's a pretty easy conflation to make between the two.
Likewise, if you rotate the coordinate system or apply some other transformation (e.g. a shear), then the coordinates and the vector associated to the point will also change.
It's also possible for a vector to be something else entirely, for example a bounded function on the interval [0, 1] is a vector because you can multiply it by a real number and add it to another function on the interval and it will satisfy certain requirements (namely the axioms of a vectorspace). In this case one thinks of having one coordinate for each real number, x, in [0, 1] where the value of that coordinate is just f(x). So that's the easiest example of an infinite dimensional vector space.
There are all sorts of vector spaces and the notion that a vector is a 'point and a direction' (or whatever it's supposed to be) is actually pretty vacuous.
A vector represents a change from one state to another. To create one, you need two states (in this case, points), and then you subtract the initial state from the final state in order to get the resultant vector.
Vectors are a more general idea than a point in 3D space.
Vectors can have 2, 3, or n dimensions. They represent many quantities in the physical world (e.g., velocity, force, acceleration) besides position.
A mathematician would say that a vector is a first order tensor that transforms according to this rule:
u(i) = A(i, j)v(j)
You need both points and vectors because they are different. A point in 3D space denoting position can be treated as a vector, but not every vector is a point in 3D space.
Then there's the computer science notion of a vector as a container - it's an abstraction for an array of values or references. This is a different concept from a mathematician's idea of a vector, because every vector container need not obey the first order tensor transformation law (e.g. a Vector of OrderItems). That's yet another separate idea.
It's important to keep all these in mind when talking about vectors and points.
Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
Traditionally, a vector means a direction and a magnitude. A point could be considered a vector from the world origin at one time step (even though that may not be considered mathematically pure).
If they are different, then how do I calculate a vector given a 3d point?
"target minus tower" is the common mnemonic.
Be careful with your usage of this: the resulting vector is really the direction scaled by the distance between the two points (direction * magnitude). If you want to change it into something useful in a game application, you will need to normalize the vector first.
Example: Joe is at (10,0,0) and he wants to go to (10,10,0)
Target-Tower: (10,10,0)-(10,0,0)=(0,10,0)
Normalize the resulting vector: (0,1,0)
Apply "physics": (0,1,0) * speed * elapsed_time (with speed = 3, and let's say the computer froze for a whole 2 seconds between the last step and this one, for ease of computation)
= (0,6,0)
Add the resulting vector to Joe's current point in space to get his next point in space: ... = (10,6,0)
Normalized vector = vector / sqrt(x*x + y*y + z*z)
...I think I have everything here
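Putting Joe's example together in a small Python sketch (the parameter names speed and dt are my own):
import math

def step_towards(position, target, speed, dt):
    d = tuple(t - p for t, p in zip(target, position))   # "target minus tower"
    length = math.sqrt(sum(c * c for c in d))
    if length == 0:
        return position                                  # already there; nothing to normalize
    unit = tuple(c / length for c in d)                  # normalize the direction
    return tuple(p + u * speed * dt for p, u in zip(position, unit))

# step_towards((10, 0, 0), (10, 10, 0), speed=3, dt=2) -> (10.0, 6.0, 0.0), as above
(It will overshoot the target if speed * dt exceeds the remaining distance, so clamp that in a real game.)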
A vector is the change between states. A point is a static location. Two vectors can be parallel or perpendicular. You can take the product of two vectors, which can be a third vector (the cross product). You can multiply a vector by a constant. You can add two vectors.
All these operations are not allowed on points. So, program-wise, if you think of both as C++ classes, there will be many such methods in the vector class but probably only Get and Set for the point.
In the context of game mathematics there is no difference.
Points are elements of an affine space.† Vectors are elements of a vector (aka linear) space. When you choose an origin in an affine space it automatically induces a linear structure on that affine space. The contrary is also true: if you have a vector space it already satisfies all the axioms of an affine space.
The fact is that when it comes to computation, the only way to represent an affine space numerically is to use tuples of numbers, which also form a vector space.
Each object in a game always has an origin, and it is crucial to know where it is. That origin is set relative to the origin of the world, which is set relative to the origin of the camera/viewport. The vertices of the object are represented as vectors -- offsets from the object origin. You use matrix multiplication to transform the objects -- that is too a purely vector space operation (you cannot multiply an affine point by a matrix without specifying the origin first). Etc, etc... As we see all those triplets of numbers that we might think of as 'points' are actually vectors in the local coordinate system.
So is there any reason to distinguish between the two outside the study of algebra? It is an unnecessary abstraction, and unnecessary abstractions are harmful (KISS). So my answer is no, just go with a single vector type.
† Or any topological space outside the context of game development.
A vector can be pictured as a directed line segment: it can be represented by two points, the starting point and the ending point.
If you take the origin as the starting point, then you can describe your vector by giving only the ending point.