Should I use a Cartesian (x and y) or polar (angle and magnitude) coordinate system to represent velocity?

I'm programming a physics game. It seems I can use 2 systems for storing a character's movement data:
A) x & y components (Cartesian coordinates)
B) speed and direction components (polar coordinates)
It seems I need to ultimately decide on one of these 2 systems because:
A) They both represent the same information about a vector
B) It seems redundant and inefficient to maintain both
Most game programming resources I've found use Cartesian. To my understanding, transformations like friction, rotation, and acceleration are folded into each vector through component-wise multiplication, division, and so on. But to me, polar feels more modular, and therefore more malleable, because each vector is composed of, and can be broken down into, its two elements (direction and magnitude). If I want to modify one of these independently, I can set its value directly without first deconstructing the vector into separate parts.
I'm guessing that different models are suitable for different types of games. But...
What trade-offs affect the decision to use Cartesian versus polar?
When does one model become cumbersome or verbose?
Or am I way off?

The premise of your question is a bit odd. Magnitude plus angle and sum of 2 basis components are both ways to specify a vector in 2-space. In either case, you record 2 scalars (i.e. you do not have a separate variable to represent the x unit vector). The choice of rectangular vs polar coordinates doesn't change the nature of something from a vector to a scalar or vice versa.
However, different representations certainly have their uses. As you mention, breaking a vector down into orthogonal components has a ready advantage for adding two vectors and for other operations. In addition, most displays use an x-y coordinate system, so rendering is easier because you don't have to do a coordinate transform.
If your game was based on a polar coordinate system (say a ship that always faces the center of a circle), you might actually want to represent it using polar coordinates. Other than that, rectangular coordinates are generally easier to use.
Either way, sin and cos will probably become your friends. Just remember that most graphical coordinate systems treat y as increasing downward.
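For reference, converting between the two representations only takes a couple of trig calls. Here is a minimal Python sketch (the function names are mine, not from any particular engine); note that with a y-down screen coordinate system a positive angle will look like a clockwise turn:

import math

def cartesian_to_polar(vx, vy):
    """Return (speed, angle) for a velocity given as x/y components."""
    speed = math.hypot(vx, vy)       # magnitude
    angle = math.atan2(vy, vx)       # radians, measured from the +x axis
    return speed, angle

def polar_to_cartesian(speed, angle):
    """Return (vx, vy) for a velocity given as magnitude and direction."""
    return speed * math.cos(angle), speed * math.sin(angle)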

You are confused about the difference between vectors and scalars.
The speed along the x-axis is a scalar.
The speed along the y-axis is also a scalar.
When you combine those two numbers into a single mathematical object, that object is the velocity vector. Think of it like a 2-element array: [x, y]
Similarly,
Thrust is a scalar.
Angle is a scalar.
The combination of these two numbers is another representation of the velocity vector: [thrust, angle].
Any velocity that is expressible in your [x, y] system can also be expressed in your [thrust, angle] system.
You might be getting confused with "basis vectors." In your first coordinate system, a basis vector is a vector that is one unit long and which points along the x or y axis. So [1, 0] would be a basis vector that is one-unit along the x axis, and [0, 1] would be a basis vector that is one unit along the y axis. The thing that is interesting about basis vectors is that any vector at all can be expressed as a linear combination of basis vectors.
So if i = [1, 0], and j = [0, 1] then
(34.5 i + -4.45 j) is a vector,
(4.65 i + 23.3 j) is a vector,
etc. (if you're not familiar with vector addition, just google it, it's easy)
Now you might think that when you take your 2-dimensional space and use a different coordinate system (like polar coordinates, which is really what your thrust/angle coordinates are) you are getting away from basis vectors, but in fact you are not. So, for your thrust and angle coordinate system, your basis vectors are:
i = 1 unit of positive thrust, or radius
and
j = 1 degree (or radian) of positive angle
Any possible velocity is still a combination of i and j, your basis vectors.

The two representations are mathematically equivalent. Additionally, converting one to the other is a simple O(1) operation. So be aware that it's probably not a make-or-break decision. That said, in terms of ease-of-use:
You're probably right that which one is more appropriate depends on the circumstances, so go with whichever you foresee yourself using more often, and convert to the other form when necessary.
Use language features to help you abstract over the specific implementation. For example, if you're using Java, define an IPoint interface with the relevant methods. That way you can choose an implementation, or even several, to suit your needs. You can even have certain parts of the program work with one implementation and other parts with another. Proper architecture makes these things seamless.
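As a rough illustration of that idea (sketched in Python rather than Java, with made-up names), the rest of the program can code against an abstract vector type while each implementation stores whichever representation it likes:

import math
from abc import ABC, abstractmethod

class Vector2D(ABC):
    """Abstract 2D vector; callers don't care how it is stored."""
    @abstractmethod
    def x(self) -> float: ...
    @abstractmethod
    def y(self) -> float: ...
    def magnitude(self) -> float:
        return math.hypot(self.x(), self.y())

class CartesianVector(Vector2D):
    def __init__(self, x, y):
        self._x, self._y = x, y
    def x(self): return self._x
    def y(self): return self._y

class PolarVector(Vector2D):
    def __init__(self, magnitude, angle):
        self._r, self._a = magnitude, angle
    def x(self): return self._r * math.cos(self._a)
    def y(self): return self._r * math.sin(self._a)
    def magnitude(self): return self._r   # cheaper than the default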
Depending on your calculations, you might prefer the representation that gives you more accuracy. If you're doing floating point arithmetic with values of vastly different magnitudes, you may suffer precision loss. In that case it may, for example, be easier to use the angle-and-length representation, because angles keep consistent precision and lengths tend to be of similar magnitude, whereas there is no such guarantee in the x and y representation. Granted, this is a less pressing issue if your values are reasonable and the calculations nominal.

What you're calling "scalar quantities" is really just a polar vector, right? So your question isn't so much about vectors vs scalars as it is about Cartesian vs polar coordinate systems. [x, y] and [theta, r] are both vectors.
I haven't done a whole lot of physics programming, but the last time I did and it started to get complicated (modeling fish swimming in a three-dimensional space), I was much more comfortable dealing with polar coordinates. I was working from scratch implementing a boids-like algorithm, and I found it much more straightforward to think in terms of polar vectors, especially when working in 3 dimensions. I also found using trigonometric functions (acos(), asin(), etc.) cleaner than using the Pythagorean formulas you'd use in a Cartesian system.
But are you actually coding things from such a low level?

The dynamics of a system are usually easier to describe in the (point, velocity) framework. Indeed, the "fundamental" ODE is usually described in this system:
d (mv) / dt = force(x)
and hence are also easier to plug into a black box Runge Kutta solver.
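As a sketch of what plugging the equation into an integrator looks like, here is a minimal fixed-step loop in Python (semi-implicit Euler rather than Runge-Kutta for brevity; all names and values are illustrative):

def step(position, velocity, mass, force, dt):
    """Advance one time step of d(mv)/dt = force(x) with semi-implicit Euler."""
    acceleration = force(position) / mass
    velocity = velocity + acceleration * dt   # update velocity first...
    position = position + velocity * dt       # ...then position
    return position, velocity

# Example: a mass on a spring, force(x) = -k * x
pos, vel = 1.0, 0.0
for _ in range(100):
    pos, vel = step(pos, vel, mass=1.0, force=lambda x: -4.0 * x, dt=0.01)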
However, any system will do, thanks to canonical transformations.

Related

How to convert points between two coordinate systems with different rotations

Imagine two coordinate systems laid on top of each other, with a rotation and scale difference between the two:
The problem is to convert a point from the non-rotated system to the other. What we do have are four corner points forming a rectangle, with coordinates known in both systems at each point. We also know the rotation difference, and I think I should at least know the scale difference too. How do I convert a point from the non-rotated system to the rotated system? I am using Unity3D.
Extra points for clarity in math :)
PS: I'm writing this really late, going to edit later for more clarity.
Some linear algebra does the trick:
Express each operation as a matrix and matrix multiply those to combine them into a single resulting matrix (for efficiency).
If translation is involved you need to add a dimension to your matrices, see homogeneous coordinates.
The reason is that the mappings are affine ones then, not linear ones. You can ignore the extra dimension in the end result. It is just a nice way to embed affine mappings into linear ones, so the algebra is easier.
Example
M = M_trans * M_rot * M_scale
x' = M x
The order here is right to left: vector x is first scaled, then rotated, then translated into vector x'. (Using column vectors).
Hints on the matrices: Rotation Matrix, Scaling Matrix
For deriving 2D formulas when given 3D ones: either keep z = 0 or delete the 3rd row and 3rd column from each matrix.
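A small Python/NumPy sketch of the same recipe in 2D, using 3x3 homogeneous matrices (the helper names are mine; the composition order follows the example above):

import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0,  1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]], dtype=float)

def scaling(sx, sy):
    return np.array([[sx,  0, 0],
                     [ 0, sy, 0],
                     [ 0,  0, 1]], dtype=float)

# M = M_trans * M_rot * M_scale  (right to left: scale, then rotate, then translate)
M = translation(5, 2) @ rotation(np.pi / 4) @ scaling(2, 2)

p = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous coordinates
p_transformed = M @ p           # drop the trailing 1 to read off (x', y')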

Combining quaternions with different pivot point

Background:
I am currently implementing a skeletal animation shader in GLSL, and to save space and complexity I am using Quaternions for the bone rotations, using weighted quaternion multiplication (of each bone) to accumulate a "final rotation" for each vertex.
Something like: (pseudo-code, just assume the quaternion math works as expected)
float weights[5];
int bones[5];
vec4 position;
uniform quaternion allBoneRotations[100];
uniform vec3 allBonePositions[100];
main() {
    quaternion finalQuaternion;
    for (i = 0; i < 5; i++) {
        finalQuaternion *= allBoneRotations[bones[i]] * weights[i];
    }
    gl_position = position.rotateByQuaternion(finalQuaternion);
}
The real code is complicated, sloppy, and working as expected, but this should give the general idea, since this is mostly a math question anyway, the code isn't of much consequence, it's just provided for clarity.
Problem:
I was in the process of adding "pivot points"/"joint locations" to each bone (negative translate, rotate by "final quaternion", translate back) when I realized that the "final quaternion" will not have taken the different pivot points into account when combining the quaternions themselves. In this case each bone rotation will have been treated as if it was around point (0,0,0).
Given that quaternions represent only a rotation, it seems I'll either need to "add" a position to the quaternions (if possible), or simply convert all of the quaternions into matrices, then do matrix multiplication to combine the series of translations and rotations. I am really hoping the latter is not necessary, since it seems like it would be really inefficient, comparatively.
I've searched through mathoverflow, math.stackexchange, and whatever else Google provided and read the following resources so far in hopes of figuring out an answer myself:
http://shankel.best.vwh.net/QuatRot.html
http://mathworld.wolfram.com/Quaternion.html
plus various other small discussions found through Googling (I can only post 2 links)
The consensus is that Quaternions do not encode "translation" or "position" in any sense, and don't seem to provide an intuitive way to simulate it, so pure quaternion math seems unlikely to be a viable solution.
However, it might be nice to have a definitive answer to this here. Does anyone know a way to "fake" a position component of a quaternion, in some way that keeps the efficiency of quaternion math, or some other method to "accumulate" rotations around different origin points that is more efficient than converting each quaternion to a matrix and doing matrix translation and rotation multiplications for every single quaternion? Or perhaps some mathematical assurance that differing pivot points don't actually make any difference and can in fact be applied later (but I doubt it).
Or is using quaternions in this situation just a bad idea on the face of it?
Indeed, there is no such thing as a position component of a quaternion, so you'll need to track it separately. Suppose individual transformations end up being like
x' = R(q)*(x-pivot)+pivot = R(q)*x + (pivot-R(q)*pivot) = R(q)*x+p,
where q is your quaternion, R(q) is the rotation matrix built from it, and p=pivot-R(q)*pivot is the position/translation component. If you want to combine two such transformations, you can do it without going full-matrix multiplication:
x'' = R(q2)*x'+p2 = R(q2)*R(q)*x + (R(q2)*p+p2) = R(q2*q)*x + (R(q2)*p+p2).
This way the combined quaternion will be q2*q, and the combined position, R(q2)*p+p2. Note that you can even apply quaternions to vectors (R(q2)*p and so on) without explicitly building rotation matrices, if you want to absolutely avoid them.
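A minimal Python sketch of that bookkeeping, keeping a (quaternion, translation) pair per bone transform and composing pairs without ever building a matrix (the quaternion helpers are written out by hand and the names are mine):

def q_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_rotate(q, v):
    """Rotate vector v = (x, y, z) by unit quaternion q, i.e. compute R(q)*v."""
    w, x, y, z = q_mul(q_mul(q, (0.0,) + tuple(v)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

def about_pivot(q, pivot):
    """Return the (quaternion, translation) pair for x' = R(q)*(x - pivot) + pivot."""
    rp = q_rotate(q, pivot)
    return q, tuple(p - r for p, r in zip(pivot, rp))   # p = pivot - R(q)*pivot

def compose(q2, p2, q1, p1):
    """Apply (q1, p1) first, then (q2, p2): result is (q2*q1, R(q2)*p1 + p2)."""
    return q_mul(q2, q1), tuple(a + b for a, b in zip(q_rotate(q2, p1), p2))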
That said, there is also a notion of "dual quaternions" which, in fact, do contain a translation component, and are presumably better for representing screw motions. Check them out on Wiki, and here (the last link also points to a paper).
After extensive additional searching, and reading more about quaternions than any sane person should, I finally discovered my answer here:
http://www.euclideanspace.com/maths/algebra/realNormedAlgebra/other/dualQuaternion/index.htm
It turns out dual quaternions operate similarly to ordinary quaternions, with many of their operations based on regular quaternion math, but they encode both orientation and displacement, and can be combined for any rotation-translation sequence needed, much like transformation matrix multiplication, but without the shear/scale ability.
The page also has a section that derives exactly the "rotating around an arbitrary point" functionality that I was requiring by using dual quaternion multiplication. Perhaps I should have researched a bit more before asking, but at least the answer is here now in case anyone else comes looking.

What is the need for normalizing a vector?

Trying to understand vectors a bit more.
What is the need for normalizing a vector?
If I have a vector, N = (x, y, z)
What do you actually get when you normalize it? I get the idea that you divide each component: x/|N|, y/|N| and z/|N|. My question is, why do we do this; what do we get out of this equation?
What is the meaning or underlying purpose of doing this?
A bit of a maths question, I apologize, but I am really not clear in this topic.
For any vector V = (x, y, z), |V| = sqrt(x*x + y*y + z*z) gives the length of the vector.
When we normalize a vector, we actually calculate V/|V| = (x/|V|, y/|V|, z/|V|).
It is easy to see that a normalized vector has length 1. This is because:
| V/|V| | = sqrt((x/|V|)*(x/|V|) + (y/|V|)*(y/|V|) + (z/|V|)*(z/|V|))
= sqrt(x*x + y*y + z*z) / |V|
= |V| / |V|
= 1
Hence, normalized vectors are also called unit vectors (i.e. vectors with unit length).
Normalizing a vector changes only its magnitude, not its direction. Also, every vector pointing in the same direction gets normalized to the same unit vector (since magnitude and direction uniquely define a vector). Hence, unit vectors are extremely useful for representing directions.
Note however, that all the above discussion was for 3 dimensional Cartesian coordinates (x, y, z). But what do we really mean by Cartesian coordinates?
Turns out, to define a vector in 3D space, we need some reference directions. These reference directions are canonically called i, j, k (or i, j, k with little caps on them - referred to as "i cap", "j cap" and "k cap"). Any vector we think of as V = (x, y, z) can actually then be written as V = xi + yj + zk. (Note: I will no longer call them by caps, I'll just call them i, j, k). i, j, and k are unit vectors in the X, Y and Z directions and they form a set of mutually orthogonal unit vectors. They are the basis of all Cartesian coordinate geometry.
There are other forms of coordinates (such as Cylindrical and Spherical coordinates), and while their coordinates are not as direct to understand as (x, y, z), they too are composed of a set of 3 mutually orthogonal unit vectors which form the basis into which 3 coordinates are multiplied to produce a vector.
So, the above discussion clearly says that we need unit vectors to define other vectors, but why should you care?
Because sometimes, only the magnitude matters. That's when you use a "regular" number (something like 4 or 1/3 or 3.141592653 - nope, for all you OCD freaks, I am NOT going to put Pi there - that shall stay a terminating decimal, just because I am evil incarnate). You would not want to thrown in a pesky direction, would you? I mean, does it really make sense to say that I want 4 kilograms of watermelons facing West? Unless you are some crazy fanatic, of course.
Other times, only the direction matters. You just don't care for the magnitude, or the magnitude just is too large to fathom (something like infinity, only that no one really knows what infinity really is - All Hail The Great Infinite, for He has Infinite Infinities... Sorry, got a bit carried away there). In such cases, we use normalization of vectors. For example, it doesn't mean anything to say that we have a line facing 4 km North. It makes more sense to say we have a line facing North. So what do you do then? You get rid of the 4 km. You destroy the magnitude. All you have remaining is the North (and Winter is Coming). Do this often enough, and you will have to give a name and notation to what you are doing. You can't just call it "ignoring the magnitude". That is too crass. You're a mathematician, and so you call it "normalization", and you give it the notation of the "cap" (probably because you wanted to go to a party instead of being stuck with vectors).
Reading the Godot game engine documentation about unit vectors, normalization, and the dot product really helped things click for me. Here is the article:
Unit vectors
Ok, so we know what a vector is. It has a direction and a magnitude. We also know how to use them in Godot. The next step is learning about unit vectors. Any vector with magnitude of length 1 is considered a unit vector. In 2D, imagine drawing a circle of radius one. That circle contains all unit vectors in existence for 2 dimensions:
So, what is so special about unit vectors? Unit vectors are amazing. In other words, unit vectors have several, very useful properties.
Can’t wait to know more about the fantastic properties of unit vectors, but one step at a time. So, how is a unit vector created from a regular vector?
Normalization
Taking any vector and reducing its magnitude to 1.0 while keeping its direction is called normalization. Normalization is performed by dividing the x and y (and z in 3D) components of a vector by its magnitude:
var a = Vector2(2,4)
var m = sqrt(a.x*a.x + a.y*a.y)
a.x /= m
a.y /= m
As you might have guessed, if the vector has magnitude 0 (meaning, it’s not a vector but the origin also called null vector), a division by zero occurs and the universe goes through a second big bang, except in reverse polarity and then back. As a result, humanity is safe but Godot will print an error. Remember! Vector(0,0) can’t be normalized!.
Of course, Vector2 and Vector3 already provide a method to do this:
a = a.normalized()
Dot product
OK, the dot product is the most important part of vector math. Without the dot product, Quake would have never been made. This is the most important section of the tutorial, so make sure to grasp it properly. Most people trying to understand vector math give up here because, despite how simple it is, they can’t make head or tails from it. Why? Here’s why, it’s because...
The dot product takes two vectors and returns a scalar:
var s = a.x*b.x + a.y*b.y
Yes, pretty much that. Multiply x from vector a by x from vector b. Do the same with y and add it together. In 3D it’s pretty much the same:
var s = a.x*b.x + a.y*b.y + a.z*b.z
I know, it’s totally meaningless! You can even do it with a built-in function:
var s = a.dot(b)
The order of two vectors does not matter, a.dot(b) returns the same value as b.dot(a).
This is where despair begins and books and tutorials show you this formula:
A ⋅ B = ∥A∥ ∥B∥ cos(θ)
And you realize it’s time to give up making 3D games or complex 2D games. How can something so simple be so complex? Someone else will have to make the next Zelda or Call of Duty. Top down RPGs don’t look so bad after all. Yeah, I hear someone did pretty well with one of those on Steam...
So this is your moment, this is your time to shine. DO NOT GIVE UP! At this point, this tutorial will take a sharp turn and focus on what makes the dot product useful. That is, why it is useful. We will go through the use cases for the dot product one by one, with real-life applications. No more formulas that don’t make any sense. Formulas will make sense once you learn what they are useful for.
Siding
The first useful and most important property of the dot product is to check what side stuff is looking at. Let’s imagine we have any two vectors, a and b. Any direction or magnitude (and no particular origin). It does not matter what they are, but let’s imagine we compute the dot product between them.
var s = a.dot(b)
The operation will return a single floating point number (but since we are in vector world, we call it a scalar, and we will keep using that term from now on). This number will tell us the following:
If the number is greater than zero, both are looking towards the same direction (the angle between them is less than 90°).
If the number is less than zero, both are looking towards opposite directions (the angle between them is more than 90°).
If the number is zero, the vectors are perpendicular (the angle between them is exactly 90°).
So let’s think of a real use-case scenario. Imagine Snake is going through a forest, and there is an enemy nearby. How can we quickly tell if the enemy has discovered Snake? In order to discover him, the enemy must be able to see Snake. Let’s say, then, that:
Snake is in position A.
The enemy is in position B.
The enemy is facing towards direction vector F.
So, let’s create a new vector BA that goes from the guard (B) to Snake (A), by subtracting the two:
var BA = A - B
Ideally, if the guard were looking straight towards Snake, making eye-to-eye contact, he would be looking in the same direction as vector BA.
If the dot product between F and BA is greater than 0, then Snake will be discovered. This happens because we will be able to tell that the guard is facing towards him:
if (BA.dot(F) > 0):
    print("!")
Seems Snake is safe so far.
Siding with unit vectors
Ok, so now we know that dot product between two vectors will let us know if they are looking towards the same side, opposite sides or are just perpendicular to each other.
This works the same with all vectors, no matter the magnitude, so unit vectors are no exception. However, using the same property with unit vectors yields an even more interesting result, as an extra property is added:
If both vectors are facing towards the exact same direction (parallel to each other, angle between them is 0°), the resulting scalar is 1.
If both vectors are facing towards the exact opposite direction (parallel to each other, but angle between them is 180°), the resulting scalar is -1.
This means that the dot product between unit vectors is always in the range -1 to 1. So, again...
If their angle is 0° dot product is 1.
If their angle is 90°, then dot product is 0.
If their angle is 180°, then dot product is -1.
Uh.. this is oddly familiar... seen this before... where?
Let’s take two unit vectors. The first one is pointing up, and the second too, but we will rotate it all the way from up (0°) to down (180°)...
While plotting the resulting scalar!
Aha! It all makes sense now, this is a Cosine function!
We can say that, then, as a rule...
The dot product between two unit vectors is the cosine of the angle between those two vectors. So, to obtain the angle between two vectors, we must do:
var angle_in_radians = acos( a.dot(b) )
What is this useful for? Well, obtaining the angle directly is probably not as useful, but being able to tell the angle is useful for reference. One example is in the Kinematic Character demo, when the character moves in a certain direction and hits an object. How do we tell if what we hit is the floor?
By comparing the normal of the collision point with a previously computed angle.
The beauty of this is that the same code works exactly the same and without modification in 3D. Vector math is, in a great deal, dimension-amount-independent, so adding or removing an axis only adds very little complexity.
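As a hedged illustration of that floor check (the names, threshold, and y-down "up" vector are assumptions of mine, not taken from the Kinematic Character demo), the idea is to compare the collision normal against the up direction:

import math

def is_floor(normal, max_slope_degrees=45.0, up=(0.0, -1.0)):
    """Treat the surface as floor if its normal is within max_slope of 'up'.

    'normal' and 'up' are assumed to be unit vectors; with y-down screen
    coordinates, 'up' is (0, -1)."""
    dot = normal[0] * up[0] + normal[1] * up[1]   # cos(angle between them)
    return dot > math.cos(math.radians(max_slope_degrees))

print(is_floor((0.0, -1.0)))   # flat ground -> True
print(is_floor((1.0, 0.0)))    # vertical wall -> False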
That's a bit like asking why we multiply numbers. It comes up all the time.
The Cartesian coordinate system that we use is an orthonormal basis (it consists of vectors of length 1 that are orthogonal to each other; "basis" means that any vector can be represented by a unique combination of these vectors). When you want to rotate your basis (which happens in video game mechanics when you look around), you use matrices whose rows and columns are orthonormal vectors.
As soon as you start playing around with matrices in linear algebra enough you will want orthonormal vectors. There are too many examples to just name them.
At the end of the day we don't need normalized vectors (in the same way as we don't need hamburgers, we could live without them, but who is going to?), but the similar pattern of v / |v| comes up so often that people decided to give it a name and a special notation (a ^ over a vector means it's a normalized vector) as a shortcut.
Normalized vectors (also known as unit vectors) are, basically, a fact of life.
You are making its length 1 - finding the unit vector that points in the same direction.
This is useful for various purposes, for example, if you take the dot product of a vector with a unit vector you have the length of the component of that vector in the direction of the unit vector.
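For example, a small Python sketch with made-up values:

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

direction = normalize((3.0, 4.0))   # the unit vector (0.6, 0.8)
v = (10.0, 5.0)
component = dot(v, direction)       # length of v's component along 'direction'
print(component)                    # 10*0.6 + 5*0.8 = 10.0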
The normals are supposed to be used as a direction vector only. They are used for lighting computation, which requires normalized normal vectors.
This post is very old, but there still isn't a very clear answer as to why we normalize. One reason is to find the length of a vector's projection onto another vector.
Example: the projection of b onto a has length |b|·cos(θ).
The dot product of two vectors a and b is |a|·|b|·cos(θ), which is the length of that projection multiplied by |a|. So we divide by |a| (i.e. we dot b with the normalized a) to recover the exact length of the projection, |b|·cos(θ).
Hope that's clear.

Linear Algebra in Games in a 2D space

I am currently teaching myself linear algebra in games and I almost feel ready to use my new-found knowledge in a simple 2D space. I plan on using a math library with vectors, matrices, etc. to represent positions and directions, unlike my last game, which was simple enough not to need one.
I just want some clarification on this issue. First, is it valid to express a position in 2D space in 4x4 homogeneous coordinates, like this:
[400, 300, 0, 1]
Here, I am assuming, for simplicity that we are working in a fixed resolution (and in screen space) of 800 x 600, so this should be a point in the middle of the screen.
Is this valid?
Suppose that this position represents the position of the player. If I used a vector, I could represent the direction the player is facing:
[400, 400, 0, 0]
So this vector would represent that the player is facing the bottom of the screen (if we are working in screen space).
Is this valid?
Lastly, if I wanted to rotate the player by 90 degrees, I know I would multiply the vector by a matrix/quaternion, but this is where I get confused. I know that quaternions are more efficient, but I'm not exactly sure how I would go about rotating the direction my player is facing.
Could someone explain the math behind constructing a quaternion and multiplying it by my facing vector?
I also heard that OpenGL and D3D represent vectors in a different manner, how does that work? I don't exactly understand it.
I am trying to start getting a handle on basic linear algebra in games before I step into a 3D space in several months.
You can represent your position as a 4D coordinate, however, I would recommend using only the dimensions that are needed (i.e. a 2D vector).
The direction is usually expressed as a vector that starts at the player's position and points in the corresponding direction. So a direction vector of (0, 1) would be much easier to handle.
Given that vector you can use a rotation matrix. Quaternions are not really necessary in that case because you don't want to rotate about arbitrary axes; you just want to rotate about the z-axis. Your helper library should provide methods to create such a matrix and to transform the vector with it (transforming it as a direction/normal, not as a point).
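A minimal Python sketch of that rotation, i.e. a plain 2D rotation about the z-axis applied to the facing vector (names are illustrative; the rotation is counter-clockwise in a y-up system and will look clockwise in y-down screen coordinates):

import math

def rotate2d(v, degrees):
    """Rotate the 2D vector v counter-clockwise by the given angle."""
    theta = math.radians(degrees)
    c, s = math.cos(theta), math.sin(theta)
    x, y = v
    return (c * x - s * y, s * x + c * y)

facing = (0.0, 1.0)              # facing "down the screen" in the question's terms
print(rotate2d(facing, 90.0))    # -> (-1.0, ~0.0)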
I am not sure about the difference between OpenGL's and D3D's representation of vectors, but I think it is about how they are laid out in memory, which shouldn't be something you need to worry about.
I can not answer all of your questions, but in terms of what is "valid" or not, it all depends on whether it contains all of the information you need and makes sense to you.
Furthermore, it is a little strange to have the direction an object is facing be a non-unit vector. You do not need to know how long the vector is to figure out the direction it is facing; you simply need to know the radians or degrees it has rotated from 0. Therefore people usually encode the radians or degrees directly, as many linear algebra libraries will let you do vector math using them.

What is a 3D Vector and how does it differ from a 3D point?

Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
If they are different, then how do I calculate a vector given a 3d point?
The difference is that a vector is an algebraic object that may or may not be given as the set of coordinates in some space. (thanks to bungalobill for correcting my sloppiness).
A point is just a point given by coordinates. Generally, one can conflate the two. If you are given a set of coordinates, and told that they constitute a 'point' with no further information (choice of basis, etc), then you can just hand that set of numbers back and legitimately claim to have produced a vector.
The largest difference between the two is that it makes no sense to do things to one that you can do to the other. For example,
You can add vectors: <1 2 3> + <3 2 1> = <4 4 4>
You can multiply (or scale) a vector by a number (generally called a scalar)
2 * <1 1 1> = <2 2 2>
You can ask how far apart two points are: d((1, 2, 3), (3, 2, 1)) = sqrt((1 - 3)^2 + (2 - 2)^2 + (3 - 1)^2) = sqrt(8) ~= 2.83
A good intuitive way to think about the association between a vector and a point is that a vector tells you how to get from the origin (that one point in space to which we assign the coordinates (0, 0, 0)) to its associated point.
If you translate your coordinate system, then you get a new vector for the same point. The coordinates that make up the point undergo the same translation, though, so it's a pretty easy conflation to make between the two.
Likewise, if you rotate the coordinate system or apply some other transformation (e.g. a shear), then the coordinates and the vector associated with the point will also change.
It's also possible for a vector to be something else entirely; for example, a bounded function on the interval [0, 1] is a vector, because you can multiply it by a real number and add it to another function on the interval, and it will satisfy certain requirements (namely the axioms of a vector space). In this case one thinks of having one coordinate for each real number x in [0, 1], where the value of that coordinate is just f(x). That's the easiest example of an infinite dimensional vector space.
There are all sorts of vector spaces and the notion that a vector is a 'point and a direction' (or whatever it's supposed to be) is actually pretty vacuous.
A vector represents a change from one state to another. To create one, you need two states (in this case, points), and then you subtract the initial state from the final state in order to get the resultant vector.
Vectors are a more general idea than a point in 3D space.
Vectors can have 2, 3, or n dimensions. They represent many quantities in the physical world (e.g., velocity, force, acceleration) besides position.
A mathematician would say that a vector is a first order tensor that transforms according to this rule:
u(i) = A(i, j)v(j)
You need both points and vectors because they are different. A point in 3D space denoting position is a vector, but not every vector is a point in 3D space.
Then there's the computer science notion of a vector as a container - it's an abstraction for an array of values or references. This is a different concept from a mathematician's idea of a vector, because every vector container need not obey the first order tensor transformation law (e.g. a Vector of OrderItems). That's yet another separate idea.
It's important to keep all these in mind when talking about vectors and points.
Does a 3D vector differ from a 3D point tuple (x,y,z) in the context of 3D game mathematics?
Traditionally, a vector means a direction and a magnitude (such as speed). A point could be considered a vector from the world origin at a single time step (even though that may not be considered mathematically pure).
If they are different, then how do I calculate a vector given a 3d point?
"Target minus tower" is the common mnemonic.
Be careful with this: the resulting vector is really the direction scaled by the distance between the points, not a unit direction. If you want to turn it into something useful in a game application, you will need to normalize the vector first.
Example: Joe is at (10,0,0) and he wants to go to (10,10,0)
Target-Tower: (10,10,0)-(10,0,0)=(0,10,0)
Normalize the resulting vector: (0,1,0)
Apply "physics": (0,1,0) * speed * elapsed_time (with speed = 3, and we'll say the computer froze for a whole 2 seconds between the last step and this one, for ease of computation)
=(0,6,0)
Add the resulting vector to Joes current point in space to get his next point in space: ... =(10,6,0)
Normal = vector/(sqrt(x*x+y*y+z*z))
...I think I have everything here
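The same recipe as a small Python sketch (the names mirror the worked example above, and speed = 3, elapsed time = 2 are the made-up values from it):

import math

def move_towards(position, target, speed, dt):
    """Step 'position' towards 'target' at the given speed for dt seconds."""
    direction = tuple(t - p for t, p in zip(target, position))   # target minus tower
    length = math.sqrt(sum(c * c for c in direction))
    if length == 0.0:
        return position                                          # already there
    unit = tuple(c / length for c in direction)                  # normalize
    step = tuple(c * speed * dt for c in unit)                   # apply "physics"
    return tuple(p + s for p, s in zip(position, step))

print(move_towards((10, 0, 0), (10, 10, 0), speed=3, dt=2))      # -> (10.0, 6.0, 0.0)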
A vector is a change between states; a point is a static location. Two vectors can be parallel or perpendicular. You can take the (cross) product of two vectors, which is a third vector. You can multiply a vector by a constant. You can add two vectors.
None of these operations are defined for points. So, program-wise, if you think of both as C++ classes, the vector class will have many such methods, but the point class will probably have only Get and Set.
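A rough sketch of that separation, in Python rather than C++ (names are illustrative): the vector type carries the algebra, while the point type only knows how to be offset by a vector or differenced into one.

from dataclasses import dataclass

@dataclass(frozen=True)
class Vector3:
    x: float
    y: float
    z: float
    def __add__(self, other):            # vector + vector -> vector
        return Vector3(self.x + other.x, self.y + other.y, self.z + other.z)
    def scaled(self, k):                 # scalar * vector -> vector
        return Vector3(k * self.x, k * self.y, k * self.z)

@dataclass(frozen=True)
class Point3:
    x: float
    y: float
    z: float
    def offset(self, v: Vector3):        # point + vector -> point
        return Point3(self.x + v.x, self.y + v.y, self.z + v.z)
    def vector_to(self, other):          # point - point -> vector
        return Vector3(other.x - self.x, other.y - self.y, other.z - self.z)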
In the context of game mathematics there is no difference.
Points are elements of an affine space.† Vectors are elements of a vector (aka linear) space. When you choose an origin in an affine space it automatically induces a linear structure on that affine space. The contrary is also true: if you have a vector space it already satisfies all the axioms of an affine space.
The fact is that when it comes to computation, the only way to represent an affine space numerically is to use tuples of numbers, which also form a vector space.
Each object in a game always has an origin, and it is crucial to know where it is. That origin is set relative to the origin of the world, which is set relative to the origin of the camera/viewport. The vertices of the object are represented as vectors, offsets from the object origin. You use matrix multiplication to transform the objects; that too is purely a vector space operation (you cannot multiply an affine point by a matrix without specifying the origin first). Etc, etc... As we see, all those triplets of numbers that we might think of as "points" are actually vectors in the local coordinate system.
So is there any reason to distinguish between the two outside the study of algebra? It is an unnecessary abstraction, and unnecessary abstractions are harmful (KISS). So my answer is no, just go with a single vector type.
† Or any topological space outside the context of game development.
A vector can be pictured as a directed line segment: a sequence of points that can be represented by just two of them, the starting point and the ending point.
If you take the origin as the starting point, then you can describe your vector by giving only the ending point.
