I'm programming a really simple 2D collision response algorithm (thankfully), but even the really simple geometry concepts have been baffling me. Been studying! But...
In this case, it's vectors:
If an object hits a piece of geometry, I want to completely eliminate that object's momentum in the direction parallel to the normal of the geometry's wall. There's no friction or bounce involved, luckily, but I'm still not sure how to find a vector that will completely negate that momentum along the normal.
Thank you in advance!
Calculate the dot product of the geometry wall's normal with the velocity vector of the object. Provided the normal is unit length, the result equals the velocity component in the direction of the wall normal. Subtract the wall normal multiplied by this result from the velocity vector to remove all velocity in that direction.
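A minimal sketch of that in C++, assuming a hand-rolled Vec2 type (any vector math library will have equivalents):

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Remove the component of `velocity` along the (unit-length) wall normal `n`.
Vec2 cancelAlongNormal(Vec2 velocity, Vec2 n) {
    float vn = dot(velocity, n);        // signed speed along the normal
    return { velocity.x - vn * n.x,     // subtract that component,
             velocity.y - vn * n.y };   // leaving only the tangential part
}

int main() {
    Vec2 v = { 3.0f, -4.0f };
    Vec2 wallNormal = { 0.0f, 1.0f };   // wall facing straight up
    Vec2 slide = cancelAlongNormal(v, wallNormal);
    std::printf("%.1f %.1f\n", slide.x, slide.y);  // prints 3.0 0.0
}
```

In practice you would test `dot(velocity, n) < 0` first, so you only cancel velocity that is actually carrying the object into the wall.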
If you look up the reflection formula, v' = v - 2(v·n)n, there is a term there that subtracts twice the velocity in the direction of the geometry normal. Change that 2 to a 1, giving v' = v - (v·n)n, and the object will stop instead of bouncing.
Related
I am currently teaching myself linear algebra in games and I almost feel ready to use my new-found knowledge in a simple 2D space. I plan on using a math library, with vectors/matrices etc. to represent positions and direction unlike my last game, which was simple enough not to need it.
I just want some clarification on this issue. First, is it valid to express a position in 2D space in 4-component homogeneous coordinates, like this:
[400, 300, 0, 1]
Here, I am assuming, for simplicity that we are working in a fixed resolution (and in screen space) of 800 x 600, so this should be a point in the middle of the screen.
Is this valid?
Suppose that this position represents the position of the player. If I used a vector, I could represent the direction the player is facing:
[400, 400, 0, 0]
So this vector would represent that the player is facing the bottom of the screen (if we are working in screen space).
Is this valid?
Lastly, if I wanted to rotate the player by 90 degrees, I know I would multiply the vector by a matrix/quaternion, but this is where I get confused. I know that quaternions are more efficient, but I'm not exactly sure how I would go about rotating the direction my player is facing.
Could someone explain the math behind constructing a quaternion and multiplying it by my facing vector?
I also heard that OpenGL and D3D represent vectors in a different manner, how does that work? I don't exactly understand it.
I am trying to start getting a handle on basic linear algebra in games before I step into a 3D space in several months.
You can represent your position as a 4D coordinate; however, I would recommend using only the dimensions that are needed (i.e. a 2D vector).
The direction is usually expressed as a vector that starts at the player's position and points the appropriate way. So a direction vector of (0, 1) would be much easier to handle.
Given that vector, you can use a rotation matrix. Quaternions are not really necessary in that case, because you don't want to rotate about arbitrary axes; you just want to rotate about the z-axis. Your helper library should provide methods to create such a matrix and to transform the vector with it (transform it as a normal).
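For example, a minimal sketch of rotating a facing vector about the z-axis with a plain 2D rotation matrix (your library's helper will do the same thing under the hood):

```cpp
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Rotate `v` counter-clockwise by `angle` radians about the z-axis.
// This is the 2x2 rotation matrix [cos -sin; sin cos] applied to v.
Vec2 rotate(Vec2 v, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x - s * v.y,
             s * v.x + c * v.y };
}

int main() {
    Vec2 facing = { 0.0f, 1.0f };                     // facing "down" in screen space
    Vec2 turned = rotate(facing, 3.14159265f / 2.0f); // 90 degrees
    std::printf("%.2f %.2f\n", turned.x, turned.y);   // approximately (-1, 0)
}
```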
I am not sure about the difference between OpenGL's and D3D's representation of vectors, but I think it comes down to convention and memory layout (row vectors vs. column vectors), which should be something you don't need to worry about at this point.
I cannot answer all of your questions, but in terms of what is 'valid' or not: it depends entirely on whether it contains all of the information that you need and makes sense to you.
Furthermore, it is a little strange to have the direction an object is facing be a non-unit vector. Basically, you do not need to know how long the vector is to figure out the direction it is facing; you simply need the radians or degrees it has rotated from 0. Therefore, people usually encode the radians or degrees directly, as many linear algebra libraries will let you do vector math using them.
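For instance, a sketch of that representation: store one heading angle and derive the unit facing vector only when you need it (the convention below, 0 radians facing +x with angles growing counter-clockwise, is an assumption):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Convention (an assumption): 0 radians faces along +x, angles grow
// counter-clockwise. Turning is then just `heading += turnSpeed * dt`.
Vec2 directionFromHeading(float headingRadians) {
    return { std::cos(headingRadians), std::sin(headingRadians) };
}
```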
Hey, so after reading this article I've been left with a few questions I hope to resolve here.
My understanding is that the goal of any multi-dimensional collision response is to convert it to a 1D collision by putting the bodies on some kind of shared axis. I've deduced from the article that the steps to responding to a 2D collision between 2 polygons are to
First find the velocity vector of each body's collision point
Find the relative velocity based on each collision point's velocity (see question 1)
Factor in how much of that velocity is along the "force transfer line" (see question 2), which is the only velocity that matters for the collision
Factor in elasticity
Factor in mass
Find the impulse / new linear velocity based on 2-4
Finally, find the new angular velocity by working out how much of the impulse is "rotating around" each object's CM (which is what determines angular acceleration)
All these steps basically figure out how much velocity each point is coming at the other with after each velocity is translated to a new 1D coordinate system, right?
Question 1: The article says relative velocity is meant to find an expression for the velocity with which the colliding points are approaching each other, but to me it seems as though it is simply the vector of
CM 1 -> CM 2, with magnitude based on each point's velocity. I don't understand the reasoning behind even including the CMs in the calculations, since it is the points colliding, not the CMs. Also, I like visualizing things, so how does relative velocity translate geometrically, and how does it work toward the goal of getting a 1D collision problem?
Question 2: The article states that the only force during the collision is in the direction perpendicular to the impacted edge, but how was this decided? Also, how can there only be force in one direction when each body is supposed to end up bouncing off in 2 different directions?
"All these steps basically figure out how much velocity each point is coming at the other with after each velocity is translated to a new 1D coordinate system, right?"
That seems like a pretty good description of steps 1 and 2.
"Question 1: The article says relative velocity is meant to find and expression for the velocity with which the colliding points are approaching each other, but to me it seems as though is simply the vector of CM 1 -> CM 2, with magnitude based on each point's velocity."
No, imagine both CMs almost stationary, but one rectangle rotating and striking the other. The relative velocity of the colliding points will be almost perpendicular to the displacement vector between CM1 and CM2.
"...How does relative velocity translate geometrically?"
Zoom in on the site of the collision, just before impact. If you are standing on the collision point of one body, you see the collision point on the other body approaching you with a certain velocity (in your frame, the one in which you are standing still).
"...And how does it work toward the goal of getting a 1D collision problem?"
At the site of collision, it is a 1D collision problem.
"Question 2: The article states that the only force during the collision is in the direction perpendicular to the impacted edge, but how was this decided?"
It looks like an arbitrary decision to make the surfaces slippery, in order to make the problem easier to solve.
"Also how can [there] only be force in one direction when each body is supposed to end up bouncing off in 2 different directions."
Each body experiences a force in one direction. It departs in a certain direction, rotating with a certain angular velocity. I can't parse the rest of the question.
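Putting the whole sequence from the question together, here is a minimal sketch of a frictionless 2D impulse resolution in C++. The `Body` struct and its field names are hypothetical, and the normal `n` is assumed to be unit length and to point from body `a` toward body `b`:

```cpp
struct Vec2 { float x, y; };

Vec2  operator+(Vec2 a, Vec2 b)  { return { a.x + b.x, a.y + b.y }; }
Vec2  operator-(Vec2 a, Vec2 b)  { return { a.x - b.x, a.y - b.y }; }
Vec2  operator*(float s, Vec2 v) { return { s * v.x, s * v.y }; }
float dot(Vec2 a, Vec2 b)   { return a.x * b.x + a.y * b.y; }
float cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }        // 2D cross product (a scalar)
Vec2  angularPart(float w, Vec2 r) { return { -w * r.y, w * r.x }; } // omega x r in 2D

// Hypothetical body representation, just for this sketch.
struct Body {
    Vec2  position, velocity;
    float angularVelocity;      // radians per second, counter-clockwise
    float invMass, invInertia;  // use 0 for immovable geometry
};

// Resolve a frictionless collision at `contact`, with unit normal `n`
// pointing from a toward b, and restitution (elasticity) e in [0, 1].
void resolveCollision(Body& a, Body& b, Vec2 contact, Vec2 n, float e) {
    // CM -> contact point; this offset is why the CMs enter the calculation.
    Vec2 ra = contact - a.position;
    Vec2 rb = contact - b.position;

    // Velocity of each contact point (linear plus rotational part), then the
    // relative velocity of the two points projected onto the normal: this
    // projection is the 1D collision problem.
    Vec2 va = a.velocity + angularPart(a.angularVelocity, ra);
    Vec2 vb = b.velocity + angularPart(b.angularVelocity, rb);
    float approach = dot(vb - va, n);
    if (approach > 0.0f) return;  // contact points are already separating

    // Impulse magnitude: elasticity, masses, and the rotational terms.
    float denom = a.invMass + b.invMass
                + cross(ra, n) * cross(ra, n) * a.invInertia
                + cross(rb, n) * cross(rb, n) * b.invInertia;
    float j = -(1.0f + e) * approach / denom;

    // Apply the impulse along the normal. The lever arm (cross(r, impulse))
    // is what turns part of it into a change in angular velocity.
    Vec2 impulse = j * n;
    a.velocity = a.velocity - a.invMass * impulse;
    b.velocity = b.velocity + b.invMass * impulse;
    a.angularVelocity -= a.invInertia * cross(ra, impulse);
    b.angularVelocity += b.invInertia * cross(rb, impulse);
}
```

Note how the line `a.velocity + angularPart(...)` is exactly the answer's point about a rotating body: the contact point's velocity is the CM velocity plus a rotational term, so it need not point along CM1 -> CM2 at all.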
I am quite new to this, and I've heard that I need to get my inverse projection matrix and so on to create a ray from a 2D point to a 3D world point. However, since I'm using OpenGL ES there are not as many helper methods as there would be normally (and I simply don't know how to do it), so I'm using a trigonometric formula instead.
Each time I iterate one step down the negative Z-axis, I multiply the Y-position on the screen (-1 to 1) by:
-z / cot(myAngle / 2)
And the X position likewise, but with a coefficient equal to the aspect ratio.
myAngle is the frustum perspective angle.
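For concreteness, here is that construction as code (a sketch; treating myAngle as the vertical field-of-view in radians and the screen coordinates as already normalized to [-1, 1] are my assumptions):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A point on the view ray through normalized screen coordinates
// (screenX, screenY in [-1, 1]) at view-space depth z (negative, since
// the camera looks down the negative Z-axis).
Vec3 pointOnRay(float screenX, float screenY, float z,
                float myAngle, float aspect) {
    float scale = -z * std::tan(myAngle / 2.0f);  // same as -z / cot(myAngle / 2)
    return { screenX * scale * aspect,  // X also gets the aspect-ratio coefficient
             screenY * scale,
             z };
}
```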
This works really well for me and I get very accurate values, so what I am wondering is: why should I use the inverse of the projection matrix and multiply it with some stuff instead of using this?
Most of the time you have a matrix lying around for your OpenGL camera. Using an inverse matrix is simple when you already have a camera matrix on hand. It is also (oh so very slightly, at computer speeds) faster to do a matrix multiply. And in cases where you are doing a bajillion of these calculations per frame, it can matter.
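For example, with a math library such as GLM (assuming you already have your view and projection matrices lying around, as described above), the matrix route is only a couple of calls:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>  // glm::unProject

// World-space ray direction through a window-space point. `view` and `proj`
// are the camera matrices you already have; `viewport` is (x, y, width, height).
glm::vec3 rayThrough(glm::vec2 win, const glm::mat4& view,
                     const glm::mat4& proj, const glm::vec4& viewport) {
    // Unproject the pixel at the near and far planes (window z = 0 and 1);
    // internally this multiplies by the inverse of proj * view.
    glm::vec3 nearPt = glm::unProject(glm::vec3(win, 0.0f), view, proj, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(win, 1.0f), view, proj, viewport);
    return glm::normalize(farPt - nearPt);
}
```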
Here is some good info on getting started on a camera class if you are interested:
Camera Class
And some matrix resources
Depending on what you are working on, I wouldn't worry too much about the 'best way to do it.' You just want to make sure you understand what your code is doing then keep improving it.
So I have written a quaternion-based 3D camera oriented toward new programmers, so it is ultra easy for them to integrate and begin using.
While I was developing it, at first I would take user input as Euler angles, then generate a quaternion based off of the input for that frame. I would then take the camera's quaternion and multiply it by the one we generated for the input, and in theory that should simply add the input rotation to the current state of the camera's rotation, and things would be all fat and happy. Let's call this: Accumulating Quaternions, because we are storing and adding quaternions only.
But I noticed that there was a problem with this method. The more I used it, even if I was only rotating on one Euler angle, say Yaw, it would, over some iterations, begin bleeding over into another, say Pitch. It was slight, but fairly unacceptable.
So I did some more research and found an article stating it was better to accumulate Euler angles, so the camera stores its current rotation as Euler angles, and input is simply added to them each frame. Then I generate a quaternion from them each frame, which is in turn used to generate my rotation matrix. And this fixed the issue of rotation bleeding into improper axes.
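A minimal sketch of that accumulate-Euler-angles scheme, using GLM's quaternion helpers (the `Camera` struct and its field names are mine, not from the article):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>  // glm::quat, glm::mat4_cast

// The Euler angles are the canonical state; the quaternion is derived.
struct Camera {
    float pitch = 0.0f, yaw = 0.0f, roll = 0.0f;  // radians

    void applyInput(float dPitch, float dYaw, float dRoll) {
        pitch += dPitch;  // plain additions: roundoff cannot bleed between
        yaw   += dYaw;    // axes, because each axis is stored independently
        roll  += dRoll;
    }

    glm::mat4 rotationMatrix() const {
        // Rebuilt from scratch every frame, so per-frame error never accumulates.
        glm::quat q(glm::vec3(pitch, yaw, roll));
        return glm::mat4_cast(q);
    }
};
```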
So do any Stack Overflow members have any insight into this problem? Is that a proper way of doing things?
Multiplying quaternions is going to suffer from accumulation of floating-point roundoff issues (even simple angles like 45 degrees won't be exact). It's a great way to composite rotations, but the precision of each of your quaternion components is going to drop off over time. The bleed-through is one side-effect; a visually worse one is that your quaternion could start incorporating a scale factor - to recover that, you'd have to renormalize back to Euler angles in any case. A fixed-point Euler angle isn't going to accumulate roundoff.
The cost of recalculating the quaternion per frame is minimal; I wouldn't bother trying to optimize it out. You could probably allow a few quaternions to accumulate before you renormalize to get the accuracy back, but it really isn't worth the effort.
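If you do let a few quaternions accumulate, the recovery step is simply a renormalization (a sketch, again using GLM):

```cpp
#include <glm/gtc/quaternion.hpp>

// Dividing by the length makes the quaternion a pure rotation again,
// removing any scale factor that roundoff has mixed in.
glm::quat renormalize(const glm::quat& accumulated) {
    return glm::normalize(accumulated);
}
```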
Accumulation is an inexact process. Accumulating lots of incremental rotations will accumulate roundoff error whether you do it with quaternions or matrices.
I imagine something like this: you got your code up and running, but noticed that after a certain amount of navigation your camera was heeling over annoyingly -- violating an invariant you hadn't thought of in advance. Effectively, you've realized you don't want to accumulate rotations; instead you want to do something else.
You can look at this as more of an interface design issue than a numerical accuracy issue. Basically, people expect a camera to navigate according to pitch, yaw, and roll, so choosing to control and represent the angles directly can avoid a lot of problems.
The bummer here is that the quaternions seem to have become redundant (for this particular usage, at least). You still want the quaternions, though -- interpolating with the raw pitch/yaw/roll angles can be ugly. Again, it's an interface design question: you need to figure out where you'll need the quaternions, and how to get them in and out...
I've seen both argued for. I think the real question you'll have to deal with is flexibility in your camera system down the line; IMO yaw is generally more interesting in a third-person view (because you're going to rotate about the character's vertical axis). While you can arguably "yaw" around the vertical in first-person view as well, I'm not sure it's really the same thing.
However, I do think it's kind of a waste to recalculate your quaternions per frame. Perhaps it would be better to store the latest quaternions and mark them dirty if your frame receives input?
I am trying to animate an object; let's say it's a car. I want it to go from point (x1, y1, z1) to point (x2, y2, z2). It moves to those points, but it appears to be drifting rather than pointing in the direction of motion. So my question is: how can I solve this issue in my updateframe() event? Could you point me in the direction of some good resources?
Thanks.
First off, how do you represent the road?
I recently did exactly this, and I used Catmull-Rom splines for the road. To orient an object and make it follow the spline path, you interpolate the current x, y, z position from a parameter t that walks along the spline, then orient the object along the Frenet frame (Frenet coordinate system) for that particular position.
Basically, for each point you need 3 vectors: the tangent, the normal, and the binormal. The tangent is the actual direction you would like your object (the car) to point in.
I chose Catmull-Rom because it is easy to derive the tangent at any point: just take the (vector) difference between two points near the current one. (Say you are at t; pick t - epsilon and t + epsilon, with epsilon a small enough constant.)
For the other 2 vectors, you can use an iterative method: start with a known set of vectors at one end, and work out a new set based on the previous one each updateframe().
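A sketch of that tangent calculation, assuming a `splinePoint(t)` function that evaluates your Catmull-Rom spline (the name is hypothetical):

```cpp
#include <glm/glm.hpp>

// Evaluates your Catmull-Rom spline at parameter t (assumed to exist).
glm::vec3 splinePoint(float t);

// Approximate the tangent at t as the difference of two nearby points.
glm::vec3 tangentAt(float t) {
    const float epsilon = 0.001f;  // "small enough" step along the spline
    return glm::normalize(splinePoint(t + epsilon) - splinePoint(t - epsilon));
}
```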
You need to work out the initial orientation of the car, and the final orientation of the car at its destination, then interpolate between them to determine the orientation in between for the current timestep.
This article describes the mathematics behind doing the interpolation, as well as some other things to do with rotating objects that may be of use to you. gamasutra.com in general is an excellent resource for this sort of thing.
I think the interpolation is what is giving you the drift you are seeing.
You need to model the way steering works: your update function should 1) always move the car in the direction it is pointing, and 2) turn the car toward the current target. One should not affect the other, so that the turning can happen and complete more rapidly than the arriving.
In general terms, the direction the car is pointing is along its velocity vector, which is the first derivative of its position vector.
For example, if the car is going in a circle (of radius r) around the origin every n seconds then the x component of the car's position is given by:
x = r · sin(2πt/n)
and the x component of its velocity vector will be:
vx = dx/dt = r · (2π/n) · cos(2πt/n)
Do this for all of the x, y and z components, normalize the resulting vector and you have your direction.
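A sketch of that worked example in code, completing the circle with a cosine for the z component (an assumption; the answer only spells out x):

```cpp
#include <cmath>
#include <glm/glm.hpp>

const float kTwoPi = 6.28318531f;

// Position on a circle of radius r around the origin, period n seconds.
glm::vec3 circlePosition(float t, float r, float n) {
    float w = kTwoPi / n;  // angular frequency, 2*pi/n
    return { r * std::sin(w * t), 0.0f, r * std::cos(w * t) };
}

// Direction of motion: differentiate each component, then normalize.
glm::vec3 circleDirection(float t, float r, float n) {
    float w = kTwoPi / n;
    glm::vec3 velocity(r * w * std::cos(w * t), 0.0f, -r * w * std::sin(w * t));
    return glm::normalize(velocity);
}
```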
Always pointing the car toward the destination point is simple and cheap, but it won't work if the car is following a curved path, in which case you need to point the car along the tangent line at its current location (see the other answers, above).
Going from one position to another gives an object a velocity; a velocity is a vector, and normalising that vector gives you the direction vector of the motion, which you can plug into a "look at" matrix. Take the cross product of the up vector with this direction to get the side vector, and hey presto, you have a full matrix for the direction control of the object in motion.
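A minimal sketch of building that matrix by hand, assuming a world up of (0, 1, 0) and a direction that is never parallel to it:

```cpp
#include <glm/glm.hpp>

// Build an orientation matrix from the normalized velocity (forward) and a
// world up vector: cross up with forward for the side, then re-derive up.
glm::mat3 orientationFromDirection(glm::vec3 forward /* normalized velocity */) {
    const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);  // assumed up axis
    glm::vec3 side = glm::normalize(glm::cross(worldUp, forward));
    glm::vec3 up   = glm::cross(forward, side);  // orthogonal to both
    return glm::mat3(side, up, forward);         // columns: side, up, forward
}
```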