I use slerp to interpolate between two quaternions representing rotations. The resulting rotation is then extracted as Euler angles to be fed into a graphics library. This mostly works, but I have the following problem: when rotating around two axes (one axis works just fine) in the direction of the green arrow, as shown in the left frame here, the rotation soon flips around and rotates from the opposite side in the opposite visual direction, as indicated by the red arrow in the right frame.
This may be logical from a mathematical perspective (although not to me), but it is undesired. How could I achieve an interpolation with no visual flipping and changing of directions when rotating around more than one axis, following the green arrow at all times until the interpolation is complete?
Thanks in advance.
Your description of the problem is a little hard to follow, quite frankly. But it sounds like you need to negate one of your quaternions.
Remember, each rotation can actually be represented by two quaternions, q and -q. But the Slerp path from q to w will be different from the path from (-q) to w: one will go the long way around, the other the short way around. It sounds like you're getting the long way when you want the short way.
Try taking the dot product of your two quaternions (i.e., the 4-D dot product), and if the dot product is negative, replace your quaternions q1 and q2 with -q1 and q2 before performing the Slerp.
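For example, a minimal sketch in plain Python (quaternions assumed to be unit length and stored as (w, x, y, z) tuples; the names are just for illustration):

    import math

    def slerp_shortest(q1, q2, t):
        # 4-D dot product between the two quaternions.
        dot = sum(a * b for a, b in zip(q1, q2))
        if dot < 0.0:                      # q and -q are the same rotation:
            q1 = tuple(-c for c in q1)     # flip one endpoint to take the short arc
            dot = -dot
        dot = min(1.0, dot)
        theta = math.acos(dot)             # angle between the two quaternions
        if theta < 1e-6:                   # nearly identical: plain lerp is close enough for a sketch
            return tuple(a + (b - a) * t for a, b in zip(q1, q2))
        s = math.sin(theta)
        w1 = math.sin((1.0 - t) * theta) / s
        w2 = math.sin(t * theta) / s
        return tuple(w1 * a + w2 * b for a, b in zip(q1, q2))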
How far is the total rotation? You may be asking for an interpolation between two orientations that are too far apart in angle. The math, quaternions or not, has trouble deciding which way to go, in a sense. Like not having enough keyframes in animation.
Determine a good intermediate orientation about halfway along, and make separate interpolations from the initial orientation to that intermediate one, and from the intermediate to the final.
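As a rough sketch of that idea (reusing the slerp_shortest helper from the sketch in the earlier answer, and assuming q_mid is an intermediate orientation you choose yourself rather than something computed automatically):

    def interpolate_via_keyframe(q_start, q_mid, q_end, t):
        # Route the interpolation through the hand-picked intermediate orientation:
        # the first half of t covers q_start -> q_mid, the second half q_mid -> q_end.
        if t <= 0.5:
            return slerp_shortest(q_start, q_mid, t * 2.0)
        return slerp_shortest(q_mid, q_end, (t - 0.5) * 2.0)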
Here is what I have so far.
I have a 3D model and I made a triangle mesh for it. I've calculated and applied normals to the model too.
I want to apply different textures to the triangles. I also have the direction vector for each texture I need.
For mapping, I do this:
I just calculate the dot product of each triangle's normal with each texture's direction vector, and compare the results to see which texture would be suitable based upon the dot product.
But I realised that it is not as straightforward as I thought it was, because two or more different triangles could be in almost the same orientation in 3D space, except that one could be facing towards me and the other could be facing the opposite direction (roughly parallel, but pointing opposite ways).
I think a better question is: how do I use the calculated dot product to distinguish which way a triangle is facing, so I know which image/texture should be used?
If the triangles are facing in opposite directions, the normals will also face in opposite directions, and the dot products will have opposite signs. Therefore the dot product gives you enough information to distinguish between the opposite faces. I can't think of a simple test which would give better results than the dot product.
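As a hedged sketch in plain Python (the names and data layout are made up for illustration): pick, per triangle, the texture whose direction vector gives the largest signed dot product with the triangle's normal.

    def best_texture(normal, textures):
        """normal: (x, y, z) unit normal of one triangle;
        textures: list of (name, direction) pairs with unit direction vectors."""
        def dot(a, b):
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
        # The sign matters: a triangle facing the opposite way gives a negative
        # dot product against the same direction, so taking the largest value
        # (not the largest absolute value) distinguishes opposite-facing triangles.
        return max(textures, key=lambda nt: dot(normal, nt[1]))[0]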
I am currently teaching myself linear algebra for games and I almost feel ready to use my new-found knowledge in a simple 2D space. I plan on using a math library, with vectors/matrices etc., to represent positions and directions, unlike my last game, which was simple enough not to need them.
I just want some clarification on this issue. First, is it valid to express a position in 2D space as a 4-component homogeneous coordinate, like this:
[400, 300, 0, 1]
Here I am assuming, for simplicity, that we are working at a fixed resolution (and in screen space) of 800 x 600, so this should be a point in the middle of the screen.
Is this valid?
Suppose that this position represents the position of the player. If I used a vector, I could represent the direction the player is facing:
[400, 400, 0, 0]
So this vector would represent that the player is facing the bottom of the screen (if we are working in screen space).
Is this valid?
Lastly, if I wanted to rotate the player by 90 degrees, I know I would multiply the vector by a matrix/quaternion, but this is where I get confused. I know that quaternions are more efficient, but I'm not exactly sure how I would go about rotating the direction my player is facing.
Could someone explain the math behind constructing a quaternion and multiplying it by my facing vector?
I also heard that OpenGL and D3D represent vectors in a different manner; how does that work? I don't exactly understand it.
I am trying to start getting a handle on basic linear algebra in games before I step into a 3D space in several months.
You can represent your position as a 4D coordinate; however, I would recommend using only the dimensions that are needed (i.e. a 2D vector).
The direction is usually expressed as a vector that starts at the player's position and points in the direction the player is facing. A direction vector of (0, 1) would be much easier to handle.
Given that vector you can use a rotation matrix. Quaternions are not really necessary in that case because you don't want to rotate about arbitrary axes; you just want to rotate about the z-axis. Your helper library should provide methods to create such a matrix and to transform the vector with it (transform it as a normal, i.e. without applying translation).
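Without assuming any particular math library, the rotation can be sketched in plain Python like this (note that in screen space with y pointing down, a mathematically counter-clockwise rotation will look clockwise on screen):

    import math

    def rotate2d(v, degrees):
        """Rotate a 2-D vector by the given angle (counter-clockwise in a
        conventional y-up coordinate system) using a plain 2x2 rotation matrix."""
        a = math.radians(degrees)
        c, s = math.cos(a), math.sin(a)
        x, y = v
        return (c * x - s * y, s * x + c * y)

    facing = (0.0, 1.0)            # facing "down" in your screen-space convention
    facing = rotate2d(facing, 90)  # roughly (-1.0, 0.0) after a 90 degree turn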
I am not sure about the difference between OpenGL's and D3D's representation of vectors, but I think it is all about memory usage, which shouldn't be something you need to worry about.
I cannot answer all of your questions, but in terms of what is 'valid' or not, it completely depends on whether the representation contains all of the information you need and makes sense to you.
Furthermore, it is a little strange to have the direction an object is facing be a non-unit vector. You do not need to know how long the vector is to figure out the direction the player is facing; you simply need the radians or degrees they have rotated from 0. Therefore people usually encode the radians or degrees directly, as many linear algebra libraries will let you do vector math using them.
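A minimal sketch of that idea in plain Python (assuming 0 radians means "facing right" and that y grows downward in screen space, so 90 degrees faces the bottom of the screen):

    import math

    heading = math.radians(90)           # stored facing angle
    heading += math.radians(90)          # rotate the player by another 90 degrees
    facing = (math.cos(heading), math.sin(heading))  # unit direction vector when needed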
I am using a 3D engine called Electro which is programmed using Lua. It's not a very good 3D engine, but I don't have any choice in the matter.
Anyway, I'm trying to take a flat quadrilateral and transform it to be in a specific location and orientation. I know exactly where it is supposed to go (i.e. I know the exact vertices where the corners should end up), but I'm hitting a snag in getting it rotated to the right place.
Electro does not allow you to apply transformation matrices. Instead, you must transform models by using built-in scale, position (that is, translate), and rotation functions. The rotation function takes an object and 3 angles (in degrees):
E.set_entity_rotation(entity, xangle, yangle, zangle)
The documentation does not specify this, but after looking through Electro's source, I'm reasonably certain that the rotation is applied in the order X rotation -> Y rotation -> Z rotation.
My question is this: If my starting object is a flat quadrilateral lying on the X-Z plane centered at the origin, and the destination position is in a different location and orientation where the destination vertices are known, how could I use Electro's rotation function to rotate it into the correct orientation before I move it to the correct place?
I've been racking my brain for two days trying to figure this out, looking at math that I don't understand dealing with Euler angles and such, but I'm still lost. Can anyone help me out?
Can you tell us more about the problem? It sounds odd phrased in this way. What else do you know about the final orientation you have to hit? Is it completely arbitrary or user-specified or can you use more knowledge to help solve the problem? Is there any other Electro API you could use to help?
If you really must solve this general problem, then too bad, it's hard, and underspecified. Here's some guy's code that may work, from euclideanspace.com.
First do the translation to bring one corner of the quadrilateral to the point you'd like it to be, then apply the three rotational transformations in succession.
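If you can recover the full rotation first, one possible route (a sketch, not anything Electro-specific) is to build the 3x3 rotation matrix whose columns are the images of the original X, Y and Z axes, e.g. a normalized edge of the destination quad, its face normal, and their cross product, and then decompose that matrix into X -> Y -> Z angles. Assuming column vectors and that Electro composes the rotations as Rz * Ry * Rx, the decomposition looks roughly like this (it ignores the gimbal-lock case where cos(y) is 0):

    import math

    def euler_xyz_from_matrix(r):
        """Decompose a 3x3 rotation matrix r (given as a list of rows, applied as
        Rz * Ry * Rx to column vectors) into X, Y, Z angles in degrees, suitable
        for something like E.set_entity_rotation."""
        # With R = Rz(z) * Ry(y) * Rx(x), the bottom row is
        # [-sin(y), cos(y)*sin(x), cos(y)*cos(x)].
        y = math.asin(max(-1.0, min(1.0, -r[2][0])))
        x = math.atan2(r[2][1], r[2][2])
        z = math.atan2(r[1][0], r[0][0])
        return tuple(math.degrees(a) for a in (x, y, z))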
If you know where the quad is, and you know exactly where it needs to go, and you're certain that there are no distortions of the quad to fit it into the place where it needs to go, then you should be able to figure out the angles using the vector scalar product.
If you have two vectors, the angle between them can be calculated by taking the dot product.
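In code that looks something like this (plain Python, vectors as 3-tuples):

    import math

    def angle_between(a, b):
        """Angle in radians between two 3-D vectors, via the dot product."""
        dot = sum(x * y for x, y in zip(a, b))
        mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return math.acos(max(-1.0, min(1.0, dot / mag)))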
I'm an artist involved with building various sorts of computer controlled machines. I've started prototyping a gimbal-based XY painting machine and have realized that the maths needed are out of my reach. I'm a decent enough programmer, but not strong in math, especially 3D math.
To get a sense of what I'm needing to do, it might be helpful to look at the rig:
Early prototype:
http://roypardi.com/gimble/gimbleSmall.MOV (small video)
http://roypardi.com/gimble/gimbleLarge.mov (larger video)
The two inner rings represent the X/Y axes and are controlled by stepper motors. I want to be able to use both raster images and vector data (G-code). So I need to be able to address a point in 2D space on the paper, taken from my data, and have the gimbal figure out what orientation it needs to be in to reach it (i.e. how much to step each motor).
I've been searching out 2D > 3D projection, Euler angles, etc. but I'm out of my depth. Any pointers, pushes in the right direction, or code snippets would be most welcome. I can make sense of most programming languages.
Very nice machine you have made. I hope this works for you; I believe it is correct.
The way I see it, getting one angle is simple, but the other is slightly harder to visualise, as we have tilted the axis which it turns upon.
I'm going to avoid using tan, as when programming this could result in a division by zero, which could be frustrating. Also, Z is going to be the height of the origin above the paper.
YAxis = arcsin( X / sqrt(X² + Z²))
XAxis = arcsin( Y / sqrt(Y² + X² + Z²))
or we could use
XAxis = arcsin(Y / sqrt(Y² + Z²))
YAxis = arcsin( X / sqrt(X² + Y² + Z²))
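In code, the two candidate pairs might look like this (a plain Python sketch; X and Y are the target point's offsets on the paper from the point directly below the ring centre, Z the height of the rings above the paper, and which pair applies depends on how the rings are nested, as asked in the edit below):

    import math

    def gimbal_angles(x, y, z):
        """Return the two candidate (XAxis, YAxis) angle pairs, in degrees,
        for aiming at point (x, y) on the paper from height z."""
        # First pair above.
        y_axis_1 = math.degrees(math.asin(x / math.sqrt(x * x + z * z)))
        x_axis_1 = math.degrees(math.asin(y / math.sqrt(x * x + y * y + z * z)))
        # Second pair above.
        x_axis_2 = math.degrees(math.asin(y / math.sqrt(y * y + z * z)))
        y_axis_2 = math.degrees(math.asin(x / math.sqrt(x * x + y * y + z * z)))
        return (x_axis_1, y_axis_1), (x_axis_2, y_axis_2)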
Also, I'd very much like to see a video of this plotting, if it works.
Edit:
After thinking about it, I believe only one solution will work; it depends on which axis is affected by the other. Is the Y axis in the middle, or the X axis?
I think it's a problem of simple trigonometry: http://en.wikipedia.org/wiki/Trigonometry
Let's say that the distance from the centre of your rings to the nearest point on the paper (which I'll call point 'O' for 'Origin') is distance X.
Take another point P directly north of O, whose distance from O is Y.
To paint this point, you need the angle alpha such that tan(alpha) = Y/X, i.e. you can calculate alpha using the formula "arctan(Y/X)" [arctan is sometimes also known as atan]. Arctan is a trigonometric function, which I think you'll probably find defined in the API of a general-purpose math library.
The above is the simplest case.
The only other case that I can think of is when the point P isn't due north. Instead of being due north, let's say that its distance is Y1 to the north, and Y2 to the east. The solution is two angles (one angle for each of two rings), one of which is "arctan(Y1/X)" and the other of which is "arctan(Y2/X)".
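Something like this, as a sketch (atan2 is used instead of a bare division so x == 0 is not a problem):

    import math

    def ring_angles(x, y1, y2):
        """x: distance from the ring centre to the nearest point O on the paper;
        y1, y2: the target's offsets to the north and east of O.
        Returns the two ring angles in degrees."""
        return math.degrees(math.atan2(y1, x)), math.degrees(math.atan2(y2, x))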
Perhaps I misunderstand, but I don't believe a gimbal will do what you want. A gimbal can point in any 3D direction, but it cannot move to arbitrary points in 3D space. If the plane of the paper intersects the volume swept by the pen held in the gimbal, the pen might be able to draw a circle, but nothing more. Even drawing a circle is not a sure thing, since in this case the paper would also intersect the volume swept by the gimbal rings; trying to orient the pen would make a ring hit the paper.
I think what you want is a plotter, not a gimbal.
So I have written a Quaternion based 3D Camera oriented toward new programmers so it is ultra easy for them to integrate and begin using.
While I was developing it, at first I would take user input as Euler angles, then generate a quaternion based off of the input for that frame. I would then take the camera's quaternion and multiply it by the one we generated for the input, and in theory that should simply add the input rotation to the current state of the camera's rotation, and things would be all fat and happy. Let's call this accumulating quaternions, because we are storing and composing quaternions only.
But I noticed that there was a problem with this method. The more I used it, even if I was only rotating on one Euler angle, say Yaw, it would, over some iterations, begin bleeding over into another, say Pitch. It was slight, but fairly unacceptable.
So I did some more research and found an article stating it was better to accumulate Euler angles, so the camera stores its current rotation as Euler angles, and input is simply added to them each frame. Then I generate a quaternion from them each frame, which is in turn used to generate my rotation matrix. And this fixed the issue of rotation bleeding into the wrong axes.
So do any Stack Overflow members have any insight into this problem? Is that a proper way of doing things?
Multiplying quaternions is going to suffer from accumulation of floating-point roundoff issues (even simple angles like 45 degrees won't be exact). It's a great way to composite rotations, but the precision of each of your quaternion components is going to drop off over time. The bleed-through is one side effect; a visually worse one is that your quaternion could start incorporating a scale factor, and to recover from that you'd have to renormalize back to Euler angles in any case. A fixed-point Euler angle isn't going to accumulate roundoff.
Recalculating the quaternion per-frame is minimal. I wouldn't bother trying to optimize it out. You could probably allow a few quaternions to accumulate before you renormalized to get the accuracy back, but it really isn't worth the effort.
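If you do keep composing quaternions, the usual counter-measure (a sketch, not taken from your camera code) is to renormalize the accumulated quaternion after each multiply, or at least every few frames:

    import math

    def quat_multiply(a, b):
        """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw * bw - ax * bx - ay * by - az * bz,
                aw * bx + ax * bw + ay * bz - az * by,
                aw * by - ax * bz + ay * bw + az * bx,
                aw * bz + ax * by - ay * bx + az * bw)

    def normalized(q):
        """Rescale to unit length so roundoff doesn't creep in as a scale factor."""
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)

    # Per frame: compose the input rotation, then renormalize the result, e.g.
    # camera_q = normalized(quat_multiply(camera_q, input_q))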
Accumulation is an inexact process. Accumulating lots of incremental rotations will accumulate roundoff error whether you do it with quaternions or matrices.
I imagine something like this: you got your code up and running, but noticed that after a certain amount of navigation your camera was heeling over annoyingly -- violating an invariant you hadn't thought of in advance. Effectively, you've realized you don't want to accumulate rotations; instead you want to do something else.
You can look at this as more of an interface design issue than a numerical accuracy issue. Basically, people expect a camera to navigate according to pitch, yaw, and roll, so choosing to control and represent the angles directly can avoid a lot of problems.
The bummer here is that the quaternions seem to have become redundant (for this particular usage, at least). You still want the quaternions, though -- interpolating with the raw pitch/yaw/roll angles can be ugly. Again, it's an interface design question: you need to figure out where you'll need the quaternions, and how to get them in and out...
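For what it's worth, the "store angles, rebuild the quaternion per frame" approach the question settled on can be sketched like this (plain Python, reusing the quat_multiply helper from the sketch above; the axis order is just one common convention, not necessarily the right one for your camera):

    import math

    def axis_angle_quat(axis, radians):
        """Unit quaternion (w, x, y, z) for a rotation of `radians` about `axis`,
        where `axis` is a unit 3-vector."""
        s = math.sin(radians / 2.0)
        return (math.cos(radians / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

    # Per frame, rebuild the orientation from the stored angles; one common
    # choice is yaw about +Y, pitch about +X, roll about +Z, composed as:
    # q = quat_multiply(axis_angle_quat((0, 1, 0), yaw),
    #         quat_multiply(axis_angle_quat((1, 0, 0), pitch),
    #                       axis_angle_quat((0, 0, 1), roll)))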
I've seen both argued for. I think the real question you'll have to deal with is flexibility in your camera system down the line; IMO yaw is generally more interesting in a third-person view (because you're going to rotate about the character's vertical axis). While you can arguably "yaw" around the vertical in first-person view as well, I'm not sure it's really the same thing.
However, I do think it's kind of a waste to recalculate your quaternions per-frame. Perhaps it would be better to store the latest quaternions and mark them dirty if your frame receives input?