How to deal with negative depth in 3D perspective projection - math

Background
This question is very similar to this question asked 3 years ago. Basically, I want to re-create a rudimentary first-person graphics engine as a learning experience.
So, say for example, that we're in a 3D space where z is representative of depth - x and y map to the x and y coordinates of the 2D space. If this coordinate system's origin is the camera, then a point at (0, 0, 1) would be located directly in front of the camera and a point at (0, 0, -1) would be located directly behind the camera.
Adding depth to this projection simply requires us to divide our x and y components by the depth (in this case, z). In practice, this makes sense to me and it appears to work.
Until...
...the depth becomes negative. If the depth is negative and you divide x and y by the depth, x and y's signs will change. We know that logically, however, this shouldn't be the case.
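For concreteness, a minimal sketch of the projection I'm describing (the types and names are mine, not from any particular engine):

#include <cstdio>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Naive perspective projection: divide x and y by the depth z.
// This is exactly the step that breaks down once z <= 0.
Vec2 projectNaive(const Vec3& p) {
    return { p.x / p.z, p.y / p.z };
}

int main() {
    Vec2 a = projectNaive({1, 1, 4});    // in front of the camera: fine
    Vec2 b = projectNaive({1, 1, -4});   // behind the camera: both signs flip
    std::printf("(%g, %g) vs (%g, %g)\n", a.x, a.y, b.x, b.y);
}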
I've tried a few things so far:
Using the absolute value of the depth - this wasn't ideal. Say there are points (1, 1, 4) and (1, 1, -4); they would then project onto the same location.
Mapping negative depths to small positive decimals. If we have a negative depth, we map it to a positive decimal (between 0 and 1), allowing our x and y coordinates to stretch toward infinity; the larger the negative depth, the closer to zero its representative decimal. I feel like this might be a potential solution, but I'm still struggling a little bit with the concept.
So, how do you handle negative depths in your perspective projections?
I'm very new to graphics, so if I'm omitting any information that's needed to answer this question, feel free to ask. I wanted to keep this implementation agnostic since I feel like this question tends more towards the theoretical aspect of perspective projection.
EDIT
This video identifies the problem I'm trying to solve. It's a great video and is also what inspired me to start this little project - but I'm just wondering if there was a generally 'agreed-upon' way to handle this particular case.

You are doing a point projection, which means that your projected point in 2D is exactly the point where the line between 3D object and 3D camera would pass through the canvas. For positive depth, that intersection is between object and camera. For negative depth, the intersection is beyond the camera. But it's still the same line, hence swapping signs makes perfect sense.
Of course, actually drawing stuff with negative depth doesn't make that much sense, since usually you won't see things behind your camera. And if you do, then you have some extremely wide-angle lens, so treating the canvas as a plane in space is no longer accurate, and you'll have to switch to more complex projections to simulate fish-eye lenses and the like.
It might however be that you want to draw a triangle or other geometric primitive where just one of the corners has negative depth while the others are positive. The usual approach in such scenarios is to clip the object to the frustum, more particularly to intersect it with the near plane of the frustum, thus getting rid of all points with negative depth. Usually your graphics pipeline can take care of this clipping.
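To sketch what that near-plane clip looks like for a single edge (a real pipeline clips whole polygons against all six frustum planes; the structure here is just illustrative):

struct Vec3 { double x, y, z; };

// Clip the segment [a, b] against the plane z = znear, keeping the part
// with z >= znear. Returns false if the segment lies entirely behind it.
bool clipToNearPlane(Vec3& a, Vec3& b, double znear) {
    if (a.z < znear && b.z < znear) return false;   // fully behind: discard
    if (a.z < znear || b.z < znear) {
        double t = (znear - a.z) / (b.z - a.z);     // intersection parameter
        Vec3 hit = { a.x + t * (b.x - a.x),
                     a.y + t * (b.y - a.y),
                     znear };
        (a.z < znear ? a : b) = hit;                // replace the bad endpoint
    }
    return true;   // every surviving z is >= znear > 0, so dividing is safe
}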

I will try to provide a more math-y answer for anyone interested.
The mathematical theory behind this is called projective geometry. You start with a three-dimensional space and then split it into equivalence classes where two points a and b are equivalent if there is a factor f so that f*a == b. So for example (4, 4, 4) would be in the same class as (1, 1, 1), and (3, 6, 9) would be in the same class as (100, 200, 300). Geometrically speaking, you look at the set of straight lines through (0, 0, 0).
If you pick the point with z == 1 from every equivalence class you basically get a 2D space. This is exactly what "perspective projection" is. However, the equivalence classes for points like (1, 1, 0) do not have such a point. So what you actually get is a 2D space + some additional "points at infinity".
You can think of these points as a circle that goes around your coordinate system, but with an infinite radius. Also, opposite points are identical, so stuff that goes out on one end wraps around and comes back in on the opposite side. This means that straight lines are actually just circles that contain a point at infinity.
To make this concrete: if you want to render a straight line from (1, 1, 4) to (1, 1, -4), you first normalize both of them to z == 1: (0.25, 0.25, 1) and (-0.25, -0.25, 1). But now when you draw the line between them, you need to go "the other way around", i.e. leave the screen in one direction and come back in on the opposite side. (You can skip the "come back in" part though, because it is behind the camera.)
For implementation it is unfortunately not sufficient to map (1, 1, -4) to (inf, inf, 1) because that way there would be no way to know the slope of the line. You can either fake it by using a very large number instead of infinity or you can do it properly and handle these special cases throughout your code.
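As a sketch of the "very large number" approach when normalizing to z == 1 (the epsilon clamp is my own choice, not a standard value):

#include <cmath>

struct Vec3 { double x, y, z; };

// Normalize a homogeneous point to its z == 1 representative.
// Points with z near 0 lie "at infinity" and have no such representative;
// clamping z away from zero fakes them as very distant finite points.
Vec3 toScreenPlane(Vec3 p) {
    const double eps = 1e-9;
    double z = (std::fabs(p.z) < eps) ? (p.z < 0.0 ? -eps : eps) : p.z;
    return { p.x / z, p.y / z, 1.0 };
}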

Related

View matrix: to invert rotation or to not invert rotation?

Edit: My question may be too complex for what I am really asking, so skip to the TLDR; if you need it.
I have been programming 3D graphics for a while now, and up until now I never seemed to have this issue; maybe this is the first time I really understand things like I should (or not). So here's the question...
My 3D engine uses the typical OpenGL legacy convention of an RH coordinate system, which means X+ is right, Y+ is up and Z+ is towards the viewer; Z- goes into the screen. I did this so that I could test my 3D math against the one in OpenGL.
To cope with the coordinate convention of Blender 3D/Collada, I rotate every matrix during importing by -90 degrees over the X axis. (Collada uses X+ right, Y+ forward, Z+ up, if I am not mistaken.)
When I just use the projection matrix, an identity view matrix and a model matrix that transforms a triangle to position at (0, 0, -5), I will see it because Z- is into the screen.
In my (6DOF space) game, I have spaceships and asteroids. I use double-precision coordinates (because they are huge) and by putting the camera inside a spaceship, the coordinates are made relative every frame so they are precise enough to fit a single-precision coordinate for rendering on the GPU.
So, now I have a spaceship, the camera is inside, and its rotation quaternion is identity. This gives an identity matrix and, if I recall correctly, rows 1-3 represent the X, Y and Z axes of where the object is pointing. To move the ship, I use this Z axis to go forward. With the identity matrix, the Z-axis will be (0, 0, 1).
Edit: actually, I don't take the columns from the matrix, I extract the axes directly from the quaternion.
Now, when I put the camera in the spaceship, this means that its nose is pointing at (0, 0, 1), but OpenGL will render with -1 going into the screen because of its conventions.
I always heard that when you put the camera inside an object in your scene, you need to take the model matrix and invert it. It's logical: if the ship is at (0, 0, 1000) and an asteroid is at (0, 0, 1100), then it makes sense that you need to put the camera at (0, 0, -1000) so that the ship will be at (0, 0, 0) and the asteroid will be at (0, 0, 100).
So when I do this, the ship will be rendered with its nose looking at Z-, but now, when I start moving, my ship moves along the Z axis of its (still identity) rotation, which is (0, 0, 1), and the ship backs up instead of going forward. Which makes sense if (0, 0, 1) is towards the viewer...
So now I am confused... how should I handle this correctly??? Which convention did I use incorrectly? Which convention did I forget? It doesn't seem logical, for example, to invert the rotation of the ship when calculating the movement vectors...
Can someone clarify this for me? This has been bothering me for a week now and I don't seem to get it straight without doubting that I am making new errors.
Edit: isn't it at all very strange to invert the rotational part of the model's matrix for a view matrix? I understand that the translation part should be inverted, but the view should still look in the same direction as the object when it would be rendered, no?
TLDR;
If you take legacy OpenGL, set a standard projection matrix and an identity modelview matrix and render a triangle at (0, 0, -5), you will see it because OpenGL looks at Z-.
But if you take the Z-axis from the view matrix (third row), which is (0, 0, 1) in an identity matrix, this means that going 'forward' means getting further away from that triangle, which looks illogical.
What am I missing?
Edit: As the answer is hidden in many comments below, I summarize it here: conventions! I chose to use the OpenGL legacy convention but I also chose to use my own physics convention and they collide, so I need to compensate for that.
Edit: After much consideration, I have decided to abandon the OpenGL legacy convention and use whatever looks most logical to me, which is the left-handed system.
I think the root cause of your confusion might lie here
So, now I have a spaceship, the camera is inside, and its rotation quaternion is identity. This gives an identity matrix and, if I recall correctly, rows 1-3 represent the X, Y and Z axes of where the object is pointing. To move the ship, I use this Z axis to go forward. With the identity matrix, the Z-axis will be (0, 0, 1).
Since we can assume that a view matrix contains only rotations and translations (no scaling/shear or perspective tricks), we know that the upper-left 3x3 sub-matrix is a pure rotation, and those are orthogonal by definition, so inverse(mat3(view)) equals transpose(mat3(view)), which is where your rows are coming from. In a standard matrix that you use to transform objects in a fixed coordinate frame (as opposed to moving the frame of reference), the columns simply show where the unit vectors for x, y and z (as well as the origin (0,0,0,1)) will be mapped by that matrix. By taking the rows, you are using the transpose, which, in this particular setup, is the inverse (not considering the last column containing the translation, of course).
The view matrix will transform from world space into eye space. As a result, inverse(view) will transform from eye space back to world space.
So, inverse(view) * (1,0,0,0) will give you the camera's right vector in world space, and inverse(view) * (0,1,0,0) the up vector; but as per convention the camera looks at -z in eye space, so the forward direction in world space will be inverse(view) * (0,0,-1,0), which, in your setup, is just the third row of the matrix negated.
(Camera position will be inverse(view) * (0,0,0,1) of course, but we have to do a bit more than just transposing to get the fourth column of inverse(view) right).
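To make this concrete, a small sketch; GLM is my choice of example matrix library here, and the same applies to any column-major math library:

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Any view matrix will do; lookAt is just a convenient way to build one.
    glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 5),   // eye
                                 glm::vec3(0, 0, 0),   // target
                                 glm::vec3(0, 1, 0));  // up
    glm::mat4 invView = glm::inverse(view);

    // Eye-space axes mapped back into world space:
    glm::vec3 right   = glm::vec3(invView * glm::vec4(1, 0, 0, 0));
    glm::vec3 up      = glm::vec3(invView * glm::vec4(0, 1, 0, 0));
    glm::vec3 forward = glm::vec3(invView * glm::vec4(0, 0, -1, 0)); // note -z
    glm::vec3 pos     = glm::vec3(invView * glm::vec4(0, 0, 0, 1));

    std::printf("forward = (%g, %g, %g), pos = (%g, %g, %g)\n",
                forward.x, forward.y, forward.z, pos.x, pos.y, pos.z);
    // Prints forward (0, 0, -1): the camera at (0, 0, 5) looks at the origin.
}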

DirectX negative W

I have really been trying to find an answer to this (at first sight) very basic question.
For simplicity, the depth test is disabled throughout the discussion below (it doesn't make a big difference here).
For example, say we have a triangle (after transformation) with the following float4 coordinates:
top CenterPoint: (0.0f, +0.6f, 0.6f, 1f)
basic point1: (+0.4f, -0.4f, 0.4f, 1f),
basic point2: (-0.4f, -0.4f, 0.4f, 1f),
I'm sending the float4s straight in and using a pass-through vertex shader (no transforms), so I'm sure about the input. And the result is reasonable:
But what will we get if we start to move the CenterPoint toward the camera position? In our case we don't have a camera, so we'll move this point toward minus infinity.
I get quite reasonable results as long as w (along with z) is positive.
For example, (0.0f, +0.006f, 0.006f, .01f) looks the same.
But what if I use the following coordinates: (0.0f, -0.6f, -1f, -1f)?
(Note: we have to swap the points or change the rasterizer state to prevent culling.)
According to a huge number of resources, there is a clip test like -w < z < w, so the GPU should cut that point off. And yes, in principle, I don't see the point itself. But the triangle is still visible! OK, according to a huge number of other resources (and my personal understanding), there is a division (x/w, y/w, z/w), so the result should be (0, 0.6, 1). But that is not what I'm getting.
And even if that result makes some sense (one point is somewhere far away behind us), how does DirectX (or rather the GPU) really work in such cases (infinite points and negative w)?
It seems that I'm missing something very basic, but it also seems that nobody knows it.
[Added]: I want to note that a point with w < 0 is not a real input.
In real life, such points are the result of transformation by matrices, and according to the math used in the standard DirectX SDK and other places, they correspond to points behind the camera position.
And yes, such a point is clipped, but the question is rather about the strange triangle that contains it.
[Brief answer]: Clipping is essentially not just z/w checking and division (see details below).
Theoretically, NDC depth is divided into two distinct areas. The following diagram shows these areas for znear = 1, zfar = 3. The horizontal axis shows view-space z and the vertical axis shows the resulting NDC depth for a standard projective transform:
We can see that the part between view-space z of 1 and 3 (znear, zfar) gets mapped to NDC depth 0 to 1. This is the part that we are actually interested in.
However, the part where view-space z is negative also produces positive NDC depth. Those values, though, result from fold-overs: if you take a corner of your triangle and slowly decrease z (along with w), starting in the area between znear and zfar, you would observe the following:
we start between znear and zfar, everything is good
as soon as we pass znear, the point gets clipped because NDC depth < 0.
when we are at view-space z = 0, the point also has w = 0 and no valid projection.
as we decrease view-space z further, the point gets a valid projection again (starting at infinity) and comes back in with positive NDC depth.
However, this last part is the area behind the camera. This is why clipping is done in homogeneous coordinates, before the division, so that this part is also removed by the znear clip.
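A small numeric sketch of that curve, assuming a standard D3D-style projection that maps znear to 0 and zfar to 1 (the constants match the znear = 1, zfar = 3 example above):

#include <cstdio>
#include <initializer_list>

// NDC depth after a D3D-style projective transform with znear = n, zfar = f:
//   z_clip = z_view * f/(f-n) - f*n/(f-n),   w_clip = z_view
//   depth  = z_clip / w_clip
double ndcDepth(double zView, double n = 1.0, double f = 3.0) {
    double zClip = zView * f / (f - n) - f * n / (f - n);
    double wClip = zView;            // w receives the view-space z
    return zClip / wClip;            // undefined at zView == 0
}

int main() {
    for (double z : {3.0, 2.0, 1.0, 0.5, 0.1, -0.1, -1.0, -100.0})
        std::printf("zView = %7.2f -> NDC depth = %g\n", z, ndcDepth(z));
    // Output: depth runs 1..0 between zfar and znear, goes negative in front
    // of znear, has a pole at z = 0, and folds back to values > 1 for z < 0.
}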
Check the old D3D9 documentation for the formulas and some more illustrative explanations here.

How to calculate a point on a circle knowing the radius and center point

I have a complicated problem and it involves an understanding of Maths I'm not confident with.
Some slight context may help. I'm building a 3D train simulator for children and it will run in the browser using WebGL. I'm trying to create a network of points to place the track assets (see image) and provide reference for the train to move along.
To help explain my problem I have created a visual representation as I am a designer who can script and not really a programmer or a mathematician:
Basically, I have 3 shapes (Figs. A, B & C) and although they have width, they can be represented as a straight line for A and as curves for B & C. Curves B & C are derived (bend modified) from A, so all three are the same length (l), which is 112. The curves (B & C) each have a radius (r) of 285.5 and the angle (a) they were bent at was 22.5°.
Each shape (A, B & C) has a registration point (start point) illustrated by the centre of the green boxes attached to each of them.
What I am trying to do is create a network of "track" starting at 0, 0 (using standard Cartesian coordinates).
My problem is where to place the next element after a curve. If it were straight track then there is no problem as I can use the length as a constant offset along the y axis but that would be boring so I need to add curves.
Fig. D. demonstrates an example of a possible track layout but please understand that I am not looking for a static answer (based on where everything is positioned in the image), I need a formula that can be applied no matter how I configure the track.
Using Fig. D. I tried to work out where to place the second curved element after the first one. I used the formula for plotting a point of the circumference of a circle given its centre coordinates and radius (Fig. E.).
I had point 1 as that was simply a case of setting the length (y position) of the straight line. I could easily work out the centre of the circle because that's just the offset y position, the offset of the radius (r) (x position) and the angle (a) which is always 22.5° (which, incidentally, was converted to Radians as per formula requirements).
After passing the values through the formula I didn't get the correct result because the formula assumed I was working anti-clockwise starting at 3 o'clock so I had to deduct 180 from (a) and convert that to Radians to get the expected result.
That did work, and if I wanted to create a 180° track curve I could use the same centre point and simply deduct 22.5° from the angle each time. Great. But I want a more dynamic track layout like in Figs. D & E.
So, how would I go about working out point 5 in Fig. E., since that represents the centre point for that curve segment? I simply have no idea.
Also, as a bonus question, is this the correct way to be doing this or am I over-complicating things?
This problem is the only issue stopping me from building my game and, as you can appreciate, it is a bit of a biggie so I thank anyone for their contribution in advance.
As you build up the track, the position of the next piece of track to be placed needs to be relative to location and direction of the current end of the track.
I would store an (x,y) position and an angle a to indicate the current point (with x,y starting at 0, and a starting at pi/2 radians, which corresponds to straight up in the "anticlockwise from 3-o'clock" system).
Then construct
fx = cos(a);
fy = sin(a);
lx = -sin(a);
ly = cos(a);
which correspond to the x and y components of 'forward' and 'left' vectors relative to the direction we are currently facing. If we wanted to move our position one unit forward, we would increment (x,y) by (fx, fy).
In your case, the rule for placing a straight section of track is then:
x=x+112*fx
y=y+112*fy
The rule for placing a curve is slightly more complex. For a curve turning right, we need to move forward r*sin(22.5°), then side-step right r*(1-cos(22.5°)), then turn clockwise by 22.5°, where r = 285.206 is the curve's radius (the arc length 112 divided by the angle in radians). In code,
x=x+285.206*sin(22.5*pi/180)*fx // Move forward
y=y+285.206*sin(22.5*pi/180)*fy
x=x+285.206*(1-cos(22.5*pi/180))*(-lx) // Side-step right
y=y+285.206*(1-cos(22.5*pi/180))*(-ly)
a=a-22.5*pi/180 // Turn to face new direction
Turning left is just like turning right, but with a negative angle.
To place the subsequent pieces, just run this procedure again, calculating fx,fy, lx and ly with the now-updated value of a, and then incrementing x and y depending on what type of track piece is next.
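Putting the whole procedure together, a compact sketch (the structure and names are mine; the constants are those from the question):

#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;

struct Cursor {              // the current end of the track
    double x = 0, y = 0;
    double a = PI / 2;       // facing "up" in the anticlockwise-from-3-o'clock system
};

void placeStraight(Cursor& c, double len = 112.0) {
    c.x += len * std::cos(c.a);      // forward vector is (cos a, sin a)
    c.y += len * std::sin(c.a);
}

// turn = -1 for a right-hand curve, +1 for a left-hand curve
void placeCurve(Cursor& c, int turn, double len = 112.0, double angDeg = 22.5) {
    double ang = angDeg * PI / 180.0;
    double r = len / ang;                             // 285.206..., see above
    double fx = std::cos(c.a), fy = std::sin(c.a);    // forward
    double lx = -std::sin(c.a), ly = std::cos(c.a);   // left
    c.x += r * std::sin(ang) * fx + r * (1 - std::cos(ang)) * lx * turn;
    c.y += r * std::sin(ang) * fy + r * (1 - std::cos(ang)) * ly * turn;
    c.a += turn * ang;                                // turn to the new heading
}

int main() {
    Cursor c;
    placeStraight(c);
    placeCurve(c, -1);       // two right-hand curves, as in Fig. D
    placeCurve(c, -1);
    std::printf("next piece at (%g, %g), heading %g deg\n",
                c.x, c.y, c.a * 180.0 / PI);
}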
There is one other point that you might consider; in my experience, building tracks which form closed loops with these sort of pieces usually works if you stick to making 90° turns or rather symmetric layouts. However, it's quite easy to make tracks which don't quite join up, and it's not obvious to see how they should be modified to allow them to join. Something to bear in mind perhaps if your program allows children to design their own layouts.
Point 5 is the same distance from point 3 as point 2 is, but in the opposite direction.

Trajectory -math, c#

I am trying to map a trajectory path between two points. All I know is the two points in question and the distance between them. What I would like to be able to calculate is the velocity and angle necessary to hit the end point.
I would also like to be able to factor in some gravity and wind so that the path/trajectory is a little less 'perfect.' It's for a computer game.
Thanks F.
This entire physical situation can be described using the SUVAT equations of motion, since the acceleration at all times is constant.
The following explanation presumes understanding of basic algebra and vector maths. If you're not familiar with it, I strongly recommend you go read up on it before attempting to write the sort of game you have proposed. It also assumes you're dealing with 2D, though if you're dealing with 3D most of the same applies, since it's all in vector form - you just end up solving a cubic instead of quadratic, for which it may be best to use a numerical solver.
Physics
(Note: vectors represented in bold.)
Basically, you'll want to start by formulating your equation for displacement (in vector form):
r = ut + (at^2)/2
r is the displacement relative to the start position, u is the initial velocity, a is the acceleration (constant at all times). t is of course time.
a is dependent on the forces present in your system. In the general case of gravity and wind:
a = F_w/m - g j
where j is the unit vector in the y direction (pointing up). g is the acceleration due to gravity (9.81 m/s^2 on Earth). F_w is the force vector due to the wind (this term disappears when there is no wind) - we're assuming it is constant for the sake of simplicity. m is the mass of the projectile.
Then you can simply substitute the equation for a into the equation for r, and you're left with an equation of three variables (r, u, t). Next, expand your single vector equation for r into two scalar equations (for x and y displacement), and use substitution to eliminate t (maths might get a bit tricky here). You should be left with a single quadratic equation with only r and u as free variables.
Now, you want to solve the equation for r = [target position] - [start position]. If you pick a certain magnitude for the initial velocity u (i.e. speed), then you can write the x and y components of u as U cos(a) and U sin(a) respectively, where U is the initial speed and a the initial angle. This can be rearranged and, with a bit of trigonometry, you can finally solve for the angle a, giving you the launch velocity!
Algorithm
Most of the above description should be worked out on paper first. Then, it's simply a matter of writing a function to solve the quadratic formula and apply some inverse trigonometric functions to get the result.
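As a sketch of that final solve with gravity only (no wind), here is the standard closed form for the launch angle; the function names are mine:

#include <cmath>
#include <cstdio>

// Launch angles (radians) that hit a target displaced (x, y) from the start
// when fired at speed U under gravity g. This is the result of eliminating t
// as described above; writes up to two solutions (flat and lobbed arcs) into
// out and returns how many exist.
int launchAngles(double x, double y, double U, double g, double out[2]) {
    double disc = U*U*U*U - g * (g*x*x + 2.0*y*U*U);
    if (disc < 0.0) return 0;             // target out of range at this speed
    double s = std::sqrt(disc);
    out[0] = std::atan2(U*U - s, g*x);    // flat ("direct") trajectory
    out[1] = std::atan2(U*U + s, g*x);    // high ("lobbed") trajectory
    return (s == 0.0) ? 1 : 2;
}

int main() {
    double a[2];
    int n = launchAngles(100.0, 0.0, 40.0, 9.81, a);  // level target 100 m away
    for (int i = 0; i < n; ++i)
        std::printf("angle = %.1f degrees\n", a[i] * 180.0 / 3.14159265358979);
}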
P.S. Sorry for all the maths/physics in this post, but it was unavoidable! The OP seemed to be asking more about the physical rather than computational side of this, anyway, so that's what I've provided. Hopefully this is still useful to both the OP and other people.
The book:
Modern Exterior Ballistics: The Launch and Flight Dynamics of Symmetric Projectiles ISBN-13: 978-0764307201
is the modern authority on ballistics. For accuracy you'll need the corrections:
http://www.dexadine.com/mccoy.html
If you need something free and less authoritative, Dr. Mann's 1909 classic The bullet's flight from powder to target is available on books.google.com.
-kmarsh
PS Poor ballistics in games is a particular pet peeve of mine, especially the "shoots flat to infinity" ballistic model.
As people have mentioned, while figuring out the angles between the points is relatively easy, determining the way that wind and gravity will affect the shot is more difficult.
Wind and gravity are both accelerating forces, though they act somewhat differently.
Gravity is easier, since it has both a constant direction (down) and magnitude regardless of the object. (Assuming that you're not shooting things with ridiculously high velocities). To calculate how gravity will affect the velocity of your object, just take the time since you last updated the velocity of the object, multiply it by your gravitational factor, and add it to your current velocity vector.
As a simple example, let's think of an object that is moving with a velocity of (3, 4, 7) in the x, y, z directions, with z being parallel with the force of gravity. You decide that your gravity value is -.3. You are ready to calculate the new velocity. When you check, you discover that 10 time units have passed since your last calculation (whatever your time units are... perhaps ticks or something). You take your time units (10), multiply by your gravity (-.3), which gives you -3. You add that to your z, and your new velocity is (3, 4, 4). That's it. (This has been very simplified, but that should get you started.)
Wind is a bit different, if you want to do it right. If you want a simple and easy way, you can make it like gravity... a constant force in a particular direction. But a more realistic way is to have the force depend on your current velocity vector. Put simply: if you're moving exactly with the wind, it shouldn't impart any force onto you. In this case, you simply calculate the magnitude of the force from the difference between the wind's velocity and your own.
A simple example of this might be if you were moving at (3, 0, 0) and the wind was moving at (5, 0, 0), with a wind strength of .5. (You also have to multiply by the time elapsed; for the sake of this example, to keep it simple, we'll leave the time-elapsed factor at 1.) You calculate the difference in the vectors and multiply by your time difference (1), and discover that the difference is (2, 0, 0). You then multiply that vector by the wind strength, .5, and you discover that your velocity change is (1, 0, 0). Add that to your previous velocity, and you get (4, 0, 0)... so the wind has sped the object up slightly. If you waited another single time unit, you would have a difference of (1, 0, 0), multiplied by your strength of .5, so your final velocity would then be (4.5, 0, 0). As you can see, the wind provides less force as you get closer to it in velocity. This is kind of neat, but may be overly complex for game ballistics.
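A sketch of that per-tick update which reproduces both worked examples above (z is "up" as in the gravity example; the names are mine):

#include <cstdio>

struct Vec3 { double x, y, z; };

// One integration step of the scheme described above: gravity is a constant
// acceleration; wind pushes in proportion to (wind - velocity).
Vec3 stepVelocity(Vec3 v, Vec3 wind, double gravity, double windStrength,
                  double dt) {
    v.x += (wind.x - v.x) * windStrength * dt;
    v.y += (wind.y - v.y) * windStrength * dt;
    v.z += (wind.z - v.z) * windStrength * dt + gravity * dt;
    return v;
}

int main() {
    // The two worked examples from the text:
    Vec3 g = stepVelocity({3, 4, 7}, {0, 0, 0}, -0.3, 0.0, 10.0);
    std::printf("gravity only: (%g, %g, %g)\n", g.x, g.y, g.z);   // (3, 4, 4)

    Vec3 w = stepVelocity({3, 0, 0}, {5, 0, 0}, 0.0, 0.5, 1.0);
    std::printf("wind only:    (%g, %g, %g)\n", w.x, w.y, w.z);   // (4, 0, 0)
}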
The angle is easy, atan2(pB.x-pA.x,pB.y-pA.y). The velocity vector should be (pB-pA)*speed. And to add gravity/wind (gravity is just wind with a negative y component) add the (scaled) wind vector to your velocity at every simulation tick (you're basically adding it as acceleration).
Just a link, sorry: http://www.gamedev.net/reference/articles/article694.asp.
There are a lot of papers about game physics at gamedev. Have a look.
(BTW: wind only adds some velocity to an object. The hard part is gravity.)

Help me with Rigid Body Physics/Transformations

I want to instance a slider constraint, that allows a body to slide between point A and point B.
To instance the constraint, I assign the two bodies to constrain, in this case, one dynamic body constrained to the static world, think sliding door.
The third and fourth parameters are transformations, reference Frame A and reference Frame B.
To create and manipulate Transformations, the library supports Quaternions, Matrices and Euler angles.
The default slider constraint slides the body along the x-axis.
My question is:
How do I set up the two transformations, so that Body B slides along an axis given by its own origin and an additional point in space?
Naively I tried:
frameA.setOrigin(origin_of_point); //since the world itself has origin (0,0,0)
frameA.setRotation(Quaternion(directionToB, 0 rotation));
frameB.setOrigin(0,0,0); //axis goes through origin of object
frameB.setRotation(Quaternion(directionToPoint,0))
However, Quaternions don't seem to work as I expected. My mathematical knowledge of them is not good, so if someone could fill me in on why this doesn't work, I'd be grateful.
What happens is that the body slides along an axis orthogonal to the direction. When I vary the rotational part in the Quaternion constructor, the body is rotated around that sliding direction.
Edit:
The framework is bullet physics.
The two transformations are how the slider joint is attached at each body in respect to each body's local coordinate system.
Edit2
I could also set the transformations' rotational parts through an orthogonal basis, but then I'd have to reliably construct an orthogonal basis from a single vector. I hoped quaternions would prevent this.
Edit3
I'm having some limited success with the following procedure:
btTransform trafoA, trafoB;
trafoA.setIdentity();
trafoB.setIdentity();
vec3 bodyorigin(entA->getTrafo().col_t);
vec3 thisorigin(trafo.col_t);
vec3 dir=bodyorigin-thisorigin;
dir.Normalize();
mat4x4 dg=dgGrammSchmidt(dir);
mat4x4 dg2=dgGrammSchmidt(-dir);
btMatrix3x3 m(
dg.col_x.x, dg.col_y.x, dg.col_z.x,
dg.col_x.y, dg.col_y.y, dg.col_z.y,
dg.col_x.z, dg.col_y.z, dg.col_z.z);
btMatrix3x3 m2(
dg2.col_x.x, dg2.col_y.x, dg2.col_z.x,
dg2.col_x.y, dg2.col_y.y, dg2.col_z.y,
dg2.col_x.z, dg2.col_y.z, dg2.col_z.z);
trafoA.setBasis(m);
trafoB.setBasis(m2);
trafoA.setOrigin(btVector3(trafo.col_t.x,trafo.col_t.y,trafo.col_t.z));
btSliderConstraint* sc=new btSliderConstraint(*game.worldBody, *entA->getBody(), trafoA, trafoB, true);
However, the GramSchmidt always flips some axes of the trafoB matrix and the door appears upside down or right to left.
I was hoping for a more elegant way to solve this.
Edit4
I found a solution, but I'm not sure whether this will cause a singularity in the constraint solver if the top vector aligns with the sliding direction:
btTransform rbat = rba->getCenterOfMassTransform();
// First column of the body's basis, i.e. its local x axis in world space
btVector3 up(rbat.getBasis()[0][0], rbat.getBasis()[1][0], rbat.getBasis()[2][0]);
// Unit vector from this entity's origin towards body rbb
btVector3 direction = (rbb->getWorldTransform().getOrigin() - btVector3(trafo.col_t.x, trafo.col_t.y, trafo.col_t.z)).normalize();
// Shortest rotation taking 'up' onto 'direction'
btScalar angle = acos(up.dot(direction));
btVector3 axis = up.cross(direction);
trafoA.setRotation(btQuaternion(axis, angle));
trafoB.setRotation(btQuaternion(axis, angle));
trafoA.setOrigin(btVector3(trafo.col_t.x,trafo.col_t.y,trafo.col_t.z));
Is it possible you're making this way too complicated? It sounds like a simple parametric translation (x = p*A+(1-p)*B) would do it. The whole rotation / orientation thing is a red herring if your sliding-door analogy is accurate.
If, on the other hand, you're trying to constrain to an interpolation between two orientations, you'll need to set additional limits 'cause there is no unique solution in the general case.
-- MarkusQ
It would help if you could say what framework or API you're using, or copy and paste the documentation for the function you're calling. Without that kind of detail I can only guess:
Background: a quaternion represents a 3-dimensional rotation combined with a scale. (Usually you don't want the complications involved in managing the scale, so you work with unit quaternions representing rotations only.) Matrices and Euler angles are two alternative ways of representing rotations.
A frame of reference is a position plus a rotation. Think of an object placed at a position in space and then rotated to face in a particular direction.
So frame A probably needs to be the initial position and rotation of the object (when the slider is at one end), and frame B the final position and rotation of the object (when the slider is at the other end). In particular, the two rotations probably ought to be the same, since you want the object to slide rigidly.
But as I say, this is just a guess.
Update: is this Bullet Physics? It doesn't seem to have much in the way of documentation, does it?
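If it is Bullet, here is one guess at the missing piece: the slider slides along local +x, so each frame's rotation should take +x onto your desired direction. A sketch of such a shortest-arc rotation (the helper name is mine; note it leaves the roll about the axis unconstrained, which is related to the singularity worry in Edit4):

#include <LinearMath/btVector3.h>
#include <LinearMath/btQuaternion.h>

// Rotation taking the slider's default axis (local +x) onto a unit direction.
btQuaternion xAxisTo(const btVector3& dir) {
    const btVector3 xAxis(1, 0, 0);
    btVector3 axis = xAxis.cross(dir);
    btScalar len = axis.length();
    if (len < SIMD_EPSILON)                      // parallel or anti-parallel
        return dir.x() > 0 ? btQuaternion::getIdentity()
                           : btQuaternion(btVector3(0, 0, 1), SIMD_PI);
    return btQuaternion(axis / len,              // normalized rotation axis
                        btAcos(xAxis.dot(dir)));
}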
Perhaps you are looking for slerp?
Slerp is shorthand for spherical linear interpolation, introduced by Ken Shoemake in the context of quaternion interpolation for the purpose of animating 3D rotation. It refers to constant speed motion along a unit radius great circle arc, given the ends and an interpolation parameter between 0 and 1.
At the end of the day, you still need the traditional rotational matrix to get things rotated.
Edit: So, I am still guessing, but I assume that the framework takes care of the slerping and you want the two transformations that describe the begin state and the end state?
You can stack affine transformations on top of each other. Except you have to think backwards. For example, let's say the sliding door is placed at (1, 1, 1) facing east in the begin state, and you want to slide it north by (0, 1, 0). The door would end up at (1, 1, 1) + (0, 1, 0).
For the begin state, rotate the door towards the east. Then, on top of that, apply a translation matrix to move the door to (1, 1, 1). For the end state, again rotate the door towards the east, then move it to (1, 1, 1) with the same translation matrix. Finally, apply the translation (0, 1, 0).
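A sketch of those two states in Bullet types, treating "facing east" as a 90-degree yaw about the y axis (that mapping is my assumption):

#include <cstdio>
#include <LinearMath/btTransform.h>

int main() {
    // Begin state: rotate the door to face east, then place it at (1, 1, 1).
    btQuaternion faceEast(btVector3(0, 1, 0), SIMD_HALF_PI);
    btTransform begin(faceEast, btVector3(1, 1, 1));

    // End state: the same orientation, shifted a further (0, 1, 0).
    btTransform slide(btQuaternion::getIdentity(), btVector3(0, 1, 0));
    btTransform end = slide * begin;   // applied on top of, i.e. after, 'begin'

    btVector3 p = end.getOrigin();
    std::printf("end origin: (%g, %g, %g)\n", p.x(), p.y(), p.z());  // (1, 2, 1)
}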
