I'm working on adding camera controls, via memory editing, to a program that doesn't have them, but I can only change the global angular velocity on each axis. How would I go about taking a desired local angular velocity, for example pitching the camera up at a rate of 1 radian per second, and getting the global angular velocities that produce it?
You'll need to get a local-to-global matrix of your camera somehow.
Then you can create a quaternion from your local axes, convert it to global space by multiplying it with that matrix, and convert it back to a Euler representation.
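Since an angular velocity is itself a vector, the core of that conversion is just rotating the local angular velocity vector into the global frame. A minimal sketch, assuming you can read the camera's 3x3 local-to-global rotation matrix out of memory (all names and types here are illustrative):

#include <array>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<std::array<float, 3>, 3>;

// wGlobal = R * wLocal, where R is the camera's local-to-global rotation matrix.
Vec3 localToGlobal(const Mat3& R, const Vec3& wLocal) {
    Vec3 wGlobal{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            wGlobal[i] += R[i][j] * wLocal[j];
    return wGlobal;
}

// Example: pitch up at 1 rad/s about the camera's local X (pitch) axis:
// Vec3 wGlobal = localToGlobal(R, {1.0f, 0.0f, 0.0f});

The resulting components are the global angular velocities to write back into the program's memory.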
I'm working on a VR game in Unreal Engine 4 using Blueprints.
I want to calculate the (yaw) angle the user needs to turn his/her gun (whose direction is determined via the position of the motion controllers) in order to be pointing towards the target.
I figure this might be the way to do it:
Subtract the location of the target from the location of the gun
Get the yaw component of that as a vector pointing from the gun as origin
Subtract the current yaw of the gun direction from that yaw component to get the yaw angle the user needs to turn to get to the target
Except I'm not quite sure how to execute that. I've been experimenting (as seen in the screenshot below), but not doing the correct operations. Any thoughts?
Thanks!
A more elegant and robust solution is to use the gun actor's world transform to calculate the relative rotation to the object:
Get the gun's world transform. Its rotation should point in the forward vector direction; you could build a transform from the gun's location and forward vector, but the component transform will likely work as-is.
Use the InverseTransformLocation operation on this transform, with the target's location as the other parameter. This produces a vector that is the target's location in the gun's space.
Get the rotation of this vector with the RotationFromXVector operation.
This rotator contains the correct yaw, but also pitch. And it will also work when your objects are rotated in space arbitrarily, or your objects become children of even more actors.
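If this ever moves from Blueprints to C++, a rough equivalent of those nodes might look like the sketch below (GunComponent, TargetLocation, and the variable names are placeholders for your own setup):

#include "Kismet/KismetMathLibrary.h"

// Target location expressed in the gun's local space
// (the InverseTransformLocation node maps to InverseTransformPosition in C++):
FTransform GunTransform = GunComponent->GetComponentTransform();
FVector LocalTarget = GunTransform.InverseTransformPosition(TargetLocation);

// Rotation whose X (forward) axis points at the target (the RotationFromXVector node):
FRotator Relative = UKismetMathLibrary::MakeRotFromX(LocalTarget);

float YawToTurn = Relative.Yaw;     // yaw the user still needs to turn
float PitchToTurn = Relative.Pitch; // the pitch offset comes along for free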
This is how I do it:
'Find look at rotation' is a function from the 'Kismet Math Library' (Unreal's math library). It finds the world rotation needed for an object at the Start location to point at the Target location.
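The same node is exposed to C++ on UKismetMathLibrary. A brief sketch (GunLocation, GunRotation, and TargetLocation are placeholders):

#include "Kismet/KismetMathLibrary.h"

// World rotation that would point something at GunLocation toward TargetLocation:
FRotator LookAt = UKismetMathLibrary::FindLookAtRotation(GunLocation, TargetLocation);

// Difference from the gun's current world rotation is the turn still needed:
FRotator Delta = UKismetMathLibrary::NormalizedDeltaRotator(LookAt, GunRotation);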
I have a planar link which is rotating about the z axis, and the tip of that link moves in a straight-line path. This means the value of theta is continuously increasing and decreasing with respect to a point. I have the angular velocity vector for every instant of time.
How can I get the unit vector from the angular velocity vector?
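For reference, the unit vector (the direction of the rotation axis) is just the angular velocity vector divided by its magnitude, undefined when the magnitude is zero:

$$\hat{\omega} = \frac{\vec{\omega}}{\lVert\vec{\omega}\rVert} = \frac{(\omega_x, \omega_y, \omega_z)}{\sqrt{\omega_x^2 + \omega_y^2 + \omega_z^2}}$$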
I'm working on an FPS with the jPCT library. One key thing that all FPS's need is to prevent the players from looking behind them by pulling the mouse too far up/down. Currently, I'm using some example code found on the jPCT's website that keeps track of how many angles have been added to the camera, but I'm worried about rounding issues with all the angles in radians. I can get a rotation Matrix from jPCT's camera, and I know that it contains the information to figure out how "high" up the player is looking, but I have no clue how to get it out of the matrix.
What would I look for in the rotation matrix that will tell me if the player is looking more "up" than straight up or more "down" than straight down?
If you're updating your matrix each time the player moves you're going to run into trouble due to floating point errors and your rotation matrix will turn into a skew matrix. One solution is to orthonormalise the matrix every now and then but usually it's better to simply keep the player's pitch, yaw (and roll if you need it) as floats and build your matrix from those angles when the player changes orientation, looks up/down etc. If you use optimised code for each angle (or a single method for converting Euler angles to a matrix) it's not slower than what you seem to be doing right now. You won't run into Gimbal lock issues as the camera orientation will be restricted anyway.
As for your specific question I think you'd need to calculate the angle between matrix Z axis (the third row or column, depends how your matrices are oriented) and an unrotated vector pointing down your Z axis.
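A minimal sketch of the keep-the-angles-as-floats approach suggested above (generic row-major math, not jPCT's actual API; the axis conventions here are assumptions):

#include <algorithm>
#include <cmath>

struct Camera { float pitch = 0.0f, yaw = 0.0f; };

// Clamp pitch so the player can never look past straight up or straight down.
void addPitch(Camera& cam, float delta) {
    const float limit = 1.5707963f; // pi/2
    cam.pitch = std::clamp(cam.pitch + delta, -limit, limit);
}

// Rebuild the rotation matrix from the angles whenever orientation changes,
// so floating point drift never accumulates in the matrix itself.
void buildRotation(const Camera& cam, float m[3][3]) {
    float cp = std::cos(cam.pitch), sp = std::sin(cam.pitch);
    float cy = std::cos(cam.yaw),   sy = std::sin(cam.yaw);
    // R = yaw about Y, then pitch about X (column-vector convention).
    m[0][0] = cy;  m[0][1] = sy * sp; m[0][2] = sy * cp;
    m[1][0] = 0;   m[1][1] = cp;      m[1][2] = -sp;
    m[2][0] = -sy; m[2][1] = cy * sp; m[2][2] = cy * cp;
}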
I have written a simple AR program in XNA and I am now trying to find the relative transformation between my 2 markers.
I have located my markers relative to my camera and have extracted translation and rotation matrices for the markers.
What I am trying to do is to find out the relative translation to get to marker 2 from marker 1. For instance if marker 1 and marker 2 were lying on the same Z plane the Z translation component would be 0mm.
The image below is the application working for 2 positions on the same plane:
I assumed that by simply multiplying the matrix of the 2nd marker by the inverse of the 1st marker I can get the translation. However I am getting completely wrong results.
The code I am running is as follows:
// Inside a loop that runs once per marker:
posit.EstimatePose(points, out matrix, out trans);

float yaw, pitch, roll;
matrix.ExtractYawPitchRoll(out yaw, out pitch, out roll);

Matrix rotation = Matrix.CreateFromYawPitchRoll(-yaw, -pitch, roll);
Matrix translation = Matrix.CreateTranslation(new Vector3(trans.X, trans.Y, -trans.Z));
Matrix complete = rotation * translation;

List<Matrix> all = new List<Matrix>();
all.Add(rotation);
all.Add(translation);
all.Add(complete);
matrixes.Add(all);
}
Matrix res = Matrix.Invert(matrixes[0][2]) * matrixes[1][2];
Vector3 scaleR;
Vector3 translationR;
Quaternion rotationR;
res.Decompose(out scaleR, out rotationR, out translationR);
The result:
TranslationR : {X:-103.4285 Y:-104.1754 Z:104.9243}
I have overlaid 3D axes onto the image as shown above using XNA so I assume the rotation and translation relative to the camera has been worked out correctly.
It seems like I am doing something wrong along the way to calculate the translation. I would definitely not expect the Z to equal 104mm. I was expecting something along the lines of:
{X:0 Y:150 Z:0}
I've done something similar to this before; however, it was using 3x3 matrices in a 2D environment (with X,Y translate, rotate, and skew). Are the matrices in question 4x4?
Yes, you are right: to find the matrix that transforms object A with matrix M1 to object B with matrix M2, you can compute M1' * M2 (where M1' is the inverse of M1).
The problem you may be running into is that a matrix is composed of rotation, translation, scale, and other transformations (e.g. skew/perspective). Decomposing the matrix into its component parts often yields a non-unique answer. It's like quadratic equations: there is more than one solution.
Another issue may be that Matrix operations are not commutative and you are simply performing them the wrong way around. If you perform M1' * M2 and M2 * M1' you will get different results.
Please give it a try (switching the matrix order). Also, I'd look up the matrix decomposition function you used: what values of rotation and scaling are you getting at the output? Are your objects rotated or scaled? If not, then you should get zero. Note that it is possible for more than one combination of rotation + translation to produce the same end result, and the decomposition function doesn't know which one you are looking for.
To extract just the translation component, you can use the methods from this page:
$v_t = (M_{14}, M_{24}, M_{34})^T$
What do you get when you try that?
What I am trying to do is to find out the relative translation to get to marker 2 from marker 1.
Vector3 relativeTranslation = marker2Matrix.Translation - marker1Matrix.Translation;
My answer seems overly simplistic so maybe I'm not grasping your question completely, but it will create a vector that when added to Marker1's location (translation), will get you to Marker 2's location.
I've been trying to build a filter that can successfully combine compass, accelerometer, and gyroscope data to produce a smooth augmented reality experience. After reading this post along with lots of discussions, I finally found a good algorithm to correct my sensor data. Most examples I've read show how to correct accelerometers with gyroscopes, not how to correct compass + accelerometer data with a gyroscope. This is the algorithm I've settled on, which works great except that I run into gimbal lock if I try to look at the scene when I'm not facing north. The algorithm is essentially the Balance Filter, only implemented in 3D instead of 2D:
Initialization Step:
Initialize a world rotation matrix using the (noisy) accelerometer and compass sensor data (Android already provides this)
Update Steps:
Integrate the gyroscope reading (time_delta * reading) for each axis (x, y, z)
Rotate the world rotation matrix using the Euler angles supplied by the integration
Find the Quaternion from the newly rotated matrix
Find the rotation matrix from the unfiltered accelerometer + compass data (using the OS provided function, I think it uses angle/axis calculation)
Get the quaternion from the matrix generated in the previous step.
Slerp between the quaternion generated from the gyroscope (steps 2-3) and the one generated from the accelerometer + compass data (steps 4-5), using a coefficient based on some experimental magic
Convert back to a matrix and use that to draw the scene (see the sketch after this list)
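A hedged sketch of one pass of that update loop, in C-style code; all helper names here are hypothetical placeholders (they are not Android APIs), and their implementations are omitted:

void quatFromEuler(float rx, float ry, float rz, float out[4]);      // Euler angles -> quaternion
void quatFromMatrix(const float m[9], float out[4]);                 // rotation matrix -> quaternion
void quatMultiply(const float a[4], const float b[4], float out[4]); // Hamilton product
void quatSlerp(const float a[4], const float b[4], float t, float out[4]);

void filterUpdate(float orientation[4], const float gyro[3], float dt,
                  const float accMagMatrix[9], float alpha) {
    float dq[4], rotated[4], qAccMag[4], blended[4];
    // Steps 1-3: integrate the gyro rates over dt and rotate the current orientation.
    quatFromEuler(gyro[0] * dt, gyro[1] * dt, gyro[2] * dt, dq);
    quatMultiply(orientation, dq, rotated);
    // Steps 4-5: orientation implied by the noisy accelerometer + compass matrix.
    quatFromMatrix(accMagMatrix, qAccMag);
    // Step 6: blend; alpha near 0 trusts the gyro, alpha near 1 trusts acc/mag.
    quatSlerp(rotated, qAccMag, alpha, blended);
    for (int i = 0; i < 4; ++i) orientation[i] = blended[i];
    // Step 7: convert orientation back to a matrix to draw the scene.
}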
My problem is that when I'm facing North and then try to look south, the whole thing blows up and it appears to be gimbal lock. After a few gimbal locks, the whole filter is in an undefined state. Searching around I hear everybody saying "Just use Quaternions" but I'm afraid it's not that simple (at least not to me) and I know there's something I'm just missing. Any help would be greatly appreciated.
The biggest reason to use quaternions is to avoid the singularity problem with Euler angles. You can directly rotate a quaternion with gyro data.
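A minimal self-contained sketch of that idea, assuming gyro rates in rad/s and a (w, x, y, z) quaternion layout:

#include <cmath>

// Rotate orientation quaternion q directly by gyro rates over dt seconds,
// with no Euler angles anywhere, so no gimbal lock.
void gyroRotate(float q[4], float wx, float wy, float wz, float dt) {
    float omega = std::sqrt(wx * wx + wy * wy + wz * wz);
    if (omega < 1e-9f) return; // no measurable rotation this step
    // Delta rotation: axis = gyro vector, angle = |w| * dt (half-angle form).
    float half = 0.5f * omega * dt;
    float s = std::sin(half) / omega;
    float d[4] = {std::cos(half), wx * s, wy * s, wz * s};
    // q = q * d (Hamilton product): applies the delta in the sensor's local frame.
    float r[4] = {
        q[0]*d[0] - q[1]*d[1] - q[2]*d[2] - q[3]*d[3],
        q[0]*d[1] + q[1]*d[0] + q[2]*d[3] - q[3]*d[2],
        q[0]*d[2] - q[1]*d[3] + q[2]*d[0] + q[3]*d[1],
        q[0]*d[3] + q[1]*d[2] - q[2]*d[1] + q[3]*d[0],
    };
    for (int i = 0; i < 4; ++i) q[i] = r[i];
    // Renormalize q every so often to keep floating point drift in check.
}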
Many apologies if this information comes late or isn't useful to you specifically, but it may be useful to others; I found it after some research:
a. Using a Kalman filter (linear or nonlinear), you do the following:
Use the gyro to integrate the delta angle, while the accelerometers tell you the outer limit.
b. Euler rates are different from gyro rates of angle change, so you'll need a quaternion or Euler representation:
Quaternions are non-trivial, but the two main steps are:
1. For roll, pitch, and yaw you get three quaternions, each of the form cos(θ/2) + v·sin(θ/2), where the cosine term is the scalar part and v is the axis (vector) part (or, when coding, just more variables).
Then simply multiply all three quaternions to get a delta quaternion,
i.e. quatDelta[0] = c1*c2*c3 - s1*s2*s3;
quatDelta[1] = c1*c2*s3 + s1*s2*c3;
quatDelta[2] = s1*c2*c3 + c1*s2*s3;
quatDelta[3] = c1*s2*c3 - s1*c2*s3;
where c1, c2, c3 are the cosines and s1, s2, s3 the sines of half of the pre-integrated gyro angles for roll, pitch, and yaw.
2. Then just multiply it by the old quaternion you had:
newQuat[0]=(quaternion[0]*quatDelta[0] - quaternion[1]*quatDelta[1] - quaternion[2]*quatDelta[2] - quaternion[3]*quatDelta[3]);
newQuat[1]=(quaternion[0]*quatDelta[1] + quaternion[1]*quatDelta[0] + quaternion[2]*quatDelta[3] - quaternion[3]*quatDelta[2]);
newQuat[2]=(quaternion[0]*quatDelta[2] - quaternion[1]*quatDelta[3] + quaternion[2]*quatDelta[0] + quaternion[3]*quatDelta[1]);
newQuat[3]=(quaternion[0]*quatDelta[3] + quaternion[1]*quatDelta[2] - quaternion[2]*quatDelta[1] + quaternion[3]*quatDelta[0]);
As the code loops, the quaternion gets updated each pass, so only quaternion needs to persist between iterations (a global variable); the rest are locals.
3. Lastly, if you want Euler angles back from the quaternion, do the following:
euler[2] = atan2(2.0*(quaternion[0]*quaternion[1] + quaternion[2]*quaternion[3]), 1 - 2.0*(quaternion[1]*quaternion[1] + quaternion[2]*quaternion[2]));
euler[1] = safe_asin(2.0*(quaternion[0]*quaternion[2] - quaternion[3]*quaternion[1]));
euler[0] = atan2(2.0*(quaternion[0]*quaternion[3] + quaternion[1]*quaternion[2]), 1 - 2.0*(quaternion[2]*quaternion[2] + quaternion[3]*quaternion[3]));
euler[1] is pitch, and so on.
I just wanted to outline the general steps of a quaternion implementation. There may be some minor errors, but I tried this myself and it works. Please note that when converting back to Euler angles you will still get singularities, also called "gimbal lock".
An important note here is that this is not my work; I found it on the internet and wanted to thank whoever wrote this priceless code. Cheers!