The gyroscope of a device returns values in the following range:
Alpha: 0 - 360 around the Z-axis
Beta: -180 - 180 around the X-axis
Gamma: -90 - 90 around the Y-axis
When I rotate the device around the Y-axis, the value 'flips' at a certain point from -90 to 90.
I know that this is due to the fact that the gyroscope functions like a gimbal, where Alpha is the outer ring, Beta is the middle ring and Gamma is the inner ring.
My question is: how do I measure the rotation JUST around the Y-axis, without the 'flipping' effect?
Is there a mathematical way to process the alpha/beta/gamma values and get more 'useful' values for this case, so that it is easier to get the amount of rotation around the X, Y or Z axis?
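One common way around the flip is not to read gamma directly, but to compose the full orientation and re-extract the Y rotation from it with a two-argument arctangent, which covers the full -180 to 180 range. Below is a minimal Python sketch, assuming the Z-X'-Y'' intrinsic rotation order used by the W3C DeviceOrientation spec and degree inputs; the function names are illustrative and sign conventions may need adjusting for a particular device.

    import math

    def rotation_matrix(alpha, beta, gamma):
        """Full orientation from alpha/beta/gamma in degrees, assuming the
        intrinsic Z-X'-Y'' order of the W3C DeviceOrientation spec
        (R = Rz(alpha) * Rx(beta) * Ry(gamma)); verify this order for your device."""
        a, b, g = (math.radians(v) for v in (alpha, beta, gamma))
        ca, sa, cb, sb = math.cos(a), math.sin(a), math.cos(b), math.sin(b)
        cg, sg = math.cos(g), math.sin(g)
        return [
            [ca * cg - sa * sb * sg, -sa * cb, ca * sg + sa * sb * cg],
            [sa * cg + ca * sb * sg,  ca * cb, sa * sg - ca * sb * cg],
            [-cb * sg,                sb,      cb * cg],
        ]

    def y_angle(alpha, beta, gamma):
        """Continuous rotation about the Y axis in (-180, 180], read from the
        world-frame direction of the device Z axis, so it does not flip at 90."""
        R = rotation_matrix(alpha, beta, gamma)
        return math.degrees(math.atan2(R[0][2], R[2][2]))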
I have asked a similar question before, but I have since made progress (and also didn't tag that question correctly), so I would like some help with the maths behind the problem if possible.
I have a 3D sphere with points evenly spaced on its surface of which I know the coordinates. From these coordinates I am trying to define the orientation of some spikes that are coming out of the surface of my sphere along the vector between the centre of the sphere and the point at which the coordinates lie.
The idea is that these Euler angles will later help me align the spikes so they are all in roughly the same orientation when I box out each spike from an image.
Since the coordinates on the sphere are evenly spaced, I can just take the average x, y and z coordinates to give me the centre, and I can then draw a vector from the centre to each coordinate in turn.
The Euler angles I need to calculate in this case are initially around the z axis, then around the new y axis, and finally again around the new z axis.
My centre point is currently being defined as the average coordinate of all my coordinates. This works as the coordinates are evenly spaced around the sphere.
I then use the equation
cos(theta) = (a · b) / (|a| * |b|)
where a and b are the two vectors, applied in the x-y plane. One of my vectors is the x and y components of the vector I am interested in, while the other is the y axis (0, 1). This tells me the rotation around the z axis, with the y axis as 0. I also calculate the gradient of the line in this 2D plane to determine whether I am working between 0 and +180 or 0 and -180.
I then rotate the x axis through the angle just calculated to give me x', using a simple 2D rotation matrix.
I then calculate the angle in the same way as above, but this time around the y axis, using x' and z' as my second vector (where z' = z).
Finally I repeat the same steps to calculate the new z'' and x'' and do my final calculation.
This gives me three angles, but when I display them in MATLAB using the quiver3 command I do not get the correct orientations with this method. I believe I just do not understand how to calculate Euler angles correctly and am messing something up along the way.
I was hoping someone more knowledgeable than me could take a glance over my planned method of euler angle calculation and spot any flaws.
Thanks.
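For reference, here is a minimal Python/NumPy sketch (hypothetical function name) of reading ZYZ Euler angles directly off the unit vector from the centre to a point, under the common convention R = Rz(phi) * Ry(theta) * Rz(psi). A single direction does not constrain the final rotation about the spike's own axis, so it is returned as zero.

    import numpy as np

    def zyz_from_direction(point, centre):
        """ZYZ Euler angles (radians) that carry the +z axis onto the unit
        vector from `centre` to `point`, convention R = Rz(phi) @ Ry(theta) @ Rz(psi).
        The final spin about the spike's own axis is unconstrained, so psi = 0."""
        v = np.asarray(point, dtype=float) - np.asarray(centre, dtype=float)
        v /= np.linalg.norm(v)
        theta = np.arccos(np.clip(v[2], -1.0, 1.0))  # tilt away from +z (about the new y)
        phi = np.arctan2(v[1], v[0])                 # azimuth (first rotation about z)
        return phi, theta, 0.0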
I have an object that rotates around the y axis in a 2D image. I want to know the angle of rotation around the y axis, given that I already have the initial point (X, Y) and the point (X', Y) after rotation.
I have tried to follow the 3D rotation equations (https://www.siggraph.org/education/materials/HyperGraph/modeling/mod_tran/3drota.htm) to evaluate the rotation angle regardless of the direction of rotation, but I do not know the Z value in the 2D image that the equations require.
I have figured out that I can't know the accurate rotation angles, because I don't have full information about the location of the points before and after rotation; I only have a projection of the points (x, y) in a 2D image (plane), as "Nico Schertler" said in the comments. So I found an approximate solution: map the 2D object to a similar 3D model of the same object and simulate the same motion on the 3D model to get approximate information about the angles. In my case I want to know the rotation angles of a human head (head pose), so I mapped some 2D head feature points to a 3D model and, after deep diving into the mathematics, obtained an approximate rotation matrix as shown here (http://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/)
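For readers following the linked article: the core of that approach is OpenCV's solvePnP, which recovers the rotation from a handful of 2D-3D landmark correspondences. A rough sketch is below; the 2D landmark values and camera intrinsics are illustrative placeholders, not data from this question.

    import cv2
    import numpy as np

    # Generic 3D head-model points (nose tip, chin, eye corners, mouth corners)
    model_points = np.array([
        (0.0, 0.0, 0.0),
        (0.0, -330.0, -65.0),
        (-225.0, 170.0, -135.0),
        (225.0, 170.0, -135.0),
        (-150.0, -150.0, -125.0),
        (150.0, -150.0, -125.0),
    ], dtype=np.float64)

    # Matching 2D landmarks detected in the image (placeholder values)
    image_points = np.array([
        (359, 391), (399, 561), (337, 297), (513, 301), (345, 465), (453, 469),
    ], dtype=np.float64)

    w, h = 640, 480                                    # image size (placeholder)
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)  # focal length ~ image width
    dist_coeffs = np.zeros((4, 1))                     # assume no lens distortion

    ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)                         # approximate 3x3 head rotation matrix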
I'm trying to work out whether a point is inside the elliptical cone formed between a point and a circle in 3D space. The cone's cross-sections are ellipses because the apex does not lie on the perpendicular through the centre of the circle. See diagram below:
So I know:
The position of the point forming the apex of the cone: x
The location of the centre of the circle: c
The radius of the circle: r
The locations of various points I want to determine if they are inside the cone: y, z
Here is a top view of the same diagram:
I do not care about the base of the cone - I want points contained within the cone stretched effectively to infinity.
I've found formulae for working out whether a point is within an elliptical cone given the major/minor axes, but I'm having difficulty working out how to do it when the cone is formed from a circle at an angle.
Thanks for any help!
With a true conic you could probably determine the distance from the axis and the semi-major and semi-minor axes and compute it directly.
An arbitrary shape is harder.
If the cone's axis runs roughly in the Z direction and you know a point in XYZ, then you should be able to draw the ellipse cross-section at that particular Z level. Maybe draw it with 360 segments.
Once you have your point and your ellipse, you can test each ellipse segment to see if there is an intersection in X and Y.
Imagine a circle at (0,0,0) with radius 1 and a test point at (0,0,0): there are 2 Y intersections at +/- 90 degrees and 2 X intersections at 0 and 180.
If the point is at (2,0,0) you still have 2 intersections in X, but they are both to the left, and you want one to the left and one to the right.
Zero intersections mean that you are outside the hoop.
Repeat across the 360 segments and determine how to handle points "on a line" and how close "on" is.
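A closed-form equivalent of this test, sketched in Python/NumPy, avoids building the ellipse at all: shoot the ray from the apex through the test point, find where it crosses the circle's plane, and check whether that crossing lands inside the circle. Here `normal` is the unit normal of the circle's plane, which is assumed to be known from the geometry.

    import numpy as np

    def inside_cone(p, apex, centre, radius, normal):
        """True if p lies inside the infinite one-sided cone with the given apex
        whose cross-section is the circle (centre, radius) in the plane with
        unit normal `normal`."""
        p, apex, centre, normal = (np.asarray(v, dtype=float)
                                   for v in (p, apex, centre, normal))
        d = p - apex
        num = np.dot(normal, centre - apex)   # apex-to-plane distance along the normal
        den = np.dot(normal, d)
        if num * den <= 0:                    # behind the apex, or parallel to the plane
            return False
        q = apex + (num / den) * d            # where the ray crosses the circle's plane
        return np.linalg.norm(q - centre) <= radius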
I have a huge set of normalized position vectors. The vector set is recorded by a special measurement device while the device is rotated around two axes. Every position vector is a combination of gravitational values for X, Y, and Z at a defined time. My assignment is to get the rotational speed of both axes.
The coordinate system of the measurement device is rotated by about 45° around the z-axis in relation to the coordinate system of the machine.
The z-axis of the measurement device is parallel to the z-axis of the machine.
I have tried to convert the Cartesian coordinates to spherical coordinates. For this I used the Qt framework and MATLAB. As a result I got 2 angles and a radius. In my opinion the radius is not important. But the 2 angles don't fit my problem, because I need the rotational speed of the machine around the machine Z-axis and X-axis. At this point it is important to know that the rotational speed is so slow that the gravity vector always points into the ground with 1 g. The X, Y, and Z values of the measurement device represent its orientation relative to the gravity vector. For example, if the Z-axis points to the ground the value is nearly 1, and if the axis is parallel to the ground (i.e. orthogonal to the gravity vector) the value is nearly zero.
If the machine only rotates around the Z-axis I can get the period of one rotation very easily. The plot of the Y-values and X-values against time is a sine or cosine, so I can get the period by searching for the zero crossings, the maxima or the minima.
rotation around the z-axis
But this solution only fits the one-axis problem. If the machine additionally rotates around the X-axis, the measured X, Y, and Z values are combinations of both rotations. I have no idea how to fix my problem.
rotation around the machine's z-axis and x-axis: the rotation starts after 55 s!
Another idea is inverse kinematics, but for this I need the dimensions of the machine and the exact point where the measurement device is mounted.
rotation around 1 axis
Dataset rotation around 1 axis
rotation around 2 axes
Dataset rotation around 2 axes
How can I start, or how should I proceed?
I have tried to visualize the rotational process with this picture.
I tried to put this in as a comment, but there is a length limit there. So, some clarifying questions / intermediate conclusions:
Thanks for the figures! From your 4th figure above (the one showing the 2-axis x-y-z sine waves) and from your diagram of the machine, it looks like you have three coordinate systems. The first is the earth frame, call it x1,y1,z1, as you show it in your "machine picture" diagram. The second frame, call it x2,y2,z2, rotates about the x1 and x2 axes (they remain parallel). The third frame, x3,y3,z3, is the one that rotates about the z2 (= z3) axis. Your accelerometers are fixed in the x3,y3,z3 coordinate frame.
Your single-axis data set has z3=z2 aligned with earth z1, and spins about z, so that x3 and y3 spin around sampling gravity in quadrature sine waves.
In your second data set, the outer gimbal x1=x2 rotates at a constant rate, giving rise to the perfect sine wave on the z accelerometer, while the inner z3=z2 gimbal also spins perhaps at a constant rate, but now the accelerometers on x3 and y3 have their amplitudes modulated by multiplying by the cosine of the x1/x2 rotation angle.
Does all this sound right?
One other thing we always need to know when estimating velocity from position measurements is some kind of model or concept of how your system changes in time: Can we assume some maximum angular acceleration? Or can we assume that once the rotation(s) come up to speed, that they are constant? That will become especially important in trying to stitch the z2/z3 gimbal angle over the times when the x1/x2 angle passes through +/-pi/2 radians, where the z2/z3 angle becomes momentarily unobservable because the x and y accelerometers are orthogonal to the gravity vector and will just show noise. It will also help us to decide if the x1/x2 gimbal went up to pi/2 and back down again, or kept turning in the same direction to > pi/2, because both motions look the same on the z accelerometer, and the z2/z3 angle is unobservable there.
Simple answer:
Use two-argument arctangent.
The roll angle is atan2(ay, ax).
The pitch angle is atan2(az, sqrt(ax*ax + ay*ay)).
Then time-difference these to get angle rates.
This oversimplified solution has a number of problems, but it's a good place to start.
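In code, this oversimplified version might look like the following Python/NumPy sketch (inputs are the normalized gravity components; function name is illustrative):

    import numpy as np

    def simple_roll_pitch(ax, ay, az):
        """The oversimplified estimate above: roll about the machine z axis
        from the x/y gravity components, pitch about the x axis from z.
        Inputs are normalized gravity components; outputs are radians."""
        roll = np.arctan2(ay, ax)
        pitch = np.arctan2(az, np.hypot(ax, ay))
        return roll, pitch

    # Rates by finite differences over the 1/8 s sample period, e.g.
    # roll_rate = np.diff(np.unwrap(roll_samples)) / 0.125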
Probably the key you need is this: you must transform the x and y accelerations from measurement coordinates to machine coordinates before estimating the pitch about the machine x axis. This requires you to first know the roll angle (about the machine z axis). In MATLAB syntax,
[x_machine; y_machine] = [cos(roll) -sin(roll); sin(roll) cos(roll)] * [x_meas; y_meas].
z_machine = z_meas, always.
Given x,y,z in machine coordinates, you can directly estimate the pitch angle and rate about the machine x axis:
pitch = atan2(z_machine, -y_machine) (right hand rule about the machine x axis; positive acceleration pointing down);
pitch_rate = -asin((xyz_i cross xyz_i-1)_x) / dt_i,
where in English, the rate is computed from the arcsine of the x_machine component of the cross product of the latest machine coordinate acceleration vector with the previous one, divided by the time between them (1/8 of a second in your case).
The same approach works for estimating roll and roll rate (about the machine z axis):
roll = atan2(-x_meas, -y_meas) * cos(pitch) / abs(cos(pitch));
roll_rate = -asin((xyz_meas_i cross xyz_meas_i-1)_z) * cos(pitch) / abs(cos(pitch)) / dt_i.
It is a chicken-and-egg problem, where you need to know the pitch and roll to estimate pitch and roll and their rates. So you need to start off with a good guess of the correct pitch and roll angles (to within 15 degrees or so should be fine).
All of the measurements are noisy, so the estimates will be also. The rate estimates especially so. So you will want to filter the estimates in time. Propagate your pitch and roll angle estimates in time using your filtered angle rate estimates.
Also, your roll angle and rate estimates become pure noise as pitch is near +/-pi/2, so you should probably weight down the inputs to the roll filters by something like cos^2(pitch).
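A literal, unfiltered transcription of this recipe into Python/NumPy might look like the sketch below. It takes two consecutive normalized gravity samples, the previous roll estimate (the chicken-and-egg seed mentioned above) and the sample period, and is only meant to show the order of operations, not a finished estimator.

    import numpy as np

    def step(xyz_meas, xyz_meas_prev, roll_prev, dt=0.125):
        """One update of the recipe above (no filtering).
        xyz_meas, xyz_meas_prev: consecutive normalized gravity vectors in
        measurement coordinates; roll_prev: previous roll estimate (radians);
        dt: sample period (1/8 s here)."""
        g = np.asarray(xyz_meas, dtype=float)         # latest gravity sample
        gp = np.asarray(xyz_meas_prev, dtype=float)   # previous gravity sample
        c, s = np.cos(roll_prev), np.sin(roll_prev)

        def to_machine(v):
            # [x; y]_machine = [cos -sin; sin cos] * [x; y]_meas, z unchanged
            return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]])

        gm, gmp = to_machine(g), to_machine(gp)       # both use roll_prev (an approximation)

        pitch = np.arctan2(gm[2], -gm[1])             # about the machine x axis
        sign = 1.0 if np.cos(pitch) >= 0 else -1.0    # cos(pitch)/abs(cos(pitch))
        roll = np.arctan2(-g[0], -g[1]) * sign        # about the machine z axis

        pitch_rate = -np.arcsin(np.cross(gm, gmp)[0]) / dt
        roll_rate = -np.arcsin(np.cross(g, gp)[2]) * sign / dt
        return roll, pitch, roll_rate, pitch_rate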
I get a series of square binary images as in the picture below,
I want to find the red point, which is the point of intersection of the four blocks (2 black and 2 white). To do so, I take the sum of all pixel values along the diagonal directions of the square image, i.e. along the 45 deg and 135 deg directions. The intersection of the 45 deg line with the maximum pixel sum and the 135 deg line with the minimum pixel sum is where my red point is.
Now that I have the coordinates of the red point in the 45 deg-135 deg coordinate system, how do I transform them to earth coordinates?
In other words, say I have a point in 45deg-135deg co-ordinate system; How do I find the corresponding co-ordinate values in x-y co-ordinate system? What is the transformation matrix?
some more information that might help:
1) If the image is a 60x60 image, I get 120 values in the 45deg-135deg system, since I scan each row followed by each column to add the pixels.
I don't know much about MATLAB, but in general all you need to do is rotate your grid by 45 degrees.
Here's a helpful link; shows you the rotation matrix you need
wikipedia rotation matrix article
The new coordinates for a point after 2D rotation look like this:
x' = x cos(theta) - y sin(theta)
y' = x sin(theta) + y cos(theta)
replace theta with 45 (or maybe -45) and you should be all set.
If your red dot starts out at (x,y), then after the -45 degree rotation it will have the new coordinates (x',y'), which are defined as follows:
x' = x cos(-45) - y sin (-45)
y' = x sin (-45) + y cos (-45)
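As a minimal sketch of the same formulas in Python (math module only; the function name and default angle are just illustrative):

    import math

    def rotate_point(x, y, theta_deg=-45.0):
        """Rotate (x, y) about the origin by theta_deg degrees,
        exactly as in the formulas above."""
        t = math.radians(theta_deg)
        return (x * math.cos(t) - y * math.sin(t),
                x * math.sin(t) + y * math.cos(t))

    # e.g. x_new, y_new = rotate_point(x_diag, y_diag)   # hypothetical variable names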
Sorry if I misunderstood your question, but why do you rotate the image? The x-value of your red point is just the place where the derivative in the x-direction has the maximum absolute value, and for the y-direction it is the same with the derivative in the y-direction.
Assume you have the following image
If you take the first row of the image, it has all 1s at the beginning and then zeroes for most of the width. The plot of this first row looks like this.
Now you convolve this line with the kernel {-1, 1}, which is just a single loop over the line, and you get
Going through this result and extracting the position of the point with the highest value gives you 72. Therefore the x-position of the red point is 73 (since the convolution kernel finds the derivative one point too early).
Therefore, if data is the image matrix of the above binary image, then extracting the position of your red point is close to a one-liner in Mathematica:
Last[Transpose[Position[ListConvolve[{-1, 1}, #] & /@
  {data[[1]], Transpose[data][[1]]}, 1 | -1]]] + 1
Here you get {73, 86} which is the correct position if y=0 is the top row. This method should be implemented in a few minutes in any language.
Remarks:
The approximated derivative, which is the result of the convolution, can be either negative or positive, depending on whether it is a change from 0 to 1 or vice versa. If you want to search for the highest value, you have to take the absolute value of the convolution result.
Remember that the first row of the image matrix is not always the top row of the displayed image. This depends on the software you are using. If you get wrong y values, be aware of that.
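For readers not using Mathematica, a rough NumPy equivalent of the same idea (0-based indexing, so the positions come out one less than the 1-based Mathematica values):

    import numpy as np

    def red_point(data):
        """Locate the block-intersection point by differentiating the first row
        and the first column.  `data` is the binary image as a 2D array;
        row 0 is assumed to be the top row.  Returns (x, y) as 0-based indices."""
        row = np.asarray(data)[0, :].astype(int)        # first row  -> x transition
        col = np.asarray(data)[:, 0].astype(int)        # first column -> y transition
        x = int(np.argmax(np.abs(np.diff(row)))) + 1    # +1: diff fires one pixel early
        y = int(np.argmax(np.abs(np.diff(col)))) + 1
        return x, y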