I read "Calculate Bounding box coordinates from a rotated rectangle" to learn how to calculate the bounding box coordinates of a rotated rectangle. But I have a special case, shown in the following image:
How do I get the rotated rectangle's size if I have the bounding box size and coordinates and the rotation angle in degrees?
I tried writing it in JavaScript:
// assume w = 123, h = 98, deg = 35, and the calculated bounding box size is:
var deg = 35;
var bw = 156.9661922099485;
var bh = 150.82680201149986;
//calculate w and h
var xMax = bw / 2;
var yMax = bh / 2;
var radian = (deg / 180) * Math.PI;
var cosine = Math.cos(radian);
var sine = Math.sin(radian);
var cx = (xMax * cosine) + (yMax * sine) / (cosine * cosine + sine * sine);
var cy = -(-(xMax * sine) - (yMax * cosine) / (cosine * cosine + sine * sine));
var w = (cx * 2 - bw)*2;
var h = (cy * 2 - bh)*2;
But the result does not match w and h.
Solution
Given a bounding box of dimensions bx by by, and t being the anticlockwise rotation of a rectangle sized x by y:
x = (1/(cos(t)^2-sin(t)^2)) * ( bx * cos(t) - by * sin(t))
y = (1/(cos(t)^2-sin(t)^2)) * (- bx * sin(t) + by * cos(t))
Derivation
Why is this?
First, consider that the length bx is cut in two pieces, a and b, by the corner of the rectangle. Use trigonometry to express bx in terms of x, y, and theta:
bx = b + a
bx = x * cos(t) + y * sin(t) [1]
and similarly for by:
by = c + d
by = x * sin(t) + y * cos(t) [2]
1 and 2 can be expressed in matrix form as:
[ bx ] = [ cos(t) sin(t) ] * [ x ]     [3]
[ by ]   [ sin(t) cos(t) ]   [ y ]
Note that the matrix is nearly a rotation matrix (but not quite - it's off by a minus sign.)
Left-divide the matrix on both sides, giving:
[ x ] = inverse ( [ cos(t) sin(t) ]   * [ bx ]     [4]
[ y ]             [ sin(t) cos(t) ] )   [ by ]
The matrix inverse is easy to evaluate for a 2x2 matrix and expands to:
[ x ] = (1/(cos(t)^2-sin(t)^2)) * [  cos(t) -sin(t) ] * [ bx ]     [5]
[ y ]                             [ -sin(t)  cos(t) ]   [ by ]
[5] gives the two formulas:
x = (1/(cos(t)^2-sin(t)^2)) * ( bx * cos(t) - by * sin(t)) [6]
y = (1/(cos(t)^2-sin(t)^2)) * (- bx * sin(t) + by * cos(t))
Easy as pie!
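For a quick sanity check, here is a minimal Python sketch of formula [6], applied to the numbers from the question (a 123 x 98 rectangle rotated by 35 degrees); note the denominator cos(t)^2 - sin(t)^2 vanishes at 45 degrees, where the problem has no unique solution:

import math

def rect_size_from_bbox(bw, bh, deg):
    # Implements [6]: x = ( bx*cos(t) - by*sin(t)) / (cos(t)^2 - sin(t)^2)
    #                 y = (-bx*sin(t) + by*cos(t)) / (cos(t)^2 - sin(t)^2)
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    denom = c * c - s * s          # zero at deg = 45, so the formula breaks down there
    w = (bw * c - bh * s) / denom
    h = (-bw * s + bh * c) / denom
    return w, h

# Numbers from the question:
print(rect_size_from_bbox(156.9661922099485, 150.82680201149986, 35))
# -> approximately (123.0, 98.0)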
You'll probably need something like an affine transformation to find the corner point coordinates, and then calculate the size using standard geometry formulas.
Related
I'm looking for a non-matrix solution (basic geometry) for calculating the absolute x, y position C(x, y) of a rotated offset position. I know the parent position A, the x, y offset B, and the rotation T. In the image, B is offset from A and rotated around A by T degrees. I need to know C's x and y.
(I treat Bx and By as positive values, i.e. distances.)
Let's find the corner point first:
Px = Ax + By * Sin(T)
Py = Ay - By * Cos(T)
Then find C point
Cx = Px + Bx * Cos(T)
Cy = Py + Bx * Sin(T)
And combine them:
Cx = Ax + By * Sin(T) + Bx * Cos(T)
Cy = Ay - By * Cos(T) + Bx * Sin(T);
I'm trying to implement a simple water simulation, with theory from GPU Gems 1 chapter 1.
If you imagine a 3D plane (flat in the xz plane, with y denoting height at any point), the height field function is given as:
W(x, z) = A * sin(w * (D.x * x + D.y * z) + t * phi)
where:
Wavelength (w): the crest-to-crest distance between waves in world space.
Amplitude (A): the height from the water plane to the wave crest.
Speed (S): the distance the crest moves forward per second.
Direction (D): the horizontal vector perpendicular to the wave front, along which the crest travels.
This is straightforward to implement.
Please note that the article in GPU Gems uses the z direction for height, but this isn't standard for graphics (normally x is width, y is height, and z is depth), so I'll use "the xz directions" to mean the flat/horizontal plane directions.
So, having computed the height (y) value at any given point, I need to compute the bitangent and tangent vectors to that point, in order that I can compute a normal vector, which I need for lighting equations.
The bitangent and tangent vectors are partial derivatives in the x and z directions (y is the heightfield value).
So my question is, how can I take a partial derivative in the x and then the z directions for the height field function?
The article says that the partial derivative in the x direction is given by:
W'(x) = A * w * D.x * cos(w * (D.x * x + D.y * z) + t * phi)
I understand the concept of taking a partial derivative from this video, but I don't know how to take the partial derivative of my height field function.
Can someone explain it (like I'm 5)? My grasp of maths isn't great!
You want to take the derivative of the following equation:
W(x) = A * sin(w * (D.x * x + D.y * z) + t * phi)
= A * sin(w * D.x * x + w * D.y * z + t * phi)
which is the above formula with the expanded dot product. Because we want to find the derivative with respect to x, all other variables (except x) are considered constant. So we can substitute the constants:
c1 = A
c2 = w * D.x
c3 = w * D.y * z + t * phi
W(x) = c1 * sin(c2 * x + c3)
The derivative is:
W'(x) = c1 * c2 * cos(c2 * x + c3)
Reverting the substitution we get:
W'(x) = A * w * D.x * cos(w * D.x * x + w * D.y * z + t * phi)
which describes the y-component of the tangent at a given position.
Similarly, the bitangent (derivative with respect to z) can be described by
W'(z) = A * w * D.y * cos(w * D.y * z + w * D.x * x + t * phi)
Therefore:
tangent = (1, W'(x), 0)
= (1, A * w * D.x * cos(w * D.x * x + w * D.y * z + t * phi), 0)
bitangent = (0, W'(z), 1)
= (0, A * w * D.y * cos(w * D.y * z + w * D.x * x + t * phi), 1)
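Putting it together, a minimal Python sketch of the height and both derivatives at one point might look like this; the normal is then the cross product of the bitangent and tangent (the parameter values in the example call are just placeholders):

import math

def wave_surface(x, z, t, A, w, Dx, Dy, phi):
    # W = A*sin(w*(D.x*x + D.y*z) + t*phi) and its partial derivatives, as above.
    arg = w * (Dx * x + Dy * z) + t * phi
    height = A * math.sin(arg)
    dWdx = A * w * Dx * math.cos(arg)   # W'(x)
    dWdz = A * w * Dy * math.cos(arg)   # W'(z)

    tangent = (1.0, dWdx, 0.0)
    bitangent = (0.0, dWdz, 1.0)

    # normal = bitangent x tangent = (-W'(x), 1, -W'(z)), normalized
    nx, ny, nz = -dWdx, 1.0, -dWdz
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    normal = (nx / length, ny / length, nz / length)
    return height, tangent, bitangent, normal

print(wave_surface(x=1.0, z=2.0, t=0.5, A=0.3, w=2.0, Dx=1.0, Dy=0.0, phi=1.5))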
I've written the Python script below. The idea is to calculate the new location of point C after you rotate the globe from point A to point B. I first calculate point P, which is the rotation pole. Something already goes wrong when calculating point P: with the following input, for example, I would expect point P to have a latitude of 90 or -90.
I asked this question before here: Rotate a sphere from coord1 to coord2, where will coord3 be?
But I figured it's better to ask again with the script included ;)
# GreatCircle can be downloaded from: http://www.koders.com/python/fid0A930D7924AE856342437CA1F5A9A3EC0CAEACE2.aspx?s=coastline
from GreatCircle import *
from math import *
# Points A and B defining the rotation:
LonA = radians(0)
LatA = radians(1)
LonB = radians(45)
LatB = radians(1)
# Point C which will be translated:
LonC = radians(90)
LatC = radians(1)
# The following equation is described here: http://articles.adsabs.harvard.edu//full/1953Metic...1...39L/0000040.000.html
# It calculates the rotation pole at point P of the Great Circle defined by point A and B.
# According to http://www.tutorialspoint.com/python/number_atan2.htm
# atan2(x, y) = atan(y / x)
LonP = atan2(((sin(LonB) * tan(LatA)) - (sin(LonA) * tan(LatB))), ((cos(LonA) * tan(LatB)) - (cos(LonB) * tan(LatA))))
LatP = atan2(-tan(LatA),(cos(LonP - LonA)))
print degrees(LonP), degrees(LatP)
# The equations to calculate the translated point C location were found here: http://www.uwgb.edu/dutchs/mathalgo/sphere0.htm
# The Rotation Angle in radians:
gcAP = GreatCircle(1,1,degrees(LonA),degrees(LatA),degrees(LonP),degrees(LatP))
gcBP = GreatCircle(1,1,degrees(LonB),degrees(LatB),degrees(LonP),degrees(LatP))
RotAngle = abs(gcAP.azimuth12 - gcBP.azimuth12)
# The rotation pole P in Cartesian coordinates:
Px = cos(LatP) * cos(LonP)
Py = cos(LatP) * sin(LonP)
Pz = sin(LatP)
# Point C in Cartesian coordinates:
Cx = cos(radians(LatC)) * cos(radians(LonC))
Cy = cos(radians(LatC)) * sin(radians(LonC))
Cz = sin(radians(LatC))
# The translated point C in Cartesian coordinates:
NewCx = (Cx * cos(RotAngle)) + (1 - cos(RotAngle)) * (Px * Px * Cx + Px * Py * Cy + Px * Pz * Cz) + (Py * Cz - Pz * Cy) * sin(RotAngle)
NewCy = (Cy * cos(RotAngle)) + (1 - cos(RotAngle)) * (Py * Px * Cx + Py * Py * Cy + Py * Pz * Cz) + (Pz * Cx - Px * Cz) * sin(RotAngle)
NewCz = (Cz * cos(RotAngle)) + (1 - cos(RotAngle)) * (Pz * Px * Cx + Pz * Py * Cy + Pz * Pz * Cz) + (Px * Cy - Py * Cx) * sin(RotAngle)
# The following equation I got from http://rbrundritt.wordpress.com/2008/10/14/conversion-between-spherical-and-cartesian-coordinates-systems/
# The translated point C in lat/long:
Cr = sqrt((NewCx*NewCx) + (NewCy*NewCy) + (NewCz*NewCz))
NewCLat = degrees(asin(NewCz/Cr))
NewCLon = degrees(atan2(NewCy, NewCx))
# Output:
print str(NewCLon) + "," + str(NewCLat)
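For reference, the long NewCx/NewCy/NewCz expressions above are intended to be the axis-angle (Rodrigues) rotation of C about the unit axis P. As a standalone sketch, that step would look like this (assuming P is a unit vector):

import math

def rotate_about_axis(c, p, angle):
    # c*cos(a) + (1 - cos(a))*(p . c)*p + (p x c)*sin(a), with p a unit axis
    cx, cy, cz = c
    px, py, pz = p
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    dot = px * cx + py * cy + pz * cz
    cross = (py * cz - pz * cy, pz * cx - px * cz, px * cy - py * cx)
    return tuple(c_i * cos_a + (1 - cos_a) * dot * p_i + cr_i * sin_a
                 for c_i, p_i, cr_i in zip(c, p, cross))

# Example: rotating (1, 0, 0) about the z axis by 90 degrees gives ~(0, 1, 0).
print(rotate_about_axis((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.radians(90)))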
I am trying to calculate the tangent (needed for bump mapping) for every vertex in my mesh. va, vb, and vc are the vertices of the triangle, and ta, tb, and tc are the respective texture coordinates. From what I understand, this should output the tangent for the three vertices of the triangle.
Vec3f va = Vec3f{vertexData[a * 3 + 0], vertexData[a * 3 + 1], vertexData[a * 3 + 2]};
Vec3f vb = Vec3f{vertexData[b * 3 + 0], vertexData[b * 3 + 1], vertexData[b * 3 + 2]};
Vec3f vc = Vec3f{vertexData[c * 3 + 0], vertexData[c * 3 + 1], vertexData[c * 3 + 2]};
Vec2f ta = (Vec2f){texcoordData[a * 2 + 0],texcoordData[a * 2 + 1]};
Vec2f tb = (Vec2f){texcoordData[b * 2 + 0],texcoordData[b * 2 + 1]};
Vec2f tc = (Vec2f){texcoordData[c * 2 + 0],texcoordData[c * 2 + 1]};
Vec3f v1 = subtractVec3f(vb, va);
Vec3f v2 = subtractVec3f(vc, va);
Vec2f t1 = subtractVec2f(tb, ta);
Vec2f t2 = subtractVec2f(tc, ta);
float coef = 1/(t1.u * t2.v - t1.v * t2.u);
Vec3f tangent = Vec3fMake((t2.v * v1.x - t1.v * v2.x) * coef,
(t2.v * v1.y - t1.v * v2.y) * coef,
(t2.v * v1.z - t1.v * v2.z) * coef);
My problem is that the coef variable is sometimes NaN (not a number), which throws the multiplication off. My mesh is not super complex, a simple cylinder, but I would like a universal formula for calculating the tangent so I can enable bump mapping on all of my meshes.
coef becoming NaN indicates a numerical problem with your input data, such as degenerate triangles or texture coordinates. Make sure the expression (t1.u * t2.v - t1.v * t2.u) doesn't (nearly) vanish, i.e. that its absolute value is larger than some reasonable threshold value.
A good sanity check is |vb-va| > 0 and |vc-va| > 0, |tb-ta| > 0 and |tc-ta| > 0, |normalized(vb-va) . normalized(vc-va)| < 1, and |normalized(tb-ta) . normalized(tc-ta)| < 1.
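A minimal sketch of that guard might look like this (Python for brevity; the epsilon value is an assumption you would tune to your coordinate scale):

EPS = 1e-8  # assumed threshold for a "nearly vanishing" determinant

def safe_tangent(v1, v2, t1, t2):
    # v1, v2 are edge vectors (3-tuples); t1, t2 are UV deltas (2-tuples)
    det = t1[0] * t2[1] - t1[1] * t2[0]   # t1.u*t2.v - t1.v*t2.u
    if abs(det) < EPS:
        return None  # degenerate UVs: skip the triangle or reuse a neighbour's tangent
    coef = 1.0 / det
    return tuple((t2[1] * v1[i] - t1[1] * v2[i]) * coef for i in range(3))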
I'm having a problem with OpenCV's various parameterizations of the coordinates used for camera calibration. The problem is that three different sources of information on the image distortion formulae apparently give three non-equivalent descriptions of the parameters and equations involved:
(1) In their book "Learning OpenCV…" Bradski and Kaehler write regarding lens distortion (page 376):
xcorrected = x * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ],
ycorrected = y * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ p1 * ( r^2 + 2 * y^2 ) + 2 * p2 * x * y ],
where r = sqrt( x^2 + y^2 ).
Presumably, (x, y) are the coordinates of pixels in the uncorrected captured image corresponding to world-point objects with coordinates (X, Y, Z), referenced to the camera frame, for which
xcorrected = fx * ( X / Z ) + cx and ycorrected = fy * ( Y / Z ) + cy,
where fx, fy, cx, and cy are the camera's intrinsic parameters. So, having (x, y) from a captured image, we can obtain the desired coordinates ( xcorrected, ycorrected ) to produce an undistorted image of the captured world scene by applying the first two correction expressions above.
However...
(2) The complication arises as we look at OpenCV 2.0 C Reference entry under the Camera Calibration and 3D Reconstruction section. For ease of comparison we start with all world-point (X, Y, Z) coordinates being expressed with respect to the camera's reference frame, just as in #1. Consequently, the transformation matrix [ R | t ] is of no concern.
In the C reference, it is expressed that:
x' = X / Z,
y' = Y / Z,
x'' = x' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ 2 * p1 * x' * y' + p2 * ( r'^2 + 2 * x'^2 ) ],
y'' = y' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ p1 * ( r'^2 + 2 * y'^2 ) + 2 * p2 * x' * y' ],
where r' = sqrt( x'^2 + y'^2 ), and finally that
u = fx * x'' + cx,
v = fy * y'' + cy.
As one can see these expressions are not equivalent to those presented in #1, with the result that the two sets of corrected coordinates ( xcorrected, ycorrected ) and ( u, v ) are not the same. Why the contradiction? It seems to me the first set makes more sense as I can attach physical meaning to each and every x and y in there, while I find no physical meaning in x' = X / Z and y' = Y / Z when the camera focal length is not exactly 1. Furthermore, one cannot compute x' and y' for we don't know (X, Y, Z).
(3) Unfortunately, things get even murkier when we refer to the writings in Intel's Open Source Computer Vision Library Reference Manual's section Lens Distortion (page 6-4), which states in part:
"Let ( u, v ) be true pixel image coordinates, that is, coordinates with ideal projection, and ( u ̃, v ̃ ) be corresponding real observed (distorted) image coordinates. Similarly, ( x, y ) are ideal (distortion-free) and ( x ̃, y ̃ ) are real (distorted) image physical coordinates. Taking into account two expansion terms gives the following:
x̃ = x * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ]
ỹ = y * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p2 * x * y + p1 * ( r^2 + 2 * y^2 ) ],
where r = sqrt( x^2 + y^2 ). ...
"Because u ̃ = cx + fx * u and v ̃ = cy + fy * v , … the resultant system can be rewritten as follows:
u ̃ = u + ( u – cx ) * [ k1 * r^2 + k2 * r^4 + 2 * p1 * y + p2 * ( r^2 / x + 2 * x ) ]
v ̃ = v + ( v – cy ) * [ k1 * r^2 + k2 * r^4 + 2 * p2 * x + p1 * ( r^2 / y + 2 * y ) ]
The latter relations are used to undistort images from the camera."
Well, it would appear that the expressions involving x̃ and ỹ coincide with the two expressions given at the top of this post involving xcorrected and ycorrected. However, x̃ and ỹ do not refer to corrected coordinates, according to the given description. I don't understand the distinction between the meaning of the coordinates ( x̃, ỹ ) and ( ũ, ṽ ), or, for that matter, between the pairs ( x, y ) and ( u, v ). From their descriptions it appears their only distinction is that ( x̃, ỹ ) and ( x, y ) refer to 'physical' coordinates while ( ũ, ṽ ) and ( u, v ) do not. What is this distinction all about? Aren't they all physical coordinates? I'm lost!
Thanks for any input!
There is no one-and-only formula for camera calibration; they are all valid. Notice that the first one contains constants k1, k2, and k3 for r^2, r^4, and r^6, while the other two only have constants for r^2 and r^4? That is because they are all approximate models. The first one is likely to be more accurate since it has more parameters.
Anytime you see:
r = sqrt( x^2 + y^2 )
it is probably safe to assume x = (the pixel's x coordinate) - (the camera center in pixels), since r usually means the radius from the center.
What are you trying to do by the way? Estimate the camera parameters, correct for lens distortion, or both?
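In any case, for concreteness, here is a small Python sketch of the projection/distortion model as quoted from the OpenCV reference in (2); it only illustrates the formulas, not any particular OpenCV API, and the intrinsics/distortion values in the example are made up:

def project_with_distortion(X, Y, Z, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    # x' = X/Z, y' = Y/Z, then radial + tangential distortion, then u = fx*x'' + cx
    xp, yp = X / Z, Y / Z
    r2 = xp * xp + yp * yp
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
    ypp = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
    return fx * xpp + cx, fy * ypp + cy

print(project_with_distortion(0.1, 0.2, 1.0, 800, 800, 320, 240,
                              k1=-0.2, k2=0.05, k3=0.0, p1=0.001, p2=0.001))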