How do I find the rotation value of the teapot so that, when the new rotation is applied, the teapot points towards the sphere (a point3) in the teapot's own local space?
Here is what the starting scene looks like:
This is the goal I'm trying to achieve:
Initial attempt:
delete objects
target = sphere pos:[20,20,20] radius:2
n = teapot radius:2 pos:[6,35,0]
rotate n (angleaxis -68.2351 [0.808965,0.587747,0.0113632])
dist = n.pos - target.pos
vec = normalize dist
upVecLocal = n.transform.row3 -- local up vector
dp = dot vec upVecLocal
t = acos dp
newDir = cross upVecLocal dist
n.dir = newDir
toolMode.coordsys #local
select n
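For reference, here is the angle-axis math this attempt is reaching for, as a minimal sketch in C++ with glm rather than MAXScript (the function and parameter names are mine, not a 3ds Max API). Two fixes relative to the script above: the direction should be computed as target minus object, and the rotation axis should be the normalized cross product of the current aim axis and that direction.

#include <glm/glm.hpp>

// Sketch: compute the angle-axis rotation that turns currentAxis (e.g. the
// local up axis, transform row3 in MAXScript) to point at the target.
// Degenerate when the two directions are parallel (cross product is zero).
void aimAt(const glm::vec3& objPos, const glm::vec3& targetPos,
           const glm::vec3& currentAxis,
           float& outAngleDeg, glm::vec3& outAxis) {
    glm::vec3 toTarget = glm::normalize(targetPos - objPos); // target minus object, not the reverse
    glm::vec3 from     = glm::normalize(currentAxis);
    float c     = glm::clamp(glm::dot(from, toTarget), -1.0f, 1.0f);
    outAngleDeg = glm::degrees(glm::acos(c));
    outAxis     = glm::normalize(glm::cross(from, toTarget)); // rotation axis
}

The resulting angle and axis can then be fed back into rotate n (angleaxis ...) on the 3ds Max side.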
Not sure what you're trying to achieve, but I think the best solution is a non-scripting one: add a LookAt Constraint to the Teapot:
Select the Teapot
Go to the Motion tab (the wheel, 4th one from the left)
Select the Rotation button, then click the button above the tree view that lists Transform, Position, Rotation and Scale.
Select the sphere as the Look At Target
Now if you move the sphere, the teapot will rotate to follow it.
Perhaps the question title needs some work.
For context, this is for a Koch snowflake (using C-like math syntax in a LabVIEW formula node), which is why the triangle must face the correct way: given two points, an equilateral triangle can be built in one of two directions.
To briefly go over the algorithm: I have an array of 4 predefined coordinates initially forming a triangle, the first "generation" of the fractal. To generate the next iteration, for each line (pair of coordinates) one must take the points 1/3 and 2/3 of the way along it to form the base of a new triangle on that face, and then calculate the position of the new triangle's third point (the subject of this question). Do this for all current sides, concatenating the resulting arrays into a new array that forms the next generation of the snowflake.
The array of coordinates is in clockwise order, i.e. each vertex travelling clockwise around the shape corresponds to the next item in the array, something like this for the 2nd generation:
This means that when adding a triangle to a face, e.g. between the vertices labelled 0 and 1 in that image, you first get the two intermediate points, which I'll call "c" and "d", and then you can just rotate "d" anti-clockwise around "c" by 60 degrees to find where the new triangle's top point will be (labelled "e").
I believe this should hold (rotating the later point 60 degrees anti-clockwise around the earlier one) anywhere around the snowflake; however, my maths currently only works when the initial triangle has a vertical side, e.g. [(0,0), (0,1)]. Otherwise the triangle goes off in some other direction.
I believe I have correctly constructed my loops such that the triangle-generating VI (virtual instrument, effectively a "function" in text-based languages) works on each line segment sequentially, but my actual calculation isn't working, and I am at a loss as to how to get it pointing in the right direction. Below is my current maths for calculating the triangle points from a single line segment, where a and b are the original vertices of the segment, c and d form the new triangle's base in line with the original segment, and e is the part that sticks out. I don't want to call it the "top", since for a triangle formed from a segment going from upper-right to lower-left, the "top" will stick down.
cx = ax + (bx - ax)/3;
dx = ax + 2*(bx - ax)/3;
cy = ay + (by - ay)/3;
dy = ay + 2*(by - ay)/3;
dX = dx - cx;
dY = dy - cy;
ex = (cos(1.0471975512) * dX + sin(1.0471975512) * dY) + cx;
ey = (sin(1.0471975512) * dX + cos(1.0471975512) * dY) + cy;
Note: 1.0471975512 is just 60 degrees in radians.
Currently for generation 2 it makes this: (note the seemingly separated triangle to the left is formed by the 2 triangles on the top and bottom having their e vertices meet in the middle and is not actually an independent triangle.)
I suspect slightly different equations may be needed depending on whether ax or bx is larger, etc., perhaps something to do with accounting for the periodicity of sin/cos (something about quadrants in polar coordinates?), as it looks like the misplaced triangles are at 60 degrees, just measured between the wrong lines. However, this is a guess, and I'm not able to work out how to do this programmatically, let alone on paper.
Thankfully the maths formula node allows for if and else statements, which would let this be implemented if that's the case, but as said I'm not very familiar with adjusting for what I'll naively call the "quadrants thing", and am unsure how to know which quadrant applies in each case.
This was a long and rambling question, so if you have any clarifying questions please comment and I'll try to fix anything and everything.
Answering my own question thanks to @JohanC. Unsurprisingly, this was a case of making many tiny adjustments and giving up just before getting it right.
The correct formula was this:
ex = (cos(1.0471975512) * dX + sin(1.0471975512) * dY) + cx;
ey = (-sin(1.0471975512) * dX + cos(1.0471975512) * dY) + cy;
i.e. just adding a minus sign to the second sine term. Note that if one were travelling anti-clockwise, one would want to rotate points clockwise, so instead the first sine term is negated and the second is positive.
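For anyone wanting to sanity-check the corrected step outside LabVIEW, here it is as a small C++ sketch (the struct and function names are mine):

#include <cmath>

// One Koch step on segment a->b: c and d are the third-points of the
// segment, and e is d rotated about c by -60 degrees (clockwise in
// standard math axes), which faces outward for a clockwise vertex order.
struct Pt { double x, y; };

void kochStep(Pt a, Pt b, Pt& c, Pt& d, Pt& e) {
    const double ang = 1.0471975512; // 60 degrees in radians
    c = { a.x + (b.x - a.x) / 3.0, a.y + (b.y - a.y) / 3.0 };
    d = { a.x + 2.0 * (b.x - a.x) / 3.0, a.y + 2.0 * (b.y - a.y) / 3.0 };
    double dX = d.x - c.x, dY = d.y - c.y;
    e = { std::cos(ang) * dX + std::sin(ang) * dY + c.x,
          -std::sin(ang) * dX + std::cos(ang) * dY + c.y };
}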
I'm currently attempting to teach myself perspective projection, my reference is the wikipedia page on the subject here: http://en.wikipedia.org/wiki/3D_projection#cite_note-3
My understanding is that you take the object to be projected and rotate and translate it into "camera space", such that the camera is at the origin looking directly down the z axis. (This matrix op from the wikipedia page: http://upload.wikimedia.org/math/5/1/c/51c6a530c7bdd83ed129f7c3f0ff6637.png)
You then project your new points into 2D space using this equation: http://upload.wikimedia.org/math/6/8/c/68cb8ee3a483cc4e7ee6553ce58b18ac.png
The first step I can do flawlessly. Granted, I wrote my own matrix library to do it, but I verified it was spitting out the right answer by typing the results into Blender, moving the camera to 0,0,0, and checking that it renders the same as the default scene.
However, the projection part is where it all goes wrong.
From what I can see, I ought to take the field of view, which by default in Blender is 28.842 degrees, and use it to calculate the value Wikipedia calls ez, by doing
ez = 1 / tan(fov / 2);
which is approximately 3.88 in this case.
I should then for every point be doing:
x = (ez / dz) * dx;
y = (ez / dz) * dy;
to get x and y coordinates in the range of -1 to 1 which I can then scale appropriately for the screen width.
However, when I do that my projected image is mirrored in the x axis and in any case doesn't match with the cube blender renders. What am I doing wrong, and what should I be doing to get the right projected coordinates?
I'm aware that you can do this whole thing with one matrix op, but for the moment I'm just trying to understand the equations, so please just stick to the question asked.
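For what it's worth, here is that projection step as a self-contained sketch (my own illustration, not from the thread; it assumes the OpenGL/Blender convention that the camera looks down the negative z axis, so visible points have negative dz; dividing by dz without flipping its sign is a common source of exactly this kind of mirroring):

#include <cmath>
#include <cstdio>

int main() {
    const double pi  = 3.14159265358979323846;
    const double fov = 28.842 * pi / 180.0;       // Blender's default FOV, in radians
    const double ez  = 1.0 / std::tan(fov / 2.0); // ~3.88, as computed above

    double dx = 1.0, dy = 0.5, dz = -5.0; // a camera-space point in front of the camera
    double x = (ez / -dz) * dx;           // note the sign flip on dz
    double y = (ez / -dz) * dy;
    std::printf("projected: %f %f\n", x, y); // both roughly in [-1, 1]
    return 0;
}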
From what you say in your question it's unclear whether you're having trouble with the Projection matrix or the Model matrix.
Like I said in my comments, you can Google glFrustum and gluLookAt to see exactly what these matrices look like. If you're familiar with matrix math (and it looks like you are), you will then understand how the coordinates are transformed into a 2D perspective.
Here is some sample OpenGL code that builds the Projection and View matrices, plus a Model matrix for a simple 30-degree rotation about the Y axis, so you can see how the components that go into these matrices are calculated.
// The Projection Matrix
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
near = -camera.viewPos.z - shapeSize * 0.5;
if (near < 0.00001)
    near = 0.00001;
far = -camera.viewPos.z + shapeSize * 0.5;
if (far < 1.0)
    far = 1.0;
radians = 0.0174532925 * camera.aperture / 2; // half aperture degrees to radians
wd2 = near * tan(radians);
ratio = camera.viewWidth / (float) camera.viewHeight;
if (ratio >= 1.0) {
    left = -ratio * wd2;
    right = ratio * wd2;
    top = wd2;
    bottom = -wd2;
} else {
    left = -wd2;
    right = wd2;
    top = wd2 / ratio;
    bottom = -wd2 / ratio;
}
glFrustum (left, right, bottom, top, near, far);

// The View Matrix
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
gluLookAt (camera.viewPos.x, camera.viewPos.y, camera.viewPos.z,
           camera.viewPos.x + camera.viewDir.x,
           camera.viewPos.y + camera.viewDir.y,
           camera.viewPos.z + camera.viewDir.z,
           camera.viewUp.x, camera.viewUp.y, camera.viewUp.z);

// The Model Matrix
glRotatef (30.0, 0.0, 1.0, 0.0);
You'll see that glRotate actually takes an axis-angle rotation (an angle of rotation plus a vector about which to rotate), which is essentially the information a quaternion encodes.
You could also do separate rotations about the X, Y and Z axis.
There's lots of information on the web about how to form 4x4 matrices for rotations, translations and scales. If you do each of these separately, you'll need to multiply them to get the Model matrix, e.g.:
If you have 4X4 matrices rotateX, rotateY, rotateZ, translate, scale, you might form your Model matrix by:
Model = scale * rotateX * rotateZ * rotateY * translate.
Order matters when you form the Model matrix. You'll get different results if you do the multiplication in a different order.
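To make the ordering concrete, here is a small sketch using glm (with column vectors, the OpenGL and glm convention, the rightmost matrix in the product is applied to the vertex first):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// The rightmost matrix is applied to the vertex first:
// this scales, then rotates, then translates.
glm::mat4 makeModel() {
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
    return T * R * S; // swap T and R and the object orbits the origin instead of spinning in place
}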
If your object is at the origin, I doubt you want to also put the camera at the origin.
Let's suppose we have a sphere of radius r at a distance d from the observer.
We define the following
O: observer
C: Center of the sphere
P: arbitrary visible point of the sphere (from the observer)
OC: line connecting the observer to the center of the sphere (fixed length: d)
OP: line connecting the observer and an arbitrary visible point of the sphere (variable length a, depending on the angle)
CP: line connecting the center of the sphere and this arbitrary visible point (fixed length: r)
theta: angle between OC and OP
shi: angle between OC and CP
In case P is one of the "external" visible points of the sphere, using basic geometry we have that
theta_max = atan( r/ sqrt(d^2-r^2) )
shi_max = PI/2 - theta_max
For any other point, I got the following equations
r*cos(shi) + a*cos(theta) = d
r*sin(shi) = a*sin(theta)
I think these equations are right, but I can see no way to write them as shi=f(theta), since 'a' also varies with it.
Is it possible? Or is any of these steps wrong?
EDIT
Working with the latest two equations, we can get
tan(theta) = r*sin(shi) / (d - r*cos(shi))
but I would need to get shi=f(theta) if possible
Let's call the angle between CP and OP λ. By the law of sines in triangle OCP (side d is opposite λ, side r is opposite theta), solving for λ is rather simple:
sin(λ) = sin(theta)*d/r
Now you know two angles within that triangle, and the remaining one follows from the angle sum of a triangle. One caveat: asin only returns the acute solution, and for the visible (near) intersection λ is actually obtuse, so take λ = PI - asin(sin(theta)*d/r), which gives:
shi = asin( sin(theta)*d/r ) - theta
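In code, for the visible intersection (a minimal sketch; the function name is mine):

#include <cmath>

// shi for the visible (near) intersection: lambda = pi - asin(sin(theta)*d/r),
// so shi = pi - theta - lambda = asin(sin(theta)*d/r) - theta.
// Only valid while sin(theta)*d/r <= 1, i.e. theta <= theta_max.
double shiFromTheta(double theta, double d, double r) {
    return std::asin(std::sin(theta) * d / r) - theta;
}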
The goal is to get a point in 3D space projected onto the camera's screen (the final goal is to produce a point cloud).
This is the setup:
Camera
position: px,py,pz
up direction: ux,uy,uz
look_at_direction: lx,ly,lz
screen_width
screen_height
screen_dist
Point in space:
p = (x,y,z)
And
init u,v,w:
w = || position - look_at_direction || = || (px,py,pz) - (lx,ly,lz) ||
u = || (ux,uy,uz) cross-product w ||
v = || w cross-product u ||
This gives me the camera's u,v,w axes (|| ... || meaning normalization).
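As a sketch in C++ with glm (assuming look_at_direction actually holds a point to look at, as the subtraction suggests; names are mine):

#include <glm/glm.hpp>

// Build the orthonormal camera basis described above.
void cameraBasis(const glm::vec3& camPos, const glm::vec3& lookAtPoint,
                 const glm::vec3& upDir,
                 glm::vec3& u, glm::vec3& v, glm::vec3& w) {
    w = glm::normalize(camPos - lookAtPoint); // points from the scene toward the camera
    u = glm::normalize(glm::cross(upDir, w));
    v = glm::normalize(glm::cross(w, u));     // already unit length; normalized for safety
}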
These are the steps I'm supposed to implement, and the way I understand them. I believe something was lost in translation.
(1) Find the ray from the camera to the point
I subtract the point with the camera's position
ray_to_point = p - position
(2) Calculate the dot product between the ray to the point and the normalized ray
to the center of the screen (the camera direction). Divide the result by screen_dist.
This will give you a ratio between the distance of the point and the distance to the camera's screen.
ratio = (ray_to_point * w)/screen_dist
Here I'm not sure whether I should be using the camera's original look_at value or the w vector, the unit vector of the camera's space.
(3) Divide the ray to the point by the ratio found in step 2, and add it to the camera position. This will give you the projection of the point on the camera screen.
point_on_camera_screen = position + ray_to_point / ratio
(4) Find a vector between the center of the camera's screen and the projected point.
Am I supposed to find the center pixel of the screen? How can I do that?
Thanks.
Consider the following figure:
The first step is to calculate the difference vector from the camera to p (call it diff). That's correct.
The next step is to express the difference vector in the camera's coordinate system. Since its axes are orthogonal, this can be done with the dot product. Be sure that the axes are unit vectors:
diffInCameraSpace = (dot(diff, u), dot(diff, v), dot(diff, w))
Now comes the scaling part so that the resulting difference vector has a z-component of screen_dist. So:
diffInCameraSpace *= screenDist / diffInCameraSpace.z
You don't want to transform it back to world space. You might need some further information on how camera units are mapped to pixels, but at this step you are basically done. You might want to shift the resulting diffInCameraSpace by (screenWidth / 2, screenHeight / 2, 0). Forget the z-component and you have the position on the screen.
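Putting the steps together (a sketch with glm; the helper name and the half-screen shift are mine, following the note above; it assumes u, v, w are orthonormal):

#include <glm/glm.hpp>

// Project point p onto the camera screen: express the camera->p vector in
// the (u, v, w) basis, scale it onto the plane at screenDist, then shift so
// (0, 0) is a screen corner. The z component is dropped at the end.
glm::vec2 projectToScreen(const glm::vec3& p, const glm::vec3& camPos,
                          const glm::vec3& u, const glm::vec3& v, const glm::vec3& w,
                          float screenDist, float screenWidth, float screenHeight) {
    glm::vec3 diff = p - camPos;
    glm::vec3 inCam(glm::dot(diff, u), glm::dot(diff, v), glm::dot(diff, w));
    inCam *= screenDist / inCam.z; // scale so the w-component equals screenDist
    return glm::vec2(inCam.x + screenWidth / 2.0f,
                     inCam.y + screenHeight / 2.0f);
}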
I want to refine a previous question:
How do i project a sphere onto the screen?
(2) gives a simple solution:
approximate radius on screen[CLIP SPACE] = world radius * cot(fov / 2) / Z
with:
fov = field of view angle
Z = z distance from camera to sphere
result is in clipspace, multiply by viewport size to get size in pixels
Now my problem is that I don't have the FOV; only the view and projection matrices are known (and the viewport size, if that helps).
Does anyone know how to extract the FOV from the projection matrix?
Update:
This approximation works better in my case:
float angularRadius = glm::atan(radius / distance);
float radiusPixels = angularRadius * glm::max(viewPort.width, viewPort.height) / glm::radians(fov);
I'm a bit late to this party, but I came across this thread when I was looking into the same problem. I spent a day looking into this and worked through some excellent articles I found here:
http://www.antongerdelan.net/opengl/virtualcamera.html
I ended up starting with the projection matrix and working backwards. I got the same formula you mention in your post above (where cot(x) = 1/tan(x)):
radius_pixels = (radius_worldspace / {tan(fovy/2) * D}) * (screen_height_pixels / 2)
(where D is the distance from camera to the target's bounding sphere)
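That formula translates directly into code (a sketch; the function name is mine, and fovy is in radians):

#include <cmath>

// Screen-space radius of a sphere, per the formula above.
// D = distance from the camera to the bounding sphere's center.
float radiusPixels(float radiusWorld, float fovy, float D, float screenHeightPx) {
    return (radiusWorld / (std::tan(fovy / 2.0f) * D)) * (screenHeightPx / 2.0f);
}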
I'm using this approach to determine the radius of an imaginary trackball that I use to rotate my object.
Btw Florian, you can extract the fovy from the Projection matrix as follows:
If you take the Sy component from the Projection matrix as shown here:
Sx  0   0   0
0   Sy  0   0
0   0   Sz  Pz
0   0  -1   0
where Sy = near / range
and where range = tan(fovy/2) x near
(you can find these definitions at the page I linked above)
if you substitute range in the Sy eqn above you get:
Sy = 1 / tan(fovy/2) = cot(fovy/2)
rearranging:
tan(fovy/2) = 1 / Sy
taking arctan (the inverse of tan) of both sides we get:
fovy/2 = arctan(1/Sy)
so,
fovy = 2 x arctan(1/Sy)
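In code this is a one-liner (a glm sketch; glm matrices are column-major, so proj[1][1] is the Sy element of the matrix shown above):

#include <glm/glm.hpp>

// Recover the vertical field of view (in radians) from a perspective
// projection matrix, using fovy = 2 * arctan(1 / Sy).
float fovyFromProjection(const glm::mat4& proj) {
    return 2.0f * glm::atan(1.0f / proj[1][1]);
}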
Not sure if you still care - it's been a while! - but maybe this will help someone else.
Update: see below.
Since you have the view and projection matrices, here's one way to do it, though it's probably not the shortest (a code sketch follows the steps):
transform the sphere's center into view space using the view matrix: call the result point C
transform a point on the surface of the sphere, e.g. C+(r, 0, 0) in world coordinates where r is the sphere's world radius, into view space; call the result point S
compute rv = distance from C to S (in view space)
let point S1 in view coordinates be C + (rv, 0, 0) - i.e. another point on the surface of the sphere in view space, for which the line C -> S1 is perpendicular to the "look" vector
project C and S1 into screen coords using the projection matrix as Cs and S1s
compute screen radius = distance between Cs and S1s
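Those steps as a glm sketch (the helper names are mine; it assumes a perspective projection, so the divide by w applies):

#include <glm/glm.hpp>

// NDC -> pixel coordinates for a given viewport.
glm::vec2 ndcToPixels(const glm::vec4& clip, float vpW, float vpH) {
    glm::vec2 ndc = glm::vec2(clip) / clip.w; // perspective divide
    return glm::vec2((ndc.x * 0.5f + 0.5f) * vpW,
                     (ndc.y * 0.5f + 0.5f) * vpH);
}

float sphereScreenRadius(const glm::mat4& view, const glm::mat4& proj,
                         const glm::vec3& center, float worldR,
                         float vpW, float vpH) {
    glm::vec3 C  = glm::vec3(view * glm::vec4(center, 1.0f));                           // step 1
    glm::vec3 S  = glm::vec3(view * glm::vec4(center + glm::vec3(worldR, 0, 0), 1.0f)); // step 2
    float rv     = glm::distance(C, S);                                                 // step 3
    glm::vec3 S1 = C + glm::vec3(rv, 0.0f, 0.0f);                                       // step 4
    glm::vec2 Cs  = ndcToPixels(proj * glm::vec4(C,  1.0f), vpW, vpH);                  // step 5
    glm::vec2 S1s = ndcToPixels(proj * glm::vec4(S1, 1.0f), vpW, vpH);
    return glm::distance(Cs, S1s);                                                      // step 6
}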
But yeah, like Brandorf said, if you can preserve the camera variables, like FOVy, it would be a lot easier. :-)
Update:
Here's a more efficient variant on the above: make an inverse of the projection matrix. Use it to transform the viewport edges back into view space. Then you won't have to project every box into screen coordinates.
Even better, do the same with the view matrix and transform the camera frustum back into world space. That would be more efficient for comparing many boxes against; but harder to figure out the math.
The answer posted at your link, radiusClipSpace = radius * cot(fov/2) / Z (where fov is the field-of-view angle and Z is the z-distance to the sphere), definitely works. However, keep in mind that radiusClipSpace must be multiplied by the viewport's width to get a pixel measure; radiusClipSpace will be a value between 0 and 1 if the object fits on the screen.
An alternative solution may be to use the solid angle of the sphere. The solid angle subtended by a sphere in a sky is basically the area it covers when projected to the unit sphere.
The formulae are given at this link but roughly what I'm doing is:
if( (!radius && !distance) || fabsf(radius) > fabsf(distance) )
    ; // NaN conditions; do something special.

theta = asinf( radius / distance );
sphereSolidAngle = 1 - cosf( theta ); // not multiplying by 2*PI, since only the ratio below is used
frustumSolidAngle = ( 1 - cosf( fovy / 2 ) ) / M_PI; // I cheated here: I assumed the solid
    // angle of the frustum is conical, then divided by PI
    // to turn it into a square (area of unit square = area of unit circle / PI)

numPxCovered = 768.f * 768.f * sphereSolidAngle / frustumSolidAngle; // 768x768 screen
radiusEstimate = sqrtf( numPxCovered / M_PI ); // area = PI*r*r
This works out to roughly the same numbers as radius * cot(fov / 2) / Z. If you only want an estimate of the area covered by the sphere's projection in px, this may be an easy way to go.
I'm not sure whether a better estimate of the frustum's solid angle could be found easily. This method involves more computation than radius * cot(fov / 2) / Z.
The FOV is not directly stored in the projection matrix, but rather used when you call gluPerspective to build the resulting matrix.
The best approach would be to simply keep all of your camera variables in their own class, such as a frustum class, whose member variables are used when you call gluPerspective or similar.
It may be possible to get the FOVy back out of the matrix, but the math required eludes me.