Related
I'm representing rotations for actors in my rendering engine using a vec4 with axis-angle notation. The first 3 components (x, y, z) represent the (normalized) axis of rotation, and the last component (w) represents the angle (in radians) we have rotated about this axis.
For example,
With axis (0, 1, 0) and angle 0, up is (0, 1, 0) and forward is (0, 0, -1).
With axis (0, 0, 1) and angle π (180°), up is (0, 0, 1) and forward is (0, -1, 0).
My current solution (which doesn't work) looks like this:
// glm::vec4 Movable::getOrientation();
// glm::vec3 FORWARD(0.0f, 0.0f, -1.0f);
glm::vec3 Movable::getForward() {
    return glm::vec3(glm::rotate(
        this->getOrientation().w, glm::vec3(this->getOrientation())) *
        glm::vec4(FORWARD, 1.0f));
}
I've defined the up direction to be the same as the rotational axis, but I'm having trouble calculating the forward directional vector for an arbitrary axis. What is the easiest way to do this? I'd like to take advantage of glm functions wherever possible.
One thing to keep in mind about axis-angle is that "up" should mean the same thing for all rotations with an angle of 0, since an angle of 0 represents no rotation no matter which direction the axis points, so you can't simply say that up is in the direction of the axis. The proper way to calculate forward and up is to start with two reference vectors, say (1, 0, 0) for forward and (0, 1, 0) for up, and then apply the rotation to both of them to obtain the new forward and up.
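For illustration, here is a minimal numpy sketch of that approach; it uses Rodrigues' rotation formula in place of glm::rotate, and the reference vectors and example values are just the ones mentioned above, not anything from the original code:

import numpy as np

def rotate(v, axis, angle):
    # Rodrigues' rotation formula: rotate v by 'angle' radians about the unit vector 'axis'
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

forward_ref = np.array([1.0, 0.0, 0.0])   # reference forward
up_ref      = np.array([0.0, 1.0, 0.0])   # reference up

axis, angle = np.array([0.0, 0.0, 1.0]), np.radians(90)
forward = rotate(forward_ref, axis, angle)   # -> (0, 1, 0)
up      = rotate(up_ref, axis, angle)        # -> (-1, 0, 0)

In GLM the same idea would be to multiply both reference vectors by the rotation matrix, rather than treating the axis itself as "up".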
I'm a little confused about the coordinate system in Babylon.js. That is, when I use the following sequence of statements:
var camera = new BABYLON.ArcRotateCamera("Camera", 0, 0, 50, new BABYLON.Vector3(0, 0, 0), scene);
var sphere1 = BABYLON.Mesh.CreateSphere("sphere1", 16, 1.0, scene);
the sphere is drawn in the center of the screen. OK. When I use the following sequence:
var camera = new BABYLON.ArcRotateCamera("Camera", 50, 0, 0, new BABYLON.Vector3(0, 0, 0), scene);
var sphere1 = BABYLON.Mesh.CreateSphere("sphere1", 16, 1.0, scene);
no sphere is drawn.
I know that usually the coordinate axes (in CG) are as follows: Oy vertical, Ox horizontal, Oz pointing toward the screen. So, in the second sequence, the camera should be at the point x = 50, in the xOz plane (that is, the ground), looking at the origin, where the sphere is.
I guess I got lost somewhere along the way. Can you help me understand where I am wrong?
Thank you,
Eb_cj
Hello. ArcRotateCamera uses two angles (alpha and beta), plus a radius, to define the position of the camera on a sphere centered around a target point; the first three numeric arguments of the constructor are alpha, beta and radius, not x, y, z coordinates.
Feel free to read this for more info:
https://github.com/BabylonJS/Babylon.js/wiki/05-Cameras
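To make the parameter order concrete, here is a rough Python sketch of how (alpha, beta, radius) relate to a camera position around the target. The exact sign conventions and Babylon's internal clamping of beta may differ; this is only meant to show that the first three numbers are two angles and a distance, not coordinates:

import math

def arc_rotate_position(alpha, beta, radius, target):
    # one common spherical convention: alpha = longitude around the vertical axis,
    # beta = angle measured down from the vertical axis, radius = distance from target
    x = target[0] + radius * math.cos(alpha) * math.sin(beta)
    y = target[1] + radius * math.cos(beta)
    z = target[2] + radius * math.sin(alpha) * math.sin(beta)
    return (x, y, z)

# first snippet: alpha=0, beta=0, radius=50 -> camera 50 units from the target, above it
print(arc_rotate_position(0, 0, 50, (0, 0, 0)))   # (0.0, 50.0, 0.0)

# second snippet: alpha=50, beta=0, radius=0 -> camera sits exactly on the target, so nothing is visible
print(arc_rotate_position(50, 0, 0, (0, 0, 0)))   # (0.0, 0.0, 0.0)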
I'm using the Python OpenGL bindings, and trying to use only modern OpenGL calls. I have a VBO with vertices, and I am trying to render with an orthographic projection matrix passed to the vertex shader.
At present I am calculating my projection matrix with the following values:
from numpy import array
w = float(width)
h = float(height)
n = 0.5
f = 3.0
matrix = array([
[2/w, 0, 0, 0],
[ 0, 2/h, 0, 0],
[ 0, 0, 1/(f-n), -n/(f-n)],
[ 0, 0, 0, 1],
], 'f')
#later
projectionUniform = glGetUniformLocation(shader, 'projectionMatrix')
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
I got that code from here:
Formula for a orthogonal projection matrix?
This seems to work fine, but I would like my origin to be in the bottom-left corner of the screen. Is there a transformation I can apply to my matrix so everything "just works", or must I translate every object by (w/2, h/2) manually?
Side note: will the coordinates match pixel positions once this is working correctly?
Because I'm using modern OpenGL techniques, I don't think I should be using gluOrtho2D or GL_PROJECTION calls.
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
Your matrix is stored in row-major order, so you should either pass GL_TRUE for the transpose argument, or change your matrix to column-major.
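For example, with the numpy array from the question (and assuming the same PyOpenGL imports), either of the following should work; you only need one of them:

# either ask OpenGL to transpose the row-major matrix on upload ...
glUniformMatrix4fv(projectionUniform, 1, GL_TRUE, matrix)
# ... or transpose it yourself and keep GL_FALSE
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix.T.copy())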
I'm not completely familiar with projections yet, as I've only started OpenGL programming recently, but your current matrix does not translate any points in x or y. The diagonal applies scaling, while the rightmost column applies translation. The link Dirk gave gives you a projection matrix that will make your origin (0,0 is what you want, yes?) the bottom-left corner of your screen.
A matrix I've used to do this (each row is actually a column to OpenGL):
OrthoMat = mat4(
    vec4(2.0 / (screenDim.s - left), 0.0, 0.0, 0.0),
    vec4(0.0, 2.0 / (screenDim.t - bottom), 0.0, 0.0),
    vec4(0.0, 0.0, -1.0 * (2.0 / (zFar - zNear)), 0.0),
    vec4(-1.0 * (screenDim.s + left) / (screenDim.s - left),
         -1.0 * (screenDim.t + bottom) / (screenDim.t - bottom),
         -1.0 * (zFar + zNear) / (zFar - zNear),
         1.0)
);
The screenDim math is effectively the width or height, since left and bottom are both set to 0. zFar and zNear are 1 and -1, respectively (since it's 2D, they're not extremely important).
This matrix takes values in pixels, and the vertex positions need to be in pixels as well. The point (0, 32) will stay at the same on-screen position even when you resize the window.
Hope this helps.
Edit #1: To be clear, the left/bottom/zFar/zNear values I stated are simply the ones I chose; you can change them as you see fit.
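In the asker's numpy setup, the same idea (left = bottom = 0, right = w, top = h) might look like the sketch below. It keeps the question's z row unchanged, assumes width and height as defined in the question, and, per the earlier answer, still needs GL_TRUE (or a transpose) when uploaded, since it is written row-major:

from numpy import array

w, h = float(width), float(height)
n, f = 0.5, 3.0

matrix = array([
    [2/w,   0,       0,       -1],
    [  0, 2/h,       0,       -1],
    [  0,   0, 1/(f-n), -n/(f-n)],
    [  0,   0,       0,        1],
], 'f')
# x in [0, w] and y in [0, h] now map to clip space [-1, 1],
# so (0, 0) is the bottom-left corner and vertex positions can be given in pixels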
You can use a more general projection matrix which additionally uses left and right positions.
See Wikipedia for the definition.
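As a sketch of that general form (the standard glOrtho-style matrix, written row-major in numpy, with width/height assumed from the question):

from numpy import array

def ortho(left, right, bottom, top, near, far):
    # standard orthographic projection; translation terms sit in the rightmost column
    return array([
        [2/(right-left), 0, 0, -(right+left)/(right-left)],
        [0, 2/(top-bottom), 0, -(top+bottom)/(top-bottom)],
        [0, 0, -2/(far-near),  -(far+near)/(far-near)],
        [0, 0, 0, 1],
    ], 'f')

# e.g. origin in the bottom-left corner, pixel units, a simple z range for 2D
projection = ortho(0, width, 0, height, -1, 1)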
I'm a bit rusty here.
I have a vector (camDirectionX, camDirectionY, camDirectionZ) that represents my camera direction of view.
I have a (camX, camY, camZ) that is my camera position.
Then, I have an object placed at (objectX, objectY, objectZ)
How can I calculate, from the camera's point of view, the azimuth and elevation of my object?
The first thing I would do, to simplify the problem, is transform the coordinate space so the camera is at (0, 0, 0) and pointing straight down one of the axes (so the direction is say (0, 0, 1)). Translating so the camera is at (0, 0, 0) is pretty trivial, so I won't go into that. Rotating so that the camera direction is (0, 0, 1) is a little trickier...
One way of doing it is to construct the full orthonormal basis of the camera, then stick that in a rotation matrix and apply it. The "orthonormal basis" of the camera is a fancy way of saying the three vectors that point forward, up, and right from the camera. They should all be at 90 degrees to each other (which is what the ortho bit means), and they should all be of length 1 (which is what the normal bit means).
You can get these vectors with a bit of cross-product trickery: the cross product of two vectors is perpendicular (at 90 degrees) to both.
To get the right-facing vector, we can just cross-product the camera direction vector with (0, 1, 0) (a vector pointing straight up). You'll need to normalise the vector you get out of the cross-product.
To get the up vector of the camera, we can cross product the camera direction vector with the right-facing vector we just calculated. Assuming both input vectors are normalised, this shouldn't need normalising.
We now have the orthonormal basis of the camera. If we stick these vectors into the rows of a 3x3 matrix, we get a rotation matrix that will transform our coordinate space so the camera is pointing straight down one of the axes (which one depends on the order you stick the vectors in).
It's now fairly easy to calculate the azimuth and elevation of the object.
To get the azimuth, just do an atan2 on the x/z coordinates of the object.
To get the elevation, project the object coordinates onto the x/z plane (just set the y coordinate to 0), then do:
acos(dot(normalise(object coordinates), normalise(projected coordinates)))
This will always give a positive angle -- you probably want to negate it if the object's y coordinate is less than 0.
The code for all of this will look something like:
import numpy as np

def normalise(v):
    return v / np.linalg.norm(v)

fwd = np.array([camDirectionX, camDirectionY, camDirectionZ], dtype=float)
cam = np.array([camX, camY, camZ], dtype=float)
obj = np.array([objectX, objectY, objectZ], dtype=float)

# if fwd is already normalised you can skip this
fwd = normalise(fwd)

# translate so the camera is at (0, 0, 0)
obj = obj - cam

# calculate the orthonormal basis of the camera
right = normalise(np.cross(fwd, [0.0, 1.0, 0.0]))
up = np.cross(right, fwd)

# rotate so the camera is pointing straight down the z axis
# (this is essentially a matrix multiplication)
obj = np.array([np.dot(obj, right), np.dot(obj, up), np.dot(obj, fwd)])

azimuth = np.arctan2(obj[0], obj[2])

# project onto the x/z plane to measure the elevation
proj = np.array([obj[0], 0.0, obj[2]])
elevation = np.arccos(np.dot(normalise(obj), normalise(proj)))
if obj[1] < 0:
    elevation = -elevation
One thing to watch out for is that the cross-product of your original camera vector with (0, 1, 0) will return a zero-length vector when your camera is facing straight up or straight down. To fully define the orientation of the camera, I've assumed that it's always "straight", but that doesn't mean anything when it's facing straight up or down -- you need another rule.
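One possible rule (just an illustration, reusing normalise and fwd from the Python above) is to fall back to a different helper axis when the camera direction is nearly parallel to (0, 1, 0):

helper = np.array([0.0, 1.0, 0.0])
if abs(np.dot(fwd, helper)) > 0.999:
    # camera is looking (almost) straight up or down: pick another helper axis
    helper = np.array([0.0, 0.0, 1.0])
right = normalise(np.cross(fwd, helper))
up = np.cross(right, fwd)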
I want to extend Processing in order to be able to render 3D stuff with oblique projections (cabinet or cavalier). After looking through the source of the camera(), perspective() and ortho() methods, I was able to set up an orthographic projection and then adjust the PGraphics3D#camera matrix to an appropriate value, with partial success.
void setup() {
    camera(30, 30, 30, 0, 0, 0, 1, 1, 0);
    ortho(-100, 100, -100, 100, -500, 500);
    p3d.camera.set(1, 0, -0.433f, 0, 0, 1, 0.25f, 0, 0, 0, 0, 0, 0, 0, 0, 1);
}

void draw() {
    box(20);
}
This results in the right projection, but without surface filling. When I remove either the camera or the ortho call, or both, the screen is empty, although I'd expect camera(...) to operate on the same matrix that is overwritten later on.
Moreover, I'm a little bit confused about the matrices in PGraphics3D: camera, modelView and projection. While OpenGL keeps two matrix stacks (modelview and projection), here there is a third one: camera. Can anybody shed some light on the difference and relation between these matrices?
This would be helpful in order to know when to use/set which one.
Great question!
I ran the following code as you had it, and it looked like an isometric view of a white cube.
1: size(300,300,P3D);
2: camera(30, 30, 30, 0, 0, 0, 1, 1, 0);
3: ortho(-100, 100, -100, 100, -500, 500);
4: PGraphics3D p3d = (PGraphics3D)g;
5: p3d.camera.set(1, 0, -0.433f, 0, 0, 1, 0.25f, 0, 0, 0, 0, 0, 0, 0, 0, 1);
6: box(20);
Here's what's happening:
Line 2: sets both the camera and modelview matrices
Line 3: sets the projection matrix
Line 5: sets the camera matrix only, but this actually did nothing here. (read on)
Transformations are only performed using the modelview and projection matrices. The camera matrix is merely a convenient separation of what the modelview is usually initialized to.
If you use the draw() function, the modelview matrix is re-initialized to the camera matrix each time before draw() is called. Since you didn't use the draw() function, your modelview matrix was never updated with the oblique transform you put in your camera matrix.
How to create an Oblique Projection
As a disclaimer, you must truly understand how matrices are used to transform coordinates. Order is very important. This is a good resource for learning it:
http://glprogramming.com/red/chapter03.html
The quickest explanation I can give is that the modelview matrix turns object coordinates into eye coordinates relative to the camera, then the projection matrix takes those eye coordinates and turns them into screen coordinates. So you want to apply the oblique transform before the transformation into screen coordinates.
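As a rough numpy illustration (not Processing code) of what that shear does, the matrix applied in oblique() below offsets x and y in proportion to a point's depth while leaving z alone; the angle and zscale values are the ones used in the example that follows:

import numpy as np

angle, zscale = np.radians(60), 0.5   # cabinet projection: receding axis at 60 degrees, half scale

shear = np.array([
    [1, 0, -zscale * np.cos(angle), 0],
    [0, 1,  zscale * np.sin(angle), 0],
    [0, 0,  1,                      0],
    [0, 0,  0,                      1],
], dtype=float)

p = np.array([0.0, 0.0, -100.0, 1.0])   # a point 100 units deep in eye space
print(shear @ p)                        # -> [25., -43.3, -100., 1.]: shifted sideways by half its depth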
Here's a runnable example for creating a cabinet projection that displays some cubes:
void setup()
{
    strokeWeight(2);
    smooth();
    noLoop();
    size(600, 600, P3D);
    oblique(radians(60), 0.5);
}

void draw()
{
    background(100);

    // size of the box
    float w = 100;

    // draw box in the middle
    translate(width/2, height/2);
    fill(random(255), random(255), random(255), 100);
    box(w);

    // draw box behind
    translate(0, 0, -w*4);
    fill(random(255), random(255), random(255), 100);
    box(w);

    // draw box in front
    translate(0, 0, w*8);
    fill(random(255), random(255), random(255), 100);
    box(w);
}

void oblique(float angle, float zscale)
{
    PGraphics3D p3d = (PGraphics3D)g;

    // set orthographic projection
    ortho(-width/2, width/2, -height/2, height/2, -5000, 5000);

    // get camera's z translation
    // ... so we can transform from the original z=0
    float z = p3d.camera.m23;

    // apply z translation
    p3d.projection.translate(0, 0, z);

    // apply oblique projection
    p3d.projection.apply(
        1, 0, -zscale*cos(angle), 0,
        0, 1,  zscale*sin(angle), 0,
        0, 0,  1, 0,
        0, 0,  0, 1);

    // remove z translation
    p3d.projection.translate(0, 0, -z);
}