Orthographic projection with origin at screen bottom left - math

I'm using the Python OpenGL bindings and trying to use only modern OpenGL calls. I have a VBO with vertices, and I am trying to render with an orthographic projection matrix passed to the vertex shader.
At present I am calculating my projection matrix with the following values:
from numpy import array

w = float(width)
h = float(height)
n = 0.5   # near plane
f = 3.0   # far plane

# row-major orthographic projection: x in [-w/2, w/2] and y in [-h/2, h/2]
# map to [-1, 1]; z in [n, f] maps to [0, 1]
matrix = array([
    [2/w, 0,   0,       0       ],
    [0,   2/h, 0,       0       ],
    [0,   0,   1/(f-n), -n/(f-n)],
    [0,   0,   0,       1       ],
], 'f')

# later
projectionUniform = glGetUniformLocation(shader, 'projectionMatrix')
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
I got that code from here:
Formula for a orthogonal projection matrix?
This seems to work fine, but I would like my origin to be in the bottom-left corner of the screen. Is there a transformation I can apply to my matrix so everything "just works", or must I translate every object by (w/2, h/2) manually?
Side note: will the coordinates match pixel positions once this is working correctly?
Because I'm using modern OpenGL techniques, I don't think I should be using gluOrtho2D or GL_PROJECTION calls.

glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
Your matrix is stored in row-major order, so you should either pass GL_TRUE as the transpose argument or change your matrix to column-major order.
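With numpy, either of these two calls should work (a sketch, using the names from the question):

# option 1: let OpenGL transpose the row-major matrix during upload
glUniformMatrix4fv(projectionUniform, 1, GL_TRUE, matrix)

# option 2: convert to column-major yourself and keep GL_FALSE
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix.T.copy())

(The .copy() makes the transposed array contiguous in memory before it is handed to OpenGL.)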

I'm not completely familiar with projections yet, as I've only recently started OpenGL programming, but your current matrix does not translate any points in x or y. The diagonal applies scaling, while the right-most column applies translation. The link Dirk gave provides a projection matrix that puts the origin (0,0 is what you want, yes?) at the bottom-left corner of your screen.
A matrix I've used to do this (each row below is actually a column to OpenGL):

OrthoMat = mat4(
    vec4(2.0/(screenDim.s - left), 0.0, 0.0, 0.0),
    vec4(0.0, 2.0/(screenDim.t - bottom), 0.0, 0.0),
    vec4(0.0, 0.0, -2.0/(zFar - zNear), 0.0),
    vec4(-(screenDim.s + left)/(screenDim.s - left),
         -(screenDim.t + bottom)/(screenDim.t - bottom),
         -(zFar + zNear)/(zFar - zNear),
         1.0)
);
The screenDim terms are effectively just the width and height, since left and bottom are both set to 0. zFar and zNear are 1 and -1, respectively (since it's 2D, their exact values are not very important).
This matrix takes values in pixels, and the vertex positions need to be in pixels as well. The point (0, 32) will also stay at the same on-screen position when you resize the window.
Hope this helps.
Edit #1: To be clear, the left/bottom/zfar/znear values I stated are the ones I chose to make them. You can change these how you see fit.
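Translated to the asker's Python/numpy setup, that matrix might look like the sketch below (row-major, so pass GL_TRUE when uploading; left = bottom = 0 and zNear/zFar = -1/1 as above):

from numpy import array

def ortho_bottom_left(width, height, near=-1.0, far=1.0):
    # row-major orthographic matrix with the origin at the bottom-left
    # corner: x in [0, width] and y in [0, height] map to [-1, 1]
    w, h = float(width), float(height)
    return array([
        [2.0/w, 0,     0,               -1.0],
        [0,     2.0/h, 0,               -1.0],
        [0,     0,     -2.0/(far-near), -(far+near)/(far-near)],
        [0,     0,     0,                1.0],
    ], 'f')

# upload, letting OpenGL transpose the row-major data:
# glUniformMatrix4fv(projectionUniform, 1, GL_TRUE, ortho_bottom_left(w, h))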

You can use a more general orthographic projection matrix which additionally takes the left and right (and bottom and top) positions.
See Wikipedia for the definition.
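For reference, here is a Python sketch of that general matrix, mirroring the classic glOrtho(left, right, bottom, top, near, far) parameters (row-major; an illustration, not the asker's exact code):

from numpy import array

def ortho(left, right, bottom, top, near, far):
    # row-major equivalent of glOrtho; pass GL_TRUE when uploading
    return array([
        [2.0/(right-left), 0, 0, -(right+left)/(right-left)],
        [0, 2.0/(top-bottom), 0, -(top+bottom)/(top-bottom)],
        [0, 0, -2.0/(far-near),  -(far+near)/(far-near)],
        [0, 0, 0, 1.0],
    ], 'f')

# origin at the bottom-left of a w-by-h window:
# projection = ortho(0, w, 0, h, -1, 1)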

Related

How to calculate direction vectors from axis-angle rotation?

I'm representing rotations for actors in my rendering engine using a vec4 with axis-angle notation. The first 3 components (x, y, z) represent the (normalized) axis of rotation, and the last component (w) represents the angle (in radians) we have rotated about this axis.
For example,
With axis (0, 1, 0) and angle 0, up is (0, 1, 0) and forward is (0, 0, -1).
With axis (0, 0, 1) and angle π (180°), up is (0, 0, 1) and forward is (0, -1, 0).
My current solution (which doesn't work) looks like this:
// glm::vec4 Movable::getOrientation();
// glm::vec3 FORWARD(0.0f, 0.0f, -1.0f);
glm::vec3 Movable::getForward() {
    return glm::vec3(glm::rotate(
        this->getOrientation().w, glm::vec3(this->getOrientation())) *
        glm::vec4(FORWARD, 1.0f));
}
I've defined the up direction to be the same as the rotational axis, but I'm having trouble calculating the forward directional vector for an arbitrary axis. What is the easiest way to do this? I'd like to take advantage of glm functions wherever possible.
One thing to keep in mind about axis-angle is that "up" should mean the same thing for all rotations with an angle of 0, since an angle of 0 represents no rotation no matter where the axis points. So you can't simply say that up is the direction of the axis. The proper way to calculate forward and up is to start with two reference vectors, say (1,0,0) for forward and (0,1,0) for up, and apply the rotation to both of them to obtain the new forward and up.
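A numpy sketch of that approach; Rodrigues' rotation formula stands in for glm::rotate, and the axis/angle values are just an example:

import numpy as np

def rotate(v, axis, angle):
    # Rodrigues' rotation of v about a normalised axis by angle (radians)
    axis = axis / np.linalg.norm(axis)
    c, s = np.cos(angle), np.sin(angle)
    return v*c + np.cross(axis, v)*s + axis*np.dot(axis, v)*(1.0 - c)

forward0 = np.array([1.0, 0.0, 0.0])  # reference forward
up0      = np.array([0.0, 1.0, 0.0])  # reference up

axis, angle = np.array([0.0, 0.0, 1.0]), np.pi
forward = rotate(forward0, axis, angle)  # -> (-1, 0, 0)
up      = rotate(up0, axis, angle)       # -> ( 0, -1, 0)

With angle = 0 both vectors come back unchanged regardless of the axis, which is exactly the point made above.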

Interpolation in a distorted box

I want to interpolate in a distorted box. We have 8 points of a distorted box (p0, p1, p2, ..., p7). If we can find the transformation that maps this box to the unit box with corners ((0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0), (1, 0, 0), (1, 0, 1), (1, 1, 1), (1, 1, 0)), the interpolation becomes very simple. Does anyone have an idea how to interpolate in a distorted box, or how to find such a transformation from a distorted box to the unit box?
This does not answer the original question, since you said in a comment that you simply want to interpolate a function inside the cube using the values at its 8 vertices.
To do that, you can reason as follows:
1) Split the cube into 6 tetrahedra.
2) Find the tetrahedron that contains the point you want to interpolate.
3) An irregular tetrahedron can easily be mapped to a regular one; that is, you can easily obtain the generalized tetrahedral (barycentric) coordinates of a point. Check eq. 9-11 here.
4) Once you have the tetrahedral coordinates of your point, the interpolation is trivial (see the previous link).
This is the easiest way I can think of. The big downside is that there are 13 ways to split a cube into tetrahedra, and the choice produces (slightly) different results, especially if the cube is heavily deformed. You should aim for a Delaunay tetrahedralization of the cube to minimize this effect.
Also notice that the interpolated function defined this way is continuous across the faces of the tetrahedra (but not differentiable).
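A minimal numpy sketch of steps 3-4 for a single tetrahedron (the vertices and vertex values are hypothetical; the coordinates come from solving a small linear system rather than the specific equations in the link):

import numpy as np

def tet_interpolate(p, verts, values):
    # interpolate values given at the 4 vertices of a tetrahedron
    # at point p, using barycentric coordinates
    v0, v1, v2, v3 = (np.asarray(v, dtype=float) for v in verts)
    # solve [v1-v0 | v2-v0 | v3-v0] * (b1, b2, b3) = p - v0
    T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    b1, b2, b3 = np.linalg.solve(T, np.asarray(p, dtype=float) - v0)
    b0 = 1.0 - b1 - b2 - b3
    # p lies inside the tetrahedron iff all four weights are in [0, 1]
    w = np.array([b0, b1, b2, b3])
    return w @ np.asarray(values, dtype=float)

# usage: values 0..3 at the corners of a reference tetrahedron
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(tet_interpolate((0.25, 0.25, 0.25), verts, [0.0, 1.0, 2.0, 3.0]))  # 1.5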
You can apply the inverse of a scaling matrix to the cube, where vx, vy and vz are the cube's spatial extents (note that this only handles a box that is scaled along the axes, not an arbitrarily distorted one).
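A minimal sketch of that idea, assuming the box really is just an axis-aligned, scaled unit cube (the vx, vy, vz values are hypothetical):

import numpy as np

vx, vy, vz = 2.0, 3.0, 0.5                  # the cube's spatial extents
S_inv = np.diag([1.0/vx, 1.0/vy, 1.0/vz])   # inverse of the scaling matrix

p = np.array([1.0, 1.5, 0.25])              # a point inside the box
print(S_inv @ p)                            # -> [0.5 0.5 0.5] in the unit cube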

Forward, right, up vectors - Replace up, but keep relation of forward / right

Let's say I have 3 vectors, forward = Vector(1,0,0), up = Vector(0,1,0), right = Vector(0,0,1).
Now I replace the up vector with something else, but forward and right should keep the same relation to the new up vector as they had to the old one.
e.g. if the new up vector is Vector(1,0,0), forward should be Vector(0,-1,0) and right should still be Vector(0,0,1).
What mathematical formula can be used for this?
You cannot do it without a rotation axis; even in your simplified (axis-aligned) case the result is ambiguous without one:
Both (actually there are four)
forward( 0,-1, 0), up(1,0,0), right(0, 0, 1) and
forward( 0, 1, 0), up(1,0,0), right(0, 0,-1)
are valid solutions.
However, the shortest-arc rotation from Vector(0,1,0) to Vector(1,0,0) implicitly defines a rotation axis (0, 0, -1) and angle PI/2 (equivalently, axis (0, 0, 1) with angle -PI/2). From that axis and angle you can build a rotation matrix and multiply it with the two other vectors.
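A numpy sketch of that recipe, deriving the shortest-arc rotation between the old and new up vectors and applying it to the other two (the degenerate case where the two up vectors are parallel or opposite is left unhandled):

import numpy as np

def rotate(v, axis, angle):
    # Rodrigues' rotation of v about a normalised axis
    c, s = np.cos(angle), np.sin(angle)
    return v*c + np.cross(axis, v)*s + axis*np.dot(axis, v)*(1.0 - c)

forward = np.array([1.0, 0.0, 0.0])
up      = np.array([0.0, 1.0, 0.0])
right   = np.array([0.0, 0.0, 1.0])

new_up = np.array([1.0, 0.0, 0.0])

axis = np.cross(up, new_up)                 # -> (0, 0, -1)
axis /= np.linalg.norm(axis)
angle = np.arccos(np.clip(np.dot(up, new_up), -1.0, 1.0))  # -> pi/2

forward = rotate(forward, axis, angle)      # -> (0, -1, 0)
right   = rotate(right, axis, angle)        # -> (0,  0, 1)

This reproduces the example from the question: the new forward is (0,-1,0) and right stays (0,0,1).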

Create QTransform given 4 points defining the transformed unit square

Given 4 points that are the result of
QPolygon poly = transform.mapToPolygon(QRectF(0, 0, 1, 1));
how can I find QTransform transform? (Even better: also given an arbitrary source rectangle)
Motivation: Given the four corner points of an image to be drawn in a perspectively distorted coordinate system, how can I draw the image using QPainter?
In GIMP, for example, one can transform a layer by moving around its 4 corners, which results in a perspective transformation. I want to do exactly the same in a Qt application. I know that QTransform is not restricted to affine transformations but can also handle perspective transformations.
You should be able to do this with QTransform.squareToQuad. Just pass it the QPolygonF you want to transform to.
I've sometimes had some issues getting squareToQuad to do what I want, and have had to use QTransform.quadToQuad instead, defining my own starting quad, but you might have more luck.
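A PyQt5 sketch of the first suggestion; this assumes the Python binding mirrors the C++ signature static bool QTransform::squareToQuad(const QPolygonF &, QTransform &), with the result transform passed in as an out-argument:

from PyQt5.QtCore import QPointF
from PyQt5.QtGui import QPolygonF, QTransform

# the four target corners; their order must match the unit square's
# corner order (0,0), (1,0), (1,1), (0,1)
quad = QPolygonF([QPointF(1.0, 2.0), QPointF(2.0, 2.5),
                  QPointF(3.0, 5.0), QPointF(1.5, 4.0)])

transform = QTransform()
if QTransform.squareToQuad(quad, transform):
    print(transform.map(QPointF(0.5, 0.5)))  # centre of the unit square
else:
    print("no invertible transform maps the unit square to this quad")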
I think I found a solution, which calculates the transformation matrix step by step.
// some example points:
QPointF p1(1.0, 2.0);
QPointF p2(2.0, 2.5);
QPointF p3(1.5, 4.0);
QPointF p4(3.0, 5.0);
// define the affine transformation which will position p1, p2, p3 correctly:
QTransform trans;
trans.translate(p1.x(), p1.y());
trans.scale(p2.x() - p1.x(), p3.y() - p1.y());
trans.shear((p3.x() - p1.x()) / trans.m11(), (p2.y() - p1.y()) / trans.m22());
Until now, trans describes a parallelogram transformation. Within this parallelogram, I find p4 (relatively) in the next step. I think this could be done with a direct formula that does not involve inverting trans.
// relative position of the 4th point in the transformed coordinate system:
QPointF p4Rel = trans.inverted().map(p4);
qreal px = p4Rel.x();
qreal py = p4Rel.y();
// this defines the perspective distortion:
qreal y = 1 + (py - 1) / px;
qreal x = 1 + (px - 1) / py;
The values x and y are hard to explain. If only one of them differs from 1 (the other set to 1), it defines the relative scaling of p4 alone. When both perspective components are combined, the meaning of x and y is harder to pin down; I found the formulas by trial and error.
// and thus the perspective matrix:
QTransform persp(1/y, 0,   1/y - 1,
                 0,   1/x, 1/x - 1,
                 0,   0,   1);
// premultiply the perspective matrix to the affine transformation:
trans = persp * trans;
Some tests showed that this leads to the correct results. However, I did not test special cases such as two points being equal, or one point lying on the line segment between two others; I think this solution might break in such situations.
Therefore, I am still looking for direct formulas for the matrix values m11, m12, ..., m33, given the point coordinates p1.x(), p1.y(), ..., p4.x(), p4.y().
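For completeness, here is the step-by-step construction transcribed to PyQt5 so it can be run end to end (same example points; the caveats about special cases still apply):

from PyQt5.QtCore import QPointF
from PyQt5.QtGui import QTransform

p1, p2 = QPointF(1.0, 2.0), QPointF(2.0, 2.5)
p3, p4 = QPointF(1.5, 4.0), QPointF(3.0, 5.0)

# affine part: a parallelogram transformation placing p1, p2, p3
trans = QTransform()
trans.translate(p1.x(), p1.y())
trans.scale(p2.x() - p1.x(), p3.y() - p1.y())
trans.shear((p3.x() - p1.x()) / trans.m11(), (p2.y() - p1.y()) / trans.m22())

# relative position of p4 inside the parallelogram
inv, invertible = trans.inverted()
p4rel = inv.map(p4)
px, py = p4rel.x(), p4rel.y()

# perspective distortion factors (found by trial and error, see above)
y = 1 + (py - 1) / px
x = 1 + (px - 1) / py

persp = QTransform(1/y, 0, 1/y - 1,
                   0, 1/x, 1/x - 1,
                   0, 0, 1)

trans = persp * trans
print(trans.map(QPointF(1.0, 1.0)))  # should print p4 = (3.0, 5.0)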

How to calculate azimuth & elevation relative to a camera's direction of view in 3D ...?

I'm a bit rusty here.
I have a vector (camDirectionX, camDirectionY, camDirectionZ) that represents my camera direction of view.
I have a (camX, camY, camZ) that is my camera position.
Then, I have an object placed at (objectX, objectY, objectZ)
How can I calculate, from the camera's point of view, the azimuth and elevation of my object?
The first thing I would do, to simplify the problem, is transform the coordinate space so the camera is at (0, 0, 0) and pointing straight down one of the axes (so the direction is say (0, 0, 1)). Translating so the camera is at (0, 0, 0) is pretty trivial, so I won't go into that. Rotating so that the camera direction is (0, 0, 1) is a little trickier...
One way of doing it is to construct the full orthonormal basis of the camera, then stick that in a rotation matrix and apply it. The "orthonormal basis" of the camera is a fancy way of saying the three vectors that point forward, up, and right from the camera. They should all be at 90 degrees to each other (which is what the ortho bit means), and they should all be of length 1 (which is what the normal bit means).
You can get these vectors with a bit of cross-product trickery: the cross product of two vectors is perpendicular (at 90 degrees) to both.
To get the right-facing vector, we can just cross-product the camera direction vector with (0, 1, 0) (a vector pointing straight up). You'll need to normalise the vector you get out of the cross-product.
To get the up vector of the camera, we can cross product the camera direction vector with the right-facing vector we just calculated. Assuming both input vectors are normalised, this shouldn't need normalising.
We now have the orthonormal basis of the camera. If we stick these vectors into the rows of a 3x3 matrix, we get a rotation matrix that will transform our coordinate space so the camera is pointing straight down one of the axes (which one depends on the order you stick the vectors in).
It's now fairly easy to calculate the azimuth and elevation of the object.
To get the azimuth, just do an atan2 on the x/z coordinates of the object.
To get the elevation, project the object coordinates onto the x/z plane (just set the y coordinate to 0), then do:
acos(dot(normalise(object coordinates), normalise(projected coordinates)))
This will always give a positive angle -- you probably want to negate it if the object's y coordinate is less than 0.
The code for all of this will look something like the following (a numpy transcription of the pseudocode so it actually runs; camDirectionX etc. come from the question):

import numpy as np

def normalise(v):
    return v / np.linalg.norm(v)

fwd = np.array([camDirectionX, camDirectionY, camDirectionZ], dtype=float)
cam = np.array([camX, camY, camZ], dtype=float)
obj = np.array([objectX, objectY, objectZ], dtype=float)

# if fwd is already normalised you can skip this
fwd = normalise(fwd)

# translate so the camera is at (0, 0, 0)
obj -= cam

# calculate the orthonormal basis of the camera
right = normalise(np.cross(fwd, [0.0, 1.0, 0.0]))
up = np.cross(right, fwd)

# rotate so the camera is pointing straight down the z axis
# (this is essentially a matrix multiplication)
obj = np.array([np.dot(obj, right), np.dot(obj, up), np.dot(obj, fwd)])

azimuth = np.arctan2(obj[0], obj[2])

# project onto the x/z plane for the elevation
proj = np.array([obj[0], 0.0, obj[2]])
elevation = np.arccos(np.dot(normalise(obj), normalise(proj)))
if obj[1] < 0:
    elevation = -elevation
One thing to watch out for is that the cross-product of your original camera vector with (0, 1, 0) will return a zero-length vector when your camera is facing straight up or straight down. To fully define the orientation of the camera, I've assumed that it's always "straight", but that doesn't mean anything when it's facing straight up or down -- you need another rule.
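A common fallback rule (an assumption on my part, not something from the original answer) is to switch to a different world reference vector when the camera is nearly vertical:

import numpy as np

def camera_right(fwd, eps=1e-6):
    # right vector for a "level" camera; fall back to the world z axis
    # when fwd is (anti)parallel to the world up axis
    r = np.cross(fwd, [0.0, 1.0, 0.0])
    if np.linalg.norm(r) < eps:             # looking straight up or down
        r = np.cross(fwd, [0.0, 0.0, 1.0])
    return r / np.linalg.norm(r)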
