camera origin is visible with projection matrix - math

In the image above, I show the result of placing the camera at the same position as the vertex under the mouse cursor. A similar result comes from using an orthographic matrix. My problem is that when rotating the camera, it rotates around the camera's visible origin. What I want is for the view to rotate like a normal FPS camera.
What I believe to be useful information:
I am doing the math manually and rendering to the screen using OpenGL.
The cube's vertices are from {0, 0, 0} to {1, 1, 1}
The camera is positioned at {0, 0, 0}
My (4x4) matrices are column-major, and I get the same result whether I upload the individual matrices to the shader via uniforms or multiply them beforehand in the same order.
Movement and rotation are otherwise sensible, even when translating the camera; it is just that the origin of the camera is visible.
This last point makes sense to me mathematically with an orthographic projection, however, since the near clipping plane is supposed to be slightly in front of the camera, I'd expect the point near the mouse to be clipped. I know for a fact that it is not clipped, because if I rotate the camera to look down on the cube (without translating it), the clipping plane cuts off roughly halfway up that vertical edge of the cube.
I think my confusion may be due to a fundamental misunderstanding of how the mathematics works for the perspective projection matrix, but it may be due to my code as well, so let me include that:
static inline constexpr auto ortho(T l, T r, T b, T t, T n, T f) {
    return Matrix4x4{{(T)2 / (r - l), 0, 0, 0},
                     {0, (T)2 / (t - b), 0, 0},
                     {0, 0, (T)2 / (n - f), 0},
                     {(l + r) / (l - r), (b + t) / (b - t), (f + n) / (n - f), 1}};
}
static inline constexpr auto perspective(T fov, T aspect, T near, T far) {
    const T q = (T)1.0 / std::tan(0.5 * fov),
            a = q * aspect,
            b = (near + far) / (near - far),
            c = near * far * 2 / (near - far);
    return Matrix4x4{
        {q, 0, 0, 0},
        {0, a, 0, 0},
        {0, 0, b, -1},
        {0, 0, c, 1},
    };
}
If anyone needs extra information on what is going on, let me know in the comments and I will happily either answer or add an addendum to the question.

After reading the link provided in the comments and comparing the method there with the code I had written, I realized that I made a mistake in transcribing the mathematics into my code. I accidentally put a 1 in the bottom-right entry of the perspective matrix (the last element of the last column), while it should have been a 0.
The corrected code is shown here:
static inline constexpr auto perspective(T fov, T aspect, T near, T far) {
    const T q = (T)1.0 / std::tan(0.5 * fov),
            a = q * aspect,
            b = (near + far) / (near - far),
            c = near * far * 2 / (near - far);
    return Matrix4x4{
        {q, 0, 0, 0},
        {0, a, 0, 0},
        {0, 0, b, -1},
        {0, 0, c, 0},
    };
}
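To see what that one entry does, here is a small standalone Python sketch (not part of the original post) that multiplies both versions of the matrix, stored column-major as above, by an eye-space point. With the stray 1, clip-space w becomes -z + 1 instead of -z, which effectively moves the center of projection one unit behind the camera position, consistent with the visible-origin symptom.

```python
import math

def mat_vec(cols, v):
    # cols holds the matrix column-major: cols[c][r] is row r of column c
    return [sum(cols[c][r] * v[c] for c in range(4)) for r in range(4)]

def perspective(fov, aspect, near, far, corner):
    # corner is the disputed bottom-right entry: 1 in the buggy version, 0 in the fix
    q = 1.0 / math.tan(0.5 * fov)
    b = (near + far) / (near - far)
    c = near * far * 2 / (near - far)
    return [[q, 0, 0, 0],
            [0, q * aspect, 0, 0],
            [0, 0, b, -1],
            [0, 0, c, corner]]

point = [0.0, 0.0, -1.0, 1.0]                       # one unit in front of the camera
bad  = mat_vec(perspective(1.0, 1.0, 0.1, 100.0, 1), point)
good = mat_vec(perspective(1.0, 1.0, 0.1, 100.0, 0), point)
print(bad[3], good[3])  # w = 2.0 with the bug, w = 1.0 without it
```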


Duplicate structured surface/mesh in gmsh

I'm trying to build a large structure from a simple geometric shape in gmsh and I'd like to use a structured (quadrilateral) grid. I start by creating that shape and then duplicating and translating it as often as needed to build my final structure.
The problem is that even if I define the lines and surfaces of the original shape to be transfinite, this property is lost once I duplicate and translate it. Check this sample code for a square:
Point(1) = {0, 0, 0, 1};
Point(2) = {0, 1, 0, 1};
Point(3) = {1, 1, 0, 1};
Point(4) = {1, 0, 0, 1};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop(5) = {1, 2, 3, 4};
Plane Surface(6) = {5};
Transfinite Line {1, 2, 3, 4} = 10 Using Progression 1;
Transfinite Surface {6};
Recombine Surface {6};
Translate {0, 1, 0} {
    Duplicata { Surface{6}; }
}
I obtain the original square with a structured grid but the duplicated one does not have this property.
Is there a possibility to retain the structured grid when I copy the surface?
EDIT: It seems that there is indeed no way to duplicate a structured volume or surface. The problem is that these properties are tied to the mesh itself, not to the geometry, and the mesh cannot be duplicated.
It is possible.
You can use the GMSH Geometry.CopyMeshingMethod option, which is responsible for copying the meshing method to duplicated or translated geometric entities. By default it is turned off. To turn it on, simply add the following line to the beginning of your GEO file.
Geometry.CopyMeshingMethod = 1;
Tested on GMSH 3.0.5, but should work with any modern version.
This fix (using "Geometry.CopyMeshingMethod = 1;") works unless you use OpenCASCADE to define your geometry.
Simply try including "SetFactory("OpenCASCADE");" at the beginning of your script and you will see that it fails.

How to determine the side of an octahedron and a dodecahedron using an accelerometer?

I have three figures: a cube, an octahedron, a dodecahedron.
Inside, each figure has an accelerometer.
The sides of each figure are numbered from 1 to n.
Task: determine the current side of the cube, octahedron, dodecahedron.
For the cube, I derived the formula:
side = round((Ax*1/988)+(Ay*2/988)+(Az*3/988));
The variable "side" takes values in the interval -3 to 3 (excluding 0), which map onto the six sides of the cube, 1 through 6.
Now I need to do the same for the octahedron and the dodecahedron. How can I do this? Do I need additional sensors, or is an accelerometer enough?
Using a formula like that is quite clever, but it has some undesirable properties. Firstly, when moving from one side to another, the formula will pass through intermediate values that are geometrically meaningless: if you are on side -3 and rotate to side -1, it will necessarily pass through -2. Secondly, it may not be robust to noisy accelerometer data; for example, a vector that is part way between sides -3 and -1 but closer to -1 may give -2 when it should give -1.
An alternative approach is to store an array of face normals for the figure, and then take the dot product of the accelerometer reading with each of them. The closest match (the one with the highest dot product) is the closest side.
e.g:
float cube_sides[6][3] = {
    {-1, 0, 0},
    {0, -1, 0},
    {0, 0, -1},
    {1, 0, 0},
    {0, 1, 0},
    {0, 0, 1},
};

int closest_cube_side(float Ax, float Ay, float Az)
{
    float largest_dot = 0;
    int closest_side = -1; // will return -1 in case of a zero A vector
    for(int side = 0; side < 6; side++)
    {
        float dot = (cube_sides[side][0] * Ax) +
                    (cube_sides[side][1] * Ay) +
                    (cube_sides[side][2] * Az);
        if(dot > largest_dot)
        {
            largest_dot = dot;
            closest_side = side;
        }
    }
    return closest_side;
}
You can extend this to an octahedron or dodecahedron just by using the surface normals of each. No additional sensors should be necessary.
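As a sketch of that extension (in Python for brevity, not the original C): the eight face normals of a regular octahedron with its vertices on the coordinate axes point along the (±1, ±1, ±1) directions, and the same highest-dot-product search applies.

```python
import itertools
import math

# Face normals of a regular octahedron with vertices on the coordinate axes:
# one normal per sign combination of (±1, ±1, ±1), normalized to unit length.
s = 1.0 / math.sqrt(3.0)
octa_sides = [(sx * s, sy * s, sz * s)
              for sx, sy, sz in itertools.product((-1, 1), repeat=3)]

def closest_side(normals, ax, ay, az):
    """Index of the face whose normal best matches the accelerometer vector."""
    best, best_dot = -1, 0.0   # -1 is returned for a zero A vector
    for i, (nx, ny, nz) in enumerate(normals):
        dot = nx * ax + ny * ay + nz * az
        if dot > best_dot:
            best, best_dot = i, dot
    return best

# A reading straight along (+1, +1, +1) picks the (+, +, +) face (index 7)
print(closest_side(octa_sides, 1.0, 1.0, 1.0))  # → 7
```

For a dodecahedron you would run the same search over its 12 face normals (the directions of the face centers).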

Orthographic projection with origin at screen bottom left

I'm using the Python OpenGL bindings and trying to use only modern OpenGL calls. I have a VBO with vertices, and I am trying to render with an orthographic projection matrix passed to the vertex shader.
At present I am calculating my projection matrix with the following values:
from numpy import array
w = float(width)
h = float(height)
n = 0.5
f = 3.0
matrix = array([
    [2/w, 0,   0,        0],
    [0,   2/h, 0,        0],
    [0,   0,   1/(f-n),  -n/(f-n)],
    [0,   0,   0,        1],
], 'f')
#later
projectionUniform = glGetUniformLocation(shader, 'projectionMatrix')
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
That code I got from here:
Formula for a orthogonal projection matrix?
This seems to work fine, but I would like my origin to be in the bottom-left corner of the screen. Is there a transformation I can apply to my matrix so everything "just works", or must I translate every object by w/2, h/2 manually?
Side note: will the coordinates match pixel positions with this working correctly?
Because I'm using modern OpenGL techniques, I don't think I should be using gluOrtho2D or GL_PROJECTION calls.
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
Your matrix is stored in row-major order, so you should pass GL_TRUE, or change your matrix to column-major.
I'm not completely familiar with projections yet, as I've only started OpenGL programming recently, but your current matrix does not translate the x or y coordinates. The diagonal applies scaling, while the right-most column applies translation. The link Dirk gave provides a projection matrix that will make your origin (0,0 is what you want, yes?) the bottom-left corner of your screen.
A matrix I've used to do this (each row is actually a column to OpenGL):
OrthoMat = mat4(
    vec4(2.0 / (screenDim.s - left), 0.0, 0.0, 0.0),
    vec4(0.0, 2.0 / (screenDim.t - bottom), 0.0, 0.0),
    vec4(0.0, 0.0, -2.0 / (zFar - zNear), 0.0),
    vec4(-(screenDim.s + left) / (screenDim.s - left),
         -(screenDim.t + bottom) / (screenDim.t - bottom),
         -(zFar + zNear) / (zFar - zNear),
         1.0)
);
The screenDim math is effectively the width or height, since left and bottom are both set to 0. zFar and zNear are 1 and -1, respectively (since it's 2D, they're not extremely important).
This matrix takes values in pixels, and the vertex positions need to be in pixels as well. The point (0, 32) will always be at the same position when you resize the screen too.
Hope this helps.
Edit #1: To be clear, the left/bottom/zfar/znear values I stated are the ones I chose to make them. You can change these how you see fit.
You can use a more general projection matrix which additionally uses left,right positions.
See Wikipedia for the definition.
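As a concrete sketch of that more general matrix with left = bottom = 0, so the origin lands in the bottom-left corner and coordinates are in pixels (the 800x600 viewport here is only an example). It is written row-major like the matrix in the question, so it still needs GL_TRUE or a transpose when uploaded with glUniformMatrix4fv.

```python
from numpy import array

def ortho(left, right, bottom, top, near, far):
    # General OpenGL-style orthographic projection, row-major
    return array([
        [2.0 / (right - left), 0, 0, -(right + left) / (right - left)],
        [0, 2.0 / (top - bottom), 0, -(top + bottom) / (top - bottom)],
        [0, 0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0, 0, 0, 1.0],
    ], 'f')

m = ortho(0.0, 800.0, 0.0, 600.0, -1.0, 1.0)
print(m @ array([0.0, 0.0, 0.0, 1.0], 'f'))      # pixel (0, 0)     -> NDC (-1, -1)
print(m @ array([800.0, 600.0, 0.0, 1.0], 'f'))  # pixel (800, 600) -> NDC (1, 1)
```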

Dynamic is not working properly

I'm having some trouble with the Dynamic command in Mathematica. The following code shows an interactive graphic of the function f(x) = 1 - x^2. The graphic's title also shows the current area under the curve (the definite integral), which is modified using the slider.
Manipulate[
 Show[
  Plot[1 - x^2, {x, 0, 1},
   PlotLabel -> Integrate[1 - x^2, {x, 0, Limite - 0.000000000001}]],
  Plot[-x^2 + 1, {x, 0, Limite}, PlotRange -> {0, 1}, Filling -> Axis]],
 {Limite, 0.000000000001, 1}, LocalizeVariables -> False]
I would like to show the current area using this command:
Integrate[1 - x^2, {x, 0, Dynamic[Limite]}]
but the result is not what I expected. Mathematica evaluates it to something like
0.529 - (0.529)^3 / 3
which is correct, but I don't understand why it displays an expression instead of a single number. The //FullSimplify and //N commands just don't solve the problem.
Is there a better way to obtain the result?
Am I using the Dynamic command correctly?
Thanks!
With your example the Integrate command is performed once with a symbolic upper limit. When the value of that upper limit changes the integral is not recomputed. You will get your desired result if you move the Dynamic[] wrapper from the iterator specification and wrap it around the Integrate command, which will cause the integral to be recomputed whenever Limite changes.
Dynamic[Integrate[1 - x^2, {x, 0, Limite}]]

GDI+, using DrawImage to draw a transparency mask of the source image

Is it possible to draw a transparency mask of an image (that is, paint all visible pixels with a constant color) using Graphics::DrawImage? I am not looking to manually scan the image pixel by pixel and create a separate mask image; I wonder if it's possible to draw one directly from the original image.
My guess is that it should be done with certain manipulations of ImageAttributes, if it is possible at all.
The color of the mask is arbitrary and should be accurate, and it would be a plus if there can be a threshold value for the transparency.
I had to draw an image with a per-pixel alpha mask and found the best way was to just draw the base RGB image and then draw the opaque parts of the alpha mask over the top. You need to map each alpha level to an alpha level + colour to get the image to honour every detail in the alpha mask. The code I've got has the alpha mask as a separate 8-bit image, but the draw code looks like this:
g.DrawImage(&picture, r.left, r.top, r.Width(), r.Height()); // base image
if ( AlphaBuf != NULL ) // do we have an alpha mask?
{
    Gdiplus::Bitmap mask(Width, Height, Width(), PixelFormat8bppIndexed, AlphaBuf);
    picture.GetPalette( palette, picture.GetPaletteSize() );
    // invert - only shows the transparent
    palette->Entries[0] = DLIMAKEARGB(255, 0, 0, 200); // 0 = fully transparent
    palette->Entries[255] = DLIMAKEARGB(0, 0, 0, 0);   // 255 = fully opaque
    mask.SetPalette(palette);
    g.DrawImage(&mask, r.left, r.top, r.Width(), r.Height()); // draw the mask
}
My alpha masks are only fully transparent or fully opaque, but I would think that setting the other alpha values in the palette would let you follow a more graduated mask.
palette->Entries[1] = DLIMAKEARGB(254, 0, 0, 200);
palette->Entries[2] = DLIMAKEARGB(253, 0, 0, 200);
etc..
Hope this helps (or at least makes sense :-p )
Do you mean that you want to transform every existing color in the bitmap into one uniform color, while fully honoring the alpha/transparency information present in the image?
If so, just use the following colormatrix:
imageAttributes.SetColorMatrix( new ColorMatrix( new float[][] {
new float[] {0, 0, 0, 0, 0},
new float[] {0, 0, 0, 0, 0},
new float[] {0, 0, 0, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {r, g, b, 0, 1}} ) );
where r, g, b are between 0 and 1, like so:
float r = DesiredColor.R / 255f;
float g = DesiredColor.G / 255f;
float b = DesiredColor.B / 255f;
I don't understand what you mean by "it would be a plus if there can be a threshold value for the transparency", though... so maybe this isn't the answer you were looking for?
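For reference, GDI+ treats each pixel as the row vector (R, G, B, A, 1) and multiplies it by the 5x5 matrix, so the matrix above maps every colour to the constant (r, g, b) while leaving alpha untouched. Here is a small Python sketch of that arithmetic (illustrative only, not GDI+ itself):

```python
def apply_color_matrix(pixel, matrix):
    # GDI+ multiplies the row vector (R, G, B, A, 1) by the 5x5 matrix;
    # components are in the range 0..1 here
    v = list(pixel) + [1.0]
    return [sum(v[i] * matrix[i][j] for i in range(5)) for j in range(5)][:4]

r, g, b = 0.8, 0.2, 0.0              # the desired constant mask colour
mask_matrix = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [r, g, b, 0, 1],
]
# any input colour comes out as (r, g, b) with its alpha untouched
print(apply_color_matrix((0.3, 0.9, 0.5, 0.5), mask_matrix))  # [0.8, 0.2, 0.0, 0.5]
```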
