Oblique perspective - projection matrices in Processing

I want to extend Processing so that I can render 3D scenes with oblique projections (cabinet or cavalier). After looking through the source of the camera(), perspective() and ortho() methods, I was able to set up an orthographic projection and then adjust the PGraphics3D#camera matrix to an appropriate value, with partial success.
void setup() {
  size(300, 300, P3D);
  camera(30, 30, 30, 0, 0, 0, 1, 1, 0);
  ortho(-100, 100, -100, 100, -500, 500);
  PGraphics3D p3d = (PGraphics3D)g;
  p3d.camera.set(1, 0, -0.433f, 0,
                 0, 1, 0.25f,   0,
                 0, 0, 0,       0,
                 0, 0, 0,       1);
}

void draw() {
  box(20);
}
This results in the right projection, but without surface filling. When I remove the camera() call, the ortho() call, or both, the screen is empty, although I'd expect camera(...) to operate on the same matrix that is overwritten later on.
Moreover, I'm a little bit confused about the matrices in PGraphics3D: camera, modelview and projection. While OpenGL keeps two matrix stacks (modelview and projection), there is a third one here: camera. Can anybody shed some light on the difference and relation between these matrices?
This would be helpful in order to know when to use/set which one.

Great question!
I ran the following code as you had it, and it looked like an isometric view of a white cube.
1: size(300,300,P3D);
2: camera(30, 30, 30, 0, 0, 0, 1, 1, 0);
3: ortho(-100, 100, -100, 100, -500, 500);
4: PGraphics3D p3d = (PGraphics3D)g;
5: p3d.camera.set(1, 0, -0.433f, 0, 0, 1, 0.25f, 0, 0, 0, 0, 0, 0, 0, 0, 1);
6: box(20);
Here's what's happening:
Line 2: sets both the camera and modelview matrices
Line 3: sets the projection matrix
Line 5: sets the camera matrix only, but this actually did nothing here. (read on)
Transformations are only performed using the modelview and projection matrices. The camera matrix is merely a convenient separation of what the modelview is usually initialized to.
If you were using the draw() function, the modelview matrix would be re-initialized from the camera matrix before each call. Since you didn't use draw(), your modelview matrix was never updated with the oblique transform you put in your camera matrix.
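To make that concrete, here is a minimal sketch (assuming Processing 1.x, where the P3D renderer is a PGraphics3D) in which the oblique camera matrix does take effect, because box() now runs inside draw():
PGraphics3D p3d;

void setup() {
  size(300, 300, P3D);
  noLoop();
  p3d = (PGraphics3D)g;
  ortho(-100, 100, -100, 100, -500, 500);
  // the oblique camera matrix from the question
  p3d.camera.set(1, 0, -0.433f, 0,
                 0, 1, 0.25f,   0,
                 0, 0, 0,       0,
                 0, 0, 0,       1);
}

void draw() {
  // modelview is re-initialized from the camera matrix before draw()
  // is called, so the oblique transform is now actually applied
  box(20);
}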
How to create an Oblique Projection
As a disclaimer, you must truly understand how matrices are used to transform coordinates. Order is very important. This is a good resource for learning it:
http://glprogramming.com/red/chapter03.html
The quickest explanation I can give is that the modelview matrix turns object coordinates into relative eye coordinates, and the projection matrix then turns those eye coordinates into screen coordinates. So you want to apply the oblique projection before the transformation into screen coordinates.
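In other words, a cabinet projection is just a shear in eye space: with angle θ and scale s (here s = 0.5), a point (x, y, z) maps to (x - s·z·cos θ, y + s·z·sin θ, z), and the orthographic projection then flattens z. That is exactly the matrix applied in oblique() below.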
Here's a runnable example for creating a cabinet projection that displays some cubes:
void setup()
{
  size(600, 600, P3D);  // size() should come first
  strokeWeight(2);
  smooth();
  noLoop();
  oblique(radians(60), 0.5);
}
void draw()
{
  background(100);
  // size of the box
  float w = 100;
  // draw box in the middle
  translate(width/2, height/2);
  fill(random(255), random(255), random(255), 100);
  box(w);
  // draw box behind
  translate(0, 0, -w*4);
  fill(random(255), random(255), random(255), 100);
  box(w);
  // draw box in front
  translate(0, 0, w*8);
  fill(random(255), random(255), random(255), 100);
  box(w);
}
void oblique(float angle, float zscale)
{
  PGraphics3D p3d = (PGraphics3D)g;
  // set orthographic projection
  ortho(-width/2, width/2, -height/2, height/2, -5000, 5000);
  // get camera's z translation
  // ... so we can transform from the original z=0
  float z = p3d.camera.m23;
  // apply z translation
  p3d.projection.translate(0, 0, z);
  // apply oblique projection
  p3d.projection.apply(
      1, 0, -zscale*cos(angle), 0,
      0, 1,  zscale*sin(angle), 0,
      0, 0,  1,                 0,
      0, 0,  0,                 1);
  // remove z translation
  p3d.projection.translate(0, 0, -z);
}

Related

How to get the 4 corners of a mesh's surface (or a plane) that consists of many triangles?

After using CSG my mesh gets messed up, with many more vertices and faces than needed. The data provides all vertices in a single array without differentiating whether they are real corners or lie somewhere in the middle of the surface/plane. I made a simple fiddle to show an example.
https://jsfiddle.net/apbln/k5ze30hr/82/
geometry.vertices.push(
    new THREE.Vector3(-1,   -1,   0), // 0
    new THREE.Vector3( 1,   -1,   0), // 1
    new THREE.Vector3(-1,    1,   0), // 2
    new THREE.Vector3( 1,    1,   0), // 3
    new THREE.Vector3(-0.5, -0.5, 0), // 4
    new THREE.Vector3( 0,    1,   0), // 5
    new THREE.Vector3( 1,   -1,   0), // 6
);
geometry.faces.push(
    new THREE.Face3(0, 5, 2),
    new THREE.Face3(0, 1, 5),
    new THREE.Face3(3, 5, 1),
);
This is a simplified version of what my surfaces look like after using CSG. How can I tell which vertices are real corners, so that I can repair the surface with only two faces?
You could try detecting adjacent faces. If two faces share an edge and they are coplanar, they can be merged. This would be a slightly more general algorithm.
A sketch of the algorithm: first build an adjacency graph in which each edge is linked to the faces adjacent to it. Then loop through the list of edges and find those whose two adjacent faces are coplanar; if they are, merge the faces and adjust the adjacency lists (see the code sketch below).
It's a bit more complex if you require a triangular mesh. The best approach might be to first construct the mesh with non-triangular faces and then split them into triangles.
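A minimal sketch of the detection step, assuming the legacy THREE.Geometry (.vertices/.faces) used in your fiddle; findCoplanarNeighbors is just an illustrative name, and the merge/re-triangulation itself is left out:
function findCoplanarNeighbors(geometry, epsilon) {
  geometry.computeFaceNormals();
  // build the edge -> adjacent faces map
  var edgeToFaces = {};
  geometry.faces.forEach(function (face, i) {
    [[face.a, face.b], [face.b, face.c], [face.c, face.a]].forEach(function (e) {
      var key = Math.min(e[0], e[1]) + '_' + Math.max(e[0], e[1]);
      (edgeToFaces[key] = edgeToFaces[key] || []).push(i);
    });
  });
  // collect pairs of adjacent faces whose normals are (nearly) parallel
  var pairs = [];
  Object.keys(edgeToFaces).forEach(function (key) {
    var faces = edgeToFaces[key];
    if (faces.length === 2) {
      var n0 = geometry.faces[faces[0]].normal;
      var n1 = geometry.faces[faces[1]].normal;
      if (n0.dot(n1) > 1 - epsilon) pairs.push(faces);
    }
  });
  return pairs;
}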
It's a pretty common problem and there are no doubt libraries that can do it for you. This answer may help: ThreeJS: is it possible to simplify an object / reduce the number of vertexes?

Coordinate system in Babylon.js

I'm a little confused about the coordinate system in Babylon.js. That is, when I use the following sequence of statements:
var camera = new BABYLON.ArcRotateCamera("Camera", 0, 0, 50, new BABYLON.Vector3(0, 0, 0), scene);
var sphere1 = BABYLON.Mesh.CreateSphere("sphere1", 16, 1.0, scene);
the sphere is painted in the center of the screen. OK. When I use the following sequence:
var camera = new BABYLON.ArcRotateCamera("Camera", 50, 0, 0, new BABYLON.Vector3(0, 0, 0), scene);
var sphere1 = BABYLON.Mesh.CreateSphere("sphere1", 16, 1.0, scene);
no sphere is painted.
I know that usually the coordinates (in CG) are as follows: Oy vertical, Ox horizontal, Oz pointing into the screen. So, in the second sequence, the camera is at the point x = 50 in the xOz plane (that is, the ground) and is looking at the origin, where the sphere is.
I guess somewhere along the road I got lost. Can you help me understand where I am wrong?
Thank you,
Eb_cj
Hello. ArcRotateCamera uses two angles (alpha and beta) plus a radius to define the position of the camera on a sphere centered around a target point - its constructor arguments are not x, y, z coordinates. In your second snippet the 50 is the alpha angle and the radius is 0, so the camera sits right on the target and sees nothing.
Feel free to read this for more info:
https://github.com/BabylonJS/Babylon.js/wiki/05-Cameras
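For example, a sketch of how to place the camera 50 units from the origin (angles are in radians):
var camera = new BABYLON.ArcRotateCamera("Camera",
    Math.PI / 2,                  // alpha: rotation around the vertical axis
    Math.PI / 3,                  // beta: tilt down from the vertical axis
    50,                           // radius: distance from the target
    new BABYLON.Vector3(0, 0, 0), // target
    scene);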

How can I calculate a multi-axis SCNVector4 rotation for a SCNNode?

SCNNode takes a rotation as an SCNVector4, whose x, y and z components define the axis it applies to and whose w component is the angle. For example, to rotate 45 degrees around the x-axis I'd create an SCNVector4 like this:
SCNVector4Make(1.0f, 0, 0, DEG2RAD(45))
What I'd like to do is rotate it across all three axes, for example: 45 degrees on the x-axis, 15 degrees on the y-axis and -135 degrees on the z-axis. Does anyone know the math to calculate the final SCNVector4?
Instead of the rotation property, use eulerAngles and specify the angle for each axis.
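A minimal sketch, reusing the DEG2RAD macro from the question (eulerAngles takes pitch, yaw and roll in radians, so the composition order is fixed by SceneKit rather than by you):
// pitch (x), yaw (y) and roll (z), all in radians
node.eulerAngles = SCNVector3Make(DEG2RAD(45), DEG2RAD(15), DEG2RAD(-135));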
You'll need to build a rotation matrix (or quaternion) for each of the rotations, and then multiply them together. Note that the order of operations matters!
http://www.cprogramming.com/tutorial/3d/rotationMatrices.html has a pretty good writeup of the math. Any OpenGL reference that deals with rotation matrices is worth a look too.
If you're not animating the rotation, it might be cleaner to just set the transform matrix directly, like:
node.transform = CATransform3DRotate(CATransform3DRotate(CATransform3DRotate(node.transform, xAngle, 1, 0, 0), yAngle, 0, 1, 0), zAngle, 0, 0, 1);
Are you asking for the rotation matrix, or simply how to rotate in general? If the latter, then for example:
[node runAction:[SCNAction rotateByX:0 y:M_PI z:0 duration:0]];

Orthographic projection with origin at screen bottom left

I'm using the Python OpenGL bindings and trying to use only modern OpenGL calls. I have a VBO with vertices, and I am trying to render with an orthographic projection matrix passed to the vertex shader.
At present I am calculating my projection matrix with the following values:
from numpy import array
w = float(width)
h = float(height)
n = 0.5
f = 3.0
matrix = array([
    [2/w,   0,       0,        0],
    [  0, 2/h,       0,        0],
    [  0,   0, 1/(f-n), -n/(f-n)],
    [  0,   0,       0,        1],
], 'f')
# later
projectionUniform = glGetUniformLocation(shader, 'projectionMatrix')
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
That code I got from here:
Formula for a orthogonal projection matrix?
This seems to work fine, but I would like my origin to be in the bottom-left corner of the screen. Is there a transformation I can fold into my matrix so everything "just works", or must I translate every object by w/2, h/2 manually?
Side note: will the coordinates match pixel positions with this working correctly?
Because I'm using modern OpenGL techniques, I don't think I should be using gluOrtho2d or GL_PROJECTION calls.
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
Your matrix is stored in row-major ordering. So you should pass GL_TRUE, or you should change your matrix to column-major.
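For example, using the names from your code, either of these should work (the second uploads a column-major copy instead):
glUniformMatrix4fv(projectionUniform, 1, GL_TRUE, matrix)
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix.T.copy())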
I'm not completely familiar with projections yet, as I've only started OpenGL programming recently, but your current matrix does not translate any points. The diagonal applies scaling, but the rightmost column applies translation. The link Dirk gave shows a projection matrix that will make your origin (0,0 is what you want, yes?) the bottom-left corner of your screen.
A matrix I've used to do this (each row is actually a column to OpenGL):
OrthoMat = mat4(
    vec4(2.0/(screenDim.s - left), 0.0, 0.0, 0.0),
    vec4(0.0, 2.0/(screenDim.t - bottom), 0.0, 0.0),
    vec4(0.0, 0.0, -1 * (2.0/(zFar - zNear)), 0.0),
    vec4(-1.0 * (screenDim.s + left)/(screenDim.s - left), -1.0 * (screenDim.t + bottom)/(screenDim.t - bottom), -1.0 * (zFar + zNear)/(zFar - zNear), 1.0)
);
The screenDim math is effectively the width or height, since left and bottom are both set to 0. zFar and zNear are 1 and -1, respectively (since it's 2D, they're not extremely important).
This matrix takes values in pixels, and the vertex positions need to be in pixels as well. The point (0, 32) will always be at the same position when you resize the screen too.
Hope this helps.
Edit #1: To be clear, the left/bottom/zFar/zNear values I stated are simply the ones I chose. You can change them however you see fit.
You can use a more general projection matrix which additionally takes the left and right positions.
See Wikipedia for the definition.
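A sketch of that general matrix in the question's numpy style (row-major like the matrix above, so transpose it or pass GL_TRUE when uploading); with left = bottom = 0 the origin lands at the bottom-left corner, and with right = w and top = h one unit equals one pixel:
from numpy import array

def ortho(left, right, bottom, top, near, far):
    # standard orthographic projection (as on Wikipedia), row-major
    return array([
        [2.0/(right-left), 0, 0, -(right+left)/(right-left)],
        [0, 2.0/(top-bottom), 0, -(top+bottom)/(top-bottom)],
        [0, 0, -2.0/(far-near), -(far+near)/(far-near)],
        [0, 0, 0, 1],
    ], 'f')

matrix = ortho(0.0, w, 0.0, h, n, f)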

GDI+, using DrawImage to draw a transparency mask of the source image

Is it possible to draw a transparency mask of an image (that is, paint all visible pixels with a constant color) using Graphics::DrawImage? I am not looking to manually scan the image pixel-by-pixel and create a separate mask image; I wonder if it's possible to draw one directly from the original image.
My guess is that it could be done with certain manipulations to ImageAttributes, if it is possible at all.
The color of the mask is arbitrary and should be accurate, and it would be a plus if there can be a threshold value for the transparency.
I had to draw an image with a per-pixel alpha mask and found the best way was to just draw the base RGB image, then draw the opaque parts of the alpha mask over the top. You need to map each alpha level to an alpha level + colour to get the image to honour every detail in the alpha mask. The code I've got has the alpha mask as a separate 8-bit image, but the draw code looks like this:
g.DrawImage(&picture, r.left, r.top, r.Width(), r.Height()); // base image
if ( AlphaBuf != NULL ) // do we have an alpha mask?
{
    Gdiplus::Bitmap mask(Width, Height, Width(), PixelFormat8bppIndexed, AlphaBuf);
    picture.GetPalette( palette, picture.GetPaletteSize() );
    // invert - only shows the transparent
    palette->Entries[0] = DLIMAKEARGB(255, 0, 0, 200); // 0 = fully transparent
    palette->Entries[255] = DLIMAKEARGB(0, 0, 0, 0);   // 255 = fully opaque
    mask.SetPalette(palette);
    g.DrawImage(&mask, r.left, r.top, r.Width(), r.Height()); // draw the mask
}
My alpha masks are only fully transparent or fully opaque, but I would think that setting the other alpha values in the palette would let you follow a more graduated mask:
palette->Entries[1] = DLIMAKEARGB(254, 0, 0, 200);
palette->Entries[2] = DLIMAKEARGB(253, 0, 0, 200);
etc.
Hope this helps (or at least makes sense :-p )
Do you mean that you want to transform every existing color in the bitmap into one uniform color, while fully honoring the alpha/transparency information present in the image?
If so, just use the following ColorMatrix:
imageAttributes.SetColorMatrix(new ColorMatrix(new float[][] {
    new float[] {0, 0, 0, 0, 0},
    new float[] {0, 0, 0, 0, 0},
    new float[] {0, 0, 0, 0, 0},
    new float[] {0, 0, 0, 1, 0},
    new float[] {r, g, b, 0, 1}}));
where r, g and b are between 0 and 1, like so:
float r = DesiredColor.R / 255f;
float g = DesiredColor.G / 255f;
float b = DesiredColor.B / 255f;
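To apply it, pass the ImageAttributes to DrawImage (a hedged C# sketch in the style of the snippet above; target, source and imageAttributes are placeholder names):
using (var g = Graphics.FromImage(target))
{
    g.DrawImage(source,
        new Rectangle(0, 0, source.Width, source.Height),
        0, 0, source.Width, source.Height,
        GraphicsUnit.Pixel, imageAttributes);
}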
I don't understand what you mean by "it would be a plus if there can be a threshold value for the transparency", though... so maybe this isn't the answer you were looking for?
