Coordinate system in Babylon.js - babylonjs

I'm a little confused about the coordinate system in Babylon.js. That is, when I use the following sequence of statements:
var camera = new BABYLON.ArcRotateCamera("Camera", 0, 0, 50, new BABYLON.Vector3(0, 0, 0), scene);
var sphere1 = BABYLON.Mesh.CreateSphere("sphere1", 16, 1.0, scene);
the sphere is painted in the center of the screen. OK. When I use the following sequence :
var camera = new BABYLON.ArcRotateCamera("Camera", 50, 0, 0, new BABYLON.Vector3(0, 0, 0), scene);
var sphere1 = BABYLON.Mesh.CreateSphere("sphere1", 16, 1.0, scene);
no sphere is painted.
I know that usually the coordinates (in CG) are as follows: Oy vertical, Ox horizontal, Oz pointing into the screen. So, in the second sequence, the camera should be at the point x = 50, in the xOz plane (that is, on the ground) looking at the origin, where the sphere is.
I guess somewhere along the road I got lost. Can you help me understand where I am wrong?
Thank you,
Eb_cj

Hello! ArcRotateCamera uses two angles (alpha and beta) plus a radius to define the position of the camera on a sphere centered around a target point. So in your second call, 50 is interpreted as the alpha angle (in radians), not an x coordinate, and the radius argument is 0, which places the camera at the target itself.
Feel free to read this for more info:
https://github.com/BabylonJS/Babylon.js/wiki/05-Cameras
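To make the parameter order concrete, here is a small sketch (plain JavaScript, no Babylon.js required) of how an ArcRotateCamera-style orbit camera turns (alpha, beta, radius) into a world position. This follows roughly Babylon's convention, where alpha orbits in the xOz plane and beta is measured down from the vertical axis:

```javascript
// Sketch: derive an orbit camera's world position from (alpha, beta, radius)
// around a target, using the spherical-coordinate convention Babylon.js uses.
function arcRotatePosition(alpha, beta, radius, target) {
  return {
    x: target.x + radius * Math.cos(alpha) * Math.sin(beta),
    y: target.y + radius * Math.cos(beta),
    z: target.z + radius * Math.sin(alpha) * Math.sin(beta),
  };
}

const target = { x: 0, y: 0, z: 0 };

// First call: alpha = 0, beta = 0, radius = 50
// -> the camera sits 50 units straight above the target on the y axis.
const p1 = arcRotatePosition(0, 0, 50, target);

// Second call: alpha = 50, beta = 0, radius = 0
// -> radius 0 collapses the camera onto the target itself.
const p2 = arcRotatePosition(50, 0, 0, target);
```

So the first call parks the camera above the target looking down at the sphere, while the second places the camera inside the sphere at its center, which is why nothing is rendered.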

Related

Get coordinate of point after perspective displacement

When an orthographic camera is looking at a sphere with center (0, 0, 0) and radius 100, the rightmost point of the sphere is (100, 0, 0). But when a perspective camera is looking at the sphere from some close distance (the whole sphere can still be seen), the rightmost visible point may be (98, 0, 20), for example.
The "displaced coordinate" is the "inverse projection" of a coordinate. The coordinate (98, 0, 20) appears in the same place as (100, 0, 0) would appear if the camera was orthographic instead of perspective. So the "displaced coordinate" of (100, 0, 0) is (98, 0, 20) in the example.
How do I get the "displaced coordinate" given the point coordinate, the projection matrix and the camera position?
This is useful to know the distance of a pixel to the sphere center independently from the screen size.
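There is also a purely geometric way to reproduce the example numbers above, assuming the camera sits on the z axis looking at the sphere's center: the rightmost visible point is where the view ray is tangent to the sphere. A small sketch:

```javascript
// Geometric sketch (not using the projection matrix directly): for a
// perspective camera at distance d from the center of a sphere of radius r,
// the visible silhouette is the circle of tangent points. The rightmost
// tangent point is what the question calls the "displaced coordinate"
// of (r, 0, 0). Assumes the camera is on the positive z axis.
function rightmostVisiblePoint(r, d) {
  const z = (r * r) / d;                      // depth of the tangent circle
  const x = r * Math.sqrt(1 - (r / d) ** 2);  // radius of the tangent circle
  return { x, y: 0, z };
}

// Reproduces the question's numbers: r = 100, camera distance 500
const p = rightmostVisiblePoint(100, 500);
// p is approximately (98, 0, 20)
```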

How to get the 4 corners of a mesh's surface (or a plane) that consists of many triangles?

After using CSG my mesh gets messed up, with many more vertices and faces than needed. The data provides all vertices in a single array without differentiating whether they are real corners or somewhere in the middle of the surface / plane. I made a simple fiddle to show an example.
https://jsfiddle.net/apbln/k5ze30hr/82/
geometry.vertices.push(
  new THREE.Vector3(-1,   -1,   0), // 0
  new THREE.Vector3( 1,   -1,   0), // 1
  new THREE.Vector3(-1,    1,   0), // 2
  new THREE.Vector3( 1,    1,   0), // 3
  new THREE.Vector3(-0.5, -0.5, 0), // 4
  new THREE.Vector3( 0,    1,   0), // 5
  new THREE.Vector3( 1,   -1,   0)  // 6
);
geometry.faces.push(
  new THREE.Face3(0, 5, 2),
  new THREE.Face3(0, 1, 5),
  new THREE.Face3(3, 5, 1)
);
This is a simplified version of what my surfaces look like after using CSG. How could I possibly know which vertices are real corners, so that I could rebuild the surface with only 2 faces?
You could try detecting adjacent faces. If two faces share an edge and they are coplanar, then they can be merged. This would be a slightly more general algorithm.
A sketch of the algorithm: first build an adjacency graph in which each edge is linked to the faces adjacent to it. Then loop through the list of edges and detect those whose two adjacent faces are coplanar; if so, merge the faces and adjust the adjacency lists.
It's a bit more complex if you require a triangular mesh. The best approach might be to first construct the mesh with non-triangular faces and then split them into triangles.
It's a pretty common problem, and there are no doubt libraries that can do it for you. This answer may help: ThreeJS: is it possible to simplify an object / reduce the number of vertexes?
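The adjacency idea can be outlined in code. This is a minimal, hypothetical illustration using the vertex and face data from the question; it only flags the mergeable edge pairs, leaving the actual merge and re-triangulation steps out:

```javascript
// Sketch of the edge-adjacency idea: index every edge, then flag edges whose
// two adjacent faces have parallel normals. Since the faces already share an
// edge, parallel normals means they are coplanar and can be merged.
const V = [
  [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
  [-0.5, -0.5, 0], [0, 1, 0], [1, -1, 0],
];
const F = [[0, 5, 2], [0, 1, 5], [3, 5, 1]];

const sub = (a, b) => a.map((v, i) => v - b[i]);
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const normal = ([i, j, k]) => cross(sub(V[j], V[i]), sub(V[k], V[i]));
// parallel (or anti-parallel) normals have a zero cross product
const parallel = (a, b) => cross(a, b).every(c => Math.abs(c) < 1e-9);

// map each undirected edge "lo-hi" to the faces that use it
const edges = new Map();
F.forEach((f, fi) => {
  for (let e = 0; e < 3; e++) {
    const key = [f[e], f[(e + 1) % 3]].sort((a, b) => a - b).join('-');
    if (!edges.has(key)) edges.set(key, []);
    edges.get(key).push(fi);
  }
});

// edges shared by two coplanar faces are candidates for merging
const mergeable = [...edges].filter(
  ([, fs]) => fs.length === 2 && parallel(normal(F[fs[0]]), normal(F[fs[1]]))
);
```

For the example data all three faces lie in the z = 0 plane, so both shared edges (0-5 and 1-5) come out as mergeable.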

How to calculate a 3D rect covers exact the full screen in Unity?

I have a quad facing the camera in a 3D scene. How can I calculate a position and size that make it cover the screen exactly in Unity?
With the four vectors below you should be able to build your quad. They are in world-space coordinates. The 10f value is the distance from the camera to the vertices.
You may also look at this link.
Vector3 p0 = camera.ScreenToWorldPoint( new Vector3(0, 0, 10f));
Vector3 p1 = camera.ScreenToWorldPoint( new Vector3(camera.pixelWidth, 0, 10f));
Vector3 p2 = camera.ScreenToWorldPoint( new Vector3(camera.pixelWidth, camera.pixelHeight, 10f));
Vector3 p3 = camera.ScreenToWorldPoint( new Vector3(0, camera.pixelHeight, 10f));
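As a cross-check on those corner positions, the size of the visible rectangle at a given distance can also be computed directly from the camera's vertical field of view. A small sketch (plain math, not the Unity API; Unity's Camera exposes fieldOfView in degrees and aspect as width / height):

```javascript
// At distance d from a perspective camera, the visible rectangle has
// height 2 * d * tan(fovY / 2) and width equal to height * aspect.
function fullscreenQuadSize(fovYDegrees, aspect, distance) {
  const height = 2 * distance * Math.tan((fovYDegrees * Math.PI) / 360);
  return { width: height * aspect, height };
}

// e.g. a 60-degree camera, 16:9 aspect, quad placed 10 units away
const size = fullscreenQuadSize(60, 16 / 9, 10);
// size.height is about 11.55 world units
```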

Oblique perspective - projection matrices in Processing

I want to extend Processing in order to be able to render 3D scenes with oblique projections (cabinet or cavalier). After looking around the source of the camera(), perspective() and ortho() methods, I was able to set up an orthographic projection and then adjust the PGraphics3D#camera matrix to an appropriate value, with partial success.
void setup() {
  camera(30, 30, 30, 0, 0, 0, 1, 1, 0);
  ortho(-100, 100, -100, 100, -500, 500);
  p3d.camera.set(1, 0, -0.433f, 0, 0, 1, 0.25f, 0, 0, 0, 0, 0, 0, 0, 0, 1);
}
void draw() {
  box(20);
}
This results in the right perspective, but without surface filling. When I remove either the camera() or the ortho() call (or both), the screen is empty, although I'd expect camera(...) to operate on the same matrix that is overwritten later on.
Moreover, I'm a little bit confused about the matrices in PGraphics3D: camera, modelView and projection. While OpenGL keeps two matrix stacks (modelview and projection), here there is a third one: camera. Can anybody shed some light on the difference and relation between these matrices?
This would be helpful in order to know when to use/set which one.
Great question!
I ran the following code as you had it, and it looked like an isometric view of a white cube.
1: size(300,300,P3D);
2: camera(30, 30, 30, 0, 0, 0, 1, 1, 0);
3: ortho(-100, 100, -100, 100, -500, 500);
4: PGraphics3D p3d = (PGraphics3D)g;
5: p3d.camera.set(1, 0, -0.433f, 0, 0, 1, 0.25f, 0, 0, 0, 0, 0, 0, 0, 0, 1);
6: box(20);
Here's what's happening:
Line 2: sets both the camera and modelview matrices
Line 3: sets the projection matrix
Line 5: sets the camera matrix only, but this actually did nothing here. (read on)
Transformations are only performed using the modelview and projection matrices. The camera matrix is merely a convenient separation of what the modelview is usually initialized to.
If you use the draw() function, the modelview matrix is re-initialized from the camera matrix before each call to draw(). Since you didn't use the draw() function, the oblique transform you placed in the camera matrix never made it into the modelview matrix.
How to create an Oblique Projection
As a disclaimer, you must truly understand how matrices are used to transform coordinates. Order is very important. This is a good resource for learning it:
http://glprogramming.com/red/chapter03.html
The quickest explanation I can give is that the modelview matrix turns object coordinates into relative eye coordinates, and the projection matrix then takes those eye coordinates and turns them into screen coordinates. So you want to apply the oblique shear before the transformation into screen coordinates.
Here's a runnable example for creating a cabinet projection that displays some cubes:
void setup()
{
  size(600, 600, P3D); // size() must come first in setup()
  strokeWeight(2);
  smooth();
  noLoop();
  oblique(radians(60), 0.5);
}
void draw()
{
  background(100);
  // size of the box
  float w = 100;
  // draw box in the middle
  translate(width/2, height/2);
  fill(random(255), random(255), random(255), 100);
  box(w);
  // draw box behind
  translate(0, 0, -w*4);
  fill(random(255), random(255), random(255), 100);
  box(w);
  // draw box in front
  translate(0, 0, w*8);
  fill(random(255), random(255), random(255), 100);
  box(w);
}
void oblique(float angle, float zscale)
{
  PGraphics3D p3d = (PGraphics3D)g;
  // set orthographic projection
  ortho(-width/2, width/2, -height/2, height/2, -5000, 5000);
  // get camera's z translation
  // ... so we can transform from the original z=0
  float z = p3d.camera.m23;
  // apply z translation
  p3d.projection.translate(0, 0, z);
  // apply oblique projection
  p3d.projection.apply(
    1, 0, -zscale*cos(angle), 0,
    0, 1,  zscale*sin(angle), 0,
    0, 0, 1, 0,
    0, 0, 0, 1);
  // remove z translation
  p3d.projection.translate(0, 0, -z);
}
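To see what the projection.apply(...) call contributes, here is a small sketch (plain JavaScript, with the matrix multiplied out by hand) of the shear it performs on a point:

```javascript
// The oblique matrix maps a point (x, y, z) to
// (x - zscale*cos(angle)*z, y + zscale*sin(angle)*z, z):
// depth z becomes a fixed-direction screen offset, which is exactly
// the cabinet/cavalier construction.
function obliqueShear(x, y, z, angle, zscale) {
  return {
    x: x - zscale * Math.cos(angle) * z,
    y: y + zscale * Math.sin(angle) * z,
    z, // depth is left unchanged
  };
}

// Cabinet projection from the example: 60 degrees, half-depth scale.
// A point 100 units deep is offset by roughly (-25, +43.3).
const q = obliqueShear(0, 0, 100, Math.PI / 3, 0.5);
```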

GDI+, using DrawImage to draw a transparency mask of the source image

Is it possible to draw a transparency mask of an image (that is, paint all visible pixels with a constant color) using Graphics::DrawImage? I am not looking to manually scan the image pixel-by-pixel and create a separate mask image; I wonder if it's possible to draw one directly from the original image.
My guess is that it should be done with certain manipulations to ImageAttributes, if it is possible at all.
The color of the mask is arbitrary and should be accurate, and it would be a plus if there could be a threshold value for the transparency.
I had to draw an image with a per-pixel alpha mask, and found the best way was to just draw the base RGB image and then draw the opaque parts of the alpha mask over the top. You need to map each alpha level to an alpha level + colour to get the image to honour every detail in the alpha mask. The code I've got has the alpha mask as a separate 8-bit image, but the draw code looks like this:
g.DrawImage(&picture, r.left, r.top, r.Width(), r.Height()); // base image
if (AlphaBuf != NULL) // do we have an alpha mask?
{
  // stride is Width here (assumed to be a multiple of 4 bytes)
  Gdiplus::Bitmap mask(Width, Height, Width, PixelFormat8bppIndexed, AlphaBuf);
  picture.GetPalette(palette, picture.GetPaletteSize());
  // invert - only show the transparent parts
  palette->Entries[0] = DLIMAKEARGB(255, 0, 0, 200); // source alpha 0 = fully transparent
  palette->Entries[255] = DLIMAKEARGB(0, 0, 0, 0);   // source alpha 255 = fully opaque
  mask.SetPalette(palette);
  g.DrawImage(&mask, r.left, r.top, r.Width(), r.Height()); // draw the mask
}
My alpha masks are only fully transparent or fully opaque, but I would think that setting the other alpha values in the palette would let you follow a more graduated mask:
palette->Entries[1] = DLIMAKEARGB(254, 0, 0, 200);
palette->Entries[2] = DLIMAKEARGB(253, 0, 0, 200);
etc.
Hope this helps (or at least makes sense :-p )
Do you mean that you want to transform every existing color in the bitmap into one uniform color, while fully honoring the alpha/transparency information present in the image?
If so, just use the following colormatrix:
imageAttributes.SetColorMatrix( new ColorMatrix( new float[][] {
new float[] {0, 0, 0, 0, 0},
new float[] {0, 0, 0, 0, 0},
new float[] {0, 0, 0, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {r, g, b, 0, 1}} ) );
where r, g, b are between 0 and 1, like so:
float r = DesiredColor.R / 255f;
float g = DesiredColor.G / 255f;
float b = DesiredColor.B / 255f;
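To see why this matrix produces a constant-color mask, one can multiply it out by hand. A small sketch (plain JavaScript, mimicking GDI+'s row-vector convention):

```javascript
// GDI+ multiplies each pixel, as the row vector [R G B A 1], by the 5x5
// matrix. With the mask matrix, every color channel is zeroed and replaced
// by the constant (r, g, b) from the last row, while the 1 at position
// [3][3] passes the original alpha through untouched.
function applyColorMatrix(pixel, m) {
  const v = [...pixel, 1]; // [R, G, B, A, 1]
  return [0, 1, 2, 3].map(col =>
    v.reduce((sum, x, row) => sum + x * m[row][col], 0)
  );
}

const [r, g, b] = [0.2, 0.4, 0.8]; // desired mask color (0..1)
const mask = [
  [0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0],
  [0, 0, 0, 1, 0],
  [r, g, b, 0, 1],
];

// Any input color comes out as (r, g, b) with its alpha preserved.
const out = applyColorMatrix([0.9, 0.1, 0.3, 0.5], mask);
```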
I don't understand what you mean by "it would be a plus if there can be a threshold value for the transparency", though... so maybe this isn't the answer you were looking for?
