Coordinate transformations in JavaFX

I want a right-handed Cartesian coordinate system in JavaFX: (0,0) at the lower left corner of the window, x increasing to the right, and y increasing upwards. I can't figure out how to do that with transforms. If I apply a rotation transform, the buttons end up upside down. All I want is to use this coordinate system instead of the default one.

As mentioned in the JavaFX documentation (see the chapter "Y-down versus Y-up"), Y-down is the convention used by many 2D graphics libraries, which is where JavaFX started.
To force Y-up and keep drawing correct, you can put all your content in a rotated parent node:
// Rotate the camera to use Y-up. Any camera works here;
// ParallelCamera is the 2D default.
Camera camera = new ParallelCamera();
camera.setRotationAxis(Rotate.Z_AXIS);
camera.setRotate(180.0);
// Rotate the scene content for correct drawing.
Group yUp = new Group();
yUp.setRotationAxis(Rotate.Z_AXIS);
yUp.setRotate(180.0);
Scene scene = new Scene(yUp);
scene.setCamera(camera);
Now add all your content to yUp and use those nodes as if you were in a Y-up environment.
Bear in mind that this works for 2D content. If you later add 3D features, make sure your models grow in the negative Y direction; otherwise you would need another container.
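For reference, a minimal self-contained sketch of this setup (the class name, the marker shape and the window size are illustrative):

import javafx.application.Application;
import javafx.scene.Camera;
import javafx.scene.Group;
import javafx.scene.ParallelCamera;
import javafx.scene.Scene;
import javafx.scene.shape.Circle;
import javafx.scene.transform.Rotate;
import javafx.stage.Stage;

public class YUpDemo extends Application {
    @Override
    public void start(Stage stage) {
        Camera camera = new ParallelCamera();
        camera.setRotationAxis(Rotate.Z_AXIS);
        camera.setRotate(180.0);

        Group yUp = new Group();
        yUp.setRotationAxis(Rotate.Z_AXIS);
        yUp.setRotate(180.0);

        // A marker at (50, 50): with Y-up it should appear near the
        // lower-left corner of the window instead of the upper-left.
        yUp.getChildren().add(new Circle(50, 50, 10));

        Scene scene = new Scene(yUp, 400, 400);
        scene.setCamera(camera);
        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}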

JavaFX's Prism renderer eventually uses a 3D camera transform to render its shapes.
There are two cameras that can be set on the scene, ParallelCamera and PerspectiveCamera.
If you look in the JavaFX source for ParallelCamera, you will find some maths to compute the transform.
If you override that method and implement the proper maths, you should be able to invert the coordinate system.
The kind of math you would use is an ortho-style projection, something like the sketch below.
You would have to look in the source to see what ortho does exactly. But this should get you on the right track.
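For reference, a sketch of the kind of orthographic math involved. This is the standard glOrtho formulation, not JavaFX's actual internals; passing bottom = 0 and top = height (instead of the usual Y-down top = 0, bottom = height) is what flips the Y axis:

// Column-major 4x4 orthographic projection matrix (glOrtho-style).
static double[] ortho(double left, double right,
                      double bottom, double top,
                      double near, double far) {
    double[] m = new double[16];        // zero-initialised
    m[0]  =  2.0 / (right - left);
    m[5]  =  2.0 / (top - bottom);      // sign flips with top/bottom
    m[10] = -2.0 / (far - near);
    m[12] = -(right + left) / (right - left);
    m[13] = -(top + bottom) / (top - bottom);
    m[14] = -(far + near)   / (far - near);
    m[15] = 1.0;
    return m;
}

// Y-up over a width x height viewport:
// double[] projection = ortho(0, width, 0, height, -1, 1);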

Related

Adjust the orthographic camera to fit a 3D object (Three.js)

I'm building a scene, which I want to view through the orthographic camera, from an angle. I do the following:
Build the scene.
Move the OrbitControls' (camera's) target to the center of the scene.
Move the camera by a certain (unit) vector using spherical coordinates.
Try to adjust the camera's left/right/top/bottom params to keep the object in view, centered. I also considered adjusting the zoom.
So I guess it is a problem of calculating the positions of the object's extremities after the (spherical) transformation and projecting them back into Cartesian coordinates. I tried to use the Euler transform helper, but it depends on the order of transformation for each axis. Quaternions are also non-commutative, and I'm lost. Perhaps I need to calculate how the widths/heights of the diagonals would change after the transformation and use those?
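A common approach that sidesteps the Euler/quaternion ordering issue is to transform the eight corners of the object's bounding box into camera space and fit left/right/top/bottom to their extents. A minimal sketch (plain Java with illustrative names; in Three.js the camera-space transform would be corner.applyMatrix4(camera.matrixWorldInverse)):

// corners: the 8 bounding-box corners, already in camera space.
static double[] fitOrtho(double[][] corners) {
    double left   = Double.POSITIVE_INFINITY, right = Double.NEGATIVE_INFINITY;
    double bottom = Double.POSITIVE_INFINITY, top   = Double.NEGATIVE_INFINITY;
    for (double[] c : corners) {
        left   = Math.min(left,   c[0]);   // camera-space x extents
        right  = Math.max(right,  c[0]);
        bottom = Math.min(bottom, c[1]);   // camera-space y extents
        top    = Math.max(top,    c[1]);
    }
    return new double[] { left, right, top, bottom };
}

Assign the four results to the camera's left/right/top/bottom and refresh the projection (updateProjectionMatrix() in Three.js); the object then stays fully in view and centered from any angle.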

How to determine the position of a MeshView?

I am using JavaFX.
I have a MeshView which is a wall of a cube.
I am trying to find a way to get its coordinates (x, y, z).
I need them to determine whether the wall is visible on the screen,
and if not, how to rotate it to make it visible.
These methods:
myMeshViewWall.getLocalBounds()
myMeshViewWall.getBoundsInLocal()
myMeshViewWall.getBoundsInParent()
always give me the same result when I rotate my cube.
Wherever my wall is, the result does not change.
What should I do to achieve my goal?
To get the coordinates of the object in the scene, you can try:
myMeshViewWall.localToScene(myMeshViewWall.getBoundsInLocal());
This will transform the bounds from the local coordinate space of this node into the coordinate space of its scene.
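For example (a small sketch reusing the poster's variable name):

import javafx.geometry.Bounds;

// Scene-space bounds of the wall; unlike getBoundsInLocal(), these
// change as the cube (or any ancestor node) rotates.
Bounds b = myMeshViewWall.localToScene(myMeshViewWall.getBoundsInLocal());
System.out.printf("x: [%.1f, %.1f]  y: [%.1f, %.1f]  z: [%.1f, %.1f]%n",
        b.getMinX(), b.getMaxX(),
        b.getMinY(), b.getMaxY(),
        b.getMinZ(), b.getMaxZ());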

Perspective correction of texture coordinates in 3D

I'm writing a software renderer which is currently working well, but my perspective correction of texture coordinates doesn't seem to be right. I am using the same matrix math as OpenGL for my renderer. To rasterise a triangle I do the following:
transform the vertices into clip coordinates using the modelview and projection matrices.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals etc.)
to correct for perspective I use perspective correct interpolation:
(w is the depth coordinate of a vertex, c is the texture coordinate of a vertex, and b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?
In my screenshot, the error in the texture coordinates is evident along the diagonal; this is the result after doing the division by the depth coordinates.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
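For reference, the interpolation described above in code form (a minimal sketch with illustrative names; b are the barycentric weights, w the vertex depth coordinates, c the attribute values, e.g. one texture-coordinate component):

static float perspectiveCorrect(float[] b, float[] w, float[] c) {
    float oneOverW = 0.0f;
    float cOverW   = 0.0f;
    for (int i = 0; i < 3; i++) {
        oneOverW += b[i] / w[i];         // 1/w = sum of b_i * (1/w_i)
        cOverW   += b[i] * c[i] / w[i];  // c/w = sum of b_i * (c_i/w_i)
    }
    return cOverW / oneOverW;            // c = (c/w) / (1/w)
}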
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface. That's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, in the range 0.0-1.0) by the Z component (the world-space depth of the XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction.
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called, "Navigating Static Environments Using Image-Space Simplification and Morphing" - page 9 appendix A.
Hope this helps!
The only correct transformation from UV coordinates to a 3D plane is a homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write out the projection of any pixel of the texture (the same as for the vertices) and invert it to get texture coordinates from screen coordinates.
It will come out in the form of a homographic transform.
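Concretely, the mapping has this form (a sketch; the coefficients are determined by the four corner correspondences of the quad):

u = (a1*x + b1*y + c1) / (g*x + h*y + 1)
v = (a2*x + b2*y + c2) / (g*x + h*y + 1)

The shared denominator is exactly the per-pixel division by depth; a plain affine mapping (denominator 1) is what produces the distortion along the diagonal.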
Yeah, that looks like your traditional broken-perspective dent. Your algorithm looks right, though, so I'm really not sure what could be wrong. Have you checked that you're actually using the newly calculated value when you render? It really looks like you went to the trouble of calculating the perspective-correct value and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you want perspective correction on pixels, with:
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
What you are observing is the typical distortion of linear (affine) texture mapping. On hardware that is not capable of per-pixel perspective correction (like, for example, the PS1), the standard solution is to subdivide into smaller polygons to make the defect less noticeable.

How to rotate a picture using JOGL?

Can anyone tell me how to show a picture in a GLCanvas, and how to rotate that picture with the mouse? I'm new to JOGL development. If possible, please provide a code snippet and a reference site to get a clear idea of JOGL development.
To show an image on a GLCanvas, create a polygon using gl.glBegin(GL.GL_POLYGON) and load the texture using the TextureIO class. Then, using a MouseListener from Java Swing/AWT, you can easily control the rotation of the image (i.e. the textured polygon) by changing the camera position or applying transformations (gl.glRotatef(angle, x, y, z) in your case) to the model-view matrix.
The easiest way to do this is to texture a quad with the picture and then apply affine transforms to that quad. Rendering the quad lets you see a rotating picture, and you can do pretty much any transform by shifting the vertices of the quad.
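For reference, a minimal sketch of the textured-quad approach (JOGL 2 API assumed; the file name and the crude drag-to-angle mapping are illustrative):

import java.awt.Frame;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.io.File;
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.GLAutoDrawable;
import com.jogamp.opengl.GLEventListener;
import com.jogamp.opengl.awt.GLCanvas;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.TextureIO;

public class RotatingPicture implements GLEventListener {
    private Texture texture;
    private volatile float angle;   // degrees, driven by mouse drags

    @Override public void init(GLAutoDrawable d) {
        try {
            texture = TextureIO.newTexture(new File("picture.png"), false);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    @Override public void display(GLAutoDrawable d) {
        GL2 gl = d.getGL().getGL2();
        gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
        gl.glLoadIdentity();
        gl.glRotatef(angle, 0f, 0f, 1f);   // rotate about the view axis
        texture.enable(gl);
        texture.bind(gl);
        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0f, 0f); gl.glVertex2f(-0.5f, -0.5f);
        gl.glTexCoord2f(1f, 0f); gl.glVertex2f( 0.5f, -0.5f);
        gl.glTexCoord2f(1f, 1f); gl.glVertex2f( 0.5f,  0.5f);
        gl.glTexCoord2f(0f, 1f); gl.glVertex2f(-0.5f,  0.5f);
        gl.glEnd();
        texture.disable(gl);
    }

    @Override public void reshape(GLAutoDrawable d, int x, int y, int w, int h) { }
    @Override public void dispose(GLAutoDrawable d) { }

    public static void main(String[] args) {
        GLCanvas canvas = new GLCanvas();
        RotatingPicture app = new RotatingPicture();
        canvas.addGLEventListener(app);
        canvas.addMouseMotionListener(new MouseAdapter() {
            @Override public void mouseDragged(MouseEvent e) {
                app.angle = e.getX();   // crude: map drag position to angle
                canvas.repaint();
            }
        });
        Frame frame = new Frame("Rotating picture");
        frame.add(canvas);
        frame.setSize(500, 500);
        frame.setVisible(true);
    }
}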
I'm assuming that you are drawing a 3D scene and want to change its orientation, rather than rotating a 2D image.
The short answer is that it takes place in two parts. You need to store an orientation of your scene as a 4x4 matrix (homogeneous matrix - search for it if you don't know what that is). You first need to write code that translates a mouse drag into a change of that 4x4 matrix. So when the mouse is dragged up apply an appropriate rotation or whatever to the matrix.
Then you need to redraw the scene, but using the new transformed 4x4 matrix. Use glMatrixMode to specify which matrix (use either GL_PROJECTION or GL_MODELVIEW) and then functions like glMultMatrixf() to manipulate the appropriate matrix.
If that didn't make sense pick up an OpenGL tutorial on how to rotate scenes. OpenGL and JOGL are close enough that methods from OpenGL work in JOGL.
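A minimal sketch of that matrix bookkeeping (JOGL 2 GL2 API assumed; the mouse-drag code that multiplies an incremental rotation into the matrix is omitted):

// Accumulated 4x4 homogeneous orientation, column-major, starts as identity.
private final float[] orientation = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
};

public void display(GLAutoDrawable drawable) {
    GL2 gl = drawable.getGL().getGL2();
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glMultMatrixf(orientation, 0);
    // ... draw the scene with the accumulated orientation applied ...
}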

Implementing z-axis in a 2D side-scroller

I'm making a side scroller similar to Castle Crashers and right now I'm using SAT for collision detection. That works great, but I want to simulate level "depth" by allowing objects to move up and down on the screen, basically along a z-axis (like this screenshot http://favoniangamers.files.wordpress.com/2009/07/castle-crashers-ps3.jpg). This isn't an isometric game, but rather uses parallax scrolling.
I added a z component to my vector class, and I plan to cull collisions based on the "thickness" of a shape and its z position. I'm just not sure how to calculate the positions of shapes for rendering, or how to add jumping with gravity. How do I calculate the max y value (for the ground) as the z position changes? Basically it's the relationship between the z and y axes that confuses me.
I'd appreciate links to resources if anyone knows of this topic.
Thanks!
It's actually possible to make your collision detection algorithm dimensionally agnostic. Just have a collision detector that works along one dimension, use that to check each dimension, and your answer to "are these colliding or not" is the logical AND of the collision detection along each of the dimensions.
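For axis-aligned boxes, a minimal sketch of that per-axis AND (illustrative names; the min/max arrays hold each box's extents per axis):

static boolean overlap1D(float min1, float max1, float min2, float max2) {
    return min1 <= max2 && min2 <= max1;   // 1D interval overlap
}

static boolean collides(float[] minA, float[] maxA,
                        float[] minB, float[] maxB) {
    for (int axis = 0; axis < 3; axis++) {
        if (!overlap1D(minA[axis], maxA[axis], minB[axis], maxB[axis])) {
            return false;                  // separated on this axis
        }
    }
    return true;                           // overlapping on all three axes
}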
Your game should be organised to keep the interaction of game objects and the rendering of the game to the screen completely separate. You can think of these two sections of the program as the "model" and the "view". In the model, you have a full 3D world with three axes. You can't go halfway on this point without some level of pain: your model must be properly 3D.
The view reads the location of all the game objects and projects them onto the screen using the camera definition. For this part you don't need a full 3D rendering engine. The correct technical term for the perspective you're describing is "oblique", and it can be seen in many ancient Chinese and Japanese scroll paintings and prints; in particular, look for images of "The Tale of Genji".
The on-screen position of an object (including the ground surface!) goes something like this:
DEPTH_RATIO = 0.5;
view_x = model_x - model_z * DEPTH_RATIO - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
You can modify this for a straight orthographic front projection:
DEPTH_RATIO = 0.5;
view_x = model_x - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
And of course don't forget to cull objects outside the volume defined by the camera.
You can also use this mechanism to handle the positioning of parallax layers for you. This is, of course, a matter of changing your camera to a one-point perspective projection instead of an orthographic projection. You don't have to use this to change the rendered size of your sprites, but it will help you manage the x position of objects realistically. If you're up for a challenge, you could even mix projections: use one-point perspective for deep backgrounds and the orthographic projection for the foreground.
You should separate the conceptual Y axis used by your physics calculations (collision detection etc.) from the Y axis you actually draw on the screen. That way it becomes less confusing.
Just do the calculations as normal, pretending there is no relationship between the Y and Z axes; then, when you actually draw the object on the screen, simulate the Z axis using the Y axis:
screen_Y = Y + Z/some_fudge_factor;
Actually, this is how real 3D engines work: after all the world calculations are done, the X, Y and Z coordinates are mapped onto screen_X and screen_Y via a function (usually a bit more complicated than the equation above, but only a bit).
For example, to implement a pseudo-isometric view in your game, you can even apply Z to the screen_X axis so objects are displaced diagonally instead of vertically.
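A minimal sketch of that split (names and fudge factor are illustrative): physics runs on (x, y, z) with y as height above the ground, and the flattening happens only at draw time. The ground is then simply y = 0 for every z, and its apparent screen position moves with z automatically, which resolves the max-y question above.

static final float DEPTH_SCALE = 0.5f;

// Physics: gravity acts on y alone; z is pure depth.
// Drawing: depth shifts the object up the screen, and so does jump
// height, but the two never mix inside the physics.
static float[] toScreen(float x, float y, float z,
                        float camX, float camY) {
    float screenX = x - camX;
    float screenY = (y + z * DEPTH_SCALE) - camY;
    return new float[] { screenX, screenY };
}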
