Dear friends, can anyone tell me how to show a picture in a GLCanvas and how to rotate that picture in the GLCanvas using the mouse? I am new to JOGL development. If possible, please provide a code snippet and a reference site so I can get a clear idea about JOGL development.
regards,
s.kumaran.
To show an image on a GLCanvas, create a polygon using gl.glBegin(GL.GL_POLYGON) and load the texture using the TextureIO class. Then, using a MouseListener from Java Swing, you can easily control the rotation of the image (i.e., the textured polygon) by changing the position of the camera or by applying transformations (gl.glRotate(angle, x-axis, y-axis, z-axis) in your case) to the Model-View matrix.
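A minimal sketch of that approach, assuming JOGL 2.x (the class name and image path are placeholders):

import java.awt.event.*;
import java.io.File;
import com.jogamp.opengl.*;
import com.jogamp.opengl.awt.GLCanvas;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.TextureIO;

public class RotatingPicture implements GLEventListener {
    private Texture texture;
    private float angle;   // accumulated from mouse drags
    private int lastX;

    public void attach(final GLCanvas canvas) {
        canvas.addGLEventListener(this);
        canvas.addMouseMotionListener(new MouseMotionAdapter() {
            public void mouseDragged(MouseEvent e) {
                angle += e.getX() - lastX;   // horizontal drag -> rotation
                lastX = e.getX();            // (a real app would reset lastX on mousePressed)
                canvas.repaint();
            }
        });
    }

    public void init(GLAutoDrawable d) {
        try {
            texture = TextureIO.newTexture(new File("picture.png"), true); // placeholder path
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void display(GLAutoDrawable d) {
        GL2 gl = d.getGL().getGL2();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
        gl.glLoadIdentity();
        gl.glRotatef(angle, 0f, 1f, 0f);   // spin the quad about the y-axis
        texture.enable(gl);
        texture.bind(gl);
        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0, 0); gl.glVertex2f(-0.5f, -0.5f);
        gl.glTexCoord2f(1, 0); gl.glVertex2f( 0.5f, -0.5f);
        gl.glTexCoord2f(1, 1); gl.glVertex2f( 0.5f,  0.5f);
        gl.glTexCoord2f(0, 1); gl.glVertex2f(-0.5f,  0.5f);
        gl.glEnd();
        texture.disable(gl);
    }

    public void reshape(GLAutoDrawable d, int x, int y, int w, int h) { }
    public void dispose(GLAutoDrawable d) { }
}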
The easiest way to do this is to texture a quad with the picture and then apply affine transforms to that quad. Rendering the quad will show a rotating picture, and you can do pretty much any transform by shifting the vertices of the quad.
I'm assuming that you are drawing a 3D scene and want to change its orientation, rather than having a 2D image which you wish to rotate.
The short answer is that it happens in two parts. You need to store the orientation of your scene as a 4x4 homogeneous matrix (search for that term if you don't know what it is). First, write code that translates a mouse drag into a change of that 4x4 matrix: when the mouse is dragged up, apply an appropriate rotation (or whatever transform you choose) to the matrix.
Then you need to redraw the scene, but using the new transformed 4x4 matrix. Use glMatrixMode to specify which matrix (use either GL_PROJECTION or GL_MODELVIEW) and then functions like glMultMatrixf() to manipulate the appropriate matrix.
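For example, a minimal sketch of the redraw step in JOGL (assuming orientation is a float[16] in column-major order, updated by the mouse-drag code):

GL2 gl = drawable.getGL().getGL2();
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMultMatrixf(orientation, 0);   // apply the stored orientation
// ... draw the scene as usual ...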
If that didn't make sense pick up an OpenGL tutorial on how to rotate scenes. OpenGL and JOGL are close enough that methods from OpenGL work in JOGL.
Related
I am using JavaFX.
I have a MeshView which is a wall of a cube.
I am trying to find a way to get its coordinates (x, y, z).
I need them to determine whether the wall is visible on the screen,
and, if not, how to rotate it to make it visible.
These methods:
myMeshViewWall.getLocalBounds()
myMeshViewWall.getBoundsInLocal()
myMeshViewWall.getBoundsInParent()
always give me the same result when I rotate my cube.
Wherever my wall is, the result does not change.
What should I do to achieve my goal?
In order to get the coordinates from the object in the scene you can try:
myMeshViewWall.localToScene(myMeshViewWall.getBoundsInLocal());
This will transform the bounds from the local coordinate space of this node into the coordinate space of its scene.
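A quick way to verify this (myCube is a hypothetical parent node being rotated; the printed scene-space bounds should now change with the rotation):

Bounds before = myMeshViewWall.localToScene(myMeshViewWall.getBoundsInLocal());
myCube.setRotate(myCube.getRotate() + 45);   // rotate the cube
Bounds after = myMeshViewWall.localToScene(myMeshViewWall.getBoundsInLocal());
System.out.println(before + " -> " + after); // the two bounds differ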
I want to have a right-handed Cartesian coordinate system in JavaFX, so (0,0) at lower left corner of window, x increasing to the right and y increasing upwards. I can't figure out how to do that with transforms. If I apply a rotation transform, the buttons will be upside down. All I want is to be able to use this coordinate system instead of the default one.
As mentioned in the JavaFX documentation (see chapter Y-down versus Y-up), Y down is used by many 2D graphics libraries, which is where JavaFX has started.
To force Y up and correct drawing, you could put all your content in a rotated parent node:
// Rotate camera to use Y up.
camera.setRotationAxis(Rotate.Z_AXIS);
camera.setRotate(180.0);
// Rotate scene content for correct drawing.
Group yUp = new Group();
yUp.setRotationAxis(Rotate.Z_AXIS);
yUp.setRotate(180.0);
Scene scene = new Scene(yUp);
scene.setCamera(camera);
Now add everything to yUp to use those nodes like in a Y up environment.
Bear in mind that this is fine in 2D space. If you later add 3D features, make sure your models grow in the negative Y direction; otherwise you would have to use another container.
JavaFX's Prism renderer eventually uses a 3D camera transform to render its shapes.
There are two cameras that can be set to the scene, Parallel and Perspective.
If you look in the JavaFX source for the parallel camera, you will find some maths to compute the transform.
If you override that method and implement the proper maths, you should be able to invert the coordinate system.
The kind of math you would use is an ortho-style projection. You would have to look in the source to see what ortho does exactly, but this should get you on the right track.
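For illustration, a hedged sketch of that kind of math: a glOrtho-style projection matrix in plain Java (not the actual JavaFX source), where swapping the bottom and top arguments flips the Y axis:

// Column-major 4x4 orthographic projection, as in glOrtho(left, right, bottom, top, near, far).
static float[] ortho(float l, float r, float b, float t, float n, float f) {
    return new float[] {
        2/(r-l),      0,            0,            0,
        0,            2/(t-b),      0,            0,
        0,            0,            -2/(f-n),     0,
        -(r+l)/(r-l), -(t+b)/(t-b), -(f+n)/(f-n), 1
    };
}
// Y-down (the JavaFX default) over a w-by-h window: ortho(0, w, h, 0, -1, 1)
// Y-up (what the question asks for):                ortho(0, w, 0, h, -1, 1)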
I'm writing a software renderer which is currently working well, but I'm trying to get perspective correction of texture coordinates, and that doesn't seem to be correct. I am using all the same matrix math as OpenGL for my renderer. To rasterise a triangle I do the following:
transform the vertices using the modelview and projection matrices, and transform into clip coordinates.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals etc.)
to correct for perspective I use perspective correct interpolation:
(w is the depth coordinate of a vertex, c is the texture coordinate of a vertex, b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?
See in this image the error in the texture coordinates, evident along the diagonal; this is the result after doing the division by the depth coordinates.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
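For reference, a minimal sketch of the interpolation described above, in plain Java (hypothetical names; b0..b2 are the barycentric weights at the current pixel):

// Perspective-correct interpolation of one attribute c across a triangle.
// w0..w2 are the depth values of the three vertices, c0..c2 the attribute
// values, b0..b2 the barycentric weights at the pixel.
static float perspectiveCorrect(float b0, float b1, float b2,
                                float w0, float w1, float w2,
                                float c0, float c1, float c2) {
    float oneOverW = b0 / w0 + b1 / w1 + b2 / w2;                // 1/w
    float cOverW   = b0 * c0 / w0 + b1 * c1 / w1 + b2 * c2 / w2; // c/w
    return cOverW / oneOverW;                                    // recover c
}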
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface. That's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, in 0.0f-1.0f) by the Z component (the world-space depth of an XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction.
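In fixed-function OpenGL the trick looks roughly like this (a hedged sketch; u, v, x, y, z, depth are hypothetical arrays holding the corner data, and the fourth, q, texture coordinate carries the depth):

// Scale (u, v) by each corner's depth w and pass w as the q coordinate;
// the per-pixel division then restores a projective mapping.
gl.glBegin(GL2.GL_QUADS);
for (int i = 0; i < 4; i++) {
    float w = depth[i];
    gl.glTexCoord4f(u[i] * w, v[i] * w, 0f, w);
    gl.glVertex3f(x[i], y[i], z[i]);
}
gl.glEnd();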
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called, "Navigating Static Environments Using Image-Space Simplification and Morphing" - page 9 appendix A.
Hope this helps!
ct
The only correct transformation from UV coordinates to a 3D plane is a homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write out the projection of any pixel of the texture (the same as for the vertices) and invert it to get texture coordinates from screen coordinates.
It will come in the form of a homographic transform.
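Once you have the 3x3 homography matrix, applying it per pixel is straightforward; a minimal sketch in Java (H is assumed to map screen coordinates to texture coordinates):

// Apply a 3x3 homography H to a point (x, y); the division by the third
// component is what makes the mapping projective rather than affine.
static double[] applyHomography(double[][] H, double x, double y) {
    double u = H[0][0]*x + H[0][1]*y + H[0][2];
    double v = H[1][0]*x + H[1][1]*y + H[1][2];
    double w = H[2][0]*x + H[2][1]*y + H[2][2];
    return new double[] { u / w, v / w };
}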
Yeah, that looks like the traditional broken-perspective dent. Your algorithm looks right, though, so I'm really not sure what could be wrong. I would check that you're actually using the newly calculated value later on when you render: this really looks like you went to the trouble of calculating the perspective-correct value and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you need perspective correction on pixels with
glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST)
What you are observing is the typical distortion of linear texture mapping. On hardware that is not capable of per-pixel perspective correction (for example, the PS1), the standard solution is simply to subdivide into smaller polygons to make the defect less noticeable.
How do I blur a 3D object (Papervision3D), and save the newly created object as a new 3D model? (This could help with sky/cloud generation.)
As in the 2D picture, I have turned a rectangle into a blurry structure.
Set useOwnContainer to true, then add the filter:
your3DObject.useOwnContainer = true;
your3DObject.filters = [new BlurFilter(4,4,2)];
When you set useOwnContainer to true, a new 2D DisplayObject is created to render the 3D projection into, and you can apply any of the usual DisplayObject properties to that.
Andy Zupko has a good post about this and render layers.
Using this will cost your processor a bit, so use it wisely. For example, in the twigital I worked on at disturb media, we used one Glow for the layer that holds all the characters, not individual render layers for each character. On other projects we 'baked' the filters into bitmaps and used those; this meant a bit more memory, but freed up the processor for other tasks.
HTH
I'm not familiar with Papervision 3D, but blurring in 3D is normally just blurring in 2D. You pick the object you want blurred, determine the blurring you want for that object, then apply a 2D blur before compositing other objects into the scene.
This is a cheat because in principle, different parts of the object may need different degrees of (depth of field) blurring. But it's not the only cheat in 3D graphics.
That said, there are other approaches. Ray-tracing can give true depth-of-field effects (if you're willing to pay the render-time costs). It's also possible to apply a blur to a 3D "voxel" grid instead of a 2D pixel grid - though I imagine that's more useful for smoothing shapes from e.g. medical scanners than for giving depth-of-field effects.
Blur is a 2D operation; try rendering the object into a texture and blurring that texture.
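To illustrate the 2D half of that, a minimal box-blur sketch in Java (illustrative only; Papervision itself is ActionScript, where BlurFilter already does this for you):

// Naive box blur over a packed ARGB pixel buffer: each output pixel is the
// average of its (2r+1)^2 neighbourhood, clamped at the image borders.
static int[] boxBlur(int[] src, int w, int h, int r) {
    int[] dst = new int[src.length];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int a = 0, rd = 0, g = 0, b = 0, n = 0;
            for (int dy = -r; dy <= r; dy++) {
                for (int dx = -r; dx <= r; dx++) {
                    int sx = Math.min(Math.max(x + dx, 0), w - 1);
                    int sy = Math.min(Math.max(y + dy, 0), h - 1);
                    int p = src[sy * w + sx];
                    a += (p >>> 24) & 0xFF; rd += (p >>> 16) & 0xFF;
                    g += (p >>> 8) & 0xFF;  b += p & 0xFF; n++;
                }
            }
            dst[y * w + x] = ((a / n) << 24) | ((rd / n) << 16) | ((g / n) << 8) | (b / n);
        }
    }
    return dst;
}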
I want to project a grid on the xz-plane like shown here:
To do that, I created a vertex grid with x and z range [-1|1]. In the shader I multiply the xz screen coordinate of a vertex with the inverse of the View-Projection matrix. Then I want to adjust the height, depending on the new world xz coordinates and finally I transform these coordinates back to screenspace by multiplying them with the View-Projection matrix.
I don't know why, but I get a very strange plane shown on the screen. Are the mathematical operations I use correct?
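A hedged sketch of the unproject/displace/reproject step described above, in plain Java (invViewProj and viewProj are hypothetical float[16] column-major matrices; note the perspective divides, a common thing to miss):

// Multiply a column-major 4x4 matrix by a 4-vector.
static float[] transform(float[] m, float[] v) {
    float[] r = new float[4];
    for (int i = 0; i < 4; i++)
        r[i] = m[i]*v[0] + m[4+i]*v[1] + m[8+i]*v[2] + m[12+i]*v[3];
    return r;
}

// Take a grid vertex given in screen/NDC coordinates, move it to world
// space, set its height, and project it back to screen space.
static float[] projectGridVertex(float[] invViewProj, float[] viewProj,
                                 float sx, float sy, float height) {
    float[] world = transform(invViewProj, new float[] { sx, sy, 0f, 1f });
    for (int i = 0; i < 3; i++) world[i] /= world[3];  // perspective divide
    world[1] = height;                                 // adjust world-space y
    world[3] = 1f;
    float[] clip = transform(viewProj, world);
    return new float[] { clip[0] / clip[3], clip[1] / clip[3] };  // divide again
}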
The grid that you initially create, is that in projection space or actual screen co-ords? It sounds like it is in projection space since you only transform it with the inverse of the view-projection matrix to get into world co-ords. I think you need to include the "Window" matrix too i.e. transform them by the inverse of the View-Projection-Window matrix (and similarly on the way back to screen co-ords).
Edit:
I'm probably not understanding exactly what it is you're trying to do so here's some questions back. :)
Are you trying to take the grid that's shown in the screenshot in your question and project it onto world z-x co-ordinates? If so, then why do you start with a grid of z-x values? Also, if you apply an inverse view matrix to those, then surely you would end up with a line, since the camera looks along z, although your second screenshot shows that you are getting a plane. I'm a bit confused.