JOML: Perspective matrix does not retain shape

I'm trying to make my own 3D engine using LWJGL.
This is the line of code that creates the projection matrix passed as a uniform:
pMatrix = new Matrix4f().ortho(-2f, 2f, -1.125f, 1.125f, 0.1f, 1000f);
Perfectly working orthographic projection.
Notice how the rendered mesh is square, correct according to the vertices.
Only changing this one line to
pMatrix = new Matrix4f().perspective((float) Math.toRadians(60.0f), 640/360,0.1f, 1000f);
breaks it. Not properly working perspective projection.
Notice how, in the second image, the mesh is not square.
Is there something that I'm doing wrong? If yes, please help me fix it. If not, why is this happening?
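For what it's worth, the likely culprit is that 640/360 is integer division in Java, so the aspect argument evaluates to 1 instead of the intended 16:9 ratio (about 1.78), which squashes the scene. A minimal sketch of the likely fix, using float literals:
pMatrix = new Matrix4f().perspective((float) Math.toRadians(60.0f), 640f / 360f, 0.1f, 1000f);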

Related

Add a QTextureMaterial to a custom mesh

I have a custom mesh (created in Blender) that I insert into Qt3D using the following code:
QMesh *mesh = new QMesh(rootEntity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
This works fine; I can add it to an entity with a material and everything.
Then I create a custom material using a texture loaded from a .png. I do this using the following code:
Qt3DRender::QTextureLoader *loader = new Qt3DRender::QTextureLoader(rootEntity);
Qt3DExtras::QTextureMaterial *material = new Qt3DExtras::QTextureMaterial(rootEntity);
loader->setSource(QUrl::fromLocalFile(baseUrl+"pattern.jpg"));
material->setTexture(loader);
This also works fine. When I add this material to a built-in Qt mesh (e.g. QPlaneMesh or QSphereMesh) it shows perfectly on the surface as one would expect.
However - now comes the problem - if I add it to the QMesh specified above, the mesh just gets one homogeneous color, which seems to be the average of the colors in the pattern. Here you can see what I mean: both objects have the same material. The top one is inserted externally while the bottom one is a QPlaneMesh.
Can someone explain to me why that is the case? And is there a way to successfully add textures to custom meshes?
Note: I have tried this with 2D and 3D meshes and it is the same outcome.
Note 2: I have also tried it with different images and it still just gets one homogeneous average color.
UPDATE: I tried (following the suggestion in the answer) to add a texture attribute to the geometry of my imported mesh like the following:
Qt3DCore::QEntity *entity = new Qt3DCore::QEntity(rootEntity);
QMesh *mesh = new QMesh(entity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
const int stride = (3 + 2 + 3 + 4) * sizeof(float);
QSize resolution = QSize(2,2);
const int nVerts = resolution.width() * resolution.height();
QAttribute *texCoordAttr = new QAttribute(mesh->geometry());
Qt3DRender::QBuffer *vertexBuffer = new Qt3DRender::QBuffer(mesh->geometry());
texCoordAttr->setName(QAttribute::defaultTextureCoordinate1AttributeName());
texCoordAttr->setVertexBaseType(QAttribute::Float);
texCoordAttr->setVertexSize(2);
texCoordAttr->setAttributeType(QAttribute::VertexAttribute);
texCoordAttr->setBuffer(vertexBuffer);
texCoordAttr->setByteStride(stride);
texCoordAttr->setByteOffset(3*sizeof(float));
texCoordAttr->setCount(nVerts);
vertexBuffer->setDataGenerator(QSharedPointer<PlaneVertexBufferFunctor>::create(1.0f,1.0f,resolution, false)); //these input values (width, height, resolution, mirrored) are probably the cause of the problem
mesh->geometry()->addAttribute(texCoordAttr); //it crashes here
entity->addComponent(mesh);
entity->addComponent(transform);
entity->addComponent(material);
I created the functor for setDataGenerator like in the QPlaneMesh code. Now I suspect the segmentation fault is due to a size mismatch. So how can I get the correct width and height of an external mesh from its QGeometry? And what else might be wrong here?
It looks like the mesh is missing the texture coordinates. When you open the file with a text editor, do you see the key vt somewhere? Those are the texture coordinates. You can read about the format here.
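For reference, a minimal sketch of an obj file that does have texture coordinates (the values here are hypothetical); each f entry pairs a vertex index with a texture-coordinate index:
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
f 1/1 2/2 3/3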
If you still want to use the obj file that you have, you will have to add texture coordinates if it doesn't have any. It's probably best to open the file in Blender and use its texture mapper, at least for more complex meshes. Guessing which vertex needs which texture coordinate is not really feasible.
The texture coordinates work as follows:
If you have an image of, say 500 by 400 pixels, the texture coordinate (0.7, 0.3) is (500 * 0.7, 400 * 0.3) = (350, 120), meaning that the vertex which has that texture coordinate will receive the color value of the pixel at (350, 120). Values inside a triangle will get interpolated.
If your obj file comes with an mtl file, then it probably already has texture coordinates. If you want to load this mtl file, use QSceneLoader and add it to its parent QEntity to display everything.

How to find the global position of objects in a rotating scene in THREE.js

I am working on a 3D mesh manipulator using this: http://leapmotion.com. So far, I have been able to manipulate the points just fine by 'grabbing' and moving them; however, I now want to be able to rotate the mesh and work on the opposite face. What I have done is add an extra object called 'rotatable', as shown below:
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 8000);
renderer = new THREE.WebGLRenderer({ clearColor: 0x000000, clearAlpha: 1, maxLights: 5 });
// This is the 'mesh scene'
rotatable = new THREE.Object3D();
scene.add(rotatable);
// Mesh we are altering
var material = new THREE.MeshNormalMaterial();
material.side = 2;
var geom = new THREE.SphereGeometry(200, 10, 10);
var sphere = new THREE.Mesh(geom, material);
rotatable.add(sphere);
I am then trying to change the vertices of this sphere, but to do so I need to do a 'collision test' to see if a vertex is being 'grabbed'. This involves checking the vertex position to see whether it coincides with one of the finger positions (pseudocode below):
if (finger.x == vertex.x && finger.y == vertex.y && finger.z == vertex.z) {
    vertex.grabbed = true;
}
This works fine when the rotatable's rotation is zero; however, when it starts to rotate, the collision test will still be testing against the unrotated vertex position (which makes sense). My question is how to find the position of a vertex in 'scene'/global coordinates. The only way I can think of doing this so far is to take the rotation of the 'rotatable' and use it to calculate the new vertex position.
I know very little about the math involved, so this may not be the way to go; and even if it is, I will struggle through it so hard that I won't ever know whether I'm just doing the math incorrectly or this isn't the way I should go about calculating it. Obviously I'm willing to put in the work, but I just want to make sure this is the way to do it rather than some other, simpler method.
If there are any other questions about the code, please let me know. Thanks in advance for your time!
Isaac
To get the world position of a vertex specified in local coordinates, apply the object's world transform to the vertex like so:
vertex.applyMatrix4( object.matrixWorld );
Note that applyMatrix4() modifies the vertex in place; clone the vertex first if you also need to keep its local coordinates.
(I am not familiar with leapmotion, so hopefully it does not impact this answer.)
Tip: maxLights is no longer required. And it is best to avoid material.side = 2. Use material.side = THREE.DoubleSide instead.
You can find the constants here: https://github.com/mrdoob/three.js/blob/master/src/Three.js
three.js r.55
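A side note on the collision test in the question: exact floating-point equality between a finger position and a vertex position will almost never hold, so a 'grab' is usually detected with a distance threshold instead. A minimal sketch of that idea (in Java style; isGrabbed and grabRadius are hypothetical names, not from the original post):
// Returns true when the finger is within grabRadius of the vertex.
// Comparing squared distances avoids an unnecessary square root.
static boolean isGrabbed(double fx, double fy, double fz,
                         double vx, double vy, double vz,
                         double grabRadius) {
    double dx = fx - vx, dy = fy - vy, dz = fz - vz;
    return dx * dx + dy * dy + dz * dz <= grabRadius * grabRadius;
}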

XNA FBX model drawing problems: it's either my code, the way I export models, or the way content is exported

So basically, when I try to draw a mesh inside an FBX file, its orientation is always removed and it's scaled down. I'm not sure if the issue is caused by code or by the way I'm exporting the FBX files. I have been trying to narrow down the cause, and I am fairly sure it's not caused by the way I export the FBX (but I could be wrong), so it's either the XNA content pipeline or my drawing code.
Here are some pics I took to show my problem, where the gray background is in 3Ds Max as I see it and red background is in XNA:
This is how it appears in 3D Studio Max: http://i.stack.imgur.com/e0oW4.png
This is how it appears in XNA: http://i.stack.imgur.com/1vOcx.png
Both are being viewed from the same angle and direction, but from varying distances.
Now, what is really odd: if I create another mesh in Max, say a box, and export it along with the original model, it works fine: http://i.stack.imgur.com/SIDg9.png
As long as there is more than one mesh in the FBX model, it draws properly (though I'm still suspicious about whether it draws with proper scaling applied; i.e., if it is 1 unit long in Max, in XNA it becomes something like 1.27 units long). If there is only one mesh, the orientation I applied in 3D Studio Max is removed when I draw it.
This is how I draw the model:
model.CopyAbsoluteBoneTransformsTo(boneTransforms);
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.World = boneTransforms[mesh.ParentBone.Index];
        Vector3 cameraPosition = Camera.Get.Position; // new Vector3(0, 0, 0);
        //cameraPosition.X = -Camera.Get.PosX;
        //cameraPosition.Y = Camera.Get.PosY;
        effect.View = Camera.Get.View; // Matrix.CreateLookAt(cameraPosition, cameraPosition + Camera.Get.LookDir, Camera.Get.Up);
        effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
            BaseGame.Get.GraphicsDevice.Viewport.AspectRatio,
            0.01f, 1000000); // Matrix.CreateOrthographic(800 / 1, 480 / 1, 0, 1000000);
        //effect.TextureEnabled = true;
        effect.LightingEnabled = true;
        effect.PreferPerPixelLighting = true;
        //effect.SpecularColor = new Vector3(1, 0, 0);
    }
    mesh.Draw();
}
Obviously, mesh.Draw() is called twice when there is more than one mesh in the FBX file.
Generally, if you are having a problem with the position or scale of the mesh while rendering, it's likely to be related to the matrices: not necessarily the exporting, but rather how you use them in the code.
I use Blender for modelling, but I know that Blender actually defines different spaces when you are creating meshes within the editor. For example, if you create a mesh while in 'object' mode, the position/rotation/scale of the object in the scene will not be exported (because that object will be the root of a new tree, centered around 0,0,0). So I would check for a similar situation in 3ds Max: make sure you are transforming the vertices in Max relative to 0,0,0, or else you may lose the 'initial' translation, and when you render in XNA all the objects will be rendered around your 0,0,0 (i.e. appear mixed together).
Failing that (I can't remember exactly off the top of my head), I think you may need to multiply the current mesh's absolute matrix transform by the parent's world matrix transform. It's been a while, so I'm not too sure.

Mapping points in 2d space to a sphere

I have a bunch of points in a rectangular x/y space which I would like to project onto a sphere. As in, I am trying to write this function:
function point_on_sphere(x2d:Number, y2d:Number) : Vector3D
{
    //magic
    return new Vector3D(x3d, y3d, z3d);
}
I have been trying to first plot the points onto a cylinder and then map those points to a sphere, as directed by this Wikipedia page. However, those formulas assume a constant z=0, which doesn't really do what I want.
I'm using ActionScript 3 / Flex, but any pseudocode or pushes in the right direction would be greatly appreciated.
Just to clarify: I'm not trying to apply a texture to a sphere object, but rather to place objects along an imaginary sphere.
There is no one right answer; you can choose different approaches based on how you want to place the objects along the sphere.
Is it OK for the objects to get nearer and nearer to each other as you approach the sphere's "poles"? Why wouldn't the normal texture-mapping projection actually work for you?
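For illustration, here is a minimal sketch of the standard latitude/longitude mapping the answer alludes to (written in Java; the width, height and radius parameters are assumed inputs, not from the original post). It treats x as longitude and y as latitude, so objects do crowd together near the poles:
public class SphereMapping {
    // Maps a point in a width-by-height rectangle onto a sphere of the given
    // radius: x becomes the longitude, y the angle down from the pole.
    static double[] pointOnSphere(double x, double y,
                                  double width, double height, double radius) {
        double theta = 2.0 * Math.PI * (x / width); // longitude in [0, 2*pi]
        double phi = Math.PI * (y / height);        // polar angle in [0, pi]
        return new double[] {
            radius * Math.sin(phi) * Math.cos(theta),
            radius * Math.cos(phi),
            radius * Math.sin(phi) * Math.sin(theta)
        };
    }
}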

How to rotate a picture using JOGL?

Dear friends, can anyone tell me how to show a picture in a GLCanvas and how to rotate that picture in the GLCanvas using the mouse? I'm new to JOGL development. Could you please show me how to do this? If possible, provide a code snippet and some reference sites to get a clear idea of JOGL development.
regards,
s.kumaran.
To show an image on a GLCanvas, create a polygon using gl.glBegin(GL.GL_POLYGON) and load the texture using the TextureIO class. Then, using a MouseListener from Java Swing, you can easily control the rotation of the image (i.e., the textured polygon) by changing the position of the camera or by applying transformations (gl.glRotate(angle, x-axis, y-axis, z-axis) in your case) to the model-view matrix.
The easiest way to do this is to texture a quad with the picture and then apply affine transforms to that quad. Rendering this quad will give you a rotating picture, and you can do pretty much any transform by shifting the vertices of the quad.
I'm assuming that you are drawing a 3D scene and want to change its orientation, rather than having a 2D image which you wish to rotate.
The short answer is that it takes place in two parts. You need to store the orientation of your scene as a 4x4 matrix (a homogeneous matrix; search for the term if you don't know what that is). First, write code that translates a mouse drag into a change to that 4x4 matrix: when the mouse is dragged up, apply an appropriate rotation (or whatever) to the matrix.
Then redraw the scene using the newly transformed 4x4 matrix. Use glMatrixMode to specify which matrix to work on (either GL_PROJECTION or GL_MODELVIEW), and then functions like glMultMatrixf() to manipulate the appropriate matrix.
If that didn't make sense, pick up an OpenGL tutorial on rotating scenes. OpenGL and JOGL are close enough that methods from OpenGL work in JOGL.
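Putting those pieces together, here is a minimal JOGL 2 sketch of a picture on a textured quad whose rotation is driven by mouse drags, assuming the fixed-function pipeline; the class name, field names and the file picture.png are all hypothetical, and error handling is kept to a bare minimum:
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionAdapter;
import java.io.File;
import javax.swing.JFrame;
import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.GLAutoDrawable;
import com.jogamp.opengl.GLEventListener;
import com.jogamp.opengl.awt.GLCanvas;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.TextureIO;

public class RotatingPicture implements GLEventListener {
    private float angleX, angleY; // accumulated drag, in degrees
    private int lastX, lastY;     // last mouse position during a drag
    private Texture texture;

    @Override
    public void init(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        try {
            texture = TextureIO.newTexture(new File("picture.png"), false);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        texture.enable(gl); // enables texturing for the texture's target
    }

    @Override
    public void display(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
        gl.glMatrixMode(GL2.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glRotatef(angleX, 1f, 0f, 0f); // vertical drag -> rotation about x
        gl.glRotatef(angleY, 0f, 1f, 0f); // horizontal drag -> rotation about y
        texture.bind(gl);
        gl.glBegin(GL2.GL_QUADS); // the quad carrying the picture
        gl.glTexCoord2f(0f, 0f); gl.glVertex2f(-0.5f, -0.5f);
        gl.glTexCoord2f(1f, 0f); gl.glVertex2f(0.5f, -0.5f);
        gl.glTexCoord2f(1f, 1f); gl.glVertex2f(0.5f, 0.5f);
        gl.glTexCoord2f(0f, 1f); gl.glVertex2f(-0.5f, 0.5f);
        gl.glEnd();
    }

    @Override public void reshape(GLAutoDrawable d, int x, int y, int w, int h) { }
    @Override public void dispose(GLAutoDrawable d) { }

    public static void main(String[] args) {
        GLCanvas canvas = new GLCanvas();
        RotatingPicture app = new RotatingPicture();
        canvas.addGLEventListener(app);
        canvas.addMouseListener(new MouseAdapter() {
            @Override public void mousePressed(MouseEvent e) {
                app.lastX = e.getX();
                app.lastY = e.getY();
            }
        });
        canvas.addMouseMotionListener(new MouseMotionAdapter() {
            @Override public void mouseDragged(MouseEvent e) {
                app.angleY += e.getX() - app.lastX; // drag distance -> degrees
                app.angleX += e.getY() - app.lastY;
                app.lastX = e.getX();
                app.lastY = e.getY();
                canvas.repaint(); // redraw with the updated rotation
            }
        });
        JFrame frame = new JFrame("Rotating picture");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(canvas);
        frame.setSize(640, 480);
        frame.setVisible(true);
    }
}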
