ICP 3D alignment with scale forced to 1 - point-cloud-library

I need to register some point clouds with ICP. I understand the moving cloud will try to match the reference one through scale, rotation and translation matrices. My question is: how can I force the scale matrix to be the identity? The application is such that the size of the objects shouldn't change.
Thanks.
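
For reference, a minimal sketch of what this usually looks like in PCL: the stock pcl::IterativeClosestPoint estimates a rigid transformation (rotation and translation only), so the returned 4x4 matrix already carries unit scale; scale only enters if you plug in a scale-estimating transformation-estimation object.

#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Aligns 'source' to 'target' with plain rigid ICP; the result has no
// scale component by construction.
Eigen::Matrix4f rigidAlign(pcl::PointCloud<pcl::PointXYZ>::Ptr source,
                           pcl::PointCloud<pcl::PointXYZ>::Ptr target)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);   // the moving cloud
    icp.setInputTarget(target);   // the reference cloud
    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);
    return icp.getFinalTransformation();  // rotation + translation only
}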

Related

How to decompose an unknown transformation matrix?

I'm working on the revitalization of an old 3D game (built using Direct3D) and I'm struggling with the game objects' animations.
The game stores its object animations in binary files that contain transformation matrices for each bone of its meshes at each frame of the animation (in the form of an array of D3DMATRIX).
I've tried using the D3DXMatrixDecompose function to get the position, rotation and scale, but it seems that something is wrong with the animation. Some animations almost match the originals, but there are strange rotations in the middle of them (the scale vector goes from negative to positive values, which causes the whole bone to rotate in a strange way - it is definitely wrong), and for other animations the whole thing is wrong.
I read somewhere that D3DXMatrixDecompose assumes the matrix was composed as an SRT (scale, rotate, translate) matrix, and apparently the order in which the components were combined matters. So, as the animations are clearly wrong, I'm assuming the matrices were not composed in SRT order and the output of D3DXMatrixDecompose is wrong.
I didn't find much material to read about this without going very deep into the math. As I don't have a strong background in math, hopefully someone can point me in the right direction.
So, how can I decompose the position, rotation and scale of an unknown transformation matrix? I'm not asking for a single algorithm that can do that; I'm asking what I can do in this scenario to find the original (or equivalent) values for the position, rotation and scale of each matrix.
Thanks in advance!
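
For what it's worth, the decomposition is short to write by hand if you assume Direct3D's row-vector convention (translation in the fourth row) and an SRT composition. A sketch with hypothetical stand-in types for D3DMATRIX/D3DXVECTOR3:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

void decomposeSRT(const Mat4& M, Vec3& scale, float rot[3][3], Vec3& pos)
{
    // Translation: the fourth row.
    pos = { M.m[3][0], M.m[3][1], M.m[3][2] };
    // Scale: the length of each of the first three rows.
    float s[3];
    for (int i = 0; i < 3; ++i)
        s[i] = std::sqrt(M.m[i][0] * M.m[i][0] + M.m[i][1] * M.m[i][1]
                       + M.m[i][2] * M.m[i][2]);
    scale = { s[0], s[1], s[2] };
    // Rotation: the upper 3x3 with the scale divided out. Note that row
    // lengths can't recover negative (mirroring) scales unambiguously,
    // which is one possible source of the sign flips described above.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            rot[i][j] = M.m[i][j] / s[i];
}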

Ray trigonometry in OpenGL

I am quite new to this, and I've heard that I need to take the inverse of my projection matrix and so on to create a ray from a 2D screen point to a 3D world point. However, since I'm using OpenGL ES there are not as many helper methods available as there would be normally (and I simply don't know how to do it), so I'm using a trigonometric formula instead.
Each time I iterate one step down the negative Z-axis, I multiply the Y-position on the screen (-1 to 1) by
-z / cot(myAngle / 2)
and the X-position likewise, but with a coefficient equal to the aspect ratio.
myAngle is the frustum's perspective angle.
This works really well for me and I get very accurate values, so what I wonder is: why should I use the inverse of the projection matrix and multiply it with some stuff instead of using this?
Most of the time you have a matrix lying around for your OpenGL camera, and using an inverse matrix is simple when you already have a camera matrix on hand. It is also (oh so very slightly, at computer speeds) faster to do a matrix multiply, and in cases where you are doing a bajillion of these calculations per frame, that can matter.
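For comparison, here is roughly what the matrix route looks like; a sketch assuming the GLM math library (the function name is made up). Unproject the near- and far-plane points for the screen position and take their difference:

#include <glm/glm.hpp>

// ndcX/ndcY in [-1, 1]; view and proj are the usual camera matrices.
glm::vec3 rayDirection(float ndcX, float ndcY,
                       const glm::mat4& view, const glm::mat4& proj)
{
    glm::mat4 invVP = glm::inverse(proj * view);
    glm::vec4 nearP = invVP * glm::vec4(ndcX, ndcY, -1.0f, 1.0f);
    glm::vec4 farP  = invVP * glm::vec4(ndcX, ndcY,  1.0f, 1.0f);
    glm::vec3 a = glm::vec3(nearP) / nearP.w;  // point on the near plane
    glm::vec3 b = glm::vec3(farP)  / farP.w;   // point on the far plane
    return glm::normalize(b - a);              // ray through the pixel
}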
Here is some good info on getting started on a camera class if you are interested:
Camera Class
And some matrix resources
Depending on what you are working on, I wouldn't worry too much about the 'best way to do it.' Just make sure you understand what your code is doing, then keep improving it.

Perspective correction of texture coordinates in 3D

I'm writing a software renderer which is currently working well, but I'm trying to get perspective correction of texture coordinates and that doesn't seem to be correct. I am using all the same matrix math as OpenGL for my renderer. To rasterise a triangle I do the following:
transform the vertices using the modelview and projection matrices into clip coordinates.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals, etc.)
to correct for perspective I use perspective-correct interpolation:
(w is the depth coordinate of a vertex, c is the texture coordinate of a vertex, b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issue (I'm using floats for all the math)?
See in this image the error in the texture coordinates, evident along the diagonal; this is the result after doing the division by the depth coordinates.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
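
For what it's worth, the three formulas above translate directly into code; a minimal sketch with hypothetical names, where b holds the barycentric weights and w the clip-space w of each vertex:

// Perspective-correct interpolation of one per-vertex attribute c
// (e.g. a texture coordinate) at a pixel with barycentric weights b.
float perspectiveInterp(const float b[3], const float w[3], const float c[3])
{
    float invW   = b[0] / w[0] + b[1] / w[1] + b[2] / w[2];          // 1/w
    float cOverW = b[0] * c[0] / w[0] + b[1] * c[1] / w[1]
                 + b[2] * c[2] / w[2];                               // c/w
    return cOverW / invW;                                            // c
}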
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface, but that's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV: 0.0f-1.0f) by the Z component (the world-space depth of an XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction.
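In legacy OpenGL, one way to express that trick is through four-component texture coordinates, letting the q component carry the per-corner depth; a sketch assuming immediate mode and hypothetical inputs:

#include <GL/gl.h>

// Projective texturing of a quad via homogeneous texture coordinates.
// q[i] is the per-corner depth described above; OpenGL divides the
// interpolated s and t by the interpolated q, re-weighting the interior.
void drawQuadProjective(const float xyz[4][3], const float uv[4][2],
                        const float q[4])
{
    glBegin(GL_QUADS);
    for (int i = 0; i < 4; ++i) {
        glTexCoord4f(uv[i][0] * q[i], uv[i][1] * q[i], 0.0f, q[i]);
        glVertex3fv(xyz[i]);
    }
    glEnd();
}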
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called "Navigating Static Environments Using Image-Space Simplification and Morphing" - see page 9, appendix A.
Hope this helps!
The only correct transformation from UV coordinates to a 3D plane is a homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write out the projection of any pixel of the texture (the same as for the vertices) and invert it to get texture coordinates from screen coordinates.
It will come out in the form of a homographic transform.
Yeah, that looks like your traditional broken-perspective dent. Your algorithm looks right, though, so I'm really not sure what could be wrong. I would check that you're actually using the newly calculated value later on when you render: this really looks like you went to the trouble of calculating the perspective-correct value and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you need perspective correction on pixels with:
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
What you are observing is the typical distortion of linear texture mapping. On hardware that is not capable of per-pixel perspective correction (the PS1, for example), the standard solution is just to subdivide into smaller polygons to make the defect less noticeable.
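A sketch of that subdivision workaround, with a hypothetical Vertex type: split the quad into an n x n grid, interpolating positions and UVs bilinearly, so the affine-texturing error of each small polygon shrinks:

#include <vector>

struct Vertex { float x, y, z, u, v; };

Vertex lerpV(const Vertex& a, const Vertex& b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t, a.u + (b.u - a.u) * t,
             a.v + (b.v - a.v) * t };
}

// corners are ordered 00, 10, 01, 11; emits an (n+1) x (n+1) vertex grid.
void subdivideQuad(const Vertex corners[4], int n, std::vector<Vertex>& out)
{
    for (int j = 0; j <= n; ++j)
        for (int i = 0; i <= n; ++i) {
            float s = float(i) / n, t = float(j) / n;
            Vertex top = lerpV(corners[0], corners[1], s);
            Vertex bot = lerpV(corners[2], corners[3], s);
            out.push_back(lerpV(top, bot, t));
        }
}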

Scaling an object with a local scale and a rotation

I have an object which has a position, a rotation angle and a scale (x and y). The overall transform matrix is as follows:
QTransform xform;
xform.translate(instance.definition.position.x, instance.definition.position.y);
xform.rotateRadians(instance.definition.rotation);
xform.scale(instance.definition.scale.x, instance.definition.scale.y);
I need to scale this object using a global scale which then modifies the local scale of the object. For example, the object is rotated by 45 degrees and I apply a scale of (1, 2); I need to know how this affects the local scale, as it should affect both local scale axes.
Thanks.
PS: maybe this is impossible due to it being a non-affine transformation, I don't know; I didn't find much on Google about this particular problem.
UPDATE: I think I need at least a 3-column by 2-row matrix transform to keep enough information. I tried some things in SVG, which uses this kind of matrix transform, and it seems to work; I will need to update this matrix according to the position and rotation, though.
Either scale the object first, or calculate the inverse matrix, apply it to the object (that undoes the translation/rotation), scale it, and apply the first matrix again.
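A sketch of the second option using the question's QTransform (the (1, 2) scale is just an example); Qt composes with row vectors, so in a * b, a is applied first, then b:

// Undo the object's transform, apply the scale, then redo the transform.
QTransform undoRedo = xform.inverted()
                    * QTransform::fromScale(1.0, 2.0)
                    * xform;
// Applied after xform, this is equivalent to scaling in the object's
// original (local) space:  p * xform * undoRedo  ==  p * scale * xform.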
If you take, say, a rectangle, rotate it so that its edges are no longer parallel to the coordinate axes, then apply a scaling factor to, say, X, it will no longer be a rectangle. It will be a parallelogram, and your data structures will have to accommodate more information than they do now.

Implementing z-axis in a 2D side-scroller

I'm making a side-scroller similar to Castle Crashers, and right now I'm using SAT for collision detection. That works great, but I want to simulate level "depth" by allowing objects to move up and down on the screen, basically along a z-axis (like this screenshot http://favoniangamers.files.wordpress.com/2009/07/castle-crashers-ps3.jpg). This isn't an isometric game, but rather uses parallax scrolling.
I added a z component to my vector class, and I plan to cull collisions based on the 'thickness' of a shape and its z position. I'm just not sure how to calculate the positions of shapes for rendering, or how to add jumping with gravity. How do I calculate the max y value (for the ground) as the z position changes? Basically, it's the relationship between the z and y axes that confuses me.
I'd appreciate links to resources if anyone knows of this topic.
Thanks!
It's actually possible to make your collision detection algorithm dimensionally agnostic. Just have a collision detector that works along one dimension, use it to check each dimension in turn, and your answer to "are these colliding or not" is the logical AND of the collision results along each of the dimensions.
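Note that this per-axis AND only holds for axis-aligned shapes; a minimal sketch for boxes, with hypothetical types:

struct Box { float min[3], max[3]; };  // x, y, z extents

// Two intervals overlap iff each starts before the other ends.
bool overlap1D(float minA, float maxA, float minB, float maxB)
{
    return minA <= maxB && minB <= maxA;
}

bool collides(const Box& a, const Box& b)
{
    return overlap1D(a.min[0], a.max[0], b.min[0], b.max[0])   // x
        && overlap1D(a.min[1], a.max[1], b.min[1], b.max[1])   // y
        && overlap1D(a.min[2], a.max[2], b.min[2], b.max[2]);  // z
}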
Your game should be organised to keep the interaction of game objects and the rendering of the game to the screen completely separate. You can think of these two sections of the program as the "model" and the "view". In the model you have a full 3D world with three axes. You can't go halvesies on this point without some level of pain; your model must be proper 3D.
The view will read the location of all the game objects and project them onto the screen using the camera definition. For this part you don't need a full 3D rendering engine. The correct technical term for the perspective you're talking about is "oblique", and it can be seen in many ancient Chinese and Japanese scroll paintings and prints - in particular, look for images of "The Tale of Genji".
The on-screen position of an object (including the ground surface!) goes something like this:
DEPTH_RATIO = 0.5;
view_x = model_x - model_z * DEPTH_RATIO - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
You can modify this for a straight orthographic front projection:
DEPTH_RATIO = 0.5;
view_x = model_x - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
And of course don't forget to cull objects outside the volume defined by the camera.
You can also use this mechanism to handle the positioning of parallax layers for you. This is, of course, a matter of changing your camera to a 1-point perspective projection instead of an orthographic projection. You don't have to use this to change the rendered size of your sprites, but it will help you manage the x position of objects realistically. If you're up for a challenge, you could even mix projections - use 1-point perspective for deep backgrounds, and the orthographic stuff for the foreground.
You should separate the conceptual Y axis used by your physics calculations (collision detection etc.) from the Y axis you actually draw on the screen. That way it becomes less confusing.
Just do the calculations as normal, pretending there is no relationship between the Y and Z axes; then, when you actually draw the object on the screen, simulate the Z axis using the Y axis:
screen_Y = Y + Z / some_fudge_factor;
Actually, this is how real 3D engines work: after all the world calculations are done, the X, Y and Z coordinates are mapped onto screen_X and screen_Y via a function (usually a bit more complicated than the equation above, but just a bit).
For example, to implement a pseudo-isometric view in your game you can even apply Z to the screen_X axis, so objects are displaced diagonally instead of vertically.
