I am able to detect QR codes using AVCaptureSession and AVMetadataObjectTypeQRCode. My question is: can I also detect the transformation (angle) at which the QR code appears in the camera output, so that I can place an image on my view with the same transformation?
Is there a way to make Amazon Textract return the skew angle of the PDF document it is processing?
When I run detection on a document that has been scanned in, Textract attempts to de-skew it, but its calculations are slightly off. There doesn't seem to be a way that I can find to access the raw bounding box or geometry of each LINE or WORD; everything is fixed to the de-skewed position.
Can I find the skew angle another way or can I disable de-skewing on the request somehow?
I am trying to build my custom Photosphere viewer to run using SDL2 and a custom IMU I purchased. So far, I have managed to read IMU values, open the .jpg and display it using SDL2.
My issue is how to make sense of the IMU data in order to read the right parts of the jpg. Basically, I do not want to display the whole jpg, just parts of it based on the IMU data (I receive Euler angles or quaternions). Right now I am just using a single mono photosphere (I am not concerned with stereo yet), which is stored as an equirectangular projection, and I need to use the IMU to get it to a polar projection (I believe?).
I am not sure how to index into the jpg based on IMU data to create a working photosphere viewer, and I cannot seem to find a good explanation of how to address the jpg. Can anyone point me in the right direction? Thanks!
I was able to find a really great OpenGL-based simple Python photosphere viewer here. I then just needed to create a rotation matrix from the IMU sensor data. There are good tutorials on converting from a quaternion to a rotation matrix, like this one.
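For reference, the core of that conversion is small. Here is a Java sketch of the standard unit-quaternion-to-rotation-matrix formula (the class and method names are mine, not from the linked tutorial):

// Convert a unit quaternion q = (w, x, y, z) from the IMU into a
// 3x3 rotation matrix, returned in row-major order.
final class QuatToMatrix {
    static float[] toMatrix(float w, float x, float y, float z) {
        return new float[] {
            1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
            2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
            2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)
        };
    }
}

The quaternion must be normalized first; otherwise the matrix picks up a uniform scale.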
I have made a 3D cone with CSS3DRenderer and TrackballControls and it works properly.
http://chito.hk/three-test/
But now I want to modify it to let the user input values to control the camera rotation, rather than using TrackballControls.
http://chito.hk/three-test/index_static.php
But the camera does not respond to the lookAt() call. Can anyone tell me which part I am doing wrong?
Add a render() call after your lookAt(). Without TrackballControls driving a render loop, the scene is only redrawn when you call render() explicitly, so camera changes are not visible until then.
I'm writing a software renderer which is currently working well, but I'm trying to get perspective correction of texture coordinates and that doesn't seem to be correct. I am using all the same matrix math as OpenGL for my renderer. To rasterise a triangle I do the following:
transform the vertices using the modelview and projection matrices into clip coordinates.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals etc.)
to correct for perspective I use perspective-correct interpolation:
(w is depth coordinate of vertex, c is texture coordinate of vertex, b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?
See in this image the error in the texture coordinates, evident along the diagonal; this is the result after the division by the depth coordinates.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
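For concreteness, here is the interpolation above written out as code (a direct transcription of the formulas; the names are hypothetical):

// b0..b2: screen-space barycentric weights (summing to 1)
// w0..w2: clip-space w of the three vertices
// c0..c2: the attribute (e.g. a texture coordinate) at each vertex
final class PerspectiveInterp {
    static float interpolate(float b0, float b1, float b2,
                             float w0, float w1, float w2,
                             float c0, float c1, float c2) {
        float oneOverW = b0 / w0 + b1 / w1 + b2 / w2;              // 1/w
        float cOverW = b0 * c0 / w0 + b1 * c1 / w1 + b2 * c2 / w2; // c/w
        return cOverW / oneOverW;                                   // c
    }
}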
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface. That's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, in the 0.0f-1.0f range) by the Z component (the world-space depth of each corner's XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction.
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called, "Navigating Static Environments Using Image-Space Simplification and Morphing" - page 9 appendix A.
Hope this helps!
ct
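One common way to implement this kind of trick in fixed-function OpenGL is with four-component texture coordinates: scale s and t by a per-corner factor q and pass q as the fourth coordinate, so the hardware's per-pixel s/q, t/q division reproduces the projective mapping across the quad. A sketch assuming JOGL 2 bindings (the helper and its parameters are hypothetical):

import com.jogamp.opengl.GL2;

// Hypothetical helper: draw one quad with projective ("q") texture
// coordinates. q[i] is a per-corner scale factor (e.g. that corner's
// depth); because OpenGL divides s and t by q per pixel, this yields a
// homographic rather than affine texture mapping across the quad.
final class ProjectiveQuad {
    static void draw(GL2 gl, float[][] xyz, float[] q) {
        float[][] uv = { {0f, 0f}, {1f, 0f}, {1f, 1f}, {0f, 1f} };
        gl.glBegin(GL2.GL_QUADS);
        for (int i = 0; i < 4; i++) {
            gl.glTexCoord4f(uv[i][0] * q[i], uv[i][1] * q[i], 0f, q[i]);
            gl.glVertex3f(xyz[i][0], xyz[i][1], xyz[i][2]);
        }
        gl.glEnd();
    }
}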
The only correct transformation from UV coordinates to a 3D plane is a homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write out the projection of any point of the texture (the same projection as for the vertices) and invert it to get texture coordinates from screen coordinates.
It will come out in the form of a homographic transform.
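Concretely, in the same plain-text notation as the question, a plane-to-plane homography maps screen coordinates (x, y) to texture coordinates (u, v) as (a through h are its eight parameters):

u = (a*x + b*y + c) / (g*x + h*y + 1)
v = (d*x + e*y + f) / (g*x + h*y + 1)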
Yeah, that looks like your traditional broken-perspective dent. Your algorithm looks right, though, so I'm really not sure what could be wrong. I would check that you're actually using the newly calculated value later on when you render. This really looks like you went to the trouble of calculating the perspective-correct value and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you need perspective correction on pixels with:
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
What you are observing is the typical distortion of linear (affine) texture mapping. On hardware that is not capable of per-pixel perspective correction (the PS1, for example), the standard solution is just to subdivide into smaller polygons to make the defect less noticeable.
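To illustrate the subdivision approach (a sketch with made-up names, not from any particular engine): split each textured quad into an n-by-n grid, interpolating positions and UVs bilinearly; with affine texturing, the visible distortion shrinks with the quad size.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split a textured quad (corners c[0]..c[3], CCW)
// into an n-by-n grid of smaller quads. Each vertex is {x, y, z, u, v}.
final class QuadSubdivider {
    static List<float[][]> subdivide(float[][] c, int n) {
        List<float[][]> quads = new ArrayList<>();
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                quads.add(new float[][] {
                    corner(c, (float) i / n,       (float) j / n),
                    corner(c, (float) (i + 1) / n, (float) j / n),
                    corner(c, (float) (i + 1) / n, (float) (j + 1) / n),
                    corner(c, (float) i / n,       (float) (j + 1) / n)
                });
        return quads;
    }

    // Bilinear interpolation across the quad at parameters (s, t).
    private static float[] corner(float[][] c, float s, float t) {
        float[] r = new float[5];
        for (int k = 0; k < 5; k++) {
            float bottom = c[0][k] + (c[1][k] - c[0][k]) * s;
            float top    = c[3][k] + (c[2][k] - c[3][k]) * s;
            r[k] = bottom + (top - bottom) * t;
        }
        return r;
    }
}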
Dear Friends, can anyone tell me how to show a picture in a GLCanvas, and how to rotate that picture in the GLCanvas using the mouse? I am new to JOGL development. If possible, please provide a code snippet and a reference site so I can get a clear idea of JOGL development.
regards,
s.kumaran.
To show an image on a GLCanvas, create a polygon using gl.glBegin(GL.GL_POLYGON) and load the texture using the TextureIO class. Then, using a MouseListener from Java Swing, you can easily control the rotation of the image (i.e., the textured polygon) by changing the position of the camera or applying transformations (gl.glRotate(angle, x-axis, y-axis, z-axis) in your case) to the modelview matrix.
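A minimal sketch of that idea, assuming JOGL 2's package layout (com.jogamp.opengl); the file name and the drag-to-angle mapping are placeholders:

import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionAdapter;
import java.io.File;
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.GLAutoDrawable;
import com.jogamp.opengl.GLEventListener;
import com.jogamp.opengl.awt.GLCanvas;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.TextureIO;

// Draw one textured quad and rotate it by an angle driven by mouse drags.
final class RotatingPicture implements GLEventListener {
    private Texture texture;
    private volatile float angle;

    public void init(GLAutoDrawable d) {
        try {
            texture = TextureIO.newTexture(new File("picture.jpg"), false);
        } catch (Exception e) { throw new RuntimeException(e); }
        d.getGL().getGL2().glEnable(GL2.GL_TEXTURE_2D);
    }

    public void display(GLAutoDrawable d) {
        GL2 gl = d.getGL().getGL2();
        gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
        gl.glLoadIdentity();
        gl.glRotatef(angle, 0f, 0f, 1f);   // spin around the view axis
        texture.bind(gl);
        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0f, 0f); gl.glVertex2f(-0.5f, -0.5f);
        gl.glTexCoord2f(1f, 0f); gl.glVertex2f( 0.5f, -0.5f);
        gl.glTexCoord2f(1f, 1f); gl.glVertex2f( 0.5f,  0.5f);
        gl.glTexCoord2f(0f, 1f); gl.glVertex2f(-0.5f,  0.5f);
        gl.glEnd();
    }

    public void reshape(GLAutoDrawable d, int x, int y, int w, int h) {}
    public void dispose(GLAutoDrawable d) {}

    void attachMouse(GLCanvas canvas) {
        canvas.addMouseMotionListener(new MouseMotionAdapter() {
            public void mouseDragged(MouseEvent e) {
                angle = e.getX();   // crude: map drag position to degrees
                canvas.display();   // request a redraw
            }
        });
    }
}

Register it with canvas.addGLEventListener(...) and call attachMouse(canvas); the rotation itself is just the glRotatef applied to the modelview matrix, as described above.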
The easiest way to do this is to texture a quad with the picture and then apply affine transforms to that quad. Rendering the quad will show the rotating picture, and you can do pretty much any transform by shifting the quad's vertices.
I'm assuming that you are drawing a 3D scene and want to change its orientation, rather than having a 2D image which you wish to rotate.
The short answer is that it happens in two parts. You need to store the orientation of your scene as a 4x4 matrix (a homogeneous matrix - search for the term if you don't know what that is). First, write code that translates a mouse drag into a change to that 4x4 matrix, so that when the mouse is dragged up you apply an appropriate rotation (or whatever is appropriate) to the matrix.
Then you need to redraw the scene, but using the new transformed 4x4 matrix. Use glMatrixMode to specify which matrix (use either GL_PROJECTION or GL_MODELVIEW) and then functions like glMultMatrixf() to manipulate the appropriate matrix.
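A sketch of that two-part structure, assuming JOGL 2 (the class, method names, and the choice of a Y-axis rotation are illustrative, not from any particular tutorial):

import com.jogamp.opengl.GL2;

// Keep the scene orientation as a 4x4 matrix (column-major, as OpenGL
// expects), update it from mouse drags, and multiply it into the
// modelview matrix before drawing.
final class Orientation {
    // Start from the identity.
    private final float[] m = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1
    };

    // Apply a rotation about the Y axis, e.g. for a horizontal drag.
    void rotateY(float degrees) {
        double r = Math.toRadians(degrees);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        float[] rot = {
             c, 0, -s, 0,
             0, 1,  0, 0,
             s, 0,  c, 0,
             0, 0,  0, 1
        };
        multiplyInto(rot);
    }

    // m = m * rot (column-major 4x4 multiply).
    private void multiplyInto(float[] rot) {
        float[] out = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    out[col * 4 + row] += m[k * 4 + row] * rot[col * 4 + k];
        System.arraycopy(out, 0, m, 0, 16);
    }

    // Called each frame before drawing the scene.
    void apply(GL2 gl) {
        gl.glMatrixMode(GL2.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glMultMatrixf(m, 0);
    }
}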
If that didn't make sense pick up an OpenGL tutorial on how to rotate scenes. OpenGL and JOGL are close enough that methods from OpenGL work in JOGL.