Cross fading two jit.windows with OpenGL content

How can I cross fade two jit.windows? One has OpenGL content and the other is just a matrix (webcam capture).

I assume that you want the OpenGL content to end up in the same output window as the webcam capture.
I would advise sending the matrix from the webcam input to a jit.gl.texture object, then rendering it with a jit.gl.videoplane, like so:
----------begin_max5_patcher----------
348.3ocwSsraCCBD7L9q.wYWKamG8wo7eTEEgsWkPDFr.bhihx+dMKFklFop
V0V0Kf2gcmc7vx4DBqRO.VF8E5qTB4bBgfPd.xTLg0xGpkbKlFSAG0U6Yogi
bvfCg2KbYakYGDMftSxU.ck+rdCPOBU071XEp9VgRBNjshqf5dWDsbBsi6p2
ITa2XfZWPiKVjkmRKJCaOOyuUVlkSWOUi0cRBnhhMTzfgih9gYQrPybm5f.s
d4uok6LhAV5Xoz097tjj3WR+4NSj5eOKX9b+Z36utAT9eY.iiFwwgUJd6ewP
wS43LwxuokT7oVxV4lIceusf7yjB0Ge.gRzieqWY08l5H4S2FzqprArNgh6D
Z06xo3lb1IZZ.737XUBKuRB3+S9chi20c.L1IJQgLdksWa7gOlhgBUHDYjYf
ChX9yPDtYzucils2D7t1vx4rPo5Fvn5E3kVhuyWRdCvmkpvo
-----------end_max5_patcher-----------

Related

How to avoid strange structure artifacts in scaled images?

I create a big image stitched out of many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry and contain strange structural artifacts like askew lines (not the rectangles; those are due to imperfect stitching).
If I open any particular tile at full size, it is not blurry and the artifacts are hardly observable. (Consider that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them to the corresponding region of the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image contents, because most of the overview images are OK.
The question is: how can I keep such barely observable artifacts from becoming very clearly visible after scaling? Is there some means in OpenCV or QImage?
Is there an algorithm to find out whether image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clear? Do you have electrical components that interfere with the camera connection?
If you add image frames of photos of a uniform material (or a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to filter out the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly:
1. ground_output: a composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus).
2. ground_input: the average (or sqrt of the sum of px^2) of ground_output.
3. calib_image: ground_input /(per px) ground_output. Saved for the session, or persisted to a file (important: make sure no lossy compression such as JPEG is applied).
4. work_input: the images to work on.
5. work_output = work_input *(per px) calib_image: images calibrated for the systematic noise.
If you can't create a perfect ground_input target, such as by having a uniform material on hand, do not worry too much. If you move any material uniformly (or randomly) for enough time, it will act as a uniform material in this case (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun, you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5).
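As a rough illustration of the steps above, here is a minimal sketch of the per-pixel calibration using OpenCV. The file names, frame count, and grayscale assumption are placeholders, not part of the original answer:

```cpp
// Flat-field calibration sketch following steps 1-5 above.
// Assumes all frames are the same size; file names are placeholders.
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    // Step 1: ground_output = per-pixel sum of N frames over uniform material.
    cv::Mat ground_output;
    const int N = 16; // >10 frames; more is better
    for (int i = 0; i < N; ++i) {
        cv::Mat frame = cv::imread("uniform_" + std::to_string(i) + ".png",
                                   cv::IMREAD_GRAYSCALE);
        frame.convertTo(frame, CV_64F);
        if (ground_output.empty())
            ground_output = frame;
        else
            ground_output += frame;
    }

    // Step 2: ground_input = the average of ground_output.
    double ground_input = cv::mean(ground_output)[0];

    // Step 3: calib_image = ground_input / ground_output, element-wise.
    // Guard against zero pixels in practice. Persist it losslessly
    // (PNG or a raw dump), never JPEG.
    cv::Mat calib_image = ground_input / ground_output;

    // Steps 4-5: apply the per-pixel correction to each working image.
    cv::Mat work_input = cv::imread("tile.png", cv::IMREAD_GRAYSCALE);
    work_input.convertTo(work_input, CV_64F);
    cv::Mat work_output = work_input.mul(calib_image);

    work_output.convertTo(work_output, CV_8U);
    cv::imwrite("tile_calibrated.png", work_output);
    return 0;
}
```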
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or fit something to try to filter the noise (baluns for example).

Is there an easy (and not too slow) way to compare two images in Qt/QML to detect motion

I would like to implement a motion detecting camera in Qt/QML for Nokia N9. I hoped that there would be some built in methods for computing image differences but I can't find any in the Qt documentation.
My first thoughts were to downscale two consecutive images, convert to one bit per pixel, compute XOR, and then count the black and white pixels.
Or is there an easy way of using a library from somewhere else to achieve the same end?
Edit:
I've just found some example code on the Qt developer network that looks promising:
Image Composition Example.
To compare images, Qt has QImage::operator==(const QImage &), but I don't think it will work for motion detection.
But this may help: Python Motion Detection Library + Demo.
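For illustration, here is a minimal sketch of the downscale/XOR/count approach the question itself proposes, using QImage. The downscale size and the 5% threshold are arbitrary assumptions:

```cpp
// Sketch of the downscale-and-diff idea from the question, using QImage.
// The downscale size and motion threshold are arbitrary placeholders.
#include <QImage>

bool motionDetected(const QImage &previous, const QImage &current)
{
    // Downscale both frames so the per-pixel comparison stays cheap.
    const QSize small(32, 24);
    // ThresholdDither avoids diffusion dithering, which would make even
    // identical frames differ after the 1-bit conversion.
    QImage a = previous.scaled(small)
                       .convertToFormat(QImage::Format_Mono, Qt::ThresholdDither);
    QImage b = current.scaled(small)
                      .convertToFormat(QImage::Format_Mono, Qt::ThresholdDither);

    // Count pixels that differ (the "XOR" of the two 1-bit images).
    int changed = 0;
    for (int y = 0; y < small.height(); ++y)
        for (int x = 0; x < small.width(); ++x)
            if (a.pixelIndex(x, y) != b.pixelIndex(x, y))
                ++changed;

    // Report motion when more than ~5% of the pixels changed.
    return changed > small.width() * small.height() / 20;
}
```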

Convert 4 corners into a matrix for OpenGL to Direct3D sprite conversion

I am working on code for Scrolling Game Development Kit. An old release (2.0) of this program was based on DirectX and was using Direct3D Sprite objects to draw all the graphics. It used the Transform property of the sprite object to specify how the texture rectangle would be transformed as it was being output to the display. The current release (2.1) was a conversion to OpenGL and is using GL TexCoord2 and GL Vertex2 calls to send coordinates of the source and output rectangles for drawing sprites. Now someone says that their video card worked great with DirectX, but their OpenGL drivers do not support the GL_ARB extension necessary to use NPOTS textures (pretty basic). So I'm trying to go back to DirectX without reverting everything back to 2.0. Unfortunately, it seems much easier to get 4 points given a matrix than to get a matrix given 4 points. I have done away with all the matrix info in version 2.1, so I only have the 4 corner points left when calling the function that draws images on the display. Is there any way to use the 4-corner information to transform a Direct3D Sprite?
Alternatively, does anybody know why DirectX would be able to do something that OpenGL can't -- are some video cards' drivers just that bad, such that DirectX supports NPOTS textures but OpenGL doesn't?
It's probably worth reading up on how they do bump mapping. See e.g. this site. You end up with a tangent-space matrix, which maps from world space to tangent space (the space relative to the current face). Its purpose is to take a vector in world space, generally a vector from a light, and convert it into a vector in tangent space, that being the space in which your texture defines surface normals.
Anyway, if you inverted that matrix you'd have a mapping from tangent space to world space. Which I think is what you want? The mapping produced in that tutorial is purely for direction vectors, but expanding out to a 4x4 and anchoring the origin somewhere meaningful shouldn't be difficult.
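As an aside, if the four corners form a parallelogram (an affine image of the unit square), a matrix can be built directly from three of them with no inversion at all. A sketch, with a hypothetical Vec2 type and Direct3D's row-vector convention assumed:

```cpp
// Sketch: derive a 4x4 transform from quad corners, assuming the quad is a
// parallelogram (an affine map of the unit square). p00 is the corner that
// (0,0) maps to, p10 the corner for (1,0), p01 the corner for (0,1).
// A true trapezoid would need a projective transform instead.
struct Vec2 { float x, y; };

// Row-major 4x4 laid out the way Direct3D expects (row vector times matrix,
// translation in the fourth row).
void matrixFromCorners(const Vec2 &p00, const Vec2 &p10, const Vec2 &p01,
                       float out[4][4])
{
    // X basis: where the unit x axis ends up.
    out[0][0] = p10.x - p00.x; out[0][1] = p10.y - p00.y; out[0][2] = 0; out[0][3] = 0;
    // Y basis: where the unit y axis ends up.
    out[1][0] = p01.x - p00.x; out[1][1] = p01.y - p00.y; out[1][2] = 0; out[1][3] = 0;
    // Z left untouched.
    out[2][0] = 0; out[2][1] = 0; out[2][2] = 1; out[2][3] = 0;
    // Translation: where (0,0) ends up.
    out[3][0] = p00.x; out[3][1] = p00.y; out[3][2] = 0; out[3][3] = 1;
}
```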

Generating 3D TV stereoscopic output programmatically

Do you know what would be the best approach to generate 3D output for one of these new "3D ready" televisions from software? Our application has some nice 3D visualizations, and we want these to look good.
Also, how feasible is it to generate it from a Flash (Flex) app?
I believe that the gaming and 3DTV industries have paved the way for you. As long as your app already outputs 3D visualizations, it may just be a matter of installing a driver. You can get started with this NVIDIA 3D Stereo User’s Guide, but I believe there's tons of other stuff out there if you look.
See also the answers to this question.
3D televisions can display 3D output only for images shot in 3D. This means "intended for simulated 3D," not just a two-dimensional projection of a 3D image.
Stereoscopy is produced by generating two completely separate images per frame (one for each eye) in which the foreground objects are offset to simulate a 3D image. You cannot take a 2D image and make it into a 3D image; the source frames must be produced as 3D frames from the beginning.
More information:
http://en.wikipedia.org/wiki/3D_television
http://en.wikipedia.org/wiki/Stereoscopy
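To make the two-images-per-frame idea concrete, here is a minimal fixed-function OpenGL sketch assuming a side-by-side 3D TV input format; drawScene(), the camera placement, and the eye separation are all placeholders. (Production stereo usually uses asymmetric projection frusta rather than a simple camera shift.)

```cpp
// Sketch of stereo rendering: draw the scene twice per frame, once per eye,
// with the camera shifted horizontally. Assumes the projection matrix has
// already been set up and the TV accepts side-by-side input.
#include <GL/gl.h>
#include <GL/glu.h>

void drawScene();                    // your existing 3D visualization
const float eyeSeparation = 0.065f;  // ~6.5 cm, a typical interocular distance

void renderStereoFrame(int width, int height)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (int eye = 0; eye < 2; ++eye) {
        // Left half of the window for the left eye, right half for the right.
        glViewport(eye * width / 2, 0, width / 2, height);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // Shift the camera half the eye separation left or right.
        float dx = (eye == 0 ? -0.5f : 0.5f) * eyeSeparation;
        gluLookAt(dx, 0.0, 5.0,   // eye position
                  dx, 0.0, 0.0,   // looking at the scene center
                  0.0, 1.0, 0.0); // up vector
        drawScene();
    }
}
```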

How to rotate a picture using jogl?

Dear friends, can anyone tell me how to show a picture in a GLCanvas and rotate it using the mouse? I am new to JOGL development. Can you please show me how to do this? If possible, provide a code snippet and a reference site to get a clear idea of JOGL development.
regards,
s.kumaran.
To show an image on a GLCanvas, create a polygon using gl.glBegin(GL.GL_POLYGON) and load the texture using the TextureIO class. Then, using a MouseListener in Java Swing, you can easily control the rotation of the image (i.e. the textured polygon) by simply changing the position of the camera or applying transformations (gl.glRotate(angle, x-axis, y-axis, z-axis) in your case) to the model-view matrix.
The easiest way to do this will be to texture a quad with the picture and then apply affine transforms to that quad. Rendering this quad will let you see a rotating picture; you can do pretty much any transform by shifting the vertices of the quad.
I'm assuming that you are drawing a 3D scene and want to change its orientation, rather than having a 2D image which you wish to rotate.
The short answer is that it takes place in two parts. You need to store an orientation of your scene as a 4x4 matrix (homogeneous matrix - search for it if you don't know what that is). You first need to write code that translates a mouse drag into a change of that 4x4 matrix. So when the mouse is dragged up apply an appropriate rotation or whatever to the matrix.
Then you need to redraw the scene, but using the new transformed 4x4 matrix. Use glMatrixMode to specify which matrix (use either GL_PROJECTION or GL_MODELVIEW) and then functions like glMultMatrixf() to manipulate the appropriate matrix.
If that didn't make sense pick up an OpenGL tutorial on how to rotate scenes. OpenGL and JOGL are close enough that methods from OpenGL work in JOGL.
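For reference, here is a minimal sketch of the drag-to-rotate approach described above, written against the C OpenGL API; JOGL mirrors these calls one-to-one on its GL object (gl.glMatrixMode, gl.glRotatef, and so on). The mouse plumbing and the 0.5-degrees-per-pixel factor are assumptions:

```cpp
// Sketch: accumulate rotation angles from mouse-drag deltas, then apply them
// to the model-view matrix before drawing a textured quad.
#include <GL/gl.h>

// Accumulated rotation angles, updated by your toolkit's mouse listener.
static float angleX = 0.0f, angleY = 0.0f;

void onMouseDragged(int dx, int dy)
{
    // Translate drag distance into degrees of rotation (0.5 deg per pixel).
    angleY += dx * 0.5f;
    angleX += dy * 0.5f;
}

void drawTexturedQuad(unsigned int textureId)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(angleX, 1.0f, 0.0f, 0.0f);  // pitch from vertical drags
    glRotatef(angleY, 0.0f, 1.0f, 0.0f);  // yaw from horizontal drags

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
}
```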
