Image Plot component

On page 136 of the ILNumerics CTP (RCh) user manual, there is a mention of an Image Plot in the "future" section.
Is this the name of an upcoming component similar to the TwoDMode of a 3D surface in a PlotCube, but optimized for 2D rendering? Could you describe its use case and functionality?
(I would appreciate the possibility to quickly draw image plots (like MATLAB's imagesc) even with the GDI backend. Currently GDI is too slow to render 700x700 ILSurface objects in a PlotCube with TwoDMode=true.)

imagesc - as you noticed - can be realized by a common surface plot in 2D mode. A 'real' imagesc plot would hardly do anything else. If the GDI renderer is too slow on your hardware, I'd suggest one of the following:
switch to an OpenGL driver, or
decrease the size of the rendering output, or
avoid transparent colors (Wireframe or Fill), or
decrease the number of grid columns / rows in the surface
Note, the GDI renderer is mostly provided as a fallback for OpenGL and for offscreen rendering. It uses a decent scanline / z-buffer renderer, but naturally it cannot deliver the same speed as a hardware-accelerated OpenGL driver. However, 700x700 output should work even with GDI on recent hardware (at least a couple of frames per second, I would guess).

Related

When I use setUseOpenGL(true), some properties of QChartSeries are disabled

First, my Qt environment is 5.12.0 with MSVC 2017 64-bit; I used 5.10.0 with MSVC 2017 64-bit before, but the result is the same.
For example, in QLineSeries, setPointLabelsVisible and setPointsVisible are disabled. In QScatterSeries, setMarkerShape is disabled, just like in the pictures I uploaded.
pic 1: without setUseOpenGL(true)
pic 2: with setUseOpenGL(true) - the marker shape became a solid block instead of a circle with an edge
I tried setting these properties after calling setUseOpenGL(true), but it doesn't work.
I want to know how to keep these properties enabled when using setUseOpenGL(true).
I have just reviewed the source code of Qt Charts. All series classes inherit from QAbstractSeries, which provides functions such as setUseOpenGL. The documentation notes:
The OpenGL acceleration of series drawing is meant for use cases that need fast drawing of large numbers of points. It is optimized for efficiency, and therefore the series using it lack support for many features available to non-accelerated series:
Series animations are not supported for accelerated series.
Point labels are not supported for accelerated series.
Pen styles and marker shapes are ignored for accelerated series.
Only solid lines and plain scatter dots are supported.
The scatter dots may be circular or rectangular, depending on the underlying graphics hardware and drivers.
Polar charts do not support accelerated series.
Enabling chart drop shadow or using transparent chart background color is not recommended when using accelerated series, as that can slow the frame rate down significantly.
I think this means setUseOpenGL is intended for high-performance drawing, and it therefore does not support many features that are available when OpenGL acceleration is not used.
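To make the difference concrete, here is a minimal sketch (assuming a standard Qt Charts setup with QT += charts; widget details kept to a bare minimum). Without the setUseOpenGL(true) call the marker shape and point labels are honored; with it, the series takes the accelerated path and those settings are silently ignored.

    // Minimal Qt Charts sketch (Qt 5.12): marker shape vs. OpenGL acceleration.
    // Assumes QT += charts in the .pro file.
    #include <QApplication>
    #include <QtCharts/QChartView>
    #include <QtCharts/QScatterSeries>

    using namespace QtCharts;

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        auto *series = new QScatterSeries;
        series->append(1.0, 1.0);
        series->append(2.0, 3.0);
        series->setMarkerShape(QScatterSeries::MarkerShapeCircle); // honored without OpenGL
        series->setPointLabelsVisible(true);                       // honored without OpenGL

        // With the line below enabled, the series is drawn through the OpenGL
        // fast path: point labels disappear and the marker shape falls back to
        // whatever the hardware/driver provides (often a plain rectangle).
        series->setUseOpenGL(true);

        auto *chart = new QChart;
        chart->addSeries(series);
        chart->createDefaultAxes();

        QChartView view(chart);
        view.resize(400, 300);
        view.show();
        return app.exec();
    }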

How to avoid strange structure artifacts in scaled images?

I create a large overview image stitched together from many individual microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry and contain strange structural artifacts such as askew lines (not the rectangles; those are due to imperfect stitching).
If I open any individual tile at full size, it is not blurry and the artifacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them into the corresponding region of the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image content, since most of the overview images are fine.
The question is: how can I prevent such barely observable artifacts from becoming clearly visible after scaling? Is there some means of doing this in OpenCV or QImage?
Is there an algorithm to find out whether the image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clear? Do you have electrical components that interfere with the camera connection?
If you add up image frames of a uniform material (or of a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to filter out the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly (a code sketch follows the steps below):
ground_output: a composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus)
ground_input: the average (or sqrt of the sum of px^2) of ground_output
calib_image: ground_input /(per px) ground_output. Saved for the session, or persisted in a file (important: ensure no lossy compression! (jpeg))
work_input: the images to work on
work_output = work_input *(per px) calib_image: the images calibrated for systematic noise
If you can't create a perfect ground target, such as having a uniform material on hand, don't worry too much. If you move any material uniformly (or randomly) for long enough, it will act as a uniform material in this case (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun, you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5).
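For illustration, here is a rough sketch of the procedure above in OpenCV (assuming single-channel frames; the function and variable names are mine, not an established API):

    // Flat-field style calibration sketch (OpenCV, C++).
    // groundFrames: >10 frames of a uniform (or randomly moved) target.
    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Mat makeCalibImage(const std::vector<cv::Mat> &groundFrames)
    {
        // ground_output: per-pixel sum of the ground frames (float to avoid clipping)
        cv::Mat groundOutput = cv::Mat::zeros(groundFrames[0].size(), CV_32F);
        for (const cv::Mat &f : groundFrames) {
            cv::Mat f32;
            f.convertTo(f32, CV_32F);
            groundOutput += f32;
        }

        // ground_input: the mean value of ground_output (a single scalar)
        double groundInput = cv::mean(groundOutput)[0];

        // calib_image = ground_input / ground_output, per pixel.
        // Store this losslessly (e.g. TIFF or an .xml/.yml), never as JPEG.
        cv::Mat calibImage;
        cv::divide(groundInput, groundOutput, calibImage); // scalar / matrix
        return calibImage;
    }

    cv::Mat applyCalibration(const cv::Mat &workInput, const cv::Mat &calibImage)
    {
        // work_output = work_input * calib_image, per pixel
        cv::Mat work32, workOutput;
        workInput.convertTo(work32, CV_32F);
        cv::multiply(work32, calibImage, workOutput);
        return workOutput;
    }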
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference, you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or by fitting something to filter the noise (baluns, for example).

How can I transform my canvas in PlayN?

Application screenshot: http://i.imgur.com/0uVKZiL.png
Source file: http://pastie.org/private/rcgm6o7qso8y0vz8nfjn0w
My application draws curves and I would like to be able to zoom in and out. When I apply a scale and translate transformation, the mapCanvasImage.canvas().gfx.transform changes accordingly, but nothing changes on the screen.
I used to have a different render approach (source code) in which the transformation did work, but there I could not get the layer to clear after each paint (results from previous paint iterations were still visible).
Perhaps (or likely) I am doing something fundamentally wrong. :) Any advice?
The Canvas offers nice high-level functions to draw Bézier curves, but it is apparently flawed. My current plan is to abandon the Canvas and write my own code to convert Bézier curves to line segments. This is really easy and gives the added benefit of using less CPU power, since the Canvas is not hardware-accelerated.
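For reference, a minimal sketch of that conversion, using uniform parameter sampling with a fixed segment count (plain structs, not the PlayN API):

    // Flatten a cubic Bézier curve into line segments by uniform sampling.
    #include <vector>

    struct Point { float x, y; };

    // Evaluate the cubic Bézier defined by p0..p3 at parameter t in [0, 1].
    static Point cubicBezier(Point p0, Point p1, Point p2, Point p3, float t)
    {
        float u = 1.0f - t;
        float b0 = u * u * u;
        float b1 = 3.0f * u * u * t;
        float b2 = 3.0f * u * t * t;
        float b3 = t * t * t;
        return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
                 b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
    }

    // Return segmentCount + 1 points approximating the curve; draw lines between them.
    std::vector<Point> flatten(Point p0, Point p1, Point p2, Point p3, int segmentCount)
    {
        std::vector<Point> pts;
        pts.reserve(segmentCount + 1);
        for (int i = 0; i <= segmentCount; ++i)
            pts.push_back(cubicBezier(p0, p1, p2, p3, float(i) / segmentCount));
        return pts;
    }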

Convert 4 corners into a matrix for OpenGL to Direct3D sprite conversion

I am working on code for Scrolling Game Development Kit. An old release (2.0) of this program was based on DirectX and used Direct3D Sprite objects to draw all the graphics. It used the Transform property of the sprite object to specify how the texture rectangle would be transformed as it was being output to the display. The current release (2.1) was a conversion to OpenGL and uses GL TexCoord2 and GL Vertex2 calls to send the coordinates of the source and output rectangles for drawing sprites. Now someone says that their video card worked great with DirectX, but their OpenGL drivers do not support the ARB extension necessary to use NPOTS textures (pretty basic). So I'm trying to go back to DirectX without reverting everything to 2.0. Unfortunately, it seems much easier to get 4 points given a matrix than to get a matrix given 4 points. I have done away with all the matrix info in version 2.1, so I only have the 4 corner points left when calling the function that draws images on the display. Is there any way to use the 4 corner information to transform a Direct3D Sprite?
Alternatively, does anybody know why DirectX would be able to do something that OpenGL can't? Are some video cards' drivers just that bad, where DirectX supports NPOTS textures but OpenGL doesn't?
It's probably worth reading up on how bump mapping is done. See e.g. this site. You end up with a tangent-space matrix, which maps from world space to tangent space (the space relative to the current face). Its purpose is to take a vector in world space, generally a vector from a light, and convert it into a vector in tangent space, that being the space in which your texture defines surface normals.
Anyway, if you inverted that matrix you'd have a mapping from tangent space to world space, which I think is what you want. The mapping produced in that tutorial is purely for direction vectors, but expanding it out to a 4x4 and anchoring the origin somewhere meaningful shouldn't be difficult.
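As a rough illustration of that last step (assuming the sprite's four corners form a parallelogram, so three corners determine the fourth; a plain column-major matrix, not the D3DX types): build the matrix from two edge vectors and anchor the translation at the first corner, so the unit texture square maps onto the quad.

    // Build a 4x4 that maps the unit square (0,0)-(1,1) in the XY plane onto a
    // quad given by its corners. Exact for parallelograms; the fourth corner is
    // implied by the other three. Column-major layout, column vectors.
    struct Vec3 { float x, y, z; };

    struct Mat4 {
        float m[16]; // column-major: m[col * 4 + row]
    };

    Mat4 quadToMatrix(Vec3 origin,      // corner at texture coord (0,0)
                      Vec3 rightCorner, // corner at texture coord (1,0)
                      Vec3 upCorner)    // corner at texture coord (0,1)
    {
        Vec3 u = { rightCorner.x - origin.x, rightCorner.y - origin.y, rightCorner.z - origin.z };
        Vec3 v = { upCorner.x - origin.x,    upCorner.y - origin.y,    upCorner.z - origin.z };

        Mat4 r = {};
        // Column 0: image of the X axis (the quad's "right" edge).
        r.m[0] = u.x;  r.m[1] = u.y;  r.m[2] = u.z;
        // Column 1: image of the Y axis (the quad's "up" edge).
        r.m[4] = v.x;  r.m[5] = v.y;  r.m[6] = v.z;
        // Column 2: Z axis left unchanged (flat sprite).
        r.m[10] = 1.0f;
        // Column 3: translation - the anchored origin.
        r.m[12] = origin.x;  r.m[13] = origin.y;  r.m[14] = origin.z;
        r.m[15] = 1.0f;
        return r;
    }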

3D Software Renderer with VB6

I am an IT student and I have to make a project in VB6. I was thinking of making a 3D software renderer, but I don't really know where to start. I found a few tutorials, but I want something that goes in depth into the maths and algorithms; I would like something that shows how to do 3D transformations, cameras, lights, shading, and so on.
The programming language used does not matter; I just need some resources that show me exactly how to do this.
So I just want to know where to find some resources, or you can show me some source code and tell me where to start.
Or if any of you have a better idea for a VB6 project.
Thanks.
I disagree with the previous posts: a 3D renderer is actually pretty simple. A high-quality 3D renderer, however, is hard.
Get a bunch of 3D data, triangles are simplest.
Learn about homogeneous coordinates and the great 4x4 matrix for transforms.
Define a camera by a position and a rotation (expressed in the 4x4 matrix).
Transform your 3D geometry by this camera.
Perform the perspective divide and scale to your window. This converts your 3D data to 2D.
Render the data as 2D.
Now you're going to lose out on a depth buffer, so stick to wireframes in the beginning. :-)
Don't listen to these nay-sayers, go out and have some fun!
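A bare-bones sketch of the transform / perspective-divide / scale steps above (plain structs and a simple pinhole projection; not tied to any particular API):

    // Transform a 3D point by a 4x4 camera matrix, then perspective-divide and
    // scale into window coordinates.
    struct Vec4 { float x, y, z, w; };

    struct Mat4 { float m[4][4]; }; // m[row][col], applied to column vectors

    Vec4 transform(const Mat4 &a, const Vec4 &p)
    {
        return {
            a.m[0][0]*p.x + a.m[0][1]*p.y + a.m[0][2]*p.z + a.m[0][3]*p.w,
            a.m[1][0]*p.x + a.m[1][1]*p.y + a.m[1][2]*p.z + a.m[1][3]*p.w,
            a.m[2][0]*p.x + a.m[2][1]*p.y + a.m[2][2]*p.z + a.m[2][3]*p.w,
            a.m[3][0]*p.x + a.m[3][1]*p.y + a.m[3][2]*p.z + a.m[3][3]*p.w
        };
    }

    // Project a camera-space point to 2D window coordinates.
    // 'focal' plays the role of the projection (field of view); width/height
    // are the window size in pixels.
    void projectToScreen(Vec4 p, float focal, int width, int height,
                         float &sx, float &sy)
    {
        // Perspective divide: farther points move toward the center.
        float px = focal * p.x / p.z;
        float py = focal * p.y / p.z;

        // Scale and shift so the view-space origin maps to the window center.
        sx = width  * 0.5f + px;
        sy = height * 0.5f - py; // flip Y: screen Y grows downward
    }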
Many years ago I made a shaded triangle renderer that used library calls to draw the triangles. It's a rather naive approach, but you would be able to achieve the same result using VB6. I got all the maths and techniques from "Computer Graphics: Principles and Practice" by Foley et al. Some parts are out of date now, but I think you'd find it very helpful for this project, and it can be bought second-hand at reasonable prices from Amazon, for example.
One simple approach could be:
Read model file as triangles
Transform each triangle using matrices to account for camera position
Project triangle points onto 2D
Draw 2D triangle (probably using GDI)
This covers wireframe viewing. To extend this to hidden surface removal you need to work out which triangles are in front. Two possible ways:
Z-order sorting the triangles and drawing the ones furthest from the camera first (see the sketch after this list). This is simple but inefficient if there are a lot of triangles, and it can give overlapping-triangle effects when the order is not quite correct. You also have to decide how to sort the triangles - e.g. by centroid, by extents...
Using a software depth buffer. This will give better results but is more work to implement. You will have to write your own triangle-drawing code, so you cannot rely on GDI. See Bresenham's line algorithm and the related algorithms for filled triangles for how to do this.
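A small sketch of the first option, centroid-based painter's-algorithm sorting (plain structs; the actual 2D drawing call is left to the caller):

    // Painter's algorithm: sort triangles by centroid depth (camera space) and
    // draw the farthest ones first, so nearer triangles paint over them.
    #include <algorithm>
    #include <functional>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 v[3]; };

    // Average z of the three vertices in camera space (larger z = farther away).
    static float centroidDepth(const Triangle &t)
    {
        return (t.v[0].z + t.v[1].z + t.v[2].z) / 3.0f;
    }

    void drawScene(std::vector<Triangle> tris,
                   const std::function<void(const Triangle &)> &drawTriangle2D)
    {
        std::sort(tris.begin(), tris.end(),
                  [](const Triangle &a, const Triangle &b) {
                      return centroidDepth(a) > centroidDepth(b);
                  });
        for (const Triangle &t : tris)
            drawTriangle2D(t); // project to 2D and fill, e.g. with GDI
    }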
After this you'd also need some kind of shading based on lighting. The calculations are covered in Computer Graphics: Principles and Practice. For simple shading you can stick with drawing triangles using GDI, but if you want to do Gouraud or Phong shading, the colour values vary across a triangle. One way around this is to subdivide the triangle into smaller triangles, but this is inefficient and won't give very nice-looking results. Better would be to draw the triangles yourself, as required above for the software depth buffer.
A good extension would be to support primitives other than triangles. Basic approach would be to split primitives into triangles as you read them.
Good luck - could be an interesting project.
VB6 is not the best-suited language for maths and 3D graphics, and given that you have no previous knowledge of the subject either, I would recommend that you choose something different (and easier).
As it's Visual Basic, you could try something more form-oriented; that is the original intent of the language.
There is the 3D engine list, which lists three engines in pure BASIC (an oxymoron) with source code, and one of them is in Visual Basic (Dex3D):
DeX3D is an open source 3D engine coded entirely in Visual Basic by Jerry Chen ( -onlyuser#hotmail.com ).
Gouraud shading
Transparency
Fogging
Omni and spot lights
Hierarchical meshes
Support for 3D Studio files
Particle systems
Bezier curve segments
2.5 D text
Visual Basic source
More information, screenshots and the source can be found on the Dex3D homepage. (<= dead link)
EGL25 by Erkan Sanli is a fast open-source VB6 renderer that can render, rotate, animate, etc. complex solid shapes made of thousands of polygons. It uses just Windows API calls – no DirectX, no OpenGL.
VBMigration.com chose EGL25 as a high-quality open-source VB6 project to demonstrate their VB6 to VB.Net upgrade tool.
A 3D software renderer as a whole project is fairly complex if you've never done it before. I would suggest something smaller, like just doing the 3D portion and using lines to do the rendering, or just writing a shaded triangle renderer (which is the underpinning of 3D renderers anyway).
Something a little simpler, rather than trying to write a full-blown 3D software renderer on the first go - especially in VB.
A software renderer is a very difficult project, and VB6 is not well suited to the task at all (for something like this, C++ is the way to go). Anyway, I can suggest some great books I used:
Shaders: http://wiki.gamedev.net/index.php/D3DBook:Introduction_%28Volume%29
Math: 3D Math Primer for Graphics and Game Development
There are two other books. Even though they are for VB.NET, you can find some useful code in them:
.NET Game Programming with DirectX 9.0
Beginning .NET Game Programming in VB .NET
I think you can go one of two ways. Either go the DirectX way and use DirectX 8, which has VB 5/6 support; I found a page: http://www.gamedev.net/reference/articles/article1308.asp
Or you can always write an engine from the ground up, but to do so you will need some basic linear algebra, as Frank Krueger suggests.

Resources