I am experimenting with shaders and have a question I cannot find the answer to.
If I have two quads which intersect (not fully), can I render the intersecting fragments differently in a fragment shader?
For example, one quad is red and the other is green (the green one is on top of the red one), and half of the pixels intersect. Can I render the intersecting pixels black and the others red/green using a fragment shader (and not with blending)? I am using OpenGL ES 2.0. Thanks!
Custom blending in GL ES 2.0 can be achieved if the GPU supports the EXT_shader_framebuffer_fetch extension. You can read about it here: http://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_framebuffer_fetch.txt
This way you can read the current framebuffer color in the fragment shader through the gl_LastFragData variable and decide how to change the output color.
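A minimal sketch of such a shader, with the GLSL ES source shown as a JavaScript string to match the other snippets in this collection (the uniform name u_quadColor is an assumption; note the extension is a native GLES feature, not available in WebGL):

const overlapFragmentShader = `
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
uniform vec4 u_quadColor;                 // the quad's own color (assumed uniform)
void main() {
    vec4 dst = gl_LastFragData[0];        // color already in the framebuffer
    // drawing the green quad over a pixel the red quad already covered:
    if (u_quadColor.g > 0.5 && dst.r > 0.5) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // overlap becomes black
    } else {
        gl_FragColor = u_quadColor;
    }
}`;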
There are also similar vendor-specific GL extensions for NVIDIA (NV_shader_framebuffer_fetch) and Apple (APPLE_shader_framebuffer_fetch) GPUs.
But please note that this feature is not present on some chips, such as ARM's Mali GPUs (and Qualcomm's Adreno, AFAIK). On such devices you will need to fall back to some other (quite complicated, I believe) solution.
Result of my code: [screenshot]
Basically, the issue is that the transparent parts of my image are not blending correctly with what is drawn before them. I know I can do a
if (alpha <= 0.0) { discard; }
in the fragment shader; the only problem is that I plan on having a ton of fragments and don't want that branch running for every fragment on mobile devices.
Here is my code related to alpha and depth testing:
var gl = canvas.getContext("webgl2", {
    antialias: false,
    alpha: false,
    premultipliedAlpha: false,
});
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.GREATER);
Also, these are textured gl.POINTS I am drawing. If I change the order in which the two images are drawn in the buffer, the problem doesn't exist. However, they will be dynamically rotating at runtime, so fixing the draw order in the buffer is not an option.
It's not clear what your issue is without more code, but it looks like a depth test issue.
Assuming I understand correctly, you're drawing 2 rectangles? If you draw the red one before the blue one, then depending on how you have the depth test set up, the blue one will fail the depth test where the overlapping area is drawn.
You generally solve this by sorting what you draw, making sure to draw things further away first.
For a grid of "tiles" you can generally get the right order by walking the grid itself in the correct direction instead of actually sorting.
On the other hand, if all of your transparency is either fully opaque or fully transparent (draw or don't draw), then discard has its advantages and you can draw front to back. In that case a pixel drawn (not discarded) by the red quad will cause the corresponding pixel of the blue quad to be rejected by the depth test. The depth test is usually optimized to run before the fragment shader for a given pixel; if the test says the pixel will not be drawn, there is no reason to even run the fragment shader, so time is saved. Unfortunately, as soon as you have any transparency that is not 100% opaque or 100% transparent, you need to sort and draw back to front. Some of these issues are covered in this article.
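As a minimal sketch of the back-to-front case (the sprite list, depth field, and drawSprite are assumed names):

// sort transparent sprites far-to-near each frame, then draw in that order
transparentSprites.sort((a, b) => b.depth - a.depth);   // larger depth = farther away
gl.depthMask(false);       // still test against opaque geometry, but don't write depth
for (const sprite of transparentSprites) {
    drawSprite(sprite);    // back-to-front, so blending composites correctly
}
gl.depthMask(true);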
A few notes:
you mentioned mobile devices, and you mentioned WebGL2 in your code sample. There is no WebGL2 on iOS.
you said you're drawing with POINTS. The spec only requires POINTS of 1 pixel in size to be supported. It looks like you're safe up to points of size 60, but to be safe it's generally best to draw with triangles, as there are other issues with points (sketched after these notes).
you might also be interested in sprites with depth
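Here is a minimal sketch of drawing with triangles instead of POINTS, assuming positionLoc is the attribute location from your own shader program:

// a sprite as two triangles (one quad) instead of a single gl.POINTS vertex
const quadVerts = new Float32Array([
    -0.5, -0.5,    0.5, -0.5,   -0.5,  0.5,   // first triangle
    -0.5,  0.5,    0.5, -0.5,    0.5,  0.5,   // second triangle
]);
const quadBuf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuf);
gl.bufferData(gl.ARRAY_BUFFER, quadVerts, gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 6);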
I create a big image stitched out of many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry and contain strange structural artefacts like askew lines (not the rectangles; those are due to imperfect stitching).
If I open any particular tile at full size, it is not blurry and the artefacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them into the corresponding region of the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image content, since most of the overview images are OK.
The question is: how can I prevent such barely observable artefacts from becoming clearly visible after scaling? Is there some means to do so in OpenCV or QImage?
Is there an algorithm to find out whether the image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clean? Do you have electrical components that interfere with the camera connection?
If you add up image frames of a uniform material (or a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to filter out the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly (a code sketch follows the steps):
1. ground_output: a composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus)
2. ground_input: the average (or sqrt(sum of px^2)) of ground_output
3. calib_image: ground_input /(per px) ground_output. Save it for the session, or persist it in a file (important: ensure no lossy compression such as JPEG!)
4. work_input: the images to work on
5. work_output = work_input *(per px) calib_image: the images calibrated for systematic noise.
If you can't create a perfect ground_input target, such as by having a uniform material on hand, do not worry too much. If you move any material uniformly (or randomly) for enough time, it will act as a uniform material in this case (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun, you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5).
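Here is a rough sketch of steps 2, 3 and 5 on single-channel images stored as typed arrays (the function names are illustrative, not from OpenCV or Qt):

// step 1 happens during acquisition: groundOutput[i] = per-pixel sum of many frames
function buildCalibImage(groundOutput) {
    // step 2: ground_input = the average over all pixels of ground_output
    let groundInput = 0;
    for (const v of groundOutput) groundInput += v;
    groundInput /= groundOutput.length;
    // step 3: calib_image = ground_input / ground_output, per pixel
    return groundOutput.map(v => groundInput / v);
}

// step 5: work_output = work_input * calib_image, per pixel
function calibrate(workInput, calibImage) {
    return workInput.map((v, i) => v * calibImage[i]);
}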
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference, then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or by fitting something to filter the noise (baluns, for example).
On page 136 of the user manual for the ILNumerics CTP (RCh), there is a mention of an Image Plot in the "future" section.
Is this the name of a new upcoming component, similar to the TwoDMode of a 3D surface in a PlotCube but optimized for 2D rendering? Could you describe its use case and functionality?
(I would appreciate the possibility to quickly draw image plots (like MATLAB's imagesc), even with the GDI backend. Currently GDI is too slow to render 700x700 ILSurface objects in a PlotCube with TwoDMode = true.)
imagesc - as you noticed - can be realized by a common surface plot in 2D mode. A 'real' imagesc plot would hardly do anything else. If the GDI renderer is too slow on your hardware, I'd suggest you:
switch to an OpenGL driver, or
decrease the size of the rendering output, or
avoid transparent colors (Wireframe or Fill), or
decrease the number of grid columns / rows in the surface
Note that the GDI renderer is mostly provided as a fallback for OpenGL and for offscreen rendering. It uses decent scanline / z-buffer rendering, but naturally it is not able to deliver the same speed as a hardware-accelerated OpenGL driver. However, 700x700 output should work even with GDI on recent hardware (at least a couple of frames per second, I would guess).
I am working on code for the Scrolling Game Development Kit. An old release (2.0) of this program was based on DirectX and used Direct3D Sprite objects to draw all the graphics. It used the Transform property of the sprite object to specify how the texture rectangle would be transformed as it was output to the display. The current release (2.1) was a conversion to OpenGL and uses GL TexCoord2 and GL Vertex2 calls to send the coordinates of the source and output rectangles when drawing sprites. Now someone says that their video card worked great with DirectX, but their OpenGL drivers do not support the GL_ARB extension necessary to use NPOTS textures (pretty basic). So I'm trying to go back to DirectX without reverting everything to 2.0. Unfortunately, it seems much easier to get 4 points given a matrix than to get a matrix given 4 points. I have done away with all the matrix info in version 2.1, so I only have the 4 corner points left when calling the function that draws images on the display. Is there any way to use the 4-corner information to transform a Direct3D Sprite?
Alternatively, does anybody know why DirectX would be able to do something that OpenGL can't? Are some video cards' drivers just that bad, where DirectX supports NPOTS textures but OpenGL doesn't?
It's probably worth reading up on how bump mapping is done. See e.g. this site. You end up with a tangent-space matrix, which maps from world space to tangent space (the space relative to the current face). The purpose of that is to take a vector in world space, generally a vector from a light, and convert it into a vector in tangent space, that being the space in which your texture defines surface normals.
Anyway, if you inverted that matrix you'd have a mapping from tangent space to world space, which I think is what you want? The mapping produced in that tutorial is purely for direction vectors, but expanding it out to a 4x4 and anchoring the origin somewhere meaningful shouldn't be difficult.
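For the 2D sprite case specifically, there is a more direct route: if the four output corners still form a parallelogram (which holds for any combination of rotation, scaling, shearing, and translation), three of them determine the affine matrix. A minimal sketch with illustrative names, where p0, p1, p2 are the transformed positions of the texture rectangle's (0,0), (w,0), and (0,h) corners:

function affineFromCorners(w, h, p0, p1, p2) {
    return {
        m11: (p1.x - p0.x) / w,  m12: (p1.y - p0.y) / w,   // where the x axis goes
        m21: (p2.x - p0.x) / h,  m22: (p2.y - p0.y) / h,   // where the y axis goes
        dx: p0.x,                dy: p0.y,                 // translation
    };
}

If the fourth corner does not equal p1 + p2 - p0, the quad is not a parallelogram, and you would need a full projective (perspective) transform instead.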
I'm looking for information on how to move (and animate) 2D sprites across an isometric game world, with their movement animated smoothly as they travel from tile to tile, as opposed to having them jump from the confines of one tile to the confines of the next.
An example of this would be the game Transport Tycoon, where the trains and carriages are often half in one tile and half in the next.
Drawing the sprites in the right place isn't too difficult. The projection formulas are:
screen_x = sprite_x - sprite_y
screen_y = (sprite_x + sprite_y) / 2 + sprite_z
sprite_x and sprite_y are fixed-point values (or floating point if you want). Usually, the precision of the fixed point is the number of pixels on a tile - so if your tile graphic was 32x16 (a projected 32x32 square), you would have 5 bits of precision, i.e. 1/32nd of a tile.
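As a direct translation into code (a trivial sketch):

// project world tile coordinates to screen coordinates (formulas above)
function project(spriteX, spriteY, spriteZ) {
    return {
        screenX: spriteX - spriteY,
        screenY: (spriteX + spriteY) / 2 + spriteZ,
    };
}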
The really hard part is sorting the sprites into an order that renders correctly. If you use OpenGL for drawing, you can use a z-buffer to make this really easy. Using GDI, DirectX, etc., it is really hard. Transport Tycoon doesn't correctly render the sprites in all instances. The original Transport Tycoon had the most horrendous rendering engine you've ever seen: it implemented the three zoom levels as three instantiations of a massive MASM macro. TT was written entirely in assembler. I know, because I ported it to the Mac many years ago (and did a cool version for the PS1 dev kit as well; it needed 6 MB, though).
P.S. One of the small bungalow graphics in the game was based on the house Chris Sawyer was living in at the time. We were tempted to add a Ferrari parked in the driveway for the Mac version as that was the car he bought with the royalties.
Look up how to do linear interpolation (it's a pretty simple formula). You can then use this to parameterise the transition over a single [0, 1] range. You then simply keep some state in your sprites to store the facts:
That they are moving
Start and end points
Start and end times (or start time and duration)
and then each frame you can draw the sprite in the correct position using an interpolation from the start point to the end point. Once you have exceeded the duration, the sprite gets updated to be not-moving and positioned at the end point/tile.
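A minimal sketch of that per-frame update (the field names are illustrative):

function spriteScreenPosition(sprite, now) {
    // parameterise the move on a single [0, 1] range
    const t = (now - sprite.startTime) / sprite.duration;
    if (t >= 1) {
        sprite.moving = false;                     // arrived: snap to the end tile
        return { x: sprite.end.x, y: sprite.end.y };
    }
    // linear interpolation between start and end points
    return {
        x: sprite.start.x + (sprite.end.x - sprite.start.x) * t,
        y: sprite.start.y + (sprite.end.y - sprite.start.y) * t,
    };
}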
Why do you think it will jump from tile to tile? You can position your sprite at any x,y coordinate.
First create your background screen buffer and then place your sprites on top of it.