How do we adjust the contrast and brightness of a grayscale or B/W image? - javax.imageio

I am trying to adjust brightness and contrast using ImageUtil.contrast.
It turns out it works for RGB images only, so there is probably something about gray and bi-level images that I am not aware of.
Going pixel by pixel would be too heavy a task; if there is a filter for this, that would be good.
//Contrast
BufferedImage contrastImg = ImageUtil.toBuffered(ImageUtil.contrast(img, 0.3f));
//Brightness
BufferedImage brightenedImg = ImageUtil.toBuffered(ImageUtil.brightness(img, 1.0f));
// Sharpness
BufferedImage sharpenedImg = ImageUtil.sharpen(img, 0.3f);
For RGB it works as expected, but it fails for B/W and grayscale images.
Any ideas?
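I cannot say why ImageUtil.contrast rejects gray images, but as a plain-JDK sketch (an assumption, not ImageUtil's actual behavior): java.awt.image.RescaleOp computes dst = src * scale + offset per band, with clamping, and works directly on a TYPE_BYTE_GRAY raster, so no per-pixel loop is needed. A bi-level (TYPE_BYTE_BINARY) image would have to be converted to grayscale first.

```java
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class GrayAdjust {
    // Brightness/contrast for a grayscale image via the JDK's RescaleOp:
    // every sample becomes clamp(sample * scale + offset).
    public static BufferedImage adjust(BufferedImage src, float scale, float offset) {
        BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_BYTE_GRAY);
        RescaleOp op = new RescaleOp(scale, offset, null);
        op.filter(src.getRaster(), dst.getRaster()); // raster level: pure sample math
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_BYTE_GRAY);
        img.getRaster().setSample(0, 0, 0, 100);
        BufferedImage out = adjust(img, 1.5f, 10f);
        System.out.println(out.getRaster().getSample(0, 0, 0)); // 100 * 1.5 + 10 = 160
    }
}
```

Scale > 1 raises contrast, offset shifts brightness; the raster-level filter avoids any color-space conversion surprises.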

Sprite initial scale based on size on screen

I have a Sprite object with a Width and a Height (texture size).
I want to display the sprite on the screen with the same size as the original texture size.
Because the scene size, the camera position, and the texture sizes are not constant values, I need some way to scale the Sprite.
Most of the time the camera is Perspective, but sometimes it can be Orthographic.
So I need 2 formulas for the scale.
I've found some answers on how to keep the Sprite size constant when zooming, but in those calculations the initial scale is unknown.
Thanks.
One way I do something similar is by recording an initial (optimal) size of the scene and scaling the node based on changes to the scene.
The easiest way to do this is to start with a scene size in which your sprite is exactly the size you want. Record that size before doing any zooming, and only record it ONCE:
var initialScreenWidth = self.size.width
var initialScreenHeight = self.size.height
Now whenever the size of the scene changes (such as when zooming), find the scale by which it has changed:
var scaleWidth = self.size.width/initialScreenWidth
var scaleHeight = self.size.height/initialScreenHeight
Now that you have those two scales, all you have to do is:
texture.xScale = scaleWidth
texture.yScale = scaleHeight
You will have to recompute the scales and set the texture dimensions every time the size of the scene changes (most likely in the update function, so it looks smooth).
Hope this helps!

HLSL: Keep Getting An Oval When I Want a Circle! (Pixel Shader)

I'm trying to tint a circle around the player in my 2D side-scroller, but I keep getting an oval! Here's the relevant part of the code:
if(length(abs(coords - playerCoords)) < .1)
{
color = color * float4(1,0,1,1);
}
return color;
My screen is 1280 wide by 720 tall. I know this is the reason for the distortion, but I don't know enough about the issue to come up with or find a solution. Can someone explain how to compensate for the screen stretch?
Thanks!
-ATD
Multiply the abs() term by float2(1.0, (720./1280.)), or whatever your y/x aspect ratio is, so that both components of the offset are measured in the same units.
The coords you are using are normalized to the 0-1 range in each axis, so you just need to correct them.
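The same correction, sketched outside HLSL in plain Java (names hypothetical): scale one component of the normalized offset by the aspect ratio before taking the length, so the threshold covers the same number of pixels in both axes.

```java
public class AspectCircle {
    // Is (x, y) within radius r of (px, py)? All coordinates are normalized
    // to 0..1 on a screen of screenW x screenH pixels. The y offset is
    // scaled by screenH / screenW so both axes use the same units; on a
    // 1280x720 screen, r = 0.1 then means a circle of 128 px radius.
    static boolean insideCircle(double x, double y, double px, double py,
                                double r, double screenW, double screenH) {
        double dx = x - px;
        double dy = (y - py) * (screenH / screenW); // y/x aspect correction
        return Math.sqrt(dx * dx + dy * dy) < r;
    }

    public static void main(String[] args) {
        // Both offsets are 0.12 in normalized units, but on a 1280x720
        // screen the first is ~154 px (outside) and the second ~86 px (inside).
        System.out.println(insideCircle(0.62, 0.5, 0.5, 0.5, 0.1, 1280, 720)); // false
        System.out.println(insideCircle(0.5, 0.62, 0.5, 0.5, 0.1, 1280, 720)); // true
    }
}
```

Without the correction, both calls would return false even though the two points are at very different pixel distances from the player.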

QGradient ellipse blending

I am currently working on generating "heat-maps" with QPainter and QImage. My method consists of drawing multiple circles with black to transparent QRadialGradients as the QBrush (see "Intensity Map"). Then I apply a gradient map to the intensity map to get the desired "heat-map" effect (see "After Gradient Map").
The issue I am having, which is more apparent in the "After Gradient Map" image, is that the circles are not blending correctly. Where circles overlap they do seem to blend partially, but towards the edges you can clearly see where each circle ends (almost an outer glow). I would like an effect with no visible borders between the circles that blends correctly.
Intensity Map
After Gradient Map (different intensity map)
Code
// Setup QImage and QPainter
QImage *map = new QImage(500, 500, QImage::Format_ARGB32);
map->fill(QColor(255, 255, 255, 255));
QPainter paint(map);
paint.setRenderHint(QPainter::HighQualityAntialiasing);

// Create intensity map
std::vector<int> record = disp_data[idx]; // Data
for (int j = 1, c = record.size(); j < c; ++j) {
    int dm = 150 + record[j] * 100 / 255; // Vary the diameter
    QPen g_pen(QColor(0, 0, 0, 0));
    g_pen.setWidth(0);
    QRadialGradient grad(sensors[j-1].x, sensors[j-1].y, dm/2); // Create gradient
    grad.setColorAt(0, QColor(0, 0, 0, record[j])); // Black, varying alpha
    grad.setColorAt(1, QColor(0, 0, 0, 0)); // Black, completely transparent
    QBrush g_brush(grad); // Gradient QBrush
    paint.setPen(g_pen);
    paint.setBrush(g_brush);
    paint.drawEllipse(sensors[j-1].x-dm/2, sensors[j-1].y-dm/2, dm, dm); // Draw circle
}

// Convert to heat map
for (int i = 0; i < 500; ++i) {
    for (int j = 0; j < 500; ++j) {
        int b = qGray(map->pixel(i, j));
        map->setPixel(i, j, grad_map->pixel(b, 0)); // grad_map is a QImage gradient map
    }
}
As you can see, there is no QPen for the circles. I have tried a variety of blending modes with no success, and I have changed the rendering hint to HighQualityAntialiasing. I have also tried making the circles much larger than the radial gradient, so the gradient cannot be cut off and no border is applied to the outside of the circle.
Any ideas? Thanks!
I think this is a form of mach-banding, which is an optical illusion where changes in luminance are enhanced by the visual system, causing the appearance of bright or dark bands which are not actually present in the image. Typically these are seen on the boundary between two distinct areas, but in the case here I believe it is the sharp discontinuity in the gradients being observed.
Here are some images to demonstrate the issue:
This first image is calculated in software, and consists of three circles each drawn with a radial linear gradient. Mach-band effects should be visible at the edges of the overlap between circles, as these are the points where the gradient sharply changes.
This second image is exactly the same calculation, but instead of being linear along the radius, the gradient is mapped to a curve (I used the first hermite basis function). The bands should almost entirely have disappeared:
As to why this affects a colourised image more, I'm not sure it does. Perhaps in the case above the colourisation, which is effectively a palette lookup, introduces additional banding of its own.
I performed roughly the same colourisation locally, also simply mapping a palette, and the effect is similar:
Fixing this using Qt's linear gradients is probably non-trivial (you could try adding many more control points to the gradient, but you will need quite a few), but calculating such an image in software is not hard. You could also consider post-processing effects, such as adding a blur and/or noise. Anything that breaks the discontinuity in the gradient is likely to help.
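The curve mapping described above can be sketched as follows, assuming the smoothstep form of the first Hermite basis function; in Qt one could sample this function at several extra setColorAt stops instead of relying on a two-stop linear gradient.

```java
public class HermiteFalloff {
    // Intensity falloff over t = distance / radius, t in [0, 1].
    // A linear falloff (1 - t) has a slope discontinuity at t = 1, which
    // the eye amplifies as a Mach band; this Hermite curve has zero slope
    // at both ends, so the circle fades out with no visible rim.
    static double falloff(double t) {
        t = Math.max(0.0, Math.min(1.0, t));
        double s = 1.0 - t;
        return s * s * (3.0 - 2.0 * s); // smoothstep of (1 - t)
    }

    public static void main(String[] args) {
        System.out.println(falloff(0.0)); // 1.0 at the center
        System.out.println(falloff(0.5)); // 0.5 halfway out
        System.out.println(falloff(1.0)); // 0.0 at the rim, with zero slope
    }
}
```

The midpoint value matches the linear gradient; only the behavior near t = 0 and t = 1 changes, which is exactly where the bands appear.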
I agree with JasonD.
Furthermore, please keep in mind that Qt is doing linear blending in sRGB color space, which is not linear (it has a gamma 2.2 applied).
To do this right, you need to do the blending or interpolation in linear light, then convert to sRGB (apply gamma) for display.
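To illustrate the sRGB point with a minimal Java sketch using the standard sRGB transfer function: a 50/50 mix of black and white done naively on the stored values gives 0.5, while mixing in linear light (decode, mix, re-encode) gives a noticeably lighter value of roughly 0.735, which is the correct result.

```java
public class LinearBlend {
    // sRGB transfer function and its inverse (channel values in 0..1).
    static double srgbToLinear(double c) {
        return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }
    static double linearToSrgb(double c) {
        return c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1.0 / 2.4) - 0.055;
    }

    // Blend two sRGB channel values in linear light.
    static double blend(double a, double b, double t) {
        double lin = srgbToLinear(a) * (1.0 - t) + srgbToLinear(b) * t;
        return linearToSrgb(lin);
    }

    public static void main(String[] args) {
        // Naive blending of black and white gives 0.5; linear-light
        // blending gives about 0.735.
        System.out.println(blend(0.0, 1.0, 0.5));
    }
}
```

The same decode-mix-encode pattern applies to gradient interpolation: each stop should be converted to linear light before interpolating.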

Particle Blend Issue with premultiplied alpha

I was trying to save some textures from a 3D rendered scene and reuse them later. The big problem is that the RGB values do not match the alpha values. I need pictures without black edges, so I have to divide the RGB colors by the alpha value (image manipulation software such as Photoshop can only deal with pictures whose alpha channel is non-premultiplied). Unfortunately, some colors are so light that the resulting values are clipped to 1. So I turned to a technique called premultiplied alpha (see more). Instead of using shaders, I just use separate alpha blending. For example:
RenderState.SourceBlend = Blend.SourceAlpha;
RenderState.DestinationBlend = Blend.InverseSourceAlpha;
Now I add some render states:
RenderState.SourceBlendAlpha = Blend.One;
RenderState.DestinationBlendAlpha = Blend.InverseSourceAlpha;
It works well. But when I try to handle the following:
RenderState.SourceBlend = Blend.SourceAlpha;
RenderState.DestinationBlend = Blend.One;
RenderState.SourceBlendAlpha = Blend.One;
RenderState.DestinationBlendAlpha = Blend.One;
the result is totally wrong. Can somebody tell me the reason?
PS: Now I see the reason. With the non-premultiplied blend state (SourceAlpha and InverseSourceAlpha), the RGBA values definitely stay within 0-1. But when I switch to the additive state (SourceAlpha and One), the RGB values might exceed one, which causes the incorrect values.
Now my problem is how to control the alpha value so that it keeps all the detail and does not overflow at the same time.
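For reference, the premultiplication itself is simple per-pixel arithmetic; here is a minimal sketch in plain Java, independent of any particular engine's RenderState API. Each color channel is scaled by alpha before storage; with premultiplied textures the normal "over" composite uses source factor One and destination factor InverseSourceAlpha, and a purely additive particle can be authored by setting its alpha to zero.

```java
public class Premultiply {
    // Convert a non-premultiplied ARGB pixel to premultiplied form:
    // each color channel is scaled by alpha / 255 before storage, so no
    // channel can exceed its alpha and the blend math cannot overflow.
    static int premultiply(int argb) {
        int a = (argb >>> 24) & 0xFF;
        int r = ((argb >>> 16) & 0xFF) * a / 255;
        int g = ((argb >>> 8) & 0xFF) * a / 255;
        int b = (argb & 0xFF) * a / 255;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // Half-transparent pure red: red channel 255 becomes 128.
        System.out.printf("%08X%n", premultiply(0x80FF0000)); // 80800000
    }
}
```

Note this is lossy for low alpha values, so premultiplication should be the last step before the texture is handed to the renderer.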

DirectShow: IVMRWindowlessControl::SetVideoPosition stride(?)

I have my own video source and use VMR7. When I use 24-bit color depth, my graph contains the Color Space Converter filter, which converts 24-bit to ARGB32, and everything works fine. When I use 32-bit color depth, my source produces RGB32 images and passes them directly to VMR7 without color conversion, and the image looks disintegrated. While resizing the window I noticed that at certain specific destination heights the image becomes "integrated" (normal) again. I do not know where the problem is. Here are example photos: http://talbot.szm.com/desintegrated.jpg and http://talbot.szm.com/integrated.jpg
Thank you for your help.
You need to check for a MediaType change in your FillBuffer method.
HRESULT hr = pSample->GetMediaType((AM_MEDIA_TYPE**)&pmt);
if (S_OK == hr)
{
    SetMediaType(pmt);
    DeleteMediaType(pmt);
}
Depending on your graphics hardware you can get a different width (stride) for your buffer. That is, you connect with an image width of 1000 pixels, but with the first sample you get a new buffer width; in my case it was 1024 px.
The new width arrives in BitmapInfoHeader.biWidth, while the old size is still in VideoInfoHeader.rcSource. One line of your image is then 1024 pixels long, not 1000. If you do not account for this, you can get pictures like yours.
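The stride handling described above amounts to copying width pixels per row out of a buffer whose rows are stride pixels long; a language-neutral sketch in Java (not DirectShow code):

```java
import java.util.Arrays;

public class StrideCopy {
    // Extract a tightly packed width x height image from a buffer whose
    // rows are 'stride' pixels long (stride >= width, e.g. 1024 vs 1000).
    // Copying whole rows and skipping the per-row padding is what keeps
    // the lines from shearing ("disintegrating").
    static int[] unstride(int[] src, int width, int height, int stride) {
        int[] dst = new int[width * height];
        for (int y = 0; y < height; y++) {
            System.arraycopy(src, y * stride, dst, y * width, width);
        }
        return dst;
    }

    public static void main(String[] args) {
        // A 2x2 image stored with a stride of 3; the 99s are row padding.
        int[] buf = {1, 2, 99, 3, 4, 99};
        System.out.println(Arrays.toString(unstride(buf, 2, 2, 3))); // [1, 2, 3, 4]
    }
}
```

Interpreting the same buffer with stride == width would pull the padding into the image and shift every subsequent row, which matches the distortion in the screenshots.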
