Saving images read with glReadPixels where alpha value is not one - qt

When saving an image using glReadPixels, the colors are distorted where their alpha value is less than one.
The surface is managed by QtQuick. With glGetInteger I found out there are 8 bits for each channel, including alpha.
I can get a better result, but not perfect, using something like this:
for x := 0; x < m.Bounds().Dx(); x++ {
    for y := 0; y < m.Bounds().Dy(); y++ {
        c := m.RGBAAt(x, y)
        w := float64(c.A) / 255
        c.R = uint8(float64(c.R)*w + 255*(1-w) + 0.5)
        c.G = uint8(float64(c.G)*w + 255*(1-w) + 0.5)
        c.B = uint8(float64(c.B)*w + 255*(1-w) + 0.5)
        c.A = 255
        m.SetRGBA(x, y, c)
    }
}
I tried to clear the alpha component in OpenGL itself using:
s.gl.ClearColor(0, 0, 0, 1)
s.gl.ColorMask(false, false, false, true)
s.gl.Clear(GL.COLOR_BUFFER_BIT)
Now the result is similar to my manual composing; moreover, the displayed and the captured images are now identical, but both are still different from (and darker than) what was displayed before.
I'm interested in how OpenGL/Qt uses the alpha channel when displaying the color buffer. Maybe QtQuick composes it with a backing layer?

I solved the problem by never changing alpha during drawing. So instead of gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA), I now use gl.BlendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA) and adjusted the other parameters to keep the output looking as before.
As Andon M. Coleman pointed out in his comments, this is the same as using pre-multiplied alpha blending. This way the alpha value of the color buffer always remains one and the problem is worked around.
glBlendFuncSeparate, which specifies the blend factors for the RGB and alpha components separately, would also have been useful to get the same result.
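For reference, here is a minimal sketch of that glBlendFuncSeparate route, written with PyOpenGL rather than the Go bindings used above; choosing GL_ZERO/GL_ONE as the alpha factors is my assumption for keeping the destination alpha at its cleared value of one:

from OpenGL.GL import (glBlendFuncSeparate, glEnable, GL_BLEND,
                       GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ZERO, GL_ONE)

glEnable(GL_BLEND)
# RGB: ordinary source-over blending; alpha: keep the destination value,
# so a buffer cleared to alpha = 1 stays fully opaque for glReadPixels.
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ZERO, GL_ONE)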

Related

Dicom Windowing: Difference between linear default standard formula and window width changes

What is the difference between applying the Default Linear function for windowing to get the pixel value to display like so
These Attributes are applied according to the following pseudo-code, where x is the input value, y is an output value with a range from ymin to ymax, c is Window Center (0028,1050) and w is Window Width (0028,1051):
if (x <= c - 0.5 - (w-1)/2), then y = ymin
else if (x > c - 0.5 + (w-1)/2), then y = ymax
else y = ((x - (c - 0.5)) / (w-1) + 0.5) * (ymax - ymin) + ymin
and this other approach that many people on the internet also speak of, like so?
lowest_visible_value = window_center - window_width / 2
highest_visible_value = window_center + window_width / 2
The results are very similar, and only in rare cases would you visually notice a difference.
Obviously, the "official" formula works in floating-point space and handles rounding of fractional pixel values more precisely than the simplified version.
The difference is: the first function is defined in the DICOM standard and therefore should always be used!
The sites you linked explained the windowing and gave you a short formula for estimating the highest and lowest pixel values that are scaled by the window center and window width parameters. They are not used to calculate the actual pixel value. As you noticed, those short formulas do not contain an input value x.
You may also find sites on the internet that do not use the interpolation as defined in the DICOM standard, but use something like this instead (I have seen it occasionally too):
y = 128 + 255 * (x - window_center) / window_width
And in fact you will hardly find a difference in the resulting image. But there may be some corner cases where this formula results in different images. And the officially defined formula is not so hard to implement, so you should use that.
To complete the picture: DICOM also defines the VOI LUT Function LINEAR_EXACT, which is specified as follows:
if (x <= c - w/2), then y = ymin
else if (x > c + w/2), then y = ymax
else y = ((x - c) / w + 0.5) * (ymax - ymin) + ymin
So the simplified linear interpolation is also defined in the DICOM standard, but not as the default function; it applies only if it is explicitly configured in the DICOM file.
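For illustration, here is a small Python/NumPy sketch of both mappings, transcribed directly from the pseudo-code above; the function name, the argument names and the default 8-bit output range are my own choices, not part of the standard:

import numpy as np

def apply_voi_window(x, c, w, y_min=0.0, y_max=255.0, exact=False):
    # exact=False applies the default LINEAR function, exact=True LINEAR_EXACT.
    x = np.asarray(x, dtype=float)
    if exact:
        lo, hi = c - w / 2.0, c + w / 2.0
        y = ((x - c) / w + 0.5) * (y_max - y_min) + y_min
    else:
        lo = c - 0.5 - (w - 1) / 2.0
        hi = c - 0.5 + (w - 1) / 2.0
        y = ((x - (c - 0.5)) / (w - 1) + 0.5) * (y_max - y_min) + y_min
    return np.where(x <= lo, y_min, np.where(x > hi, y_max, y))

For example, apply_voi_window(pixels, c=40, w=400) would reproduce a typical soft-tissue CT window.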

Line plot with color gradient

Is there a way to create a plot in IDL with a color gradient to it? What I'm looking for is similar to this Matlab question. The best I know how to do is to plot each segment of the line in a for loop, but this seems rather cumbersome:
x = float(indgen(11) - 5)
y = x ^ 2
loadct, 2, /silent
!p.background = 255
plot, x, y
for i = 0, 9 do begin
oplot, x(i:i+1), y(i:i+1), color = i * 20, thick = 4
endfor
I'm using IDL 8.2 if that makes a difference.
I had the same issue once and there seems to be no (simple) solution. Though I gave up in the end, you can try using an RGB vector and the VERT_COLORS keyword provided by the PLOT function:
A vector of indices into the color table for the color of each vertex
(plot data point). Alternately, a 3xN byte array containing vertex
color values. If the values supplied are not of type byte, they are
scaled to the byte range using BYTSCL. If indices are supplied but no
colors are provided with the RGB_TABLE property, a default grayscale
ramp is used. If a 3xN array of colors is provided, the colors are
used directly and the color values provided with RGB_TABLE are
ignored. If the number of indices or colors specified is less than the
number of vertices, the colors are repeated cyclically.
That would make the appearance more discrete, but maybe it will help you.
I have a routine MG_PLOTS which can do this in direct graphics:
IDL> plot, x, y, /nodata, color=0, background=255
IDL> mg_plots, x, y, color=indgen(10) * 20, thick=4
Of course, it is just a wrapper for what you were doing manually.
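Outside IDL, the same idea (coloring each segment of a line by a value) can be sketched in Python with matplotlib's LineCollection; this is only a conceptual parallel to the approaches above, not an IDL answer:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

x = np.arange(11, dtype=float) - 5
y = x ** 2
points = np.column_stack([x, y]).reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)  # one entry per line segment
lc = LineCollection(segments, cmap='viridis', linewidth=4)
lc.set_array(np.linspace(0, 1, len(segments)))  # value used to color each segment
fig, ax = plt.subplots()
ax.add_collection(lc)
ax.set_xlim(x.min(), x.max())
ax.set_ylim(y.min(), y.max())
plt.show()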

Rendering 2d function plot

My task is to produce the plot of a 2-dimensional function in real time using nothing but linear algebra and color (imagine having to compute an image buffer in plain C++ from a function definition, for example f(x,y) = x^2 + y^2). The output should be something like this 3d plot.
So far I have tried 3 approaches:
1: Ray tracing:
Divide the (x,y) plane into triangles and find the z-value at each vertex, so that the plot itself is split into 3D triangles. Intersect each ray with these triangles.
2: Sphere tracing:
a method for rendering implicit surfaces described here.
3: Rasterization:
The inverse of (1). Split the plot into triangles, project them onto the camera plane, loop over the pixels of the canvas and for each one choose the "closest" projected pixel.
All of these are way too slow. Part of my assignment is moving the camera around, so the plot has to be re-rendered in each frame. Please point me towards another source of information, another algorithm, or any other kind of help. Thank you.
EDIT
As pointed out, here is the pseudocode for my very basic rasterizer. I am aware that this code might not be flawless, but it should resemble the general idea. However, when splitting my plot into 200 triangles (which I do not expect to be enough) it already runs very slowly, even without rendering anything. I am not even using a depth buffer for visibility. I just wanted to test the speed by setting up a frame buffer as follows:
NOTE: In the JavaScript framework I am using, _ denotes array indexing and a..b composes a list from a to b.
/*
 * Raster setup.
 * The raster is a pxH x pxW array.
 * Raster coordinates might be negative or larger than the array dimensions.
 * When rendering (i.e. filling the array) positions outside the visible raster will not be filled (i.e. colored).
 */
pxW := Width of the screen in pixels.
pxH := Height of the screen in pixels.
T := Transformation matrix of homogeneous world points to raster space.

// Buffer setup.
colBuffer = apply(1..pxW, apply(1..pxH, 0)); // pxH x pxW array of black pixels.

// Positive/0 if the point is on the right side of the line (V1,V2)/exactly on the line.
// p2D := point to test.
// V1, V2 := two vertices of the triangle.
edgeFunction(p2D, V1, V2) := (
    det([p2D-V1, V2-V1]);
);

fillBuffer(V0, V1, V2) := (
    // Dehomogenize.
    hV0 = V0/(V0_3);
    hV1 = V1/(V1_3);
    hV2 = V2/(V2_3);
    // Find boundaries of the triangle in raster space.
    xMin = min(hV0.x, hV1.x, hV2.x);
    xMax = max(hV0.x, hV1.x, hV2.x);
    yMin = min(hV0.y, hV1.y, hV2.y);
    yMax = max(hV0.y, hV1.y, hV2.y);
    xMin = floor(if(xMin >= 0, xMin, 0));
    xMax = ceil(if(xMax < pxW, xMax, pxW));
    yMin = floor(if(yMin >= 0, yMin, 0));
    yMax = ceil(if(yMax < pxH, yMax, pxH));
    // Check for all points "close to" the triangle in raster space whether they lie inside it.
    forall(xMin..xMax, x, forall(yMin..yMax, y, (
        p2D = (x,y);
        i = edgeFunction(p2D, hV0.xy, hV1.xy) * edgeFunction(p2D, hV1.xy, hV2.xy) * edgeFunction(p2D, hV2.xy, hV0.xy);
        if (i > 0, colBuffer_y_x = 1); // Fill all points inside the triangle with some placeholder.
    )));
);

mapTrianglesToScreen() := (
    tvRaster = homogVerts * T; // Triangle vertices in raster space.
    forall(1..(length(tvRaster)/3), i, (
        actualI = i / 3 + 1;
        fillBuffer(tvRaster_actualI, tvRaster_(actualI + 1), tvRaster_(actualI + 2));
    ));
);

// After all this, render the colBuffer.
// After all this, render the colBuffer.
What is wrong with this approach? Why is it so slow?
Thank you.
I would go with #3; it is really not that complex, so you should obtain > 20 fps on a standard machine with a pure SW rasterizer (without any libs) if it is coded properly. My bet is that you are using some slow API like PutPixel or SetPixel, or doing something similarly expensive per pixel. Without seeing the code or a better description of how you do it, it is hard to elaborate. All the info you need to do this is in here:
Algorithm to fill triangle
HSV histogram
Understanding 4x4 homogenous transform matrices
Also look at the sub-links in each of them ...
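To make the "pure SW rasterizer" point concrete, here is a rough NumPy sketch (not the answerer's code) of the same edge-function test, evaluated for a whole bounding box at once and written straight into an array instead of calling a per-pixel API; the names and the handling of both winding orders are my own choices:

import numpy as np

def fill_triangle(buffer, v0, v1, v2, value=1.0):
    # Rasterize one 2D triangle into a numpy image buffer (rows = y, cols = x).
    h, w = buffer.shape
    xs = np.array([v0[0], v1[0], v2[0]])
    ys = np.array([v0[1], v1[1], v2[1]])
    x_min, x_max = max(int(np.floor(xs.min())), 0), min(int(np.ceil(xs.max())), w - 1)
    y_min, y_max = max(int(np.floor(ys.min())), 0), min(int(np.ceil(ys.max())), h - 1)
    if x_min > x_max or y_min > y_max:
        return  # triangle is completely outside the buffer
    yy, xx = np.mgrid[y_min:y_max + 1, x_min:x_max + 1]

    def edge(a, b):
        # Signed area test: cross product of (pixel - a) and (b - a).
        return (xx - a[0]) * (b[1] - a[1]) - (yy - a[1]) * (b[0] - a[0])

    e0, e1, e2 = edge(v0, v1), edge(v1, v2), edge(v2, v0)
    # A pixel is inside if all three edge functions share the same sign.
    inside = ((e0 >= 0) & (e1 >= 0) & (e2 >= 0)) | ((e0 <= 0) & (e1 <= 0) & (e2 <= 0))
    buffer[y_min:y_max + 1, x_min:x_max + 1][inside] = value

buf = np.zeros((480, 640))
fill_triangle(buf, (100.0, 50.0), (300.0, 80.0), (200.0, 300.0))

Batching the test per bounding box like this (or using a classic scanline fill, as in the first link) avoids the per-pixel call overhead that usually dominates naive rasterizers.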

Bokeh: enable hover tool on image glyphs

Is it possible to enable the hover tool on an image (the glyph created by image(), image_rgba() or image_url()) so that it displays some context data when hovering over points of the image? In the documentation I found only references and examples for the hover tool on glyphs like lines or markers.
Possible workaround solution:
I think it's possible to convert the 2d signal data into a columnar DataFrame format with columns for x, y and value, and to use the rect glyph instead of image. But this will also require proper handling of the color mapping, in particular the case when the values are real numbers instead of integers that you can pass to some color palette.
Update for bokeh version 0.12.16
Bokeh version 0.12.16 supports HoverTool for image glyphs. See:
bokeh release 0.12.16
For earlier bokeh versions:
Here is the approach I've been using for hovering over images: draw the image with bokeh.plotting.image and add on top of it an invisible (alpha=0) bokeh.plotting.quad that has hover capabilities for the data coordinates. I'm using it for images with approximately 1500 rows and 40000 columns.
# This is used for hover and taptool
imquad = p.quad(top=[y1], bottom=[y0], left=[x0], right=[x1],alpha=0)
A complete example of an image with the ability to select the minimum and maximum values of the colorbar, and also to select the color_mapper, is presented here: Utilities for interactive scientific plots using python, bokeh and javascript. Update: the latest bokeh already supports matplotlib cmap palettes, but when I created this code I needed to generate them from matplotlib.cm.get_cmap.
In the examples shown there I decided not to show the tooltip on the image with tooltips=None inside the bokeh.models.HoverTool function. Instead I display them in a separate bokeh.models.Div glyph.
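For completeness, a hedged sketch of how such an invisible quad can be wired to a hover tool; p, x0, x1, y0 and y1 are assumed to come from the surrounding figure code, and the tooltip fields here are just the built-in $x/$y data coordinates rather than values taken from the image:

from bokeh.models import HoverTool

# Invisible quad covering the image extent, used only as a hover target.
imquad = p.quad(top=[y1], bottom=[y0], left=[x0], right=[x1], alpha=0)
p.add_tools(HoverTool(renderers=[imquad], tooltips=[('x', '$x'), ('y', '$y')]))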
Okay, after digging more deeply into docs and examples, I'll probably answer this question by myself.
The hover effect on image (2d signal) data makes no sense the way this functionality is designed in Bokeh. If one needs to attach some extra information to a data point, one needs to put the data into the proper data model - the flat one.
tidying the data
Basically, one needs to tidy the data into a tabular format with x, y and value columns (see the Tidy Data article by H. Wickham). Now every row represents a data point, and one can naturally add any contextual information as additional columns.
For example, the following code will do the work:
from typing import Optional, Tuple

import numpy as np
import pandas as pd


def flatten(matrix: np.ndarray,
            extent: Optional[Tuple[float, float, float, float]] = None,
            round_digits: Optional[int] = 0) -> pd.DataFrame:
    if extent is None:
        extent = (0, matrix.shape[1], 0, matrix.shape[0])
    x_min, x_max, y_min, y_max = extent
    df = pd.DataFrame(data=matrix)\
        .stack()\
        .reset_index()\
        .rename(columns={'level_0': 'y', 'level_1': 'x', 0: 'value'})
    df.x = df.x / df.x.max() * (x_max - x_min) + x_min
    df.y = df.y / df.y.max() * (y_max - y_min) + y_min
    if round_digits is not None:
        df = df.round({'x': round_digits, 'y': round_digits})
    return df
rect glyph and ColumnDataSource
Then, use rect glyph instead of image with x,y mapped accordingly and the value column color-mapped properly to the color aesthetics of the glyph.
color mapping for values
Here you can use min-max normalization, multiply by the number of colors you want to use, and round; then use Bokeh's built-in palettes to map the computed integer value to a particular color value.
With all being said, here's an example chart function:
from typing import List, Optional, Tuple

import bokeh.palettes
import pandas as pd
from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import Figure, figure


def InteractiveImage(img: pd.DataFrame,
                     x: str,
                     y: str,
                     value: str,
                     width: Optional[int] = None,
                     height: Optional[int] = None,
                     color_pallete: Optional[List[str]] = None,
                     tooltips: Optional[List[Tuple[str]]] = None) -> Figure:
    """
    Notes
    -----
    both x and y should be sampled with a constant rate

    Parameters
    ----------
    img
    x
        Column name to map on x axis coordinates
    y
        Column name to map on y axis coordinates
    value
        Column name to map color on
    width
        Image width
    height
        Image height
    color_pallete
        Optional. Color map to use for values
    tooltips
        Optional.

    Returns
    -------
    bokeh figure
    """
    if tooltips is None:
        tooltips = [
            (value, '@' + value),
            (x, '@' + x),
            (y, '@' + y)
        ]
    if color_pallete is None:
        color_pallete = bokeh.palettes.viridis(50)
    x_min, x_max = img[x].min(), img[x].max()
    y_min, y_max = img[y].min(), img[y].max()
    if width is None:
        width = 500 if height is None else int(round((x_max - x_min) / (y_max - y_min) * height))
    if height is None:
        height = int(round((y_max - y_min) / (x_max - x_min) * width))
    img['color'] = (img[value] - img[value].min()) / (img[value].max() - img[value].min()) * (len(color_pallete) - 1)
    img['color'] = img['color'].round().map(lambda x: color_pallete[int(x)])
    source = ColumnDataSource(data={col: img[col] for col in img.columns})
    fig = figure(width=width,
                 height=height,
                 x_range=(x_min, x_max),
                 y_range=(y_min, y_max),
                 tools='pan,wheel_zoom,box_zoom,reset,hover,save')

    def sampling_period(values: pd.Series) -> float:
        # TODO: think about a more clever way
        return next(filter(lambda x: not pd.isnull(x) and 0 < x, values.diff().round(2).unique()))

    x_unit = sampling_period(img[x])
    y_unit = sampling_period(img[y])
    fig.rect(x=x, y=y, width=x_unit, height=y_unit, color='color', line_color='color', source=source)
    fig.select_one(HoverTool).tooltips = tooltips
    return fig
Note: however, this comes with a quite high computational price.
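A hypothetical usage sketch of the two helpers above (the random array is made up for illustration; flatten and InteractiveImage are the functions defined in this answer):

import numpy as np
from bokeh.plotting import show

signal = np.random.rand(100, 200)   # any 2D image / signal
df = flatten(signal)                # tidy x, y, value columns
show(InteractiveImage(df, x='x', y='y', value='value'))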
Building off of Alexander Reshytko's self-answer above, I've implemented a version that's mostly ready to go off the shelf, with some examples. It should be a bit more straightforward to modify to suit your own application, and doesn't rely on Pandas dataframes, which I don't really use or understand. Code and examples at Github: Bokeh - Image with HoverTool

Looks like a simple graphing problem

At present I have a control to which I need to add the facility to apply various degrees of acuteness (or sensitivity). The problem is best illustrated as an image:
Graph http://img87.imageshack.us/img87/7886/control.png
As you can see, I have X and Y axes that both have arbitrary limits of 100 - that should suffice for this explanation. At present, my control is the red line (linear behaviour), but I would like to add the ability to use the other 3 curves (or more), i.e. if a control is more sensitive, a setting will ignore the linear behaviour and use one of the three curves instead. The starting point will always be 0, and the end point will always be 100.
I know that an exponential is too steep, but can't seem to figure a way forward. Any suggestions please?
The curves you have illustrated look a lot like gamma correction curves. The idea there is that the minimum and maximum of the range stays the same as the input, but the middle is bent like you have in your graphs (which I might note is not the circular arc which you would get from the cosine implementation).
Graphically, it looks like the standard family of gamma correction curves (image source: wikimedia.org).
So, with that as the inspiration, here's the math...
If your x values ranged from 0 to 1, the function is rather simple:
y = f(x, gamma) = x ^ gamma
Add an xmax value for scaling (i.e. x = 0 to 100), and the function becomes:
y = f(x, gamma) = ((x / xmax) ^ gamma) * xmax
or alternatively:
y = f(x, gamma) = (x ^ gamma) / (xmax ^ (gamma - 1))
You can take this a step further if you want to add a non-zero xmin.
When gamma is 1, the line is always perfectly linear (y = x). If gamma is less than 1, your curve bends upward. If gamma is greater than 1, your curve bends downward. The reciprocal value of gamma converts the value back to the original (x = f(f(x, g), 1/g)).
Just adjust the value of gamma according to your own taste and application needs. Since you're wanting to give the user multiple options for "sensitivity enhancement", you may want to give your users choices on a linear scale, say ranging from -4 (least sensitive) to 0 (no change) to 4 (most sensitive), and scale your internal gamma values with a power function. In other words, give the user choices of (-4, -3, -2, -1, 0, 1, 2, 3, 4), but translate that to gamma values of (5.06, 3.38, 2.25, 1.50, 1.00, 0.67, 0.44, 0.30, 0.20).
Coding that in C# might look something like this:
public class SensitivityAdjuster {
    public SensitivityAdjuster() { }

    public SensitivityAdjuster(int level) {
        SetSensitivityLevel(level);
    }

    private double _Gamma = 1.0;

    public void SetSensitivityLevel(int level) {
        _Gamma = Math.Pow(1.5, level);
    }

    public double Adjust(double x) {
        return (Math.Pow((x / 100), _Gamma) * 100);
    }
}
To use it, create a new SensitivityAdjuster, set the sensitivity level according to user preferences (either using the constructor or the method; -4 to 4 would probably be reasonable level values) and call Adjust(x) to get the adjusted output value. If you want a wider or narrower range of reasonable levels, reduce or increase the 1.5 value in the SetSensitivityLevel method. And of course the 100 represents your maximum x value.
I propose a simple formula that (I believe) captures your requirement. In order to have the full "quarter circle", which is your extreme case, you would use (1-cos((x*pi)/(2*100)))*100.
What I suggest is that you take a weighted average between y=x and y=(1-cos((x*pi)/(2*100)))*100. For example, to have very close to linear (99% linear), take:
y = 0.99*x + 0.01*[(1-cos((x*pi)/(2*100)))*100]
Or more generally, say the level of linearity is L, and it's in the interval [0, 1], your formula will be:
y = L*x + (1-L)*[(1-cos((x*pi)/(2*100)))*100]
EDIT: I changed cos(x/100) to cos((x*pi)/(2*100)), because for the cos result to be in the range [1,0] X should be in the range of [0,pi/2] and not [0,1], sorry for the initial mistake.
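A minimal Python sketch of that weighted average, using the corrected cosine term; the function name and the linearity parameter are my own labels:

import math

def sensitivity_curve(x, linearity):
    # Weighted average of the linear response y = x and the cosine-based
    # "quarter circle" curve; x and y are in [0, 100], linearity (L) is in
    # [0, 1], where 1 means fully linear.
    curved = (1 - math.cos((x * math.pi) / (2 * 100))) * 100
    return linearity * x + (1 - linearity) * curved

# A control that is 99% linear:
print(sensitivity_curve(50, 0.99))   # approximately 49.8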
You're probably looking for something like polynomial interpolation. A quadratic/cubic/quartic interpolation ought to give you the sorts of curves you show in the question. The differences between the three curves you show could probably be achieved just by adjusting the coefficients (which indirectly determine steepness).
The graph of y = x^p for x from 0 to 1 will do what you want as you vary p from 1 (which will give the red line) upwards. As p increases the curve will be 'pushed in' more and more. p doesn't have to be an integer.
(You'll have to scale to get 0 to 100 but I'm sure you can work that out)
I vote for Rax Olgud's general idea, with one modification:
y = alpha * x + (1-alpha)*(f(x/100)*100)
where f(0) = 0, f(1) = 1, f(x) is superlinear, but I don't know where this "quarter circle" idea came from or why 1-cos(x) would be a good choice.
I'd suggest f(x) = x^k where k = 2, 3, 4, 5, whatever gives you the desired degree of steepness for α = 0. Pick a value for k as a fixed number, then vary α to choose your particular curve.
For problems like this, I will often get a few points from a curve and throw it through a curve fitting program. There are a bunch of them out there. Here's one with a 7-day free trial.
I've learned a lot by trying different models. Often you can get a pretty simple expression to come close to your curve.

Resources