Particle blend issue with premultiplied alpha (alpha blending)

I was trying to save some textures from a 3D rendering scene and then reuse them. The big problem is that the RGB values do not match the alpha values. I need pictures without black edges, so I have to divide the RGB colors by the alpha value (image manipulation software such as Photoshop can only deal with pictures whose alpha channel is non-premultiplied). Unfortunately, the colors are so light that some of the resulting values get clipped to 1. So I turned to a technique called premultiplied alpha. Instead of using shaders, I just use separate alpha blending. For example:
RenderState.SourceBlend = Blend.SourceAlpha;
RenderState.DestinationBlend = Blend.InverseSourceAlpha;
Then I add the separate alpha render states:
RenderState.SourceBlendAlpha = Blend.One;
RenderState.DestinationBlendAlpha = Blend.InverseSourceAlpha;
This works well. But when I try the following:
RenderState.SourceBlend = Blend.SourceAlpha;
RenderState.DestinationBlend = Blend.One;
RenderState.SourceBlendAlpha = Blend.One;
RenderState.DestinationBlendAlpha = Blend.One;
the result is totally wrong. Can somebody tell me the reason?
PS: I've now found the reason. With the non-premultiplied blend state (SourceAlpha and InverseSourceAlpha), the RGBA values definitely stay within 0..1. But when I switch to the additive state (SourceAlpha and One), the RGB values can exceed one, which causes the incorrect result.
So my problem now is how to control the alpha value so that it keeps all the detail and does not overflow at the same time.
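For context on why premultiplied alpha sidesteps the overflow: once the texture's RGB has been multiplied by its alpha, you can blend with a source factor of One instead of SourceAlpha, and every blended channel provably stays within 0..1. The sketch below is plain JavaScript illustrating only the arithmetic; the helper names are made up and not tied to any framework.

```javascript
// Convert a straight-alpha pixel (all channels in 0..1) to premultiplied
// form, so it can be drawn with SourceBlend = One,
// DestinationBlend = InverseSourceAlpha.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}

// "Over" blend of a premultiplied source onto a premultiplied destination.
// Each output channel stays within 0..1, because r*a <= a <= 1 implies
// sr + dr*(1 - sa) <= sa + (1 - sa) = 1.
function blendOver([sr, sg, sb, sa], [dr, dg, db, da]) {
  return [
    sr + dr * (1 - sa),
    sg + dg * (1 - sa),
    sb + db * (1 - sa),
    sa + da * (1 - sa),
  ];
}
```

For example, compositing a half-transparent red onto opaque blue gives `blendOver(premultiply([1, 0, 0, 0.5]), [0, 0, 1, 1])`, an even mix of the two with full alpha, with no channel ever leaving the 0..1 range.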

Mathematically calculate "Vibrancy" of a color

I'm writing a program that analyzes a picture and returns the most prominent color. It's simple to get the most frequently occurring color, but I've found that very often this color is a dark black/gray/brown or a white, and not the "color" you would associate with the image. So I'd like to get the top 5 colors and compare them using some metric to determine which is the most "vibrant/colorful", and return that one.
Saturation won't work in this case, because a saturated black would be ranked above a lighter pink, and brightness/luminance won't work, because a white would be ranked above a darker red. I want to know what metric I can use to judge this. I recognize this is a somewhat fuzzy question, but I know of other programs that do similar things, so I assume there must be some way to calculate "vibrancy/colorfulness". It doesn't need to be perfect, just to work most of the time.
For what it's worth I'm working in JavaScript, but the actual code is not the issue; I just need an equation I can use, and then I can implement it.
There is no standard way to define the "vibrancy" of a color, so you can try combining multiple metrics such as saturation, brightness, and luminance, and measuring how far each is from an "ideal" value; the lower the overall distance, the better. The following is an example in pseudocode.
// Compare metrics to "ideal"
var deltaSat = Saturation(thisColor) - idealSat;
var deltaBright = Brightness(thisColor) - idealBrightness;
var deltaLum = Luminance(thisColor) - idealLum;
// Calculate overall distance from ideal; the lower
// the better.
var dist = sqrt((deltaSat*deltaSat) +
(deltaBright*deltaBright) +
(deltaLum*deltaLum))
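A concrete JavaScript version of that pseudocode follows. The metric helpers and the "ideal" targets are my own assumptions (HSV-style saturation and value, Rec. 709 luminance), meant as a starting point to tune rather than a definitive choice.

```javascript
// Assumed helper metrics; channels are in 0..1.
function saturation([r, g, b]) {
  const max = Math.max(r, g, b), min = Math.min(r, g, b);
  return max === 0 ? 0 : (max - min) / max; // HSV-style saturation
}
function brightness([r, g, b]) {
  return Math.max(r, g, b); // HSV value
}
function luminance([r, g, b]) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b; // Rec. 709 weights
}

// Distance from an "ideal" vibrant color; lower is better.
// The ideal targets below are placeholders to adjust for your images.
function vibrancyDistance(color, ideal = { sat: 1, bright: 1, lum: 0.6 }) {
  const dSat = saturation(color) - ideal.sat;
  const dBright = brightness(color) - ideal.bright;
  const dLum = luminance(color) - ideal.lum;
  return Math.sqrt(dSat * dSat + dBright * dBright + dLum * dLum);
}
```

With these targets, a pure red scores a much smaller distance than a dark gray, which matches the ranking the question is after.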
(If your issue is merely that you're having trouble calculating a metric for a given color, see my page on color topics for programmers.)
If your criteria for "vibrancy" are complex enough, you should consider using machine learning techniques such as classification algorithms. In machine learning in general:
You train a model to recognize different categories (such as "vibrant" and "non-vibrant" colors in this case).
You test the model to check how well it performs.
Once the model works well, you deploy the model and use it to predict whether a color is "vibrant" or "non-vibrant".
Machine learning is quite complex, however, so you should try the simpler method given earlier in this answer.
After trying several different formulas, I had the most success with the following:
let colorfulness = ((max + min) * (max - min)) / max
where max and min are the highest and lowest of the RGB values, respectively. This page has a more detailed explanation of the formula itself.
This returns a value between 0 and 255, with 0 being the least colorful and 255 the most. From running it on a bunch of different colors, I found that for my application any value above 50 was colorful enough, though you can adjust this.
My final code is as follows
function getColorFromImage(image) {
    // gets the three most commonly occurring, distinct colors in an image
    // as [r, g, b] values, in order of their frequency
    let palette = getPaletteFromImage(image, 3)
    for (let color of palette) {
        let colorfulness = 0
        // (0,0,0) would yield NaN (0/0) in the formula, so for pure black
        // leave colorfulness at its default of 0
        // (note: comparing arrays with != always succeeds in JS,
        // so compare the channels individually)
        if (!(color[0] === 0 && color[1] === 0 && color[2] === 0)) {
            // get min & max channel values; spread the array, since
            // Math.min(color) on an array returns NaN
            let min = Math.min(...color)
            let max = Math.max(...color)
            // calculate the colorfulness of the color
            colorfulness = ((max + min) * (max - min)) / max
        }
        // compare against a threshold to decide whether the color is
        // "colorful" enough; I've found 50 is a good threshold,
        // but adjust as needed
        if (colorfulness >= 50.0) {
            return color
        }
    }
    // if none of the colors are deemed sufficiently colorful,
    // just return the most common one
    return palette[0]
}

Domain coloring (color wheel) plots of complex functions in Octave (Matlab)

I understand that domain or color wheel plotting is typical for complex functions.
Incredibly, a web search doesn't readily turn up code that would let me reproduce a piece of art like this one on Wikipedia:
There is an online resource that reproduces plots with zeros in black (not bad at all). However, I'd like to ask for some simple annotated code in Octave that produces color plots of functions of complex numbers.
Here is an example:
I see here code to plot a complex function; however, it uses a different technique, with the height representing the real part of the function's value and the color representing the imaginary part:
Peter Kovesi has some fantastic color maps. He provides a MATLAB function, called colorcet, that we can use here to get the cyclic color map we need to represent the phase. Download this function before running the code below.
Let's start with creating a complex-valued test function f, where the magnitude increases from the center, and the phase is equal to the angle around the center. Much like the example you show:
% A test function
[xx,yy] = meshgrid(-128:128,-128:128);
z = xx + yy*1i;
f = z;
Next, we'll get its phase, convert it into an index into the colorcet C2 color map (which is cyclic), and finally reshape that back into the original function's shape. out here has 3 dimensions, the first two are the original dimensions, and the last one is RGB. imshow shows such a 3D matrix as a color image.
% Create a color image according to phase
cm = colorcet('C2');
phase = floor((angle(f) + pi) * ((size(cm,1)-1e-6) / (2*pi))) + 1;
out = cm(phase,:);
out = reshape(out,[size(f),3]);
The last part is to modulate the intensity of these colors using the magnitude of f. To create the discontinuities at powers of two, we take the base-2 logarithm, apply the modulo operation, and raise 2 to the result again. A simple multiplication with out decreases the intensity of the color where necessary:
% Compute the intensity, with discontinuities for |f|=2^n
magnitude = 0.5 * 2.^mod(log2(abs(f)),1);
out = out .* magnitude;
That last multiplication works in Octave and in the later versions of MATLAB. For older versions of MATLAB you need to use bsxfun instead:
out = bsxfun(@times,out,magnitude);
Finally, display using imshow:
% Display
imshow(out)
Note that the colors here are more muted than in your example. The colorcet color maps are perceptually uniform: the same change in angle leads to the same perceptual change in color. In the example you posted, yellow is a very narrow, bright band. Such a band falsely highlights certain features of the function that might not be relevant at all. Perceptually uniform color maps are very important for a proper interpretation of the data. Note also that this particular color map has easily named colors (purple, blue, green, yellow) in the four cardinal directions: a purely real value is green (positive) or purple (negative), and a purely imaginary value is blue (positive) or yellow (negative).
There is also a great online tool made by Juan Carlos Ponce Campuzano for color wheel plotting.
In my experience it is much easier to use than the Octave solution. The downside is that you cannot use perceptually uniform coloring.

How to combine two QColor objects with alpha channel?

I have objects where the border color has already been determined.
Now I want the user to be able to set at least the opacity of the fill pattern. E.g., the border is blue. The user sets the opacity to 128, so the fill pattern is also drawn in blue, but half-transparent.
The next step would be to allow the user to also slightly adjust the color of the pattern, e.g.: "Use the border color, but make it half-transparent (alpha=128) and a little bit yellow-ish."
Is there a (useful) way to combine two colors where one does not have an alpha value set? Or would it make more sense to set an alpha value on the original color and combine it with an "overlay color" that also has an alpha value set?
And is there a function (or otherwise, can someone give a short code snippet) to combine the two QColor objects?
I would look at existing color pickers that are out there (Gimp, Photoshop, Paint, wwWidgets). Most of them deal with a few different ways of picking your color:
Saturation, Hue, Value, Brightness, Contrast, RGB, CMYK, HSV, Alpha/Opacity.
Qt handles a bunch of these right out of the box:
QColor
In order to combine two colors, I would probably average their different components together:
// Rough pseudocode: Color1 and Color2 are the RGBA inputs,
// Color3 is the combination
Color3.R = (Color1.R + Color2.R)/2
Color3.G = (Color1.G + Color2.G)/2
Color3.B = (Color1.B + Color2.B)/2
Color3.A = (Color1.A + Color2.A)/2
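That averaging could look like the following outside Qt; a plain JavaScript sketch only, where the colors are simple {r, g, b, a} objects with 0-255 channels and the helper name is mine, not part of any API.

```javascript
// Combine two RGBA colors by averaging each component.
// Colors are plain {r, g, b, a} objects with integer 0-255 channels;
// the result is rounded back to integers.
function averageColors(c1, c2) {
  return {
    r: Math.round((c1.r + c2.r) / 2),
    g: Math.round((c1.g + c2.g) / 2),
    b: Math.round((c1.b + c2.b) / 2),
    a: Math.round((c1.a + c2.a) / 2),
  };
}
```

In Qt itself the same arithmetic would be done on the values returned by QColor's red()/green()/blue()/alpha() accessors.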
I hope that helps.
PS: Understanding Color Space can be helpful, too.

"Straight" version of an image with alpha channel

So I'm working on a shader for the upcoming CSS shader spec. I'm building something specifically targeted toward professional video production, and I need to separate out the alpha channel (as luminance, which I've done successfully) and a "straight" version of the image, which has no alpha channel multiplied in.
Example: https://dl.dropbox.com/u/4031469/shadertest.html (only works with fancy adobe webkit browser)
I’m so close, just trying to figure out the last shader.
Here’s an example of what I’d expect to see. (This is from a Targa file)
https://dl.dropbox.com/u/4031469/Randalls%20Mess.png – the fill (what I haven’t figured out)
https://dl.dropbox.com/u/4031469/Randalls%20Mess%20Alpha.png – the key (aka alpha which I have figured out)
(The final, in case you're curious: https://dl.dropbox.com/u/4031469/final.png )
I thought it'd be a matrix transform, but now that I've tried more and more, I'm thinking it's going to be something more complex than a matrix transform. Am I sadly correct? And if so, how would I even start attacking this problem?
In your shader, I presume you have some piece of code that samples the textures similar to the following, yes?
vec4 textureColor = texture2D(texture1, texCoord);
textureColor at that point contains 4 values: the Red, Green, Blue, and Alpha channels, each ranging from 0 to 1. You can access each of these colors separately:
float red = textureColor.r;
float alpha = textureColor.a;
or by using a technique known as "swizzling" you can access them in sets:
vec3 colorChannels = textureColor.rgb;
vec2 alphaAndBlue = textureColor.ab;
The color values that you get out of this should not be premultipied, so the alpha won't have any effect unless you want it to.
It's actually very common to use this for things like packing the specular level of a texture into the alpha channel of the diffuse texture:
float specularLevel = textureColor.a;
float lightValue = lightFactor + (specularFactor * specularLevel); // Lighting factors calculated from normals
gl_FragColor = vec4(textureColor.rgb * lightValue, 1.0); // 1.0 gives us a constant alpha
Given the flexibility of shaders any number of effects are possible that use and abuse various combinations of color channels, and as such it's difficult to say the exact algorithm you'll need. Hopefully that gives you an idea of how to work with the color channels separately, though.
Apparently, according to one of the Adobe guys, this is not possible in the CSS shader language, since the matrix transform can only transform existing values and cannot add a 'bias' vector.
The alternative, which I'm exploring now, is to use SVG filters.
SVG filters are now the way to pull this off in Chrome.
https://dl.dropbox.com/u/4031469/alphaCanvases.html
It's still early though, and CSS animations are only supported in the Canary build currently.

Color similarity/distance in RGBA color space

How to compute similarity between two colors in RGBA color space? (where the background color is unknown of course)
I need to remap an RGBA image to a palette of RGBA colors by finding the best palette entry for each pixel in the image*.
In the RGB color space, the most similar color can be assumed to be the one with the smallest Euclidean distance. However, this approach doesn't work in RGBA: e.g., the Euclidean distance from rgba(0,0,0,0) to rgba(0,0,0,50%) is smaller than to rgba(100%,100%,100%,1%), but the latter looks much better.
I'm using premultiplied RGBA color space:
r = r×a
g = g×a
b = b×a
and I've tried this formula (edit: see the answer below for a better formula):
Δr² + Δg² + Δb² + 3 × Δa²
but it doesn't look optimal: in images with semitransparent gradients it finds wrong colors that cause discontinuities/sharp edges. The linear weighting between the opaque colors and alpha seems fishy.
What's the optimal formula?
*) for simplicity of this question I'm ignoring error diffusion, gamma and psychovisual color spaces.
Slightly related: if you want to find nearest color in this non-Euclidean RGBA space, vp-trees are the best.
Finally, I've found it! After thorough testing and experimentation my conclusions are:
The correct way is to calculate maximum possible difference between the two colors.
Formulas with any kind of estimated average/typical difference had room for discontinuities.
I was unable to find a working formula that calculates the distance without blending RGBA colors with some backgrounds.
There is no need to take every possible background color into account. It can be simplified down to blending maximum and minimum separately for each of R/G/B channels:
blend the channel in both colors with channel=0 as the background, measure squared difference
blend the channel in both colors with channel=max as the background, measure squared difference
take the higher of the two.
Fortunately blending with "white" and "black" is trivial when you use premultiplied alpha.
The complete formula for premultiplied alpha color space is:
rgb *= a // colors must be premultiplied
max((r₁-r₂)², (r₁-r₂ - a₁+a₂)²) +
max((g₁-g₂)², (g₁-g₂ - a₁+a₂)²) +
max((b₁-b₂)², (b₁-b₂ - a₁+a₂)²)
C Source including SSE2 implementation.
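The formula translates almost directly into code. Here is a JavaScript sketch of it (my own transcription, not the linked C source), assuming the channels are premultiplied and normalized to 0..1:

```javascript
// Maximum-difference distance between two premultiplied RGBA colors.
// Colors are [r, g, b, a] arrays with all channels in 0..1 and
// r, g, b already multiplied by a.
function rgbaDistanceSquared(c1, c2) {
  const da = c1[3] - c2[3];
  let sum = 0;
  for (let i = 0; i < 3; i++) {
    const d = c1[i] - c2[i];   // difference blended over a black background
    const dWhite = d - da;     // difference blended over a white background
    sum += Math.max(d * d, dWhite * dWhite); // worst case per channel
  }
  return sum;
}
```

With this metric, rgba(0,0,0,0) really does come out farther from rgba(0,0,0,50%) than from a 1%-opaque white, matching the intuition described in the question.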
Several principles:
When two colors have the same alpha, rgbaDistance = rgbDistance × (alpha / 255). This is compatible with the RGB color-distance algorithm when both alphas are 255.
All colors with very low alpha are similar to each other.
The rgbaDistance between two colors with the same RGB depends linearly on the difference in alpha.
double DistanceSquared(Color a, Color b)
{
int deltaR = a.R - b.R;
int deltaG = a.G - b.G;
int deltaB = a.B - b.B;
int deltaAlpha = a.A - b.A;
double rgbDistanceSquared = (deltaR * deltaR + deltaG * deltaG + deltaB * deltaB) / 3.0;
return deltaAlpha * deltaAlpha / 2.0 + rgbDistanceSquared * a.A * b.A / (255 * 255);
}
My idea is to integrate over all possible background colors and average the squared error.
I.e., for each component, calculate (using the red channel as an example here)
the integral from rB = 0 to 1 of ((r1*a1 + rB*(1-a1)) - (r2*a2 + rB*(1-a2)))^2 drB
which, if I calculated correctly, evaluates to:
dA = a2 - a1
dRA = r1*a1 - r2*a2
errorR = dRA^2 + dA*dRA + dA^2/3
Then sum these over R, G and B.
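That closed form can be sanity-checked numerically against a brute-force average over background shades. A JavaScript sketch (with dA taken as a2 - a1, the sign convention that makes the cross term come out positive):

```javascript
// Closed-form average squared error for one channel, integrated over
// all background shades rB in 0..1.
function channelError(r1, a1, r2, a2) {
  const dA = a2 - a1;
  const dRA = r1 * a1 - r2 * a2;
  return dRA * dRA + dA * dRA + (dA * dA) / 3;
}

// Brute-force check: average the squared blended difference over many
// background values using the midpoint rule.
function channelErrorNumeric(r1, a1, r2, a2, steps = 10000) {
  let sum = 0;
  for (let i = 0; i < steps; i++) {
    const rB = (i + 0.5) / steps;
    const d = (r1 * a1 + rB * (1 - a1)) - (r2 * a2 + rB * (1 - a2));
    sum += d * d;
  }
  return sum / steps;
}
```

The two agree to well within numerical error, which confirms the evaluation of the integral.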
First of all, a very interesting problem :)
I don't have a full solution (at least not yet), but there are 2 obvious extreme cases we should consider:
When Δa==0 the problem is similiar to RGB space
When Δa==1 the problem is only on the alpha 1-dim space
So the formula (which is very similar to the one you stated) that would satisfy that is:
(Δr² + Δg² + Δb²) × (1-(1-Δa)²) + Δa² or (Δr² + Δg² + Δb²) × (1-Δa²) + Δa²
In any case, it would probably be something like (Δr² + Δg² + Δb²) × f(Δa) + Δa²
If I were you, I would try to simulate it with various RGBA pairs and various background colors to find the best f(Δa) function. Not very mathematic, but will give you a close enough answer
I've never done it, but theory and practice suggest that converting the RGB values in the image and in the palette to luminance-chrominance will help you find the best matches. I'd leave the alpha channel alone, as transparency should have little to nothing to do with the 'looking better' part.
This Christmas I made some photomosaics as presents, using open-source software that matches fragments of the original image to a collection of images. That seems like a harder problem than the one you're trying to solve; one of those programs was metapixel.
Lastly, the best option should be to use an existing library to convert the image to a format, like PNG, in which you can control the palette.
