"Straight" version of an image with alpha channel - css

So I'm working on a shader for the upcoming CSS shader spec. I'm building something specifically targeted toward professional video production, and I need to separate out the alpha channel (as luminance, which I've done successfully) and a "straight" version of the image, which has no alpha channel.
Example: https://dl.dropbox.com/u/4031469/shadertest.html (only works with the fancy Adobe WebKit browser)
I’m so close, just trying to figure out the last shader.
Here’s an example of what I’d expect to see. (This is from a Targa file)
https://dl.dropbox.com/u/4031469/Randalls%20Mess.png – the fill (what I haven’t figured out)
https://dl.dropbox.com/u/4031469/Randalls%20Mess%20Alpha.png – the key (aka alpha which I have figured out)
(The final, in case you're curious: https://dl.dropbox.com/u/4031469/final.png )
I thought it'd be a matrix transform, but now that I've tried more and more, I'm thinking it's going to be something more complex than a matrix transform. Am I sadly correct? And if so, how would I even get started attacking this problem?

In your shader, I presume you have some piece of code that samples the textures similar to the following, yes?
vec4 textureColor = texture2D(texture1, texCoord);
textureColor at that point contains 4 values: the Red, Green, Blue, and Alpha channels, each ranging from 0 to 1. You can access each of these channels separately:
float red = textureColor.r;
float alpha = textureColor.a;
or by using a technique known as "swizzling" you can access them in sets:
vec3 colorChannels = textureColor.rgb;
vec2 alphaAndBlue = textureColor.ab;
The color values that you get out of this should not be premultiplied, so the alpha won't have any effect unless you want it to.
It's actually very common to use this to do things like packing the specular level for a texture into the alpha channel of the diffuse texture:
float specularLevel = textureColor.a;
float lightValue = lightFactor + (specularFactor * specularLevel); // Lighting factors calculated from normals
gl_FragColor = vec4(textureColor.rgb * lightValue, 1.0); // 1.0 gives us a constant alpha
Given the flexibility of shaders, any number of effects are possible that use and abuse various combinations of color channels, so it's difficult to say exactly which algorithm you'll need. Hopefully that gives you an idea of how to work with the color channels separately, though.
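Putting that together for the split described in the question, here is a minimal fragment-shader sketch in plain GLSL terms (the uMode uniform is made up here, and this ignores whatever restrictions the actual CSS shader pipeline imposes):
precision mediump float;
uniform sampler2D texture1;
uniform int uMode;            // 0 = "straight" fill, 1 = key (alpha as luminance)
varying vec2 texCoord;

void main() {
    vec4 textureColor = texture2D(texture1, texCoord);
    if (uMode == 0) {
        // Straight fill: keep the color channels, force alpha to 1.
        // (If the source were premultiplied you would divide rgb by a,
        // guarding against a == 0, but as noted above it should not be.)
        gl_FragColor = vec4(textureColor.rgb, 1.0);
    } else {
        // Key: replicate the alpha channel into RGB as a luminance image.
        gl_FragColor = vec4(vec3(textureColor.a), 1.0);
    }
}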

Apparently, according to one of the Adobe guys, this is not possible in the CSS shader language, since the matrix transform can only transform existing values and cannot add a 'bias' vector.
The alternative, which I'm exploring now, is to use SVG filters.

SVG filters are now the way to pull this off in Chrome.
https://dl.dropbox.com/u/4031469/alphaCanvases.html
It's still early though, and CSS animations are only supported in the Canary build currently.
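For reference, the reason SVG filters can pull this off is that feColorMatrix takes a 5x4 matrix whose last column is exactly the bias term the CSS shader matrix lacks. A minimal sketch (the filter IDs are made up; feColorMatrix operates on non-premultiplied values) might be:
<svg width="0" height="0">
  <!-- "Straight" fill: pass RGB through and force alpha to 1. -->
  <filter id="straight" color-interpolation-filters="sRGB">
    <feColorMatrix type="matrix"
      values="1 0 0 0 0
              0 1 0 0 0
              0 0 1 0 0
              0 0 0 0 1"/>
  </filter>
  <!-- Key: copy alpha into R, G and B, then force alpha to 1. -->
  <filter id="key" color-interpolation-filters="sRGB">
    <feColorMatrix type="matrix"
      values="0 0 0 1 0
              0 0 0 1 0
              0 0 0 1 0
              0 0 0 0 1"/>
  </filter>
</svg>
These can then be applied from CSS with filter: url(#straight) or filter: url(#key) (with the -webkit- prefix where needed).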

Related

TriangleMesh with a partially transparent material gives an unexpected result

Recently, I tried to make transparency work in JavaFX 3D, as some of the animations I want to play on meshes use transforms that change the alpha of the mesh each keyframe.
However, to my surprise, my TriangleMesh that uses a material with transparent areas doesn't look as expected.
Current result (depth buffer enabled): https://i.imgur.com/EIIWY1p.gif
Result with depth buffer disabled for my TriangleMeshView: https://i.imgur.com/1C95tKy.gif (this looks much closer to the result I was expecting)
However, I don't really want to disable depth buffering because it causes other issues.
In case it matters, this is the diffuse map I used for my TriangleMesh: https://i.imgur.com/UqCesXL.png (1 pixel per triangle, as my triangles have a color per face; 128 pixels per column).
I compute UVs for the TriangleMesh like this:
float u = (triangleIndex % width + 0.5f) / width;
float v = (triangleIndex / width + 0.5f) / (float) atlas.getHeight();
and then use these for each vertex of the triangle.
What's the proper way to render a TriangleMesh that uses a transparent material(in my case only part of the image is transparent, as some triangles are opaque)?
After a bit of research, I've found this, which potentially explains my issue: https://stackoverflow.com/a/31942840/14999427. However, I am not sure whether this is what I should do or whether there's a better option.
Minimal reproducible example (this includes the same exact mesh shown in my GIFs): https://pastebin.com/ndkbZCcn (I used Pastebin because it was 42k characters and the Stack Overflow limit is 30k); make sure to copy the raw data, as the preview in Pastebin strips a few lines.
Update: a "fix" I found orders the triangles, every time the camera moves, in the following way:
Take the camera's position multiplied by some scalar (5 worked for me)
Order all opaque triangles first by their centroids' distance to the camera, and then order all transparent triangles the same way
I am not sure why multiplying the camera's position is necessary, but it does work; it's the best solution I've found so far.
I have not yet found a 'proper fix' for this problem; however, this is what worked pretty well for me:
Create an ArrayList that will store the sorted indices
Take the camera's position multiplied by some scalar (between 5 and 10 worked for me); call that P
Order all opaque triangles first by their centroids' distance to P and add them to the list
Order all triangles with transparency by their centroids' distance to P and add them to the list
Use the sorted triangle indices when creating the TriangleMesh

Why are the transparent pixels not blending correctly in WebGL

Result of my code:
Basically, the issue is that the transparent parts of my image are not blending correctly with what is drawn before them. I know I can do a
if (alpha <= 0.0) { discard; }
in the fragment shader; the only issue is that I plan on having a ton of fragments and don't want that branch running for every fragment on mobile devices.
Here is my code related to alpha, and depth testing:
var gl = canvas.getContext("webgl2", {
    antialias: false,
    alpha: false,
    premultipliedAlpha: false,
});
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.GREATER);
Also, these are textured gl.POINTS I am drawing. If I change the order in which the two images are drawn in the buffer, the problem doesn't exist, but they will be dynamically rotating during the program's runtime, so that is not an option.
It's not clear what your issue is without more code but it looks like a depth test issue.
Assuming I understand correctly, you're drawing 2 rectangles? If you draw the red one before the blue one, then depending on how you have the depth test set up, the blue one will fail the depth test where the X area is drawn.
You generally solve this by sorting what you draw, making sure to draw things further away first.
For a grid of "tiles" you can generally sort by walking the grid itself in the correct direction instead of actually "sorting".
On the other hand, if all of your transparency is 100% draw-or-not-draw, then discard has its advantages and you can draw front to back. The reason is that, in that case, when drawing front to back, a pixel drawn (not discarded) by the red quad will cause the corresponding pixel of the blue quad to be rejected by the depth test. The depth test is usually optimized to happen before running the fragment shader for a given pixel: if the depth test says the pixel will not be drawn, then there is no reason to even run the fragment shader for that pixel, so time is saved. Unfortunately, as soon as you have any transparency that is not 100% opaque or 100% transparent, you need to sort and draw back to front. Some of these issues are covered in this article.
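For that all-or-nothing case, a discard-based fragment shader sketch might look like this (the names and the 0.5 cutoff are placeholders, not taken from your code):
precision mediump float;
uniform sampler2D uSprite;
varying vec2 vTexCoord;

void main() {
    vec4 color = texture2D(uSprite, vTexCoord);
    // Skip anything that is not (nearly) opaque so it never writes depth,
    // which is what allows drawing front to back with the depth test on.
    if (color.a < 0.5) {
        discard;
    }
    gl_FragColor = color;
}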
A few notes:
you mentioned mobile devices, and you mentioned WebGL2 in your code sample. There is no WebGL2 on iOS
you said you're drawing with POINTS. The spec says only POINTS of 1 pixel in size are required to be supported. It looks like you're safe up to points of size 60, but to be safe it's generally best to draw with triangles, as there are other issues with points
you might also be interested in sprites with depth

GLSL - Vertex normals + normal mapping?

I'm trying to create a simple shader for my lighting system, and right now I'm working on adding support for normal mapping. Without the normal map, the lighting works perfectly: I'm using the normals forwarded from the vertex shader, and they work fine. I'm also reading the normals from the normal map correctly. I've tried adding the vertex normal and the normal map's normal together, and that doesn't work; I also tried multiplying them. Here's how I'm reading the normal map:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normals = normalize((normalHeight.xyz * vec3(2.0) - vec3(1.0)));
So I have the correct vertex normals, and the normals from the normal map. How should I combine these to get the correct normals?
It depends on how you store your normal maps. If they are in world space to begin with (this is rather rare) and your scene never changes, you can look them up the way you have them. Typically, however, they are in tangent space. Tangent space is a vector space that uses the object's normal, and the rate of change in the (s,t) texture coordinates to properly transform the normals on a surface with arbitrary orientation.
Tangent space normal maps usually appear bluish to the naked eye, whereas world space normal maps are every color of the rainbow (and need to be biased and scaled because half of the colorspace is supposed to represent negative vectors) :)
If you want to understand tangent space better, complete with an implementation for deriving the basis vectors, see this link.
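As a rough sketch of the usual tangent-space path (this assumes the vertex shader forwards a normal, tangent, and bitangent; those varying names are made up, while m_NormalMap and texCoord follow the question):
uniform sampler2D m_NormalMap;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec2 texCoord;

void main() {
    // Unpack the stored normal from [0,1] back to [-1,1].
    vec3 tangentNormal = normalize(texture2D(m_NormalMap, texCoord).xyz * 2.0 - 1.0);

    // Build the TBN basis from the interpolated per-vertex vectors.
    mat3 tbn = mat3(normalize(vTangent),
                    normalize(vBitangent),
                    normalize(vNormal));

    // Rotate the sampled normal into the same space as vNormal
    // (world or view space, whichever the vertex shader used),
    // then use it in the lighting calculation instead of vNormal.
    vec3 n = normalize(tbn * tangentNormal);

    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0); // debug: visualize the normal
}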
Does your normal map not contain the adjusted normals? If yes, then you just need to read the texture in the fragment shader and you should have your normal, like so:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normal = normalize(normalHeight.xyz);
If you're trying to account for negative values, then you should not be multiplying by the vector but rather by the scalar:
vec3 normal = normalize( (normalHeight.xyz * 2.0) - 1.0 );

Distribution Pattern (Texture) and Ramp Math?

I'm trying to achieve the ramp effect as seen here:
(source: splashdamage.com)
Blending the textures based on a distribution pattern is easy. Basically, just this (HLSL):
Result = lerp(SampleA, SampleB, DistributionPatternSample);
Which works, but without the ramp.
http://aaronm.nuclearglory.com/private/stackoverflow/result1.png
My first guess was that to incorporate "Ramp Factor" I could just do this:
Result = lerp(A, B, (1.0f - Ramp)*Distribution);
However, that does not work because if Ramp is also 1.0 the result would be zero, causing just 'A' to be used. This is what I get when Ramp is 1.0f with that method:
http://aaronm.nuclearglory.com/private/stackoverflow/result2.png
I've attempted to just multiply the ramp with the distribution, which is obviously incorrect. (Figured it's worth a shot to try and discover interesting effects. No interesting effect was discovered.)
I've also attempted subtracting the Ramp from the Distribution, like so:
Result = lerp(A, B, saturate(Distribution - Ramp));
But the issue with that is that the ramp is meant to control sharpness of the blend. So, that doesn't really do anything either.
I'm hoping someone can inform me what I need to do to accomplish this, mathematically. I'm trying to avoid branching because this is shader code. I can simulate branching by multiplying out results, but I'd prefer not to do this. I am also hoping someone can fill me in on why the math is formulated the way it is for the sharpness. Throwing around math without knowing how to use it can be troublesome.
For context, that top image was taken from here:
http://wiki.splashdamage.com/index.php/A_Simple_First_Megatexture
I understand how MegaTextures (the clip-map approach) and Virtual Texturing (the more advanced approach) work just fine. So I don't need any explanation on that. I'm just trying to implement this particular blend in a shader.
For reference, this is the distribution pattern texture I'm using.
http://aaronm.nuclearglory.com/private/stackoverflow/distribution.png
Their ramp width is essentially just a contrast change on the distribution map. A brute-force version of this is a simple rescale and clamp.
Things we want to preserve are that 0.5 maps to 0.5, and that the texture goes from 0 to 1 over a region of width w.
This gives
x' = 0.5 + (x - 0.5)/w
(so x' = 0 at x = 0.5 - w/2 and x' = 1 at x = 0.5 + w/2).
This means the final HLSL will look something like this:
Result = lerp(A, B, clamp( 0.5 + (Distribution-0.5)/w, 0, 1) );
Now if this ends up looking jaggy at the edges you can switch to using a smoothstep, in which case you'd get
Result = lerp(A, B, smoothstep(0.0, 1.0, 0.5 + (Distribution - 0.5)/w));
However, one thing to keep in mind here is that this type of thresholding works best with smoothish distribution patterns. I'm not sure if yours is going to be smooth enough (unless that is a scaled-down version of a megatexture, in which case you're probably OK).
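An equivalent way to write the same thresholding, sketched here in GLSL (mix/smoothstep rather than lerp), is to put the smoothstep edges directly at 0.5 ± w/2. Here sampleA, sampleB, dist, and w stand in for the two texture samples, the distribution sample, and the ramp width:
vec3 rampBlend(vec3 sampleA, vec3 sampleB, float dist, float w) {
    // smoothstep clamps to [0,1] on its own, so no explicit clamp is needed.
    float t = smoothstep(0.5 - 0.5 * w, 0.5 + 0.5 * w, dist);
    return mix(sampleA, sampleB, t);
}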

perspective correction of texture coordinates in 3d

I'm writing a software renderer which is currently working well, but I'm trying to get perspective correction of texture coordinates and that doesn't seem to be correct. I am using all the same matrix math as opengl for my renderer. To rasterise a triangle I do the following:
transform the vertices using the modelview and projection matrices into clip coordinates.
for each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals etc.)
to correct for perspective I use perspective correct interpolation:
(w is depth coordinate of vertex, c is texture coordinate of vertex, b is the barycentric weight of a vertex)
1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
c = (c/w)/(1/w)
This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?
In this image you can see the error in the texture coordinates, evident along the diagonal; this is the result after doing the division by the depth coordinates.
Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals etc.) as well?
I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory prior to assigning it to the surface. That's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.
OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, in the 0.0f-1.0f range) by the Z component (the world-space depth of the XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction.
I asked and solved this problem recently. Give this link a shot:
texture mapping a trapezoid with a square texture in OpenGL
The paper I read that fixed this issue is called, "Navigating Static Environments Using Image-Space Simplification and Morphing" - page 9 appendix A.
Hope this helps!
The only correct transformation from UV coordinates to a 3D plane is a homographic transformation.
http://en.wikipedia.org/wiki/Homography
You must have it at some point in your computations.
To find it yourself, you can write out the projection of any pixel of the texture (the same as for the vertices) and invert it to get texture coordinates from screen coordinates.
It will come out in the form of a homographic transform.
Yeah, that looks like your traditional broken-perspective dent. Your algorithm looks right though, so I'm really not sure what could be wrong. I would check that you're actually using the newly calculated value later on when you render it? This really looks like you went to the trouble of calculating the perspective-correct value, and then used the basic non-corrected value for rendering.
You need to inform OpenGL that you need perspective correction on pixels with
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
What you are observing is the typical distortion of linear texture mapping. On hardware that is not capable of per-pixel perspective correction (like, for example, the PS1), the standard solution is to just subdivide into smaller polygons to make the defect less noticeable.

Resources