Distribution Pattern (Texture) and Ramp Math?

I'm trying to achieve the ramp effect as seen here:
(source: splashdamage.com)
Blending the textures based on a distribution pattern is easy. Basically, just this (HLSL):
Result = lerp(SampleA, SampleB, DistributionPatternSample);
Which works, but without the ramp.
http://aaronm.nuclearglory.com/private/stackoverflow/result1.png
My first guess was that to incorporate "Ramp Factor" I could just do this:
Result = lerp(A, B, (1.0f - Ramp)*Distribution);
However, that does not work because if Ramp is also 1.0 the result would be zero, causing just 'A' to be used. This is what I get when Ramp is 1.0f with that method:
http://aaronm.nuclearglory.com/private/stackoverflow/result2.png
I've attempted to just multiply the ramp with the distribution, which is obviously incorrect. (Figured it's worth a shot to try and discover interesting effects. No interesting effect was discovered.)
I've also attempted subtracting the Ramp from the Distribution, like so:
Result = lerp(A, B, saturate(Distribution - Ramp));
But the issue with that is that the ramp is meant to control sharpness of the blend. So, that doesn't really do anything either.
I'm hoping someone can inform me what I need to do to accomplish this, mathematically. I'm trying to avoid branching because this is shader code. I can simulate branching by multiplying out results, but I'd prefer not to do this. I am also hoping someone can fill me in on why the math is formulated the way it is for the sharpness. Throwing around math without knowing how to use it can be troublesome.
For context, that top image was taken from here:
http://wiki.splashdamage.com/index.php/A_Simple_First_Megatexture
I understand how MegaTextures (the clip-map approach) and Virtual Texturing (the more advanced approach) work just fine. So I don't need any explanation on that. I'm just trying to implement this particular blend in a shader.
For reference, this is the distribution pattern texture I'm using.
http://aaronm.nuclearglory.com/private/stackoverflow/distribution.png

Their ramp width is essentially just a contrast change on the distribution map. A crude version of this is a simple rescale and clamp.
Things we want to preserve are that 0.5 maps to 0.5, and that the texture goes from 0 to 1 over a region of width w.
This gives
x = 0.5 + (x-0.5)/w
This means the final HLSL will look something like this:
Result = lerp(A, B, clamp( 0.5 + (Distribution-0.5)/w, 0, 1) );
Now if this ends up looking jaggy at the edges you can switch to using a smoothstep. In which case you'd get
Result = lerp(A, B, smoothstep(0, 1, 0.5 + (Distribution - 0.5)/w));
(Note the argument order: in HLSL, smoothstep takes (min, max, x).)
However, one thing to keep in mind here is that this type of thresholding works best with smooth-ish distribution patterns. I'm not sure if yours is going to be smooth enough (unless that is a small version of a megatexture, in which case you're probably OK).
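To make the math concrete, here's a minimal C++ sketch of the whole remap; lerp, saturate, and smoothstep mirror the HLSL intrinsics, and rampBlend is a hypothetical helper name, not from the original shader:

#include <algorithm>

float lerp(float a, float b, float t)    { return a + (b - a) * t; }
float saturate(float x)                  { return std::clamp(x, 0.0f, 1.0f); }
float smoothstep(float lo, float hi, float x) {
    float t = std::clamp((x - lo) / (hi - lo), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Sharpen the distribution around 0.5 by width w (0 < w <= 1), then blend.
// Smaller w means a steeper, harder transition between the two textures.
float rampBlend(float a, float b, float distribution, float w) {
    float t = saturate(0.5f + (distribution - 0.5f) / w);
    // softer alternative:
    // float t = smoothstep(0.0f, 1.0f, 0.5f + (distribution - 0.5f) / w);
    return lerp(a, b, t);
}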

Related

Trying to make a circle in Minecraft using coordinates and Sin & Cos

I am trying to write a Minecraft datapack which will plot a full armorstand circle around whatever runs the particular command. I am using a 3rd-party mathematics datapack to get Sin and Cos. However, when running the command, the resulting plot was... not good. As you can see here (1. Broken Circle), rather than having each vertex evenly placed along a circular line, I find a strange mess instead.
I would have thought losing precision in Cos and Sin would simply make the circle more angular; I didn't expect it to spiral. What confuses me is that +z (the red square) and -x (the purple one) are all alone. You can see on the blue ring (which was made with a smaller radius) that the gap between them persists.
My main issue is: how did my maths go from making a circle to a shredded mushroom, and is there a way to calculate the vertices with greater precision?
Going into the project I knew I could simply spin the centre entity and summon an armorstand x blocks in front using ^ ^ ^5, but I wanted to avoid this, due to my desire to be able to change the radius without needing to edit the datapack. To solve this, I used the Sin and Cos components to plot a new point, using a radius defined with scoreboards.
I first tested this using Scratch, in order to check my maths. You can see my code here: 2. Scratch code.
With the addition of the pen blocks, I was able to produce a perfect circle, which you can see here: 3. Scratch visual proof.
With my proof of concept working, I looked online and found a Mathematical Functions datapack by yosho27, since the Cos and Sin functions are not built into the game. However, because Minecraft scoreboards only store integers, yosho27 multiplied the result of Cos and Sin by 100 to preserve 2 decimal places.
To start with, I am using a central armorstand with the tag center, which is at x: 8.5 z: 8.5. The scoreboards built into yosho's datapack that I am using are math_in, for the values I want converted, and math_out, which is where the final value is dumped.
Using signs, I keep track of the important values I am working with, as seen here: 4. Sign maths.
As I was writing this, I decided to actually compare the two sets of numbers, and found this: 5. Image comparison, which shows me that somewhere in the calculation process the maths has gone wrong. I modified the Scratch side to match the Minecraft conditions as much as possible, such as the ×100 scaling and adding 850 to the result. From this result I can see a disparity between x and z, even though they should be equal. Where Minecraft says 1: x= 864 z= 1487, Scratch says 1: x= 862.21668448 z= 1549.89338664. I assume this means the datapack's Cos and Sin are not accurate enough?
In light of this, I looked in yosho's datapack and found this: 6. Yosho's code, which I modified to be *= 10 instead of a divide, in the hope of getting more precision. After modifying the rest of my code to match, I couldn't see any improvement in the numbers, though the armorstand vertices ended up a few pixels off the original circle, with no discernible pattern to the shift.
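As a sanity check, the error from the ×100 truncation alone can be bounded outside the game; here's a quick C++ check (the radius is just an example, and I'm assuming the datapack simply truncates):

#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double radius = 700.0;  // example radius in blocks
    double worst = 0.0;
    for (int deg = 0; deg < 360; ++deg) {
        double a = deg * PI / 180.0;
        // datapack-style fixed point: sin/cos scaled by 100, truncated to an integer
        double fx = std::floor(std::cos(a) * 100.0) / 100.0;
        double fz = std::floor(std::sin(a) * 100.0) / 100.0;
        double err = std::hypot(fx - std::cos(a), fz - std::sin(a)) * radius;
        if (err > worst) worst = err;
    }
    // Per-axis truncation error is at most 0.01, so the offset is bounded by
    // roughly 0.014 * radius: enough to look jagged, but nowhere near the
    // ~63-block z disparity seen in the sign maths above.
    std::printf("worst-case offset at radius %.0f: %.2f blocks\n", radius, worst);
}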
While this doesn't answer your full question, I'd like to point out two different ways you can solve the original issue at hand, with no need to rely on a third-party math datapack:
Use Math, but let the game do it for you.
You can use the fact that the game is already doing those rotational conversions for you when using local coordinates. So, if you (or any entity) go to 0 0 0 and look/rotate at the angle that you want to calculate, then move forward by ^ ^ ^1, the position you're at now is basically <sin> 0 <cos>.
You can now take those numbers with your desired precision using data get and continue using them in whatever way you see fit.
Use recursive functions to move in increments
You point out in your question that
Going into the project I knew I could simply spin the centre entity and summon an armorstand x blocks in front using ^ ^ ^5, but I wanted to avoid this, due to my desire to be able to change the radius without needing to edit the datapack. To solve this, I used the Sin and Cos components to plot a new point, using a radius defined with scoreboards.
So, to go back to that original idea, you could fairly easily (at least more easily than trying to calculate SIN/COS manually) find a solution that works for (almost) arbitrary radii and step counts: by making the datapack configurable through e.g. scores, you can set it up to, for example, move forward by ^ ^ ^0.1 blocks for every point in a score. That way you can change that score to 50 to get a distance of ^ ^ ^5, or to 15 to get a distance of ^ ^ ^1.5.
Similarly, you could set the "minimum" rotation between summons to 0.1 degrees, then repeat said rotation however many times you desire.
Both of these things can be achieved with recursive functions. Here is a quick example where you can control the rotational angle using the #rot steps score and the distance using the #dist steps score, as described above (you might want to limit how often this runs with a score, too, like 360/rotation, if you want to do exactly one full circle). This example technically recurses twice, as I'm not using an entity to store the rotation. If there is an entity, you don't need to call the forward function from the rotate function, but can call it from step (at the entity).
step.mcfunction
# copy scores over so we can use them
scoreboard players operation #rot_steps steps = #rot steps
scoreboard players operation #dist_steps steps = #dist steps
execute rotated ~ ~0.1 run function foo:rotate
rotate.mcfunction
scoreboard players remove #rot_steps steps 1
execute if score #rot_steps steps matches ..0 positioned ^ ^ ^0.1 run function foo:forward
execute if score #rot_steps steps matches 1.. rotated ~ ~0.1 run function foo:rotate
forward.mcfunction
scoreboard players remove #dist_steps steps 1
execute if score #dist_steps steps matches ..0 run summon armor_stand
execute if score #dist_steps steps matches 1.. positioned ^ ^ ^0.1 run function foo:forward

Rotate model around x,y,z axes, without gimbal lock, with input data always as x,y,z axes angle rotations

I have an input device that gives me 3 angles -- rotation around x,y,z axes.
Now I need to use these angles to rotate the 3D space, without gimbal lock. I thought I could convert to Quaternions, but apparently since I'm getting the data as 3 angles this won't help?
If that's the case, just how can I correctly rotate the space, keeping in mind that my input data simply is x,y,z axes rotation angles, so I can't just "avoid" that. Similarly, moving around the order of axes rotations won't help -- all axes will be used anyway, so shuffling the order around won't accomplish anything. But surely there must be a way to do this?
If it helps, the problem can pretty much be reduced to implementing this function:
void generateVectorsFromAngles(double &lastXRotation,
                               double &lastYRotation,
                               double &lastZRotation,
                               JD::Vector &up,
                               JD::Vector &viewing) {
    // Note: the names and values of yaxis/zaxis appear swapped here:
    // (0,0,1) is conventionally the z axis and (0,1,0) the y axis.
    JD::Vector yaxis = JD::Vector(0,0,1);
    JD::Vector zaxis = JD::Vector(0,1,0);
    JD::Vector xaxis = JD::Vector(1,0,0);
    // Apply the three axis rotations in X, Y, Z order to both vectors.
    up.rotate(xaxis, lastXRotation);
    up.rotate(yaxis, lastYRotation);
    up.rotate(zaxis, lastZRotation);
    viewing.rotate(xaxis, lastXRotation);
    viewing.rotate(yaxis, lastYRotation);
    viewing.rotate(zaxis, lastZRotation);
}
in a way that avoids gimbal lock.
If your device is giving you absolute X/Y/Z angles (which implies something like actual gimbals), it will have some specific sequence to describe what order the rotations occur in.
Since you say that "the order doesn't matter", this suggests your device is something like (almost certainly?) a 3-axis rate gyro, and you're getting differential angles. In this case, you want to combine your 3 differential angles into a rotation vector, and use this to update an orientation quaternion, as follows:
given differential angles (in radians):
dXrot, dYrot, dZrot
and current orientation quaternion Q such that:
{r=0, ijk=rot(v)} = Q {r=0, ijk=v} Q*
construct an update quaternion:
dQ = {r=1, i=dXrot/2, j=dYrot/2, k=dZrot/2}
and update your orientation:
Q' = normalize( quaternion_multiply(dQ, Q) )
Note that dQ is only a crude approximation of a unit quaternion (which makes the normalize() operation more important than usual). However, if your differential angles are not large, it is actually quite a good approximation. Even if your differential angles are large, this simple approximation makes less nonsense than many other things you could do. If you have problems with large differential angles, you might try adding a quadratic correction to improve your accuracy (as described in the third section).
However, a more likely problem is that any kind of repeated update like this tends to drift, simply from accumulated arithmetic error if nothing else. Also, your physical sensors will have bias -- e.g., your rate gyros will have offsets which, if not corrected for, will cause your orientation estimate Q to precess slowly. If this kind of drift matters to your application, you will need some way to detect/correct it if you want to maintain a stable system.
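A minimal C++ sketch of this first-order update, under the assumptions above (the Quat type and helper names are mine, not from any particular library):

#include <cmath>

struct Quat { double r, i, j, k; };  // scalar part r, vector part (i, j, k)

// Hamilton product: a * b
Quat mul(const Quat& a, const Quat& b) {
    return {
        a.r*b.r - a.i*b.i - a.j*b.j - a.k*b.k,
        a.r*b.i + a.i*b.r + a.j*b.k - a.k*b.j,
        a.r*b.j - a.i*b.k + a.j*b.r + a.k*b.i,
        a.r*b.k + a.i*b.j - a.j*b.i + a.k*b.r
    };
}

Quat normalize(const Quat& q) {
    double n = std::sqrt(q.r*q.r + q.i*q.i + q.j*q.j + q.k*q.k);
    return {q.r/n, q.i/n, q.j/n, q.k/n};
}

// Crude first-order update: dQ = {r=1, ijk=(dXrot/2, dYrot/2, dZrot/2)},
// then Q' = normalize(dQ * Q).
Quat integrateGyro(const Quat& Q, double dXrot, double dYrot, double dZrot) {
    Quat dQ = {1.0, dXrot/2, dYrot/2, dZrot/2};
    return normalize(mul(dQ, Q));
}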
If you do have a problem with large differential angles, there is a trigonometric formula for computing an exact update quaternion dQ. The assumption is that the total rotation angle should be linearly proportional to the magnitude of the input vector; given this, you can compute an exact update quaternion as follows:
given differential half-angle vector (in radians):
dV = (dXrot, dYrot, dZrot)/2
compute the magnitude of the half-angle vector:
theta = |dV| = 0.5 * sqrt(dXrot^2 + dYrot^2 + dZrot^2)
then the update quaternion, as used above, is:
dQ = {r=cos(theta), ijk=dV*sin(theta)/theta}
= {r=cos(theta), ijk=normalize(dV)*sin(theta)}
Note that directly computing either sin(theta)/theta or normalize(dV) is singular near zero, but the limit value of the vector ijk near zero is simply ijk = dV = (dXrot, dYrot, dZrot)/2, as in the approximation from the first section. If you do compute your update quaternion this way, the straightforward method is to check for this and use the approximation for small theta (for which it is an extremely good approximation!).
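Here is the exact update as a sketch, with that small-theta fallback, reusing the hypothetical Quat type from above:

// Exact update quaternion from differential angles (radians), falling back
// to the first-order approximation near zero where sin(theta)/theta -> 1.
Quat exactUpdateQuat(double dXrot, double dYrot, double dZrot) {
    double hx = dXrot/2, hy = dYrot/2, hz = dZrot/2;   // half-angle vector dV
    double theta = std::sqrt(hx*hx + hy*hy + hz*hz);   // |dV|
    double s = (theta < 1e-6) ? 1.0 : std::sin(theta) / theta;
    return {std::cos(theta), hx*s, hy*s, hz*s};        // {cos, dV*sin/theta}
}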
Finally, another approach is to use a Taylor expansion for cos(theta) and sin(theta)/theta. This is an intermediate approach -- an improved approximation that increases the range of accuracy:
cos(x) ~ 1 - x^2/2 + x^4/24 - x^6/720 ...
sin(x)/x ~ 1 - x^2/6 + x^4/120 - x^6/5040 ...
So, the "quadratic correction" mentioned in the first section is:
dQ = {r=1-theta*theta*(1.0/2), ijk=dV*(1-theta*theta*(1.0/6))}
Q' = normalize( quaternion_multiply(dQ, Q) )
Additional terms will extend the accurate range of the approximation, but if you need more than +/-90 degrees per update, you should probably use the exact trig functions described in the second section. You could also use a Taylor expansion in combination with the exact trigonometric solution -- it may be helpful by allowing you to switch seamlessly between the approximation and the exact formula.
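And the quadratic correction as a sketch, same assumptions; the result still gets multiplied into Q and normalized as before:

// Second-order update: dQ = {r = 1 - theta^2/2, ijk = dV*(1 - theta^2/6)}.
// Accurate over a much wider range than the crude first-order version.
Quat quadraticUpdateQuat(double dXrot, double dYrot, double dZrot) {
    double hx = dXrot/2, hy = dYrot/2, hz = dZrot/2;   // half-angle vector dV
    double t2 = hx*hx + hy*hy + hz*hz;                 // theta^2
    double s = 1.0 - t2/6.0;
    return {1.0 - t2/2.0, hx*s, hy*s, hz*s};
}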
I think that the 'gimbal lock' is not a problem of computations/mathematics but rather a problem of some physical devices.
Given that you can represent any orientation with XYZ rotations, even at the 'gimbal lock point' there is an XYZ representation for any imaginable orientation change. Your physical gimbal may not be able to rotate this way, but the mathematics still works :).
The only problem here is your input device - if it's gimbal then it can lock, but you didn't give any details on that.
EDIT: OK, so after you added the function I think I see what you need. The function is perfectly correct. But sadly, you just can't get a nice, easy, continuous way of editing an orientation using XYZ axis rotations. I haven't seen such a solution even in professional 3D packages.
The only thing that comes to my mind is to treat your input like the steering in an aeroplane: you have some initial orientation, and you can rotate it around the X, Y or Z axis by some amount. Then you store the new orientation and clear your inputs. Rotations in 3DMax/Maya/Blender are done the same way.
If you give us more info about the real-world usage you want to achieve, we may come up with better ideas.

"Straight" version of an image with alpha channel

So I'm working on a shader for the upcoming CSS shader spec. I'm building something specifically targeted toward professional video production, and I need to separate out the alpha channel (as luminance, which I've done successfully) and a "straight" version of the image, which has no alpha channel.
Example: https://dl.dropbox.com/u/4031469/shadertest.html (only works with fancy adobe webkit browser)
I’m so close, just trying to figure out the last shader.
Here’s an example of what I’d expect to see. (This is from a Targa file)
https://dl.dropbox.com/u/4031469/Randalls%20Mess.png – the fill (what I haven’t figured out)
https://dl.dropbox.com/u/4031469/Randalls%20Mess%20Alpha.png – the key (aka alpha which I have figured out)
(The final, in case you're curious: https://dl.dropbox.com/u/4031469/final.png )
I thought it'd be a matrix transform, but now that I've tried more and more, I'm thinking it's going to be something more complex than a matrix transform. Am I sadly correct? And if so, how would I even get started attacking this problem?
In your shader, I presume you have some piece of code that samples the textures similar to the following, yes?
vec4 textureColor = texture2D(texture1, texCoord);
textureColor at that point contains 4 values: the Red, Green, Blue, and Alpha channels, each ranging from 0 to 1. You can access each of these colors separately:
float red = textureColor.r;
float alpha = textureColor.a;
or by using a technique known as "swizzling" you can access them in sets:
vec3 colorChannels = textureColor.rgb;
vec2 alphaAndBlue = textureColor.ab;
The color values that you get out of this should not be premultiplied, so the alpha won't have any effect unless you want it to.
It's actually very common to use this to do things like packing the specular level for a texture into the alpha channel of the diffuse texture:
float specularLevel = textureColor.a;
float lightValue = lightFactor + (specularFactor * specularLevel); // Lighting factors calculated from normals
gl_FragColor = vec4(textureColor.rgb * lightValue, 1.0); // 1.0 gives us a constant alpha
Given the flexibility of shaders any number of effects are possible that use and abuse various combinations of color channels, and as such it's difficult to say the exact algorithm you'll need. Hopefully that gives you an idea of how to work with the color channels separately, though.
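One related sketch: if a sampled color does turn out to be premultiplied (which is exactly what producing a "straight" image has to undo), the per-pixel fix is just a divide by alpha. A hypothetical C++ version of that math, since the exact shader environment here is unclear:

#include <algorithm>

struct RGBA { float r, g, b, a; };  // all channels in [0, 1]

// Convert premultiplied alpha to "straight" (unassociated) alpha by
// dividing the color channels back out. Where alpha is 0 the color is
// undefined, so guard against dividing by zero.
RGBA unpremultiply(RGBA p) {
    if (p.a > 0.0f) {
        float inv = 1.0f / p.a;
        p.r = std::min(p.r * inv, 1.0f);
        p.g = std::min(p.g * inv, 1.0f);
        p.b = std::min(p.b * inv, 1.0f);
    }
    return p;
}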
Apparently, according to one of the Adobe guys, this is not possible in the CSS shader language, since the matrix transform is only able to transform existing values, not add a 'bias' vector.
The alternative, which I'm exploring now, is to use SVG filters.
SVG filters are now the way to pull this off in Chrome.
https://dl.dropbox.com/u/4031469/alphaCanvases.html
It's still early days though, and CSS shaders are only supported in the Canary build currently.

Gaussian Falloff Format for Mesh Manipulation

The expression below is described as a Gaussian falloff. I am not seeing e or powers of 2, so I am not sure how this is related to a Gaussian falloff, or whether it is the wrong kind of falloff for me to use to get a nice smooth deformation on my mesh:
Mathf.Clamp01 (Mathf.Pow (360.0, -Mathf.Pow (distance / inRadius, 2.5) - 0.01))
where Mathf.Clamp01 returns a value between 0 and 1.
inRadius is the size of the distortion and distance is determined by:
sqrMagnitude = (vertices[i] - position).sqrMagnitude;
// Early out if too far away
if (sqrMagnitude > sqrRadius)
continue;
distance = Mathf.Sqrt(sqrMagnitude);
vertices is a list of mesh vertices, and position is the point of mesh manipulation/deformation.
My question is two parts:
1) Is the above actually a Gaussian falloff? It is exponential, but there does not seem to be the crucial e or power of 2... (Update: I see how the graph seems to decrease smoothly in a Gaussian-like way. Perhaps this function is not the cause of problem 2 below.)
2) My mesh is not deforming smoothly enough - given the above parameters, would you recommend a different Gaussian falloff?
Don't know about meshes etc., but let's look at that math:
f = 360^(-0.01 - (d/r)^2.5) looks similar enough to a Gaussian function to make a "falloff".
I'll take the exponent apart to show a point:
f = 360^(-(d/r)^2.5) * 360^(-0.01) = 0.9428 * 360^(-(d/r)^2.5)
as d → +∞, f → 0
as d → 0+, f → 0.9428
The exponent of 360 is always negative (assuming 'distance' and 'inRadius' are always positive) and gets bigger (more negative) almost cubically (power of 2.5) with distance, so the function "falls off", and does so pretty fast.
Conclusion: the function is not Gaussian, because (among other things) it behaves badly for negative input. It does exhibit the "falloff" behavior you are looking for.
Changing r will change the speed of the falloff. When d == r, f = (1/360) * 0.9428 ≈ 0.0026.
The function never goes above 0.9428 or below zero, so the clamping in the code does nothing.
I don't see any specific reason for the constant 360; changing it changes the slope a bit.
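For comparison, here's a hedged C++ sketch of the question's falloff next to a true Gaussian; the k parameter is my own knob, not something from the original code:

#include <algorithm>
#include <cmath>

// The question's falloff: 360^(-(d/r)^2.5 - 0.01), clamped to [0, 1].
double questionFalloff(double d, double r) {
    double f = std::pow(360.0, -std::pow(d / r, 2.5) - 0.01);
    return std::clamp(f, 0.0, 1.0);
}

// A true Gaussian falloff: exp(-k * (d/r)^2). It equals 1 at d = 0, and
// k controls the decay; k = 4 brings it close to zero at d = r.
double gaussianFalloff(double d, double r, double k = 4.0) {
    return std::exp(-k * (d / r) * (d / r));
}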
cheers!

Similarity Between Colors

I'm writing a program that works with images and at some point I need to posterize the image. This means I need to bin the colors, but I'm having trouble deciding how to tell how close one color is to another.
Given a color in RGB, I can think of at least 2 ways to see how different they are:
|r1 - r2| + |g1 - g2| + |b1 - b2|
sqrt((r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2)
And if I move into HSV, I can think of other ways of doing it.
So I ask, ignoring speed, what is the best way to tell how similar two colors are? Best meaning most accurate to the human eye.
Well, if speed is not an issue, the most accurate way would be to take some sample images and apply the filter to them using various cutoff values for the distance. The distance would be determined by one of the equations on the Color_difference page that astander linked to, meaning you'd have to do the calculations in one of the color spaces listed there and then convert back to sRGB or similar (which also means you'd first need to convert the image into that color space, if it isn't in it to begin with). Then have a large number of people examine the images to see what looks best to them, and go with the cutoff value for the images that the majority agrees look best.
Basically, it's largely a matter of subjectivity; in fact, it also depends on how stylized you want the images, and you might even want to add some sort of control so that you can alter the cutoff distance on the fly.
If speed does become a bit of an issue and/or you want more simplicity, then just use your second choice for the distance calculation (which is simply the CIE76 equation; just make sure to use the L*a*b* color space), with the cutoff being around 2 or 2.3.
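A sketch of that route in C++: convert sRGB to L*a*b* under a D65 white point, and then CIE76 is just the Euclidean distance in L*a*b* (the constants are the standard published ones):

#include <cmath>

struct Lab { double L, a, b; };

// Undo the sRGB transfer curve (channel in [0, 1]).
static double srgbToLinear(double c) {
    return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

static double labF(double t) {
    const double eps = 216.0 / 24389.0;    // (6/29)^3
    const double kappa = 24389.0 / 27.0;
    return (t > eps) ? std::cbrt(t) : (kappa * t + 16.0) / 116.0;
}

// sRGB (0..1 per channel) -> CIE L*a*b*, D65 reference white.
Lab srgbToLab(double r, double g, double b) {
    r = srgbToLinear(r); g = srgbToLinear(g); b = srgbToLinear(b);
    double X = 0.4124*r + 0.3576*g + 0.1805*b;   // linear sRGB -> XYZ
    double Y = 0.2126*r + 0.7152*g + 0.0722*b;
    double Z = 0.0193*r + 0.1192*g + 0.9505*b;
    double fx = labF(X / 0.95047);               // divide by D65 white point
    double fy = labF(Y / 1.00000);
    double fz = labF(Z / 1.08883);
    return {116.0*fy - 16.0, 500.0*(fx - fy), 200.0*(fy - fz)};
}

// CIE76 color difference: Euclidean distance in L*a*b*.
double deltaE76(const Lab& c1, const Lab& c2) {
    double dL = c1.L - c2.L, da = c1.a - c2.a, db = c1.b - c2.b;
    return std::sqrt(dL*dL + da*da + db*db);
}

With this, two colors whose deltaE76 is below roughly 2.3 are generally considered hard to distinguish, which is where the suggested cutoff comes from.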
What do you mean by "posterize the image"?
If you're trying to cluster the colors into bins, you should look at cluster analysis.
Just a comment if you are going to move to HSV (or similar spaces):
Diffing on H: the difference between 0° and 359° is numerically big, but perceptually it is negligible (the angle needs to wrap around; see the sketch after these notes).
If V or S are small, differences in H are also perceptually small.
For computer vision apps, what matters is usually not perceptual difference (that's used mostly by paint manufacturers) but whether two colors belong to the same object/segment. This means we might partially ignore V, which can change with lighting conditions.
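For the hue point above, the angular difference has to wrap around at 360°; a tiny sketch:

#include <algorithm>
#include <cmath>

// Angular distance between two hues in degrees: 0 and 359 are only 1 apart.
double hueDistance(double h1, double h2) {
    double d = std::fabs(h1 - h2);
    return std::min(d, 360.0 - d);
}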
