Tangent Space View Direction based factor value remap - math

I'm trying to set up a mask similar to what Fresnel produces. Unfortunately Fresnel gives pretty bad results at grazing angles, so I ended up using this:
float mask = abs(-viewDirTangentSpace.x / viewDirTangentSpace.z);
This gives a nice result, but I would like the values to evolve linearly. Is there a way to remap the range so it behaves like this (Photoshop)?

Related

applying noise to voronoï for procedural generation

I know how to generate a Voronoï / cell noise such as this one using Delaunay triangles:
But how do I apply noise to the lines to make them more natural? I cannot have sharp edges for procedural generation, as they would look very out of place and unpleasant.
I am looking for a result that would look somewhat like this:
(the picture is from a more advanced project)
Note: I cannot generate the entire map at once (it is too big), so the Voronoï diagram is used as metadata, but I need a way to know which cell the coordinates (x, y) fall into after deformation in order to make it work.
I would randomize 3-5 points on each line to generate sub-segments, based on a seed computed from the coordinates of the two original segment endpoints.
This kind of random seed gives the same results each time, so you can either cache the results or recompute the identical ones on demand.
More zoom could mean more random sub-segments generated with the same method.
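As a minimal sketch of that idea (the hash, the 3-5 point count and the jitter amount are illustrative choices, not from the original answer), something like this keeps the subdivision deterministic per edge:

#include <cmath>
#include <cstdint>
#include <cstring>
#include <random>
#include <vector>

struct Point { double x, y; };

// Deterministic seed derived from the two original segment endpoints,
// so the same edge always produces the same sub-segments.
uint64_t edgeSeed(const Point& a, const Point& b) {
    auto h = [](double v) {
        uint64_t bits;
        std::memcpy(&bits, &v, sizeof bits);
        return bits * 0x9E3779B97F4A7C15ull;
    };
    return h(a.x) ^ (h(a.y) << 1) ^ (h(b.x) << 2) ^ (h(b.y) << 3);
}

// Jitter 3-5 intermediate points perpendicular to the edge a-b.
// Assumes a != b and that a shared edge is always passed with the same endpoint order.
std::vector<Point> subdivideEdge(const Point& a, const Point& b, double jitter) {
    std::mt19937_64 rng(edgeSeed(a, b));
    int count = std::uniform_int_distribution<int>(3, 5)(rng);
    std::uniform_real_distribution<double> offset(-jitter, jitter);

    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    double nx = -dy / len, ny = dx / len;   // unit normal of the edge

    std::vector<Point> pts{a};
    for (int i = 1; i <= count; ++i) {
        double t = double(i) / (count + 1);
        double off = offset(rng);
        pts.push_back({a.x + t * dx + off * nx, a.y + t * dy + off * ny});
    }
    pts.push_back(b);
    return pts;
}

Since the seed depends only on the endpoints, the same edge always deforms the same way, which is what makes caching and recomputation interchangeable.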

Rotate model around x,y,z axes, without gimbal lock, with input data always as x,y,z axes angle rotations

I have an input device that gives me 3 angles -- rotation around x,y,z axes.
Now I need to use these angles to rotate the 3D space, without gimbal lock. I thought I could convert to Quaternions, but apparently since I'm getting the data as 3 angles this won't help?
If that's the case, how can I correctly rotate the space, keeping in mind that my input data is simply x, y, z rotation angles, so I can't just "avoid" that? Shuffling the order of the axis rotations won't help either -- all axes will be used anyway. But surely there must be a way to do this?
If it helps, the problem can pretty much be reduced to implementing this function:
void generateVectorsFromAngles(double &lastXRotation,
                               double &lastYRotation,
                               double &lastZRotation,
                               JD::Vector &up,
                               JD::Vector &viewing) {
    JD::Vector yaxis = JD::Vector(0,0,1);
    JD::Vector zaxis = JD::Vector(0,1,0);
    JD::Vector xaxis = JD::Vector(1,0,0);

    up.rotate(xaxis, lastXRotation);
    up.rotate(yaxis, lastYRotation);
    up.rotate(zaxis, lastZRotation);

    viewing.rotate(xaxis, lastXRotation);
    viewing.rotate(yaxis, lastYRotation);
    viewing.rotate(zaxis, lastZRotation);
}
in a way that avoids gimbal lock.
If your device is giving you absolute X/Y/Z angles (which implies something like actual gimbals), it will have some specific sequence to describe what order the rotations occur in.
Since you say that "the order doesn't matter", this suggests your device is almost certainly a 3-axis rate gyro, and you're getting differential angles. In this case, you want to combine your 3 differential angles into a rotation vector, and use it to update an orientation quaternion, as follows:
given differential angles (in radians):
    dXrot, dYrot, dZrot
and current orientation quaternion Q such that:
    {r=0, ijk=rot(v)} = Q {r=0, ijk=v} Q*
construct an update quaternion:
    dQ = {r=1, i=dXrot/2, j=dYrot/2, k=dZrot/2}
and update your orientation:
    Q' = normalize( quaternion_multiply(dQ, Q) )
Note that dQ is only a crude approximation of a unit quaternion (which makes the normalize() operation more important than usual). However, if your differential angles are not large, it is actually quite a good approximation. Even if your differential angles are large, this simple approximation makes less nonsense than many other things you could do. If you have problems with large differential angles, you might try adding a quadratic correction to improve your accuracy (as described in the third section).
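As a minimal C++ sketch of this update (the Quat struct and the function names are illustrative; use whatever quaternion type your math library already has):

#include <cmath>

struct Quat { double r, i, j, k; };   // scalar part r, vector part (i, j, k)

// Hamilton product a*b.
Quat multiply(const Quat &a, const Quat &b) {
    return { a.r*b.r - a.i*b.i - a.j*b.j - a.k*b.k,
             a.r*b.i + a.i*b.r + a.j*b.k - a.k*b.j,
             a.r*b.j - a.i*b.k + a.j*b.r + a.k*b.i,
             a.r*b.k + a.i*b.j - a.j*b.i + a.k*b.r };
}

Quat normalize(Quat q) {
    double n = std::sqrt(q.r*q.r + q.i*q.i + q.j*q.j + q.k*q.k);
    return { q.r/n, q.i/n, q.j/n, q.k/n };
}

// First-order update: dQ = {1, dXrot/2, dYrot/2, dZrot/2}.
// Fine for small differential angles; normalize() mops up the approximation error.
void integrateRates(Quat &Q, double dXrot, double dYrot, double dZrot) {
    Quat dQ = { 1.0, dXrot * 0.5, dYrot * 0.5, dZrot * 0.5 };
    Q = normalize(multiply(dQ, Q));
}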
However, a more likely problem is that any kind of repeated update like this tends to drift, simply from accumulated arithmetic error if nothing else. Also, your physical sensors will have bias -- e.g., your rate gyros will have offsets which, if not corrected for, will cause your orientation estimate Q to precess slowly. If this kind of drift matters to your application, you will need some way to detect/correct it if you want to maintain a stable system.
If you do have a problem with large differential angles, there is a trigonometric formula for computing an exact update quaternion dQ. The assumption is that the total rotation angle should be linearly proportional to the magnitude of the input vector; given this, you can compute an exact update quaternion as follows:
given differential half-angle vector (in radians):
    dV = (dXrot, dYrot, dZrot)/2
compute the magnitude of the half-angle vector:
    theta = |dV| = 0.5 * sqrt(dXrot^2 + dYrot^2 + dZrot^2)
then the update quaternion, as used above, is:
    dQ = {r=cos(theta), ijk=dV*sin(theta)/theta}
       = {r=cos(theta), ijk=normalize(dV)*sin(theta)}
Note that directly computing either sin(theta)/theta or normalize(dV) is singular near zero, but the limit value of the vector ijk near zero is simply ijk = dV = (dXrot, dYrot, dZrot)/2, as in the approximation from the first section. If you do compute your update quaternion this way, the straightforward method is to check for this and use the approximation for small theta (for which it is an extremely good approximation!).
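Continuing the C++ sketch from above (the 1e-4 threshold is an illustrative cutoff, not a tuned value); the small-theta branch uses the leading Taylor terms given below:

// Exact update quaternion from the differential angles; falls back to the
// series expansion near zero to avoid the sin(theta)/theta singularity.
Quat exactUpdate(double dXrot, double dYrot, double dZrot) {
    double hx = dXrot * 0.5, hy = dYrot * 0.5, hz = dZrot * 0.5;   // dV
    double theta = std::sqrt(hx*hx + hy*hy + hz*hz);

    double c, s;   // cos(theta) and sin(theta)/theta
    if (theta < 1e-4) {
        c = 1.0 - theta*theta*0.5;
        s = 1.0 - theta*theta/6.0;
    } else {
        c = std::cos(theta);
        s = std::sin(theta) / theta;
    }
    return { c, hx*s, hy*s, hz*s };
}

// Usage: Q = normalize(multiply(exactUpdate(dXrot, dYrot, dZrot), Q));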
Finally, another approach is to use a Taylor expansion for cos(theta) and sin(theta)/theta. This is an intermediate approach -- an improved approximation that increases the range of accuracy:
cos(x) ~ 1 - x^2/2 + x^4/24 - x^6/720 ...
sin(x)/x ~ 1 - x^2/6 + x^4/120 - x^6/5040 ...
So, the "quadratic correction" mentioned in the first section is:
dQ = {r=1-theta*theta*(1.0/2), ijk=dV*(1-theta*theta*(1.0/6))}
Q' = normalize( quaternion_multiply(dQ, Q) )
Additional terms will extend the accurate range of the approximation, but if you need more than +/-90 degrees per update, you should probably use the exact trig functions described in the second section. You could also use a Taylor expansion in combination with the exact trigonometric solution -- it may be helpful by allowing you to switch seamlessly between the approximation and the exact formula.
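In the same sketch style (reusing the Quat struct above), the trig-free quadratic-correction update is just:

// Quadratic correction only -- no trig calls; accurate for moderate per-update angles.
Quat quadraticUpdate(double dXrot, double dYrot, double dZrot) {
    double hx = dXrot * 0.5, hy = dYrot * 0.5, hz = dZrot * 0.5;   // dV
    double t2 = hx*hx + hy*hy + hz*hz;                             // theta^2
    return { 1.0 - t2 * 0.5,
             hx * (1.0 - t2 / 6.0),
             hy * (1.0 - t2 / 6.0),
             hz * (1.0 - t2 / 6.0) };
}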
I think 'gimbal lock' is not a problem of the computations/mathematics, but rather a problem of some physical devices.
Given that you can represent any orientation with XYZ rotations, even at the 'gimbal lock point' there is an XYZ representation for any imaginable orientation change. Your physical gimbal may not be able to rotate this way, but the mathematics still works :).
The only problem here is your input device - if it's a gimbal then it can lock, but you didn't give any details on that.
EDIT: OK, so after you added the function I think I see what you need. The function is perfectly correct. But sadly, you just can't get a nice, easy, continuous way of editing an orientation using XYZ axis rotations. I haven't seen such a solution even in professional 3D packages.
The only thing that comes to my mind is to treat your input like steering an aeroplane - you have some initial orientation and you can rotate it around the X, Y or Z axis by some amount. Then you store the new orientation and clear your inputs. Rotations in 3ds Max/Maya/Blender are done the same way.
If you give us more info about real-world usage you want to achieve we may get some better ideas.

Distribution Pattern (Texture) and Ramp Math?

I'm trying to achieve the ramp effect as seen here:
(source: splashdamage.com)
Blending the textures based on a distribution pattern is easy. Basically, just this (HLSL):
Result = lerp(SampleA, SampleB, DistributionPatternSample);
Which works, but without the ramp.
http://aaronm.nuclearglory.com/private/stackoverflow/result1.png
My first guess was that to incorporate "Ramp Factor" I could just do this:
Result = lerp(A, B, (1.0f - Ramp)*Distribution);
However, that does not work: if Ramp is 1.0 the blend factor becomes zero, so only 'A' is used. This is what I get when Ramp is 1.0f with that method:
http://aaronm.nuclearglory.com/private/stackoverflow/result2.png
I've attempted to just multiply the ramp with the distribution, which is obviously incorrect. (Figured it's worth a shot to try and discover interesting effects. No interesting effect was discovered.)
I've also attempted subtracting the Ramp from the Distribution, like so:
Result = lerp(A, B, saturate(Distribution - Ramp));
But the issue with that is that the ramp is meant to control sharpness of the blend. So, that doesn't really do anything either.
I'm hoping someone can inform me what I need to do to accomplish this, mathematically. I'm trying to avoid branching because this is shader code. I can simulate branching by multiplying out results, but I'd prefer not to do this. I am also hoping someone can fill me in on why the math is formulated the way it is for the sharpness. Throwing around math without knowing how to use it can be troublesome.
For context, that top image was taken from here:
http://wiki.splashdamage.com/index.php/A_Simple_First_Megatexture
I understand how MegaTextures (the clip-map approach) and Virtual Texturing (the more advanced approach) work just fine. So I don't need any explanation on that. I'm just trying to implement this particular blend in a shader.
For reference, this is the distribution pattern texture I'm using.
http://aaronm.nuclearglory.com/private/stackoverflow/distribution.png
Their ramp width is essentially just a contrast change on the distribution map. A brute-force version of this is a simple rescale and clamp.
Things we want to preserve are that 0.5 maps to 0.5, and that the texture goes from 0 to 1 over a region of width w.
This gives
x = 0.5 + (x-0.5)/w
This means the final HLSL will look something like this:
Result = lerp(A, B, clamp( 0.5 + (Distribution-0.5)/w, 0, 1) );
Now if this ends up looking jaggy at the edges you can switch to using smoothstep, in which case you'd get
Result = lerp(A, B, smoothstep(0, 1, 0.5 + (Distribution-0.5)/w));
However, one thing to keep in mind here is that this type of thresholding works best with reasonably smooth distribution patterns. I'm not sure if yours is going to be smooth enough (unless that is a small version of a megatexture, in which case you're probably OK).

Gaussian Falloff Format for Mesh Manipulation

The return value below is described as a Gaussian falloff. I am not seeing e or powers of 2, so I am not sure how this is related to a Gaussian falloff, or if it is the wrong kind of falloff for me to use to get a nice smooth deformation on my mesh:
Mathf.Clamp01(Mathf.Pow(360.0f, -Mathf.Pow(distance / inRadius, 2.5f) - 0.01f))
where Mathf.Clamp01 returns a value between 0 and 1.
inRadius is the size of the distortion and distance is determined by:
sqrMagnitude = (vertices[i] - position).sqrMagnitude;

// Early out if too far away
if (sqrMagnitude > sqrRadius)
    continue;

distance = Mathf.Sqrt(sqrMagnitude);
vertices is a list of mesh vertices, and position is the point of mesh manipulation/deformation.
My question is two parts:
1) Is the above actually a Gaussian falloff? It is exponential, but there does not seem to be the crucial e or power of 2... (Update - I see how the graph seems to decrease smoothly in a Gaussian-like way. Perhaps this function is not the cause of problem 2 below.)
2) My mesh is not deforming smoothly enough - given the above parameters, would you recommend a different Gaussian falloff?
Don't know about meshes etc., but let's look at that math:
f = 360^(-0.1 - (d/r)^2.5) looks similar enough to a Gaussian function to act as a "falloff".
I'll take the exponent apart to show a point:
f = 360^(-(d/r)^2.5) * 360^(-0.1) = 0.5551 * 360^(-(d/r)^2.5)
if d --> +inf then f --> 0
if d --> 0+ then f --> 0.5551
The exponent of 360 is always negative (assuming 'distance' and 'inRadius' are always positive) and gets bigger (more negative) almost cubically (power of 2.5) with distance, so the function "falls off", and does so pretty fast.
Conclusion: the function is not Gaussian, because it behaves badly for negative input and probably for other reasons, but it does exhibit the "falloff" behavior you are looking for.
Changing r changes the speed of the falloff. When d == r, f = (1/360)*0.5551.
The function never exceeds 0.5551 and never goes below zero, so the Clamp01 "clipping" in the code has no effect.
I don't see any specific reason for the constant 360 - changing it changes the slope a bit.
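For reference, a textbook Gaussian-shaped falloff has the form exp(-k*(d/r)^2); a minimal sketch in plain C++ rather than Unity's Mathf, with the width constant k an illustrative choice:

#include <cmath>

// Gaussian-shaped falloff: 1 at d == 0, close to 0 at d == r and beyond.
// Larger k makes the falloff tighter; k is an illustrative parameter.
double gaussianFalloff(double distance, double inRadius, double k = 4.0) {
    double t = distance / inRadius;
    return std::exp(-k * t * t);   // already in [0, 1] for any real t, so no clamp needed
}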
cheers!

Scaling connected lines

I have some kind of shape consisting of vertical, horizontal and diagonal lines. For each line I have its starting X,Y and ending X,Y (this is my input - just 2 points defining a line), and I would like to make the whole shape scalable (just by changing the value of a scale-ratio variable) while preserving the proper connection of the lines and the proportions. To give a better idea of what I mean: it would be as if I had the same lines in a vector editor.
Is that possible with an algorithm, and if there is no such algorithm, could you please suggest another possible solution?
Thank you very much in advance!
What point do you want it to scale about? You could scale relative to the first point, the center, or some other arbitrary location. Typically, you subtract out an offset (for instance the first point in your input), multiply by a scale factor, and then add the offset back.
A more systematic approach in computer graphics would be to use a transformation matrix... although that's probably overkill in your case.
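A minimal sketch of that subtract-scale-add idea (the Point/Line structs and the pivot argument are illustrative, not from the question):

#include <vector>

struct Point { double x, y; };
struct Line  { Point start, end; };

// Scale a point about an arbitrary pivot: translate, scale, translate back.
Point scaleAbout(const Point &p, const Point &pivot, double ratio) {
    return { pivot.x + (p.x - pivot.x) * ratio,
             pivot.y + (p.y - pivot.y) * ratio };
}

// Scaling every endpoint by the same ratio about the same pivot keeps shared
// endpoints identical, so connected lines stay connected and proportions are preserved.
void scaleShape(std::vector<Line> &lines, const Point &pivot, double ratio) {
    for (Line &line : lines) {
        line.start = scaleAbout(line.start, pivot, ratio);
        line.end   = scaleAbout(line.end,   pivot, ratio);
    }
}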
