Question :
I'm looking for a 2D noise whose gradient always has a norm of 1, which is equivalent to saying that its isolines are always at the same distance from each other. It can be any type of noise, but its gradient must be continuous ( and if possible the second derivative too ). My goal is to implement it as a function in a fragment shader, but just having the mathematical principle would be enough.
To explain more graphically what I want, here is a classic gradient noise with isolines and simple lighting :
As you can see, the isoline density is variable because the slope isn't constant.
On this second picture, you can see the exact same noise but with different lighting that I made by normalizing the gradient of the first one :
This looks much more like what I'm looking for; however, as you can see, the isolines are still wrong. I just cheated to get the lighting I wanted, but I still don't have the noise itself.
Ways of thought :
During my research, I tried to do something similar to gradient noise ( where the gradient is defined by a random vector at each grid point ). I focused on a square grid noise, but a simplex grid would work too. I came across two main potential ways to solve the problem :
Finding the gradient of the noise first:
It is possible to recover a function from its gradient and one fixed value. The reason this doesn't work with the normalized gradient I used for the lighting of the second picture is that the curl ( rotational ) of the gradient must be 0 everywhere ( otherwise no continuous function can have that gradient ). So the gradient I'm looking for must have a curl of 0, have a norm of 1, and if we integrate it from one node to the next, the result must be zero ( because all nodes of a gradient noise have a value of 0 ).
norm of 1 :
I found three ways to deal with this constraint : we can define the gradient as (cos(a(x, y)), sin(a(x, y))), require that the dot product between the gradient and its derivative is 0, or simply require that the dot product of the gradient with itself is 1.
rotational :
The derivative of the x component of the gradient with respect to y must be equal to the derivative of the y component with respect to x ( which, with the trigonometric form above, becomes : cos(a)*da/dx = -sin(a)*da/dy ).
integral from a node to the next one :
I haven't investigated that part yet.
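Before going further, both differential conditions can be sanity-checked numerically on the textbook example of a unit-gradient field, the distance to a point ( the unit-norm condition is exactly the eikonal equation |grad f| = 1 ). A small Python sketch with a made-up center point and finite differences:

```python
import math

# Distance field to a point: away from the center it is smooth, its
# gradient has norm 1, and since it IS a gradient its curl is 0.
# The center (cx, cy) and the sample point are arbitrary choices.
cx, cy = 0.3, -0.7
h = 1e-4  # finite-difference step

def f(x, y):
    return math.hypot(x - cx, y - cy)

def grad(x, y):
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return gx, gy

x, y = 1.2, 0.8
gx, gy = grad(x, y)
norm = math.hypot(gx, gy)  # should be ~1

# curl = d(gy)/dx - d(gx)/dy, again by central differences; should be ~0
curl = ((grad(x + h, y)[1] - grad(x - h, y)[1])
        - (grad(x, y + h)[0] - grad(x, y - h)[0])) / (2 * h)
```

A single distance field is of course not a noise ( it has a cone singularity at its center and no randomness ), but any candidate gradient field can be run through the same two checks.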
Finding the noise itself:
It solves the "nodes = 0" problem easily, but the main one is still there : the norm of the gradient must be 1 everywhere.
Conclusion :
Of course, those are just ideas, and if your answer is completely different from them, I'll take it anyway ( and with a big smile ).
Related
I am studying/implementing a version of Perlin Noise and Improved Perlin Noise. Perlin says in his paper that he replaced the smoothstep function
3t^2 - 2t^3
that he used to interpolate the 8 linear functions at the grid cell's corners with the function:
6t^5 - 15t^4 + 10t^3
Because the 2nd order derivative of the smoothstep function is discontinuous. He says (and that's clearly visible in the image he shows) that this causes visual artefacts, due to the way the normals look as a result of this function being used. Now, I understand what a discontinuous function is. I also understand how normals are computed in the Perlin noise function, using the partial derivatives of the noise function. But I don't understand why the 2nd order derivative being discontinuous causes an issue with the normals. The normals are computed using the 1st order derivative of the noise function, not the 2nd order derivative. So how can the fact that the 2nd order derivative is not continuous have such an effect on the normals?
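The difference can be made concrete by evaluating the second derivatives of the two fade curves at the cell borders ( derivatives written out by hand; plain Python ):

```python
# Adjacent cells each remap their local coordinate to t in [0, 1], so any
# nonzero endpoint value of a derivative becomes a jump across cell borders.

def d2_smoothstep(t):  # f = 3t^2 - 2t^3  ->  f'' = 6 - 12t
    return 6 - 12 * t

def d2_quintic(t):     # f = 6t^5 - 15t^4 + 10t^3  ->  f'' = 120t^3 - 180t^2 + 60t
    return 120 * t**3 - 180 * t**2 + 60 * t

print(d2_smoothstep(0), d2_smoothstep(1))  # 6 -6 : jumps from -6 to +6 at each border
print(d2_quintic(0), d2_quintic(1))        # 0 0  : second derivative stays continuous
```

The normals come from the 1st derivative, but how fast the normals change across a border comes from the 2nd, which is where the smoothstep fade gives the shading a visible crease.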
For more details on the improved Noise Function.
So how can the fact that the 2nd order derivative is not continuous
have such an effect on the normals?
Firstly, remember that the 2nd order derivative of the noise is the normal's 1st order derivative. The problem is not about calculating the normals, but rather about how smoothly the normals progress in space. This directly affects lighting and puts in evidence the smoothness of the underlying function; generally speaking, the more continuously differentiable a function is, the smoother it will feel.
Although Perlin's former method results in continuous normals and continuous shading, you can still tell where the border is, because the shading does not have a continuous derivative, and our perception is designed to perceive that for what it is: a continuous surface that is not that smooth at that border.
Take a look at these four functions (from top to bottom):
Blatantly discontinuous mapping
Continuous mapping, discontinuous derivative
Continuous derivative
Infinitely continuous derivative
half TriangleWave (half x) {
    x = 2 * frac(x / 2);         // periodic ramp in [0, 2)
    return min(x, 2 - x);        // fold into a triangle: continuous, but kinked
}

half SShape (half x) {
    x = saturate(x);
    return x * x * (3 - 2 * x);  // smoothstep: continuous 1st derivative
}

fixed4 frag (v2f f) : SV_Target {
    // Bottom to top coordinate system
    half x = 3 * f.tex.x;
    half y = f.tex.y;
    if (y > 0.75)
        return frac(x);                      // 1: discontinuous mapping
    if (y > 0.5)
        return TriangleWave(x);              // 2: continuous, discontinuous derivative
    if (y > 0.25)
        return SShape(TriangleWave(x));      // 3: continuous derivative
    return 0.5 - 0.5 * cos(x * 3.141592);    // 4: infinitely differentiable
}
The last one is the smoothest, but you can hardly tell it from the third one.
But the second one, although a continuous mapping, still lets you feel a border there.
Due to the normals' direct effect on lighting (the previous shader can easily be thought of as a lighting calculation), you are basically choosing between the second and third option. That's why you usually want the normals themselves to be continuously differentiable, and not settle for just continuity.
I'm trying to teach myself some machine learning, and have been using the MNIST database (http://yann.lecun.com/exdb/mnist/) to do so. The author of that site wrote a paper in '98 on all different kinds of handwriting recognition techniques, available at http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf.
The 10th method mentioned is a "Tangent Distance Classifier". The idea is that if you place each image in an (NxM)-dimensional vector space, you can compute the distance between two images as the distance between the hyperplanes formed by each, where the hyperplane is given by taking the point and rotating the image, rescaling the image, translating the image, etc.
I can't figure out enough to fill in the missing details. I understand that most of these are indeed linear operators, so how does one use that fact to then create the hyperplane? And once we have a hyperplane, how do we take its distance with other hyperplanes?
I will give you some hints. You need some background knowledge in image processing; please refer to 2 and 3 for details.
2 is a c implementation of tangent distance
3 is a paper that describes tangent distance in more details
Image Convolution
According to 3, the first step is to smooth the picture. Below we show the results of 3 different smoothing operations (check section 4 of 3) (the left column shows the resulting images, the right column shows the original images and the convolution operators). This step maps the discrete vector to a continuous one so that it is differentiable. The author suggests using a Gaussian function. If you need more background on image convolution, here is an example.
After this step is done, you have calculated the horizontal and vertical derivative images x1 and x2 (the per-pixel shifts of the smoothed image in the x and y directions).
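As a sketch of that smoothing-plus-differentiation step ( plain Python; the 4x4 image, the kernel radius, and the use of central differences are my own illustrative choices, not the paper's exact operators ):

```python
import math

def gaussian_kernel(sigma, radius):
    # normalized 1D Gaussian taps
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve1d(row, kernel, radius):
    out = []
    n = len(row)
    for i in range(n):
        acc = 0.0
        for di, w in zip(range(-radius, radius + 1), kernel):
            acc += w * row[min(max(i + di, 0), n - 1)]  # clamp at borders
        out.append(acc)
    return out

def smooth(img, sigma=1.0, radius=2):
    # separable Gaussian: horizontal pass, then vertical pass on the transpose
    k = gaussian_kernel(sigma, radius)
    rows = [convolve1d(r, k, radius) for r in img]
    cols = [convolve1d(list(c), k, radius) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def derivatives(img):
    # x1: horizontal derivative, x2: vertical derivative (central differences)
    h, w = len(img), len(img[0])
    x1 = [[(img[k][min(j + 1, w - 1)] - img[k][max(j - 1, 0)]) / 2 for j in range(w)]
          for k in range(h)]
    x2 = [[(img[min(k + 1, h - 1)][j] - img[max(k - 1, 0)][j]) / 2 for j in range(w)]
          for k in range(h)]
    return x1, x2

img = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
smoothed = smooth(img)
x1, x2 = derivatives(smoothed)
```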
Calculating Scaling Tangent
Here I show you one of the tangent calculations implemented in 2 - the scaling tangent. From 3, we know the transformation is as below:
/* scaling */
for (k = 0; k < height; k++)
    for (j = 0; j < width; j++) {
        currentTangent[ind] = ((j + offsetW) * x1[ind] + (k + offsetH) * x2[ind]) * factor;
        ind++;
    }
In the beginning of td.c in 2's implementation, we know the below definition:
factorW = ((double)width * 0.5);
offsetW = 0.5 - factorW;
factorW = 1.0 / factorW;
factorH = ((double)height * 0.5);
offsetH = 0.5 - factorH;
factorH = 1.0 / factorH;
factor = (factorH < factorW) ? factorH : factorW; /* min */
The author is using images of size 16x16. So we know
factor = factorW = factorH = 1/8,
and
offsetH = offsetW = 0.5 - 8 = -7.5
Also note we already computed
x1[ind] = the horizontal derivative of the smoothed image at that pixel,
x2[ind] = the vertical derivative of the smoothed image at that pixel.
So then, plugging in those constants:
currentTangent[ind] = ((j - 7.5) * x1[ind] + (k - 7.5) * x2[ind]) / 8
                    = x1 * (j - 7.5) / 8 + x2 * (k - 7.5) / 8.
Since j (and likewise k) is an integer between 0 and 15 inclusive (the width and the height of the image are 16 pixels), (j - 7.5)/8 is just a fraction between -0.9375 and 0.9375.
So I guess (j + offsetW) * factor is the horizontal displacement for each pixel, which is proportional to the pixel's horizontal distance from the center of the image. Similarly, (k + offsetH) * factor is the vertical displacement.
Calculating Rotation Tangent
Rotation tangent is defined as below in 3:
/* rotation */
for (k = 0; k < height; k++)
    for (j = 0; j < width; j++) {
        currentTangent[ind] = ((k + offsetH) * x1[ind] - (j + offsetW) * x2[ind]) * factor;
        ind++;
    }
Using the conclusion from the previous section, we know (k + offsetH) * factor corresponds to y, and similarly -(j + offsetW) * factor corresponds to -x. So this is exactly the formula used in 3.
You can find all the other tangents described in 3 implemented in 2. I like the image below from 3, which clearly shows the displacement effects of the different transformation tangents.
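Putting the two loops together, here is a Python sketch of the same computation ( variable names mirror td.c; the 2x2 derivative images are made-up values just to exercise the code ):

```python
def tangents(x1, x2):
    # x1, x2: horizontal and vertical derivative images as 2D lists
    height, width = len(x1), len(x1[0])
    factor_w = width * 0.5
    offset_w = 0.5 - factor_w
    factor_h = height * 0.5
    offset_h = 0.5 - factor_h
    factor = min(1.0 / factor_h, 1.0 / factor_w)

    scaling, rotation = [], []
    for k in range(height):
        for j in range(width):
            # scaling: displacement (x, y), proportional to the distance
            # from the pixel to the image center
            scaling.append(((j + offset_w) * x1[k][j] + (k + offset_h) * x2[k][j]) * factor)
            # rotation: displacement (y, -x) at each pixel
            rotation.append(((k + offset_h) * x1[k][j] - (j + offset_w) * x2[k][j]) * factor)
    return scaling, rotation

x1 = [[1.0, 0.5], [0.0, -0.5]]
x2 = [[0.0, 1.0], [0.5, 0.0]]
scaling, rotation = tangents(x1, x2)
```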
Calculating the tangent distance between images
Just follow the implementation in tangentDistance function:
// determine the tangents of the first image
calculateTangents(imageOne, tangents, numTangents, height, width, choice, background);
// find the orthonormal tangent subspace
numTangentsRemaining = normalizeTangents(tangents, numTangents, height, width);
// determine the distance to the closest point in the subspace
dist=calculateDistance(imageOne, imageTwo, (const double **) tangents, numTangentsRemaining, height, width);
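With the tangents orthonormalized, the distance step reduces to projecting the difference image onto the tangent subspace and measuring the residual. A rough plain-Python sketch of that idea ( a one-sided tangent distance; the real calculateDistance in 2 may differ in details ):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def tangent_distance(image_one, image_two, tangents):
    # tangents are assumed orthonormal (what normalizeTangents produces)
    d = [a - b for a, b in zip(image_one, image_two)]
    for t in tangents:
        c = dot(d, t)                                # component along this tangent
        d = [di - c * ti for di, ti in zip(d, t)]    # remove it
    return math.sqrt(dot(d, d))                      # length of what is left

# Toy vectors: the tangent spans the x axis, so only the y difference survives.
print(tangent_distance([1.0, 2.0], [0.0, 0.0], [[1.0, 0.0]]))  # 2.0
```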
I think the above should be enough to get you started and if anything is missing, please read 3 carefully and see corresponding implementations in 2. Good luck!
Let's say I have a plane, with the four co-ordinates:
(0,0,0)
(0,0,1)
(1,2,0)
(1,2,1)
So it's a basic plane with a gradient of 2 in the x axis and 0 in the others? I can figure that out just by plotting/looking at it.
How could I work out the gradient of any given plane (assuming the four co-ords form a flat surface)?
I'm very confused when it comes to vectors/matrices/co-ords/transformations etc., but I need to know the gradient of planes for a java3d project I'm making.
I could be wrong, but I think you're confused about what a gradient is. If I'm thinking of the correct definition of gradient, then you can only take the gradient of a function. In other words, let f:R^3 -> R; then grad(f) = (df/dx, df/dy, df/dz). So you can't exactly take the gradient of a plane, because a plane in general is not a function. However, a plane can be expressed as a two-variable function, and you can take the gradient of that. A plane is the set of linear combinations of two vectors, in this case (0,0,1) and (1,2,0), which you would write as:
f:R^2 -> R^3, f(u,v) = u*(0,0,1) + v*(1,2,0). To find the vectors multiplying u and v, just choose three of those four points such that the three you choose are not colinear, and find a vector from the first to the second and from the first to the third. Since you now have your plane expressed as a function, you can take the gradient.
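As a concrete sketch of that recipe applied to the question's four points ( plain Python; treating y as the height axis is my assumption, chosen because it matches the poster's "gradient of 2 in the x axis" reading ):

```python
def sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# three non-colinear points of the plane
p0, p1, p2 = (0, 0, 0), (0, 0, 1), (1, 2, 0)
u, v = sub(p1, p0), sub(p2, p0)
n = cross(u, v)              # plane normal, here (-2, 1, 0)

# with y = f(x, z), the slopes are read off the normal
slope_x = -n[0] / n[1]       # df/dx -> 2.0
slope_z = -n[2] / n[1]       # df/dz -> 0.0
```

The slopes of 2 along x and 0 along z match what the poster read off the points by eye.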
The return value below is described as a Gaussian falloff. I am not seeing e or powers of 2, so I am not sure how this is related to a Gaussian falloff, or whether it is the wrong kind of falloff for me to use to get a nice smooth deformation on my mesh:
Mathf.Clamp01 (Mathf.Pow (360.0, -Mathf.Pow (distance / inRadius, 2.5) - 0.01))
where Mathf.Clamp01 returns a value between 0 and 1.
inRadius is the size of the distortion and distance is determined by:
sqrMagnitude = (vertices[i] - position).sqrMagnitude;
// Early out if too far away
if (sqrMagnitude > sqrRadius)
continue;
distance = Mathf.Sqrt(sqrMagnitude);
vertices is a list of mesh vertices, and position is the point of mesh manipulation/deformation.
My question is two parts:
1) Is the above actually a Gaussian falloff? It is exponential, but there does not seem to be the crucial e or power of 2... (Update: I see how the graph seems to decrease smoothly in a Gaussian-like way. Perhaps this function is not the cause of problem 2 below.)
2) My mesh is not deforming smoothly enough - given the above parameters, would you recommend a different Gaussian falloff?
Don't know about meshes etc., but let's look at the math:
f = 360^(-0.1 - (d/r)^2.5) looks similar enough to a Gaussian function to make a "fall off".
I'll take the exponent apart to show a point:
f = 360^(-(d/r)^2.5) * 360^(-0.1) = 0.5551 * 360^(-(d/r)^2.5)
if d --> +inf then f --> 0
if d --> 0+ then f --> 0.5551
The exponent of 360 is always negative (assuming 'distance' and 'inRadius' are always positive) and grows in magnitude almost cubically (power of 2.5) with distance, so the function is "falling off", and doing it pretty fast.
Conclusion: the function is not Gaussian, because it behaves badly for negative input and probably for other reasons, but it does exhibit the "fall off" behavior you are looking for.
Changing r will change the speed of the fall-off. When d == r, f = (1/360) * 0.5551.
The function will never go above 0.5551 or below zero, so the "clipping" in the code is meaningless.
I don't see any specific reason for the constant 360 - changing it changes the slope a bit.
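For comparison with a textbook Gaussian falloff exp(-(d/r)^2), a small Python sketch ( clamp01 mirrors Mathf.Clamp01, and r stands in for inRadius ):

```python
import math

def clamp01(v):
    return max(0.0, min(1.0, v))

def shader_falloff(d, r):
    # the expression from the question, with 360 as the base
    return clamp01(360.0 ** (-(d / r) ** 2.5 - 0.1))

def gaussian_falloff(d, r):
    return math.exp(-(d / r) ** 2)

r = 1.0
print(shader_falloff(0.0, r))    # ~0.5551 : never reaches 1 at the center
print(gaussian_falloff(0.0, r))  # 1.0
print(shader_falloff(r, r))      # ~0.00154
```

If the goal is a bump that is exactly 1 at the center and fades smoothly, the exp form is the safer choice; the 360-based curve tops out at about 0.5551, which may be part of why the deformation looks muted.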
cheers!
I have an implicit scalar field defined in 2D; for every point in 2D I can compute an exact scalar value, but it's a somewhat complex computation.
I would like to draw an iso-line of that field, say the line of the '0' value. The function itself is continuous, but the '0' iso-line can have multiple continuous components, and it is not guaranteed that all of them are connected.
Calculating the value for each pixel is not an option, because that would take too much time - on the order of a few seconds - and this needs to be as close to real time as possible.
What I'm currently using is a recursive subdivision of space, which can be thought of as a kind of quad-tree. I take an initial, very coarse sampling of the space, and if I find a square which contains a transition from positive to negative values, I recursively divide it into 4 smaller squares and check again, stopping at the pixel level. The positive-negative transition is detected by sampling a square at its 4 corners.
This works fairly well, except when it doesn't. The iso-lines which are drawn sometimes get cut, because the transition detection fails for transitions which happen in a small area of an edge and don't cross a corner of a square.
Is there a better way to do iso-line drawing in this setting?
I've had a lot of success with the algorithms described here: http://web.archive.org/web/20140718130446/http://members.bellatlantic.net/~vze2vrva/thesis.html
which discuss adaptive contouring (similar to what you describe), and also some other issues with contour plotting in general.
There is no general way to guarantee finding all the contours of a function without looking at every pixel. There could be a very small closed contour, a region only about the size of a pixel where the function is positive, inside a region where the function is generally negative. Unless you sample finely enough to place a sample inside the positive region, there is no general way of knowing that it is there.
If your function is smooth enough, you may be able to guess where such small closed contours lie, because the modulus of the function gets small in a region surrounding them. The sampling can then be refined in those regions only.
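That refinement can be sketched as a variant of the quad-tree subdivision from the question: split a cell when its corner samples change sign, or when some corner value is small compared to how much the function could change across the cell. Plain Python; the Lipschitz-style bound lip is a caller-supplied assumption, not something the thesis prescribes:

```python
import math

def contour_cells(f, x0, y0, size, lip, min_size, out):
    corners = [f(x0, y0), f(x0 + size, y0),
               f(x0, y0 + size), f(x0 + size, y0 + size)]
    sign_change = min(corners) <= 0.0 <= max(corners)
    # refine also when |f| is small relative to the cell diagonal, so a
    # zero crossing cannot hide between two corners of one edge
    near_zero = min(abs(c) for c in corners) < lip * size * math.sqrt(2)
    if not (sign_change or near_zero):
        return                          # cell cannot contain the iso-line
    if size <= min_size:
        if sign_change:
            out.append((x0, y0, size))  # pixel-level cell crossing zero
        return
    half = size / 2
    for dx in (0.0, half):
        for dy in (0.0, half):
            contour_cells(f, x0 + dx, y0 + dy, half, lip, min_size, out)

# Example: signed distance to a circle of radius 0.5; it is 1-Lipschitz,
# so lip = 1.0 is a valid bound.
def f(x, y):
    return math.hypot(x, y) - 0.5

cells = []
contour_cells(f, -1.0, -1.0, 2.0, 1.0, 2.0 / 64, cells)
```

With an exact Lipschitz constant this will not miss crossings; in practice lip has to be estimated, and an underestimate reintroduces the cut-line problem, so it is a trade-off between robustness and extra sampling.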