Math function to find saturation point of a curve

Does anybody know an algorithm in C to find the saturation point of a saturation curve?
The curve can change its rate of increase sharply or smoothly, and it has noise in it, so it's not as simple as I thought.
I tried calculating the derivative of atan(delta_y/delta_x), but it doesn't work well for all the curves.

It appears you're trying to ascertain, numerically, when the gradient of a function, fitted to some data points from a chemistry experiment, is less than one. It also seems like your data is noisy and you want to determine when the gradient would be less than one if the noise wasn't there.
Firstly, let's forget about the noise. You do not want to do this:
atan((y(i)-y(i-1))/(x(i)-x(i-1)))*180/PI
There is no need to compute the angle of the gradient when the gradient itself is right there. Just compare (y(i)-y(i-1))/(x(i)-x(i-1)) to 1.
Secondly, if there is noise you can't trust derivatives computed like that. But to do better we really need to know more about your problem. There are infinitely many ways to interpret your data. Is there noise in the x values, or just in the y values? Do we expect this curve to have a characteristic shape, or can it do anything?
I'll make a guess: This is some kind of chemistry thing where the y values rapidly increase but then the rate of increase slows down, so that in the absence of noise we have y = A(1-exp(-B*x)) for some A and B. If that's the case then you can use a non-linear regression algorithm to fit such a curve to your points and then test when the gradient of the fitted curve is less than 1.
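For instance, here is a minimal sketch of that approach in Python, assuming SciPy's curve_fit is available and that the exponential model above actually describes your data (the sample data here is synthetic):

import numpy as np
from scipy.optimize import curve_fit

def model(x, A, B):
    # Saturating curve: y = A * (1 - exp(-B * x))
    return A * (1.0 - np.exp(-B * x))

# x, y stand in for your measured (noisy) data points
x = np.linspace(0.0, 10.0, 50)
y = 8.0 * (1.0 - np.exp(-0.7 * x)) + np.random.normal(0.0, 0.2, x.size)

(A, B), _ = curve_fit(model, x, y, p0=(1.0, 1.0))

# The gradient of the fitted curve is A*B*exp(-B*x); it drops below 1 at
# x = ln(A*B) / B (only meaningful when A*B > 1).
x_sat = np.log(A * B) / B
print("fitted A=%.3f B=%.3f, gradient < 1 for x > %.3f" % (A, B, x_sat))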
But without more information, your question will be hard to answer. If you really are unwilling to give more detail, I'd suggest a quick and dirty filtering of your data: at any time, estimate the true value of y using a weighted average of the previous y values, with weights that drop off exponentially the further back in time you go. That is, instead of using y[i], use z[i] where
z[i] = sum over j = 0 to i of w[i,j]*y[j] / sum over j = 0 to i of w[i,j]
where
w[i,j] = exp(A*(x[j]-x[i]))
and A is some number that you tune by hand until you get the results you want. Try this, plotting the z[i] as you tweak A, and see if it does what you want.
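A minimal sketch of that smoothing in Python (the constant A here is the hand-tuned decay rate from the formula above, and the sample data is made up):

import numpy as np

def exp_smooth(x, y, A):
    # z[i] = sum_j w[i,j]*y[j] / sum_j w[i,j], with w[i,j] = exp(A*(x[j]-x[i]))
    # and the sum running over j = 0..i only (past samples).
    z = np.empty(len(y), dtype=float)
    for i in range(len(y)):
        w = np.exp(A * (x[: i + 1] - x[i]))
        z[i] = np.sum(w * y[: i + 1]) / np.sum(w)
    return z

# Larger A means the weights die off faster, i.e. less smoothing.
x = np.linspace(0.0, 10.0, 200)
y = 8.0 * (1.0 - np.exp(-0.7 * x)) + np.random.normal(0.0, 0.3, x.size)
z = exp_smooth(x, y, A=2.0)   # plot z against y while tweaking A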

We can get the maxima or minima of a curve quite easily from the parameters of the function describing it, so I can't see why you are getting inconsistent results.
I think the problem might be in how you combine the noise curve with the original one, so make sure you mix these curves properly. There is nothing wrong with atan or any other math function you used; the problem is with your implementation, which you haven't shown here.

Related

What is the reason for using biases in neural networks?

It may be easy to see why, but I still don't understand why we use a bias in a neural network. The weights' values get changed during training, which is what makes the algorithm learn. So why use a bias at all?
Because of linear equations.
Bias is another learned parameter. A single neuron will compute w*x + b where w is your weight parameter and b your bias.
Perhaps this helps you: let's assume you are dealing with a 2D Euclidean space that you'd like to classify with two labels. You can do that by computing a linear function and then classifying everything above it with one label and everything below it with the other. If you did not use a bias, you could only change the slope of your function, and the line would always pass through (0, 0). The bias lets you define where that linear function intersects the y-axis, i.e. the point (0, b). For example, without a bias you could not separate data that is only separable by a line that does not pass through the origin.
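A tiny sketch of that point in Python (the data and the threshold at x = 1.5 are made up for illustration):

import numpy as np

# 1D points: class 0 for x < 1.5, class 1 for x >= 1.5.
# The separating boundary (x = 1.5) does not pass through the origin.
x = np.array([0.2, 0.8, 1.0, 2.0, 2.5, 3.0])
labels = (x >= 1.5).astype(int)

def neuron(x, w, b):
    # A single neuron: w*x + b, thresholded at 0.
    return (w * x + b > 0).astype(int)

# Without a bias (b = 0) the decision boundary is stuck at x = 0,
# so no choice of w classifies these points correctly.
print(neuron(x, w=1.0, b=0.0), labels)

# With a bias the boundary can sit at x = -b/w = 1.5.
print(neuron(x, w=1.0, b=-1.5), labels)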

computer vision: segmentation setup. Graph cut potentials

I have been trying to teach myself some simple computer vision algorithms and am trying to solve a problem where I have some noise corrupted image and all I am trying to do is separate the black background from the foreground which has some signal. Now, the background RGB channels are not all completely zero as they can have some noise. However, the human eye can easily discern the foreground from the background.
So, what I did was use the SLIC algorithm to break the image down into super pixels. The idea being that since the image is noise corrupted, doing statistics on the patches might result in better classification of background and foreground because of higher SNR.
After this, I get around 100 patches which should have similar profile and the result of SLIC seems reasonable. I have been reading about graph cuts (the Kolmogorov paper) and it seemed like something nice to try for the binary problem I have. So, I constructed a graph which is a first order MRF and I have edges between the immediate neighbours (4-connected graph).
Now, I was wondering what possible unary and binary terms I can use here to do my segmentation. For the unary term, I was thinking I could model it as a simple Gaussian, where the background has a zero-mean intensity and the foreground has some non-zero mean. However, I am struggling to figure out how to encode this. Should I just assume some noise variance and compute probabilities directly using patch statistics?
Similarly, for neighbouring patches I want to encourage them to take similar labels, but I am not sure what binary term I can design to reflect that. Just using the difference between the labels (1 or 0) seems weird...
Sorry for the long-winded question. Hoping someone can give some helpful hint on how to start.
You could build your CRF model over superpixels, such that a superpixel has a connection to another superpixel if it is a neighbour of it.
For your statistical model, Pixel-Wise Posteriors are simple and cheap to compute.
So, I suggest the following for the unary terms of the CRF:
Build foreground and background histograms over texture per pixel (assuming you have a mask, or a reasonable amount of marked foreground pixels (note: pixels, not superpixels)).
For each superpixel, make an independence assumption over the pixels within it, so that a superpixel's likelihood of being either foreground or background is the product of the per-pixel likelihoods (in practice, we sum logs). The individual likelihood terms come from the histograms that you generated.
Compute the posterior for foreground as the cumulative foreground likelihood described above divided by the sum of the cumulative likelihoods of both classes. Similarly for background.
The pairwise terms between superpixels can be as simple as the difference between the mean observed textures (pixel-wise) of each, passed through a kernel such as a radial basis function.
Alternatively, you could compute histograms over each superpixel's observed texture (again, pixel-wise) and compute the Bhattacharyya distance between each neighbouring pair of superpixels.
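A rough sketch of those terms in Python/NumPy, using grayscale intensity as a stand-in for the texture feature; the histogram bin count, the RBF bandwidth, and the assumption that SLIC labels and seed masks are already available are all just example choices:

import numpy as np

N_BINS = 32   # assumed histogram resolution over intensity

def normalized_hist(values, n_bins=N_BINS):
    h, _ = np.histogram(values, bins=n_bins, range=(0.0, 1.0))
    return (h + 1e-6) / (h.sum() + 1e-6 * n_bins)   # avoid empty bins

# gray: HxW image scaled to [0, 1]; segments: HxW superpixel labels from SLIC;
# fg_mask / bg_mask: boolean masks of marked foreground / background pixels.
def unary_terms(gray, segments, fg_mask, bg_mask):
    fg_hist = normalized_hist(gray[fg_mask])
    bg_hist = normalized_hist(gray[bg_mask])
    bins = np.minimum((gray * N_BINS).astype(int), N_BINS - 1)
    unaries = {}
    for s in np.unique(segments):
        b = bins[segments == s]
        # Independence assumption: sum the per-pixel log-likelihoods.
        log_fg = np.log(fg_hist[b]).sum()
        log_bg = np.log(bg_hist[b]).sum()
        # Posterior for foreground, computed stably.
        m = max(log_fg, log_bg)
        p_fg = np.exp(log_fg - m) / (np.exp(log_fg - m) + np.exp(log_bg - m))
        unaries[s] = (-np.log(p_fg + 1e-12), -np.log(1.0 - p_fg + 1e-12))
    return unaries

# Pairwise term between neighbouring superpixels a and b: an RBF kernel
# on the difference of their mean intensities...
def pairwise_rbf(gray, segments, a, b, sigma=0.1):
    ma = gray[segments == a].mean()
    mb = gray[segments == b].mean()
    return np.exp(-((ma - mb) ** 2) / (2.0 * sigma ** 2))

# ...or the Bhattacharyya distance between their intensity histograms.
def pairwise_bhattacharyya(gray, segments, a, b):
    ha = normalized_hist(gray[segments == a])
    hb = normalized_hist(gray[segments == b])
    return -np.log(np.sum(np.sqrt(ha * hb)) + 1e-12)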

How to smoothly interpolate between points in two-dimensional space?

Let's say I have a number of points, each defined by an X and Y coordinate in a two-dimensional cartesian coordinate system. The X coordinate of every point is greater than the one of its predecessor, so there can't be any loops.
How can I draw a smooth line through these points? The result should look something like a sine wave, but with varying amplitude and wavelength. It's absolutely fine if it is simplified or approximated as long as it allows me to calculate the Y coordinate of the interpolated points without using any library functions for lines or splines. Language doesn't matter, I'm interested in the algorithm, not the implementation. For full disclosure, I plan to implement it in JavaScript.
I'd like to stay away from complicated math like Bézier splines or something with control points. I feel there must be a simple solution that maybe works with the distance to the points or something like that.
Any idea is appreciated.
Sounds like you need an interpolating polynomial. There are a number of ways you can fit it. Try reading this
http://en.wikipedia.org/wiki/Polynomial_interpolation#Constructing_the_interpolation_polynomial
If you have a large number of points, you may want to use an approximation instead; otherwise you could suffer from overfitting and a poor representation of your data between points. In that case, you could use a least-squares polynomial approximation. It depends on the degree of accuracy that you need.
http://en.wikipedia.org/wiki/Least_squares#Linear_least_squares
In particular, if your data is sinusoidal, you can approximate it using trigonometric basis functions (sine or cosine functions of different integer frequencies) instead of regular powers of x.
Alternatively, you can interpolate using splines in a non-parametric way that does not involve control points:
http://en.wikipedia.org/wiki/Spline_interpolation
Using splines will prevent the wild oscillations you can get with basic high-degree polynomial interpolation.
As with all approximation problems, it is hard to give a definitive answer without seeing the data (and the amount of it). Ultimately, though, if you have a lot of data, basic polynomial interpolation is not your friend: if you have 1000 points to interpolate, you need a 999-degree polynomial.
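A quick sketch of the least-squares route in Python (the sample points and the degree-5 choice are arbitrary; pick the degree based on how much detail you need):

import numpy as np

# (x, y) sample points with strictly increasing x, as in the question.
x = np.array([0.0, 1.0, 2.5, 3.0, 4.2, 5.0, 6.5, 8.0])
y = np.array([0.0, 1.2, -0.3, 0.8, 1.9, 0.4, -1.1, 0.2])

# Least-squares fit of a low-degree polynomial (an approximation rather
# than an exact interpolation), then evaluate it wherever you like.
coeffs = np.polyfit(x, y, deg=5)
xs = np.linspace(x[0], x[-1], 200)
ys = np.polyval(coeffs, xs)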
You cannot avoid "complicated" math here, and it is not that complicated.
Cubic splines are a good solution to your problem. For a similar task I found this paper with a short explanation and a matrix which I used for my computations.
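Since the question rules out library spline functions, here is a hand-rolled natural cubic spline sketch in Python (it solves the standard tridiagonal system for the second derivatives; the dense solve is for brevity, not efficiency):

import numpy as np

def natural_cubic_spline(x, y):
    # Returns a function evaluating the natural cubic spline through the
    # knots (x, y); x must be strictly increasing.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)

    # Solve for the second derivatives M[0..n] with M[0] = M[n] = 0.
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def evaluate(t):
        t = np.atleast_1d(np.asarray(t, float))
        i = np.clip(np.searchsorted(x, t) - 1, 0, n - 1)
        a, b, hi = x[i], x[i + 1], h[i]
        return ((M[i] * (b - t) ** 3 + M[i + 1] * (t - a) ** 3) / (6.0 * hi)
                + (y[i] / hi - M[i] * hi / 6.0) * (b - t)
                + (y[i + 1] / hi - M[i + 1] * hi / 6.0) * (t - a))

    return evaluate

# Usage: spline = natural_cubic_spline(x, y); ys = spline(xs)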
You can try using approximation methods. "Least squares" and its variants are among the simplest and are easy to implement.

Mathematical indicator for the "flattness" of a curve?

I am currently working on a computer science project where I have to evaluate charts. The charts are simple lines in an x-y coordinate system, given by CSV files. The flatter the curve, the better for me. Now I am looking for an indicator of the "flatness" of these curves.
My first idea was to calculate the first derivative of the function and then calculate the average between two points. If this value is near 0, then the function is pretty flat.
Is that a good idea? Is there any better solution?
Edit:
Here is a picture as an example. Which curve is flatter between x1 and x2?
You might consider using the standard deviation as a measure of distance from a perfectly flat line. First do a simple linear regression to find the best-fitting line, then compute the standard deviation of the residuals.
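For example, a minimal version in Python (a least-squares line fit followed by the standard deviation of the residuals; the two sine curves are just sample data):

import numpy as np

def flatness(x, y):
    # Standard deviation of the residuals around the least-squares line:
    # smaller means flatter (relative to the curve's own best-fit line).
    slope, intercept = np.polyfit(x, y, deg=1)
    residuals = y - (slope * x + intercept)
    return np.std(residuals)

# Compare two curves over the same x range: the smaller value is "flatter".
x = np.linspace(0.0, 10.0, 100)
print(flatness(x, np.sin(x)), flatness(x, 0.1 * np.sin(x)))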
If the values are all positive, you could try calculating the integral, i.e. the area below the line.
The lower the integral, the better, just like you need.
If you also expect negative values, you could do basically the same thing after flipping the sign of the negative parts (i.e. integrating the absolute value).
If the quickness of change is important to the answer (that is, many small zig-zags are considered flatter than a gradual rise), the slope of the autocorrelation function might be interesting.
Compare max(abs(d)) where d is the (numerical) derivative of the curve. That'll give you how steep the curve is compared to the flat curve (y = CONSTANT), but won't tell you how far away from the flat curve you'll get.
The peakedness of a statistical distribution is called "kurtosis".
Kurtosis = E[(x - mu)^4] / (E[(x - mu)^2])^2 - 3
where mu is the average value of x in the population
and E[y] is the expected value of y.
Since this is usually used with probability functions, I would suggest you divide all values in the curve by the area under it.
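A sketch of that suggestion in Python, treating the curve (normalized by its area) as a density over x; this assumes the y values are non-negative and the x values are uniformly spaced:

import numpy as np

def curve_kurtosis(x, y):
    # Excess kurtosis of x under the "density" y(x) / area.
    dx = x[1] - x[0]
    p = y / (np.sum(y) * dx)              # normalize so the area is 1
    mu = np.sum(p * x) * dx
    m2 = np.sum(p * (x - mu) ** 2) * dx
    m4 = np.sum(p * (x - mu) ** 4) * dx
    return m4 / m2 ** 2 - 3.0

x = np.linspace(-5.0, 5.0, 1001)
print(curve_kurtosis(x, np.exp(-x ** 2 / 2.0)))   # close to 0 for a Gaussian bump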
1. First apply linear regression to find the best-fitting line.
2. Then measure the sum of squared residuals.

Point Sequence Interpolation

Given an arbitrary sequence of points in space, how would you produce a smooth continuous interpolation between them?
2D and 3D solutions are welcome. Solutions that produce a list of points at arbitrary granularity and solutions that produce control points for bezier curves are also appreciated.
Also, it would be cool to see an iterative solution that could approximate early sections of the curve as it received the points, so you could draw with it.
The Catmull-Rom spline is guaranteed to pass through all the control points. I find this to be handier than trying to adjust intermediate control points for other types of splines.
This PDF by Christopher Twigg has a nice brief introduction to the mathematics of the spline. The best summary sentence is:
Catmull-Rom splines have C1 continuity, local control, and interpolation, but do not lie within the convex hull of their control points.
Said another way, if the points indicate a sharp bend to the right, the spline will bank left before turning to the right (there's an example picture in that document). The tightness of those turns is controllable, in this case using his tau parameter in the example matrix.
Here is another example with some downloadable DirectX code.
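As a rough illustration, here is a sketch in Python of evaluating a Catmull-Rom segment with a tension parameter, following the matrix form referenced above (the endpoint-duplication trick used for the chain is one common choice, not the only one):

import numpy as np

def catmull_rom_point(p0, p1, p2, p3, t, tau=0.5):
    # Evaluate the segment between p1 and p2 at t in [0, 1].
    # tau is the tension; tau = 0.5 gives the classic Catmull-Rom spline.
    t2, t3 = t * t, t * t * t
    return (p1
            + tau * (p2 - p0) * t
            + (2 * tau * p0 + (tau - 3) * p1 + (3 - 2 * tau) * p2 - tau * p3) * t2
            + (-tau * p0 + (2 - tau) * p1 + (tau - 2) * p2 + tau * p3) * t3)

def catmull_rom_chain(points, samples_per_segment=20, tau=0.5):
    # Interpolate through a list of 2D/3D points; the endpoints are
    # duplicated so the curve passes through the first and last point too.
    pts = np.asarray(points, dtype=float)
    padded = np.vstack([pts[0], pts, pts[-1]])
    out = []
    for i in range(len(pts) - 1):
        p0, p1, p2, p3 = padded[i], padded[i + 1], padded[i + 2], padded[i + 3]
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            out.append(catmull_rom_point(p0, p1, p2, p3, t, tau))
    out.append(pts[-1])
    return np.array(out)

# curve = catmull_rom_chain([(0, 0), (1, 2), (3, 3), (4, 0)])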
One way is the Lagrange polynomial, which is a method for producing a polynomial that goes through all the given data points.
During my first year at university, I wrote a little tool to do this in 2D, and you can find it on this page; it is called Lagrange solver. Wikipedia's page also has a sample implementation.
How it works is this: you have a polynomial p(x) of degree n-1, where n is the number of points you have. It has the form a_(n-1) x^(n-1) + a_(n-2) x^(n-2) + ... + a_0, where _ is subscript and ^ is power. You then turn this into a set of simultaneous equations:
p(x_1) = y_1
p(x_2) = y_2
...
p(x_n) = y_n
You convert the above into an augmented matrix and solve for the coefficients a_0 ... a_(n-1). Then you have a polynomial which goes through all the points, and you can interpolate between them.
Note, however, that this may not suit your purpose, as it offers no way to adjust the curvature etc.; you are stuck with a single solution that cannot be changed.
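A compact version of that procedure in Python, building the Vandermonde system and solving for the coefficients (numerically fragile for many points, as other answers here note; the four sample points are made up):

import numpy as np

def interpolating_polynomial(x, y):
    # Solve the system p(x_i) = y_i for the coefficients of the unique
    # degree-(n-1) polynomial through the n points.
    x, y = np.asarray(x, float), np.asarray(y, float)
    V = np.vander(x, increasing=True)   # columns: x^0, x^1, ..., x^(n-1)
    return np.linalg.solve(V, y)        # coefficients a_0 ... a_(n-1)

def eval_poly(coeffs, t):
    return sum(a * t ** k for k, a in enumerate(coeffs))

coeffs = interpolating_polynomial([0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 0.0, 5.0])
print(eval_poly(coeffs, 1.5))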
You should take a look at B-splines. Their advantage over Bezier curves is that each part is only dependent on local points. So moving a point has no effect on parts of the curve that are far away, where "far away" is determined by a parameter of the spline.
The problem with the Lagrange polynomial is that adding a point can have extreme effects on seemingly arbitrary parts of the curve; there's no "localness" like described above.
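For what it's worth, here is a sketch of evaluating a B-spline with de Boor's algorithm in Python (the clamped knot vector and the cubic degree are just example choices), which makes the local-control property easy to experiment with:

import numpy as np

def de_boor(x, knots, ctrl, p):
    # Evaluate a degree-p B-spline at parameter x; knots must be a clamped
    # knot vector of length len(ctrl) + p + 1.
    k = np.searchsorted(knots, x, side='right') - 1   # knot span containing x
    k = min(max(k, p), len(ctrl) - 1)

    d = [np.asarray(ctrl[j + k - p], dtype=float) for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            denom = knots[j + 1 + k - r] - knots[j + k - p]
            alpha = (x - knots[j + k - p]) / denom
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# A clamped cubic B-spline over four 2D control points: moving one control
# point only changes the curve near it.
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 0.0]])
p = 3
knots = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)   # len(ctrl) + p + 1
print(de_boor(0.5, knots, ctrl, p))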
Have you looked at the Unix spline command? Can that be coerced into doing what you want?
There are several algorithms for interpolating (and extrapolating) between an arbitrary (but finite) set of points. You should check out Numerical Recipes; it also includes C++ implementations of those algorithms.
Unfortunately, the Lagrange or other forms of polynomial interpolation will not work on an arbitrary set of points. They only work on a set where, in one dimension (e.g. x), the points are strictly ordered: x_i < x_(i+1).
For an arbitrary set of points, e.g. an aeroplane flight path where each point is a (longitude, latitude) pair, you will be better off simply modelling the aeroplane's journey with its current longitude, latitude and velocity. By adjusting the rate at which the aeroplane can turn (its angular velocity) depending on how close it is to the next waypoint, you can achieve a smooth curve.
The resulting curve would not be mathematically significant nor give you Bézier control points. However, the algorithm would be computationally simple regardless of the number of waypoints and could produce an interpolated list of points at arbitrary granularity. It would also not require you to provide the complete set of points up front; you could simply add waypoints to the end of the set as required.
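A toy version of that idea in Python, in 2D rather than longitude/latitude, with made-up speed and turn-rate limits:

import numpy as np

def steer_through(waypoints, speed=0.1, max_turn=0.2, reach_dist=0.3):
    # Trace a smooth path through waypoints by limiting the heading change
    # per step (a crude turn-rate limit). Returns the sampled path.
    waypoints = [np.asarray(w, float) for w in waypoints]
    pos = waypoints[0].copy()
    heading = 0.0
    path = [pos.copy()]
    for target in waypoints[1:]:
        for _ in range(10000):                       # safety cap per leg
            to_target = target - pos
            if np.hypot(*to_target) < reach_dist:    # close enough: next waypoint
                break
            desired = np.arctan2(to_target[1], to_target[0])
            # Smallest signed angle from current heading to desired heading.
            diff = (desired - heading + np.pi) % (2 * np.pi) - np.pi
            heading += np.clip(diff, -max_turn, max_turn)
            pos = pos + speed * np.array([np.cos(heading), np.sin(heading)])
            path.append(pos.copy())
    return np.array(path)

# path = steer_through([(0, 0), (5, 1), (6, 5), (2, 6)])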
I ran into the same problem and implemented it with some friends the other day. I'd like to share the example project on GitHub.
https://github.com/johnjohndoe/PathInterpolation
Feel free to fork it.
Google "orthogonal regression".
Whereas least-squares techniques try to minimize vertical distance between the fit line and each f(x), orthogonal regression minimizes the perpendicular distances.
Addendum
In the presence of noisy data, the venerable RANSAC algorithm is worth checking out too.
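For instance, a minimal orthogonal (total least squares) line fit in Python via the SVD; the sample points are made up:

import numpy as np

def orthogonal_line_fit(x, y):
    # Fit a line by minimizing perpendicular distances (total least squares).
    # Returns a point on the line and its unit direction vector.
    pts = np.column_stack([x, y]).astype(float)
    centroid = pts.mean(axis=0)
    # The dominant right-singular vector of the centered data gives the
    # direction of the best-fit line.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])
point, direction = orthogonal_line_fit(x, y)
print(point, direction)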
In the 3D graphics world, NURBS are popular. Further info is easily googled.
