I found the formulas on Wikipedia for converting from RGB to grayscale. The derivation of a linear shade of gray is understandable, but the formulas for a gamma-compressed shade of gray are not clear.
What do 1.055 and 1/2.4 mean (I think 2.4 is the gray gamma, but why 2.4 and not 2.2?), and 0.055 and the other coefficients? I could not find any information about this.
The reason given in “A Standard Default Color Space for the Internet” is:
The effect of the above equations is to closely fit a straightforward gamma 2.2 curve with a slight offset to allow for invertibility in integer math.
My interpretation is that the formula is easier to calculate when using integer variables (i.e. not floating point), and this was important at the time.
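For reference, here is a small Python sketch of the encode/decode pair those constants define (the piecewise form, with the 12.92 linear segment below a small threshold, is the standard sRGB transfer function):

```python
def linear_to_srgb(c):
    """Encode a linear-light channel value (0..1) as sRGB (0..1).

    The linear segment below 0.0031308 avoids an infinite slope at
    zero; 1.055 and 0.055 make the two pieces meet continuously, and
    the 1/2.4 exponent combined with the offset approximates an
    overall gamma-2.2 curve.
    """
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055


def srgb_to_linear(c):
    """Inverse transform: decode an sRGB value back to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4
```

The offset is what makes the curve invertible without loss near zero: a pure power curve would have infinite slope there, so tiny integer codes would all map to the same linear value.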
Related
Recently I have seen some pictures that could be drawn by some mathematical equations like the Batman Logo and the Heart.
Is there a specific way to find the equations which draw a desired picture? (e.g. I want to draw the letter S with some mathematical equations).
Thanks.
P.S. I guess it is an optimization problem: first get some samples from the border of the desired picture, then find a function that has the minimum difference from those samples.
Assuming your picture is in black and white, you may want to have a look at this:
http://www.mathworks.co.uk/help/stats/nlinfit.html
You can get the points and perform regression on them. Linear regression will get you a line. Nonlinear will get you something more accurate.
If the picture is more complex, then you will have to extract some features and it gets more complicated.
You're right. The more samples you have from the outline of your picture, the closer you can get to its function by using numerical analysis for approximation (e.g., you can find a polynomial containing all your samples).
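As a minimal sketch of that idea (the sample coordinates below are made up, and this assumes the border can be written as y values over x), NumPy can fit a polynomial that passes through every sample:

```python
import numpy as np

# Hypothetical border samples of a shape, as x and y coordinates.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.0, 1.0, 0.5, 1.5, 0.0])

# A polynomial of degree len(xs) - 1 interpolates all the samples.
coeffs = np.polyfit(xs, ys, deg=len(xs) - 1)
poly = np.poly1d(coeffs)
```

For a closed shape like a letter S you would need a parametric curve (x(t), y(t)) rather than a single y(x), but the fitting step is the same, applied once per coordinate.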
I am currently working on a computer science project where I have to evaluate charts. The charts are simple lines in an x-y coordinate system, given by CSV files. The flatter the curve, the better for me. Now I am looking for an indicator of the "flatness" of these curves.
My first idea was to calculate the first derivative of the function and then calculate the average between two points. If this value is near 0, then the function is pretty flat.
Is that a good idea? Is there any better solution?
Edit:
Here is a picture as an example. Which curve is flatter between x1 and x2?
You might consider using the standard deviation as a measure of distance from a perfectly flat line. First do a simple linear regression to find the ideally fitting flat line, then compute the standard deviation of the residues.
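A minimal Python sketch of that recipe (the function name and data arrays are made up):

```python
import numpy as np

def flatness(xs, ys):
    """Flatness score: the standard deviation of the residuals after
    fitting a straight line by least squares. Lower means flatter,
    and a perfectly straight line scores (near) zero regardless of
    its slope."""
    slope, intercept = np.polyfit(xs, ys, deg=1)
    residuals = ys - (slope * xs + intercept)
    return residuals.std()
```

Note that this measures deviation from the best-fit line, not from horizontal; if a steep but straight line should not count as flat, compare against a constant (the mean of ys) instead.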
If the values are all positive, you could try calculating the integral - basically the area below the line.
The lower the integral, the better, just like you need it.
If you also expect negative values, you could do basically the same after changing the sign of those values.
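A sketch of that idea with a hand-rolled trapezoidal rule (the function name is made up; the points need not be evenly spaced):

```python
def area_under_curve(xs, ys):
    """Trapezoidal rule: sum the average height of each adjacent pair
    of points times the width between them."""
    total = 0.0
    for i in range(1, len(xs)):
        total += 0.5 * (ys[i] + ys[i - 1]) * (xs[i] - xs[i - 1])
    return total
```

For the negative-value case mentioned above, pass `[abs(y) for y in ys]` so that dips below zero also count against flatness.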
If the quickness of change is important to the answer (that is, many small zig-zags are considered flatter than a gradual rise), the slope of the autocorrelation function might be interesting.
Compare max(abs(d)) where d is the (numerical) derivative of the curve. That'll give you how steep the curve is compared to the flat curve (y = CONSTANT), but won't tell you how far away from the flat curve you'll get.
The peakedness of a statistical distribution is called "kurtosis".
Kurtosis = E[(x - mu)^4] / (E[(x - mu)^2])^2 - 3
mu = average value of x in the population
E[y] = the expected value of y
Since this is usually used with probability functions, I would suggest you divide all values in the curve by the area under it.
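A direct Python translation of that formula, using raw population moments (the normalization by the area under the curve, as suggested above, is left to the caller):

```python
def kurtosis(values):
    """Excess kurtosis: E[(x - mu)^4] / (E[(x - mu)^2])^2 - 3."""
    n = len(values)
    mu = sum(values) / n
    m2 = sum((x - mu) ** 2 for x in values) / n  # variance
    m4 = sum((x - mu) ** 4 for x in values) / n  # fourth central moment
    return m4 / m2 ** 2 - 3
```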
1. First apply linear regression to find the ideally fitting flat line.
2. Measure the sum of squared residuals.
Does anybody know an algorithm in C to find the saturation point in a saturation curve?
The curve could change its rate of increase sharply or smoothly, and it has noise in it, so it's not as simple as I thought.
I tried calculating the angle atan(delta_y/delta_x) of the derivative, but it doesn't work well for all the curves.
It appears you're trying to ascertain, numerically, when the gradient of a function, fitted to some data points from a chemistry experiment, is less than one. It also seems like your data is noisy and you want to determine when the gradient would be less than one if the noise wasn't there.
Firstly, let's forget about the noise. You do not want to do this:
atan((y(i)-y(i-1))/(x(i)-x(i-1)))*180/PI
There is no need to compute the angle of the gradient when the gradient itself is right there. Just compare (y(i)-y(i-1))/(x(i)-x(i-1)) to 1.
Secondly, if there is noise, you can't trust derivatives computed like that. But to do better we really need to know more about your problem; there are infinitely many ways to interpret your data. Is there noise in the x values, or just in the y values? Do we expect this curve to have a characteristic shape, or can it do anything?
I'll make a guess: This is some kind of chemistry thing where the y values rapidly increase but then the rate of increase slows down, so that in the absence of noise we have y = A(1-exp(-B*x)) for some A and B. If that's the case then you can use a non-linear regression algorithm to fit such a curve to your points and then test when the gradient of the fitted curve is less than 1.
But without more data, your question will be hard to answer. If you really are unwilling to give more information, I'd suggest a quick and dirty filtering of your data. E.g. at any time, estimate the true value of y by using a weighted average of the previous y values, with weights that drop off exponentially the further back in time you go. That is, instead of using y[i], use z[i], where
z[i] = sum over j = 0 to i of w[i,j]*y[j] / sum over j = 0 to i of w[i,j]
where
w[i,j] = exp(A*(x[j]-x[i]))
and A is some number that you tune by hand until you get the results you want. Try this, plotting the z[i] as you tweak A, and see if it does what you want.
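The scheme above can be sketched directly in Python (function and parameter names are made up; `a` plays the role of A, and since x[j] <= x[i] for past samples, the exponent is never positive):

```python
import math

def smooth(xs, ys, a):
    """Exponentially weighted average of past y values:
    z[i] = sum_j w[i,j]*y[j] / sum_j w[i,j],  w[i,j] = exp(a*(x[j]-x[i])).

    Weights decay the further back x[j] lies behind x[i]; a larger `a`
    means a shorter memory."""
    zs = []
    for i in range(len(xs)):
        ws = [math.exp(a * (xs[j] - xs[i])) for j in range(i + 1)]
        zs.append(sum(w * y for w, y in zip(ws, ys)) / sum(ws))
    return zs
```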
We can get the maxima or minima of a curve quite easily from the function parameters of the curve, so I can't see why you are getting inconsistent results.
I think the problem might be in how you combine the noise curve with the original, so make sure that you mix these curves in a proper way. There is nothing wrong with atan or any other math function you used; the problem is with your implementation, which you haven't specified here.
I have a function that is non-differentiable at a point. When I use a spline (Bezierspline in degrafa) for interpolation, the interpolation at this point does not work as expected (at this point my function has a kink). The spline draws some kind of loop around this point. I think this happens because the spline needs the derivative of the function, which is not unique at this point.
Is that right? What would you advise me to do in this case?
Thanks in advance
Sebastian
You can't calculate the gradient of a "kink" (as you so eloquently put it). If you really need a gradient at such a point (x), I'd just average the gradient at (x-d) and (x+d) where d is a small enough delta. It's as mathematically valid as any other single answer you're likely to get.
For example, the function:
f(x) = |x|
will produce:
\   |   /
 \  |  /
  \ | /
   \|/
----+----
where there is no gradient at the origin (0,0). However, averaging the gradients at -0.0001 (gradient = -1) and +0.0001 (gradient = +1) will give you a gradient of zero (flat line).
This should give a half-decent answer for other equations that produce non-symmetrical gradients at (x-d) and (x+d) as well.
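That averaging trick is just the central difference in disguise; a tiny Python sketch (function and parameter names are made up):

```python
def avg_gradient(f, x, d=1e-4):
    """Average of the one-sided gradients just left and right of x.
    Algebraically this equals the central difference
    (f(x + d) - f(x - d)) / (2 * d)."""
    left = (f(x) - f(x - d)) / d
    right = (f(x + d) - f(x)) / d
    return 0.5 * (left + right)
```

At the kink of f(x) = |x| this returns the average of -1 and +1, i.e. a flat gradient of zero, as described above.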
What I would do, since it's licensed under MIT, is to modify the source to allow an option for the Bezierspline to use that +/- delta method to calculate gradients at the non-continuous points. Maybe even push back the source changes to the developers if you think it's a worthwhile addition.
That sounds right. It's been a while since I've looked at splines, but I'm pretty sure if the function is not continuous, your spline should also be discontinuous at the same points. Although I have seen interpolations that give an approximate curve at such a point... I'll check my text-books if no one else comes up with a better answer.
But a loop is a pretty good attempt. Kudos to your function.
Given an arbitrary sequence of points in space, how would you produce a smooth continuous interpolation between them?
2D and 3D solutions are welcome. Solutions that produce a list of points at arbitrary granularity and solutions that produce control points for bezier curves are also appreciated.
Also, it would be cool to see an iterative solution that could approximate early sections of the curve as it received the points, so you could draw with it.
The Catmull-Rom spline is guaranteed to pass through all the control points. I find this to be handier than trying to adjust intermediate control points for other types of splines.
This PDF by Christopher Twigg has a nice brief introduction to the mathematics of the spline. The best summary sentence is:
Catmull-Rom splines have C1 continuity, local control, and interpolation, but do not lie within the convex hull of their control points.
Said another way, if the points indicate a sharp bend to the right, the spline will bank left before turning to the right (there's an example picture in that document). The tightness of those turns is controllable, in this case using his tau parameter in the example matrix.
Here is another example with some downloadable DirectX code.
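For illustration, here is a small Python sketch of evaluating one Catmull-Rom segment, written in the tension-parameterized (tau) polynomial form; tau = 0.5 gives the classic Catmull-Rom spline:

```python
def catmull_rom(p0, p1, p2, p3, t, tau=0.5):
    """Evaluate one Catmull-Rom segment between p1 and p2 at t in [0, 1].
    Points are tuples of coordinates (works for 2D or 3D); p0 and p3
    are the neighbouring control points that shape the tangents, and
    tau controls the tightness of the turns."""
    def blend(a, b, c, d):
        return (
            b
            + t * (-tau * a + tau * c)
            + t * t * (2 * tau * a + (tau - 3) * b + (3 - 2 * tau) * c - tau * d)
            + t * t * t * (-tau * a + (2 - tau) * b + (tau - 2) * c + tau * d)
        )
    return tuple(blend(a, b, c, d) for a, b, c, d in zip(p0, p1, p2, p3))
```

Because the segment starts exactly at p1 (t = 0) and ends exactly at p2 (t = 1), chaining segments over consecutive point quadruples passes through every control point.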
One way is the Lagrange polynomial, a method for producing a polynomial which goes through all the given data points.
During my first year at university, I wrote a little tool to do this in 2D, and you can find it on this page, it is called Lagrange solver. Wikipedia's page also has a sample implementation.
How it works is this: you have a polynomial p(x) of degree n-1, where n is the number of points you have. It has the form a_(n-1) x^(n-1) + a_(n-2) x^(n-2) + ... + a_0, where _ is subscript and ^ is power. You then turn this into a set of simultaneous equations:
p(x_1) = y_1
p(x_2) = y_2
...
p(x_n) = y_n
You convert the above into an augmented matrix and solve for the coefficients a_0 ... a_(n-1). Then you have a polynomial which goes through all the points, and you can interpolate between them.
Note, however, that this may not suit your purpose, as it offers no way to adjust the curvature etc. - you are stuck with a single solution that cannot be changed.
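The simultaneous-equations step described above can be sketched in a few lines of NumPy: the matrix of powers of the x values is exactly a Vandermonde matrix, and solving it yields the coefficients (the data points here are made up):

```python
import numpy as np

# Hypothetical data points to interpolate.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 3.0, 2.0, 5.0])

# Each row of the Vandermonde matrix is [x_i^(n-1), ..., x_i, 1],
# so solving V a = y enforces p(x_i) = y_i for every point.
vandermonde = np.vander(xs)
coeffs = np.linalg.solve(vandermonde, ys)
poly = np.poly1d(coeffs)
```

In practice the Vandermonde system becomes ill-conditioned as n grows, which is one more reason this approach suits only small point sets.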
You should take a look at B-splines. Their advantage over Bezier curves is that each part is only dependent on local points. So moving a point has no effect on parts of the curve that are far away, where "far away" is determined by a parameter of the spline.
The problem with the Lagrange polynomial is that adding a point can have extreme effects on seemingly arbitrary parts of the curve; there's no "localness" as described above.
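The locality property is easy to see in code: for a uniform cubic B-spline, each evaluated point depends on only four nearby control points, via the standard uniform cubic blending polynomials (the function name below is made up):

```python
def bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].
    Only the four nearest control points contribute, so moving any
    other control point cannot affect this part of the curve."""
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    b3 = t**3 / 6
    # The basis weights sum to 1 for every t (affine invariance).
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))
```

Note the trade-off: unlike Catmull-Rom, this curve approximates rather than interpolates its control points.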
Have you looked at the Unix spline command? Can that be coerced into doing what you want?
There are several algorithms for interpolating (and extrapolating) between an arbitrary (but finite) set of points. You should check out Numerical Recipes; it also includes C++ implementations of those algorithms.
Unfortunately, Lagrange or other forms of polynomial interpolation will not work on an arbitrary set of points. They only work on sets that are ordered in one dimension, e.g. in x:
x_i < x_(i+1)
For an arbitrary set of points, e.g. an aeroplane flight path where each point is a (longitude, latitude) pair, you will be better off simply modelling the aeroplane's journey with a current longitude, latitude, and velocity. By adjusting the rate at which the aeroplane can turn (its angular velocity) depending on how close it is to the next waypoint, you can achieve a smooth curve.
The resulting curve would not be mathematically significant, nor would it give you bezier control points. However, the algorithm would be computationally simple regardless of the number of waypoints and could produce an interpolated list of points at arbitrary granularity. It would also not require you to provide the complete set of points up front; you could simply add waypoints to the end of the set as required.
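One simulation step of that waypoint-chasing model might look like this in Python (all names and parameters are made up; the turn-rate clamp is what smooths the path):

```python
import math

def steer_towards(x, y, heading, wx, wy, max_turn, speed, dt):
    """Advance one step: turn towards the waypoint (wx, wy) by at most
    max_turn radians, then move forward at the given speed.
    Returns the new (x, y, heading)."""
    desired = math.atan2(wy - y, wx - x)
    # Signed angular difference, wrapped into [-pi, pi].
    diff = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-max_turn, min(max_turn, diff))
    x += speed * dt * math.cos(heading)
    y += speed * dt * math.sin(heading)
    return x, y, heading
```

Calling this in a loop, and advancing to the next waypoint once the current one is close enough, traces out the smooth interpolated path described above; making max_turn grow as the waypoint nears tightens the corners.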
I ran into the same problem and implemented it with some friends the other day. I'd like to share the example project on GitHub.
https://github.com/johnjohndoe/PathInterpolation
Feel free to fork it.
Google "orthogonal regression".
Whereas least-squares techniques try to minimize the vertical distance between the fit line and each data point, orthogonal regression minimizes the perpendicular distances.
Addendum
In the presence of noisy data, the venerable RANSAC algorithm is worth checking out too.
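A compact way to do orthogonal regression for a line is via the SVD: the best-fit direction through the centred points is their first principal component (the function name below is made up):

```python
import numpy as np

def orthogonal_fit(points):
    """Total least squares line fit: returns (centroid, direction),
    where the line through the centroid along the direction vector
    minimizes the sum of squared perpendicular distances."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # First right singular vector of the centred data = line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]
```

Unlike ordinary least squares, this treats noise in x and y symmetrically, and it works unchanged for 3D points.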
In the 3D graphics world, NURBS are popular. Further info is easily googled.