I'm trying to do a precise calculation with floats like
let pi = double (22/7)
printfn "%f" (cos(2.00*pi*1.00/2.00))
// output: -0.989992
On a calculator I get -1, so it rounds up and down correctly. However, when I do this in F# I get -0.989992, which is close to -1. How do I get an output of -1, so that it rounds up and down correctly?
I tried to read about the topic, and it seems like I need to import a module. Can this be true?
Your calculation is off not because of rounding error, but because 22/7 is a very loose approximation of the value of π.
22/7 = 3.142857142857...
π = 3.14159265358979...
22/7 - π = 0.00126448927...
Wolfram Alpha uses a much better approximation of π than 22/7, which is why your calculation shows different results from Wolfram Alpha.
Instead of doing let pi = double (22/7), you should just use System.Math.PI (e.g., let pi = System.Math.PI). That will get you an accurate value for (cos(2.00*pi*1.00/2.00)). No need for rounding.
See the docs for Math.PI for more details.
So the question has a few problems.
As others point out, 22/7 is just an approximation of π.
Also, let pi = double (22/7) results in pi = 3.0, because 22/7 is integer division in F#: 22/7 evaluates to 3, which is then converted to 3.0. (Writing 22.0/7.0 would give 3.142857....)
When comparing with Wolfram, the expression there uses a better approximation of π than 3.0, so the F# result differs from Wolfram's rather significantly.
When asking Wolfram and F# to compute the same expression, cos(3), the results are as follows.
F#: cos 3.0 => -0.989992496600445
Wolfram: cos(3) => -0.98999249660044545727157279473126130239367909661558832881
Wolfram does compute more decimals, but we see that the numbers differ by less than 1e-15.
When we ask F# and Wolfram what cos(pi) is, they are in agreement:
F#: cos System.Math.PI => -1.0
Wolfram: cos(pi) => -1
I found the way, sorry! You need to use System.Math.Round. No need to import anything, as it's already built-in.
I.e.: System.Math.Round(System.Math.Round(cos(2.00*pi*2.00/2.00), 0))
How do I integrate a nonlinear function with interval coefficients (for example, dF(X)/dX = a/(1+cX), where a = [1, 2] and c = [2, 3] are interval constants) using the "IntervalArithmetic" package in Julia? Could you please give an example, or provide a relevant document? F(X) should come out as an interval with bounds, F(X) = [p, q].
Just numerical integration?
As long as the integrating code is written in Julia (otherwise I suspect it will struggle to understand IntervalArithmetic) and there isn't some snag about how it should interpret tolerances, it should just work, more or less how you might expect it to handle e.g. complex numbers.
using IntervalArithmetic
f(x) = interval(1,2)/(1+interval(2,3)*x)
and combined with e.g.
using QuadGK
quadgk(f, 0, 1)
gives ([0.462098, 1.09862], [0, 0.636515]) (so... I guess here the interpretation is that the error estimate is in the interval 0 to 0.636515 :))
Just as a sanity check, let's go with a good old trapezoidal rule.
using Trapz
xs = range(0, 1,length=100)
trapz(xs, f.(xs))
again gives us the expected interval [0.462098, 1.09862].
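As an extra cross-check (my own calculation, not part of the answer above): the integral can be done by hand, since the integral of a/(1 + c*x) from 0 to 1 is (a/c)*log(1 + c). The integrand is increasing in a and decreasing in c, so the extremes sit at the corners of the box: a = 1, c = 3 gives log(4)/3 ≈ 0.46210 and a = 2, c = 2 gives log(3) ≈ 1.09861, matching the interval [0.462098, 1.09862] reported above.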
Here is the question and solution to Structure and Interpretation of Computer Programs' exercise 1.15 (see here). My problem is, I don't know how the combination of these formulae actually works:
sin x ≈ x
and
sin x = 3 sin(x/3) - 4 sin(x/3)^3
where the first approximation holds for small x radian values.
I understand the idea that the closer the radian angle gets to zero, the more it approximates the sine of that angle. I've seen excellent explanations (MIT OCW, Khan Academy). I also have worked out how the
sin x = 3 sin(x/3) - 4 sin(x/3)^3
formula is derived. But how are they being used together to derive an answer to sin(x)? The p function seems to simply be taking the variable angle divided by 3 each recursive pass until angle is down below 0.1. Then on the way back, we perform p as many times as we had to divide by 3. So it seems
p(x) = 3x - 4x^3
magically becomes the same as sin(angle)
through recursive application. How? I'm not very deeply versed in recursion theory. Also, if this is logarithmically getting closer to 0.1, it's not as if we're totaling up lots of small x's a la integration. This seems to be doing something vaguely like the Y-combinator -- which I also don't grasp that well yet.
Also, when we see the recursion repeatedly dividing angle by 3, what tells you definitively this is logarithmic? I mean, it looks like it's taking those giant order-of-magnitude leaps at each division, but is there another analytical way to call this logarithmic reduction?
The first thing to point out is that sin x = x is not exactly accurate, since x is just an approximation. The correct notation is
sin x ≈ x. This might seem a little nitpicky, but it's important because this explains the exercise and the definition of sine given in the book.
The way sin x ≈ x and sin x = 3 sin(x/3) - 4 sin(x/3)^3 are combined is in the definition of the sine procedure. The idea is that we would like to return either the approximation x or the second formula (3 sin(x/3) - 4 sin(x/3)^3) depending on the value of x.
If x is "sufficiently small", then we just return x as an approximation for sin(x). But if it's not "sufficiently small" we will use sin x = 3 sin(x/3) - 4 sin(x/3)^3. This is obviously fine since it's an equality. It might seem unnecessary until you notice that x/3 is smaller and therefore it might be "sufficiently small". This is why the procedure is recursive: we will keep doing this until the argument for sine is "sufficiently small".
It seems that the source of your confusion is here:
So it seems p(x) = 3x - 4x^3 magically becomes the same as sin(angle).
This is not the case. It's a bit tricky, since (define (p x) (- (* 3 x) (* 4 (cube x)))) doesn't include any sine, but remember that the x in this definition is just a local variable. If we look at the final line of the definition of the sine procedure, we can see that we are actually calling (p (sine (/ angle 3.0))), so the sine is in the argument of the p call.
The reason why the recursion is logarithmic in terms of the number of steps is that the number of times we will be calling the p procedure is around the number of times we have to divide the angle by 3.0 to get a value smaller than 0.1. This is a value close to log3(angle) if the angle is a big number. So we will have to call p until angle/(3.0^n) < 0.1, which approximates to the n such that 3.0^n > angle, which gives n ≈ log3(angle), i.e. the number of steps grows logarithmically with the angle.
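If it helps to see the recursion concretely, here is a small Python transliteration of the book's sine procedure (my own sketch; the names cube, p and sine follow SICP):

import math

def cube(x):
    return x * x * x

def p(x):
    # One application of sin t = 3 sin(t/3) - 4 sin(t/3)^3,
    # where x plays the role of sin(t/3).
    return 3 * x - 4 * cube(x)

def sine(angle):
    # Mirrors the SICP procedure: return the angle itself when it is
    # "sufficiently small", otherwise recurse on angle/3 and apply p.
    if abs(angle) <= 0.1:
        return angle
    return p(sine(angle / 3.0))

print(sine(12.15), math.sin(12.15))  # both are roughly -0.40
# For angle = 12.15, p is applied 5 times, since 12.15 / 3^5 = 0.05 < 0.1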
I tried to get this to work using Newton's method as described here: wiki, with the following code, but the problem is that it gives an accurate result only up to 16 decimal places. I tried to increase the number of iterations, but the result stays the same. I started with an initial guess of 1. So how can I improve the accuracy of the answer (up to 100 or more decimal places)?
Thanks.
Code:
#include <stdio.h>

#define n 2

double x0, x1;

/* f(x) = x^2 - n; its positive root is sqrt(n) */
double f(double x0)
{
    return (x0 * x0) - n;
}

/* f'(x) = 2x */
double firstDerv(double x0)
{
    return 2.0 * x0;
}

int main()
{
    x0 = n / 2.0;
    int i;
    for (i = 0; i < 40000; i++)
    {
        /* Newton step: x1 = x0 - f(x0)/f'(x0) */
        x1 = x0 - (f(x0) / firstDerv(x0));
        x0 = x1;
    }
    printf("%.100lf\n", x1);
    return 0;
}
To get around the problem of limited-precision floating point, you can also use Newton's method to find, in each iteration, a rational number (a/b, with a and b integers) that is a better approximation of sqrt(2).
If x = a/b is the value returned from your last iteration, then Newton's method states that the new estimate y = c/d is:
y = x/2 + 1/x = a/(2b) + b/a = (a^2 + 2b^2)/(2ab)
so:
c = a^2 + 2b^2
d = 2ab
Precision doubles each iteration. You are still limited in the precision you can reach, because the numerator and denominator quickly increase, but perhaps finding an implementation of large integers (or concocting one yourself) is easier than finding an implementation of arbitrary-precision floating point. Also, if you are really interested in decimals, then this answer won't help you. It does give you a very precise estimate of sqrt(2).
Just some iterates of a/b for the algorithm:
1/1 , 3/2 , 17/12 , 577/408 , 665857/470832.
665857/470832 approximates sqrt(2) with an error of about 1.59e-12. The error will remain of the order 1/a^2, so implementing a and b as longs (64-bit integers) will give you a precision of roughly 1e-37.
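If big integers are available, this is only a few lines. Here is a sketch in Python (my own; Python's built-in integers are already arbitrary precision, so they stand in for the "large integers" mentioned above):

def sqrt2_rational(iterations):
    # Newton's iteration on x^2 - 2 = 0, kept in exact rational form:
    # with x = a/b, the next estimate is (a^2 + 2*b^2) / (2*a*b).
    a, b = 1, 1
    for _ in range(iterations):
        a, b = a * a + 2 * b * b, 2 * a * b
    return a, b

a, b = sqrt2_rational(5)
print(a, "/", b)                 # 886731088897 / 627013566048

# To read off ~100 correct decimal digits, divide using integer arithmetic;
# after 8 iterations the error is far below 1e-100.
a, b = sqrt2_rational(8)
print(a * 10**100 // b)          # 14142135623... (decimal point goes after the leading 1)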
The floating-point numbers on current machines are IEEE 754 doubles and have limited precision (about 15-16 decimal digits).
If you want much more precision, you'll need bignums, which are (slowly) provided by software libraries like GMP.
You could also code your program in a language and implementation that provides bignums.
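For example, here is a sketch of the same Newton iteration run at high precision with Python's standard decimal module (my own illustration, not part of the answer above):

from decimal import Decimal, getcontext

getcontext().prec = 110          # work with ~110 significant digits
n = Decimal(2)
x = Decimal(1)                   # initial guess
for _ in range(10):              # the number of correct digits roughly doubles per step
    x = x - (x * x - n) / (2 * x)
print(x)                         # sqrt(2) to well over 100 digits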
You simply can't do it with that approach; doubles don't have enough bits to get 100 places of precision. Consider using an arbitrary-precision library such as GMP.
Maybe this is because floating-point numbers are represented by computers in an m*2^e form. Since m and e consist of a finite number of bits, you cannot represent all numbers with absolute accuracy.
Think of 1/3, which is 0.333333333333333...
I have a function that takes a floating-point number and returns a floating-point number. It can be assumed that if you were to graph the output of this function it would be 'n'-shaped, i.e. there would be a single maximum point and no other points on the function with a zero slope. We also know that the input value that yields this maximum output will lie between two known points, perhaps 0.0 and 1.0.
I need to efficiently find the input value that yields the maximum output value to some degree of approximation, without doing an exhaustive search.
I'm looking for something similar to Newton's Method which finds the roots of a function, but since my function is opaque I can't get its derivative.
I would like to down-thumb all the other answers so far, for various reasons, but I won't.
An excellent and efficient method for minimizing (or maximizing) smooth functions when derivatives are not available is parabolic interpolation. It is common to write the algorithm so it temporarily switches to golden-section search when parabolic interpolation does not progress as fast as golden-section would; that combination is Brent's minimizer.
I wrote such an algorithm in C++. Any offers?
UPDATE: There is a C version of the Brent minimizer in GSL. The archives are here: ftp://ftp.club.cc.cmu.edu/gnu/gsl/ Note that it will be covered by some flavor of GNU "copyleft."
As I write this, the latest-and-greatest appears to be gsl-1.14.tar.gz. The minimizer is located in the file gsl-1.14/min/brent.c. It appears to have termination criteria similar to what I implemented. I have not studied how it decides to switch to golden section, but for the OP, that is probably moot.
UPDATE 2: I googled up a public-domain Java version, translated from FORTRAN. I cannot vouch for its quality. http://www1.fpl.fs.fed.us/Fmin.java I notice that the hard-coded machine epsilon ("machine precision" in the comments) is 1/2 the value for a typical PC today. Change the value of eps to 2.22045e-16.
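For reference, here is a plain golden-section sketch in Python (my own illustration, not the GSL or Fmin code; Brent's method layers parabolic-interpolation steps on top of this bracketing scheme):

import math

def golden_section_maximize(f, a, b, tol=1e-8):
    # Golden-section search for the maximum of a unimodal function on [a, b].
    # Both probes are re-evaluated each pass for brevity; a real
    # implementation caches one of them.
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) > f(d):          # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

print(golden_section_maximize(lambda x: -x * (x - 1.0), 0.0, 1.0))  # ~0.5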
Edit 2: The method described in Jive Dadson's answer is a better way to go about this. I'm leaving my answer up since it's easier to implement, if speed isn't too much of an issue.
Use a form of binary search, combined with numeric derivative approximations.
Given the interval [a, b], let x = (a + b) / 2
Let epsilon be something very small.
Is (f(x + epsilon) - f(x)) positive? If yes, the function is still growing at x, so you recursively search the interval [x, b]
Otherwise, search the interval [a, x].
There might be a problem if the max lies between x and x + epsilon, but you might give this a try.
Edit: The advantage of this approach is that it exploits the known properties of the function in question. That is, I assumed that by "n"-shaped you meant increasing, then a single maximum, then decreasing. Here's some Python code I wrote to test the algorithm:
def f(x):
    return -x * (x - 1.0)

def findMax(function, a, b, maxSlope):
    x = (a + b) / 2.0
    e = 0.0001
    slope = (function(x + e) - function(x)) / e
    if abs(slope) < maxSlope:
        return x
    if slope > 0:
        return findMax(function, x, b, maxSlope)
    else:
        return findMax(function, a, x, maxSlope)
Typing findMax(f, 0, 3, 0.01) should return 0.504, as desired.
For optimizing a concave function, which is the type of function you are talking about, without evaluating the derivative, I would use the secant method.
Given the two initial values x[0]=0.0 and x[1]=1.0 I would proceed to compute the next approximations as:
def next_x(x, xprev):
    return x - f(x) * (x - xprev) / (f(x) - f(xprev))
and thus compute x[2], x[3], ... until the change in x becomes small enough.
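To make that concrete, here is a small driving loop (my own sketch with an arbitrary example f; note that this drives the root-finding form of the secant method, see the edit below):

import math

def f(x):
    return math.cos(x) - x       # example function; its root is near 0.739

def next_x(x, xprev):            # same secant update as above
    return x - f(x) * (x - xprev) / (f(x) - f(xprev))

xprev, x = 0.0, 1.0              # the two initial values x[0] and x[1]
for _ in range(20):
    if abs(x - xprev) < 1e-12 or f(x) == f(xprev):
        break                    # stop when converged (also avoids a zero denominator)
    xprev, x = x, next_x(x, xprev)
print(x)                         # ~0.7390851332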
Edit: As Jive explains, this solution is for root finding which is not the question posed. For optimization the proper solution is the Brent minimizer as explained in his answer.
The Levenberg-Marquardt algorithm is a Newton's method like optimizer. It has a C/C++ implementation levmar that doesn't require you to define the derivative function. Instead it will evaluate the objective function in the current neighborhood to move to the maximum.
BTW: this website appears to have been updated since I last visited it; I hope it's even the same one I remembered. Apparently it now also supports other languages.
Given that it's only a function of a single variable and has one extremum in the interval, you don't really need Newton's method. Some sort of line-search algorithm should suffice. This Wikipedia article is actually not a bad starting point, if short on details. Note in particular that you could just use the method described under "direct search", starting with the endpoints of your interval as your two points.
I'm not sure if you'd consider that an "exhaustive search", but it should actually be pretty fast I think for this sort of function (that is, a continuous, smooth function with only one local extremum in the given interval).
You could reduce it to a simple linear fit on the deltas, finding the place where the fit crosses the x-axis. A linear fit can be done very quickly.
Or just take 3 points (left/top/right) and fit the parabola.
It depends mostly on the nature of the underlying relation between x and y, I think.
Edit: this is in case you have an array of values, as the question's title states. When you have a function, use Newton-Raphson.
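For the three-point parabola suggestion above, here is a small Python sketch (my own; the closed-form coefficients come from fitting y = A*x^2 + B*x + C through the three points):

def parabola_vertex(x1, y1, x2, y2, x3, y3):
    # Fit y = A*x^2 + B*x + C through three points and return the x of the vertex.
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    A = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    B = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1) + x1 * x1 * (y2 - y3)) / denom
    return -B / (2.0 * A)

f = lambda x: -x * (x - 1.0)     # example function, peak at x = 0.5
print(parabola_vertex(0.0, f(0.0), 0.4, f(0.4), 1.0, f(1.0)))   # 0.5 (exact here, since f is itself a parabola)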
I just wondered how it is possible to write a user-defined square-root function (sqrt) in a way that interacts properly with F#'s unit-of-measure system.
What it should be like:
let sqrt (x : float<'u ^ 2>) =
    let x' = x / 1.0<'u ^ 2>  // Delete unit
    (x' ** 0.5) * 1.0<'u>     // Reassign unit
But this is disallowed due to nonzero constants not being allowed to have generic units.
Is there a way to write this function? With the builtin sqrt it works fine, so what magic does it perform?
Allowing nonzero generic constants would make it very easy to break the safety of the type system for units (see Andrew Kennedy's papers). I believe that the answer to your last question is that sqrt is indeed magic in some sense in that it shouldn't be possible to define a parametric function with that type signature through normal means. However, it is possible to do what you want (at least in the current version of F#) by taking advantage of boxing and casting:
let sqrt (x : float<'u^2>) =
    let x' = (float x) ** 0.5   (* delete unit and calculate sqrt *)
    ((box x') :?> float<'u>)
#kvb is right, more generally:
If you have a non-unit-aware algorithm (say you write 'cube root'), and you want to put units on it, you can wrap the algorithm in a function with the right type signature and use e.g. float to 'cast away' the units as they come in, and the box-and-downcast approach to 'add back' the appropriate units on the way out.
In the RTM release (after Beta2), F# will have primitive library functions for 'adding back units', since the box-and-downcast approach is currently a bit of a hack to overcome the lack of these primitives in the language/library.