I have a super strange issue. Here's my simple piece of recursive code:
let float2cfrac x =
    let rec tofloat (lst : int list) (xi : float) =
        let qi = floor xi
        let ri = xi - qi
        printfn "%A %A %A %A" xi qi ri (1.0 / ri)
        if ri > (floor 0.0) then
            tofloat (lst @ [int qi]) (1.0 / ri)
        else
            lst
    tofloat [] x
I'm not going to explain my code much, as the issue I'm having seems quite basic.
The printfn prints xi and qi, where qi is simply the floor of xi.
Looking at the output, it seems that once xi reaches a round number, the floor function subtracts 1 instead of leaving the value unchanged.
Here's my output for the number 3.245, which should finish after just a few iterations:
float2cfrac 3.245;;
3.245 3.0 0.245 4.081632653
4.081632653 4.0 0.08163265306 12.25
12.25 12.0 0.25 4.0
4.0 3.0 1.0 1.0 - Here it gets messed up. Floor of 4.0 should be 4, right?
1.0 1.0 4.035882739e-12 2.477772682e+11
2.477772682e+11 2.477772682e+11 0.2112731934 4.733208147
4.733208147 4.0 0.7332081468 1.363869188
If anybody has an explanation for this or some suggestions, it would be greatly appreciated!
This is a very well-known issue: floating-point numbers have finite precision, so you can't generally count on the same calculation, done via different methods, producing exactly the same result. There will always be a margin of error.
The corollary is that you can't compare floating-point numbers for strict equality. You have to take their difference and compare it to some very small number.
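For example, a tolerance-based comparison in F# could look like this (the epsilon value is an arbitrary choice for illustration):

let nearlyEqual (a : float) (b : float) epsilon =
    abs (a - b) < epsilon

nearlyEqual (0.1 + 0.2) 0.3 1e-9   // true, even though (0.1 + 0.2) = 0.3 evaluates to false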
You can avoid the numerical issues of floats by not using floats. Here, one solution is to represent the input as a rational number, i.e. an integer numerator and an integer denominator, and then adjust the formulas accordingly.
open System.Numerics

let number2cfrac (xNumerator : int) (xDenominator : int) =
    let rec loop acc (xin : BigInteger) (xid : BigInteger) =
        let qi = xin / xid
        let rin = xin - (qi * xid)
        printfn "%A %A %A %A" (float xin / float xid) qi (float rin / float xid) (float xid / float rin)
        if rin <> BigInteger.Zero then
            loop (int qi :: acc) xid rin
        else
            List.rev acc
    loop [] (BigInteger(xNumerator)) (BigInteger(xDenominator))
> number2cfrac 3245 1000;;
3.245 3 0.245 4.081632653
4.081632653 4 0.08163265306 12.25
12.25 12 0.25 4.0
4.0 4 0.0 infinity
val it : int list = [3; 4; 12]
I've been messing around a bit with Riccardo Terrell's Akka.NET Fractal demo (https://github.com/rikace/akkafractal) to try and understand it. (It's great, btw)
One thing I tried as a minor challenge for myself was to rewrite some bits of it in a more functional way. I've got it to work but it's much slower than the original.
Here's (more or less) the original Mandelbrot Set calculation adapted for testing:
let mandelbrotSet (xp : int) (yp : int) (w : int) (h : int) (width : int) (height : int)
                  (maxr : float) (minr : float) (maxi : float) (mini : float) : List<int> =
    let mutable zx = 0.
    let mutable zy = 0.
    let mutable cx = 0.
    let mutable cy = 0.
    let mutable xjump = ((maxr - minr) / (float width))
    let yjump = ((maxi - mini) / (float height))
    let mutable tempzx = 0.
    let loopmax = 1000
    let mutable loopgo = 0
    let outputList: int list = List.empty
    for x = xp to (xp + w) - 1 do
        cx <- (xjump * float x) - abs(minr)
        for y = yp to (yp + h) - 1 do
            zx <- 0.
            zy <- 0.
            cy <- (yjump * float y) - abs(mini)
            loopgo <- 0
            while (zx * zx + zy * zy <= 4. && loopgo < loopmax) do
                loopgo <- loopgo + 1
                tempzx <- zx
                zx <- (zx * zx) - (zy * zy) + cx
                zy <- (2. * tempzx * zy) + cy
            (List.append outputList [loopgo]) |> ignore
    outputList
And here's my version with the recursive mbCalc function doing the work:
let mandelbrotSetRec (xp : int) (yp : int) (w : int) (h : int) (width : int) (height : int)
                     (maxr : float) (minr : float) (maxi : float) (mini : float) : List<int> =
    let xjump = ((maxr - minr) / (float width))
    let yjump = ((maxi - mini) / (float height))
    let loopMax = 1000
    let outputList: int list = List.empty
    let rec mbCalc(zx: float, zy: float, cx: float, cy: float, loopCount: int) =
        match (zx * zx + zy * zy), loopCount with // The square of the magnitude of z
        | a, b when a > 4. || b = loopMax -> loopCount
        | _ -> mbCalc((zx * zx) - (zy * zy) + cx, (2. * zx * zy) + cy, cx, cy, loopCount + 1) // iteration is the next value of z^2+c
    [|0..w-1|] // For each x...
    |> Array.map (fun x -> let cx = (xjump * float (x + xp) - abs(minr))
                           [|0..h-1|] // ...and for each y...
                           |> Array.map (fun y -> let cy = (yjump * float (y + yp) - abs(mini))
                                                  let mbVal = mbCalc(0., 0., cx, cy, 0) // Calculate the number of iterations to convergence (recursively)
                                                  List.append outputList [mbVal])) |> ignore
    outputList
Is this just to be expected, pointlessly loading up an Actor with a load of recursive calls, or am I just doing something very inefficiently? Any pointers gratefully received!
If you want to run them then here's a little test script:
let xp = 1500
let yp = 1500
let w = 200
let h = 200
let width = 4000
let height = 4000
let timer1 = new System.Diagnostics.Stopwatch()
timer1.Start()
let ref = mandelbrotSet xp yp w h width height 0.5 -2.5 1.5 -1.5
timer1.Stop()
let timer2 = new System.Diagnostics.Stopwatch()
timer2.Start()
let test = mandelbrotSetRec xp yp w h width height 0.5 -2.5 1.5 -1.5
timer2.Stop()
timer1.ElapsedTicks;;
timer2.ElapsedTicks;;
ref = test;;
EDIT: As per Philip's answer below, I added the list output quickly (too quickly!) to make something that ran in a script without requiring any imports. Here's the code to return the image:
// Requires the SixLabors.ImageSharp package (the image library used by the akkafractal demo)
open SixLabors.ImageSharp
open SixLabors.ImageSharp.PixelFormats

let mandelbrotSetRec (xp : int) (yp : int) (w : int) (h : int) (width : int) (height : int)
                     (maxr : float) (minr : float) (maxi : float) (mini : float) : Image<Rgba32> =
    let img = new Image<Rgba32>(w, h)
    let xjump = ((maxr - minr) / (float width))
    let yjump = ((maxi - mini) / (float height))
    let loopMax = 1000
    // Precalculate the possible colour list
    let palette = List.append ([0..loopMax - 1] |> List.map (fun c -> Rgba32(byte (c % 32 * 7), byte (c % 128 * 2), byte (c % 16 * 14)))) [Rgba32.Black]
    let rec mbCalc(zx: float, zy: float, cx: float, cy: float, loopCount: int) =
        match (zx * zx + zy * zy), loopCount with // The square of the magnitude of z
        | a, b when a > 4. || b = loopMax -> loopCount
        | _ -> mbCalc((zx * zx) - (zy * zy) + cx, (2. * zx * zy) + cy, cx, cy, loopCount + 1) // iteration is the next value of z^2+c
    [|0..w-1|] // For each x...
    |> Array.map (fun x -> let cx = (xjump * float (x + xp) - abs(minr))
                           [|0..h-1|] // ...and for each y...
                           |> Array.map (fun y -> let cy = (yjump * float (y + yp) - abs(mini))
                                                  let mbVal = mbCalc(0., 0., cx, cy, 0) // Calculate the number of iterations to convergence (recursively)
                                                  img.[x, y] <- palette.[mbVal])) |> ignore
    img
Firstly, both functions return [], so no Mandelbrot set is being returned even if it's correctly calculated. List.append returns a new list; it doesn't mutate an existing one.
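A minimal illustration of that point (the names here are just for the example):

let xs = [1; 2]
List.append xs [3] |> ignore    // builds a new list [1; 2; 3] and immediately discards it
printfn "%A" xs                 // [1; 2] -- xs is unchanged
let ys = List.append xs [3]     // the result has to be bound to be kept
printfn "%A" ys                 // [1; 2; 3]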
Using a quick BenchmarkDotNet program below, where each function is in its own module:
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running
open ActorTest

[<MemoryDiagnoser>]
type Bench() =
    let xp = 1500
    let yp = 1500
    let w = 200
    let h = 200
    let width = 4000
    let height = 4000

    [<Benchmark(Baseline=true)>]
    member _.Mutable() =
        Mutable.mandelbrotSet xp yp w h width height 0.5 -2.5 1.5 -1.5

    [<Benchmark>]
    member _.Recursive() =
        Recursive.mandelbrotSet xp yp w h width height 0.5 -2.5 1.5 -1.5

[<EntryPoint>]
let main argv =
    let summary = BenchmarkRunner.Run<Bench>()
    printfn "%A" summary
    0 // return an integer exit code
Your code gave these results:
| Method | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---------- |---------:|----------:|----------:|------:|--------:|---------:|---------:|------:|----------:|
| Mutable | 1.356 ms | 0.0187 ms | 0.0166 ms | 1.00 | 0.00 | 406.2500 | - | - | 1.22 MB |
| Recursive | 2.558 ms | 0.0303 ms | 0.0283 ms | 1.89 | 0.03 | 613.2813 | 304.6875 | - | 2.13 MB |
I noticed that you're using Array.map but no results are being captured anywhere, so changing that to Array.iter brought your code to nearly the same performance:
| Method | Mean | Error | StdDev | Ratio | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---------- |---------:|----------:|----------:|------:|---------:|------:|------:|----------:|
| Mutable | 1.515 ms | 0.0107 ms | 0.0094 ms | 1.00 | 406.2500 | - | - | 1.22 MB |
| Recursive | 1.652 ms | 0.0114 ms | 0.0101 ms | 1.09 | 607.4219 | - | - | 1.82 MB |
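For reference, the Array.iter change was along these lines (a sketch of the adapted loops, not the exact code that was benchmarked):

[|0..w-1|]                                         // for each x...
|> Array.iter (fun x ->
    let cx = (xjump * float (x + xp)) - abs(minr)
    [|0..h-1|]                                     // ...and for each y
    |> Array.iter (fun y ->
        let cy = (yjump * float (y + yp)) - abs(mini)
        mbCalc(0., 0., cx, cy, 0) |> ignore))      // no intermediate result arrays are allocated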
This difference can probably be explained by the additional allocations done by the mapping calls. Allocations are expensive, especially for larger arrays, so it's best to avoid them if possible. Exact timings will differ from machine to machine, but I'd expect a similar before/after ratio when using BenchmarkDotNet.
This could probably be further optimized by avoiding the list allocations entirely and pre-allocating an array that you fill in; the same is true for the iterative version. Also, looping through a Span<'T> will be faster than an array since it elides a bounds check, but you'd probably have to change the shape of your code a lot to do that.
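As a rough sketch of the pre-allocation idea (the function name, flat array layout, and index arithmetic here are my own assumptions, not code from the original project):

let mandelbrotSetPrealloc (xp: int) (yp: int) (w: int) (h: int) (width: int) (height: int)
                          (maxr: float) (minr: float) (maxi: float) (mini: float) =
    let xjump = (maxr - minr) / float width
    let yjump = (maxi - mini) / float height
    let loopMax = 1000
    let rec mbCalc (zx, zy, cx, cy, loopCount) =
        if zx * zx + zy * zy > 4. || loopCount = loopMax then loopCount
        else mbCalc ((zx * zx) - (zy * zy) + cx, (2. * zx * zy) + cy, cx, cy, loopCount + 1)
    // One int per pixel, allocated once up front and filled in place
    let output = Array.zeroCreate<int> (w * h)
    for x = 0 to w - 1 do
        let cx = (xjump * float (x + xp)) - abs minr
        for y = 0 to h - 1 do
            let cy = (yjump * float (y + yp)) - abs mini
            output.[x * h + y] <- mbCalc (0., 0., cx, cy, 0)
    output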
Lastly, always use a statistical benchmarking tool like BenchmarkDotNet to measure performance in microbenchmarks like this. Quick scripts are fine as a starting point, but they're no substitute for a tool that accounts for execution time variability on a machine.
I am trying to implement a recursive function which takes a float and returns a list of ints representing the continued fraction representation of the float (https://en.wikipedia.org/wiki/Continued_fraction). In general I think I understand how the algorithm is supposed to work; it's fairly simple. What I have so far is this:
let rec float2cfrac (x : float) : int list =
    let q = int x
    let r = x - (float q)
    if r = 0.0 then
        []
    else
        q :: (float2cfrac (1.0 / r))
The problem is with the base case, obviously. It seems the value r never reduces to 0.0; instead the algorithm keeps returning values like 0.0.....[number]. I am just not sure how to perform the comparison. How exactly should I go about it? The algorithm the function is based on says the base case is 0, so I naturally interpret this as 0.0. I don't see any other way. Also, do note that this is for an assignment where I am explicitly asked to implement the algorithm recursively. Does anyone have some guidance for me? It would be much appreciated.
It seems the value r never reduces to 0.0; instead the algorithm keeps returning values like 0.0.....[number].
This is a classic issue with floating point comparisons. You need to use some epsilon tolerance value for comparisons, because r will never reach exactly 0.0:
let epsilon = 0.0000000001

let rec float2cfrac (x : float) : int list =
    let q = int x
    let r = x - (float q)
    if r < epsilon then
        []
    else
        q :: (float2cfrac (1.0 / r))
> float2cfrac 4.23
val it : int list = [4; 4; 2; 1]
See this MSDN documentation for more.
You could define a helper function for this:
let withinTolerance (x: float) (y: float) e =
    System.Math.Abs(x - y) < e
Also note your original solution isn't tail-recursive, so it consumes stack as it recurses and could overflow the stack. You could refactor it such that a float can be unfolded without recursion:
let float2cfrac (x: float) =
    let q = int x
    let r = x - (float q)
    if withinTolerance r 0.0 epsilon then None
    else Some (q, (1.0 / r))
4.23 |> Seq.unfold float2cfrac // seq [4; 4; 2; 1]
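If the assignment wants an int list rather than a seq, the unfolded sequence can simply be materialized, e.g.:

4.23 |> Seq.unfold float2cfrac |> List.ofSeq   // [4; 4; 2; 1]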
Below is SML code to compute a definite integral using the trapezoidal method, given a unary function f, the integration bounds a and b, and n, the number of sub-intervals to divide the range into.
fun integrate f a b n =
    let val w = (b - a) / (real n)
        fun genBlock c = let val BB = f c
                             val SB = f (c + w)
                         in (BB + SB) * w / 2.0
                         end
        fun sumSlice 0 c acc = acc
          | sumSlice n c acc = sumSlice (n - 1) (c + w) (acc + (genBlock c))
    in sumSlice n a 0.0
    end
The problem is that I can't figure out for the life of me how to define a function (say, x cubed) and feed it to this function along with a, b, and n. Here's a screenshot of me trying and receiving an error:
In the screenshot I define cube x = x*x*x and show that it works, then try to feed it to the integrate function, to no avail.
The error message is pretty specific: integrate is expecting a function of type real -> real but you defined a function, cube, of type int -> int.
There are a couple of things you can do:
1) Add a type annotation to the definition of cube:
- fun cube x:real = x*x*x;
val cube = fn : real -> real
And then:
- integrate cube 0.0 5.0 5;
val it = 162.5 : real
2) You can dispense with defining cube as a named function and just pass the computation as an anonymous function. In this case, SML's type inference mechanism gives the function fn x => x*x*x the intended type:
- integrate (fn x => x*x*x) 0.0 5.0 5;
val it = 162.5 : real
I'm teaching myself OCaml, and the main resources I'm using for practice are some problem sets Cornell has made available from their 3110 class. One of the problems is to write a function to reverse an int (e.g., 1234 -> 4321, -1234 -> -4321, 2 -> 2, -10 -> -1, etc.).
I have a working solution, but I'm concerned that it isn't exactly idiomatic OCaml:
let rev_int (i : int) : int =
  let rec power cnt value =
    if value / 10 = 0 then cnt
    else power (10 * cnt) (value / 10) in
  let rec aux pow temp value =
    if value <> 0 then aux (pow / 10) (temp + (value mod 10 * pow)) (value / 10)
    else temp in
  aux (power 1 i) 0 i
It works properly in all cases as far as I can tell, but it just seems seriously "un-OCaml" to me, particularly because I'm running through the length of the int twice with two inner functions. So I'm just wondering whether there's a more "OCaml" way to do this.
I would say that the following is idiomatic enough.
(* [rev x] returns a value [y] whose decimal representation
   is the reverse of the decimal representation of [x], e.g.,
   [rev 12345 = 54321] *)
let rev n =
  let rec loop acc n =
    if n = 0 then acc
    else loop (acc * 10 + n mod 10) (n / 10) in
  loop 0 n
But as Jeffrey said in a comment, your solution is quite idiomatic, although not the nicest one.
Btw, my own style would be to write it like this:
let rev n =
  let rec loop acc = function
    | 0 -> acc
    | n -> loop (acc * 10 + n mod 10) (n / 10) in
  loop 0 n
I prefer pattern matching to if/then/else, but this is a matter of my personal taste.
I can propose another way of doing it:
let decompose_int i =
  let r = i / 10 in
  i - (r * 10), r
This function allows me to decompose the integer as if I had a list.
For instance 1234 is decomposed into 4 and 123.
Then we reverse it.
let rec rev_int i = match decompose_int i with
  | x, 0 -> 10, x
  | h, t ->
    let (m, r) = rev_int t in
    (10 * m, h * m + r)
The idea here is to return 10, 100, 1000... and so on to know where to place the last digit.
What I wanted to do here is to treat them as I would treat lists, decompose_int being a List.hd and List.tl equivalent.
I am trying to make a function to round a floating point number to a defined length of digits. What I have come up with so far is this:
import Numeric

digs :: Integral x => x -> [x]
digs 0 = []
digs x = digs (x `div` 10) ++ [x `mod` 10]

roundTo x t =
  let d = length $ digs $ round x
      roundToMachine x t = (fromInteger $ round $ x * 10^^t) * 10^^(-t)
  in roundToMachine x (t - d)
I am using the digs function to determine the number of digits before the decimal point, so that I can normalize the input value (i.e. move everything past the decimal point, so 1.234 becomes 0.1234 * 10^1).
The roundTo function seems to work for most inputs; however, for some inputs I get strange results, e.g. roundTo 1.0014 4 produces 1.0010000000000001 instead of 1.001.
The problem in this example is caused by calculating 1001 * 1.0e-3 (which returns 1.0010000000000001)
Is this simply a problem in the number representation of Haskell I have to live with or is there a better way to round a floating point number to a specific length of digits?
I realise this question was posted almost 2 years back, but I thought I'd have a go at an answer that didn't require a string conversion.
-- x : number you want rounded, n : number of decimal places you want...
truncate' :: Double -> Int -> Double
truncate' x n = (fromIntegral (floor (x * t))) / t
  where t = 10^n
-- How to answer your problem...
λ truncate' 1.0014 3
1.001
-- 2 digits of a recurring decimal please...
λ truncate' (1/3) 2
0.33
-- How about 6 digits of pi?
λ truncate' pi 6
3.141592
I've not tested it thoroughly, so if you find numbers this doesn't work for let me know!
This isn't a Haskell problem as much as a floating-point problem. Since each floating-point number is implemented in a finite number of bits, there exist numbers that can't be represented completely accurately. You can also see this by calculating 0.1 + 0.2, which awkwardly returns 0.30000000000000004 instead of 0.3. This has to do with how floating-point numbers are implemented for your language and hardware architecture.
The solution is to continue using your roundTo function for doing computation (it's as accurate as you'll get without special libraries), but if you want to print it to the screen then you should use string formatting such as the Text.Printf.printf function. You can specify the number of digits to round to when converting to a string with something like
import Text.Printf
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr n f = printf ("%0." ++ show n ++ "f") f
But as I mentioned, this will return a string rather than a number.
EDIT:
A better way might be
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr n f = printf (printf "%%0.%df" n) f
but I haven't benchmarked to see which is actually faster. Both will work exactly the same though.
EDIT 2:
As #augustss has pointed out, you can do it even easier with just
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr = printf "%0.*f"
which uses a formatting rule that I was previously unaware of.
I also think that avoiding string conversion is the way to go; however, I would modify the previous post (from schanq) to use round instead of floor:
round' :: Double -> Integer -> Double
round' num sg = (fromIntegral . round $ num * f) / f
  where f = 10^sg
> round' 3.99999 4
4.0
> round' 4.00001 4
4.0