# List.fold_left (-) 100 [1;2;3;4];;
- : int = 90
# List.fold_right (-) [1;2;3;4] 100;;
- : int = 98
What causes these two results to come out so differently, and what is the evaluation process in each case?
I ran both with the same list and the same accumulator, expecting them to reach the same result of 90; after all, aren't both folds ultimately subtracting 10 from 100?
It may be instructive to run List.fold_right with a smaller sample.
# List.fold_right (-) [1] 100;;
- : int = -99
So here it's clear that 100 is being subtracted from 1 rather than the other way around.
Let's expand on that.
# List.fold_right (-) [1; 2] 100;;
- : int = 99
Huh. Well this makes sense if we consider:
# 1 - (2 - 100);;
- : int = 99
And then, if we extrapolate, the original can be considered equivalent to writing:
# 1 - (2 - (3 - (4 - 100)));;
- : int = 98
The documentation on List.fold_right does specify this.
The meaning of List.fold_left (-) 100 [1;2;3;4] is:
(((100 - 1) - 2) - 3) - 4
But the meaning of List.fold_right (-) [1;2;3;4] 100 is:
1 - (2 - (3 - (4 - 100)))
You can check that this is 98.
You can get the result you expected using flipped subtraction:
# List.fold_right (Fun.flip (-)) [1;2;3;4] 100;;
- : int = 90
The order of arguments to the fold functions is (I think) intended to suggest this difference. The arguments to fold_left suggest that you're subtracting the list from 100. The arguments to fold_right suggest that you're subtracting 100 from the list.
One way to look at it is that the difference is because subtraction isn't commutative.
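If it helps to see the same asymmetry outside OCaml, here is a rough Python sketch of the two folds (my own illustration, not from the original answers): functools.reduce is a left fold, and the right fold is emulated by folding the reversed list with the arguments flipped.
from functools import reduce

xs = [1, 2, 3, 4]

# Left fold: (((100 - 1) - 2) - 3) - 4
left = reduce(lambda acc, x: acc - x, xs, 100)

# Right fold: 1 - (2 - (3 - (4 - 100))), emulated by folding the
# reversed list with the argument order flipped.
right = reduce(lambda acc, x: x - acc, reversed(xs), 100)

print(left, right)  # 90 98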
I have a very simple (and stupid :) question:
How do I get from this expression:
To this recursion formula?
I do not want any complete solution, but maybe you can help me to understand it with just a few tips.
Thx, and a nice day :)
This is very similar to [Wikipedia]: Fibonacci number, and is a typical [Wikipedia]: Mathematical induction example.
Starting from the assumption that the formula is true for (a given) n (let's call this P(n)):
J(n) = 2 * J(n-1) + (-1)^(n-1)
then, using the above equality and the function definition, you must prove P(n + 1) - that the formula is also true for (the consecutive) n + 1. This is the one in your question:
J(n+1) = 2 * J(n) + (-1)^n
Since this is a programming site (and I'm too lazy calculating the first few values manually), here's some Python code that does that for us:
>>> def j(n):
...     if n <= 0:
...         return 0
...     elif n == 1:
...         return 1
...     else:
...         return j(n - 1) + 2 * j(n - 2)
...
>>>
>>> for i in range(15):
...     print("{:2d} - {:4d}".format(i, j(i)))
...
0 - 0
1 - 1
2 - 1
3 - 3
4 - 5
5 - 11
6 - 21
7 - 43
8 - 85
9 - 171
10 - 341
11 - 683
12 - 1365
13 - 2731
14 - 5461
From the above, it's kind of noticeable to the naked eye that the formula holds. Now the induction mechanism should be applied: start from P(n) and, using the function definition (3rd branch), get to P(n + 1). That might not be trivial, considering there's a level-2 dependency (most of the terms will eventually cancel each other out, but I didn't try it to see how "visible" that would be). You could check [SO]: Recursive summation of a sequence returning wrong result (#CristiFati's answer), for more details on a simpler problem.
Note:
Given the current coefficients, I must mention the recursion's characteristic equation (check [Wikipedia]: Constant-recursive sequence), which gives a non-recursive (closed-form) formula for J(n):
J(n) = J(n-1) + 2 * J(n-2) translates to: x^2 = x^1 + 2 * x^0 -> x^2 - x - 2 = 0 (which has -1 and 2 as roots), and from here (using Binet's (or de Moivre's) formula):
J(n) = (2^n - (-1)^n) / 3 (the denominator value is: 2 - (-1))
Let the computer do some calculations for us (and check that the previous implementation's results match this one's):
>>> def j_simpl(n):
...     return (2 ** n - (-1) ** n) / 3
...
>>>
>>> print(all(j(i) == j_simpl(i) for i in range(20)))
True
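Note that j_simpl uses /, so it returns floats, which would eventually lose precision for very large n. A hedged variant with integer division gives exact results (2^n - (-1)^n is always divisible by 3):
>>> def j_simpl_int(n):
...     return (2 ** n - (-1) ** n) // 3
...
>>> print(all(j(i) == j_simpl_int(i) for i in range(25)))
True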
Combinations without repetitions look like this, when the number of elements to choose from (n) is 5 and elements chosen (r) is 3:
0 1 2
0 1 3
0 1 4
0 2 3
0 2 4
0 3 4
1 2 3
1 2 4
1 3 4
2 3 4
As n and r grow, the number of combinations gets large pretty quickly. For (n, r) = (200, 4) the number of combinations is 64684950.
It is easy to iterate the list with r nested for-loops, where the initial iterating value of each for loop is greater than the current iterating value of the for loop in which it is nested, as in this .NET Fiddle example:
https://dotnetfiddle.net/wHWK5o
What I would like is a function that calculates only one combination based on its index. Something like this:
tuple combination(i, n, r) {
    return [combination with index i, when the number of elements to choose from is n and elements chosen is r]
}
Does anyone know if this is doable?
You would first need to impose some sort of ordering on the set of all combinations available for a given n and r, such that a linear index makes sense. I suggest we agree to keep our combinations in increasing order (or, at least, the indices of the individual elements), as in your example. How then can we go from a linear index to a combination?
Let us first build some intuition for the problem. Suppose we have n = 5 (e.g. the set {0, 1, 2, 3, 4}) and r = 3. How many unique combinations are there in this case? The answer is of course 5-choose-3, which evaluates to 10. Since we will sort our combinations in increasing order, consider for a minute how many combinations remain once we have exhausted all those starting with 0. This must be 4-choose-3, or 4 in total. In such a case, if we are looking for the combination at index 7 initially, this implies we must skip the 10 - 4 = 6 combinations that start with 0 and search for the combination at index 7 - 6 = 1 in the set {1, 2, 3, 4}. This process continues until the index we are looking for is smaller than this offset.
Once this process concludes, we know the first digit. Then we only need to determine the remaining r - 1 digits! The algorithm thus takes shape as follows (in Python, but this should not be too difficult to translate),
from math import factorial

def choose(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

def combination_at_idx(idx, elems, r):
    if len(elems) == r:
        # We are looking for r elements in a list of size r - thus, we need
        # each element.
        return elems
    if len(elems) == 0 or len(elems) < r:
        return []
    combinations = choose(len(elems), r)  # total number of combinations
    remains = choose(len(elems) - 1, r)   # combinations after selection
    offset = combinations - remains
    if idx >= offset:  # combination does not start with first element
        return combination_at_idx(idx - offset, elems[1:], r)
    # We now know the first element of the combination, but *not* yet the next
    # r - 1 elements. These need to be computed as well, again recursively.
    return [elems[0]] + combination_at_idx(idx, elems[1:], r - 1)
Test-driving this with your initial input,
N = 5
R = 3

for idx in range(choose(N, R)):
    print(idx, combination_at_idx(idx, list(range(N)), R))
I find,
0 [0, 1, 2]
1 [0, 1, 3]
2 [0, 1, 4]
3 [0, 2, 3]
4 [0, 2, 4]
5 [0, 3, 4]
6 [1, 2, 3]
7 [1, 2, 4]
8 [1, 3, 4]
9 [2, 3, 4]
Where the linear index is zero-based.
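As a quick sanity check (my addition, not part of the original answer), the result can be compared against itertools.combinations, which enumerates combinations in the same lexicographic order:
from itertools import combinations

assert all(tuple(combination_at_idx(idx, list(range(N)), R)) == combo
           for idx, combo in enumerate(combinations(range(N), R)))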
Start with the first element of the result. The value of that element depends on the number of combinations you can get with smaller first elements. For each such smaller first element k, the number of combinations with first element k is n − k − 1 choose r − 1, with potentially some off-by-one corrections. So you would sum over a bunch of binomial coefficients. Wolfram Alpha can help you compute such a sum, but the result still has a binomial coefficient in it. Solving for the largest k such that the sum doesn't exceed your given index i is a computation you can't do with something as simple as e.g. a square root. You need a loop to test possible values, e.g. like this:
def first_naive(i, n, r):
    """Find first element and index of first combination with that first element.

    Returns a tuple of value and index.
    Example: first_naive(8, 5, 3) returns (1, 6) because the combination with
    index 8 is [1, 3, 4] so it starts with 1, and because the first combination
    that starts with 1 is [1, 2, 3] which has index 6.
    """
    s1 = 0
    for k in range(n):
        s2 = s1 + choose(n - k - 1, r - 1)
        if i < s2:
            return k, s1
        s1 = s2
You can reduce the O(n) loop iterations to O(log n) steps using bisection, which is particularly relevant for large n. In that case I find it easier to think about numbering items from the end of your list. In the case of n = 5 and r = 3 you get choose(2, 2)=1 combinations starting with 2, choose(3,2)=3 combinations starting with 1 and choose(4,2)=6 combinations starting with 0. So in the general choose(n,r) binomial coefficient you increase the n with each step, and keep the r. Taking into account that sum(choose(k,r) for k in range(r,n+1)) can be simplified to choose(n+1,r+1), you can eventually come up with bisection conditions like the following:
def first_bisect(i, n, r):
    nCr = choose(n, r)
    k1 = r - 1
    s1 = nCr
    k2 = n
    s2 = 0
    while k2 - k1 > 1:
        k3 = (k1 + k2) // 2
        s3 = nCr - choose(k3, r)
        if s3 <= i:
            k2, s2 = k3, s3
        else:
            k1, s1 = k3, s3
    return n - k2, s2
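To gain some confidence in the bisection version, here is a small brute-force cross-check against first_naive (a sketch I'm adding for illustration; it is not from the original answer):
# Both functions should agree for every valid index i when choosing
# r elements out of n.
for n in range(1, 8):
    for r in range(1, n + 1):
        for i in range(choose(n, r)):
            assert first_naive(i, n, r) == first_bisect(i, n, r)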
Once you know the first element to be k, you also know the index of the first combination with that same first element (also returned from my function above). You can use the difference between that first index and your actual index as input to a recursive call. The recursive call would be for r − 1 elements chosen from n − k − 1. And you'd add k + 1 to each element from the recursive call, since the top level returns values starting at 0 while the next element has to be greater than k in order to avoid duplication.
def combination(i, n, r):
    """Compute combination with a given index.

    Equivalent to list(itertools.combinations(range(n), r))[i].
    Each combination is represented as a tuple of ascending elements, and
    combinations are ordered lexicographically.

    Args:
        i: zero-based index of the combination
        n: number of possible values, will be taken from range(n)
        r: number of elements in result list
    """
    if r == 0:
        return []
    k, ik = first_bisect(i, n, r)
    return tuple([k] + [j + k + 1 for j in combination(i - ik, n - k - 1, r - 1)])
I've got a complete working example, including an implementation of choose, more detailed doc strings and tests for some basic assumptions.
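For completeness, here is one way such a test could look (a sketch I'm adding here, assuming itertools is available; it simply exercises the equivalence stated in the docstring):
import itertools

# Check every index for a range of small n and r values.
for n in range(1, 8):
    for r in range(n + 1):
        for i, expected in enumerate(itertools.combinations(range(n), r)):
            assert tuple(combination(i, n, r)) == expected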
I am trying to fit 3 numbers inside 1 number. But the numbers will only be between 0 and 11, so their base is 12. For example, I have the numbers 7, 5, 2. I came up with something like this:
Three numbers into One number :
7x12=84
84x5=420
420+2=422
Now getting back Three numbers from One number :
422 MOD 12 = 2 (the third number)
422 - 2 = 420
420 / 12 = 35
And I understood that 35 is the product of the first and second numbers (i.e. 7 and 5).
And now I can't get the 7 and 5 back. Does anyone know how I could?
(I started typing this answer before the other one got posted, but this one is more specific to Arduino than the other one, so I'm leaving it)
The code
You can use bit shifting to get multiple small numbers into one big number, in code it would look like this:
int a, b, c;
//putting them together
int big = (a << 8) + (b << 4) + c;
//separating them again
a = (big >> 8) & 15;
b = (big >> 4) & 15;
c = big & 15;
This code only works when a, b and c are all in the range [0, 15], which appears to be enough for your case.
How it works
The >> and << operators are the bit-shift operators. In short, a << n shifts every bit in a by n places to the left, which is equivalent to multiplying by 2^n. Similarly, a >> n shifts to the right. An example:
11 << 3 == 88 //0000 1011 -> 0101 1000
The & operator performs a bitwise and on the two operands:
6 & 5 == 4 // 0110
// & 0101
//-> 0100
These two operators are combined to "pack" and "unpack" the three numbers. For the packing, every small number is shifted left by a different amount and they are all added together. This is how the bits of big now look (there are 16 of them because ints on Arduino are 16 bits wide):
0000aaaabbbbcccc
When unpacking, the bits are shifted to the right again and bitwise ANDed with 15 to filter out any excess bits. This is what that last operation looks like to get b out again:
00000000aaaabbbb //big shifted 4 bits to the right
& 0000000000001111 //anded together with 15
-> 000000000000bbbb //gives the original number b
Everything works exactly like in base 10 (or 16). Here is your corrected example.
Three numbers into One number :
7 * 12^2 = 1008
5 * 12^1 = 60
2 * 12^0 = 2
1008 + 60 + 2 = 1070
Now getting back Three numbers from One number :
1070 MOD 12 = 2 (the third number)
1070 / 12 = 89 (integer division) => 89 MOD 12 = 5 (the second number)
89 / 12 = 7 (the first number)
Note also that the maximum value will be 11*12*12+11*12+11=1727.
If this is really programming related, you will be using 16 bits instead of 3*8 bits, so saving one byte. An easier method, not using base 12, would be to fit each number into half a byte (better code efficiency and same transmission length):
7<<(4+4) + 5<<4 + 2 = 1874
1874 & 0x000F = 2
1874>>4 & 0x000F = 5
1874>>8 & 0x0F = 7
This is because MOD 12 and division by 12 are much less efficient than working with powers of 2.
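If you want to double-check those shift-and-mask values without flashing an Arduino, a short Python session (my addition, purely for verification) reproduces them:
>>> packed = (7 << 8) + (5 << 4) + 2
>>> packed
1874
>>> packed & 0x0F, (packed >> 4) & 0x0F, (packed >> 8) & 0x0F
(2, 5, 7)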
You can use the principle of positional notation to convert between the two representations in any base.
Treat your numbers (n0, n1, ..., nm) as the digits of a big number in a base B of your choosing, so the new number is
N = n0*B^0 + n1*B^1 + ... + nm*B^m
Reversing the process is also simple: while your number is greater than 0, take it modulo the base to get the first digit, then subtract that digit and divide by the base; repeat until you are finished, saving each digit along the way.
digit_list = []
while N > 0:
    d = N mod B
    N = (N - d) / B
    digit_list.append( d )
Then, if N is N = n0*B^0 + n1*B^1 + ... + nm*B^m, doing N mod B gives you n0; subtracting it leaves you with n1*B^1 + ... + nm*B^m, and dividing by B reduces the exponent of every B, giving the new N = n1*B^0 + ... + nm*B^(m-1). Repeating that gives you all the digits you started with.
Here is a working example in Python:
def compact_num( num_list, base=12 ):
    return sum( n*pow(base,i) for i,n in enumerate(num_list) )

def decompact_num( n, base=12 ):
    if n == 0:
        return [0]
    result = []
    while n:
        n,d = divmod(n,base)
        result.append(d)
    return result
Example:
>>> compact_num([2,5,7])
1070
>>> decompact_num(1070)
[2, 5, 7]
>>> compact_num([10,2],16)
42
>>> decompact_num(42,16)
[10, 2]
>>>
I need to write my own recursive function in ML that somehow uses ord to convert a string of numbers to integer type. I can use helper functions, but apparently I should be able to do this without using one (according to my professor).
I can assume that the input is valid, and is a positive integer (in string type of course).
So, the call str2int ("1234") should output 1234: int
I assume I will need to use explode and implode at some point since ord operates on characters, and my input is a string. Any direction would be greatly appreciated.
Given that you asked, I guess I can ruin all the fun for you. This will solve your problem, but ironically, it won't help you.
Well, the ordinal number for the character #"0" is 48. So, this means that if you subtract 48 from the ordinal of any digit character, you get its decimal value. For instance
ord(#"9") - 48
Yields 9.
So, a function that takes a given character representing a number from 0-9 and turns it into the corresponding decimal is:
fun charToInt(c) = ord(c) - 48
Supposing you had a string of numbers like "2014", you can first explode the string into a list of characters and then map every character to its corresponding decimal digit.
For instance
val num = "2014"
val digits = map charToInt (explode num)
The explode function is a helper function that takes a string and turn it into a list of characters.
And now digits would be a list of integers representing the decimal numbers [2,0,1,4].
Then, all you need is to apply powers of 10 to obtain the final integer.
2 * 10 ^ 3 = 2000
0 * 10 ^ 2 = 0
1 * 10 ^ 1 = 10
4 * 10 ^ 0 = 4
The result would be 2000 + 0 + 10 + 4 = 2014
You could define a helper function charsToInt that processes the digits in the string from left to right.
At each step it converts the leftmost digit c into a number and adds it to ten times n (which is the intermediate result of all previously parsed digits) ...
fun charsToInt ([], n) = n
  | charsToInt (c :: cs, n) = charsToInt (cs, 10*n + ord c - 48)
val n = charsToInt (explode "1024", 0)
Gives you: val n = 1024 : int
As you see the trick is to pass the intermediary result down to the next step at each recursive call. This is a very common technique when dealing with these kind of problems.
Here's what I came up with:
fun pow10 n =
    if n = 0 then 1 else 10*pow10(n-1);

fun str2help (L, n) =
    if null L then 0
    else (ord(hd L)-48) * pow10(n) + str2help(tl L, n-1);

fun str2int (string) =
    str2help(explode string, size string - 1);

str2int ("1234");
This gives me the correct result, though it is clearly not the easiest way to get there.
I am trying to make a function to round a floating point number to a defined length of digits. What I have come up with so far is this:
import Numeric;
digs :: Integral x => x -> [x]
digs 0 = []
digs x = digs (x `div` 10) ++ [x `mod` 10]

roundTo x t = let d = length $ digs $ round x
                  roundToMachine x t = (fromInteger $ round $ x * 10^^t) * 10^^(-t)
              in roundToMachine x (t - d)
I am using the digs function to determine the number of digits before the decimal point, so I can adjust the input value (i.e. move everything past the decimal point, so 1.234 becomes 0.1234 * 10^1).
The roundTo function seems to work for most input, however for some inputs I get strange results, e.g. roundTo 1.0014 4 produces 1.0010000000000001 instead of 1.001.
The problem in this example is caused by calculating 1001 * 1.0e-3 (which returns 1.0010000000000001)
Is this simply a problem in the number representation of Haskell I have to live with or is there a better way to round a floating point number to a specific length of digits?
I realise this question was posted almost 2 years back, but I thought I'd have a go at an answer that didn't require a string conversion.
-- x : number you want rounded, n : number of decimal places you want...
truncate' :: Double -> Int -> Double
truncate' x n = (fromIntegral (floor (x * t))) / t
    where t = 10^n
-- How to answer your problem...
λ truncate' 1.0014 3
1.001
-- 2 digits of a recurring decimal please...
λ truncate' (1/3) 2
0.33
-- How about 6 digits of pi?
λ truncate' pi 6
3.141592
I've not tested it thoroughly, so if you find numbers this doesn't work for let me know!
This isn't a Haskell problem as much as a floating point problem. Since each floating point number is implemented in a finite number of bits, there exist numbers that can't be represented completely accurately. You can also see this by calculating 0.1 + 0.2, which awkwardly returns 0.30000000000000004 instead of 0.3. This has to do with how floating point numbers are implemented for your language and hardware architecture.
The solution is to continue using your roundTo function for doing computation (it's as accurate as you'll get without special libraries), but if you want to print it to the screen then you should use string formatting such as the Text.Printf.printf function. You can specify the number of digits to round to when converting to a string with something like
import Text.Printf
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr n f = printf ("%0." ++ show n ++ "f") f
But as I mentioned, this will return a string rather than a number.
EDIT:
A better way might be
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr n f = printf (printf "%%0.%df" n) f
but I haven't benchmarked to see which is actually faster. Both will work exactly the same though.
EDIT 2:
As #augustss has pointed out, you can do it even easier with just
roundToStr :: (PrintfArg a, Floating a) => Int -> a -> String
roundToStr = printf "%0.*f"
which uses a formatting rule that I was previously unaware of.
I also think that avoiding string conversion is the way to go; however, I would modify the previous post (from schanq) to use round instead of floor:
round' :: Double -> Integer -> Double
round' num sg = (fromIntegral . round $ num * f) / f
    where f = 10^sg
> round' 3.99999 4
4.0
> round' 4.00001 4
4.0