I was solving a math problem: find the sum of the digits of 2^1000.
In Java, the solution looks like this:
import java.math.BigInteger;

String temp = BigInteger.ONE.shiftLeft(1000).toString();
int sum = 0;
for (int i = 0; i < temp.length(); i++)
    sum += temp.charAt(i) - '0';
Then I came up with a solution in Haskell:
digitSum :: (Integral a) => a -> a
digitSum 0 = 0
digitSum n = (mod n 10) + (digitSum (div n 10))
The whole process was pretty smooth, but one point seems interesting. We know a fixed-size integer type cannot hold 2^1000; it's too big. In Java, the obvious move is to use BigInteger and convert the big number to a string. In Haskell, however, there were no compile errors, which means 2^1000 can be passed in directly. So here is the thing: does Haskell transform the number into a string internally? I wanted to find out the type and let the compiler determine it, so I typed the following lines in GHCi:
Prelude> let i = 2 ^ 1000
Prelude> i
107150860718626732094842504906000181056140481170553360744375038837035105112493612249319
837881569585812759467291755314682518714528569231404359845775746985748039345677748242309
854210746050623711418779541821530464749835819412673987675591655439460770629145711964776
86542167660429831652624386837205668069376
Prelude> :t i
i :: Integer
Here, I was totally confused: apparently the value of i is oversized, but its type is still Integer. How can this be explained, and what is the upper bound (or limit) of Haskell's Integer?
In Haskell, Integer is a (theoretically) unbounded integer type. The fixed-width types are Int, Int8, Int16, Int32, and Int64, plus the corresponding unsigned types Word, Word8, etc.
In practice, even Integer is of course bounded, by the available memory for instance, or by the internal representation.
By default, GHC uses the GMP package to represent Integer, and that means the bound is 2^(2^37) or so, since GMP uses a 32-bit integer to store the number of limbs.
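For comparison, Python's int is arbitrary-precision as well, so the same digit sum can be checked there directly; a quick sketch, an iterative version of the Haskell digitSum above:

```python
# Digit sum of 2**1000: peel off the last digit with mod 10,
# then continue with the rest, just like the recursive Haskell version.
def digit_sum(n):
    total = 0
    while n > 0:
        total += n % 10
        n //= 10
    return total

print(digit_sum(2 ** 1000))  # 1366
```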
Related
I am implementing a recursive program to calculate certain values in the Schröder sequence, and I'm having two problems:
I need to calculate the number of calls in the program;
Past a certain number, the program will generate incorrect values (I think it's because the number is too big);
Here is the code:
let rec schroder n =
  if n <= 0 then 1
  else if n = 1 then 2
  else 3 * schroder (n-1) + sum n 1
and sum n k =
  if (k > n-2) then 0
  else schroder k * schroder (n-k-1) + sum n (k+1)
When I try to return tuples (for 1.), the function sum stops working because it returns an int where a value of type int * int is expected;
Regarding 2., when I do schroder 15 it returns:
-357364258
when it should be returning
3937603038.
EDIT:
Firstly, thanks for the tips. Secondly, after some hours of deep struggle, I managed to create the function. Now my problem is that I'm struggling to install Zarith. I think I got it installed, but ..
In the terminal, when I do ocamlc -I +zarith test.ml, I get an error saying Required module 'Z' is unavailable.
In utop, after doing #load "zarith.cma";; and #install_printer Z.pp_print;;, I can compile and run the function, and it works. However, I'm trying to add a Scanf.scanf call so that I can print different values of the sequence. Whenever I try to run the scanf, I don't get a chance to type any number; I immediately get a message saying that '\\n' is not a decimal digit.
That said, I will most probably also have problems printing the value, because I don't think I'm going to be able to print such a big number with %d. The let r1,c1 = in the following code is an example of what I'm talking about.
Here's what I'm using:
(function)
..
let v1, v2 = Scanf.scanf "%d %d" (fun v1 v2 -> v1, v2);;
let r1, c1 = schroder_a (Big_int_Z.of_int v1) in
Printf.printf "%d %d\n" (Big_int_Z.int_of_big_int r1) (Big_int_Z.int_of_big_int c1);
let r2, c2 = schroder_a v2 in
Printf.printf "%d %d\n" r2 c2;
P.S. 'r1' and 'r2' stand for result, and 'c1' and 'c2' stand for the number of calls of schroder's recursive function.
P.P.S. The prints are written differently because I was just testing, but I can't even get past the scanf, so..
This is the third time I've seen this problem here on StackOverflow, so I assume it's some kind of school assignment. As such, I'm just going to make some comments.
OCaml doesn't have a function named sum built in. If it's a function you've written yourself, the obvious suggestion would be to rewrite it so that it knows how to add up the tuples that you want to return. That would be one approach, at any rate.
It's true, ints in OCaml are subject to overflow. If you want to calculate larger values you need to use a "big number" package. The one to use with a modern OCaml is Zarith (I have linked to the description on ocaml.org).
However, none of the other people solving this assignment have mentioned overflow as a problem. It could be that you're OK if you just solve for representable OCaml int values.
3937603038 is larger than what a 32-bit int can hold, and will therefore overflow. You can fix this by using int64 instead (until you overflow that too). You'll have to use int64 literals, using the L suffix, and operations from the Int64 module. Here's your code converted to compute the value as an int64:
let rec schroder n =
  if n <= 0 then 1L
  else if n = 1 then 2L
  else Int64.add (Int64.mul 3L (schroder (n-1))) (sum n 1)
and sum n k =
  if (k > n-2) then 0L
  else Int64.add (Int64.mul (schroder k) (schroder (n-k-1))) (sum n (k+1))
I need to calculate the number of calls in the program;
...
the function 'sum' stops working because it's trying to return 'int' when it has type 'int * int'
Make sure that you have updated all the recursive calls to schroder. Remember that it now returns a pair, not a number, so you can't, for example, just add it; you need to unpack the pair first. E.g.,
...
else
  let r, i = schroder (n-1) (i+1) in
  3 * r + sum n 1
and ...
and so on.
Past a certain number, the program will generate incorrect values (I think it's because the number is too big);
You need to use arbitrary-precision numbers, e.g., Zarith.
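As a sanity check on the expected values, here is the same recurrence sketched in Python, whose ints are arbitrary-precision (so no overflow); the call counter is my own addition, addressing problem 1.:

```python
calls = 0  # problem 1: count every entry into schroder

def schroder(n):
    global calls
    calls += 1
    if n <= 0:
        return 1
    if n == 1:
        return 2
    # 3*S(n-1) + sum over k = 1 .. n-2 of S(k)*S(n-k-1)
    return 3 * schroder(n - 1) + sum(
        schroder(k) * schroder(n - k - 1) for k in range(1, n - 1))

print(schroder(15))  # 3937603038 (the value that overflowed to -357364258)
print(calls)         # number of recursive calls made
```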
I am given three unsigned 128-bit numbers: a, b, and c, where a <= c and b <= c. I want to be able to calculate (a * b) / c with the highest possible precision.
If a, b, and c were 64 bit integers, I would first calculate a * b in a 128 bit number and then divide it by c to obtain an accurate result. However, I am dealing with 128 bit numbers and I don't have a native 256 bit type to perform the multiplication a * b.
Is there is a way to compute (a * b) / c with high precision while staying in the world of 128 bits?
My (failed) attempts:
Calculating a / (c / b). This looks somewhat unsymmetrical, and as expected I didn't get very accurate results.
Calculating: ((((a+b)/c)^2 - ((a-b)/c)^2)*c)/4 = ((a*b)/c^2) * c = a*b/c This also gave me pretty inaccurate results.
The question was originally tagged as rust, so I've assumed that in my answer, even though the tag has now been removed.
As others have said in the comments, you'll always have to step up a size or else you run the risk of an overflow on the multiplication, unless you have some guarantees about the bounds on the sizes of those numbers. There is no larger primitive type than u128 in Rust.
The usual solution is to switch to structures that support arbitrary-precision arithmetic, often referred to as "bignums" or "bigints". However, they are significantly less performant than using native integer types.
In Rust, you can use the num-bigint crate:
extern crate num_bigint;
use num_bigint::BigUint;

fn main() {
    let a: u128 = 234234234234991231;
    let b: u128 = 989087987;
    let c: u128 = 123;

    let big_a: BigUint = a.into();
    let big_b: BigUint = b.into();
    let big_c: BigUint = c.into();

    let answer = big_a * big_b / big_c;
    println!("answer: {}", answer);
    // answer: 1883563148178650094572699
}
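If pulling in a bignum type isn't an option, the multiplication and division can also be done by hand with 64-bit limbs. Below is a sketch in Python used as executable pseudocode (all names are mine, not from any library); the point of splitting is that in Rust every partial product below fits in a u128:

```python
MASK64 = (1 << 64) - 1
MASK128 = (1 << 128) - 1

def mul_128_to_256(a, b):
    """Schoolbook multiply of two 128-bit values into (high, low) halves."""
    a_lo, a_hi = a & MASK64, a >> 64
    b_lo, b_hi = b & MASK64, b >> 64
    lo = a_lo * b_lo        # each of these four products fits in 128 bits
    mid1 = a_lo * b_hi
    mid2 = a_hi * b_lo
    hi = a_hi * b_hi
    t = lo + ((mid1 & MASK64) << 64)
    carry = t >> 128        # carry out of the low 128 bits
    t = (t & MASK128) + ((mid2 & MASK64) << 64)
    carry += t >> 128
    return hi + (mid1 >> 64) + (mid2 >> 64) + carry, t & MASK128

def div_256_by_128(high, low, c):
    """Bitwise shift-subtract division of high:low by c."""
    rem, quo = 0, 0
    for i in range(255, -1, -1):
        bit = (high >> (i - 128)) & 1 if i >= 128 else (low >> i) & 1
        rem = (rem << 1) | bit   # rem stays below 2*c before the subtract
        quo <<= 1
        if rem >= c:
            rem -= c
            quo |= 1
    return quo, rem

a, b, c = (1 << 127) + 12345, (1 << 126) + 678, (1 << 127) + 99999
hi, lo = mul_128_to_256(a, b)
q, r = div_256_by_128(hi, lo, c)
assert q == (a * b) // c and r == (a * b) % c
```

The shift-subtract loop runs 256 iterations, so real implementations divide per limb with hardware division instead; the precondition a <= c and b <= c is what guarantees the quotient fits back into 128 bits.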
Is there an elegant way to do linear interpolation using integers? (This is to average ADC measurements on a microcontroller; the ADC measurements are 12-bit, and the microcontroller works fine with 32-bit integers.) The coefficient f is in the [0, 1] range.
float lerp(float a, float b, float f)
{
    return a + f * (b - a);
}
Well, since you have so many extra integer bits to spare, a solution using ints would be:
Use an integer for your parameter F, with F from 0 to 1024 instead of a float from 0 to 1. Then you can just do:
(A*(1024-F) + B * F) >> 10
without risk of overflow.
In fact, if you need more resolution in your parameter, you can pick the maximum value of F as any power of 2 up to 2**19 (if you are using unsigned ints; 2**18 otherwise).
This doesn't do a good job of rounding (it truncates instead) but it only uses integer operations, and avoids division by using the shift operator. It still requires integer multiplication, which a number of MCUs don't have hardware for, but hopefully it won't be too bad.
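The truncation can be fixed cheaply by adding half of the divisor before shifting. A quick sketch in Python as executable pseudocode (the same expression works with 32-bit ints in C):

```python
def lerp_fixed(a, b, f):
    # f is the integer parameter in [0, 1024] standing in for [0.0, 1.0];
    # adding 512 (half of 1024) before the shift rounds to nearest
    # instead of truncating.
    return (a * (1024 - f) + b * f + 512) >> 10

print(lerp_fixed(0, 4095, 512))    # 2048 (midpoint of the 12-bit range)
print(lerp_fixed(100, 200, 0))     # 100
print(lerp_fixed(100, 200, 1024))  # 200
```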
At the moment I'm storing currency amounts as Float64; the majority of amounts are in the billions to hundreds of millions, in differing currency units. In other use cases I also need currency values kept to ten-thousandths of a unit of currency, e.g. 0.7564.
However, given the rounding errors associated with double-precision floating-point numbers, should I be converting everything into fixed-point integers for storing the currency units?
Secondly, how do you format the string output of a currency amount, and allow for the relevant currency symbol to be displayed?
Thirdly, are there any packages that provide a "currency" data type that would be safe to use?
Here's a really basic starting point for storing currency and displaying it:
immutable Currency
    symbol::Symbol
    amount::Int
end

function Base.show(io::IO, c::Currency)
    print(io, c.symbol, c.amount/100)
end

Currency(:£, 1275) #=> £12.75
This stores the currency as an exact value in pennies, so no rounding error, but displays it in the usual way. You could of course easily parameterise on the number of decimal places to store. I can't answer as to whether you should use fixed point numbers like this, but they'll certainly be more accurate for addition, subtraction and multiplication.
As for prior art, a quick google for "currency.jl" turned up this – it looks way out of date but might be useful as a reference.
You should definitely not be using floating point numbers for currencies — you should define your own fixed-point numeric type, and use that. The Julia manual has a good tutorial about defining a new numeric type in the chapter about conversions and promotions.
Let's assume that you will only ever need two digits after the decimal point — never mind pounds, shillings and pence. Your new type will look something like
immutable Monetary <: Number
    hundredths :: Int64
end

Monetary(ones :: Int64, hundredths :: Int64) = Monetary(hundredths + 100 * ones)
Obviously, you'll want to be able to display monetary amounts:
Base.show(io :: IO, x :: Monetary) =
    @printf(io, "%lld.%02lld", fld(x.hundredths, 100), mod(x.hundredths, 100))
you'll also want to be able to add and subtract them:
+(x :: Monetary, y :: Monetary) = Monetary(x.hundredths + y.hundredths)
-(x :: Monetary, y :: Monetary) = Monetary(x.hundredths - y.hundredths)
On the other hand, you'll never want to multiply them — but multiplying a monetary sum by an integer is fine:
*(x :: Bool, y :: Monetary) = ifelse(x, y, Monetary(0))
*(x :: Monetary, y :: Bool) = ifelse(y, x, Monetary(0))
*(x :: Integer, y :: Monetary) = Monetary(x * y.hundredths)
*(x :: Monetary, y :: Integer) = Monetary(x.hundredths * y)
Finally, if you mix integers with monetary sums in an expression, it's fine to convert everything to monetary values:
Base.convert(::Type{Monetary}, x :: Int64) = Monetary(x, 0)
Base.promote_rule(::Type{Monetary}, ::Type{Int64}) = Monetary
This is good enough to perform useful computations:
julia> Scrooge.Monetary(30,5) * 3 + 12
102.15
but will reliably catch incorrect operations:
julia> Scrooge.Monetary(30,5) * 3.5
ERROR: no promotion exists for Monetary and Float64
in * at ./promotion.jl:159
Currencies.jl provides an interface for working with currencies:
using Currencies
@usingcurrencies USD
format(1.23USD + 4.56USD, styles=[:us, :brief]) # $5.79
It supports type-safe arithmetic, currency conversion, and flexible pretty-printing.
Disclaimer: I am the maintainer of Currencies.jl.
Trying to figure out this pseudo code. The following is assumed:
I can only use unsigned and signed integers (or longs).
Division returns a whole number, discarding any remainder.
MOD returns the remainder as a whole number.
Fractions and decimals are not handled.
INT I = 41828;
INT C = 15;
INT D = 0;
D = (I / 65535) * C;
How would you handle a fraction (or decimal value) in this situation? Is there a way to use negative value to represent the remainder?
In this example I/65535 should be 0.638, however, with the limitations, I get 0 with a MOD of 638. How can I then multiply by C to get the correct answer?
Hope that makes sense.
MOD here would actually return 41828, not 638, since 41828 is already smaller than 65535.
If you were to switch your order of operations on that last line, you would get the integer answer you're looking for (9, if my calculations are correct)
D = (I * C) / 65535
/* D == 9 */
Is that the answer you're looking for?
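The effect of the ordering is easy to see with the question's actual numbers; a quick check in Python, where // is truncating integer division:

```python
I, C = 41828, 15
print((I // 65535) * C)  # 0 -- dividing first truncates 0.638... to 0
print((I * C) // 65535)  # 9 -- multiplying first keeps the precision
```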
Well, one way to handle decimals is this replacement division function. There are numerous obvious downsides to this technique.
ALT DIV (dividend, divisor) returns (decimal, point)
    for point = 0 to 99
        if dividend mod divisor = 0 return dividend / divisor, point
        dividend = dividend * 10
    return dividend / divisor, 100
Assuming these are the values you're always using for this computation, then I would do something like:
D = I / (65535 / C);
or
D = I / 4369;
Since C is a factor of 65535, this will help reduce the possibility of overrunning the available range of integers (i.e. if you've only got 16-bit unsigned ints).
In the more general case, if you think there's a risk that the multiplication of I and C will produce a value outside the allowed range of the integer type you're using (even if the final result will be inside that range), you can factor out the GCD of the numerator and denominator as in:
INT I = 41828;
INT C = 15;
INT DEN = 65535;
INT GCDI = GCD(I, DEN);
DEN = DEN / GCDI;
I = I / GCDI;
INT GCDC = GCD(C, DEN);
DEN = DEN / GCDC;
C = C / GCDC;
INT D = (I * C) / DEN;
Where DEN is your denominator (65535 in this case). This will still not give the correct answer in all cases: in particular, if I and C are both coprime to DEN and I*C > MAX_INT, the multiplication will still overflow.
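The reduction above can be sketched in Python, with math.gcd standing in for the GCD routine:

```python
from math import gcd

def scaled_div(i, c, den):
    # Cancel common factors with the denominator before multiplying,
    # shrinking the intermediate product I*C.
    g = gcd(i, den)
    i, den = i // g, den // g
    g = gcd(c, den)
    c, den = c // g, den // g
    return (i * c) // den

print(scaled_div(41828, 15, 65535))  # 9 (65535/15 = 4369; 41828/4369 = 9)
```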
As to the larger question you raise: division of integer values will always lose the decimal component (it is equivalent to the floor function). The only way to preserve the information contained in what we think of as the "decimal" part is through the remainder, which can be derived from the modulus. I highly encourage you not to mix the meanings of these different number systems. Integers are just that: integers. If you need them to be floating-point numbers, you should really be using floats, not ints. If all you're interested in is displaying the decimal part to the user (i.e. you're not really using it for further computation), then you could write a routine to convert the remainder into a character string representing the fractional part.