In Ada (GNAT), using pragma Overflow_Mode / infinite precision intermediate calculations

I'm trying to convert a (small) numerator and denominator to a numerator in terms of a large constant denominator, chosen to be divisible by most small numbers and to be just under 2**63. Since that's likely to overflow, I'll use pragma Overflow_Mode (Eliminated) (cf. the GNAT 4.8 manual http://gcc.gnu.org/onlinedocs/gcc-4.8.0/gnat_ugn_unw/Specifying-the-Desired-Mode.html#Specifying-the-Desired-Mode).
with Ada.Command_Line;
with Ada.Text_IO;
procedure Example is
   pragma Overflow_Mode (Eliminated);
   Large_Composite : constant := (2 ** 7) * (3 ** 5) * (5 ** 2) * 7
                     * 11 * 13 * 17 * 19 * 23 * 29 * 31 * 37 * 41;
   type Word_Unsigned is mod 2**64;
   N, D : Integer;
begin
   N := Integer'Value (Ada.Command_Line.Argument (1));
   D := Integer'Value (Ada.Command_Line.Argument (2));
   Ada.Text_IO.Put (Word_Unsigned ((N * Large_Composite) / D)'Img);
end Example;
Unfortunately, when trying to compile the example code (and the real code it's a distillation of) with "~/bin/gcc-4.8.0/bin/gnatmake -gnat12 -gnata -Wall example.adb" (and with -gnato3, though that should be redundant to the pragma), the compiler says:
example.adb:12:46: value not in range of type "Standard.Integer"
example.adb:12:46: static expression fails Constraint_Check
gnatmake: "example.adb" compilation error
Hrumph. Am I not understanding what Overflow_Mode does? Is there some easy way to rearrange this so it works? (I can go to plan A, a more normal fraction class that may or may not be faster, or plan B, just using floats and accepting that 1/3 will get rounded, but I'd like this to work. Proper infinite-length integer support is overkill here.)

It's not a complete answer, but using Long_Long_Integer, which is large enough to hold Large_Composite, instead of Integer makes those compile-time errors go away, and pragma Overflow_Mode does its job and lets me use values like N = 99 and D = 100 and get the right answer. This computational model still seems somewhat inconsistent, but at least the code is working.
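A minimal sketch of that change, assuming the only edits to the question's code are the declarations of N and D and the matching 'Value calls (Long_Long_Integer is 64-bit in GNAT, so the static Large_Composite no longer has to be squeezed into Integer at compile time):

with Ada.Command_Line;
with Ada.Text_IO;
procedure Example is
   pragma Overflow_Mode (Eliminated);
   --  Just under 2**63, divisible by most small numbers.
   Large_Composite : constant := (2 ** 7) * (3 ** 5) * (5 ** 2) * 7
                     * 11 * 13 * 17 * 19 * 23 * 29 * 31 * 37 * 41;
   type Word_Unsigned is mod 2**64;
   N, D : Long_Long_Integer;  --  was Integer
begin
   N := Long_Long_Integer'Value (Ada.Command_Line.Argument (1));
   D := Long_Long_Integer'Value (Ada.Command_Line.Argument (2));
   --  The intermediate N * Large_Composite may exceed 2**63 - 1;
   --  Overflow_Mode (Eliminated) lets GNAT compute it exactly before
   --  the division brings the result back into range.
   Ada.Text_IO.Put (Word_Unsigned ((N * Large_Composite) / D)'Img);
end Example;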


'Big' fractions in Julia

I've run across a little problem when trying to solve a Project Euler problem in Julia. I've basically written a recursive function which produces fractions with increasingly large numerators and denominators. I don't want to post the code for obvious reasons, but the last few fractions are as follows:
1180872205318713601//835002744095575440
2850877693509864481//2015874949414289041
6882627592338442563//4866752642924153522
At that point I get an OverflowError(), presumably because the numerator and/or denominator now exceeds 19 digits. Is there a way of handling 'Big' fractions in Julia (i.e. those with BigInt-type numerators and denominators)?
Addendum:
OK, I've simplified the code and disguised it a bit. If anyone wants to wade through 650 Project Euler problems to try to work out which question it is, good luck to them – there will probably be around 200 better solutions!
function series(limit::Int64, i::Int64=1, n::Rational{Int64}=1//1)
    while i <= limit
        n = 1 + 1//(1 + 2n)
        println(n)
        return series(limit, i + 1, n)
    end
end
series(50)
If I run the above function with, say, 20 as the argument it runs fine. With 50 I get the OverflowError().
Julia defaults to using machine integers. For more information on this, see the FAQ: Why does Julia use native machine integer arithmetic?
In short: the most efficient integer operations on any modern CPU involve computing on a fixed number of bits. On your machine, that's 64 bits.
julia> 9223372036854775805 + 1
9223372036854775806
julia> 9223372036854775805 + 2
9223372036854775807
julia> 9223372036854775805 + 3
-9223372036854775808
Whoa! What just happened!? That's definitely wrong! It's more obvious if you look at how these numbers are represented in binary:
julia> bitstring(9223372036854775805 + 1)
"0111111111111111111111111111111111111111111111111111111111111110"
julia> bitstring(9223372036854775805 + 2)
"0111111111111111111111111111111111111111111111111111111111111111"
julia> bitstring(9223372036854775805 + 3)
"1000000000000000000000000000000000000000000000000000000000000000"
So you can see that those 63 bits "ran out of space" and rolled over — the 64th bit there is called the "sign bit" and signals a negative number.
There are two potential solutions when you see overflow like this: you can use "checked arithmetic" — like the rational code does — that ensures you don't silently have this problem:
julia> Base.Checked.checked_add(9223372036854775805, 3)
ERROR: OverflowError: 9223372036854775805 + 3 overflowed for type Int64
Or you can use a bigger integer type — like the unbounded BigInt:
julia> big(9223372036854775805) + 3
9223372036854775808
So an easy fix here is to remove your type annotations and dynamically choose your integer types based upon limit:
function series(limit, i=one(limit), n=one(limit)//one(limit))
    while i <= limit
        n = 1 + 1//(1 + 2n)
        println(n)
        return series(limit, i + 1, n)
    end
end
julia> series(big(50))
#…
1186364911176312505629042874//926285732032534439103474303
4225301286417693889465034354//3299015554385159450361560051
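To answer the title question directly: Rational works with BigInt components out of the box, so you can also construct "big" fractions explicitly. A quick sketch (the specific numbers are just for illustration):

julia> r = big(1)//3 + big(1)//7
10//21

julia> typeof(r)
Rational{BigInt}

Once one operand is a BigInt rational, the arithmetic stays in Rational{BigInt} and never overflows, at the cost of slower, heap-allocated integers.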

In what scenario will pre-increment be undefined behaviour in C?

In cases like:
int q = 3;
++q * ++q
This is undefined behaviour in C.
However, what about the following scenarios?
++q * q
++q * q++
Note: I am not asking for the definition of undefined behaviour.
My question is: what are the specific rules that help us determine whether an expression is undefined behaviour, especially when pre-increment is involved?
I found this piece of information online:
The behavior of modifying the value of an object through the evaluation of an expression more than once between sequence points is undefined. The behavior of using the value of an object in one expression while it is being modified in another expression without an intervening sequence point is also undefined.
Does it mean that if the same variable in a single expression is changed more than once, it will be undefined behaviour?
++q * q
Let's say q is 4. Strictly speaking, this is also undefined behaviour: the side effect of ++q on q is unsequenced relative to the read of q in the right operand, which is exactly the second rule you quoted. In practice, the compiler will evaluate the operands in some order, so you will typically see 5 * 5 = 25 (if it evaluates the left operand first) or 5 * 4 = 20 (if it evaluates the right operand first), but the standard guarantees neither.
++q * q++
Let's say q is 4. This is undefined behaviour as well: q is now modified twice with no sequencing between the two side effects. Typical outcomes are 5 * 5 = 25 or 6 * 4 = 24. Explanation:
Evaluate ++q; result is 5, q is now 5. Then evaluate q++; result is 5, q is now 6. Evaluate 5 * 5.
Evaluate q++; result is 4, q is now 5. Then evaluate ++q; result is 6, q is now 6. Evaluate 6 * 4.
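If you need a well-defined result, the fix is to split the expression so each modification of q is sequenced before the next use of q. A minimal sketch; the choice to take the incremented value on the left is an arbitrary illustration, not something the original expressions imply:

#include <stdio.h>

int main(void)
{
    int q = 3;

    /* ++q * q, ++q * q++ and ++q * ++q are all undefined behaviour:
     * the side effects on q are unsequenced relative to the other
     * reads/writes of q in the same expression.
     *
     * Well-defined rewrite: separate statements introduce sequence
     * points, so the order of the modification and the read is fixed.
     */
    int left = ++q;          /* q becomes 4, left is 4 */
    int right = q;           /* right is 4 */
    int r = left * right;    /* always 16 */

    printf("%d\n", r);
    return 0;
}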

F#: integer (%) integer - Is Calculated How?

So in my text book there is this example of a recursive function using f#
let rec gcd = function
    | (0,n) -> n
    | (m,n) -> gcd(n % m,m);;
with this function my text book gives the example by executing:
gcd(36,116);;
and since m = 36 and not 0, it of course goes to the second clause, like this:
gcd(116 % 36,36)
gcd(8,36)
gcd(36 % 8,8)
gcd(4,8)
gcd(8 % 4,4)
gcd(0,4)
and now it hits the first clause, stating that the whole thing is = 4.
What I don't get is this (%) percentage sign/operator, or whatever it is called in this context. For instance, I don't get how
116 % 36 = 8
I have turned this over in my head so many times and I can't figure out how this can turn into 8.
I know this is probably a silly question for those of you who know this, but I would very much appreciate your help all the same.
% is a questionable version of modulo, which is the remainder of an integer division.
For positive operands, you can think of % as the remainder of the division. See for example Wikipedia on Euclidean Division. Consider 9 % 4: 4 fits into 9 twice. But two times four is only eight. Thus, there is a remainder of one.
If there are negative operands, % effectively ignores the signs to calculate the remainder and then uses the sign of the dividend as the sign of the result. This corresponds to the remainder of an integer division that rounds towards zero, i.e. -2 / 3 = 0.
This is a mathematically unusual definition of division and remainder that has some bad properties. Normally, when calculating modulo n, adding or subtracting n to the input has no effect. Not so for this operator: 2 % 3 is not equal to (2 - 3) % 3.
I usually have the following defined to get useful remainders when there are negative operands:
/// Euclidean remainder, the proper modulo operation
let inline (%!) a b = (a % b + b) % b
So far, this operator was valid for all cases I have encountered where a modulo was needed, while the raw % repeatedly wasn't. For example:
When filling rows and columns from a single index, you could calculate rowNumber = index / nCols and colNumber = index % nCols. But if index and colNumber can be negative, this mapping becomes invalid, while Euclidean division and remainder remain valid.
If you want to normalize an angle to [0, 2π), angle %! (2. * System.Math.PI) does the job, while the "normal" % might give you a headache.
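A small sketch making those claims concrete; the values in the comments are what .NET's truncating remainder and the (%!) helper above give:

// Contrast % and the Euclidean helper (%!) on negative operands.
let inline (%!) a b = (a % b + b) % b   // as defined above

let r1 = -2 % 3        // -2: takes the sign of the dividend
let r2 = -2 %! 3       //  1: always in 0 .. b-1 for positive b
let r3 = (2 - 3) % 3   // -1, not equal to 2 % 3, which is 2
let r4 = (2 - 3) %! 3  //  2, equal to 2 %! 3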
Because
116 / 36 = 3
116 - (3*36) = 8
Basically, the % operator, known as the modulo operator, divides one number by another and gives back whatever remainder is left over. A typical first use for it is checking whether a number is even or odd, with something like this in F#:
let firstUsageModulo = 55 % 2 = 0 // false, because 55 % 2 leaves 1, not 0
In your case, 116 % 36 leaves 8 because 36 goes into 116 three times (108), and 8 is what remains.
Just to help you in future with similar problems: in IDEs such as Xamarin Studio and Visual Studio, if you hover the mouse cursor over an operator such as %, you should get a tooltip, thus:
[Screenshot: tooltip describing the % (modulo) operator]
Even if you don't understand the tooltip directly, it'll give you something to google.

lerp for integers or in fixed point math

Is there an elegant way to do linear interpolation using integers? (This is for averaging ADC measurements in a microcontroller; the ADC measurements are 12-bit, and the microcontroller works fine with 32-bit integers.) The coefficient f is in the [0, 1] range.
float lerp(float a, float b, float f)
{
    return a + f * (b - a);
}
Well, since you have so many extra integer bits to spare, a solution using ints would be:
Use an integer for your parameter F, with F from 0 to 1024 instead of a float from 0 to 1. Then you can just do:
(A*(1024-F) + B * F) >> 10
without risk of overflow.
In fact, if you need more resolution in your parameter, you can pick the maximum value of F as any power of 2 up to 2**19 (if you are using unsigned ints; 2**18 otherwise).
This doesn't do a good job of rounding (it truncates instead) but it only uses integer operations, and avoids division by using the shift operator. It still requires integer multiplication, which a number of MCUs don't have hardware for, but hopefully it won't be too bad.
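A sketch of that idea in C, with one extra trick: adding half of the divisor before the shift so the result rounds to nearest instead of truncating. The function name and the choice of 10 fractional bits are just for illustration:

#include <stdint.h>

/* Fixed-point lerp: f runs from 0 to 1024 (10 fractional bits),
 * playing the role of the float 0.0 .. 1.0.  With 12-bit ADC values
 * for a and b, the products stay far below 2**32, so uint32_t is
 * safe.  The "+ 512" (half of 1024) rounds to nearest instead of
 * truncating.
 */
static inline uint32_t lerp_u32(uint32_t a, uint32_t b, uint32_t f)
{
    return (a * (1024u - f) + b * f + 512u) >> 10;
}

/* Example: lerp_u32(100, 200, 512) interpolates halfway and returns 150. */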

Why 2 ^ 3 ^ 4 = 0 in Julia?

I just read a post from Quora:
http://www.quora.com/Is-Julia-ready-for-production-use
At the bottom, there's an answer that said:
2 ^ 3 ^ 4 = 0
I tried it myself:
julia> 2 ^ 3 ^ 4
0
Personally I don't consider this a bug in the language. We can add parentheses for clarity, both for Julia and for us human beings:
julia> (2 ^ 3) ^ 4
4096
So far so good; however, this doesn't work:
julia> 2 ^ (3 ^ 4)
0
Since I'm learning, I'd like to know: how does Julia evaluate this expression to 0? What's the evaluation precedence?
julia> typeof(2 ^ 3 ^ 4)
Int64
I'm surprised I couldn't find a duplicate question about this on SO yet. I figure I'll answer this slightly differently than the FAQ in the manual since it's a common first question. Oops, I somehow missed: Factorial function works in Python, returns 0 for Julia
Imagine you've been taught addition and multiplication, but never learned any numbers higher than 99. As far as you're concerned, numbers bigger than that simply don't exist. So you learned to carry ones into the tens column, but you don't even know what you'd call the column you'd carry tens into. So you just drop them. As long as your numbers never get bigger than 99, everything will be just fine. Once you go over 99, you wrap back down to 0. So 99+3 ≡ 2 (mod 100). And 52*9 ≡ 68 (mod 100). Any time you do a multiplication with two or more factors of 10, your answer will be zero: 25*32 ≡ 0 (mod 100). Now, after you do each computation, someone could ask you "did you go over 99?" But that takes time to answer… time that could be spent computing your next math problem!
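Those base-100 examples can be checked directly in Julia; mod plays the role of "dropping everything above the hundreds column":

julia> mod(99 + 3, 100)
2

julia> mod(52 * 9, 100)
68

julia> mod(25 * 32, 100)
0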
This is effectively how computers natively do arithmetic, except they do it in binary with 64 bits. You can see the individual bits with the bitstring function:
julia> bitstring(45)
"0000000000000000000000000000000000000000000000000000000000101101"
As we multiply it by 2, 101101 will shift to the left (just like multiplying by 10 in decimal):
julia> bitstring(45 * 2)
"0000000000000000000000000000000000000000000000000000000001011010"
julia> bitstring(45 * 2 * 2)
"0000000000000000000000000000000000000000000000000000000010110100"
julia> bitstring(45 * 2^58)
"1011010000000000000000000000000000000000000000000000000000000000"
julia> bitstring(45 * 2^60)
"1101000000000000000000000000000000000000000000000000000000000000"
… until it starts falling off the end. If you multiply 64 or more twos together, the answer will always be zero (just like multiplying two or more tens together in the example above). We can ask the computer if it overflowed, but doing so by default for every single computation has some serious performance implications. So in Julia you have to be explicit. You can either ask Julia to check after a specific multiplication:
julia> Base.checked_mul(45, 2^60) # or checked_add for addition
ERROR: OverflowError()
in checked_mul at int.jl:514
Or you can promote one of the arguments to a BigInt:
julia> string(big(45) * 2^60, base=2)
"101101000000000000000000000000000000000000000000000000000000000000"
In your example, you can see that the answer is 1 followed by 81 zeros when you use big integer arithmetic:
julia> string(big(2) ^ 3 ^ 4, base=2)
"1000000000000000000000000000000000000000000000000000000000000000000000000000000000"
For more details, see the FAQ: Why does Julia use native machine integer arithmetic?
