julia c^3 results in wrong, negative number [duplicate] - math

This question already has an answer here:
Wrong answer given by a julia code for project euler 29
(1 answer)
Closed 11 months ago.
So I was using the speed of light c to do some calculations in VS Code and I found this:
julia> c = 3*10^8
300000000
julia> c^3
-1238598680542445568
It's obviously wrong.
But if I define c as a float:
julia> c = 3.0*10^8
3.0e8
julia> c^3
2.7e25
Then everything is fine.
What is happening? Is this some inherent error in Julia, or am I asking a silly question?

Use BigInt for this, as integers in Julia are 64-bit by default:
julia> c = big(3*10^8)
300000000
julia> typeof(c)
BigInt
julia> c^3
27000000000000000000000000

This is the overflow behavior of the Int type:
julia> x = typemax(Int64)
9223372036854775807
julia> x + 1
-9223372036854775808
julia> x + 1 == typemin(Int64)
true
Floating-point numbers, on the other hand, are not exact values; they store the mantissa and exponent separately, which lets them return (approximately) correct answers even for calculations of large magnitude.
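To see both behaviors side by side, here is a quick sketch you can paste into the REPL (nothing specific to the question, just standard Julia):

```julia
c_int = 3 * 10^8        # Int64: fixed 64-bit storage
c_int^3                 # -1238598680542445568 — wrapped around modulo 2^64

c_flt = 3.0e8           # Float64: mantissa + exponent
c_flt^3                 # 2.7e25 — within range, so the answer is correct

# The trade-off: Float64 carries only ~15-16 significant decimal digits,
# so above 2^53 it can no longer represent every integer exactly:
2.0^53 + 1 == 2.0^53    # true — the +1 is lost to rounding
```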

Related

'Big' fractions in Julia

I've run across a little problem when trying to solve a Project Euler problem in Julia. I've basically written a recursive function which produces fractions with increasingly large numerators and denominators. I don't want to post the code for obvious reasons, but the last few fractions are as follows:
1180872205318713601//835002744095575440
2850877693509864481//2015874949414289041
6882627592338442563//4866752642924153522
At that point I get an OverflowError(), presumably because the numerator and/or denominator now exceeds 19 digits. Is there a way of handling 'Big' fractions in Julia (i.e. those with BigInt-type numerators and denominators)?
Addendum:
OK, I've simplified the code and disguised it a bit. If anyone wants to wade through 650 Project Euler problems to try to work out which question it is, good luck to them – there will probably be around 200 better solutions!
function series(limit::Int64, i::Int64=1, n::Rational{Int64}=1//1)
    while i <= limit
        n = 1 + 1//(1 + 2n)
        println(n)
        return series(limit, i + 1, n)
    end
end
series(50)
If I run the above function with, say, 20 as the argument it runs fine. With 50 I get the OverflowError().
Julia defaults to using machine integers. For more information on this, see the FAQ: Why does Julia use native machine integer arithmetic?
In short: the most efficient integer operations on any modern CPU involves computing on a fixed number of bits. On your machine, that's 64 bits.
julia> 9223372036854775805 + 1
9223372036854775806
julia> 9223372036854775805 + 2
9223372036854775807
julia> 9223372036854775805 + 3
-9223372036854775808
Whoa! What just happened!? That's definitely wrong! It's more obvious if you look at how these numbers are represented in binary:
julia> bitstring(9223372036854775805 + 1)
"0111111111111111111111111111111111111111111111111111111111111110"
julia> bitstring(9223372036854775805 + 2)
"0111111111111111111111111111111111111111111111111111111111111111"
julia> bitstring(9223372036854775805 + 3)
"1000000000000000000000000000000000000000000000000000000000000000"
So you can see that those 63 bits "ran out of space" and rolled over — the 64th bit there is called the "sign bit" and signals a negative number.
There are two potential solutions when you see overflow like this: you can use "checked arithmetic" — like the rational code does — that ensures you don't silently have this problem:
julia> Base.Checked.checked_add(9223372036854775805, 3)
ERROR: OverflowError: 9223372036854775805 + 3 overflowed for type Int64
Or you can use a bigger integer type — like the unbounded BigInt:
julia> big(9223372036854775805) + 3
9223372036854775808
So an easy fix here is to remove your type annotations and dynamically choose your integer types based upon limit:
function series(limit, i=one(limit), n=one(limit)//one(limit))
    while i <= limit
        n = 1 + 1//(1 + 2n)
        println(n)
        return series(limit, i + 1, n)
    end
end
julia> series(big(50))
#…
1186364911176312505629042874//926285732032534439103474303
4225301286417693889465034354//3299015554385159450361560051
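Equivalently, you can seed the recursion with a Rational{BigInt} directly; once one operand is a BigInt, every later // promotes and can no longer overflow. A minimal sketch (generic, not the disguised Euler code):

```julia
# Iterate n -> 1 + 1//(1 + 2n) starting from a BigInt-backed rational,
# so numerators and denominators may grow without bound.
function iterate_fraction(steps)
    n = big(1) // 1          # Rational{BigInt}
    for _ in 1:steps
        n = 1 + 1 // (1 + 2n)
    end
    return n
end

iterate_fraction(50)   # runs fine — no OverflowError at 50 steps or beyond
```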

Julia and big numbers

How does Julia compute big numbers?
For example, this works as expected:
julia> 10^18
1000000000000000000
But for bigger numbers, there is a problem with integers:
julia> 10^19
-8446744073709551616
julia> 10^20
7766279631452241920
But it works if a decimal number is used:
julia> 10.0^20
1.0e20
Do you know why?
Check this documentation page: https://docs.julialang.org/en/release-0.4/manual/integers-and-floating-point-numbers/
As you can see, Int64 has a maximum value of 2^63 - 1 ≈ 9.22 * 10^18.
So your 10^19 is greater than this maximum value. That's why there is a problem.
You can convert your 10 to another type.
10.0^20 works because 10.0 is a Float64, which has a much larger maximum value.
If you want unlimited precision for integers, you can use BigInts:
julia> BigInt(10)^100
10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
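If you need more headroom than Int64 but not unlimited precision, Int128 is a cheaper middle ground than BigInt (a general note, not from the linked docs page):

```julia
Int128(10)^19            # 10000000000000000000 — no overflow
typemax(Int128)          # 170141183460469231731687303715884105727 ≈ 1.7e38
```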

Why 2 ^ 3 ^ 4 = 0 in Julia?

I just read a post from Quora:
http://www.quora.com/Is-Julia-ready-for-production-use
At the bottom, there's an answer said:
2 ^ 3 ^ 4 = 0
I tried it myself:
julia> 2 ^ 3 ^ 4
0
Personally I don't consider this a bug in the language. We can add parentheses for clarity, both for Julia and for us human beings:
julia> (2 ^ 3) ^ 4
4096
So far so good; however, this doesn't work:
julia> 2 ^ (3 ^ 4)
0
Since I'm learning, I'd like to know: how does Julia evaluate this expression to 0? What's the evaluation precedence?
julia> typeof(2 ^ 3 ^ 4)
Int64
I'm surprised I couldn't find a duplicate question about this on SO yet. I figure I'll answer this slightly differently than the FAQ in the manual since it's a common first question. Oops, I somehow missed: Factorial function works in Python, returns 0 for Julia
Imagine you've been taught addition and multiplication, but never learned any numbers higher than 99. As far as you're concerned, numbers bigger than that simply don't exist. So you learned to carry ones into the tens column, but you don't even know what you'd call the column you'd carry tens into. So you just drop them.
As long as your numbers never get bigger than 99, everything will be just fine. Once you go over 99, you wrap back down to 0. So 99+3 ≡ 2 (mod 100). And 52*9 ≡ 68 (mod 100). Any time you do a multiplication with more than two factors of 10, your answer will be zero: 25*32 ≡ 0 (mod 100).
Now, after you do each computation, someone could ask you "did you go over 99?" But that takes time to answer… time that could be spent computing your next math problem!
This is effectively how computers natively do arithmetic, except they do it in binary with 64 bits. You can see the individual bits with the bits function:
julia> bits(45)
"0000000000000000000000000000000000000000000000000000000000101101"
As we multiply it by 2, 101101 will shift to the left (just like multiplying by 10 in decimal):
julia> bits(45 * 2)
"0000000000000000000000000000000000000000000000000000000001011010"
julia> bits(45 * 2 * 2)
"0000000000000000000000000000000000000000000000000000000010110100"
julia> bits(45 * 2^58)
"1011010000000000000000000000000000000000000000000000000000000000"
julia> bits(45 * 2^60)
"1101000000000000000000000000000000000000000000000000000000000000"
… until it starts falling off the end. If you multiply more than 64 twos together, the answer will always be zero (just like multiplying more than two tens together in the example above). We can ask the computer if it overflowed, but doing so by default for every single computation has some serious performance implications. So in Julia you have to be explicit. You can either ask Julia to check after a specific multiplication:
julia> Base.checked_mul(45, 2^60) # or checked_add for addition
ERROR: OverflowError()
in checked_mul at int.jl:514
Or you can promote one of the arguments to a BigInt:
julia> bin(big(45) * 2^60)
"101101000000000000000000000000000000000000000000000000000000000000"
In your example, you can see that the answer is 1 followed by 81 zeros when you use big integer arithmetic:
julia> bin(big(2) ^ 3 ^ 4)
"1000000000000000000000000000000000000000000000000000000000000000000000000000000000"
For more details, see the FAQ: Why does Julia use native machine integer arithmetic?
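One point worth stating explicitly, since the question asks about precedence: `^` in Julia is right-associative, so `2 ^ 3 ^ 4` parses as `2 ^ (3 ^ 4) = 2^81`, and 81 factors of two shift past all 64 bits, leaving 0. A quick check:

```julia
2 ^ 3 ^ 4 == 2 ^ (3 ^ 4)   # true — ^ groups to the right
big(2) ^ (3 ^ 4)           # the true value of 2^81, 25 digits long
2 ^ 81                     # 0 in Int64: every set bit has been shifted away
```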

Julia @evalpoly macro with varargs

I'm trying to grok Julia's @evalpoly macro. It works when I supply the coefficients manually, but I've been unable to puzzle out how to provide these via an array:
julia> VERSION
v"0.3.5"
julia> @evalpoly 0.5 1 2 3 4
3.25
julia> c = [1, 2, 3, 4]
4-element Array{Int64,1}:
1
2
3
4
julia> @evalpoly 0.5 c
ERROR: BoundsError()
julia> @evalpoly 0.5 c...
ERROR: BoundsError()
julia> @evalpoly(0.5, c...)
ERROR: BoundsError()
Can someone point me in the right direction on this?
Added after seeing the great answers to this question
There is one subtlety here that I hadn't seen until I played with some of these answers. The z argument to @evalpoly can be a variable, but the coefficients are expected to be literals:
julia> z = 0.5
0.5
julia> @evalpoly z 1 2 3 4
3.25
julia> @evalpoly z c[1] c[2] c[3] c[4]
ERROR: c not defined
Looking at the output of the expansion of this last command, one can see that it is indeed the case that z is assigned to a variable in the expansion but that the coefficients are inserted literally into the code.
julia> macroexpand(:(@evalpoly z c[1] c[2] c[3] c[4]))
:(if Base.Math.isa(z,Base.Math.Complex)
        #291#t = z
        #292#x = Base.Math.real(#291#t)
        #293#y = Base.Math.imag(#291#t)
        #294#r = Base.Math.+(#292#x,#292#x)
        #295#s = Base.Math.+(Base.Math.*(#292#x,#292#x),Base.Math.*(#293#y,#293#y))
        #296#a2 = c[4]
        #297#a1 = Base.Math.+(c[3],Base.Math.*(#294#r,#296#a2))
        #298#a0 = Base.Math.+(Base.Math.-(c[2],Base.Math.*(#295#s,#296#a2)),Base.Math.*(#294#r,#297#a1))
        Base.Math.+(Base.Math.*(#298#a0,#291#t),Base.Math.-(c[1],Base.Math.*(#295#s,#297#a1)))
    else
        #299#t = z
        Base.Math.+(Base.Math.c[1],Base.Math.*(#299#t,Base.Math.+(Base.Math.c[2],Base.Math.*(#299#t,Base.Math.+(Base.Math.c[3],Base.Math.*(#299#t,Base.Math.c[4]))))))
    end)
I don't believe what you are trying to do is possible, because @evalpoly is a macro - that means it generates code at compile time. What it is generating is a very efficient implementation of Horner's method (in the real-number case), but to do so it needs to know the degree of the polynomial. The length of c isn't known at compile time, so it doesn't (and cannot) work, whereas when you provide the coefficients directly it has everything it needs.
The error message isn't very good, though, so if you can, you could file an issue on the Julia GitHub page.
UPDATE: In response to the update to the question, yes, the first argument can be a variable. You can think of it like this:
function dostuff()
    z = 0.0
    # Do some stuff to z
    # Time to evaluate a polynomial!
    y = @evalpoly z 1 2 3 4
    return y
end
is becoming
function dostuff()
    z = 0.0
    # Do some stuff to z
    # Time to evaluate a polynomial!
    y = 1 + 2z + 3z^2 + 4z^3
    return y
end
except not quite that, because it's using Horner's rule, but the idea is the same. The problem is that it can't generate that expression at compile time without knowing the number of coefficients. But it doesn't need to know what z is at all.
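The unrolled expression the macro emits is just Horner's rule; if you want a version that accepts a runtime array, a plain loop does it (a generic sketch, not the macro's actual implementation):

```julia
# Horner's rule: p(z) = c[1] + z*(c[2] + z*(c[3] + ...))
function horner(z, c::AbstractVector)
    acc = c[end]
    for i in length(c)-1:-1:1
        acc = muladd(acc, z, c[i])   # acc*z + c[i]
    end
    return acc
end

horner(0.5, [1, 2, 3, 4])   # 3.25, matching @evalpoly 0.5 1 2 3 4
```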
Macros in Julia are applied to their arguments. To make this work, you need to ensure that c is expanded before @evalpoly is evaluated. This works:
function f()
    c = [1, 2, 3, 4]
    @eval @evalpoly 0.5 $(c...)
end
Here, @eval evaluates its argument and expands $(c...). Later, @evalpoly sees five arguments.
As written, this is probably not efficient, since @eval is called every time the function f is called. You need to move the call to @eval outside the function definition:
c = [1, 2, 3, 4]
@eval begin
    function f()
        @evalpoly 0.5 $(c...)
    end
end
This calls @eval when f is defined. Obviously, c must be known at this time. Whenever f is actually called, c is not used any more; it is only used while f is being defined.
Erik and Iain have done a great job of explaining why @evalpoly doesn't work and how to coerce it into working. If you just want to evaluate the polynomial, however, the easiest solution is probably just to use Polynomials.jl:
julia> using Polynomials
c = [1,2,3,4]
polyval(Poly(c), 0.5)
3.25
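Worth adding for readers on current Julia: since 1.4 there is also a Base function evalpoly (distinct from the macro) that happily takes runtime coefficients, so none of the workarounds above are needed there:

```julia
c = [1, 2, 3, 4]
evalpoly(0.5, c)             # 3.25 — a runtime array of coefficients is fine
evalpoly(0.5, (1, 2, 3, 4))  # tuple form; the compiler can fully unroll it
```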

Easier way to convert a character to an integer?

Still getting a feel for what's in the Julia standard library. I can convert strings to their integer representation via the Int() constructor, but when I call Int() with a Char I don't get the integer value of the digit:
julia> Int('3')
51
Currently I'm calling string() first:
intval = Int(string(c)) # doesn't work anymore, see note below
Is this the accepted way of doing this? Or is there a more standard method? It's coming up quite a bit in my Project Euler exercise.
Note: This question was originally asked before Julia 1.0. Since it was asked the int function was renamed to Int and became a method of the Int type object. The method Int(::String) for parsing a string to an integer was removed because of the potentially confusing difference in behavior between that and Int(::Char) discussed in the accepted answer.
The short answer is you can do parse(Int, c) to do this correctly and efficiently. Read on for more discussion and details.
The code in the question as originally asked doesn't work anymore because Int(::String) was removed from the language because of the confusing difference in behavior between it and Int(::Char). Prior to Julia 1.0, the former was parsing a string as an integer whereas the latter was giving the Unicode code point of the character, which meant that Int("3") would return 3 whereas Int('3') would return 51. The modern working equivalent of what the questioner was using would be parse(Int, string(c)). However, you can skip converting the character to a string (which is quite inefficient) and just write parse(Int, c).
What does Int(::Char) do and why does Int('3') return 51? That is the code point value assigned to the character 3 by the Unicode Consortium, which was also the ASCII code point for it before that. Obviously, this is not the same as the digit value of the letter. It would be nice if these matched, but they don't. The code points 0-9 are a bunch of non-printing "control characters" starting with the NUL byte that terminates C strings. The code points for decimal digits are at least contiguous, however:
julia> [Int(c) for c in "0123456789"]
10-element Vector{Int64}:
48
49
50
51
52
53
54
55
56
57
Because of this you can compute the value of a digit by subtracting the code point of 0 from it:
julia> [Int(c) - Int('0') for c in "0123456789"]
10-element Vector{Int64}:
0
1
2
3
4
5
6
7
8
9
Since subtraction of Char values works and subtracts their code points, this can be simplified to [c-'0' for c in "0123456789"]. Why not do it this way? You can! That is exactly what you'd do in C code. If you know your code will only ever encounter c values that are decimal digits, then this works well. It doesn't, however, do any error checking whereas parse does:
julia> c = 'f'
'f': ASCII/Unicode U+0066 (category Ll: Letter, lowercase)
julia> parse(Int, c)
ERROR: ArgumentError: invalid base 10 digit 'f'
Stacktrace:
[1] parse(::Type{Int64}, c::Char; base::Int64)
# Base ./parse.jl:46
[2] parse(::Type{Int64}, c::Char)
# Base ./parse.jl:41
[3] top-level scope
# REPL[38]:1
julia> c - '0'
54
Moreover, parse is a bit more flexible. Suppose you want to accept f as a hex "digit" encoding the value 15. To do that with parse you just need to use the base keyword argument:
julia> parse(Int, 'f', base=16)
15
julia> parse(Int, 'F', base=16)
15
As you can see it parses upper or lower case hex digits correctly. In order to do that with the subtraction method, your code would need to do something like this:
'0' <= c <= '9' ? c - '0' :
'A' <= c <= 'F' ? c - 'A' + 10 :
'a' <= c <= 'f' ? c - 'a' + 10 : error()
Which is actually quite close to the implementation of the parse(Int, c) method. Of course at that point it's much clearer and easier to just call parse(Int, c) which does this for you and is well optimized.
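For the Project Euler use case — say, summing the digits of a number — these pieces compose directly (a small sketch):

```julia
s = "918273645"
sum(parse(Int, ch) for ch in s)   # 45, with per-character error checking
sum(ch - '0' for ch in s)         # 45, the C-style fast path
sum(digits(918273645))            # 45, skipping characters entirely
```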
