When I want Julia (0.4.3) to compute (2.4 - 1.2im) // (0.7 - 0.6im), it gives an overflow error:
ERROR: OverflowError()
in * at rational.jl:188
in // at rational.jl:45
in // at rational.jl:42
However (24 - 12im) // (7 - 6im), mathematically essentially the same, does work. Also, (2.4 - 1.2im) / (0.7 - 0.6im) works too, but this doesn't give a rational of course.
Is this a bug, or am I doing something wrong? Are there rationals that Julia can't work with?
You should use:
(24//10 - 12im//10) / (7//10 - 6im//10)
instead.
Why does this happen? The numbers you write are floating point numbers—they are not 0.7 or 2.4, but rather approximations of those numbers. You can see this effect by converting to a Rational:
julia> Rational{Int64}(0.7)
3152519739159347//4503599627370496
The // operator used in your question performed this implicit conversion to rationals, which is why you see results like the one above.
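The same implicit conversion can be reproduced with any exact-rational type. As an illustration only (this is Python, not Julia), `fractions.Fraction` converts the double closest to 0.7 into the same ratio Julia shows, while constructing the rational directly keeps it exact:

```python
from fractions import Fraction

# Converting the double nearest to 0.7 exposes its true value,
# analogous to Rational{Int64}(0.7) in Julia:
approx = Fraction(0.7)
print(approx)            # 3152519739159347/4503599627370496

# Writing the rational directly keeps it exact:
exact = Fraction(7, 10)
print(exact)             # 7/10
```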
Now why did an OverflowError occur? Because the type is Rational{Int64}, which means both numerator and denominator can only store numbers within the range of an Int64. Note what happens when we try to square this number, for instance:
julia> Rational{Int64}(0.7) * Rational{Int64}(0.7)
ERROR: OverflowError()
in *(::Rational{Int64}, ::Rational{Int64}) at ./rational.jl:196
in eval(::Module, ::Any) at ./boot.jl:234
in macro expansion at ./REPL.jl:92 [inlined]
in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:46
The OverflowError tells us that the resulting rational is no longer exactly representable in this type, which is a good thing—after all, the whole point of Rational is to be exact! This could be fixed with Rational{BigInt}, but of course that comes with a substantial performance penalty.
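The arithmetic behind the overflow is easy to check by hand: squaring the numerator above already exceeds typemax(Int64). A quick Python check (Python integers are arbitrary precision, so they play the role of Rational{BigInt} here; this is an illustration, not Julia code):

```python
from fractions import Fraction

num, den = 3152519739159347, 4503599627370496
int64_max = 2**63 - 1

# The squared numerator no longer fits in an Int64, hence Julia's OverflowError:
print(num * num > int64_max)      # True

# With arbitrary-precision integers (the analogue of Rational{BigInt}),
# the square is computed exactly:
sq = Fraction(num, den) ** 2
print(sq.denominator.bit_length())
```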
So the root of the issue is that 0.7 and the like are floating point literals, and are therefore not exactly 0.7. Indeed, expressed exactly, 0.7 is 0.6999999999999999555910790149937383830547332763671875. Instead, using 7//10 avoids the issue.
I did the following calculations in Julia
z = LinRange(-0.09025000000000001,0.19025000000000003,5)
d = Normal.(0.05*(1-0.95) .+ 0.95.*z .- 0.0051^2/2, 0.0051 .* (similar(z) .*0 .+1))
minimum(cdf.(d, (z[3]+z[2])/2))
The problem I have is that the last line sometimes gives me the correct result, 4.418051841202834e-239, and sometimes reports the error DomainError with NaN: Normal: the condition σ >= zero(σ) is not satisfied. I think this is because 4.418051841202834e-239 is too small. But I was wondering why my code can give different results.
In addition to points mentioned by others, here are a few more:
Firstly, don't use LinRange when numerical accuracy is important; that is what the range function is for. LinRange can be used when numerical precision matters less, since it is faster. From the docstring of range:
Special care is taken to ensure intermediate values are computed rationally. To avoid this induced overhead, see the LinRange constructor.
Example:
julia> LinRange(-0.09025000000000001,0.19025000000000003,5) .- range(-0.09025000000000001,0.19025000000000003,5)
0.0:-3.469446951953614e-18:-1.3877787807814457e-17
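The docstring's point can be sketched outside Julia too. Below is a rough Python illustration (not Julia's actual algorithm): computing the grid points with exact rational intermediates and rounding once at the end, versus naive floating-point stepping, which can be off by an ulp or two:

```python
from fractions import Fraction

start, stop, n = -0.09025000000000001, 0.19025000000000003, 5

# Naive floating-point stepping (roughly the LinRange approach):
step = (stop - start) / (n - 1)
naive = [start + i * step for i in range(n)]

# Exact rational intermediates, rounded only at the end (the idea behind range):
fs, ft = Fraction(start), Fraction(stop)
careful = [float(fs + i * (ft - fs) / (n - 1)) for i in range(n)]

# The careful version hits both endpoints exactly:
print(careful[0] == start, careful[-1] == stop)
```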
Secondly, this is a pretty terrible way to create a vector of a certain value:
0.0051 .* (similar(z) .*0 .+1)
Others have mentioned ones, etc., but I think it's better to use fill
fill(0.0051, size(z))
which directly fills the array with the right value. Perhaps one should use convert(eltype(z), 0.0051) inside fill.
Thirdly, don't create this vector at all! You use broadcasting, so just use the scalar value:
d = Normal.(0.05*(1-0.95) .+ 0.95.*z .- 0.0051^2/2, 0.0051) # look! just a scalar!
This is how broadcasting works: it implicitly expands singleton dimensions to match the other arguments (without actually allocating that memory).
Much of the point of broadcasting is that you don't need to create these sorts of 'dummy arrays' any more. If you find yourself doing that, give it another think; constant-valued arrays are inherently wasteful, and you shouldn't need to create them.
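The point generalizes beyond Julia: a constant-valued array carries no information that the scalar itself doesn't. A minimal Python sketch (plain lists, no broadcasting machinery) of the same idea:

```python
z = [-0.09025, -0.020125, 0.05, 0.120125, 0.19025]
sigma = 0.0051

# Wasteful: materialize a constant array just to pair it with z...
sigmas = [sigma for _ in z]
pairs_wasteful = list(zip(z, sigmas))

# ...when the scalar can simply be reused for every element:
pairs_scalar = [(zi, sigma) for zi in z]

print(pairs_wasteful == pairs_scalar)    # True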
There are two problems:
Noted by @Dan Getz: similar does not initialize the values, and quite often unused areas of memory contain bit patterns corresponding to NaN. In that case multiplying by 0 does not help, since NaN * 0 == NaN. Instead you want ones(eltype(z), size(z)).
you need to use higher precision than Float64. BigFloat is one way to go - just remember to call setprecision(BigFloat, 128) so that you actually control how many bits you use. However, a much more time-efficient solution (if you run computations at scale) is to use a dedicated package such as DoubleFloats.
Sample corrected code using DoubleFloats below:
julia> z = LinRange(df64"-0.09025000000000001",df64"0.19025000000000003",5)
5-element LinRange{Double64, Int64}:
-0.09025000000000001,-0.020125,0.05000000000000001,0.12012500000000002,0.19025000000000003
julia> d = Normal.(0.05*(1-0.95) .+ 0.95.*z .- 0.0051^2/2, 0.0051 .* ones(eltype(z),size(z)))
5-element Vector{Normal{Double64}}:
Normal{Double64}(μ=-0.083250505, σ=0.0051)
Normal{Double64}(μ=-0.016631754999999998, σ=0.0051)
Normal{Double64}(μ=0.049986995000000006, σ=0.0051)
Normal{Double64}(μ=0.11660574500000001, σ=0.0051)
Normal{Double64}(μ=0.18322449500000001, σ=0.0051)
julia> minimum(cdf.(d, (z[3]+z[2])/2))
4.418051841203009e-239
The problem in the code is similar(z), which produces a vector with undefined entries and is used without initialization. Use ones(length(z)) instead.
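The NaN-propagation point is easy to verify directly. In Python (used here only as an IEEE-754 scratchpad), NaN is absorbing under arithmetic, so multiplying uninitialized NaN garbage by zero leaves NaN in place:

```python
import math

nan = float("nan")

# NaN propagates through arithmetic; multiplying by zero does not reset it:
print(math.isnan(nan * 0))        # True
print(math.isnan(nan * 0 + 1))    # True

# NaN also compares unequal to everything, including itself:
print(nan == nan)                 # False
```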
I try to write
testFunc = function(x){x^0.3 * (1-x)^0.7}
but when I try
testFunc(2)
R returns a NaN result (for any x > 1). How can I solve this problem?
If you try to raise a negative floating-point value to a fractional exponent, you'll always get NaN. This is not necessarily the mathematically correct answer - for example, we know that the cube root of -8, (-8)^(1/3), "should" be -2, since (-2)^3 == -8. From ?"^":
Users are sometimes surprised by the value returned, for example
why ‘(-8)^(1/3)’ is ‘NaN’. For double inputs, R makes use of IEC
60559 arithmetic on all platforms, together with the C system
function ‘pow’ for the ‘^’ operator. The relevant standards
define the result in many corner cases. In particular, the result
in the example above is mandated by the C99 standard. On many
Unix-alike systems the command ‘man pow’ gives details of the
values in a large number of corner cases.
If you really want to raise negative values to fractional powers, you could use as.complex():
as.complex(-1)^0.7
[1] -0.5877853+0.809017i
Your function would be
function(x){x^0.3 * as.complex(1-x)^0.7}
but you might need to rethink the mathematical foundations of whatever you're trying to do ...
I've started using the unicode cdot in place of * in my Julia code because I find it easier to read. I thought they were the same, but apparently there is a difference I don't understand. Is there documentation on this?
julia> 2pi⋅(0:1)
ERROR: MethodError: no method matching dot(::Float64, ::UnitRange{Int64})
Closest candidates are:
dot(::Number, ::Number) at linalg\generic.jl:301
dot{T<:Union{Float32,Float64},TI<:Integer}(::Array{T<:Union{Float32,Float64},1}, ::Union{Range{TI<:Integer},UnitRange{TI<:Integer}}, ::Array{T<:Union{Float32,Float64},1}, ::Union{Range{TI<:Integer},UnitRange{TI<:Integer}}) at linalg\matmul.jl:48
dot{T<:Union{Complex{Float32},Complex{Float64}},TI<:Integer}(::Array{T<:Union{Complex{Float32},Complex{Float64}},1}, ::Union{Range{TI<:Integer},UnitRange{TI<:Integer}}, ::Array{T<:Union{Complex{Float32},Complex{Float64}},1}, ::Union{Range{TI<:Integer},UnitRange{TI<:Integer}}) at linalg\matmul.jl:61
...
julia> 2pi*(0:1)
0.0:6.283185307179586:6.283185307179586
dot or ⋅ is not the same as multiplication (*). You can find out what it's for by typing ?dot:
help?> ⋅
search: ⋅
dot(x, y)
⋅(x,y)
Compute the dot product. For complex vectors, the first vector is conjugated. [...]
For more about the dot product, see any linear algebra reference.
It seems like you are conflating two different operators. The cdot aliases the dot function, while the asterisk * aliases multiplication routines.
I suspect that you want to do a dot product. The error that you see tells you that Julia does not know how to compute the dot product of a scalar floating point number (Float64) with an integer unit range (UnitRange{Int}). If you think about it, using dot here makes little sense.
In contrast, the second command 2pi*(0:1) computes the product of a scalar against the same UnitRange object. That simply rescales the range, and Julia has a method to do that.
A few options for you, depending on what you want to do:
Use * instead of dot here (easiest)
Code your own dot method to handle rescaling of UnitRange objects (probably not helpful)
Use elementwise multiplication .* (careful, not equal to dot!)
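The distinction is easy to state in code. A small Python sketch (with range(2) standing in for Julia's 0:1; the dot helper is written out here for illustration):

```python
import math

def dot(x, y):
    # dot product: pairs up two equal-length vectors, returns one number
    return sum(a * b for a, b in zip(x, y))

r = range(2)                          # plays the role of 0:1

# Rescaling the range is an elementwise scalar product, i.e. Julia's 2pi*(0:1):
scaled = [2 * math.pi * v for v in r]
print(scaled)                         # [0.0, 6.283185307179586]

# A dot product needs two vectors of the same length:
print(dot([1.0, 2.0], [3.0, 4.0]))    # 11.0
```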
If I run the following line of code, I get a DIVIDE BY ZERO error:
1. System.out.println(5/0);
which is the expected behavior.
Now I run the line of code below:
2. System.out.println(5/0F);
Here there is no DIVIDE BY ZERO error; instead, it shows INFINITY.
In the first line I am dividing two integers, and in the second, two real numbers.
Why does dividing by zero with integers give a DIVIDE BY ZERO error, while with real numbers it gives INFINITY?
I am sure this is not specific to any programming language.
(EDIT: The question has been changed a bit - it specifically referred to Java at one point.)
The integer types in Java don't have representations of infinity, "not a number" values etc - whereas IEEE-754 floating point types such as float and double do. It's as simple as that, really. It's not really a "real" vs "integer" difference - for example, BigDecimal represents real numbers too, but it doesn't have a representation of infinity either.
EDIT: Just to be clear, this is language/platform specific, in that you could create your own language/platform which worked differently. However, the underlying CPUs typically work the same way - so you'll find that many, many languages behave this way.
EDIT: In terms of motivation, bear in mind that for the infinity case in particular, there are ways of getting to infinity without dividing by zero - such as dividing by a very, very small floating point number. In the case of integers, there's obviously nothing between zero and one.
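That route to infinity can be watched happening. A Python sketch (note one caveat: unlike Java, CPython checks for a literal zero divisor and raises ZeroDivisionError even for floats, but the IEEE-754 overflow-to-infinity behavior for tiny nonzero divisors is the same):

```python
import math

x = 1.0
for d in (1e-100, 1e-200, 1e-300):
    print(x / d)          # grows steadily, still finite

# Divide by a subnormal and the true quotient exceeds the largest
# representable double (~1.8e308), so IEEE-754 rounds it to infinity:
print(x / 1e-320)
print(math.isinf(x / 1e-320))     # True
```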
Also bear in mind that the cases in which integers (or decimal floating-point types) are used typically don't need the concept of infinity or "not a number" results - whereas in scientific applications (where float/double are more typically useful), "infinity" (or at least "a number too large to sensibly represent") is still a potentially valid result.
This is specific to one programming language or a family of languages. Not all languages allow integers and floats to be used in the same expression. Not all languages have both types (for example, ECMAScript implementations like JavaScript have no notion of an integer type externally). Not all languages have syntax like this to convert values inline.
However, there is an intrinsic difference between integer arithmetic and floating-point arithmetic. In integer arithmetic, you must define that division by zero is an error, because there are no values to represent the result. In floating-point arithmetic, specifically that defined in IEEE-754, there are additional values (combinations of sign bit, exponent and mantissa) for the mathematical concept of infinity and meta-concepts like NaN (not a number).
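Those extra values are ordinary bit patterns in the encoding. In Python, struct exposes the raw IEEE-754 bytes of a double, showing infinity and NaN as all-ones exponents distinguished by the mantissa (the exact NaN payload can vary by platform):

```python
import math
import struct

def bits(x: float) -> str:
    # big-endian IEEE-754 double, as hex: 1 sign bit, 11 exponent, 52 mantissa
    return struct.pack(">d", x).hex()

print(bits(1.0))             # 3ff0000000000000
print(bits(math.inf))        # 7ff0000000000000 - max exponent, zero mantissa
print(bits(-math.inf))       # fff0000000000000 - same, with the sign bit set
print(bits(math.nan))        # max exponent, nonzero mantissa (typically 7ff8...)
```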
So we can assume that the / operator in this programming language is generic: it performs integer division if both operands are of the language's integer type, and it performs floating-point division if at least one operand is of a float type, the other operand being implicitly converted to that float type for the purpose of the operation.
In real-number math, dividing a number by a number close to zero is equivalent to multiplying the first number by a number whose absolute value is very large (x / (1 / y) = x * y). So it is reasonable that the result of dividing by zero is (defined as) infinity once the range of the floating-point type is exceeded.
Implementation details are to be found in the programming language's specification.
Very basic Fortran question. The following function returns a NaN and I can't seem to figure out why:
F_diameter = 1. - (2.71828**(-1.0*((-1. / 30.)**1.4)))
I've fed 2.71... in rather than using exp(), but they both fail the same way. I've noticed that I only get a NaN when the base of the fractional power (-1. / 30.) is negative. Positive values evaluate OK.
Thanks a lot
The problem is that you are taking a root of a negative number, which would give you a complex answer. This is more obvious if you imagine e.g.
(-1) ** (3/2)
which is equivalent to
(sqrt(-1))**3
In other words, your fractional exponent can't trivially operate on a negative number.
There is another interesting point here I learned today, which I want to add to ire_and_curses' answer: the Fortran compiler seems to compute integer powers by successive multiplication.
For example
PROGRAM Test
PRINT *, (-23) ** 6
END PROGRAM
works fine and gives 148035889 as the answer.
But for REAL exponents, the compiler uses logarithms: y**x = exp(x * log(y)) (maybe compilers today do it differently, but my book says so). Since the logarithm of a negative number is complex, this does not work:
PROGRAM Test
PRINT *, (-23) ** 6.1
END PROGRAM
and even gives a compiler error:
Error: Raising a negative REAL at (1) to a REAL power is prohibited
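The two strategies described above can be sketched in Python (as an illustration of the idea, not of any particular Fortran compiler): integer exponents via successive multiplication, which handles negative bases fine, and real exponents via exp(x * log(y)), which needs a complex logarithm once the base is negative:

```python
import cmath
import math

def int_pow(base, n):
    # successive multiplication: works for any sign of base
    result = 1
    for _ in range(n):
        result *= base
    return result

print(int_pow(-23, 6))                  # 148035889, matching the Fortran output

# Real exponent via logarithms works for positive bases:
print(math.exp(6.0 * math.log(23)))     # approximately 23**6

# ...but the log of a negative base only exists in the complex plane:
rc = cmath.exp(6.1 * cmath.log(-23))
print(rc)
```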
From a mathematical point of view, this problem is also quite interesting: https://math.stackexchange.com/questions/1211/non-integer-powers-of-negative-numbers