Round to next largest integer in Julia?

If I have some number in Julia like 1.1, is there any function/way to round this Float64 to the next largest integer? For example, what function/method could round 1.1 to 2?

I guess you want the smallest integer greater than or equal to your value. In that case use:
julia> ceil(Int, 1.1)
2

ceil is probably the preferred method here, but just for completeness, you can pass a RoundUp rounding mode to round() and get the next larger integer as a float too:
julia> round(1.1, RoundUp)
2.0
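On recent Julia versions (an assumption; check the methods of round in your version) you can also combine the two and get an Int directly:
julia> round(Int, 1.1, RoundUp)
2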
You can also look up the other RoundingModes in the help:
help?> RoundingMode
search: RoundingMode
RoundingMode
A type used for controlling the rounding mode of floating point operations (via rounding/setrounding functions), or as optional arguments for rounding
to the nearest integer (via the round function).
Currently supported rounding modes are:
• RoundNearest (default)
• RoundNearestTiesAway
• RoundNearestTiesUp
• RoundToZero
• RoundFromZero (BigFloat only)
• RoundUp
• RoundDown
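To see how these modes differ, here is a quick illustrative comparison on a tie value:
julia> round(2.5, RoundNearest)   # default: ties go to the even neighbor
2.0
julia> round(2.5, RoundNearestTiesAway)
3.0
julia> round(2.5, RoundToZero), round(2.5, RoundUp), round(2.5, RoundDown)
(2.0, 3.0, 2.0)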

Related

Why implement lround specifically using integer math?

I noticed that the C++ standard library has separate functions for round and lround rather than just having you use long(round(x)) for the latter.
Looking into the implementation in glibc, I find that indeed, for platforms using IEEE754 floating point, the version that returns an integer will directly manipulate the bits from within the floating point representation, and not do the rounding using floating point operations (e.g. adding ±0.5).
What is the benefit of having a distinct implementation when you want the result as an integer type? Is this supposed to be faster, or more accurate? If it is better to use integer math on the underlying representation, why not just always do it that way even if returning the result as a double?
One reason is that adding .5 is insufficient. Let’s say you add .5 and then truncate to an integer. (How? Is there an instruction for that? Or are you doing more work?) If x is ½ − 2⁻⁵⁴ (the greatest representable value less than ½), adding .5 yields 1, because the mathematical sum, 1 − 2⁻⁵⁴, is exactly halfway between the nearest two representable values, 1 − 2⁻⁵³ and 1, and the common default rounding mode, round-to-nearest-ties-to-even, rounds that to 1. But the correct result for lround(x) is 0.
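Since this thread started in Julia, the effect is easy to reproduce there (a small sketch; prevfloat(0.5) is exactly ½ − 2⁻⁵⁴):
julia> x = prevfloat(0.5)    # greatest Float64 below 0.5
0.49999999999999994
julia> trunc(Int, x + 0.5)   # the add-then-truncate recipe gives 1
1
julia> round(Int, x)         # but the correctly rounded result is 0
0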
And, of course, lround is specified to round ties away from zero, regardless of the current rounding mode. You could set the rounding mode, do some arithmetic, and restore the rounding mode, but there are problems with this.
One is that changing the rounding mode is typically a time-consuming operation. The rounding mode is a global state that affects most floating-point instructions. So the processor has to ensure all pending instructions complete with the prior mode, change the global state, and ensure all later instructions start after that change.
If you are lucky, you might have a processor with per-instruction rounding modes or something similar, and then you can use any rounding mode you like without time penalty. Hewlett Packard has some processors like that. However, “round away from zero” is an uncommon mode. Most processors have round-to-nearest-ties-to-even, round toward zero, round down (toward −∞), and round up (toward +∞), and round-to-odd is becoming popular for its value in avoiding double-rounding errors. But round away from zero is rare.
Another reason is that doing floating-point instructions alters the floating-point status flags and may generate traps, but it is desired that library routines behave as single operations. For example, if we add .5 and rounding occurs, the inexact flag will be raised, since the floating-point addition with .5 produced a result different from the mathematical sum. But to the user of lround, no inexact condition ever occurs; lround is defined to return a value rounded to an integer, and it always does so—within the long range, it never returns a computed result different from its ideal mathematical definition. So if lround(x) raised the inexact flag, that would be incorrect behavior. To avoid it, an implementation that used floating-point instructions would have to save the current floating-point flags, do its work, and restore the flags before returning.

Why does this complex rational give an overflow error in Julia?

When I want Julia (0.4.3) to compute (2.4 - 1.2im) // (0.7 - 0.6im), it gives an overflow error:
ERROR: OverflowError()
in * at rational.jl:188
in // at rational.jl:45
in // at rational.jl:42
However (24 - 12im) // (0.7 - 0.6im), mathematically essentially the same, does work. Also, (2.4 - 1.2im) / (0.7 - 0.6im) works, but this doesn't give a rational, of course.
Is this a bug, or am I doing something wrong? Are there rationals that Julia can't work with?
You should use:
(24//10 - 12im//10) / (7//10 - 6im//10)
instead.
Why does this happen? The numbers you write are floating point numbers—they are not 0.7 or 2.4, but rather approximations of those numbers. You can see this effect by converting to a Rational:
julia> Rational{Int64}(0.7)
3152519739159347//4503599627370496
The // operator used in your question did an implicit conversion to rationals, so results like these are observed.
Now why did an OverflowError occur? Because the type is Rational{Int64}, which means both numerator and denominator can only store numbers within the range of an Int64. Note what happens when we try to square this number, for instance:
julia> Rational{Int64}(0.7) * Rational{Int64}(0.7)
ERROR: OverflowError()
in *(::Rational{Int64}, ::Rational{Int64}) at ./rational.jl:196
in eval(::Module, ::Any) at ./boot.jl:234
in macro expansion at ./REPL.jl:92 [inlined]
in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:46
The OverflowError tells us that the resulting rational is no longer exactly representable in this type, which is a good thing—after all, the whole point of Rational is to be exact! This could be fixed with Rational{BigInt}, but of course that comes with a substantial performance penalty.
So the root of the issue is that 0.7 and the like are floating point literals, and are therefore not exactly 0.7. Indeed, expressed exactly, 0.7 is 0.6999999999999999555910790149937383830547332763671875. Instead, using 7//10 avoids the issue.
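As a quick check (a sketch; the exact output formatting may vary slightly by Julia version), the rational-literal version produces an exact result, and Rational{BigInt} sidesteps the overflow:
julia> (24//10 - 12im//10) / (7//10 - 6im//10)
48//17 + 12//17*im
julia> Rational{BigInt}(0.7) * Rational{BigInt}(0.7) isa Rational{BigInt}   # no overflow with BigInt
true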

Integer divide by Zero and Float (Real no.) divide by Zero

If I run the following line of code, I get a DIVIDE BY ZERO error
1. System.out.println(5/0);
which is the expected behavior.
Now I run the below line of code
2. System.out.println(5/0F);
here there is no DIVIDE BY ZERO error; rather, it shows INFINITY.
In the first line I am dividing two integers and in the second two real numbers.
Why does dividing by zero give a DIVIDE BY ZERO error for integers, while for real numbers it gives INFINITY?
I am sure it is not specific to any programming language.
(EDIT: The question has been changed a bit - it specifically referred to Java at one point.)
The integer types in Java don't have representations of infinity, "not a number" values etc - whereas IEEE-754 floating point types such as float and double do. It's as simple as that, really. It's not really a "real" vs "integer" difference - for example, BigDecimal represents real numbers too, but it doesn't have a representation of infinity either.
EDIT: Just to be clear, this is language/platform specific, in that you could create your own language/platform which worked differently. However, the underlying CPUs typically work the same way - so you'll find that many, many languages behave this way.
EDIT: In terms of motivation, bear in mind that for the infinity case in particular, there are ways of getting to infinity without dividing by zero - such as dividing by a very, very small floating point number. In the case of integers, there's obviously nothing between zero and one.
Also bear in mind that the cases in which integers (or decimal floating point types) are used typically don't need the concept of infinity, or "not a number" results - whereas in scientific applications (where float/double are more typically useful), "infinity" (or at least, "a number which is too large to sensibly represent") is still a potentially valid result.
This is specific to one programming language or a family of languages. Not all languages allow integers and floats to be used in the same expression. Not all languages have both types (for example, ECMAScript implementations like JavaScript have no notion of an integer type externally). Not all languages have syntax like this to convert values inline.
However, there is an intrinsic difference between integer arithmetic and floating-point arithmetic. In integer arithmetic, you must define that division by zero is an error, because there are no values to represent the result. In floating-point arithmetic, specifically that defined in IEEE-754, there are additional values (combinations of sign bit, exponent and mantissa) for the mathematical concept of infinity and meta-concepts like NaN (not a number).
So we can assume that the / operator in this programming language is generic, that it performs integer division if both operands are of the language's integer type; and that it performs floating-point division if at least one of the operands is of a float type of the language, whereas the other operands would be implicitly converted to that float type for the purpose of the operation.
In real-number math, division of a number by a number close to zero is equivalent to multiplying the first number by a number whose absolute value is very large (x / (1 / y) = x * y). So it is reasonable that the result of dividing by zero should be (defined as) infinity, as the precision of the floating-point value would be exceeded.
Implementation details are to be found in the programming language's specification.
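The same distinction is visible in Julia (used here only because the thread started there; the underlying IEEE 754 behavior is the same as Java's):
julia> 5 / 0                  # / is floating-point division; IEEE 754 has an Inf value
Inf
julia> 1.0 / nextfloat(0.0)   # dividing by a tiny positive value also overflows to Inf
Inf
julia> div(5, 0)              # integer division has no value to represent the result
ERROR: DivideError: integer division error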

How can I get the min/max possible numeric?

Is there a function that returns the highest and lowest possible numeric values?
help(numeric) sends you to help(double) which has
Double-precision values:
All R platforms are required to work with values conforming to the
IEC 60559 (also known as IEEE 754) standard. This basically works
with a precision of 53 bits, and represents to that precision a
range of absolute values from about 2e-308 to 2e+308. It also has
special values ‘NaN’ (many of them), plus and minus infinity and
plus and minus zero (although R acts as if these are the same).
There are also _denormal(ized)_ (or _subnormal_) numbers with
absolute values above or below the range given above but
represented to less precision.
See ‘.Machine’ for precise information on these limits. Note that
ultimately how double precision numbers are handled is down to the
CPU/FPU and compiler.
So you want to look at .Machine which on my 64-bit box has
$double.xmin
[1] 2.22507e-308
$double.xmax
[1] 1.79769e+308
help("numeric")
will ask you to do
help("double)
which will give the answer: range of absolute values from about 2e-308 to 2e+308.
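For comparison with the Julia questions above: assuming Julia ≥ 1.0 (older versions called these realmin/realmax), the corresponding limits are:
julia> floatmin(Float64), floatmax(Float64)
(2.2250738585072014e-308, 1.7976931348623157e308)
julia> typemin(Int64), typemax(Int64)
(-9223372036854775808, 9223372036854775807)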

Why do programming languages round down until .6?

If you put a decimal in a format where it has to be rounded to the nearest 10th, and it is 1.55, it'll round to 1.5. 1.56 will then round to 1.6. In school I recall learning that you round up when you reach five, and down if it's 4 or below. Why is it different in Python, et al.?
Here's a code example for Python 2.6x (whatever the latest version is)
'{0:01.2f}'.format(5.555) # This will return '5.55'
After trying some of the examples provided, I realized something even more confusing:
'{0:01.1f}'.format(5.55) # This will return '5.5'
# But then
'{0:01.1f}'.format(1.55) # This will return '1.6'
Why the difference when using 1.55 vs 5.55? Both are typed as literals (so floats).
First off, in most languages an undecorated constant like "1.55" is treated as a double precision value. However, 1.55 is not exactly representable as double precision value, because it doesn't have a terminating representation in binary. This causes many curious behaviors, but one effect is that when you type 1.55, you don't actually get the value that's exactly halfway between 1.5 and 1.6.
In binary, the decimal number 1.55 is:
1.10001100110011001100110011001100110011001100110011001100110011001100...
When you type "1.55", this value actually gets rounded to the nearest representable double-precision value (on many systems... but there are exceptions, which I'll get to). This value is:
1.1000110011001100110011001100110011001100110011001101
which is slightly larger than 1.55; in decimal, it's exactly:
1.5500000000000000444089209850062616169452667236328125
So, when asked to round this value to a single digit after the decimal place, it will round up to 1.6. This is why most of the commenters have said that they can't duplicate the behavior that you're seeing.
But wait, on your system, "1.55" rounded down, not up. What's going on?
It could be a few different things, but the most likely is that you're on a platform (probably Windows) that defaults to doing floating-point arithmetic using x87 instructions, which use a different (80-bit) internal format. In the 80-bit format, 1.55 has the value:
1.100011001100110011001100110011001100110011001100110011001100110
which is slightly smaller than 1.55; in decimal, this number is:
1.54999999999999999995663191310057982263970188796520233154296875
Because it is just smaller than 1.55, it rounds down when it is rounded to one digit after the decimal point, giving the result "1.5" that you're observing.
FWIW: in most programming languages, the default rounding mode is actually "round to nearest, ties to even". It's just that when you specify fractional values in decimal, you'll almost never hit an exact halfway case, so it can be hard for a layperson to observe this. You can see it, though, if you look at how "1.5" is rounded to zero digits:
>>> "%.0f" % 0.5
'0'
>>> "%.0f" % 1.5
'2'
Note that both values round to even numbers; neither rounds to "1".
Edit: in your revised question, you seem to have switched to a different Python interpreter, on which floating-point is done in the IEEE 754 double type, not the x87 80-bit type. Thus, "1.55" rounds up, as in my first example, but "5.55" converts to the following binary floating-point value:
101.10001100110011001100110011001100110011001100110011
which is exactly:
5.54999999999999982236431605997495353221893310546875
in decimal; since this is smaller than 5.55, it rounds down.
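You can verify both claims yourself; for instance, in Julia, big(x) prints the exact decimal value of the Float64 a literal produces (a quick check; Python's decimal.Decimal(1.55) shows the same thing):
julia> big(1.55)   # just above 1.55, so it rounds up to 1.6
1.5500000000000000444089209850062616169452667236328125
julia> big(5.55)   # just below 5.55, so it rounds down to 5.5
5.54999999999999982236431605997495353221893310546875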
There are many ways to round numbers. You can read more about rounding on Wikipedia. The rounding method used in Python is Round half away from zero and the rounding method you are describing is more or less the same (at least for positive numbers).
Can you give some example code, because that's not the behaviour I see in Python:
>>> "%.1f" % 1.54
'1.5'
>>> "%.1f" % 1.55
'1.6'
>>> "%.1f" % 1.56
'1.6'
This doesn't appear to be the case. You're using the "float" string formatter, right?
>>> "%0.2f" % 1.55
'1.55'
>>> "%0.1f" % 1.55
'1.6'
>>> "%0.0f" % 1.55
'2'
Rounding and truncation differ between programming languages, so your question is probably directly related to Python.
However, rounding as a practice depends on your methodology.
You also should know that CONVERTING a decimal to a whole number in many programming languages yields different results from actually rounding the number.
Edit: Per some of the other posters, it seems that Python does not exhibit the rounding behavior you've described:
>>> "%0.2f" % 1.55
'1.55'
>>> "%0.1f" % 1.55
'1.6'
>>> "%0.0f" % 1.55
'2'
I can't see a reason for the exact behaviour that you are describing. If your numbers are just examples, a similar scenario can be explained by bankers rounding being used:
1.5 rounds to 2
2.5 rounds to 2
3.5 rounds to 4
4.5 rounds to 4
I.e. a .5 value will be rounded to the nearest even whole number. The reason for this is that rounding a lot of numbers would even out in the long run. If a bank, for example, is to pay interest to a million customers, and 10% of them end up with a .5 cent value to be rounded, the bank would pay out $500 more if the values were rounded up instead.
Another reason for unexpected rounding is the precision of floating point numbers. Most numbers can't be represented exactly, so they are represented by the closest possible approximation. When you think that you have a number that is 1.55, you may actually end up with a number like 1.54999. Rounding that number to one decimal would of course result in 1.5 rather than 1.6.
One method to do away with at least one aspect of rounding problems (at least some of the time) is to do some preprocessing. Single and double precision formats can represent all integers exactly from −2^24 to 2^24 and from −2^53 to 2^53, respectively. What can be done with a real number (with a non-zero fraction part) is to:
1. strip off the sign and keep it for later
2. multiply the remaining positive number by 10^(number of decimal places required)
3. add 0.5 if your environment's rounding mode is set to chop (round towards zero)
4. round the number to nearest
5. sprintf the number to a string with 0 decimals in the format
6. "manually" format the string according to its length following the sprintf, the number of decimal places required, the decimal point and the sign
7. the string should now contain the exact number
Keep in mind that if the result after step 3 exceeds the integer range of the format in question (above), your answer will be incorrect. A sketch of the recipe follows below.
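Here is a minimal Julia sketch of that recipe (fixed_round is a made-up name for this example; it assumes d ≥ 1 and a round-to-nearest environment, so step 3 is skipped):
using Printf

function fixed_round(x::Float64, d::Int)
    s = signbit(x) ? "-" : ""          # step 1: strip off the sign
    y = round(abs(x) * 10.0^d)         # steps 2 and 4: scale, round to nearest
    str = @sprintf("%.0f", y)          # step 5: print with 0 decimals
    str = lpad(str, d + 1, '0')        # make room for the decimal point
    return s * str[1:end-d] * "." * str[end-d+1:end]   # step 6: reassemble
end

julia> fixed_round(1.55, 1)
"1.6"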
