How to find the source of an arithmetic error in an IDL program?

I am debugging an existing program which appears to fail due to various arithmetic errors.
Program caused arithmetic error: Floating divide by 0
Is there a way to find out what part of the program is producing these errors? Or make the IDL development environment stop when one of these is raised?

Setting the !EXCEPT system variable to 2 makes IDL check for and report math errors after each statement, showing which file and line caused the error:
!EXCEPT = 2
% Program caused arithmetic error: Floating divide by 0
% Detected at FUNCTION 77 /home/user/function.pro
The documentation on CHECK_MATH provides more detail on how arithmetic errors are handled, as does the documentation on math error handling in general.

Related

Julia Differential Equations suppress warning of detected instabilities

I have a program that simulates the paths of particles using the Differential Equations package of Julia. The simulation allows particles to hit devices; to stop simulating such particles, I use the unstable_check option of the solver (specifically of the EulerHeun solver). However, this leads to warnings like the following:
┌ Warning: Instability detected. Aborting
└ @ SciMLBase <path>\.julia\packages\SciMLBase\0s9uL\src\integrator_interface.jl:351
As I simulate thousands of particles, this can be quite annoying (and slow).
Can I suppress this warning? And if not, is there another (better) way to abort the simulation of some particles?
I don't think a code sample makes sense / is necessary here; let me know though if you think otherwise.
https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous
verbose: Toggles whether warnings are thrown when the solver exits early. Defaults to true.
Thus, to turn off the warnings, you simply call solve(prob, alg; verbose = false).
The simulation allows for particles to hit devices - to prevent the continued simulation of such particles, I use the unstable_check of the solver
Using a DiscreteCallback or ContinuousCallback with affect!(integrator) = terminate!(integrator) is a much better way to do this.
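A minimal, runnable sketch of that approach (a toy 1-D ODE with Tsit5 standing in for your EulerHeun SDE setup; the "wall" at x = 1.0 and the dynamics are made up for illustration):

using OrdinaryDiffEq

# Toy "particle": constant velocity 0.5, starting at x = 0.
f(u, p, t) = 0.5
prob = ODEProblem(f, 0.0, (0.0, 10.0))

# Stop integrating this particle when it reaches the wall at x = 1.
condition(u, t, integrator) = u - 1.0          # triggers on the zero crossing
affect!(integrator) = terminate!(integrator)
cb = ContinuousCallback(condition, affect!)

sol = solve(prob, Tsit5(); callback = cb)
sol.t[end]  # ≈ 2.0: the solve ends at the hit time instead of running to t = 10

This terminates cleanly, with no instability warning, and each particle's solution ends exactly where it hit the device.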
There is Suppressor.jl, although I don't know whether it reduces the overhead of the warnings being created, so a DiffEq-specific setting might be the better way to go here (I don't know much about DiffEq though, sorry!).
Here's an example from the readme:
julia> using Suppressor

julia> @suppress begin
           println("This string doesn't get printed!")
           @warn("This warning is ignored.")
       end
For suppressing just the warnings, you want @suppress_err.
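For example, in the same session (@suppress_err silences everything written to stderr, which is where warnings go):

julia> @suppress_err begin
           @warn("This warning is ignored.")
           println("stdout still gets printed.")
       end
stdout still gets printed.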

Why does negative one (-1) raised to the power of an even integer always return the same result in Julia?

I'm currently executing the following (very simple) code in Julia:
-1^2
But for some reason the result is always:
-1
Now, if I put in parentheses, then the answer is correct. So I'm curious as to why this is happening. I'm running this in a Jupyter Notebook.
This is due to operator precedence. Exponentiation binds more tightly than unary minus, so -1^n parses as -(1^n), which is always -1.
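A quick REPL check makes the parse visible:

julia> -1^2        # parsed as -(1^2)
-1

julia> (-1)^2      # parentheses make -1 the base
1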

julialang: can (should) this type error be caught at compile time?

function somefun()
    x::Int = 1
    x = 0.5
end
This compiles with no warning. Of course, calling it produces an InexactError: Int64(0.5). Question: can you enforce a compile-time check?
Julia is a dynamic language in this sense. So, no, it appears you cannot detect whether an assignment will result in such an error without running the function first, as this kind of type checking is done at runtime.
I wasn't sure myself, so I wrapped this function in a module to force (pre)compilation in the absence of running the function, and the result was that no error was thrown, which confirms this idea. (see here if you want to see what I mean by this).
Having said this, to answer the spirit of your question: is there a way to avoid such obscure runtime errors from creeping up in unexpected ways?
Yes there is. Consider the following two, almost equivalent functions:
function fun1(x);      y::Int = x; return y; end
function fun2(x::Int); y::Int = x; return y; end

fun1(0.5)  # ERROR: InexactError: Int64(0.5)
fun2(0.5)  # ERROR: MethodError: no method matching fun2(::Float64)
You may think, big deal, we exchanged one error for another. But this is not the case. In the first instance, you don't know that your input will cause a problem until the point where it gets used in the function. Whereas in the second case, you are effectively enforcing a type check at the point of calling the function.
This is a trivial example of programming "by contract", by making use of Julia's elegant type-checking system. See Design By Contract for details.
So the answer to your question is: yes, if you rethink your design and follow good programming practices such that these kinds of errors are caught early on, then you can avoid having them occur later in obscure scenarios where they are hard to detect and fix.
The Julia manual provides a style guide which may also be of help (the example I give above is right at the top!).
It's worth thinking through what "compile time" really is in Julia — because it's probably not what you're thinking.
When you define the function:
julia> function somefun()
           x::Int = 1
           x = 0.5
       end
somefun (generic function with 1 method)
You are not compiling it. Julia won't compile it, in fact, until you call it. Julia's compiler can be thought of as Just-Barely-Ahead-Of-Time, standing in contrast to typical JIT or AOT designs.
Now, when you call the function it compiles it and then runs it which throws the error. You can see this compilation happening the very first time you call the function — it takes a bit more time and memory as it generates and caches the specialized code:
julia> @time try somefun() catch end
  0.005828 seconds (6.76 k allocations: 400.791 KiB)

julia> @time try somefun() catch end
  0.000107 seconds (6 allocations: 208 bytes)
So perhaps you can see that with Julia's compilation model it doesn't so much matter if it gets caught at compile time or not — even if Julia refused to compile (and cache) the code it'd behave exactly like what you currently see. It'd still allow you to define the function in the first place, and it'd still only throw its error upon calling the function.
The question you mean to ask is if Julia could (or should) catch this error at function definition time. And then the question is really — is it ok to define a method that always results in an error? What about a function like error itself? In Julia, it's totally fine to define a method that unconditionally errors like this one, and there can be good reasons to do so.
Now, there are ways to ask Julia if it is able to detect that this method will always unconditionally error:
julia> @code_typed somefun()
CodeInfo(
1 ─ invoke Base.convert(Main.Int::Type{Int64}, 0.5::Float64)::Union{}
└── $(Expr(:unreachable))::Union{}
) => Union{}
This is the very first step in Julia's process of compilation, and in this case it can see that everything beyond convert(Int, 0.5) is unreachable (that is, it errors). Further, it knows that since the function will never return, its return type is Union{} (that is, no possible type can ever be returned!). So you can ask Julia to do this step with, for example, the @inferred macro as part of a test suite.
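As a sketch of such a check that doesn't execute the function (Base.return_types is an inference reflection utility; its exact printed output can vary between Julia versions):

julia> Base.return_types(somefun, ())   # ask inference; the function is never called
1-element Vector{Any}:
 Union{}

A Union{} result tells you the method, as inferred, can never return normally, i.e. it always throws.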

What happens if you divide by Zero on a computer?

What happens if you divide by zero on a computer?
In any programming language (that I have worked with, at least) this raises an error.
But why? Is it built into the language that this is prohibited? Or will it compile, and the hardware will figure out that an error must be returned?
I guess the language can only handle this if it is hard-coded, e.g. if there is a line like double z = 5.0/0.0;. If it is a function call and the divisor is given from outside, the language could not even know that this is a division by zero (at least at compile time):
double divideByZero(double divisor){
    return 5.0/divisor;
}
where the function is called with divisor = 0.0.
Update:
According to the comments/answers it makes a difference whether you divide by int 0 or double 0.0.
I was not aware of that. This is interesting in itself and I'm interested in both cases.
Also, one answer is that the CPU throws an error. How is this done? In software (which doesn't make sense inside a CPU), or are there circuits which recognize this? I guess this happens in the Arithmetic Logic Unit (ALU).
When an integer is divided by 0 in the CPU, this causes an interrupt.¹ A programming language implementation can then handle that interrupt by throwing an exception or employing whichever other error-handling mechanisms the language has.
When a floating point number is divided by 0, the result is infinity, NaN or negative infinity (which are special floating point values). That's mandated by the IEEE floating point standard, which any modern CPU will adhere to. Programming languages generally do as well. If a programming language wanted to handle it as an error instead, it could just check for NaN or infinite results after every floating point operation and cause an error in that case. But, as I said, that's generally not done.
¹ On x86 at least. But I imagine it's the same on most other architectures as well.
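Both cases are easy to observe from a high-level language; for instance, in Julia (the exact error text may differ between versions):

julia> 1 ÷ 0        # integer division: the hardware trap surfaces as an exception
ERROR: DivideError: integer division error

julia> 1.0 / 0.0    # IEEE 754 floating point: no error, a special value
Inf

julia> -1.0 / 0.0
-Inf

julia> 0.0 / 0.0
NaN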

Negative Exponents throwing NaN in Fortran

Very basic Fortran question. The following function returns a NaN and I can't seem to figure out why:
F_diameter = 1. - (2.71828**(-1.0*((-1. / 30.)**1.4)))
I've fed 2.71... in rather than using exp(), but they both fail the same way. I've noticed that I only get a NaN when the base (-1. / 30.) is negative. Positive values evaluate OK.
Thanks a lot
The problem is that you are raising a negative number to a fractional power, which would give you a complex answer. This is more obvious if you imagine e.g.
(-1) ** (3/2)
which is equivalent to
(sqrt(-1)) ** 3
In other words, a fractional exponent can't trivially operate on a negative base.
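The same rule shows up in other languages. Julia, for instance, refuses to produce a real-valued result and asks you to opt in to a complex base explicitly (error text and printed digits may vary by version):

julia> (-1.0)^1.4
ERROR: DomainError with -1.0:
Exponentiation yielding a complex result requires a complex argument.

julia> complex(-1.0)^1.4    # opting in to a complex result
-0.30901699437494745 - 0.9510565162951535im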
There is another interesting point here that I learned today and want to add to ire_and_curses' answer: the Fortran compiler seems to compute integer powers by successive multiplication.
For example
PROGRAM Test
    PRINT *, (-23) ** 6
END PROGRAM
works fine and gives 148035889 as the answer.
But for REAL exponents, the compiler uses logarithms: y**x = 10**(x * log10(y)) (maybe compilers do it differently today, but my book says so). Since the logarithm of a negative number is complex, this does not work:
PROGRAM Test
    PRINT *, (-23) ** 6.1
END PROGRAM
and even gives a compiler error:
Error: Raising a negative REAL at (1) to a REAL power is prohibited
From a mathematical point of view, this problem is also quite interesting: https://math.stackexchange.com/questions/1211/non-integer-powers-of-negative-numbers
