What library for arbitrary precision arithmetic should I use?

I need to program something that calculates a number to arbitrary precision...
but I need it to output the digits that are already "certain" (i.e. below some error bound) to a file, so that there are digits to work with while the program keeps running.
Also, most libraries for arbitrary precision seem to require a fixed precision, but what if I wanted dynamic precision, i.e. it would just go on and on...

Most algorithms that calculate a number to extended precision require that all intermediate calculations are done to a somewhat higher precision to guarantee accurate results. You normally specify your final desired precision and that's the result that you get. If you want to output the "known" accurate digits during the calculation, you'll generally need to implement the algorithm and track the accurate digits yourself.
Without knowing what number you want to calculate, I can't offer any better suggestions.
GMP/MPIR only support very basic floating point calculations. MPFR, which requires either GMP or MPIR, provides a much broader set of floating point operations.
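If it helps, here is a minimal sketch of that "track the accurate digits yourself" idea in Python using mpmath (an arbitrary-precision library, used here instead of MPFR/MPIR purely for brevity). The constant (pi), the precision schedule, the five-digit safety margin, and the file name are all assumptions for illustration; agreement between two precision levels is a heuristic, not a rigorous error bound:

    from mpmath import mp

    def emit_stable_digits(compute, out_path="digits.txt", start_dps=50, rounds=5):
        # compute() must return the value evaluated at the current mp.dps setting.
        written = 0            # number of characters already written to the file
        prev_str = ""
        with open(out_path, "w") as out:
            for i in range(rounds):
                mp.dps = start_dps * 2 ** i   # double the working precision each round
                cur_str = mp.nstr(compute(), mp.dps, strip_zeros=False)
                # Digits that agree with the previous, lower-precision result
                # are very likely settled.
                common = 0
                for a, b in zip(prev_str, cur_str):
                    if a != b:
                        break
                    common += 1
                stable = max(common - 5, 0)   # keep a small safety margin
                if stable > written:
                    out.write(cur_str[written:stable])
                    out.flush()               # digits are usable while the run continues
                    written = stable
                prev_str = cur_str

    emit_stable_digits(lambda: +mp.pi)        # e.g. digits of pi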

My advice is to use MPIR. It's a fork of GMP but with (in my opinion) a more helpful and developer-friendly crew.

Related

Understanding the complex-step in a physical sense

I think I understand what complex step is doing numerically/algorithmically.
But some questions still linger. The first two might have the same answer.
1- I replaced the partial derivative calculations of the 'Betz_limit' example with complex step and removed the analytical gradients. Looking at the recorded design_var evolution, none of the values are complex. Aren't they supposed to show up as a+bi somehow? Or does it always step in the real space?
2- Trying to picture 'cs' in a physical context. For example, a design variable of beam length (m), an objective of mass (kg), and a constraint of loads (Nm). I could be using an explicit component to calculate these (pure Python) or an external code component (pure Fortran). Numerically they can all handle complex numbers, but obviously the mass is a real value. So when we say "capable of handling complex numbers", is it just a matter of handling a+bi (where the actual mass is always 'a' and b is always equal to 0)?
3- How about the step size? I understand there won't be any subtractive cancellation errors, but what if I have a design variable normalized/scaled to 1 and a range of 0.8 to 1.2? Decreasing the step to 1e-10 does not make sense to me. I am a bit confused there.
The ability to use complex arithmetic to compute derivative approximations is based on the mathematics of complex arithmetic.
You should read about the theory to get a better understanding of why it works and how the step size issue is resolved with complex-step vs finite-difference.
There is no physical interpretation that you can make for the complex-step method. You are simply taking advantage of the mathematical properties of complex arithmetic to approximate a derivative more accurately than FD can. So the key is that your code is set up to do complex arithmetic correctly.
Sometimes, engineering analyses do actually leverage complex numbers. One aerospace example of this is the Joukowski transformation; in electrical engineering, complex numbers come up all the time in load-flow analysis of AC circuits. If you have such an analysis, you cannot easily use complex-step to approximate derivatives, since the analysis itself is already complex. In these cases it is technically possible to use a more general class of numbers called hyper-dual numbers, but that is not supported in OpenMDAO, so for an analysis like this you could not use complex-step.
Also, occasionally there are implementations of methods that are not complex-step safe, which will prevent you from using it unless you define a new complex-step-safe version. The simplest example of this is the np.absolute() function in the numpy library for Python. When passed a complex number, it returns the magnitude of that number:
abs(a + bj) = sqrt(a^2 + b^2), e.g. abs(1 + 1j) = sqrt(2) ≈ 1.4142
While not mathematically incorrect, this implementation would mess up the complex-step derivative approximation.
Instead you need an alternate version that keeps the derivative information in the imaginary part, tied to the sign of the real part:
abs(a + bj) = a + bj if a >= 0, and -(a + bj) if a < 0
So in summary, you need to watch out for these kinds of functions that are not implemented correctly for use with complex-step. If you have them, you need to use alternate, complex-step-safe versions. And if your analysis itself uses complex numbers, then you cannot use complex-step derivative approximations either.
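As a rough sketch of what such a complex-step-safe replacement might look like (the name cs_safe_abs and the numpy-based implementation are just illustration, not OpenMDAO's actual code):

    import numpy as np

    def cs_safe_abs(x):
        # Absolute value that keeps derivative information in the imaginary
        # part: negate the whole number when the real part is negative.
        x = np.asarray(x)
        if np.iscomplexobj(x):
            return np.where(x.real < 0.0, -x, x)
        return np.abs(x)

    # d|x|/dx at x = -2.0 via complex step; the exact answer is -1.0.
    # np.abs would return a purely real magnitude, so the derivative
    # information carried in the imaginary part would be lost.
    h = 1e-20
    print(np.imag(cs_safe_abs(-2.0 + 1j * h)) / h)   # -1.0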
With regard to your step-size question, again I refer you to the complex-step literature for greater detail. The basic idea is that without subtractive cancellation you are free to use a very small step with complex-step, without fear of losing accuracy to numerical issues. So typically you will use a step of 1e-20 or smaller. Since the complex-step error scales on the order of step^2, using such a small step gives effectively exact results. You need not worry about scaling issues in most cases if you just take a small enough step.
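To see the effect of that tiny step in practice, here is a toy demonstration of the approximation df/dx ≈ Im f(x + ih) / h; the function f and the point x0 are arbitrary choices, not anything from OpenMDAO:

    import numpy as np

    def f(x):
        # toy "analysis": any smooth function written with complex-safe operations
        return np.exp(x) * np.sin(x)

    x0 = 1.3
    h = 1e-20                                  # far smaller than any usable finite-difference step
    cs_deriv = np.imag(f(x0 + 1j * h)) / h     # complex-step approximation
    exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
    print(cs_deriv, exact)                     # agree to machine precision; error is O(h^2)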

Is there an easy way to implement binary arithmetic without 2's complement?

As a beginner computer science student, please excuse my limited knowledge of the field.
Initially, we learned how to perform the following basic binary arithmetic manually.
How to do addition with binary numbers
How to do subtraction with binary numbers
However, even as a novice programmer, I realized that the methods we learned by hand are challenging to translate directly into computer algorithms. Maybe this is just a personal perception.
Later, we studied 2's complement, which made things a bit easier (e.g., negative numbers were simpler to implement and subtraction was now just the addition of negative numbers).
My question is, was there a way to perform all arithmetic operations (multiplication, division, addition, and subtraction) without using 2's complement? Or was the invention of 2's complement solely for this purpose? This is a purely pedagogical exercise.
2's complement works great. What exactly would you want to improve? It can deal with arbitrarily large numbers, and only uses a chain of very simple processing units to do its job.
The main exception is with floating point numbers, which don't use 2's complement. I'm sure you'll learn about IEEE-754 soon, it's a lot of fun :)
Finally, no one is forcing you to use 2's complement. You can do whatever you want; it's just that 2's complement is great and cheap. You can make your software calculate everything in Roman numerals if you so desire. It's not going to be very fast, though.
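As a small illustration of the "subtraction is just addition of a negative" point, here is a sketch of 8-bit two's complement arithmetic in Python; the word size and helper names are arbitrary choices:

    BITS = 8
    MASK = (1 << BITS) - 1

    def to_twos(n):
        # Encode a small signed integer into an 8-bit two's complement word.
        return n & MASK

    def from_twos(word):
        # Decode an 8-bit two's complement word back to a signed integer.
        return word - (1 << BITS) if word & (1 << (BITS - 1)) else word

    def add(a, b):
        # One adder handles both addition and, via negation, subtraction.
        return (a + b) & MASK

    def neg(a):
        # Two's complement negation: invert all bits, then add one.
        return (~a + 1) & MASK

    # Subtraction is just addition of the negated operand: 5 - 12 = -7
    print(from_twos(add(to_twos(5), neg(to_twos(12)))))   # -7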

Parallel arithmetic on large integers

Are there any software tools for performing arithmetic on very large numbers in parallel? What I mean by parallel is that I want to use all available cores on my computer for this.
The constraints are wide open for me. I don't mind trying any language or tech.
Please and thanks.
It seems like you are either dividing really huge numbers, or are using a suboptimal algorithm. Parallelizing the work across a fixed number of cores will only tweak the constants, but will have no effect on the asymptotic behavior of your operation. And if you're talking about hours for a single division, asymptotic behavior is what matters most. So I suggest you first make sure your asymptotic complexity is as good as it can be, and then start looking for ways to improve the constants, perhaps by parallelizing.
Wikipedia suggests Barrett division, and GMP has a variant of that. I'm not sure whether what you've tried so far is on a similar level, but unless you are sure that it is, I'd give GMP a try.
See also Parallel Modular Multiplication on Multi-core Processors for recent research. Haven't read into that myself, though.
The only effort I am aware of is a CUDA library called CUMP. However, the library only provides support for addition, subtraction, and multiplication. Still, you can use multiplication to perform the division on the GPU and check whether the quality of the result is sufficient for your particular problem.
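On the "use multiplication to perform the division" point: one common approach is Newton's iteration for a fixed-point reciprocal, which turns one big division into a handful of big multiplications (and multiplications parallelize much better). A rough Python sketch with plain Python integers; the precision bookkeeping is illustrative, not tuned the way GMP or CUMP would do it:

    def newton_reciprocal(d, k):
        # Return X such that X is (very nearly) 2**k // d, using only
        # multiplications and shifts plus one tiny word-sized seed division.
        assert d > 0 and k >= 192
        L = d.bit_length()
        s = max(L - 64, 0)
        top = d >> s                              # top (at most) 64 bits of d
        x = ((1 << 128) // top) << (k - s - 128)  # cheap seed: ~62 correct bits
        two = 2 << k                              # the constant 2 at scale 2**k
        prec = 62
        while prec < k:                           # each pass roughly doubles the correct bits
            x = (x * (two - d * x)) >> k
            prec *= 2
        return x

    def divide(a, d):
        # Compute a // d with the reciprocal above plus a small fix-up.
        k = max(2 * max(a.bit_length(), d.bit_length()) + 4, 192)
        q = (a * newton_reciprocal(d, k)) >> k
        r = a - q * d
        while r >= d:                             # the estimate may be off by a unit or two
            q, r = q + 1, r - d
        while r < 0:
            q, r = q - 1, r + d
        return q

    a, d = 10**200 + 12345, 3**97 + 7
    print(divide(a, d) == a // d)                 # True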

Mathematical division in circuitry?

(Is this the right site for this question?)
I've recently been looking into circuits, specifically the ones used to perform mathematical operations such as addition, subtraction, multiplication, and division. I got a book from the library about circuits in general and mainly read the part about math, but it didn't seem to have any section on division. I fully understood all the logic gates and their uses in addition, subtraction, and multiplication, but the book had nothing on division. Google proved to be not much of a help either. So my questions are:
A) Do processors do division? Or is it done later on, like in the machine code or higher level programming language?
If the answer to the first part of that is yes, then I would like to know:
B) How do they perform division? What binary method of division do they use? And what arrangement of logic gates does it use (a diagram of the gates, preferably)?
A) Yes, in many cases (x86 is one example). In other cases, there may be opcodes that perform parts of the division operation. In yet other cases, the whole thing may have to be emulated in software.
B) A variety of techniques. This book has a whole chapter on division techniques: Finite Precision Number Systems and Arithmetic.
Binary restoring division is perhaps the easiest to understand; it's equivalent to the long division that you would have done in school. Binary non-restoring division is the same thing but rearranged, which results in needing fewer operations. SRT division takes it even further. You can then get into non-binary division (i.e. based on higher radices).
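For concreteness, here is a sketch of unsigned binary restoring division in Python, mirroring the one-bit-per-step structure a hardware divider would use (a real circuit works on fixed-width registers rather than Python integers):

    def restoring_divide(dividend, divisor, bits):
        # Unsigned binary restoring division: the schoolbook long division
        # that a hardware divider performs one bit per clock cycle.
        assert divisor != 0
        remainder = 0
        quotient = 0
        for i in range(bits - 1, -1, -1):                        # most significant bit first
            remainder = (remainder << 1) | ((dividend >> i) & 1) # bring down the next bit
            remainder -= divisor                                 # trial subtraction
            if remainder < 0:
                remainder += divisor                             # "restore" on a failed subtraction
                quotient = (quotient << 1) | 0
            else:
                quotient = (quotient << 1) | 1
        return quotient, remainder

    print(restoring_divide(100, 7, 8))                           # (14, 2)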
On top of the basic division algorithm, you'll also need to handle negative numbers, special cases, and floating-point (if you're into that sort of thing). Again, lots of techniques exist.
Each method has trade-offs; I doubt it's common knowledge which particular variant, say, Intel uses.

Articles on analysis of mixed precision numerical algorithms?

Many numerical algorithms tend to run on 32/64-bit floating point.
However, what if you had access to lower-precision (and less power-hungry) co-processors? How can they then be utilized in numerical algorithms?
Does anyone know of good books/articles that address these issues?
Thanks!
Numerical analysis theory provides methods to predict the precision error of operations, independent of the machine they run on. There are always cases where, even on the most advanced processors, operations may lose accuracy.
Some books to read about it:
Accuracy and Stability of Numerical Algorithms by N.J. Higham
An Introduction to Numerical Analysis by E. Süli and D. Mayers
If you can't find them or are too lazy to read them, tell me and I will try to explain some things to you. (Well, I'm no expert in this because I'm a computer scientist, but I think I can explain the basics to you.)
I hope you understand what I wrote (my English is not the best).
Most of what you are likely to find will be about doing floating-point arithmetic on computers irrespective of the size of the representation of the numbers themselves. The basic issues surrounding f-p arithmetic apply whatever the number of bits. Off the top of my head, these basic issues will be:
range and accuracy of numbers that are represented;
careful selection of algorithms which are robust and reliable on f-p numbers rather than on real numbers;
the perils and pitfalls of iterative and lengthy calculations in which you run the risk of losing precision and accuracy.
In general, the fewer bits you have the sooner you run into problems, but just as there are algorithms which are useful in 32 bits, there are algorithms which are useful in 8 bits. Sometimes the same algorithm is useful however many bits you use.
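As a small (made-up) illustration of that point: naive accumulation in 32-bit arithmetic drops small terms much sooner than in 64-bit, and the ordering of operations matters:

    import numpy as np

    def naive_sum(values, dtype):
        # Accumulate left to right in the given precision.
        acc = dtype(0.0)
        for v in values:
            acc += dtype(v)
        return acc

    values = [1.0] + [1e-8] * 100_000             # one big term, many tiny ones
    print(naive_sum(values, np.float32))          # 1.0     -- the tiny terms all vanish
    print(naive_sum(values, np.float64))          # ~1.001  -- they are retained
    print(naive_sum(sorted(values), np.float32))  # ~1.001  -- summing small-to-large helps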
As @George suggested, you should probably start with a basic text on numerical analysis, though I think the Higham book is not a basic text.
Regards
Mark
