What is the value of %pi in Scilab?

In the Scilab documentation %pi is described as:
%pi returns the floating-point number nearest the value of π.
But what exactly is that number? Is it processor dependent?
By using "format()" in the Scilab console you can only show up to 25 digits.

As the Scilab article on %eps indicates, the floating point relative accuracy is not processor dependent: it is 2^(-52), because Scilab uses IEEE 754 double-precision binary floating-point format. According to Exploring Binary, the double-precision approximation to pi is
1.1001001000011111101101010100010001000010110100011 x 2^1
which is exactly
3.141592653589793115997963468544185161590576171875
Most of these digits are useless, as the actual decimal expansion of pi begins with
3.14159265358979323846...
The relative error is about 3.9e-17, within the promised 2^(-52) = 2.2e-16.
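For readers without Scilab at hand, the same exact expansion can be recovered in any IEEE 754 double environment. A minimal Python sketch (Python's float is the same binary64 double):

import math
from decimal import Decimal

# Decimal(float) converts exactly, so this prints the full decimal
# expansion of the double nearest pi rather than a rounded display.
print(Decimal(math.pi))
# 3.141592653589793115997963468544185161590576171875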

Related

How can an R package produce a p-value with an exponent of -237?

I'm using the R package MAST and it produces some impressively small P-values -- so small I didn't think they could be stored as regular floating point values. Quadruple precision reaches only $10^{-34}$ (source). How is this possible?
This isn't just R; computers in general can store tiny numbers because floating-point numbers are represented with a sign bit, a fraction (significand), and an exponent. The fraction limits precision to about 16 significant digits, but the space reserved for the exponent permits very large and very small magnitudes: normal doubles extend down to about 10^-308 and subnormals to about 5 x 10^-324, so a p-value with exponent -237 fits comfortably. See the R documentation on machine precision (noting e.g. the difference between double.eps and double.xmin), and the Wikipedia page on IEEE 754-1985, which describes the original standard for representing floating-point numbers (updated in 2008).
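To make the distinction between precision and range concrete, here is a minimal Python sketch (Python's float is the same IEEE 754 double that R uses):

import sys

print(sys.float_info.epsilon)  # ~2.22e-16: relative precision, analogous to double.eps
print(sys.float_info.min)      # ~2.23e-308: smallest normal double, analogous to double.xmin
print(5e-324)                  # smallest subnormal double
print(1e-237)                  # a p-value with exponent -237 stores just fine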

Is there a way to display more than 25 digits when outputting in Scilab?

I'm working with Scilab 5.5.2, and when using the format command I can display at most 25 digits of a number. Is there a way to display more digits than that?
Scilab operates with double-precision floating-point numbers; it does not support variable-precision arithmetic. Double precision means a relative error of %eps, which is 2^(-52), approximately 2.2e-16.
This means you can't even get 25 correct decimal digits: when using format(25) you get garbage at the end. For example,
format(25); sqrt(3)
returns 1.732050807568877 1931766
I separated the last 7 digits here because they are wrong; the correct value of sqrt(3) begins with
1.732050807568877 2935274
Of course, if you don't mind the digits being wrong, you can have as many as you want:
strcat([sprintf('%.15f', sqrt(3)), "1111111111111111111111111111111"])
returns 1.7320508075688771111111111111111111111111111111.
But if you want arbitrary-precision arithmetic on real numbers, Scilab is not the right tool for the job (correction: phuclv pointed out the Multiple Precision Arithmetic Toolbox, which might work for you). Among free software packages, the mpmath Python library implements arbitrary-precision real arithmetic: it can be used directly or via SageMath or SymPy. Commercial packages (Matlab, Maple, Mathematica) support variable precision too.
As for Scilab, I recommend using formatted print commands such as fprintf or sprintf, because they actually care about the output being meaningful. Example: printf('%.25f', sqrt(3)) returns
1.7320508075688772000000000
with garbage replaced by zeros. The last nonzero digit is still off by 1, but at least it's not meaningless.
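Incidentally, the format(25) digits above appear to be not random garbage but the exact decimal expansion of the stored double, which simply stops agreeing with sqrt(3) after the 17th significant digit. You can check this outside Scilab with a minimal Python sketch (same IEEE 754 double underneath):

import math

# %.25f prints the correctly rounded 25-decimal expansion of the
# stored double; note how it reproduces Scilab's "wrong" digits.
print(f"{math.sqrt(3):.25f}")
# 1.7320508075688771931766041 (on a typical IEEE 754 platform)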
Scilab uses the double-precision floating-point type, which has 53 bits of mantissa and is only precise to ~15-17 significant decimal digits. There's no point in printing digits beyond that.
If 25 digits of accuracy are needed, you can use a quadruple-precision or double-double arithmetic library like the Multiple Precision Arithmetic Toolbox from ATOMS.
If you need even more precision, then the only way is to use an arbitrary-precision library like mpscilab, Xnum, etc.
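Since mpmath came up above, here is a minimal sketch of the same computation there (assuming the mpmath Python package is installed):

from mpmath import mp, sqrt

mp.dps = 50      # ask for 50 significant decimal digits
print(sqrt(3))   # 1.7320508075688772935274463415058723669428052538104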

Decimal expansions using lazy sequences in Clojure

Is there a package which represents decimal expansions in Clojure using lazy sequences?
For example, syntax like
(def r '(B N x_1 x_2 x_3 ...))
could represent a real number r in base B, with decimal expansion (in math notation)
r = N . x_1 x_2 x_3 ...
with integer significand N and decimal digits 0 ≤ x_i ≤ B-1.
If the type were "smart" enough, it could handle different decimal expansions of real numbers as valid inputs, such as (10 0 9 9 9 ...) and (10 1), and consistently output decimal expansions in the latter form. It should also be able to handle overflowing digits, like reducing (10 0 15) to (10 1 5).
Is there any obstruction to working with a lazy-sequence representation of real numbers instead of the usual decimal expansion? I don't know how efficient it would be in contrast to floating-point, but it would be convenient for doing rigorous precise arithmetic involving real numbers. For example, I think there are algorithms which recursively compute the decimal expansions of π and e.
TL;DR
The short answer is that no, there is no such library, and I doubt there ever will be one. It is possible to compute numbers to accuracy greater than IEEE double precision, but representing them as a sequence of single digits is immensely wasteful in memory and impossible to do entirely lazily in the general case. For instance, try to compute (+ '(0 9 0 ...) '(0 9 1 ...)) lazily, digit by digit: whether a given output digit gets bumped by a carry can depend on digits arbitrarily far down the input streams, so a run of digit sums equal to 9 defers the decision indefinitely.
The Long Version
When "computing" (approximating) the value of a real number or expression to machine precision, the operation computed is the taylor series expansion of the desired expression to N terms, until that the value of the N+1th term is less than machine precision at which point the approximation is aborted because the hardware convention cannot represent more information.
Typically you will only see the 32- and 64-bit IEEE floating-point formats; however, the IEEE floating-point specification extends out to a whopping 128 bits of representation.
For the sake of argument, let's assume that someone extends clojure.core.math with some representation arbitrary-precision-number: a software floating-point implementation against a backing ByteArray which, through a protocol, appears for all intents and purposes to be a normal java.lang.Number. All that this representation achieves is to push the machine epsilon (the representational error limit) even lower than the ~5 x 10^-16 bound offered by IEEE DOUBLE/64. Building such a software floating-point system is entirely viable and relatively well explored; however, I am not aware of a Java/Clojure implementation thereof.
True arbitrary precision is not possible because we have finite-memory machines to build upon, so at some point we must compromise among performance, memory, and precision. Given some library which can correctly and generally represent an arbitrary Taylor series evaluation as a sequence of decimal digits, I claim that the overwhelming majority of operations on such numbers will be truncated to some precision P anyway, if only because they must eventually be compared against fixed-precision representations such as floats or doubles, the industry standards for floating-point representation.
To blow this well and truly out of the water: at a distance of 1 light-year, an angular deviation of 10^-100 degrees would result in a navigational error of approximately 1.65117369558 x 10^-86 meters. This means that the existing machine epsilon of ~5 x 10^-16 in IEEE DOUBLE/64 is entirely acceptable even for interstellar navigation.
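That arithmetic is easy to sanity-check; a minimal Python sketch, using the standard value of one light-year in meters:

import math

light_year_m = 9.4607e15          # one light-year in meters
angle_rad = math.radians(1e-100)  # 1e-100 degrees converted to radians
print(light_year_m * angle_rad)   # ~1.65e-86 m of arc-length error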
As for computing the decimal terms of π or other interesting series as a lazy sequence, one can make headway there only because the goal is the representation and investigation of a single series/sequence, rather than addition, subtraction, multiplication, and so forth between two or more such representations.
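To make both halves of the discussion concrete, here is a minimal Python sketch (Python generators standing in for Clojure lazy seqs; pi_digits is Gibbons' unbounded spigot, and normalize is the overflow reduction the question describes):

from itertools import islice

def normalize(base, n, digits):
    # Carry-propagate overflowing digits right-to-left so that every
    # digit lands in [0, base): normalize(10, 0, [15]) -> (10, 1, [5]).
    carry, out = 0, []
    for d in reversed(digits):
        carry, d = divmod(d + carry, base)
        out.append(d)
    return base, n + carry, out[::-1]

def pi_digits():
    # Gibbons' unbounded spigot: lazily yields 3, 1, 4, 1, 5, 9, ...
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < m * t:
            yield m
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)

print(normalize(10, 0, [15]))         # (10, 1, [5])
print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

Note that the normalization only works on finite digit lists; on a truly lazy infinite stream, carries can arrive from arbitrarily far away, which is exactly the obstruction described above.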

The precision of the natural logarithm is not correct in CLISP. What might be wrong?

What went wrong?
[1]> (log (exp 1))
0.99999994
This is due to the finite precision of floating-point representations of fractional numbers.
Please see: http://en.wikipedia.org/wiki/Floating_point
(exp 1) is going to be an approximation of e (which requires infinite precision to represent perfectly). The natural logarithm of that approximation will be approximately (but not exactly) 1. Understanding floating-point representation will allow you to understand why this happens.
CLISP is using your machine architecture's native representation of floats. Most commonly by far, this representation is one of the formats specified by IEEE 754 (typically 32- or 64-bit; in your case it looks like 32-bit). In a nutshell, fractional parts are represented by a sum of inverse powers of 2 (i.e., some combination of 1/2, 1/4, 1/8, and so on, down to 1/2^23 for a 32-bit float's significand).
Try with double precision floating point:
(log (exp 1.0d0))
=> 1.0D0 ; at least in Clozure CL
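The same contrast can be reproduced outside Lisp; a minimal Python sketch using NumPy's float32 to mimic CLISP's default single-floats (assuming NumPy is installed):

import numpy as np

# Single precision: exp(1) is rounded to 24 significant bits, so the
# logarithm of that rounded value is visibly different from 1.
print(np.log(np.exp(np.float32(1.0))))   # 0.99999994 on typical platforms

# Double precision: the rounding error is far smaller, and the
# result here rounds back to exactly 1.0.
print(np.log(np.exp(np.float64(1.0))))   # 1.0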

IEEE Double Precision

What are the largest and smallest numbers that can be represented in the IEEE double-precision standard, and how are they stored?
The largest "number" you can store in IEEE754 double precision is Infinity, the smallest is -Infinity.
If you mean non-infinite representations, then the largest magnitude is roughly
±1.7976931348623157 x 10^308. At the other end, the smallest positive normal double is about 2.2 x 10^-308, and subnormals extend down to about 4.9 x 10^-324.
See the Wikipedia article on double-precision floating-point format for the bit-level representation.
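These limits are easy to inspect in any IEEE 754 double environment; a minimal Python sketch:

import sys

print(sys.float_info.max)      # 1.7976931348623157e+308, largest finite double
print(sys.float_info.min)      # 2.2250738585072014e-308, smallest positive normal
print(5e-324)                  # smallest positive subnormal
print(sys.float_info.max * 2)  # inf: overflow beyond the largest finite value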
