Genius Math Tool: How to display fraction answers as decimal?

When I do some calculations with genius I get the answers as fractions. For example:
genius> 1-((3/4)^4)
=
175
---
256
or
genius> 10/54
= 5/27
How do I get the answers as a normal floating point/decimal number?
For example 5/27 = 0.185185185185

You can use float() to turn a number into a float. It is listed in the Numeric section of the help.
genius> help float
float (x)
Description: Make number a float
genius> 5/27
= 5/27
genius> float(ans)
= 0.185185185185
genius> float(5/27)
= 0.185185185185
From the manual, section 5.1.1 Numbers, you will see that division creates a special number type, a rational. You can avoid this by making one or both operands floats, simply by appending .0:
genius> 5.0/27
= 0.185185185185
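For comparison (not part of Genius), Python draws the same rational-versus-float distinction with its standard fractions module; a minimal sketch:
from fractions import Fraction

x = Fraction(5, 27)     # exact rational, analogous to Genius's 5/27
print(x)                # 5/27
print(float(x))         # 0.185185... as a float
print(5.0 / 27)         # making one operand a float gives a float directly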

Related

R not calculating large cubes correctly?

There's been some news lately about the discovery of three cubes that sum to 42. Namely, Andrew Sutherland and Andrew Booker discovered that (-80538738812075974)^3 + 80435758145817515^3 + 12602123297335631^3 = 42
(https://math.mit.edu/~drew/)
I was tinkering with this a bit, and I'm not getting 42 in R.
I do get it in other places (WolframAlpha), but R gives me this:
> (-80538738812075974)^3 + 80435758145817515^3 + 12602123297335631^3
[1] 1.992544e+35
Any idea what I'm doing wrong? Is it a limitation with large numbers in R? Or am I (very probably) just doing something dumb?
UPDATE
As pointed out by @MrFlick, this is a well-known floating-point arithmetic issue. R stores the large numbers in your example as double-precision numbers.
Also, be aware of integer overflow. See a related discussion here. Note that base R will not throw an error (just a warning) when integer overflow occurs. The bit64 package may sometimes help, but it won't do the job in your case, since 64-bit integers are still not wide enough for these values.
If you want to calculate with large (integer) numbers in R, you can use the gmp library, like this:
library(gmp)
as.bigz("-80538738812075974")^3 + as.bigz("80435758145817515")^3 + as.bigz("12602123297335631")^3
#[1] 42
# or, even better, also use an integer literal in the exponent:
as.bigz("-80538738812075974")^3L + as.bigz("80435758145817515")^3L + as.bigz("12602123297335631")^3L
As @MrFlick already pointed out, you are using numeric (double) values, so the calculated result is only approximately right. When you use integer literals, you get warnings and R converts them to numeric:
(-80538738812075974L)^3L + 80435758145817515L^3L + 12602123297335631L^3L
#[1] 1.992544e+35
#Warning messages:
#1: non-integer value 80538738812075974L qualified with L; using numeric value
#2: non-integer value 80435758145817515L qualified with L; using numeric value
#3: non-integer value 12602123297335631L qualified with L; using numeric value
Note that you have to pass a string to as.bigz. If you write a plain number, R will treat it as a double, or convert large integer literals to double as above, and might lose precision.
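For comparison (outside R), any language with native arbitrary-precision integers gives the exact result; a minimal Python sketch:
# Python integers have arbitrary precision, so the sum of cubes is exact.
a = -80538738812075974
b = 80435758145817515
c = 12602123297335631
print(a**3 + b**3 + c**3)   # 42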

What is the relationship between signed negative and positive versions of the same binary integer in decimal?

I want to preface this question by stating that it at first appears that this is a duplicate of several other questions here on SO, but none of the answers to those questions answered my question and thus I am asking it.
What is the relationship between a binary integer interpreted as a positive integer versus that exact same binary integer interpreted as a negative integer in terms of decimal? Let's take for example the integer 5:
5 is 101
-5 is 11111011
11111011 is 251 when interpreted as an unsigned number.
The question is: what is the decimal relationship between -5 and 251? Is there a direct relationship aside from what happens in the binary number system? In other words, is there a rule in decimal by which we can directly map any given decimal integer to the decimal integer whose binary representation is the same bit pattern as the original number converted from positive to negative, and vice versa?
Note that -5 is not actually 11111011 in binary--that is the binary representation in eight bits. If you use a different number of bits you get a different binary representation. For example, if you use 16 bits, as is often done, you get 1111111111111011, which is 65531.
This then is the key. In eight bits we consider 2^8 which is 256. (That caret stands for exponentiation.) We then see that -5 is represented by 256 - 5.
So the final answer is this: for a given positive integer n that is to be represented in b binary bits, the number -n is then represented by
(2 ^ b) - n
At least, if the number of bits b is large enough. Is that clear? Much more can be said than that, but you are better off reading more about two's-complement notation in a book or large web page.
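A minimal Python sketch of that (2 ^ b) - n rule (the function name is just for illustration; the final modulo only matters for n = 0):
def twos_complement(n, bits):
    # Unsigned value whose bit pattern represents -n in `bits`-bit two's complement.
    return (2**bits - n) % 2**bits

print(twos_complement(5, 8))    # 251, i.e. 0b11111011
print(twos_complement(5, 16))   # 65531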
Assuming that we work with two's complement, a vector of n bits
a = <a(n-1), a(n-2), ..., a(2), a(1), a(0)>
is interpreted as:
value_signed = -[a(n-1) * 2^(n-1)] + sum(i=0, i=n-2) {a(i) * 2^i}
whereas treating it as unsigned leads to:
value_unsigned = sum(i=0, i=n-1) {a(i) * 2^i}
Hence the difference is:
value_unsigned - value_signed = 2 * [a(n-1) * 2^(n-1)] = a(n-1) * 2^n
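To see both readings of the same bit pattern concretely, here is a small Python sketch using int.from_bytes, which can decode the same byte as signed or unsigned:
raw = bytes([0b11111011])                             # the 8-bit pattern 11111011
unsigned = int.from_bytes(raw, "big", signed=False)   # 251
signed = int.from_bytes(raw, "big", signed=True)      # -5
print(unsigned, signed, unsigned - signed)            # 251 -5 256, and 256 = 2^8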
Hope it helps.

a negative unsigned int?

I'm trying to wrap my head around the truetype specification. On this page, in the section 'cmap' format 4, the parameter idDelta is listed as an unsigned 16-bits integer (UInt16). Yet, further down, a few examples are given, and here idDelta is given the values -9, -18, -27 and 1. How is this possible?
This is not a bug in the spec. The reason they show negative numbers in the idDelta row of the examples is that "All idDelta[i] arithmetic is modulo 65536" (quoted from the section just above). Here's how that works.
The formula to get the glyph index is
glyphIndex = idDelta[i] + c
where c is the character code. Since this expression must be taken modulo 65536, it is equivalent to the following expression if you were using integers larger than 2 bytes:
glyphIndex = (idDelta[i] + c) % 65536
idDelta is a u16, so say it had the maximum value 65535 (0xFFFF); then glyphIndex would be equal to c - 1, since:
0xFFFF + 2 = 0x10001
0x10001 % 0x10000 = 1
You can think of this as a 16-bit integer wrapping around to 0 when an overflow occurs.
Now remember that the modulo here amounts to repeatedly subtracting the modulus and keeping the remainder. In this case, since idDelta and the character code are each at most 16 bits, at most one subtraction is ever needed: the largest value you can get from adding two 16-bit integers is 0x1FFFE, which is smaller than 2 * 0x10000. That means a shortcut is to subtract 65536 (0x10000) instead of performing the modulo:
glyphIndex = (idDelta[i] - 0x10000) + c
And this is what the example shows as the values in the table. Here's an actual example from a .ttf file I've decoded:
I want the index for the character code 97 (lowercase 'a').
97 is greater than 32 and smaller than 126, so we use index 2 of the mappings.
idDelta[2] == 65507
glyphIndex = (65507 + 97) % 65536 = 68, which is the same as (65507 - 65536) + 97 = 68
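A minimal Python sketch of just that modular arithmetic (the helper name is illustrative; this is not a full cmap parser):
ID_DELTA = 65507   # idDelta[2] from the decoded font above

def glyph_index(id_delta, char_code):
    # All idDelta arithmetic is modulo 65536, so the sum wraps into 0..65535.
    return (id_delta + char_code) % 0x10000

print(glyph_index(ID_DELTA, 97))   # 68
print((ID_DELTA - 0x10000) + 97)   # 68, the signed-interpretation shortcut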
The definition and use of idDelta on that page are not consistent. In the SubHeader struct it is defined as an int16, while a little earlier the same subheader is listed as UInt16*4.
It's probably a bug in the spec.
If you look at actual implementations, like this one from perl Tk, you'll see that idDelta is usually given as signed:
typedef struct SUBHEADER {
    USHORT firstCode;      /* First valid low byte for subHeader. */
    USHORT entryCount;     /* Number valid low bytes for subHeader. */
    SHORT  idDelta;        /* Constant adder to get base glyph index. */
    USHORT idRangeOffset;  /* Byte offset from here to appropriate
                            * glyphIndexArray. */
} SUBHEADER;
Or see the implementation from libpdfxx:
struct SubHeader
{
    USHORT firstCode;
    USHORT entryCount;
    SHORT  idDelta;
    USHORT idRangeOffset;
};

E notation with negative numbers

I'm a bit confused with e notations and small negative numbers.
I understand that e means 10^exponent,
like 6e5 is equal to 6 * 10^5 = 600000
and 6e-5 is equal to 6 * 10^-5 = 0.00006
But lately I found some configuration files that consist of numbers like:
1.215e-011
1.33e-002
7.20e-004
so how would I go with them?
I understand that the sign shows whether the order of magnitude is positive or negative, but what about the number behind the sign? It starts with a zero. So is the zero ignored, or is the number smaller than zero?
So what I would like to know is which would be the correct way if my example number is 6e-005:
Way 1: 6e-005 = 6 * 10^-5 = 0.00006
Way 2: 6e-005 = 6 * 10^-0.005 = 5.93131856794
which is the correct approach? or is there a third way? Thanks!
Just ignore the leading zeros. 6e-005 == 6e-5.
They are sometimes used so that all numbers in a context have a fixed format.
The exponent is padded with zeros to a fixed width of three digits, so "Way 1" is the correct interpretation.
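A quick check in Python (any language that parses E notation behaves the same way):
print(6e-005 == 6e-5)         # True: leading zeros in the exponent are ignored
print(float("1.215e-011"))    # 1.215e-11
print(float("1.33e-002"))     # 0.0133
print(float("7.20e-004"))     # 0.00072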

How do computers evaluate huge numbers?

If I enter a value, for example
1234567 ^ 98787878
into Wolfram Alpha it can provide me with a number of details. This includes decimal approximation, total length, last digits etc. How do you evaluate such large numbers? As I understand it a programming language would have to have a special data type in order to store the number, let alone add it to something else. While I can see how one might approach the addition of two very large numbers, I can't see how huge numbers are evaluated.
10^2 could be calculated through repeated addition. However, a number such as the one in the example above would require a gigantic loop. Could someone explain how such large numbers are evaluated? Also, how could someone create a custom datatype to support large numbers in C#, for example?
Well, it's quite easy and you could have done it yourself.
The number of digits can be obtained via the logarithm:
since `A^B = 10 ^ (B * log(A, 10))`,
in our case (A = 1234567; B = 98787878) we can compute that
`B * log(A, 10) = 98787878 * log(1234567, 10) = 601767807.4709646...`
the integer part + 1 (601767807 + 1 = 601767808) is the number of digits.
The first few (say, five) digits can be obtained via the logarithm as well;
now we analyze the fractional part of
B * log(A, 10) = 98787878 * log(1234567, 10) = 601767807.4709646...
f = 0.4709646...
first digits are 10^f (decimal point removed) = 29577...
The last few (say, five) digits can be obtained as the corresponding remainder:
last five digits = A^B rem 10^5
A rem 10^5 = 1234567 rem 10^5 = 34567
A^B rem 10^5 = ((A rem 10^5)^B) rem 10^5 = (34567^98787878) rem 10^5 = 45009
last five digits are 45009
You may find BigInteger.ModPow (C#) very useful here.
Finally
1234567^98787878 = 29577...45009 (601767808 digits)
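The same three computations can be sketched in Python; the three-argument pow does the modular exponentiation, and the leading digits come from the fractional part of the logarithm:
import math

A, B = 1234567, 98787878

t = B * math.log10(A)                  # 601767807.47...
digits = math.floor(t) + 1             # 601767808 digits in total
leading = 10 ** (t - math.floor(t))    # 2.9577... -> leading digits 29577
trailing = pow(A, B, 10**5)            # 45009, the last five digits
print(digits, leading, trailing)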
There are usually libraries providing a bignum datatype for arbitrarily large integers (e.g. storing the number as a sequence of machine words of size n, mapping bits k*n ... (k+1)*n - 1 to word k for k = 0 .. some m that depends on n and the number's magnitude, and redefining the arithmetic operations on that representation). For C#, you might be interested in BigInteger.
Exponentiation can be broken down recursively:
pow(a,2*b) = pow(a,b) * pow(a,b);
pow(a,2*b+1) = pow(a,b) * pow(a,b) * a;
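A minimal Python sketch of that recursion (exponentiation by squaring; the function name is just for illustration):
def power(a, b):
    # pow(a, 2*b) = pow(a, b) * pow(a, b); pow(a, 2*b+1) = pow(a, b) * pow(a, b) * a
    if b == 0:
        return 1
    half = power(a, b // 2)
    return half * half if b % 2 == 0 else half * half * a

print(power(3, 13))   # 1594323, i.e. 3**13, using O(log b) multiplications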
There are also number-theoretic results that have engendered special algorithms to determine properties of large numbers without actually computing them (to be precise: without computing their full decimal expansion).
To compute how many digits there are, one uses the following expression:
decimal_digits(n) = 1 + floor(log_10(n))
This gives:
decimal_digits(1234567^98787878) = 1 + floor(log_10(1234567^98787878))
= 1 + floor(98787878 * log_10(1234567))
= 1 + floor(98787878 * 6.0915146640862625)
= 1 + floor(601767807.4709647)
= 601767808
The trailing k digits are computed by doing exponentiation mod 10^k, which keeps the intermediate results from ever getting too large.
The approximation will be computed using a (software) floating-point implementation that effectively evaluates a^(98787878 * log_a(1234567)) to some fixed precision, for some number a that makes the arithmetic work out nicely (typically 2, e, or 10). This also avoids the need to actually work with millions of digits at any point.
There are many libraries for this, and the capability is built in in the case of Python. You seem primarily concerned with the size of such numbers and the time it may take to do computations like the exponentiation in your example. So I'll explain a bit.
Representation
You might use an array to hold all the digits of large numbers. A more efficient way would be to use an array of 32-bit unsigned integers and store "32-bit chunks" of the large number. You can think of these chunks as individual digits in a number system with 2^32 distinct digits or characters. I used an array of bytes to do this on an 8-bit Atari 800 back in the day.
Doing math
You can obviously add two such numbers by looping over all the digits and adding elements of one array to the other and keeping track of carries. Once you know how to add, you can write code to do "manual" multiplication by multiplying digits and putting the results in the right place and a lot of addition - but software will do all this fairly quickly. There are faster multiplication algorithms than the one you would use manually on paper as well. Paper multiplication is O(n^2) where other methods are O(n*log(n)). As for the exponent, you can of course multiply by the same number millions of times but each of those multiplications would be using the previously mentioned function for doing multiplication. There are faster ways to do exponentiation that require far fewer multiplies. For example you can compute x^16 by computing (((x^2)^2)^2)^2 which involves only 4 actual (large integer) multiplications.
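A toy Python sketch of the add-with-carries idea on digit arrays (real bignum libraries use base-2^32 or base-2^64 chunks rather than base 10, but the carrying logic is the same):
def add_big(a, b):
    # a and b are lists of base-10 digits, least significant digit first
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

print(add_big([7, 8, 9], [4, 5, 6]))   # 987 + 654 = 1641 -> [1, 4, 6, 1]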
In practice
It's fun and educational to try writing these functions yourself, but in practice you will want to use an existing library that has been optimized and verified.
I think part of the answer is in the question itself :) To store these expressions, you can store the base (or mantissa) and the exponent separately, the way scientific notation does. Extending that, you cannot possibly evaluate the expression completely and store such large numbers, although you can theoretically predict certain properties of the resulting expression. I will take you through each of the properties you talked about:
Decimal approximation: Can be calculated by evaluating simple log values.
Total number of digits: For an expression a^b, this can be calculated by the formula
Digits = 1 + floor(Log10(a^b)) = 1 + floor(b * Log10(a)), where floor is the greatest integer not exceeding the number. For example, the number of digits in 10^5 is 6.
Last digits: These can be calculated using the fact that the last digits of successive powers repeat in a cycle. For example, at the units place the digits 7, 9, 3, 1 repeat for powers 7^x, so you can tell that if x % 4 is 0, the last digit is 1.
Whether someone can create a custom datatype for large numbers, I can't say, but I am sure the full number won't be evaluated and stored.
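A quick Python check of that units-digit cycle:
# The last digit of 7^x cycles with period 4: 7, 9, 3, 1, ...
print([pow(7, x, 10) for x in range(1, 9)])   # [7, 9, 3, 1, 7, 9, 3, 1]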
