How do computers evaluate huge numbers?

If I enter a value, for example
1234567 ^ 98787878
into Wolfram Alpha, it can provide me with a number of details, including a decimal approximation, the total length, the last digits, etc. How do you evaluate such large numbers? As I understand it, a programming language would have to have a special data type just to store the number, let alone add it to something else. While I can see how one might approach the addition of two very large numbers, I can't see how huge numbers are evaluated.
10^2 could be calculated through repeated addition. However, a number such as the example above would require a gigantic loop. Could someone explain how such large numbers are evaluated? Also, how could someone create a custom large-number datatype in C#, for example?

Well, it's quite easy and you could have done it yourself.
The number of digits can be obtained via logarithm:
since `A^B = 10 ^ (B * log(A, 10))`
we can compute, in our case (A = 1234567; B = 98787878),
`B * log(A, 10) = 98787878 * log(1234567, 10) = 601767807.4709646...`
The integer part plus one (601767807 + 1 = 601767808) is the number of digits.
The first, say, five digits can be obtained via logarithm as well:
take the fractional part of
`B * log(A, 10) = 601767807.4709646...`
`f = 0.4709646...`
and the first digits are `10^f` (decimal point removed) = 29577...
The last, say, five digits can be obtained as the corresponding remainder:
last five digits = `A^B rem 10^5`
`A rem 10^5 = 1234567 rem 10^5 = 34567`
`A^B rem 10^5 = ((A rem 10^5)^B) rem 10^5 = (34567^98787878) rem 10^5 = 45009`
so the last five digits are 45009.
You may find BigInteger.ModPow (C#) very useful here
Finally
1234567^98787878 = 29577...45009 (601767808 digits)
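Putting it all together, a minimal C# sketch (the leading digits rely on double precision, so only the first several digits of 10^f are trustworthy at this magnitude):

using System;
using System.Numerics;

class HugePower
{
    static void Main()
    {
        long A = 1234567, B = 98787878;

        // digits = floor(B * log10(A)) + 1
        double x = B * Math.Log10(A);
        long digits = (long)Math.Floor(x) + 1;

        // leading digits: 10^f where f is the fractional part of x
        double leading = Math.Pow(10, x - Math.Floor(x));     // 2.9577...

        // last five digits: modular exponentiation never builds the full number
        BigInteger last5 = BigInteger.ModPow(A, B, 100000);

        Console.WriteLine($"{leading:F4}...{last5:D5} ({digits} digits)");
    }
}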

There are usually libraries providing a bignum datatype for arbitrarily large integers (e.g., mapping bits k*n ... (k+1)*n-1, for k = 0 .. m-1 with m depending on the number's magnitude, to machine words of size n, and redefining the arithmetic operations on top of that). For C#, you might be interested in BigInteger.
Exponentiation can be recursively broken down:
pow(a, 2*b)   = pow(a, b) * pow(a, b)
pow(a, 2*b+1) = pow(a, b) * pow(a, b) * a
(computing pow(a, b) once and squaring it, so only O(log b) multiplications are needed).
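A minimal sketch of that recursion in C# (using System.Numerics.BigInteger):

using System.Numerics;

static BigInteger Pow(BigInteger a, int b)
{
    // square-and-multiply: only O(log b) big-integer multiplications
    if (b == 0) return 1;
    BigInteger half = Pow(a, b / 2);   // computed once, then reused
    return (b % 2 == 0) ? half * half : half * half * a;
}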
There are also number-theoretic results that have engendered special algorithms for determining properties of large numbers without actually computing them (to be precise: without computing their full decimal expansion).

To compute how many digits there are, one uses the following expression:
decimal_digits(n) = 1 + floor(log_10(n))
This gives:
decimal_digits(1234567^98787878) = 1 + floor(log_10(1234567^98787878))
= 1 + floor(98787878 * log_10(1234567))
= 1 + floor(98787878 * 6.0915146640862625)
= 1 + floor(601767807.4709647)
= 601767808
The trailing k digits are computed by doing exponentiation mod 10^k, which keeps the intermediate results from ever getting too large.
The decimal approximation will be computed using a (software) floating-point implementation that effectively evaluates a^(98787878 × log_a(1234567)) to some fixed precision, for some base a that makes the arithmetic work out nicely (typically 2, e, or 10). This also avoids the need to actually work with millions of digits at any point.

There are many libraries for this, and in Python the capability is built in. You seem primarily concerned with the size of such numbers and the time it may take to do computations like the exponentiation in your example, so I'll explain a bit.
Representation
You might use an array to hold all the digits of large numbers. A more efficient way would be to use an array of 32 bit unsigned integers and store "32 bit chunks" of the large number. You can think of these chunks as individual digits in a number system with 2^32 distinct digits or characters. I used an array of bytes to do this on an 8-bit Atari800 back in the day.
Doing math
You can obviously add two such numbers by looping over all the digits, adding elements of one array to the other, and keeping track of carries. Once you know how to add, you can write code to do "manual" multiplication by multiplying digits, putting the results in the right place, and doing a lot of addition; software does all this fairly quickly. There are also faster multiplication algorithms than the one you would use on paper: paper multiplication is O(n^2), while FFT-based methods come close to O(n log n). As for the exponent, you could of course multiply by the same number millions of times, but each of those multiplications would use the routine just described. There are faster ways to do exponentiation that require far fewer multiplies; for example, you can compute x^16 by computing (((x^2)^2)^2)^2, which involves only 4 actual (large integer) multiplications.
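For illustration, a sketch of the addition step in C# (assuming equal-length, little-endian arrays of 32-bit "digits"):

static uint[] Add(uint[] a, uint[] b)
{
    var result = new uint[a.Length + 1];
    ulong carry = 0;
    for (int i = 0; i < a.Length; i++)
    {
        ulong sum = (ulong)a[i] + b[i] + carry;  // cannot overflow 64 bits
        result[i] = (uint)sum;                   // low 32 bits become the digit
        carry = sum >> 32;                       // the rest carries over
    }
    result[a.Length] = (uint)carry;
    return result;
}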
In practice
It's fun and educational to try writing these functions yourself, but in practice you will want to use an existing library that has been optimized and verified.

I think part of the answer is in the question itself :) To store such expressions, you can store the base (or mantissa) and the exponent separately, the way scientific notation does. Beyond that, you cannot possibly evaluate the expression completely and store such a large number, although you can theoretically predict certain properties of the resulting value. I will take you through each of the properties you asked about:
Decimal approximation: can be calculated by evaluating simple log values.
Total number of digits: for an expression a^b, this is Digits = 1 + floor(log10(a^b)), where floor rounds down to the nearest integer. For example, the number of digits in 10^5 is 6.
Last digits: these can be calculated from the fact that the last digits of successive powers repeat in a fixed cycle. For example, the units digit of 7^x cycles through 7, 9, 3, 1, so if x % 4 is 0, the last digit is 1 (see the sketch below).
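A tiny C# sketch of that cycle trick (a hypothetical helper, hard-coded for base 7 and assuming x >= 1):

static int LastDigitOf7ToThe(long x)
{
    int[] cycle = { 1, 7, 9, 3 };   // units digits of 7^(4k), 7^(4k+1), 7^(4k+2), 7^(4k+3)
    return cycle[(int)(x % 4)];
}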
Whether someone can create a custom datatype for such large numbers I can't say, but I'm fairly sure the number itself won't be evaluated and stored in full.

Related

RSA exponent size

I am learning about the RSA algorithm. I perform the algorithm on very small prime numbers and use online Big Integer calculators to perform the encryption and decryption and everything works just fine.
My question is about the size of the exponent we create and when it comes to bigger numbers, it seems infeasible to calculate.
For example, the algorithm starts with picking two prime numbers p and q. You compute n = p×q and then the totient of n. Next you pick a number e such that 1 < e < φ(n) and e is coprime with φ(n).
Then to perform an encryption you take, say, the ASCII character 'A', which is 65, and you raise it to the power of e (65^e).
The online big integer calculator started getting very slow and sluggish (over a minute to calculate) when e was bigger than about 100,000 (6 digits)
My question is then, for the working RSA algorithm, what size (number of digits) number does that algorithm pick?
One thought I had was that the online calculator I was using might not be using the best method for exponentiation? This is the calculator I am using: http://www.javascripter.net/math/calculators/100digitbigintcalculator.htm
Let's say M is the modulus. So YES, you could first compute intermediate = 65^e, and finally compute intermediate mod M. And of course, intermediate would be a very, very big integer (if e equals 65537, the decimal representation of intermediate contains 118813 digits!).
BUT, thanks to a very basic modular arithmetic theorem,
(65^e) mod M = ((((65 mod M) * 65) mod M) * 65) mod M [...] (e times)
(the theorem states that in a quotient ring, the n-th power of the class of an element is the class of the n-th power of the element)
As you can see, this does not need any very big integer library, since after each arithmetic product you apply mod M, which returns an integer between 0 and M-1. So you only ever have to compute products of integers less than M.
As an example, here is a simple shell script (bash) that computes 65^65537 mod 991*997. As you can see, no need to get a big number library:
#!/bin/bash
# set RSA parameters
m=65 # message to encode
M=$((991*997)) # modulus (both 991 and 997 are prime numbers)
e=65537 # public exponent (coprime with 990*996, thus compliant with RSA algorithm)
# compute (m^e) mod M
ret=1
for ((i = 1; i <= e; i++))   # note: {1..$e} would not work, since brace expansion happens before variable expansion
do
ret=$(((ret*m)%M))
done
# display the result
echo $ret
It immediately returns 784933, thus 65^65537 mod 991*997 = 784933
The biggest integer computed with your method of calculation has 118813 digits, but the biggest integer handled by this shell script has only 12 or fewer digits ((M-1)^2 has 12 digits).
According to these explanations, we can now answer your question:
My question is then, for the working RSA algorithm, what size (number of digits) number does that algorithm pick?
With the above explanations, you can see that the maximum number of digits in the decimal representation of integers you have to manipulate is 1+log10((M-1)^2), because you will, at most, compute a product of two integers between 0 and M-1.
Note that 1 + log10((M-1)^2) = 1 + 2×log10(M-1) < 2 + 2×log10(M) = 2×(1 + log10(M)). Also note that 1 + log10(M) is the number of digits of M.
Therefore, as a conclusion, this proves that the number of digits your library has to handle correctly is twice the number of digits of the modulus (if you are computing the exponentiation using integer multiplications the way explained here).
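Note that the script above performs e modular multiplications; real implementations combine the same theorem with square-and-multiply, needing only O(log e) of them. A minimal C# sketch (this is essentially what BigInteger.ModPow does internally):

using System.Numerics;

static BigInteger ModPow(BigInteger b, BigInteger e, BigInteger m)
{
    BigInteger result = 1;
    b %= m;
    while (e > 0)
    {
        if (!e.IsEven) result = result * b % m;  // fold in the current bit of e
        b = b * b % m;                           // square the base for the next bit
        e >>= 1;
    }
    return result;
}

ModPow(65, 65537, 991 * 997) should reproduce the 784933 above in just 17 loop iterations.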

Using MPFR And Adding - How many Digits are Correct?

I have a pretty easy question (I think). As much as I've tried, I cannot find an answer to it.
I am creating a function for which I want the user to enter two numbers. The first is the number of terms of a certain infinite series to add together. The second is the number of digits the user would like the truncated sum to be accurate to.
Say the terms of the sequence are a_i. How much precision n would be required in MPFR to guarantee that the result of adding these a_i, from i = 0 up to the user's entered value, is accurate to the number of digits the user asked for?
By the way, I'm adding the a_i in a naive way.
Any help will be much appreciated.
Thanks,
Rick
You can convert between decimal digits of precision, d, and binary digits of precision, b, with logarithms
b = d × log(10) / log(2)
A little rearranging shows why
b × log(2) = d × log(10)
log(2^b) = log(10^d)
2^b = 10^d
Each term of the series (and each addition) will introduce a rounding error at the least significant digit so, assuming each of the t terms involves n (two argument) arithmetic operations, you will want to add an extra
log(t * (n+2))/log(2)
bits.
You'll need to round the number of bits of precision up to be sure that you have enough room for your decimal digits of precision
b = ceil((d*log(10.0) + log(t*(n+2)))/log(2.0));
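For instance (illustrative numbers only): d = 50 digits, t = 10^6 terms and n = 2 operations per term gives b = ceil((50×ln 10 + ln(4×10^6)) / ln 2) = ceil((115.13 + 15.20) / 0.6931) = 189 bits.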
Finally, you should be aware that the terms may introduce cancellation errors, in which case this simple calculation will dramatically underestimate the required number of bits, even assuming I've got it right in the first place ;-)

What is the fastest way to find if a large integer is power of ten?

I could just use division and modulus in a loop, but this is slow for really large integers. The number is stored in base two, and may be as large as 2^8192. I only need to know if it is a power of ten, so I figure there may be a shortcut (other than using a lookup table).
If your number x is a power of ten then
x = 10^y
for some integer y, which means that
x = (2^y)(5^y)
So, shift the integer right until there are no more trailing zero bits (a very low-cost operation) and count the number of bits shifted (call this k). Now check whether the remaining number is 5^k. If it is, your original number is a power of 10; otherwise, it's not. Since 2 and 5 are both prime, this always works.
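A minimal C# sketch of that idea using BigInteger (comfortably handles values up to 2^8192):

using System.Numerics;

static bool IsPowerOfTen(BigInteger x)
{
    if (x <= 0) return false;
    int k = 0;
    while (x.IsEven) { x >>= 1; k++; }   // strip the factor 2^k with cheap shifts
    return x == BigInteger.Pow(5, k);    // power of ten iff what's left is 5^k
}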
Let's say that X is your input value, and we start with the assumption
X = 10 ^ Something
where Something is an integer.
Then
log10(X) = Something.
So if X is a power of 10, Something will be an integer.
Example
int x = 10000;
double test = Math.log10(x);
// caution: a double carries only ~15-16 significant digits, so this
// exact-integer test is unreliable for very large inputs like 2^8192
if (test == (int) test)
    System.out.println("Is a power of 10");

Efficient Multiplication of Varying-Length #s [Conceptual]

EDIT
So it seems I "underestimated" what varying-length numbers meant. I didn't even think about situations where the operands are 100 digits long. In that case, my proposed algorithm is definitely not efficient. I'd probably need an implementation whose complexity depends on the number of digits in each operand as opposed to its numerical value, right?
As suggested below, I will look into the Karatsuba algorithm...
Write the pseudocode of an algorithm that takes in two arbitrary length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.
I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:
a * b = (a/2) * 2b          if a is even
a * b = ((a-1)/2) * 2b + b   if a is odd
My pseudocode is:
rpa(x, y) {
    if x is 1
        return y
    if x is even
        return rpa(x/2, 2y)
    if x is odd
        return rpa((x-1)/2, 2y) + y
}
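(For reference, a direct C# transcription of this pseudocode on BigInteger; a sketch only, assuming x >= 1:)

using System.Numerics;

static BigInteger Rpa(BigInteger x, BigInteger y)
{
    if (x == 1) return y;
    if (x.IsEven) return Rpa(x / 2, 2 * y);
    return Rpa((x - 1) / 2, 2 * y) + y;
}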
I have 3 questions:
Is this efficient for arbitrary-length numbers? I implemented it in C and tried numbers of varying lengths. The run-time was near-instant in all cases, so it's hard to tell empirically...
Can I apply the Master Theorem to understand the complexity...?
a = # subproblems in recursion = 1 (at most 1 recursive call per step)
n / b = size of each subproblem; each call halves x, so b = 2
f(n) = work done outside recursive calls = O(1), i.e. d = 0 (the addition when x is odd)
a = 1, b^d = 2^0 = 1, a = b^d -> complexity is n^d * log(n) = log(n)
this makes sense logically since we are halving the problem at each step, right?
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
Many thanks in advance
What might my professor mean by providing arbitrary length numbers "as strings". Why do that?
This actually changes everything about the problem (and makes your algorithm incorrect).
It means that 1234 is provided as 1,2,3,4 and you cannot operate directly on the whole number. You need to analyze your algorithm in terms of #additions, #multiplications, and #divisions.
You should expect a division to be a bit more expensive than a multiplication, and a multiplication to be a lot more expensive than an addition. So a good algorithm tries to reduce the number of divisions and multiplications.
Check out the Karatsuba algorithm (but don't just copy it; that's not what your teacher wants); it is one of the fastest for this specification.
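For a flavor of it, a sketch of the Karatsuba idea in C#, splitting non-negative BigIntegers by bits (purely illustrative; BigInteger already multiplies efficiently on its own):

using System;
using System.Numerics;

static BigInteger Karatsuba(BigInteger x, BigInteger y)
{
    if (x < 1000000 || y < 1000000)
        return x * y;                        // small enough: multiply directly

    int n = Math.Max(BitLength(x), BitLength(y)) / 2;
    BigInteger mask = (BigInteger.One << n) - 1;
    BigInteger xH = x >> n, xL = x & mask;   // x = xH * 2^n + xL
    BigInteger yH = y >> n, yL = y & mask;

    BigInteger a = Karatsuba(xH, yH);
    BigInteger b = Karatsuba(xL, yL);
    // one recursive multiply recovers both cross terms: (xH+xL)(yH+yL) - a - b
    BigInteger c = Karatsuba(xH + xL, yH + yL) - a - b;

    return (a << (2 * n)) + (c << n) + b;
}

static int BitLength(BigInteger v)
{
    int bits = 0;
    while (v > 0) { v >>= 1; bits++; }
    return bits;
}

Three recursive multiplications per level instead of four is what drops the complexity from O(n^2) to roughly O(n^1.585).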
Regarding question 3: native integers are limited in how large (or small) a number they can represent (32- or 64-bit integers, for example). To represent numbers of arbitrary length you can choose strings, because then you are not really limited in this way. The problem is then, of course, that your arithmetic units are not really made to add strings ;-)

A way of checking if the digits of num1 are the digits in num2 without checking each digit?

Let's say I have guessed a lottery number of:
1689
And the way the lottery works is, the order of the digits doesn't matter as long as the digits match up 1:1 with the digits in the actual winning lottery number.
So, the number 1689 would be a winning lottery number with:
1896, 1698, 9816, etc.
As long as each digit in your guess was present in the target number, then you win the lottery.
Is there a mathematical way I can do this?
I've solved this problem with an O(N^2) loop checking each digit against each digit of the winning lottery number (separating them with modulus). That's fine and it works, but I want to know if there are any neat math tricks.
For example, at first... I thought I could be tricky and just take the sum and product of each digit in both numbers and if they matched then you won.
^ Do you think that would work?
However, I quickly disproved this when I found that the lottery guesses 222 and 124 have different digits but the same product and sum.
Anyone know any math tricks I can use to quickly determine if the digits in num1 match the digits in num2 regardless of order?
How about going through each number and counting up the number of appearances of each digit (into two different 10-element arrays)? After you've done the totaling, compare the counts for each digit. Since you only look at each digit once, that's O(N).
Code would look something like:
for(int i=0; i<digit_count; i++)
{
guessCounts[guessDigits[i] - '0']++;
actualCounts[actualDigits[i] - '0']++;
}
bool winner = true;
for(int i=0; i<10 && winner; i++)
{
winner &= guessCounts[i] == actualCounts[i];
}
Above code makes the assumption that guessDigits and actualDigits are both char strings; if they held the actual digits then you can just skip the - '0' business.
There are probably optimizations that would make this take less space or terminate sooner, but it's a pretty straightforward example of an O(N) approach.
By the way, as I mentioned in a comment, the multiplication/sum comparison will definitely not work because of zeros. Consider 0123 and 0222. Product is 0, sum is 6 in both cases.
Split into array, sort array, join into string, compare strings.
(Not a math trick, I know)
You can place the digits into an array, sort the array, then compare the arrays element by element. This gives you O(N log N) complexity, which is better than O(N^2).
If N can become large, sorting the digits is the answer.
Because digits are 0..9 you can count the number of occurrences of each digit of the lottery answer in an array [0..9].
To compare you can subtract 1 for each digit you encounter in the guess. When you encounter a digit where the count is already 0, you know the guess is different. When you get through all the digits, the guess is the same (as long as the guess has as many digits as the lottery answer).
For each digit d multiply with the (d+1)-th prime number.
This is more mathematical but less efficient than the sorting or bucket methods. Actually it is the bucket method in disguise.
I'd sort both number's digits and compare them.
If you are only dealing with 4 digits, I don't think you have to put much thought into which algorithm you use; they will all perform roughly the same.
Also, 222 and 124 don't have the same sum.
You have to consider that when n is small, the order of efficiency is irrelevant, and the constants start to matter more. How big can your numbers actually get? Can you get up to 10 digits? 20? 100? If your numbers have just a few digits, n^2 really isn't that bad. If you have strings of thousands of digits, then you might actually need to do something more clever like sorting or bucketing. (i.e. count the 0s, count the 1s, etc.)
I'm stealing the answer from Yuliy and starblue (upvote them).
Bucketing is the fastest, aside from the O(1)
lottonumbers == mynumbers;
Sorting is O(n log n).
Bucket sort is an O(n) algorithm.
So all you need to do is do it twice (once for your numbers, once for the target set), and if the digit counts add up, the numbers match.
Any kind of sorting is an added overhead that is unnecessary in this case.
int digits[10] = {0};
while (targetnum > 0)
{
    int currDig = targetnum % 10;
    digits[currDig]++;            // count each digit of the target
    targetnum = targetnum / 10;
}
while (mynum > 0)
{
    int myDig = mynum % 10;
    digits[myDig]--;              // cancel with each digit of the guess
    mynum = mynum / 10;
}
for (int i = 0; i < 10; i++)
{
    if (digits[i] != 0)
    {
        // FAIL TO MATCH
    }
}
Not the prettiest code, I'll admit.
Create an array of 10 integers, subscripted [0 .. 9].
Initialize each element to a different prime number.
Set the product to 1.
Use each digit of the number to subscript into the array,
pull out the prime number, and multiply the product by it.
That gives you a unique representation which is independent of digit order.
Do the same procedure for the other number.
If the unique representations match, then the original numbers match.
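A small C# sketch of that procedure (ulong stays exact up to about a dozen digits; switch to BigInteger beyond that):

static readonly ulong[] Primes = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 };

static ulong Fingerprint(string digits)
{
    ulong product = 1;
    foreach (char c in digits)
        product *= Primes[c - '0'];   // unique factorization makes this order-independent
    return product;
}

// Fingerprint("1689") == Fingerprint("9816")  -> true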
If no repeating digits are allowed (not sure whether that's the case, though), then use a 10-bit binary number. The most significant bit represents the digit 9 and the LSB represents the digit 0. Work through each number in turn and set the appropriate bit for each digit that you find.
So 1689 would be: 1101000010
and 9816 would also be: 1101000010
Then an XOR or a subtraction will leave 0 if you are a winner.
This is just a simple form of bucketing
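In C#, that check might look like this (again, only valid when digits cannot repeat):

static int DigitMask(string digits)
{
    int mask = 0;
    foreach (char c in digits)
        mask |= 1 << (c - '0');   // one bit per distinct digit
    return mask;
}

// DigitMask("1689") == DigitMask("9816")  -> both 0b1101000010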
Just for fun, and thinking outside the normal approaches, instead of sorting and other methods, try the deletion approach: if the result string is empty, you have a winner!
Dim ticket As String = "1324"
Dim WinningNumber As String = "4321"
For Each s As Char In WinningNumber.ToCharArray()
ticket = Replace(ticket, s, "", 1, 1)
Next
If ticket = "" Then MsgBox("WINNER!")
If ticket.Length = 1 Then MsgBox("Close but no cigar!")
This works with repeating digits too.
Sort the digits before storing a number. After that, matching numbers will compare equal.
One cute solution is to use a variant of Zobrist hashing. (Yes, I know it's overkill, as well as probabilistic, but hey, it's "clever".)
Initialize a ten-element array a[0..9] to random integers. Then, for each number d[], compute the sum of a[d[i]]. If the numbers contain the same digits, the resulting sums will be equal; with high probability (failure odds are about 1 over the number of possible integer values), the converse holds as well.
(If you know that there will be at most 10 digits total, then you can use the fixed numbers 1, 10, 100, ... instead of random numbers for guaranteed success. This is bucket sorting in not-too-much disguise.)
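A sketch of that hash in C# (the seed is arbitrary; both hashes just need to share the same table):

using System;

static readonly ulong[] Table = BuildTable();

static ulong[] BuildTable()
{
    var rng = new Random(12345);   // any shared seed works
    var table = new ulong[10];
    for (int i = 0; i < 10; i++)
        table[i] = ((ulong)(uint)rng.Next() << 32) | (uint)rng.Next();
    return table;
}

static ulong DigitHash(string digits)
{
    ulong sum = 0;
    foreach (char c in digits)
        sum += Table[c - '0'];     // addition keeps it order-independent
    return sum;
}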
