Trying to understand Amdahl's Law - math

I am trying to answer a school assignment, but I am getting confused about what the question is asking.
A design optimization was applied to a computer system in order to increase the performance of a given
execution mode by a factor of 10. The optimized mode is used 50% of the time, measured as a percentage
of the execution time after the optimization has been applied.
(a) What is the global speedup value that is achieved with this optimization?
Reminder: Amdahl's law defines the global speedup as a function of the optimized fraction before the optimization is applied. As a consequence, the 50% ratio cannot be directly used to evaluate this speedup value.
(b)What is the percentage of the original execution time that is affected by this optimization?
(c) How much should such execution mode be optimized in order to achieve a global speedup of 5? Can a global speedup of 12 be achieved? And 11?
When trying to calculate answer (a), I came to the answer 1.81 (20/11):
T' = 0.5 * T + 0.5 * T / 10 = T/2 + (1/20) * T = (11/20) * T
Speedup = T / T' = T / ((11/20) * T) = 20/11 = 1.81
For me this answer makes sense, but the professor's solutions say otherwise:
(a) 5.5
(b) 91%
(c) A global speedup of 5: yes, it can be achieved with an optimization by a factor of 25/3. A speedup of 12: no, because the required factor comes out negative, so it is impossible. And 11: also no, because it would need an infinite (∞) optimization factor, which is impossible.
I can't solve the other ones because I am confused by the first one.
Why is 5.5 the correct answer?

Let's suppose a computer has two states A and B, and after whatever optimization, it spends 0 ≤ p ≤ 1 of its time in state A, and q = 1 - p of its time in state B. (So p is something like .5, or .27).
State A was sped up by a factor of X. State B was sped up by a factor of Y.
So before, it was spending p * X + q * Y units of time on work that it can now do in p + q = 1 unit of time. So its speedup is p * X + q * Y.
Applying this to the problem you were given:
p = q = .5, X=10, Y=1 (no speedup).
10 * (.5) + 1 * (.5) = 5.5
This easily generalizes.
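A minimal sketch of that rule, with p, X, and Y as defined above (overall_speedup is my own helper name, added just to make the arithmetic concrete):

# Speedup when, AFTER the optimization, fraction p of the time runs in a
# mode sped up by X and fraction q = 1 - p in a mode sped up by Y.
def overall_speedup(p, X, Y):
    q = 1 - p
    return p * X + q * Y   # old time / new time, per the argument above

print(overall_speedup(0.5, 10, 1))   # 5.5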

After optimization, time = x minutes optimized mode + x minutes other = 2x.
Before optimization, time = 10x minutes unoptimized mode + x minutes other = 11x.
Speedup = 11x/2x = 5.5
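For completeness, here is a hedged sketch of the remaining parts (f and k are names I am introducing, not from the solutions). Before the optimization, the optimized part took 10x and the rest took x, so the affected fraction of the original time is f = 10/11, about 91%, which answers part (b). Amdahl's law then gives the rest:

f, k = 10 / 11, 10
amdahl = lambda f, k: 1 / ((1 - f) + f / k)
print(amdahl(f, k))        # 5.5 -> part (a)

# Part (c): solve amdahl(f, k) = S for k:  k = f / (1/S - (1 - f))
solve_k = lambda S: f / (1 / S - (1 - f))
print(solve_k(5))          # 25/3 ~ 8.33
print(solve_k(12))         # negative -> impossible
# S = 11 would need 1/S == 1 - f exactly, i.e. an infinite k -> impossible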

I love Amdahl's argument, including the "improvers", so let's start from the facts.
I will not answer the assignment questions directly, yet I will help you learn the know-why, which, to my deepest belief and decades of joy experienced working with the most skilled people, is the core of what education should promote.
( introducing text, decomposed )
A design optimization was applied to a COMPUTER SYSTEM ___ [Fig.1:A]
in order
to increase the performance
of a given
EXECUTION MODE_________________ [Fig.1:B]
by a FACTOR
of 10._________________________ [Fig.1:C]
Fig.1 :
BEFORE
+------------------------------------------------------------A: SYSTEM
| +----------------------------------------------------B |
| | | |
| | | |
| | | |
| +----------------------------------------------------+ |
+--:----------------------------------------------------:----+
: :
: :
: C: FACTOR ~ 10 x_________________________/
: /
AFTER : /
+--:--------/--A*
| +------B* |
| | 10x | |
| | less | |
| | time | |
| +123456+ |
+12+------+3456+
D: in the smarter, optimised "EXECUTION MODE",
50% is the duration of the said EXECUTION MODE, whereas
50% is the duration of the original, unmodified part
( ... text continued, decomposed )
The optimized mode is used 50% of the TIME,__________ [FACT Fig.1:D]
measured
as
a percentage of the execution time
AFTER the optimization
has been applied.
( ... first question, decomposed )
(a) What is the global SPEEDUP value
that is achieved
with ( AFTER )
this optimization?
Reminder: Amdahl's law defines the global speedup as a function of the optimized fraction before the optimization is applied. As a consequence, the 50% ratio cannot be directly used to evaluate this speedup value.
( ... second question )
(b) What is the percentage of the original execution time that is affected by this optimization?
full-A-duration ~ 10 x duration-of-B*    // == duration-of-B as was BEFORE
                +  1 x duration-of-B*    // == duration-of-( A - B ) as is
                                         // == duration-of-( A*- B* ) the same
( ref: FACT [Fig.1:D] )
Since here, the classics apply
--- just do not forget what to compare to what, and keep in mind that one and the very same word may bear quite different actual meanings. Just compare Dr. Gene M. AMDAHL's (IBM Research) original argument with E. BARSIS' (Sandia National Laboratories) "scaled speedup" and the later John L. GUSTAFSON's speedup (reversed optics, or the "opposite point of view"): all use the same word S-P-E-E-D-U-P, yet their respective definitions differ, and a lot. You might like to read the very original, authentic Dr. Gene M. AMDAHL's paper, to see the actual argument wording as it was archived in the FAQs; the file is in section "FAQ part 20: IBM and Amdahl", where the paper sits at the very bottom of that text. Alan KARP's prize (and also its winners) is a delightful part of this chapter of computing history :o)
( ... third, fourth and fifth questions )
(c) How much should such EXECUTION MODE (improving just the block B-to-B*) be optimized in order to achieve a global speedup of 5?
Can a global speedup of 12 be achieved? And 11? Here the task is not restricted to touching only B, so one can be smart in improving A-to-A* :P The professor will either accept and warmly appreciate your skills and insightful argumentation for doing this, or punish you for daring to use the crystal-clear logic of the task up to limits the text did not prohibit ;) [ SAFETY WARNING ] Best not to use this skilled strategy on auto-grader(s) or Artificial-"Intelligence"-powered grading bots... for obvious reasons these rigid, pre-wired or LSqE-penalised algorithms will hardly award you any extra points for innovative thinking, as thinking is "not included" there (while batteries might've been, or might've been not?).

Related

Calculate the best distribution for a group of numbers that can FIT on a specific number

I have what I think is an interesting question about Google Sheets and some maths. Here is the scenario:
4 numbers as follows:
64.20 | 107 | 535 | 1070
A reference number into which the previous numbers need to fit, leaving the minimum possible residue, while recording the number of times each of them fits into the reference number. For example, the reference number could be the following:
806.45
So here is the problem:
I'm calculating how many times those 4 numbers can fit into the reference number, starting from the highest number and working down to the lowest, like this:
| 1070 | => =IF(E12/((I15+J15)+IF(H17,K17,0)+IF(H19,K19,0)) > 0,ROUNDDOWN(E12/((I15+J15)+IF(H17,K17,0)+IF(H19,K19,0))),0)
| 535 | => =IF(H15>0,ROUNDDOWN((E12-K15-IF(H17,K17,0)-IF(H19,K19,0))/(I14+J14)),ROUNDDOWN(E12/((I14+J14)+IF(H17,K17,0)+IF(H19,K19,0))))
| 107 | => =IF(OR(H15>0,H14>0),ROUNDDOWN((E12-K15-K14-IF(H17,K17,0)-IF(H19,K19,0))/(I13+J13)),ROUNDDOWN((E12-IF(H17,K17,0)-IF(H19,K19,0))/(I13+J13)))
| 64.20 | => =IF(OR(H15>0,H14>0,H13>0),ROUNDDOWN((E12-K15-K14-K13-IF(H17,K17,0)-IF(H19,K19,0))/(I12+J12)),ROUNDDOWN((E12-IF(H17,K17,0)-IF(H19,K19,0))/(I12+J12)))
As you can see, I'm checking whether the higher values fit at least once, so I can subtract their amount from the original number and then calculate how many times the lower number fits into what remains. You can also see that I'm including some checkboxes in the formula, in order to add a fixed number to the main number.
This actually works, and as you can see in the example, the result is:
| 1070 | -> Fits 0 times
| 535 | -> Fits 1 time
| 107 | -> Fits 2 times
| 64.20 | -> Fits 0 times
The residue of 806.45 in this example is: 57.45
But each number that needs to fit into the main number must take the others into consideration; if you solve this exercise manually, you can get something much better, like this:
| 1070 | -> Fits 0 times
| 535 | -> Fits 1 time
| 107 | -> Fits 0 times
| 64.20 | -> Fits 4 times
The residue of 806.45 in this example is: 14.65
When I talk about residue I mean the result of the subtraction. I'm sorry if this is not clear; it's hard for me to explain maths in English, since it's not my native language. Please see the spreadsheet and make a copy to better understand what I'm trying to do, or suggest a better way for me to explain it if possible.
So what would you do to make it work more efficiently and "smartly", with the minimum possible residue after the calculation?
Here is the Google spreadsheet for reference and practice; please make a copy so others can try their own solutions:
LINK TO SPREADSHEET
Thanks in advance for any help or hints.
Delete all current formulas in H12:H15.
Then place this mega-formula in H12:
=ArrayFormula(QUERY(SPLIT(FLATTEN(SPLIT(VLOOKUP(E12,QUERY(SPLIT(FLATTEN(QUERY(SPLIT(FLATTEN(QUERY(SPLIT(FLATTEN(SEQUENCE(ROUNDUP(E12/I12),1,0)&" "&I12&" / "&TRANSPOSE(SEQUENCE(ROUNDUP(E12/I13),1,0)&" "&I13)&"|"&(SEQUENCE(ROUNDUP(E12/I12),1,0)*I12)+(TRANSPOSE(SEQUENCE(ROUNDUP(E12/I13),1,0)*I13))),"|"),"Select Col1")&" / "&TRANSPOSE(SEQUENCE(ROUNDUP(E12/I14),1,0)&" "&I14)&"|"&QUERY(SPLIT(FLATTEN(SEQUENCE(ROUNDUP(E12/I12),1,0)&" "&I12&" / "&TRANSPOSE(SEQUENCE(ROUNDUP(E12/I13),1,0)&" "&I13)&"|"&((SEQUENCE(ROUNDUP(E12/I12),1,0)*I12)+(TRANSPOSE(SEQUENCE(ROUNDUP(E12/I13),1,0)*I13)))),"|"),"Select Col2")+TRANSPOSE(SEQUENCE(ROUNDUP(E12/I14),1,0)*I14)),"|"),"Select Col1")&" / "&TRANSPOSE(SEQUENCE(ROUNDUP(E12/I15),1,0)&" "&I15)&"|"&QUERY(SPLIT(FLATTEN(QUERY(SPLIT(FLATTEN(SEQUENCE(ROUNDUP(E12/I12),1,0)&" "&I12&" / "&TRANSPOSE(SEQUENCE(ROUNDUP(E12/I13),1,0)&" "&I13)&"|"&(SEQUENCE(ROUNDUP(E12/I12),1,0)*I12)+(TRANSPOSE(SEQUENCE(ROUNDUP(E12/I13),1,0)*I13))),"|"),"Select Col1")&" / "&TRANSPOSE(SEQUENCE(ROUNDUP(E12/I14),1,0)&" "&I14)&"|"&QUERY(SPLIT(FLATTEN(SEQUENCE(ROUNDUP(E12/I12),1,0)&" "&I12&" / "&TRANSPOSE(SEQUENCE(ROUNDUP(E12/I13),1,0)&" "&I13)&"|"&((SEQUENCE(ROUNDUP(E12/I12),1,0)*I12)+(TRANSPOSE(SEQUENCE(ROUNDUP(E12/I13),1,0)*I13)))),"|"),"Select Col2")+TRANSPOSE(SEQUENCE(ROUNDUP(E12/I14),1,0)*I14)),"|"),"Select Col2")+TRANSPOSE(SEQUENCE(ROUNDUP(E12/I15),1,0)*I15)),"|"),"Select Col2, Col1 WHERE Col2 <= "&E12&" ORDER BY Col2 Asc, Col1 Desc"),2,TRUE)," / ",0,0))," "),"Select Col1"))
Typically, I explain my formulas. In this case, I trust that readers will understand why I cannot explain it. I can only offer it in working order.
To briefly give the general idea, this formula figures out how many times each of the four numbers fits into the target number alone and then adds every possible combination of all of those. Those are then limited to only the combinations less than the target number and sorted smallest to largest in total. Then a VLOOKUP looks up the target number in that list, returns the closest match, SPLITs the multiples from the amounts (which, in the end, have been concatenated into long strings), and returns only the multiples.
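If the mega-formula is too opaque to adapt, the same idea can also be expressed as a brute-force search. This is only a sketch of the approach, not the answerer's formula; best_fit is a name I am inventing, and floating-point tolerance is ignored for brevity:

from itertools import product
import math

# Try every feasible count of each number and keep the combination whose
# total stays <= target with the smallest residue.
def best_fit(values, target):
    best_counts, best_residue = None, math.inf
    ranges = [range(int(target // v) + 1) for v in values]
    for counts in product(*ranges):
        total = sum(c * v for c, v in zip(counts, values))
        residue = target - total
        if 0 <= residue < best_residue:
            best_counts, best_residue = counts, residue
    return best_counts, best_residue

counts, residue = best_fit([64.20, 107, 535, 1070], 806.45)
print(counts, round(residue, 2))   # (4, 0, 1, 0) -> residue 14.65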

F#: integer (%) integer - Is Calculated How?

So in my textbook there is this example of a recursive function in F#:
let rec gcd = function
| (0,n) -> n
| (m,n) -> gcd(n % m,m);;
With this function, my textbook gives the example of executing:
gcd(36,116);;
and since m = 36 and not 0, it of course goes to the second clause, like this:
gcd(116 % 36,36)
gcd(8,36)
gcd(36 % 8,8)
gcd(4,8)
gcd(8 % 4,4)
gcd(0,4)
and now hits the first clause stating this entire thing is = 4.
What I don't get is this (%) percentage sign/operator, or whatever it is called in this context. For instance, I don't get how
116 % 36 = 8
I have turned this over so many times in my head and I can't figure out how this can turn into 8.
I know this is probably a silly question for those of you who know this, but I would very much appreciate your help all the same.
% is a (somewhat questionable) version of modulo, which is the remainder of an integer division.
For positive operands, you can think of % as the remainder of the division. See, for example, Wikipedia on Euclidean division. Consider 9 % 4: 4 fits into 9 twice. But two times four is only eight. Thus, there is a remainder of one.
If there are negative operands, % effectively ignores the signs to calculate the remainder and then uses the sign of the dividend as the sign of the result. This corresponds to the remainder of an integer division that rounds to zero, i.e. -2 / 3 = 0.
This is a mathematically unusual definition of division and remainder that has some bad properties. Normally, when calculating modulo n, adding or subtracting n on the input has no effect. Not so for this operator: 2 % 3 is not equal to (2 - 3) % 3.
I usually have the following defined to get useful remainders when there are negative operands:
/// Euclidean remainder, the proper modulo operation
let inline (%!) a b = (a % b + b) % b
So far, this operator has been valid in all cases I have encountered where a modulo was needed, while the raw % repeatedly wasn't. For example:
When filling rows and columns from a single index, you could calculate rowNumber = index / nCols and colNumber = index % nCols. But if index and colNumber can be negative, this mapping becomes invalid, while Euclidean division and remainder remain valid.
If you want to normalize an angle to (0, 2pi), angle %! (2. * System.Math.PI) does the job, while the "normal" % might give you a headache.
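To see the difference concretely, here is a quick illustration of mine (in Python, since its built-in % is floored, while its math.fmod matches the truncated behaviour that F#'s % shares with C):

import math

print(7 % 3)             # 1
print(-2 % 3)            # 1   (floored: already the positive answer for b > 0)
print(math.fmod(-2, 3))  # -2.0 (truncated, like F#'s -2 % 3)

# The %! trick above, ported: ((a % b) + b) % b
def emod(a, b):
    return (a % b + b) % b

print(emod(-2, 3))       # 1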
Because
116 / 36 = 3
116 - (3*36) = 8
Basically, the % operator, known as the modulo operator, divides one number by another and gives the remainder once it can't divide any further. Usually, the first time you would use it is to check whether a number is even or odd, by doing something like this in F#:
let firstUsageModulo = 55 % 2 = 0 // false, because 55 % 2 leaves 1, not 0
When it leaves 8 the first time, it means that 36 fits into 116 three times (3 * 36 = 108), and the remainder 116 - 108 = 8 is what % gives back.
Just to help you in future with similar problems: in IDEs such as Xamarin Studio and Visual Studio, if you hover the mouse cursor over an operator such as % you should get a tooltip, thus:
[Image: modulo operator tooltip]
Even if you don't understand the tool tip directly, it'll give you something to google.

Should exp2 be faster than exp?

I'm mostly interested in the "exp" and "exp2" functions in C/C++, but this question is probably more related to the IEEE 754 standard than specific language features.
In a homework problem I did some 10 years ago, which tried to rank different floating-point operations by the cycles needed, the C function
double exp2 (double)
appeared to be slightly faster than
double exp (double)
Given that "double" uses a binary representation for the mantissa, I feel this result is reasonable.
Today, however, after testing the two again in several different ways, I could not see any measurable difference. So my questions are:
Should exp2 be (theoretically) faster than exp? and
Should there be any measurable differences? and
Has the answer changed in the recent years?
There are a number of platforms that don't take much care with their math library, on which exp2(x) is simply implemented as exp(x * log(2)) or vice versa. These implementations do not deliver good accuracy (or especially good performance), but they are fairly common. On platforms that do this, one function is exactly as costly as the other except for an extra multiply, and whichever gets the extra multiply will be the slower of the two.
On platforms that aggressively tune the math library and try to deliver good accuracy, the two functions are very similar in performance. Generating the exponent of the result is easier with exp2, but getting a high-accuracy significand can require slightly more work; the two factors roughly even out, to the point that performance is usually within 10-15% either way. Speaking very broadly, exp2 is usually the faster of the two.
I made some measurements; I hope some of you will find them useful.
Conditions:
Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (the server had a high CPU load during the test)
Compiler version: g++ (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Compiler options: -static -std=gnu++0x -ffast-math -Ofast -flto
The code:
#include <iostream>
#include <random>
#include <cmath>
#include <chrono>
using namespace std;
int main()
{
    double g = 1 / log(2);
    mt19937 engine(1000);
    uniform_real_distribution<double> u(0, 1);
    double sum = 0;
    auto begin = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < 1e7 / 4; ++i) // for non-parallel, `for (int i = 0; i < 1e7; ++i)`
    {
        sum += exp2(u(engine) * g); // for exp versions, sum += exp(u(engine)); for empty versions, sum += u(engine)*g;
        sum += exp2(u(engine) * g); // removed for non-parallel
        sum += exp2(u(engine) * g); // removed for non-parallel
        sum += exp2(u(engine) * g); // removed for non-parallel
    }
    auto end = std::chrono::high_resolution_clock::now();
    cout << chrono::duration_cast<chrono::nanoseconds>(end - begin).count() / 1000. / 1000 << "ms" << "\t"
         << sum << "\t" << g << " exp2 p4" << endl;
}
Execution with:
for i in {1..100}; do ./empty.bin && ./exp2_p4.bin && ./exp_p4.bin && ./exp.bin && ./exp2.bin; done
where the file name tells whether the executable calls exp or exp2, and whether the summation is grouped by 4 (p4) or not.
Results
The table below shows the average runtime (time), the standard deviation, and the fastest case, all in ms.
| name | time (ms) | std (ms) | smallest (ms) |
|:-------:|:---------:|:--------:|:-------------:|
| empty | 244.7 | 26.2 | 130.9 |
| exp | 591.7 | 95.8 | 422.5 |
| exp2 | 536.5 | 85.4 | 393.7 |
| exp p4 | 612.3 | 89.6 | 433.2 |
| exp2 p4 | 557.2 | 87.6 | 396.8 |
To get the time for one operation, divide by 1e7. I approximate the cost of the exponential itself by subtracting the time of the empty version (i.e., the loop and summation without calculating the exp) from the exponential ones: exp ≈ 347.0 ms, exp2 ≈ 291.8 ms, exp p4 ≈ 367.6 ms, exp2 p4 ≈ 312.5 ms (per 1e7 calls).
Conclusion
exp2 can be around 11% faster than exp on Intel Xeon with gcc even if -ffast-math is on, in agreement with the accepted answer.
Manual loop unrolling, by grouping the summation into groups of four, does not help.
Should exp2 be (theoretically) faster than exp?
Yes.
The only way for the x86 FPU to perform an exponentiation with a non-integer power is the instruction F2XM1, which calculates 2^x - 1.
No e^x instruction exists on x86.
Any C library code for x86 is forced to calculate both exp and exp2 via 2^x.
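The identity behind this is e^x = 2^(x * log2(e)), so one extra multiply turns a 2^x primitive into exp. A small numeric illustration of mine (in Python, just to show the math rather than any libm internals):

import math

x = 1.2345
log2e = 1 / math.log(2)       # log2(e)
print(math.exp(x))            # direct
print(2.0 ** (x * log2e))     # via the 2^x route; equal up to rounding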
Should there be any measurable differences?
No.
The difference is only a single FPU multiplication, which is very fast on modern processors.
Has the answer changed in the recent years?
Yes.
15-20 years ago, the cost of multiplication was much higher than that of other operations. Nowadays, multiplication is almost as cheap as addition.

Arbitrary-precision arithmetic Explanation

I'm trying to learn C and have come across the inability to work with REALLY big numbers (i.e., 100 digits, 1000 digits, etc.). I am aware that there exist libraries to do this, but I want to attempt to implement it myself.
I just want to know if anyone has or can provide a very detailed, dumbed down explanation of arbitrary-precision arithmetic.
It's all a matter of adequate storage and algorithms to treat numbers as smaller parts. Let's assume you have a compiler in which an int can only be 0 through 99 and you want to handle numbers up to 999999 (we'll only worry about positive numbers here to keep it simple).
You do that by giving each number three ints and using the same rules you (should have) learned back in primary school for addition, subtraction and the other basic operations.
In an arbitrary precision library, there's no fixed limit on the number of base types used to represent our numbers, just whatever memory can hold.
Addition for example: 123456 + 78:
12 34 56
      78
-- -- --
12 35 34
Working from the least significant end:
initial carry = 0.
56 + 78 + 0 carry = 134 = 34 with 1 carry
34 + 00 + 1 carry = 35 = 35 with 0 carry
12 + 00 + 0 carry = 12 = 12 with 0 carry
This is, in fact, how addition generally works at the bit level inside your CPU.
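A minimal sketch of that schoolbook addition, using base-100 "digits" stored least-significant first (my own toy code, matching the worked example above):

def add(a, b):
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % 100)   # the digit that stays in this place
        carry = s // 100         # the overflow carried to the next place
    if carry:
        result.append(carry)
    return result

# 123456 + 78, least-significant digit first: [56, 34, 12] + [78]
print(add([56, 34, 12], [78]))   # [34, 35, 12], i.e. 123534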
Subtraction is similar (using subtraction of the base type and borrow instead of carry), multiplication can be done with repeated additions (very slow) or cross-products (faster) and division is trickier but can be done by shifting and subtraction of the numbers involved (the long division you would have learned as a kid).
I've actually written libraries to do this sort of stuff using the maximum powers of ten that can be fit into an integer when squared (to prevent overflow when multiplying two ints together, such as a 16-bit int being limited to 0 through 99 to generate 9,801 (<32,768) when squared, or 32-bit int using 0 through 9,999 to generate 99,980,001 (<2,147,483,648)) which greatly eased the algorithms.
Some tricks to watch out for.
1/ When adding or multiplying numbers, pre-allocate the maximum space needed then reduce later if you find it's too much. For example, adding two 100-"digit" (where digit is an int) numbers will never give you more than 101 digits. Multiply a 12-digit number by a 3 digit number will never generate more than 15 digits (add the digit counts).
2/ For added speed, normalise (reduce the storage required for) the numbers only if absolutely necessary - my library had this as a separate call so the user can decide between speed and storage concerns.
3/ Addition of a positive and negative number is subtraction, and subtracting a negative number is the same as adding the equivalent positive. You can save quite a bit of code by having the add and subtract methods call each other after adjusting signs.
4/ Avoid subtracting big numbers from small ones since you invariably end up with numbers like:
10
11-
-- -- -- --
99 99 99 99 (and you still have a borrow).
Instead, subtract 10 from 11, then negate it:
11
10-
--
1 (then negate to get -1).
Here are the comments (turned into text) from one of the libraries I had to do this for. The code itself is, unfortunately, copyrighted, but you may be able to pick out enough information to handle the four basic operations. Assume in the following that -a and -b represent negative numbers and a and b are zero or positive numbers.
For addition, if signs are different, use subtraction of the negation:
-a + b becomes b - a
a + -b becomes a - b
For subtraction, if signs are different, use addition of the negation:
a - -b becomes a + b
-a - b becomes -(a + b)
Also special handling to ensure we're subtracting small numbers from large:
small - big becomes -(big - small)
Multiplication uses entry-level math as follows:
475(a) x 32(b) = 475 x (30 + 2)
= 475 x 30 + 475 x 2
= 4750 x 3 + 475 x 2
= 4750 + 4750 + 4750 + 475 + 475
The way in which this is achieved involves extracting each of the digits of 32 one at a time (backwards) then using add to calculate a value to be added to the result (initially zero).
ShiftLeft and ShiftRight operations are used to quickly multiply or divide a LongInt by the wrap value (10 for "real" math). In the example above, we add 475 to zero 2 times (the last digit of 32) to get 950 (result = 0 + 950 = 950).
Then we left shift 475 to get 4750 and right shift 32 to get 3. Add 4750 to zero 3 times to get 14250 then add to result of 950 to get 15200.
Left shift 4750 to get 47500, right shift 3 to get 0. Since the right shifted 32 is now zero, we're finished and, in fact 475 x 32 does equal 15200.
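Here is a sketch of that shift-and-add walkthrough; Python ints stand in for the library's LongInt, and *= 10 / //= 10 stand in for ShiftLeft/ShiftRight (my paraphrase, not the copyrighted code):

def multiply(a, b):
    result = 0
    while b > 0:
        digit = b % 10            # extract the low digit of b
        for _ in range(digit):    # repeated addition
            result += a
        a *= 10                   # ShiftLeft a
        b //= 10                  # ShiftRight b
    return result

print(multiply(475, 32))          # 15200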
Division is also tricky but based on early arithmetic (the "gazinta" method for "goes into"). Consider the following long division for 12345 / 27:
       457
   +-------
27 | 12345     27 is larger than 1 or 12, so we first use 123.
     108       27 goes into 123 4 times, 4 x 27 = 108, 123 - 108 = 15.
     ---
      154      Bring down 4.
      135      27 goes into 154 5 times, 5 x 27 = 135, 154 - 135 = 19.
      ---
       195     Bring down 5.
       189     27 goes into 195 7 times, 7 x 27 = 189, 195 - 189 = 6.
       ---
         6     Nothing more to bring down, so stop.
Therefore 12345 / 27 is 457 with remainder 6. Verify:
457 x 27 + 6
= 12339 + 6
= 12345
This is implemented by using a draw-down variable (initially zero) to bring down the segments of 12345 one at a time until it's greater or equal to 27.
Then we simply subtract 27 from that until we get below 27 - the number of subtractions is the segment added to the top line.
When there are no more segments to bring down, we have our result.
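A sketch of that "gazinta" division with a draw-down variable (again my paraphrase in Python, digits given most-significant first as in the long-division layout):

def divide(digits, divisor):
    quotient, remainder = [], 0
    for d in digits:
        remainder = remainder * 10 + d   # bring down the next segment
        q = 0
        while remainder >= divisor:      # count the subtractions
            remainder -= divisor
            q += 1
        quotient.append(q)
    return quotient, remainder

print(divide([1, 2, 3, 4, 5], 27))       # ([0, 0, 4, 5, 7], 6) -> 457 r 6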
Keep in mind these are pretty basic algorithms. There are far better ways to do complex arithmetic if your numbers are going to be particularly large. You can look into something like GNU Multiple Precision Arithmetic Library - it's substantially better and faster than my own libraries.
It does have the rather unfortunate misfeature in that it will simply exit if it runs out of memory (a rather fatal flaw for a general purpose library in my opinion) but, if you can look past that, it's pretty good at what it does.
If you cannot use it for licensing reasons (or because you don't want your application just exiting for no apparent reason), you could at least get the algorithms from there for integrating into your own code.
I've also found that the bods over at MPIR (a fork of GMP) are more amenable to discussions on potential changes - they seem a more developer-friendly bunch.
While re-inventing the wheel is extremely good for your personal edification and learning, it's also an extremely large task. I don't want to dissuade you, as it's an important exercise and one that I've done myself, but you should be aware that there are subtle and complex issues at work that larger packages address.
For example, multiplication. Naively, you might think of the 'schoolboy' method, i.e. write one number above the other, then do long multiplication as you learned in school. Example:
123
x 34
-----
492
+ 3690
---------
4182
but this method is extremely slow (O(n^2), n being the number of digits). Instead, modern bignum packages use either a discrete Fourier transform or a number-theoretic transform to turn this into an essentially O(n ln(n)) operation.
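To make the multiply-as-convolution idea concrete, here is a toy sketch of mine (the convolution below is still the O(n^2) schoolbook one; a real bignum package would evaluate the same convolution with an FFT or a number-theoretic transform):

def conv_multiply(a_digits, b_digits):      # least-significant digit first
    conv = [0] * (len(a_digits) + len(b_digits) - 1)
    for i, a in enumerate(a_digits):
        for j, b in enumerate(b_digits):
            conv[i + j] += a * b            # raw convolution, no carries yet
    out, carry = [], 0
    for c in conv:
        carry, digit = divmod(c + carry, 10)
        out.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        out.append(digit)
    return out

print(conv_multiply([3, 2, 1], [4, 3]))     # 123 * 34 -> [2, 8, 1, 4] = 4182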
And this is just for integers. When you get into more complicated functions on some type of real representation of number (log, sqrt, exp, etc.) things get even more complicated.
If you'd like some theoretical background, I highly recommend reading the first chapter of Yap's book, "Fundamental Problems of Algorithmic Algebra". As already mentioned, the gmp bignum library is an excellent library. For real numbers, I've used MPFR and liked it.
Don't reinvent the wheel: it might turn out to be square!
Use a third party library, such as GNU MP, that is tried and tested.
You do it in basically the same way you do with pencil and paper...
The number is to be represented in a buffer (array) able to take on an arbitrary size (which means using malloc and realloc) as needed
you implement basic arithmetic as much as possible using language-supported structures, and deal with carries and moving the radix point manually
you scour numerical analysis texts to find efficient algorithms for the more complex functions
you only implement as much as you need.
Typically you will use, as your basic unit of computation:
bytes containing 0-99 or 0-255
16-bit words containing either 0-9999 or 0-65535
32 bit words containing...
...
as dictated by your architecture.
The choice of binary or decimal base depends on your desire for maximum space efficiency, human readability, and the presence or absence of binary-coded decimal (BCD) math support on your chip.
You can do it with high-school-level mathematics, though more advanced algorithms are used in reality. So, for example, to add two 1024-byte numbers:
unsigned char first[1024], second[1024], result[1025];
unsigned char carry = 0;
unsigned int sum = 0;
for(size_t i = 0; i < 1024; i++)
{
    sum = first[i] + second[i] + carry; /* may exceed one byte */
    result[i] = sum & 0xFF;             /* keep the low byte */
    carry = sum >> 8;                   /* carry the overflow (0 or 1) */
}
result[1024] = carry;                   /* final carry, if any */
The result has to be bigger by one place in the case of addition, to take care of maximum values. Look at this:
  9
+ 9
---
 18
TTMath is a great library if you want to learn. It is built using C++. The above example was a silly one, but this is how addition and subtraction are done in general!
A good reference on the subject is Computational complexity of mathematical operations. It tells you how much space is required for each operation you want to implement. For example, if you have two N-digit numbers, then you need 2N digits to store the result of their multiplication.
As Mitch said, it is far from an easy task to implement! I recommend you take a look at TTMath if you know C++.
One of the ultimate references (IMHO) is Knuth's TAOCP Volume II. It explains lots of algorithms for representing numbers and arithmetic operations on these representations.
@Book{Knuth:taocp:2,
  author    = {Knuth, Donald E.},
  title     = {The Art of Computer Programming},
  volume    = {2: Seminumerical Algorithms, second edition},
  year      = {1981},
  publisher = {Addison-Wesley},
  isbn      = {0-201-03822-6},
}
Assuming that you wish to write big-integer code yourself, this can be surprisingly simple to do, speaking as someone who did it recently (though in MATLAB). Here are a few of the tricks I used:
I stored each individual decimal digit as a double number. This makes many operations simple, especially output. While it does take up more storage than you might wish, memory is cheap here, and it makes multiplication very efficient if you can convolve a pair of vectors efficiently. Alternatively, you can store several decimal digits in a double, but beware then that convolution to do the multiplication can cause numerical problems on very large numbers.
Store a sign bit separately.
Addition of two numbers is mainly a matter of adding the digits, then check for a carry at each step.
Multiplication of a pair of numbers is best done as convolution followed by a carry step, at least if you have a fast convolution code on tap.
Even when you store the numbers as a string of individual decimal digits, division (also mod/rem ops) can be done to gain roughly 13 decimal digits at a time in the result. This is much more efficient than a divide that works on only 1 decimal digit at a time.
To compute an integer power of an integer, compute the binary representation of the exponent. Then use repeated squaring operations to compute the powers as needed.
Many operations (factoring, primality tests, etc.) will benefit from a powermod operation. That is, when you compute mod(a^p,N), reduce the result mod N at each step of the exponentiation where p has been expressed in a binary form. Do not compute a^p first, and then try to reduce it mod N.
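A sketch of that powermod trick: square-and-multiply over the binary expansion of p, reducing mod N at every step. (This is equivalent to Python's built-in pow(a, p, n); it is written out only to show the idea.)

def powermod(a, p, n):
    result = 1
    a %= n
    while p > 0:
        if p & 1:                 # current binary digit of p is 1
            result = (result * a) % n
        a = (a * a) % n           # repeated squaring
        p >>= 1
    return result

print(powermod(7, 128, 13))       # 3, same as pow(7, 128, 13)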
Here's a simple (naive) example I did in PHP.
I implemented "Add" and "Multiply" and used that for an exponent example.
http://adevsoft.com/simple-php-arbitrary-precision-integer-big-num-example/
Code snippet:
// Add two big integers
function ba($a, $b)
{
    if( $a === "0" ) return $b;
    else if( $b === "0" ) return $a;
    $aa = str_split(strrev(strlen($a) > 1 ? ltrim($a, "0") : $a), 9);
    $bb = str_split(strrev(strlen($b) > 1 ? ltrim($b, "0") : $b), 9);
    $rr = Array();
    $maxC = max(Array(count($aa), count($bb)));
    $aa = array_pad(array_map("strrev", $aa), $maxC + 1, "0");
    $bb = array_pad(array_map("strrev", $bb), $maxC + 1, "0");
    for( $i = 0; $i <= $maxC; $i++ )
    {
        $t = str_pad((string) ($aa[$i] + $bb[$i]), 9, "0", STR_PAD_LEFT);
        if( strlen($t) > 9 )
        {
            $aa[$i+1] = ba($aa[$i+1], substr($t, 0, 1));
            $t = substr($t, 1);
        }
        array_unshift($rr, $t);
    }
    return implode($rr);
}

Help with Probability Equation

I'm trying to put together an app for fun, and I need to figure out a probability equation for the following scenario:
Suppose I have a number of attempts at something and each attempt has a success rate (known ahead of time). What are the odds after doing all those attempts that a success happens?
For example there are three attempts (all will be taken individually).
The first is known to have a 60% success rate.
The second is known to have a 30% success rate.
The third is known to have a 75% success rate.
What are the odds of a success occurring if all three attempts are made?
I've tried several formulas and can't pinpoint the correct one.
Thanks for the help!
Probability of winning is probability of not losing all three:
1 - (1 - 0.6)(1 - 0.3)(1 - 0.75)
1 - .4 * .7 * .25 = 0.93
That is, find the probability that all attempts fail, and invert it. So in general, given a finite sequence of events with probabilities P[i], the probability that at least one event is successful is 1 - (1 - P[0]) * (1 - P[1]) * ... * (1 - P[n])
And here's a perl one-liner to compute the value: (input is white-space separated list of success rates)
perl -0777 -ane '$p=1; $p*=1-$_ foreach @F; print 1-$p . "\n"'
Compute the chance of "all failures": the product of all the (1 - p_j), where p_j is the j-th chance of success. The probability of "at least 1 success" is then 1 minus that product. (Probability computations that represent probabilities as anything but numbers between 0 and 1 are crazy, so if you absolutely need percentages as input or output, do your transformations at the start or end!)
Edit: here's some executable pseudocode -- i.e., Python -- with percentages as input and output, using your numbers (the original ones and the ones you changed in a comment):
$ cat proba.py
def totprob(*percents):
totprob_failure = 1.0
for pc in percents:
prob_this_failure = 1.0 - pc/100.0
totprob_failure *= prob_this_failure
return 100.0 * (1.0 - totprob_failure)
$ python -c'import proba; print proba.totprob(60,30,75)'
93.0
$ python -c'import proba; print proba.totprob(2,30,75)'
82.85
$
