I am given three unsigned 128-bit numbers: a, b, and c, where a <= c and b <= c. I want to be able to calculate (a * b) / c with the highest possible precision.
If a, b, and c were 64-bit integers, I would first calculate a * b as a 128-bit number and then divide it by c to obtain an accurate result. However, I am dealing with 128-bit numbers and I don't have a native 256-bit type in which to perform the multiplication a * b.
Is there a way to compute (a * b) / c with high precision while staying in the world of 128 bits?
My (failed) attempts:
Calculating a / (c / b). This looks somewhat asymmetric, and as expected I didn't get very accurate results.
Calculating ((((a+b)/c)^2 - ((a-b)/c)^2) * c) / 4 = ((a*b)/c^2) * c = (a*b)/c. This also gave me pretty inaccurate results.
The question was originally tagged as rust, so I've assumed that in my answer, even though the tag has now been removed.
As others have said in the comments, you'll always have to step up a size or else you run the risk of an overflow on the multiplication, unless you have some guarantees about the bounds on the sizes of those numbers. There is no larger primitive type than u128 in Rust.
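To see what "stepping up" looks like by hand, here is a minimal sketch (mine, not from the question): split each u128 into 64-bit halves and combine the four partial products into a 256-bit result held as a (high, low) pair of u128 values.

/// Full 128x128 -> 256-bit multiplication, schoolbook-style over 64-bit limbs.
/// Returns the product as (high, low) halves.
fn mul_u128_wide(a: u128, b: u128) -> (u128, u128) {
    let (a_hi, a_lo) = (a >> 64, a & 0xFFFF_FFFF_FFFF_FFFF);
    let (b_hi, b_lo) = (b >> 64, b & 0xFFFF_FFFF_FFFF_FFFF);

    // Four 64x64 partial products, none of which can overflow a u128.
    let ll = a_lo * b_lo;
    let lh = a_lo * b_hi;
    let hl = a_hi * b_lo;
    let hh = a_hi * b_hi;

    // Sum the middle terms, carrying any overflow into the high half.
    let (mid, carry_mid) = lh.overflowing_add(hl);
    let (low, carry_low) = ll.overflowing_add(mid << 64);
    let high = hh + (mid >> 64) + ((carry_mid as u128) << 64) + carry_low as u128;
    (high, low)
}

Note that this only handles the multiplication; dividing the (high, low) product by c still requires a 256-by-128-bit long division, and that bookkeeping is exactly what arbitrary-precision libraries do for you.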
The usual solution is to switch to structures that support arbitrary-precision arithmetic, often referred to as "bignums" or "bigints". However, they are significantly less performant than using native integer types.
In Rust, you can use the num-bigint crate:
extern crate num_bigint;

use num_bigint::BigUint;

fn main() {
    let a: u128 = 234234234234991231;
    let b: u128 = 989087987;
    let c: u128 = 123;

    let big_a: BigUint = a.into();
    let big_b: BigUint = b.into();
    let big_c: BigUint = c.into();

    let answer = big_a * big_b / big_c;
    println!("answer: {}", answer);
    // answer: 1883563148178650094572699
}
Related
Is there an elegant way to do linear interpolation using integers? (I want to average ADC measurements on a microcontroller; the ADC measurements are 12-bit, and the microcontroller works fine with 32-bit integers.) The coefficient f is in the [0, 1] range.
float lerp(float a, float b, float f)
{
    return a + f * (b - a);
}
Well, since you have so many extra integer bits to spare, a solution using ints would be:
Use an integer for your parameter F, with F from 0 to 1024 instead of a float from 0 to 1. Then you can just do:
(A*(1024-F) + B * F) >> 10
without risk of overflow.
In fact, if you need more resolution in your parameter, you can pick the maximum value of F as any power of 2 up to 2**19 (if you are using unsigned ints; 2**18 otherwise).
This doesn't do a good job of rounding (it truncates instead) but it only uses integer operations, and avoids division by using the shift operator. It still requires integer multiplication, which a number of MCUs don't have hardware for, but hopefully it won't be too bad.
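As a concrete illustration of the scheme above (a sketch of mine in Rust; the name lerp_u32 is not from the answer), with 12-bit ADC values every intermediate stays far below 2^32:

// Integer lerp: f runs from 0 to 1024 and stands in for the float in [0, 1].
// With 12-bit inputs (0..=4095), each product is at most 4095 * 1024 < 2^22,
// so 32-bit arithmetic cannot overflow.
fn lerp_u32(a: u32, b: u32, f: u32) -> u32 {
    (a * (1024 - f) + b * f) >> 10
}

For example, lerp_u32(1000, 2000, 512) returns 1500, the midpoint.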
I am trying to write a program to check whether a number N can be expressed as the sum of two cubes, i.e. N = a^3 + b^3.
This is my code, which does O(N^(1/3)) work per test case:
#include <iostream>
#include <cmath>
#define ll unsigned long long
using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    ll t, N;
    cin >> t;
    while (t--)
    {
        cin >> N;
        bool flag = false;
        // The smaller cube is at most N/2, so i runs up to cbrt(N/2).
        // (+1 guards against the cube root being rounded down.)
        ll limit = (ll)cbrtl(N / 2) + 1;
        for (ll i = 1; i <= limit && i * i * i <= N; i++)
        {
            ll rem = N - i * i * i;
            ll r = (ll)cbrtl(rem);
            // Verify with integer arithmetic instead of trusting the
            // floating-point cube root to be exactly integral.
            if (r * r * r == rem || (r + 1) * (r + 1) * (r + 1) == rem)
            {
                flag = true;
                break;
            }
        }
        cout << (flag ? "Yes\n" : "No\n");
    }
    return 0;
}
As the time limit is 2 s, this program is giving TLE. Can anyone suggest a faster approach?
I also posted this on Stack Exchange, so sorry if you consider it a duplicate, but I really don't know whether these are the same or different boards (Exchange and Overflow); my profile appears different here.
==========================
There is a faster algorithm to check whether a given integer is a sum (or difference) of two cubes, n = a^3 + b^3.
I don't know if this algorithm is already known (probably yes, but I can't find it in books or on the internet). I discovered it and use it to check integers up to n < 10^18.
This process uses a single trick:
4(a^3+b^3)/(a+b) = (a+b)^2 + 3(a-b)^2
We don't know in advance what a and b will be, and hence neither (a+b), but we do know that (a+b) must divide (a^3+b^3). So if you have a fast prime-factorization routine, you can quickly enumerate the divisors of (a^3+b^3) and check, for each one, whether
(4(a^3+b^3)/divisor - divisor^2)/3 = square
When (and if) you find a square, you have divisor = (a+b) and sqrt(square) = (a-b), so you have a and b.
If no square is found, the number is not a sum of two cubes.
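As a sketch of that divisor test (my Rust rendering; the original post has no code), assuming n fits in u128 and using a floating-point square root followed by exact integer verification, which is adequate for the n < 10^18 range discussed here:

// Given n and a candidate divisor d = a + b, check whether
// (4n/d - d^2) / 3 is a perfect square; if so, recover (a, b).
fn check_divisor(n: u128, d: u128) -> Option<(u128, u128)> {
    if d == 0 || n % d != 0 {
        return None;
    }
    let q = 4 * (n / d); // q = 4n/d = (a+b)^2 + 3(a-b)^2
    if q < d * d {
        return None;
    }
    let t = q - d * d;
    if t % 3 != 0 {
        return None;
    }
    let s = t / 3; // should equal (a - b)^2
    // Approximate sqrt, then verify exactly (a true integer sqrt would be
    // needed if s approached the top of the u128 range).
    let r0 = (s as f64).sqrt() as u128;
    let r = [r0.saturating_sub(1), r0, r0 + 1].into_iter().find(|&r| r * r == s)?;
    // a + b = d and a - b = r, so a = (d + r) / 2 and b = (d - r) / 2.
    if d < r || (d + r) % 2 != 0 {
        return None;
    }
    Some(((d + r) / 2, (d - r) / 2))
}

For example, check_divisor(35, 5) returns Some((3, 2)), since 3^3 + 2^3 = 35.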
We know divisor <= (4(a^3+b^3))^(1/3), and this limit speeds up the task: when assembling the divisors of (a^3+b^3), you can immediately discard those greater than the limit.
Now some comparisons with other algorithms: for n = 10^18, using brute force you would have to test all numbers below 10^6 to know the answer. On the other hand, to factor a number up to 10^18 you need primes up to 10^9 (trial division up to the square root).
A number below 10^18 has at most 15 distinct prime factors, since the product of the first 15 primes is already about 6.1*10^17 (the product of the first 10, 2*3*5*7*11*13*17*19*23*29, is about 6.5*10^9). So in the worst case there are 2^15 - 1 = 32767 combinations of primes (which assemble the divisors) to check, many of them discarded because of the limit.
To compute prime factors I use a table of the first 60,000,000 primes, which works very well in this range.
Miguel Velilla
To find all pairs of integers x and y whose cubes sum to n: set x to the largest integer whose cube is at most n, and set y to 0. Then repeatedly add 1 to y if the sum of the cubes is less than n, subtract 1 from x if the sum of the cubes is greater than n, and output the pair otherwise, stopping when x and y cross. If you only want to know whether or not such a pair exists, you can stop as soon as you find one.
Let us know if you have trouble coding this algorithm.
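A minimal Rust sketch of that scan (mine; assuming n fits in u64, with the cube sums widened to u128 so they cannot overflow):

fn sum_of_two_cubes(n: u64) -> Option<(u64, u64)> {
    // Start x at the cube root of n, corrected for floating-point rounding.
    let mut x = (n as f64).cbrt().round() as u64;
    while (x as u128).pow(3) > n as u128 {
        x -= 1;
    }
    let mut y = 0u64;
    while y <= x {
        let s = (x as u128).pow(3) + (y as u128).pow(3);
        if s < n as u128 {
            y += 1; // sum too small: grow the smaller cube
        } else if s > n as u128 {
            x -= 1; // sum too large: shrink the larger cube
        } else {
            return Some((x, y)); // x^3 + y^3 == n
        }
    }
    None
}

If the problem requires a, b >= 1, start y at 1 instead of 0.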
I was solving a math problem: computing the sum of the digits of the number 2^1000.
In Java, the solution looks like this:
String temp = BigInteger.ONE.shiftLeft(1000).toString();
int sum = 0;
for (int i = 0; i < temp.length(); i++)
    sum += temp.charAt(i) - '0';
Then I came up with a solution in Haskell, like this:
digitSum :: (Integral a) => a -> a
digitSum 0 = 0
digitSum n = (mod n 10) + (digitSum (div n 10))
The whole process went smoothly, but one point seems interesting: we know a fixed-size integer type cannot handle 2^1000; it is far too big. In Java, the obvious approach is to use BigInteger and treat the big number as a string, but in Haskell there were no compile errors, meaning 2^1000 can be passed in directly. Does Haskell transform the number into a string internally? I wanted to see what the type is and let the compiler determine it, so I typed the following lines into GHCi:
Prelude> let i = 2 ^ 1000
Prelude> i
107150860718626732094842504906000181056140481170553360744375038837035105112493612249319
837881569585812759467291755314682518714528569231404359845775746985748039345677748242309
854210746050623711418779541821530464749835819412673987675591655439460770629145711964776
86542167660429831652624386837205668069376
Prelude> :t i
i :: Integer
Here I was totally confused: apparently the value of i is enormous, yet its type is still Integer. How can this be explained, and what is the upper bound or limit of Haskell's Integer?
In Haskell, Integer is a - theoretically - unbounded integer type. Fixed-width types are Int, Int8, Int16, Int32, Int64 and the corresponding unsigned Word, Word8 etc.
In practice, even Integer is of course bounded, by the available memory for instance, or by the internal representation.
By default, GHC uses the GMP package to represent Integer, and that means the bound is 2^(2^37) or so, since GMP uses a 32-bit integer to store the number of limbs.
EDIT
So it seems I "underestimated" what varying-length numbers meant. I didn't even think about situations where the operands are 100 digits long. In that case, my proposed algorithm is definitely not efficient. I'd probably need an implementation whose complexity depends on the number of digits in each operand, as opposed to its numerical value, right?
As suggested below, I will look into the Karatsuba algorithm...
Write the pseudocode of an algorithm that takes in two arbitrary length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.
I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:
a * b = (a/2) * 2b        if a is even
a * b = ((a-1)/2) * 2b + b if a is odd
My pseudocode is:
rpa(x, y) {
    if x is 1
        return y
    if x is even
        return rpa(x/2, 2y)
    if x is odd
        return rpa((x-1)/2, 2y) + y
}
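For reference, a direct Rust rendering of the pseudocode above (my sketch, valid for machine-width integers with x >= 1; it does not yet address the string-input requirement discussed below):

fn rpa(x: u64, y: u64) -> u64 {
    if x == 1 {
        y // 1 * y == y
    } else if x % 2 == 0 {
        rpa(x / 2, 2 * y) // x * y == (x/2) * (2y)
    } else {
        rpa((x - 1) / 2, 2 * y) + y // x * y == ((x-1)/2) * (2y) + y
    }
}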
I have 3 questions:
Is this efficient for arbitrary-length numbers? I implemented it in C and tried numbers of varying length. The run-time was near-instant in all cases, so it's hard to tell empirically...
Can I apply the Master Theorem to understand the complexity...?
a = # subproblems in recursion = 1 (at most one recursive call per step)
n/b = size of each subproblem = n/2 -> b = 2 (the value is halved on each call)
f(n) = work done outside recursive calls = O(1) -> d = 0 (the addition when x is odd)
a = 1, b^d = 2^0 = 1, a = b^d -> complexity is O(n^d * log n) = O(log n)
This makes sense logically, since we are halving the problem at each step, right?
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
Many thanks in advance
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
This actually changes everything about the problem (and makes your algorithm incorrect).
It means that 1234 is provided as 1,2,3,4 and you cannot operate directly on the whole number. You need to analyze your algorithm in terms of the number of additions, multiplications, and divisions it performs on digits.
You should expect a division to be a bit more expensive than a multiplication, and a multiplication to be a lot more expensive than an addition. So a good algorithm tries to reduce the number of divisions and multiplications.
Check out the Karatsuba algorithm (but don't just copy it; that's not what your teacher wants); it is one of the fastest for this kind of problem.
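Before Karatsuba, it helps to have the O(n^2) schoolbook method on digit arrays as a baseline, since that is what Karatsuba improves on. A minimal sketch (mine, in Rust), taking the numbers as decimal strings as the assignment specifies:

// Schoolbook multiplication over decimal digits, least-significant first.
fn multiply_digit_strings(x: &str, y: &str) -> String {
    let a: Vec<u32> = x.chars().rev().map(|c| c.to_digit(10).unwrap()).collect();
    let b: Vec<u32> = y.chars().rev().map(|c| c.to_digit(10).unwrap()).collect();
    let mut out = vec![0u32; a.len() + b.len()];
    for (i, &da) in a.iter().enumerate() {
        let mut carry = 0;
        for (j, &db) in b.iter().enumerate() {
            let t = out[i + j] + da * db + carry;
            out[i + j] = t % 10; // keep one digit, carry the rest
            carry = t / 10;
        }
        out[i + b.len()] += carry;
    }
    // Trim leading zeros and put the most significant digit first again.
    while out.len() > 1 && *out.last().unwrap() == 0 {
        out.pop();
    }
    out.iter().rev().map(|d| char::from_digit(*d, 10).unwrap()).collect()
}

For example, multiply_digit_strings("1234", "56") returns "69104". Counting operations, the nested loops perform O(n*m) single-digit multiplications and additions, which is exactly the cost Karatsuba reduces.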
Regarding 3): Native integers are limited in how large (or small) a number they can represent (32- or 64-bit integers, for example). To represent numbers of arbitrary length you can choose strings, because then you are not really limited by this. The problem is then, of course, that your arithmetic units are not really made to add strings ;-)
Trying to figure out this pseudocode. The following is assumed:
I can only use unsigned and signed integers (or longs).
Division returns a whole number, with no remainder.
MOD returns a whole number.
Fractions and decimals are not handled.
INT I = 41828;
INT C = 15;
INT D = 0;
D = (I / 65535) * C;
How would you handle a fraction (or decimal value) in this situation? Is there a way to use a negative value to represent the remainder?
In this example, I/65535 should be 0.638; however, with the limitations above, I get 0 with a MOD of 638. How can I then multiply by C to get the correct answer?
Hope that makes sense.
MOD here would actually return 41828, not 638: since 41828 is less than 65535, the quotient is 0 and the remainder is all of I.
If you were to switch your order of operations on that last line, you would get the integer answer you're looking for (9, if my calculations are correct)
D = (I * C) / 65535
/* D == 9 */
Is that the answer you're looking for?
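In Rust-flavored terms (a sketch of mine, not the asker's environment), the reordering works as long as the product is formed at a width above 16 bits, since 41828 * 15 = 627420 already overflows a 16-bit integer:

// D = (I * C) / 65535, with the intermediate product widened to 32 bits.
fn scale(i: u16, c: u16) -> (u32, u32) {
    let product = i as u32 * c as u32; // at most 65535^2, needs 32 bits
    (product / 65535, product % 65535) // (quotient, remainder for display)
}

Here scale(41828, 15) returns (9, 37605).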
Well, one way to handle decimals is this replacement division function. There are numerous obvious downsides to this technique.
ALT DIV (dividend, divisor) returns (decimal, point)
    for point = 0 to 99
        if dividend mod divisor = 0
            return (dividend / divisor, point)
        dividend = dividend * 10
    return (dividend / divisor, 100)
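A Rust sketch of ALT DIV (names mine; a checked multiplication is added so the repeated scaling by 10 stops cleanly instead of overflowing; divisor must be nonzero):

fn alt_div(mut dividend: u128, divisor: u128) -> (u128, u32) {
    for point in 0..100 {
        if dividend % divisor == 0 {
            return (dividend / divisor, point); // divides evenly after `point` shifts
        }
        match dividend.checked_mul(10) {
            Some(d) => dividend = d,
            None => return (dividend / divisor, point), // out of headroom, stop early
        }
    }
    (dividend / divisor, 100)
}

The result reads as quotient * 10^(-point); for example, alt_div(1, 8) returns (125, 3), i.e. 0.125.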
Assuming these are the values you're always using for this computation, then I would do something like:
D = I / (65535 / C);
or
D = I / 4369;
since C is a factor of 65535 (65535 = 15 * 4369). This helps reduce the possibility of overrunning the available range of integers (e.g. if you've only got 16-bit unsigned ints).
In the more general case, if you think there's a risk that the multiplication of I and C will produce a value outside the allowed range of the integer type you're using (even if the final result would be inside that range), you can factor out the GCD of the numerator and denominator, as in:
INT I = 41828;
INT C = 15;
INT DEN = 65535;
INT GCDI = GCD(I, DEN);
DEN = DEN / GCDI;
I = I / GCDI;
INT GCDC = GCD(C, DEN);
DEN = DEN / GCDC;
C = C / GCDC;
INT D = (I * C) / DEN;
where DEN is your denominator (65535 in this case). This will not give the correct answer in all cases, especially when I and C are both coprime to DEN and I*C > MAX_INT.
As to the larger question you raise: division of integer values always loses the decimal component (it is equivalent to the floor function). The only way to preserve the information in what we think of as the "decimal" part is through the remainder, which can be derived from the modulus. I highly encourage you not to mix the meanings of these different number systems: integers are just that, integers. If you need floating-point numbers, you should really be using floats, not ints. If all you're interested in is displaying the decimal part to the user (i.e. you're not using it for further computation), you could write a routine to convert the remainder into a character string representing the fractional part.
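For that last suggestion, here is a sketch (mine, in Rust) of turning the quotient and remainder into a decimal string using only integer operations:

// Render num/den with `digits` fractional digits by long division:
// multiply the remainder by 10 and peel off one digit at a time.
// Truncates rather than rounds; den must be nonzero.
fn format_fraction(num: u64, den: u64, digits: u32) -> String {
    let mut s = format!("{}.", num / den); // integer part
    let mut rem = num % den;
    for _ in 0..digits {
        rem *= 10; // safe while den < u64::MAX / 10
        s.push(char::from_digit((rem / den) as u32, 10).unwrap());
        rem %= den;
    }
    s
}

With the numbers from the question, format_fraction(41828, 65535, 3) yields "0.638", the value the asker expected from I/65535.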