So let's say we bitshift 1 by some number x; e.g., in C:
unsigned char cNum = 1, x = 6;
cNum <<= x;
cNum will equal 01000000b (0x40).
Easy peasy. But without using a lookup table or while loop, is there a simple operation that will take cNum and give me x back?
AFAIK, no 'simple' formula is available.
One can, however, calculate the index of the most significant (or least significant) set bit:
a = 000010010, a_left = 4, a_right = 1
b = 001001000, b_left = 6, b_right = 3
The difference of the shifts is 2 (or -2).
One can then shift the smaller by abs(shift) to compare that a << 2 == b. (In some architectures there exists a shift by signed value, which works without absolute value or checking which way the shift needs to be carried.)
In ARM there is an instruction for counting leading zeros (CLZ, and VCLZ in NEON), and Intel has instructions to scan for the lowest and highest set bit (BSF/BSR).
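On GCC and Clang these map onto builtins; a minimal C++ sketch (assuming those builtins, and cNum != 0):
#include <cstdio>

int main()
{
    unsigned char cNum = 1, x = 6;
    cNum <<= x;

    // __builtin_ctz / __builtin_clz are undefined for 0, so only call them when cNum != 0.
    int lsb = __builtin_ctz(cNum);       // index of the lowest set bit
    int msb = 31 - __builtin_clz(cNum);  // index of the highest set bit (assuming 32-bit unsigned int)

    std::printf("%d %d\n", lsb, msb);    // both print 6 here
    return 0;
}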
log2(cNum) will yield x where cNum != 0, at least in GNU C.
And the compiler does the casts automagically, which is probably bad form, but it gives me what I need.
I've been working on a hex calculator for a while, but I seem to be stuck on the subtraction portion, particularly when B>A. I'm trying to simply subtract two positive integers and display the result. It works fine for A>B and A=B. So far I'm able to use two 7-segment displays to show the integers to be subtracted, and I get the proper difference as long as A>=B.
When B>A I see a pattern that I'm not able to debug because of my limited knowledge of Verilog case/if-else statements. Forgive me if I'm not explaining this the best way, but what I'm observing is that once the first number, A, "reaches" 0 (after being subtracted from) it loops back to F. The remainder of B is then subtracted from F rather than 0.
For example: If A=1, B=3
A - B =
1 - 1 = 0
0 - 1 = F
F - 1 = E
Another example could be 4-8=C
Below are the important snippets of code I've put together thus far.
First, my subtraction statement
always @*
begin
    Cout1 = 7'b1000000; // 0
    case (PrintDifference[3:0])
        4'b0000 : Cout0 = 7'b1000000; // 0
        4'b0001 : Cout0 = 7'b1111001; // 1
        ...
        4'b1110 : Cout0 = 7'b0000110; // E
        4'b1111 : Cout0 = 7'b0001110; // F
    endcase
end
My subtraction is pretty straightforward:
output [4:0] Difference;
output [4:0] PrintDifference;
assign PrintDifference = A - B;
I was thinking I could just do something like:
if A >= B, Difference = A - B
else, Difference = B - A
Thank you everyone in advance!
This is expected behaviour of two's complement addition/subtraction, which I would recommend reading up on since it is so essential.
The result obtained can be changed back into an unsigned form by inverting all the bits and adding one. Checking the most significant bit will tell you if the number is negative or not.
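The wraparound in the question is exactly that. Here is a small C++ sketch (not Verilog, just an illustration of the same 4-bit arithmetic) showing the wrap and how inverting the bits and adding one recovers the magnitude:
#include <cstdio>

int main()
{
    unsigned A = 0x1, B = 0x3;
    unsigned raw = (A - B) & 0xF;        // 1 - 3 wraps to 0xE, just like the display shows
    bool negative = raw & 0x8;           // MSB set -> negative in 4-bit two's complement
    unsigned magnitude = negative ? ((~raw + 1) & 0xF) : raw;   // invert and add one
    std::printf("raw=%X negative=%d magnitude=%X\n", raw, negative, magnitude);   // raw=E negative=1 magnitude=2
    return 0;
}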
Original problem:
Let N be a positive integer (actually, N <= 2000) and P the set of all partitions of N, i.e. N = k_1 + k_2 + ... + k_m with k_1 >= k_2 >= ... >= k_m >= 1. Let A be the number of partitions in which the largest part is less than twice the smallest part (k_1 < 2*k_m). Find A.
Input: N. Output: A, the number of such partitions.
What have I tried:
I think that this problem can be solved by a dynamic-programming algorithm. Let p(n,a,b) be the function which returns the number of partitions of n using only the numbers a..b. Then we can compute A with code like:
int Ans = 2; // the 1+1+...+1=N & N=N partitions
for(int a = 2; a <= N/2; a += 1){ // a - from 2 to N/2
    int b = a*2-1;
    Ans += p[N][a][b]; // add all partitions using a..b to Answer
    if(a < (a-1)*2-1){ // if a < previous b [ (a-1)*2-1 ]
        Ans -= p[N][a][(a-1)*2-1]; // then we counted the number of partitions
    }                              // using numbers a..prev_b twice
}
Next I tried to find a dynamic-programming algorithm for computing p(n,a,b) for any integers a <= b <= n. This paper (.pdf) provides a recurrence for p(n,a,b) in which I(n<=b) = 1 if n<=b and 0 otherwise.
Question(s):
How should I implement the algorithm from the paper? I'm new to dynamic-programming problems, and as I can see, this problem has 3 dimensions (n, a and b), which is quite tricky for me.
How does that algorithm actually work? I know how the algorithms for computing p(n,0,b) or p(n,a,n) work, but a little explanation of p(n,a,b) would be very helpful.
Does the original problem have a simpler solution? I'm quite sure that there's another clean solution, but I haven't found it.
I calculated all of A(1)-A(600) in 23 seconds with a memoization approach (top-down dynamic programming). The 3D table requires 1.7 GB of memory.
For reference: A(50) = 278, A(200) = 465202, A(600) = 38860513616.
N = 2000 requires too large a table for a 32-bit environment, and a map-based approach worked too slowly.
I can make a 2D table of reasonable size, but this approach requires zeroing the table at every iteration of the external loop - slow again.
A(1000) = 107292471486730 in 131 sec. And I think that long arithmetic might be needed for larger values to avoid Int64 overflow.
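If it helps to see the shape of a top-down implementation, here is a minimal C++ sketch of p(n, a, b). It memoizes the standard recurrence p(n,a,b) = p(n,a+1,b) + p(n-a,a,b) (split on whether the part a is used at all), which is not necessarily the exact recurrence from the paper, and the table is sized for small N only, since memory is the real bottleneck:
#include <cstdio>
#include <cstring>

const int MAXN = 60;                      // keep the toy table small (~1.8 MB)
long long memo[MAXN + 1][MAXN + 2][MAXN + 1];

// number of partitions of n into parts drawn from [a, b]
long long p(int n, int a, int b)
{
    if (n == 0) return 1;                 // the empty partition
    if (a > b || a > n) return 0;         // no usable part fits
    long long &res = memo[n][a][b];
    if (res != -1) return res;
    return res = p(n, a + 1, b)           // partitions that never use part a
               + p(n - a, a, b);          // partitions that use part a at least once
}

int main()
{
    memset(memo, -1, sizeof memo);        // -1 marks "not computed yet"
    // partitions of 10 with parts in {2, 3}: 2+2+2+2+2 and 2+2+3+3, so this prints 2
    std::printf("%lld\n", p(10, 2, 3));
    return 0;
}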
I am trying to write a program to check whether a number N can be expressed as the sum of two cubes i.e. N = a^3 + b^3
This is my code, with complexity O(N^(1/3)) per test case:
#include <iostream>
#include <math.h>
#define ll unsigned long long
using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    bool flag = false;
    ll t, N;
    cin >> t;
    while (t--)
    {
        cin >> N;
        flag = false;
        for (ll i = 1; i <= (ll)cbrtl(N / 2); i++)   // ll, not int: i*i*i overflows int for large N
        {
            ll rem = N - i * i * i;
            if (cbrtl(rem) == (ll)cbrtl(rem))        // cube root of the remainder is an integer
            {
                flag = true;
                break;
            }
        }
        if (flag) cout << "Yes\n"; else cout << "No\n";
    }
    return 0;
}
The time limit is 2 s and this program is giving TLE. Can anyone suggest a faster approach?
I posted this also on Stack Exchange, so sorry if you consider it a duplicate, but I really don't know if these are the same or different boards (Exchange and Overflow). My profile appears different here.
==========================
There is a faster algorithm to check if a given integer is a sum (or difference) of two cubes n = a^3 + b^3.
I don't know if this algorithm is already known (probably yes, but I can't find it in books or on the internet). I discovered it and use it to test integers up to n < 10^18.
This process uses a single trick:
4(a^3+b^3)/(a+b) = (a+b)^2 + 3(a-b)^2
We don't know in advance what a and b would be, and hence what (a+b) would be, but we do know that (a+b) must divide (a^3+b^3). So if you have a fast prime factorization routine, you can quickly generate each of the divisors of (a^3+b^3) and then check whether
(4(a^3+b^3)/divisor - divisor^2)/3 = square
If you find a square, you have divisor = (a+b) and sqrt(square) = (a-b), so you have a and b.
If no square is found, the number is not a sum of two cubes.
We know divisor < (4(a^3+b^3))^(1/3), and this limit speeds up the task, because when you are assembling the divisors of (a^3+b^3) you can immediately discard those greater than the limit.
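A C++ sketch of this test, with plain trial division standing in for the fast prime-factorization step, so it only illustrates the idea for moderate n (the function names and structure are mine, not part of the original description):
#include <cstdio>
#include <cmath>
#include <cstdint>

// Checks whether v is a perfect square; sqrt() can be off by one, so test neighbours too.
static bool isSquare(uint64_t v, uint64_t &root)
{
    uint64_t r = (uint64_t)std::sqrt((double)v);
    for (uint64_t s = (r > 0 ? r - 1 : 0); s <= r + 1; ++s)
        if (s * s == v) { root = s; return true; }
    return false;
}

// Returns true and fills a, b if n = a^3 + b^3 with a >= b >= 1. d plays the role of (a+b).
static bool sumOfTwoCubes(uint64_t n, uint64_t &a, uint64_t &b)
{
    uint64_t limit = (uint64_t)std::cbrt(4.0 * (double)n) + 2;   // (a+b) <= (4n)^(1/3)
    for (uint64_t d = 2; d <= limit; ++d) {
        if (n % d) continue;                       // (a+b) must divide n
        if (4 * (n / d) < d * d) continue;
        uint64_t t = 4 * (n / d) - d * d;          // equals 3(a-b)^2 when d = a+b
        if (t % 3) continue;
        uint64_t s;
        if (!isSquare(t / 3, s)) continue;         // s is the candidate (a-b)
        if ((d + s) % 2 || s >= d) continue;       // need integer a, b with b >= 1
        a = (d + s) / 2;
        b = (d - s) / 2;
        if (a * a * a + b * b * b == n) return true;
    }
    return false;
}

int main()
{
    uint64_t a, b, n = 1729;                       // 1729 = 12^3 + 1^3
    if (sumOfTwoCubes(n, a, b))
        std::printf("%llu = %llu^3 + %llu^3\n",
                    (unsigned long long)n, (unsigned long long)a, (unsigned long long)b);
    return 0;
}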
Now some comparisons with other algorithms: for n = 10^18, using brute force you would have to test all numbers below 10^6 to know the answer. On the other hand, to build all divisors of 10^18 you need primes up to 10^9.
The maximum number of different primes you could fit in this range is about 10 (2*3*5*7*11*13*17*19*23*29 is roughly 6.5*10^9), so in the worst case we have 2^10 - 1 different combinations of primes (which assemble the divisors) to check, many of them discarded because of the limit.
To compute prime factors I use a table with the first 60,000,000 primes, which works very well in this range.
Miguel Velilla
To find all the pairs of integers x and y that sum to n when cubed, set x to the largest integer no greater than the cube root of n and set y to 0; then repeatedly add 1 to y if the sum of the cubes is less than n, subtract 1 from x if the sum of the cubes is greater than n, and output the pair otherwise, stopping when x and y cross. If you only want to know whether or not such a pair exists, you can stop as soon as you find one.
Let us know if you have trouble coding this algorithm.
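A possible C++ rendering of that walk, as a sketch:
#include <cstdio>
#include <cmath>
#include <cstdint>

bool isSumOfTwoCubes(uint64_t n)
{
    uint64_t x = (uint64_t)std::cbrt((double)n);
    while (x > 0 && x * x * x > n) --x;                    // fix cbrt() rounding up
    while ((x + 1) * (x + 1) * (x + 1) <= n) ++x;          // fix cbrt() rounding down
    uint64_t y = 0;
    while (y <= x) {
        uint64_t s = x * x * x + y * y * y;
        if (s == n) return true;                 // found a pair (x, y)
        if (s < n) ++y;                          // too small: grow the smaller cube
        else       --x;                          // too big: shrink the larger cube
    }
    return false;
}

int main()
{
    std::printf("%d %d\n", isSumOfTwoCubes(1729), isSumOfTwoCubes(1730));   // prints 1 0
    return 0;
}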
I created the following simple MATLAB functions to convert a number from an arbitrary base to decimal and back.
This is the first one:
function decNum = base2decimal(vec, base)
decNum = vec(1);
for d = 1:1:length(vec)-1
decNum = decNum*base + vec(d+1);
end
and here is the other one:
function baseNum = decimal2base(num, base, Vlen)
ii = 1;
if num == 0
baseNum = 0;
end
while num ~= 0
baseNum(ii) = mod(num, base);
num = floor(num./base);
ii = ii+1;
end
baseNum = fliplr(baseNum);
if Vlen>(length(baseNum))
baseNum = [zeros(1,(Vlen)-(length(baseNum))) baseNum ];
end
Because there are limits to how big a number can be, these functions can't successfully convert very big vectors, but while testing them I noticed the following bug.
Let's use the following test script:
num = 201;
pCount = 7
x=base2decimal(repmat(num-1, 1, pCount), num)
repmat(num-1, 1, pCount)
y=decimal2base(x, num, 1)
isequal(repmat(num-1, 1, pCount),y)
A supposed vector with seven (7) digits in base 201 works fine, but the same vector in base 200 does not return the expected result, even though it is smaller and theoretically should be converted successfully.
(One preliminary comment: calling base2decimal won't result in a decimal number but rather in a number :-D)
This is due to the limited precision of floating point (in our case, double). To test it, just type at the MATLAB Command Window:
>> 200^7 - 1 == 200^7
ans =
1
>> mod(200^7 - 1, 200)
ans =
0
which means that the value of your number in base 200 (which is precisely 200^7 - 1) is represented exactly as 200^7, and the "true" value of the representation is 200^7.
On the other hand:
>> 201^7 - 1 == 201^7
ans =
1
so still the two numbers are represented the same, but
>> mod(201^7 - 1, 201)
ans =
200
which means that the two values share the "true" representation of 201^7 - 1, which, by accident, is the value that you expected.
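The same experiment in C++, for anyone who wants to reproduce it outside MATLAB (assuming ordinary IEEE-754 doubles and a correctly rounded pow):
#include <cstdio>
#include <cmath>

int main()
{
    double a = std::pow(200.0, 7) - 1;   // 200^7 - 1 is odd and > 2^53, so it rounds to 200^7
    double b = std::pow(201.0, 7) - 1;   // 201^7 - 1 is even and happens to be representable

    std::printf("%d\n", a == std::pow(200.0, 7));   // 1: the two values collapse into one double
    std::printf("%.0f\n", std::fmod(a, 200.0));     // 0   (a behaves like 200^7)
    std::printf("%.0f\n", std::fmod(b, 201.0));     // 200 (b behaves like 201^7 - 1)
    return 0;
}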
TL;DR
When stored in a double, 200^7 - 1 is inaccurately represented as 200^7, while 201^7 - 1 is accurately represented.
"Bigger numbers are less accurately represented than smaller numbers" is a misconception: if it was true, there would be no big numbers that could be exactly represented.
Judging from your own observations:
The code works fine in most cases
The code can give small errors for large numbers
The suspect is apparent:
Rounding issues seem to give you headaches here. This is also illustrated by #RTL in the comments.
The first question should now be:
1. Do you need perfect accuracy for such large numbers? Or is it ok if it is off by a relatively small amount sometimes?
If you do need perfect accuracy, I would recommend trying a different storage format.
The simple solution would be to use big integers:
uint64
The alternative would be to make your own storage format. This is required if you need even bigger numbers. I think you can cover a huge range with a cell array and some tricks, but of course it is going to be hard to combine those numbers afterwards without losing the accuracy that you worked so hard for.
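As an illustration of the integer route (sketched in C++ here rather than MATLAB), 64-bit integers carry the failing base-200 test case exactly:
#include <cstdio>
#include <cstdint>
#include <vector>

int main()
{
    const uint64_t base = 200;
    std::vector<uint64_t> digits(7, base - 1);        // the failing test case: seven 199s

    uint64_t num = 0;
    for (uint64_t d : digits) num = num * base + d;   // base2decimal, Horner style

    std::vector<uint64_t> back;                       // decimal2base
    for (uint64_t n = num; n != 0; n /= base) back.insert(back.begin(), n % base);

    std::printf("%llu\n", (unsigned long long)num);   // 12799999999999999, i.e. 200^7 - 1, exact
    for (uint64_t d : back) std::printf("%llu ", (unsigned long long)d);   // 199 printed seven times
    std::printf("\n");
    return 0;
}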
I could just use division and modulus in a loop, but this is slow for really large integers. The number is stored in base two, and may be as large as 2^8192. I only need to know if it is a power of ten, so I figure there may be a shortcut (other than using a lookup table).
If your number x is a power of ten then
x = 10^y
for some integer y, which means that
x = (2^y)(5^y)
So, shift the integer right until there are no more trailing zeroes (this should be a very low-cost operation) and count the number of bits shifted (call this k). Now check if the remaining number is 5^k. If it is, then your original number is a power of 10. Otherwise, it's not. Since 2 and 5 are both prime, this will always work.
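A word-sized C++ sketch of that test (the same idea carries over to a big-integer type for values up to 2^8192):
#include <cstdint>
#include <cstdio>

bool isPowerOfTen(uint64_t x)
{
    if (x == 0) return false;
    int k = 0;
    while ((x & 1) == 0) { x >>= 1; ++k; }   // strip the factors of two, counting them
    uint64_t p = 1;                          // now check whether what is left equals 5^k
    for (int i = 0; i < k; ++i) {
        if (p > x / 5) return false;         // 5^k would already exceed x
        p *= 5;
    }
    return p == x;
}

int main()
{
    std::printf("%d %d %d\n", isPowerOfTen(1), isPowerOfTen(1000000), isPowerOfTen(1024));  // 1 1 0
    return 0;
}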
Let's say that X is your input value, and we start with the assumption
X = 10 ^ Something
where Something is an integer.
Then we can say the following:
log10(X) = Something
So if X is a power of 10, then Something will be an integer.
Example
int x = 10000;
double test = Math.log10(x);
if (test == ((int) test))
    System.out.println("Is a power of 10");