I am trying to implement multi-precision arithmetic on native MIPS. Assume that one 64-bit integer is in registers $12 and $13 and another is in registers $14 and $15. The sum is to be placed in registers $10 and $11. The most significant word of each 64-bit integer is found in the even-numbered register, and the least significant word is found in the odd-numbered register. I read online that this is the shortest possible implementation.
addu $11, $13, $15 # add least significant word
sltu $10, $11, $15 # set carry-in bit
addu $10, $10, $12 # add in first most significant word
addu $10, $10, $14 # add in second most significant word
I just want to double-check that I understand correctly. The sltu checks whether
the sum of the two least significant words is smaller than or equal to one of
the operands. If that is the case, a carry occurred. Is this right?
To check whether a carry occurred when adding the two most significant
words and store the result in $9, I have to do:
sltu $9, $10, $12 # set carry-in bit
Does this make any sense?
The sltu checks whether the sum of the two least significant words is smaller than or equal to one of the operands.
Not quite: it sets $10 to 1 if the sum of the two least significant words is strictly less than one of the operands (considered as 32-bit unsigned values); and 0 if the sum is equal to, or greater than, that operand.
If that is the case, a carry occurred. Is this right?
Yes.
Consider what can happen when adding various possible values of b to some particular value a (where everything is an unsigned 32-bit value):
If overflow has not occurred, we must have a <= sum <= 0xFFFFFFFF, so 0 <= b <= (0xFFFFFFFF - a).
The remaining cases for b cause an overflow; the actual sum in these cases must be 0x100000000 <= sum <= a + 0xFFFFFFFF, which when truncated to 32 bits gives 0 <= sum <= a - 1.
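In C terms, that case analysis boils down to the following sketch (my wording, not the poster's code; assumes 32-bit unsigned wraparound):
#include <stdint.h>

/* 1 exactly when a + b overflows 32 bits: the sltu idiom in C */
int carry_out(uint32_t a, uint32_t b)
{
    uint32_t sum = a + b;  /* wraps modulo 2^32 */
    return sum < a;        /* by the case analysis above, sum < a iff overflow */
}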
To check whether a carry occurred when adding the two most significant words and store the result in $9
I have to do:
sltu $9, $10, $12 # set carry-in bit
Not quite.
The problem here is that you're adding two 32-bit values plus a possible carry from the sum of the least significant words. For example, consider the case where there is a carry and both most significant words are 0xFFFFFFFF: the sum will be 1 + 0xFFFFFFFF + 0xFFFFFFFF = 0x1FFFFFFFF, which truncates to 0xFFFFFFFF, and so a single sltu will not detect the carry (but it should).
One way to deal with this would be to check for carry after adding $12 to $10, and check again after adding $14 to that sum. Only one of those sums can produce a carry ($12 + $10 only overflows when $12 is 0xFFFFFFFF, because $10 is either 0 or 1; and in that case the sum is 0, so the second sum can't overflow as well).
So this might (disclaimer: it's late, and this is untested) do the trick:
addu $11, $13, $15
sltu $10, $11, $15 # carry from low word
addu $10, $10, $12
sltu $9, $10, $12 # possible carry from high word (1)
addu $10, $10, $14
sltu $8, $10, $14 # possible carry from high word (2)
or $9, $8, $9 # carry in result if either (1) or (2) were true (can't both be true at once)
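For cross-checking, here is the same scheme as a C sketch (my own translation of the assembly above, untested against real MIPS output; uint32_t pairs stand in for the register pairs):
#include <stdint.h>

/* (hi1:lo1) + (hi2:lo2) -> (*hi:*lo), with the 64-bit carry-out in *carry */
void add64(uint32_t hi1, uint32_t lo1, uint32_t hi2, uint32_t lo2,
           uint32_t *hi, uint32_t *lo, uint32_t *carry)
{
    *lo = lo1 + lo2;
    uint32_t c  = *lo < lo2;   /* carry from low word               */
    uint32_t t  = c + hi1;
    uint32_t c1 = t < hi1;     /* possible carry from high word (1) */
    *hi = t + hi2;
    uint32_t c2 = *hi < hi2;   /* possible carry from high word (2) */
    *carry = c1 | c2;          /* at most one of c1, c2 can be 1    */
}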
In assembly we can obtain negative numbers by subtracting the positive one from FFFFh and then adding one, so about half of a register's range is reserved for negative numbers.
And if we multiply two numbers and the result is too large for one register, the high part is placed in dx.
So, my question is: what is the largest positive number that can be stored in a register? Because when I'm multiplying different pairs of numbers, neither ax nor dx comes out the same.
You can interpret the same register contents either as a signed number or as an unsigned number. In the first case the maximum value is 0x7FFF... = 2^(regsize-1) - 1 (32767 for 16 bits); in the second case it is 0xFFFF... = 2^regsize - 1 (65535 for 16 bits).
(Note that there are imul and mul instructions for multiplication; the first takes signs into account.)
Hex byte example (result shown as high byte : low byte):
80 imul 2 = FF:00
80 mul 2 = 01:00
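The same widening behavior can be reproduced in C (my sketch, not generated code; this mimics the byte-operand forms, where the 16-bit product lands in the high:low byte pair):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  byte  = 0x80;                       /* -128 signed, 128 unsigned */
    int16_t  sprod = (int16_t)(int8_t)byte * 2;  /* imul: -256 = 0xFF00 */
    uint16_t uprod = (uint16_t)byte * 2;         /* mul:   256 = 0x0100 */
    printf("imul: %04X  mul: %04X\n", (uint16_t)sprod, uprod);
    return 0;
}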
Carry-save arithmetic uses twice the number of bits: one word holds the "virtual sum" and one holds the "virtual carry", to avoid propagating the carry, which is the limiting factor in hardware speed.
I have a system that requires dividing these numbers by powers of two, but simply right-shifting both components does not work in all cases. For example, take two 16-bit carry-save components which add to produce 4000: the virtual sum is C001 and the virtual carry is 7FFF.
C001 + 7FFF = 4000 (discard overflow bits)
but after right shift
6000 + 3FFF = 9FFF (when it should be 2000)
In short: How do you divide a carry save number by a power of two? (While keeping it a carry save number)
First, a right shift by 1 effectively divides by 2 while discarding the remainder. But the remainder can be needed to get the exact result. For instance, change your initial example to adding C000 to 8000, or C002 to 7FFE. Both give the same sum, but the sum of the shifted values is A000 instead of your 9FFF, which is definitely more correct. So you can only shift like this when losing the sum of the LSBs is acceptable; in your case, with 2 summands and a 1-bit shift, that means at most 1 summand may have a 1 in its LSB.
Second, suppose this is fixed and you've got A000. Ideal math says (a+b)/2 == a/2 + b/2. In your case, the carry bit you initially ignored weighed 0x10000, but after shifting by 1 it weighs 0x8000. That is exactly how A000 differs from your expected 2000. So, if you are sure about the other aspects of your method, finish it with a bitwise AND with ~0x8000 == 0x7FFF.
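A small C sketch of both fixes on the example (my own; 16-bit words, with the pair C000/8000 replacing C001/7FFF so no remainder is lost in the shift):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t vs = 0xC000, vc = 0x8000;   /* represents 4000, with even LSBs */
    uint16_t hs = vs >> 1;               /* 6000 */
    uint16_t hc = vc >> 1;               /* 4000 */
    uint16_t half = (hs + hc) & 0x7FFF;  /* A000 minus the shifted carry bit = 2000 */
    printf("%04X\n", half);
    return 0;
}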
There is a technique to correct the representation so that it is shiftable. It originates from the paper "Carry-save architectures for high-speed digital signal processing" by Tobias Noll. You can compute the new sign bits of the carry and sum vectors as
c' = c_out
s' = s xor c xor c_out
where s and c are the original sign-bits and c_out is the discarded carry-bit from the carry-save addition.
The sequence is:
00111011
How do I calculate the parity bit for the above sequence? This question is from Database Systems: The Complete Book by Jeffrey Ullman (Exercise 13.4.1 a).
I am not sure what the answer to this question should be.
Is it as simple as:
i) Even parity: the number of 1s is 5 (odd), so just append a 1, and the answer is 001110111.
ii) Odd parity: likewise, just append a 0: 001110110.
Or am I on a totally wrong path here? I looked this up on the net but could not find anything concrete. Also, the text for this question in the book is not clear.
Yes, your answers are correct. For the given sequence,
00111011
the odd-parity code is 001110110: the parity bit is zero, so the total number of 1s in the code is 5, an odd number.
The even-parity code is 001110111: the parity bit is one, so the total number of 1s in the code is 6, an even number.
You can also compute it with XOR: start with 0 and XOR in each bit of 00111011 from left to right:
0 XOR 0 = 0
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 1 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
0 XOR 1 = 1
The final result is the parity bit: 1 for even parity, 0 for odd parity. Append this bit as the LSB of the original number (00111011), giving 001110111.
unsigned char CalEvenParity(unsigned char data)
{
    unsigned char parity = 0;
    while (data) {
        parity ^= (data & 1);   /* fold the lowest bit into the parity */
        data >>= 1;             /* move on to the next bit */
    }
    return parity;              /* the even-parity bit for data */
}
Alternate implementation of parity:
This XORs together the bits of the number, one at a time.
x >> 1 right-shifts the value by 1 bit, and & 1 extracts the lowest bit of the number.
The parity of the entire sequence can be visualized as below, thanks to the properties of XOR:
1 ^ 0 ^ 1 is the same as (1 ^ 0) ^ 1, and we extend the same idea across all the bits.
def parity_val(x):
    parity = 0
    while x:
        parity ^= x & 1  # fold each bit into the running parity
        x = x >> 1
    return parity
This is probably well covered ground, but I'm ignorant on the subject so I'll be using amateurish terminology. Let's say I'm messing around with some set of conditions that each define a non-zero set of numbers within an int, let's just say 8-bit int. So for a bitwise one, I may have this:
11XX00XX
Saying that I want all bytes that have 1s where there are 1s, 0s where there are 0s, and don't care about the Xs. So 11110000 or 11000010 fulfills this, but 01110000 does not. Easy enough, right? For arithmetic conditions, I can only imagine there being some use of ==, !=, >=, >, <=, or < with comparison with a constant number. So I may say:
X > 16
So any number greater than 16 (00010000). What if I want to find all numbers that are in both of those example sets? I can tell by looking at it that any numbers ending in 100XX will fit the requirements, so the bitwise part of the intersection includes 11X100XX. Then I have to include the region 111X00XX to fill the rest of the range above it, too. Right? Although I think for other cases it wouldn't turn out so neatly, correct? Anyway, what is the algorithm behind this, for any of those arithmetic conditions versus any of those bitwise ones? Surely there must be a general case!
Assuming there is one, and it's probably something obvious, what if things get more complicated? What if my bitwise requirement becomes:
11AA00XX
Where anything marked with A must be the same. So 110000XX or 111100XX, but not 111000XX. For any number of same bit "types" (A, B, C, etc) in any number and at any positions, what is the optimal way of solving the intersection with some arithmetic comparison? Is there one?
I'm considering these bitwise operations to be sort of a single comparison operation/branch, just like how the arithmetic is setup. So maybe one is all the constants that, when some byte B 01110000 is ANDed with them, result in 00110000. So that region of constants, which is what my "bitwise condition" would be, would be X011XXXX, since X011XXXX AND 01110000 = 00110000. All of my "bitwise conditions" are formed by that reversal of an op like AND, OR, and XOR. Not sure if something like NAND would be included or not. This may limit what conditions are actually possible, maybe? If it does, then I don't care about those types of conditions.
Sorry for the meandering attempt at an explanation. Is there a name for what I'm doing? It seems like it'd be well-trodden ground in CS, so a name could lead me to some nice reading on the subject. But I'm mostly just looking for a good algorithm to solve this. I will end up using more than 2 things in the intersection (potentially dozens or many more), so a way to solve it that scales well would be a huge plus.
Extending a bit on saluce's answer:
Bit testing
You can build test patterns so you don't need to check each bit individually (testing the whole number at once is faster than testing one bit at a time, especially since the one-bit-at-a-time tests each read the whole number anyway):
testOnes = 128 | 64 // the bits we want to be 1
testZeros = ~(8 | 4) // the bits we want to be 0, inverted
Then perform your test this way:
if (!(~(x & testOnes) & testOnes) &&
!(~(~x | testZeros))) {
/* match! */
}
Logic explained:
First of all, in both testOnes and testZeros you have the bits-in-interest set to 1, the rest to 0.
testOnes testZeros
11000000 11110011
Then x & testOnes will zero out all bits we don't want to test for being ones (note the difference between & and &&: & is the bitwise AND operation, whereas && is the logical AND on whole truth values).
testOnes
11000000
x x & testOnes
11110000 11000000
11000010 11000000
01010100 01000000
Now at most the bits we are testing for being 1 can be 1, but we don't know if all of them are 1s. By inverting the result (~(x & testOnes)), all the bits we don't care about become 1s, and the bits we want to test become 0 (if they were 1) or 1 (if they were 0).
testOnes
11000000
x ~(x & testOnes)
11110000 00111111
11000010 00111111
01010100 10111111
By bitwise-AND-ing it with testOnes we get 0 if the bits-in-interest were all 1s in x, and non-zero otherwise.
testOnes
11000000
x ~(x & testOnes) & testOnes
11110000 00000000
11000010 00000000
01010100 10000000
At this point we have: 0 if all bits we wanted to test for 1 were actually 1s, and non-0 otherwise, so we perform a logical NOT to turn the 0 into true and the non-0 into false.
x !(~(x & testOnes) & testOnes)
11110000 true
11000010 true
01010100 false
The test for zero-bits is similar, but uses bitwise OR (|) instead of bitwise AND (&). First we flip x, so the should-be-0 bits become should-be-1. Then the OR turns all non-interest bits into 1 while keeping the rest, so at this point we have all 1s if the should-be-0 bits in x were indeed 0, and not-all-1s otherwise. We flip the result again to get 0 in the first case and non-0 in the second, and finally apply logical NOT (!) to convert the result to true (first case) or false (second case).
testZeros
11110011
x ~x ~x | testZeros ~(~x | testZeros) !(~(~x | testZeros))
11110000 00001111 11111111 00000000 true
11000010 00111101 11111111 00000000 true
01010100 10101011 11111011 00000100 false
Note: You need to realize that we have performed 4 operations for each test, so 8 total. Depending on the number of bits you want to test, this might still be less than checking each bit individually.
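Here is the whole test as a runnable C demo (my own; the three sample values are the ones used in the tables above):
#include <stdio.h>

int main(void)
{
    unsigned testOnes  = 128 | 64;    /* 11000000 */
    unsigned testZeros = ~(8u | 4u);  /* ...11110011 */
    unsigned xs[] = { 0xF0, 0xC2, 0x54 };
    for (int i = 0; i < 3; i++) {
        unsigned x = xs[i];
        int ones  = !(~(x & testOnes) & testOnes);  /* all want-1 bits set?   */
        int zeros = !(~(~x | testZeros));           /* all want-0 bits clear? */
        printf("%02X -> %s\n", x, (ones && zeros) ? "match" : "no match");
    }
    return 0;
}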
Arithmetic testing
Testing for equality/difference is easy: XOR the number with the tested one -- you get 0 if all bits were equal (thus the numbers were equal), non-0 if at least one bit was different (thus the numbers were different). (Applying NOT turns the equal test result true, differences to false.)
To test for inequality, however, you are out of luck most of the time, at least with logical operations alone. You are correct that powers of 2 (e.g. 16 in your question) can be checked with logical operations (bitwise AND and a test for 0), but for numbers that are not powers of 2 this is not so easy. As an example, let's test for x > 5: the pattern is 00000101, so how do you test? The number is greater if it has a 1 anywhere in the first 5 most significant bits, but 6 (00000110) is also larger with all of those five bits being 0.
The best you can do is test whether the number is at least twice as large as the highest power of 2 in the test number (4 for 5 in the above example). If yes, it is larger than the original; otherwise, it has to be at least as much as the highest power of 2 in the test number, and then you perform the same test on the less significant bits. As you can see, the number of operations is dynamic, based on the number of 1 bits in the test number.
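For contrast, here is the standard bit-by-bit comparison in C (my own sketch, not the exact procedure described above): scan from the most significant bit down, and the first position where x and the constant differ decides the comparison.
/* x > k for 8-bit values, using only single-bit tests (MSB-first scan) */
int greater_than(unsigned x, unsigned k)
{
    for (int bit = 7; bit >= 0; bit--) {
        unsigned xb = (x >> bit) & 1;
        unsigned kb = (k >> bit) & 1;
        if (xb != kb)
            return xb;   /* first differing bit decides */
    }
    return 0;            /* all bits equal: not greater */
}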
Linked bits
Here, XOR is your friend: for two bits XOR yields 0 if they are the same, and 1 otherwise.
I do not know of a simple way to perform the test, but the following algorithm should help:
Assume you need bits b1, ..., bn to be the same (all 1s or all 0s). First zero out all other bits (see bitwise AND above), then isolate each bit in the test pattern and line them up at the same position (the least-significant bit, for convenience). XOR two of them, then XOR the third with the result, and so on. If the bits were the same in the original number, this yields an even number at every odd step and an odd number at every even step. You will need to test at every step, as testing only the final result can be incorrect for a larger number of linked bits.
testLinks
00110000
x x & testLinks
11110000 00110000
11000010 00000000
01010100 00010000
x x's bits isolated isolated bits shifted
11110000 00100000 00000001
00010000 00000001
11000010 00000000 00000000
00000000 00000000
01010100 00000000 00000000
00010000 00000001
x x's bits XOR'd result
11110000 00000000 true (1st round, even result)
11000010 00000000 true (1st round, even result)
01010100 00000001 false (1st round, odd result)
Note: In C-like languages the XOR operator is ^.
Note: How do you line bits up at the same position? Bit-shifting. Shift-left (<<) shifts all bits to the left, "losing" the most significant bit and "introducing" a 0 at the least significant bit, essentially multiplying the number by 2. Shift-right (>>) operates similarly, shifting bits to the right, essentially integer-dividing the number by 2; for signed types, however, it "introduces" a copy of the most significant bit that was already there (thus keeping negative numbers negative).
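If all you need is the final yes/no answer rather than the step-by-step chain, there is a shorter equivalent (a C sketch of the mask trick, not the isolate-and-shift algorithm above): after masking, the linked bits are all equal exactly when the masked value is either zero or the whole mask.
#include <stdio.h>

int main(void)
{
    unsigned testLinks = 0x30;  /* 00110000, the two linked bits */
    unsigned xs[] = { 0xF0, 0xC2, 0x54 };
    for (int i = 0; i < 3; i++) {
        unsigned m = xs[i] & testLinks;
        printf("%02X -> %s\n", xs[i],
               (m == 0 || m == testLinks) ? "linked" : "not linked");
    }
    return 0;
}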
Bitwise
Ok, so we look at bitwise operations, as that is the most efficient way of doing what you want. For clarity (and reference), bitwise values converted to decimal are
00000001 = 1
00000010 = 2
00000100 = 4
00001000 = 8
00010000 = 16
00100000 = 32
01000000 = 64
10000000 = 128
Now, given a bit pattern of 11XX00XX on the variable x we would perform the following checks:
x AND 128 == true
x AND 64 == true
x AND 8 == false
x AND 4 == false
If all those conditions are true, then the value matches the pattern. Essentially, you are checking the following conditions:
1XXXXXXX AND 10000000 == true
X1XXXXXX AND 01000000 == true
XXXX0XXX AND 00001000 == false
XXXXX0XX AND 00000100 == false
To put that together in programming language parlance (I'll use C#), you'd look for
if (((x & 128) != 0) && ((x & 64) != 0) && ((x & 8) == 0) && ((x & 4) == 0))
{
// We have a match
}
For the more complicated bit pattern of 11AA00XX, you would add the following condition:
NOT (((x AND 32) SHR 5) XOR ((x AND 16) SHR 4)) == true
What this does is first check x AND 32 and shift it down, giving 0 or 1 based on the value of that bit in x. Then it makes the same check on the other bit, x AND 16. The XOR operation checks for a difference between the bits, returning 1 if the bits are DIFFERENT and 0 if the bits are the same. Since we want a true result when they are the same, we NOT the whole clause. (The shifts matter: without them, x AND 32 and x AND 16 leave the bits in different positions, so the XOR would never compare bit against bit.)
Arithmetically
On the arithmetic side, you'd look at using a combination of division and modulus operations to isolate the bit in question. To work with division, you start by finding the highest possible power of two that the number can be divided by. In other words, if you have x=65, the highest power of two is 64.
Once you've done the division, you then use modulus to take the remainder after division. As in the example above, given x=65, x MOD 64 == 1. With that number, you repeat what you did before, finding the highest power of two, and continuing until the modulus returns 0.
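As a C sketch of that idea (my own illustration, not the poster's code): bit k of x can be read with one division and one modulus, because dividing by 2^k discards the k lower bits and % 2 keeps exactly one bit.
/* arithmetic (division/modulus only) test of bit k in x */
unsigned bit_at(unsigned x, unsigned k)
{
    unsigned pow2 = 1;
    for (unsigned i = 0; i < k; i++)
        pow2 *= 2;            /* pow2 = 2^k */
    return (x / pow2) % 2;    /* 1 if bit k of x is set, 0 otherwise */
}
/* e.g. bit_at(65, 6) == 1, since 65 / 64 == 1 and 1 % 2 == 1 */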
TLDR saluce's answer:
Bitwise checks consider individual bits separately, and arithmetic checks consider all the bits together. True enough, they coincide for powers of 2, but not for any arbitrary number.
So if you have both, you will need to implement both sets of checks.
The space of all possible values of a 32-bit int is a bit big to store, so you'll have to check them all each time. Just make sure you're using short-circuits to eliminate duplicative checks like x > 5 || x > 3.
You've defined a decent DSL for specifying the mask. I would write a parser that reads that mask and performs operations specific to each unique character.
AABBB110 = mask
Step 1: Extract all unique characters into an array: [01AB]. You can omit 'X', as no operation is needed for it.
Step 2: iterate through that array, processing your text mask into separate bit masks, one for each unique character, replacing the bit at that character placement with 1 and all others with 0.
Mask_0 = 00000001 = 0x01
Mask_1 = 00000110 = 0x06
Mask_A = 11000000 = 0xC0
Mask_B = 00111000 = 0x38
Step 3: Pass each mask to its appropriate function as defined below.
boolean mask_zero(byte data, byte mask) {
    return (data & mask) == 0;
}

boolean mask_one(byte data, byte mask) {
    return (data & mask) == mask;
}

boolean mask_same(byte data, byte mask) {
    int masked = data & mask;   // & promotes to int, so store it as int
    return (masked == 0) || (masked == mask);
}
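For the AABBB110 example, the per-character checks then combine like this (a C rendering of the same logic; matches() is my own name):
/* data matches AABBB110 iff all four per-character checks pass */
int matches(unsigned data)
{
    unsigned m1 = data & 0x06;  /* '1' positions */
    unsigned ma = data & 0xC0;  /* 'A' positions */
    unsigned mb = data & 0x38;  /* 'B' positions */
    return (data & 0x01) == 0            /* mask_zero */
        && m1 == 0x06                    /* mask_one  */
        && (ma == 0 || ma == 0xC0)       /* mask_same */
        && (mb == 0 || mb == 0x38);      /* mask_same */
}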
What format do you want the result set in? Both the arithmetic set (let's call it A) and the bitwise set (let's call it B) have the advantage of being quickly testable and the advantage of being easily iterable. But each of these kinds of definition can define things the other can't, so the intersection of them needs to be something else entirely.
What I would do is handle testing and iteration separately. An easily-testable definition can be created easily by converting both sets to arbitrary mathematical expressions (the bitwise set can be converted to a few bitwise operations, as other posters have described) by simply using logical "and". This is easily generalized to sets of any kind - simply store references to both of the parent sets, and when asked whether a number is in both sets, just check with both of the parent sets.
However, an arbitrary mathematical expression is not easy to iterate over at all. For iteration, the simplest method is the iterate over set B (which can be done by changing only the bits that aren't constrained by the set), and allow set A to constrain the result. If A uses > or >=, then iterate down (from the maximum number) and halt on false for maximum efficiency; if A uses < or <=, then iterate up (from the minimum number) and halt on false. If A uses ==, then there's only one number to check, and if A uses !=, then either direction is fine (but you can't halt on false).
Note that a bitwise set can behave like an indexable array of numbers. For example, the bitwise set defined by 11XX00XX can be treated as an array with indexes ranging from 0000 to 1111, with the bits of the index fitting into the corresponding X slots. This makes it easy to iterate up or down over the set. Set A can be indexed in a similar way, but since it can easily be an infinite set (unless constrained by your machine's int range, though it doesn't have to be, e.g. BigInteger), it isn't the safest thing to iterate over.
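To make that concrete, here is a C sketch (my own) that treats 11XX00XX as an array indexed by its free bits: the index bits are scattered into the X positions, and the fixed 1 bits are OR-ed on top.
#include <stdio.h>

/* scatter the low bits of idx into the positions where freeMask is 1 */
unsigned expand(unsigned idx, unsigned freeMask)
{
    unsigned out = 0;
    for (unsigned bit = 1; bit <= 0x80; bit <<= 1) {
        if (freeMask & bit) {
            if (idx & 1)
                out |= bit;
            idx >>= 1;
        }
    }
    return out;
}

int main(void)
{
    unsigned fixedOnes = 0xC0;  /* the two 1 bits of 11XX00XX */
    unsigned freeMask  = 0x33;  /* the four X positions       */
    for (unsigned i = 0; i < 16; i++)   /* indexes 0000 .. 1111 */
        printf("%02X\n", fixedOnes | expand(i, freeMask));
    return 0;
}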
Let's say I have guessed a lottery number of:
1689
And the way the lottery works is, the order of the digits doesn't matter as long as the digits match up 1:1 with the digits in the actual winning lottery number.
So, the number 1689 would be a winning lottery number with:
1896, 1698, 9816, etc..
As long as each digit in your guess was present in the target number, then you win the lottery.
Is there a mathematical way I can do this?
I've solved this problem with an O(N^2) loop, checking each digit against each digit of the winning lottery number (separating them with modulus). That works fine, but I want to know if there are any neat math tricks I can use.
For example, at first... I thought I could be tricky and just take the sum and product of each digit in both numbers and if they matched then you won.
^ Do you think that would work?
However, I quickly disproved this when I found that the lottery guesses 222 and 124 have different digits but the same product and sum.
Anyone know any math tricks I can use to quickly determine if the digits in num1 match the digits in num2 regardless of order?
How about going through each number and counting up the number of appearances of each digit (into two different 10-element arrays)? After you've done the totaling, compare the counts of each digit. Since you only look at each digit once, that's O(N).
Code would look something like:
for(int i=0; i<digit_count; i++)
{
    guessCounts[guessDigits[i] - '0']++;
    actualCounts[actualDigits[i] - '0']++;
}

bool winner = true;
for(int i=0; i<10 && winner; i++)
{
    winner &= guessCounts[i] == actualCounts[i];
}
Above code makes the assumption that guessDigits and actualDigits are both char strings; if they held the actual digits then you can just skip the - '0' business.
There are probably optimizations that would make this take less space or terminate sooner, but it's a pretty straightforward example of an O(N) approach.
By the way, as I mentioned in a comment, the multiplication/sum comparison will definitely not work because of zeros. Consider 0123 and 0222. Product is 0, sum is 6 in both cases.
Split into array, sort array, join into string, compare strings.
(Not a math trick, I know)
You can place the digits into an array, sort the array, then compare the arrays element by element. This gives O(N log N) complexity, which is better than O(N^2).
If N can become large, sorting the digits is the answer.
Because digits are 0..9, you can count the number of occurrences of each digit of the lottery answer in an array [0..9].
To compare, you can subtract 1 for each digit you encounter in the guess. When you encounter a digit whose count is already 0, you know the guess is different. When you get through all the digits, the guess is the same (as long as the guess has as many digits as the lottery answer).
For each digit d multiply with the (d+1)-th prime number.
This is more mathematical but less efficient than the sorting or bucket methods. Actually it is the bucket method in disguise.
I'd sort both number's digits and compare them.
If you are only dealing with 4 digits, I don't think you have to put much thought into which algorithm you use; they will all perform roughly the same.
Also, 222 and 124 don't have the same sum.
You have to consider that when n is small, the order of efficiency is irrelevant and the constants start to matter more. How big can your numbers actually get? Can you get up to 10 digits? 20? 100? If your numbers have just a few digits, N^2 really isn't that bad. If you have strings of thousands of digits, then you might actually need something more clever, like sorting or bucketing (i.e. count the 0s, count the 1s, etc.).
I'm stealing the answer from Yuliy, and starblue (upvote them)
Bucketing is the fastest aside from the O(1)
lottonumbers == mynumbers;
Sorting is O(n log n).
Bucket sort is an O(n) algorithm.
So all you need to do is do it twice (once for your numbers, once for the target set), and if the digit counts add up, then they match.
Any kind of sorting is an added overhead that is unnecessary in this case.
int digits[10] = {0};

while(targetnum > 0)
{
    short currDig = targetnum % 10;
    digits[currDig]++;          /* count up the target's digits */
    targetnum = targetnum / 10;
}

while(mynum > 0)
{
    short myDig = mynum % 10;
    digits[myDig]--;            /* count down the guess's digits */
    mynum = mynum / 10;
}

bool match = true;
for(int i = 0; i < 10; i++)
{
    if(digits[i] != 0)
        match = false;          /* fail to match */
}
Not the prettiest code, I'll admit.
Create an array of 10 integers, subscripted [0 .. 9].
Initialize each element to a different prime number.
Set product to 1.
Use each digit of the number to subscript into the array, pull out the prime number, and multiply the product by it.
That gives you a unique representation which is digit-order independent.
Do the same procedure for the other number.
If the unique representations match, then the original numbers match.
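A C sketch of that procedure (my own; note the product can overflow for numbers with many digits, so a wider type or modular arithmetic may be needed):
#include <stdio.h>

/* digit-order-independent signature: product of one prime per digit */
static const unsigned long primes[10] = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 };

unsigned long signature(unsigned long n)
{
    unsigned long product = 1;
    do {
        product *= primes[n % 10];  /* may overflow for many digits */
        n /= 10;
    } while (n > 0);
    return product;
}

int main(void)
{
    printf("%d\n", signature(1689) == signature(9816));  /* prints 1 */
    return 0;
}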
If no repeating digits are allowed (not sure whether that's the case, though), then use a 10-bit binary number. The most significant bit represents the digit 9 and the LSB represents the digit 0. Work through each number in turn and flip the appropriate bit for each digit that you find.
So 1689 would be: 1101000010
and 9816 would also be: 1101000010
then an XOR or a subtraction will leave 0 if you are a winner
This is just a simple form of bucketing
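In C, building that 10-bit flag number might look like this (my sketch; as noted, it only works when no digit repeats):
/* one flag bit per digit: bit d is set iff digit d occurs in n */
unsigned digit_mask(unsigned n)
{
    unsigned mask = 0;
    do {
        mask |= 1u << (n % 10);
        n /= 10;
    } while (n > 0);
    return mask;
}
/* digit_mask(1689) == digit_mask(9816) == 0x342, i.e. 1101000010 in binary */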
Just for fun, and thinking outside of the normal, instead of sorting and other ways, do the deletion thing. If the result string is empty, you have a winner!
Dim ticket As String = "1324"
Dim WinningNumber As String = "4321"

For Each s As String In WinningNumber.ToCharArray
    ticket = Replace(ticket, s, "", 1, 1)
Next

If ticket = "" Then MsgBox("WINNER!")
If ticket.Length = 1 Then MsgBox("Close but no cigar!")
This works with repeating digits, too.
Sort digits before storing a number. After that, your numbers will be equal.
One cute solution is to use a variant of Zobrist hashing. (Yes, I know it's overkill, as well as probabilistic, but hey, it's "clever".)
Initialize a ten-element array a[0..9] to random integers. Then, for each number d[], compute the sum of a[d[i]]. If two numbers contain the same digits, the resulting sums will be equal; with high probability (the failure chance is roughly 1 in the number of possible ints), the converse is true as well.
(If you know that there will be at most 10 digits total, then you can use the fixed numbers 1, 10, 100, ... instead of random numbers for guaranteed success. This is bucket sorting in not-too-much disguise.)
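A minimal C sketch of the randomized version (my own; rand() is a stand-in for a better random source):
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

uint64_t a[10];  /* one random value per digit, as described above */

uint64_t digit_hash(unsigned n)
{
    uint64_t h = 0;
    do {
        h += a[n % 10];  /* sum of a[d[i]] over the digits of n */
        n /= 10;
    } while (n > 0);
    return h;
}

int main(void)
{
    for (int i = 0; i < 10; i++)
        a[i] = ((uint64_t)rand() << 32) | (uint64_t)rand();
    printf("%d\n", digit_hash(1689) == digit_hash(9816));  /* prints 1 */
    return 0;
}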