I am an instructor with very little programming knowledge, but I need some help with some common programming questions from a test that I cannot figure out. The question reads:
Refer to line 004; the expression "value & 1" does which of the following?
001 unsigned int bitCount (int value) {
002     unsigned int count = 0;
003     while (value > 0) {          // until all bits are zero
004         if ((value & 1) == 1) {  // check lower bit
005             count++;
006         }
007         value >> 1;
008     }
009     return count;
010 }
masks all bits except the lsb
masks the msb exclusively
masks all bits except the msb
masks the lsb exclusively
In C, the & operator performs the bitwise AND operation. In other words, it goes bit-by-bit through the two numbers, and sets the corresponding bit of the answer to 1 only if BOTH of the input bits are 1.
In binary, the number 1 is 000....00001, in other words, it only has a single one in the least significant bit position. So, when you & (AND) with it, everything in the answer will be zero except for possibly that least significant bit position, which will be a 1 only if the other input has a 1 in that position.
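That is, value & 1 masks all bits except the lsb. A quick illustration (a sketch, not part of the original question):

#include <stdio.h>

int main(void)
{
    printf("%d\n", 6 & 1);  /* 6 = 110 in binary, lsb is 0, prints 0 */
    printf("%d\n", 7 & 1);  /* 7 = 111 in binary, lsb is 1, prints 1 */
    return 0;
}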
Also, in your program, line 007 probably doesn't do what you want. It looks like you are shifting value one bit to the right, but then you aren't saving the result of the shift. Perhaps you want to write something like value = value >> 1 so that it actually modifies the value variable?
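For reference, here is the function with that one-line fix applied (otherwise unchanged from the question):

unsigned int bitCount (int value) {
    unsigned int count = 0;
    while (value > 0) {          // until all bits are zero
        if ((value & 1) == 1) {  // check lower bit
            count++;
        }
        value = value >> 1;      // shift AND save the result back
    }
    return count;
}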
I'm new to this subject and I'm struggling to comprehend how 0xFFFFFFF and 0x00000001 can have the same sign, yet 0x0000001 and 0x12345678 have different signs. Based on my understanding thus far, numbers whose most significant hex digit is between 0-7 are positive, and 8-F are negative.
For further context, here is the thing I was trying to understand:
Question: Complete the C function that performs the operations and meets the requirements indicated in the comments.
Comments:
/*
* diffSign – return 1 if x and y have different signs
* Examples: diffSign(0xFFFFFFF, 0x00000001) = 0
* diffSign(0x0000001, 0x12345678) = 1
* Legal ops: & | ^ ~ << >>
* 1-byte const (0x00 to 0xFF)
*/
Answer:
int diffSign(int x, int y) {
    return ((x >> 31) & 0x1) ^ ((y >> 31) & 0x1);
}
If possible, I would also greatly appreciate some clarification on how & 0x1 would help me to identify the sign! It seems rather redundant and I'm not too sure about the significance of that in the equation.
If you look closely, it makes perfect sense; you are just not seeing that the most significant hex digit of 0xFFFFFFF is actually 0, because there are only seven F's:
0xFFFFFFF = 0x0FFF FFFF
which for a 32-bit integer represents a positive number.
However, 0x0000001 and 0x12345678 also have the same sign, because what makes the difference is the most significant BIT. You are right that numbers whose most significant hex digit is between 0-7 are positive, and 8-F negative. The second example in the function's comment is therefore wrong.
The code, however, is right: it shifts each argument 31 bits to the right, leaving only its most significant bit (the sign bit), and XORs the two, which yields 1 only when the bits differ. The & 0x1 is not redundant, either: right-shifting a negative signed int usually sign-extends (an arithmetic shift), so x >> 31 would be all ones (-1) for a negative x, and the mask strips that down to the single sign bit.
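To see this concretely, here is a small sketch (assuming 32-bit int and an arithmetic right shift, which is what most compilers do):

#include <stdio.h>

int main(void)
{
    int neg = -5;
    /* With an arithmetic shift, neg >> 31 sign-extends to all ones (-1). */
    printf("%d\n", neg >> 31);          /* prints -1 on most platforms */
    /* The mask keeps only the low bit, i.e. the sign bit by itself. */
    printf("%d\n", (neg >> 31) & 0x1);  /* prints 1 */
    return 0;
}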
I was looking at some code today for integrating a real-time clock with an Arduino, and it had some binary-to-decimal (and vice versa) conversion code that I don't fully understand.
The code in question is below:
byte decToBcd(byte val)
{
    return ((val / 10 * 16) + (val % 10));
}

byte bcdToDec(byte val)
{
    return ((val / 16 * 10) + (val % 16));
}
ex: decToBcd(12);
I really fail to grasp how this works. I am not sure I understand the math, or whether some assumptions are being taken advantage of.
Would someone mind explaining how exactly the math and data types above are supposed to work? If possible, touch on why the value "16" is used in the conversions instead of "8" when we are supposed to be working with a byte value.
For context, the full code can be found here: http://www.codingcolor.com/microcontrollers/an-arduino-lcd-clock-using-a-chronodot-rtc/
The key hint here is BCD - Binary-coded decimal - in the function name. In BCD each decimal digit is represented by four bits (half of a byte). As a result, the maximum (decimal) number you can store using BCD notation is 99: 9 in the upper nibble (half of the byte) and 9 in the lower nibble.
Let's take a look at number 12 as an example. Number 12 looks as follows in the binary notation:
12 = %00001100
However in BCD it looks as follows:
12 = %00010010
because
0001 0010
   1    2
Now if you look at the decToBcd function, val % 10 computes the value of the ones place (i.e. the last decimal digit). Since this goes into the lower half of the byte, we don't need to do anything special with it. val / 10 * 16 first computes the value of the tens place (val / 10); since that value has to go into the upper half of the byte, it needs to be shifted up by four bits, hence the * 16 (multiplying by 16 is the same as shifting left by four bits). Another (in my opinion more readable) way of writing this function would be:
((val / 10) << 4) | (val % 10)
The bcdToDec does the reverse conversion.
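As a quick worked check (a sketch, not from the original post): decToBcd(12) computes 12/10*16 + 12%10 = 16 + 2 = 18 = 0x12, and bcdToDec(0x12) computes 18/16*10 + 18%16 = 10 + 2 = 12.

#include <stdio.h>

typedef unsigned char byte;  /* stand-in for Arduino's byte type */

byte decToBcd(byte val) { return (val / 10 * 16) + (val % 10); }
byte bcdToDec(byte val) { return (val / 16 * 10) + (val % 16); }

int main(void)
{
    printf("decToBcd(12)   = 0x%02X\n", (unsigned)decToBcd(12));  /* 0x12 */
    printf("bcdToDec(0x12) = %d\n", bcdToDec(0x12));              /* 12 */
    return 0;
}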
An RTC usually stores the year in 1 byte as 2 digits only, i.e. 2014 is stored as 14.
Some of them store it as an offset from the year 1970, so 2014 = 44.
Either way, the maximum it can hold is 99.
I'm programming my Arduino microcontroller and I found some code for reading accelerometer sensor data for later use. I can understand all but the following code. I'd like to have some intuition as to what is happening, but after all my searching and reading I can't wrap my head around what is going on and truly understand.
I have taken a class in C++ and we did very little with bitwise operations or bit shifting or whatever you'd like to call it. Let me try to explain what I think I understand and you can correct me where it is needed.
So:
I think we are storing a value in x, pretty sure in fact.
It appears that the data in array "buff", slot number 1, is being cast to the integer datatype.
The value in slot 1 is being bit shifted 8 places to the left. (Does this point to buff slot 0?)
This new value is being compared to the data in buff slot 0, and if either bit is true then the bit in the data stored in x will also be true, so 0 and 1 = 1, 0 and 0 = 0, and 1 and 0 = 1 in the end stored value.
The code does this for all three axes: x, y, and z, but I'm not sure why... I need help. I want full understanding before I progress.
//each axis reading comes in 10 bit resolution, ie 2 bytes.
// Least Significant Byte first!!
//thus we are converting both bytes in to one int
x = (((int)buff[1]) << 8) | buff[0];
y = (((int)buff[3]) << 8) | buff[2];
z = (((int)buff[5]) << 8) | buff[4];
This code is being used to convert the raw accelerometer data (in an array of 6 bytes) into three 10-bit integer values. As the comment says, the data is LSB first. That is:
buff[0] // least significant 8 bits of x data
buff[1] // most significant 2 bits of x data
buff[2] // least significant 8 bits of y data
buff[3] // most significant 2 bits of y data
buff[4] // least significant 8 bits of z data
buff[5] // most significant 2 bits of z data
It's using bitwise operators to put the two parts together into a single variable. The (int) typecasts are unnecessary and (IMHO) confusing. This simplified expression:
x = (buff[1] << 8) | buff[0];
Takes the data in buff[1], and shifts it left 8 bits, and then puts the 8 bits from buff[0] in the space so created. Let's label the 10 bits a through j for example's sake:
buff[0] = cdefghij
buff[1] = 000000ab
Then:
buff[1] << 8 = ab00000000
And:
buff[1] << 8 | buff[0] = abcdefghij
The value in slot 1 is being bit shifted 8 places to the left. (Does this point to buff slot 0?)
Nah. Bitwise operators ain't pointer arithmetic, don't confuse the two. Shifting by N places to the left is (roughly) equivalent to multiplying by 2 to the Nth power, so for example 5 << 3 is 5 * 8 = 40 (except for some corner cases in C, but let's not talk about those yet).
This new value is being compared to the data in buff slot 0, and if either bit is true then the bit in the data stored in x will also be true
No. | is not the logical OR operator (that would be ||) but the bitwise OR one. All the code does is combine the two bytes in buff[0] and buff[1] into a single 2-byte integer, where buff[1] provides the MSB of the number.
The device result is in 6 bytes and the bytes need to be rearranged into 3 integers (having values that can only take up 10 bits at most).
So the first two bytes look like this:
00: xxxx xxxx <- binary value
01: ???? ??xx
The ??? part isn't part of the result, because the x bits comprise the 10-bit value. I guess the hardware is built in such a way that the ??? part is all zero bits.
To get this into a single integer variable, we need all 8 of the low bits plus the high-order 2 bits shifted left by 8 positions so they don't interfere with the low-order 8 bits. The bitwise OR (|, the vertical bar) will join those two parts into a single integer that looks like this:
x: ???? ??xx xxxx xxxx <- binary value of a single 16 bit integer
Actually it doesn't matter how big the 'int' is (in bits) as the remaining bits (beyond that 16) will be zero in this case.
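Putting it together, the whole conversion could be written as a small helper (a sketch using fixed-width types instead of the original int; the buff layout is the one described above):

#include <stdint.h>

/* Combine a little-endian byte pair into one value:
   lo holds bits 7..0, hi holds bits 9..8 of the 10-bit reading. */
static uint16_t combine(uint8_t lo, uint8_t hi)
{
    return ((uint16_t)hi << 8) | lo;
}

/* Usage, given uint8_t buff[6]:
   x = combine(buff[0], buff[1]);
   y = combine(buff[2], buff[3]);
   z = combine(buff[4], buff[5]);  */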
To expand on and clarify the reply by Carl Norum:
The bit shift is performed on the source operand after it has been promoted to int (C's integer promotions), so the intermediate value is at least 16 bits wide and all the data survives the 8-bit shift before the OR operation is executed and the result is saved into x. The explicit (int) cast just makes that promotion visible; where the promotion happens anyway it is redundant, which is why the answer above calls it unnecessary.
What the code is not telling you is whether this should be an unsigned int or whether there is a sign in the bit data. I'd expect negative values to be possible with an accelerometer.
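If the data is signed, a 10-bit two's-complement reading carries its sign in bit 9 and has to be extended into the full 16-bit integer. A minimal sketch, assuming the device really does produce signed 10-bit values:

#include <stdint.h>

/* Sign-extend a 10-bit two's-complement reading to 16 bits. */
int16_t signExtend10(uint16_t raw)
{
    if (raw & 0x0200)     /* bit 9 is the 10-bit sign bit */
        raw |= 0xFC00;    /* copy it into bits 15..10 */
    return (int16_t)raw;  /* reinterpret as a signed value */
}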
The sequence is:
00111011
How do I calculate the parity bit for the above sequence? This question is from Database Systems: The Complete Book by Jeffrey Ullman (Exercise 13.4.1 a).
I am not sure what the answer to this question should be.
Is it as simple as:
i) Even parity: the number of 1s is 5 (odd), so just append a 1 and the answer is 001110111.
ii) Odd parity: likewise, just append a 0: 001110110.
Or am I on a totally wrong path here? I looked it up on the net but could not find anything concrete. Also, the text for the above question in the textbook is not clear.
Yes, your answers are correct. For the given sequence,
00111011
Odd parity is 001110110: the parity bit is 0, so the total number of 1s in the code stays 5, which is an odd number.
Even parity is 001110111: the parity bit is 1, so the total number of 1s in the code becomes 6, which is an even number.
You can also use XOR, folding the bits of 00111011 from left to right (each line XORs the running result with the next bit):
0 XOR 0 = 0
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 1 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
0 XOR 1 = 1
The final result is the parity bit to append: 1 for even parity, 0 for odd parity. Make this bit the LSB of the original number (00111011), which thereby becomes 001110111 for even parity.
unsigned char CalEvenParity(unsigned char data)
{
    unsigned char parity = 0;
    while (data) {
        parity ^= (data & 1);  /* fold the lowest bit into the running XOR */
        data >>= 1;            /* move on to the next bit */
    }
    return parity;             /* 1 if the number of 1-bits is odd */
}
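As a quick check (a hypothetical driver, not part of the original answer):

#include <stdio.h>

int main(void)
{
    /* 0x3B = 00111011, the sequence from the question */
    printf("%d\n", CalEvenParity(0x3B));  /* prints 1, the even-parity bit */
    return 0;
}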
Alternate implementation of parity:
This involves XORing the bits of the number one after another, keeping a running result.
x >> 1 shifts the value right by 1 bit, and & 1 gets us the value of the last (lowest) bit of the number.
The parity of the entire sequence can be visualized as below, due to the associativity of XOR:
1 ^ 0 ^ 1 is the same as (1 ^ 0) ^ 1, and we extend the same idea across all the bits.
def parity_val(x):
    parity = 0
    while x:
        parity ^= x & 1  # fold the lowest bit into the running XOR
        x >>= 1          # move on to the next bit
    return parity
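With the accumulation fixed, parity_val(0b00111011) returns 1, matching the XOR walk-through above.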
It's been a while since my assembly class in college (20 years to be exact).
When someone gives you a number, say 19444, and says that X is bits 15 through 8 and Y is bits 7 through 0... how do I calculate the values of X and Y?
I promise this is not homework, just a software guy unwisely trying to do some firmware programming.
First of all, convert the input number to hexadecimal:
19444 => 0x4BF4
Hex is convenient because every 4 binary bits are one hex digit. Hence, every 2 hex digits are 8 bits, or a byte. Now, assuming the conventional bit numbering where bit 0 is the least significant, bits 7 down to 0 are the low byte, and bits 15 down to 8 are the high byte:
[7:0] => 0xF4
[15:8] => 0x4B
Using your preferred language, you can get the least significant byte by using a bitwise AND:
Y = 19444 & 0xff
or, the more mathematical:
Y = 19444 % 256
Now, for the most significant byte you can use a bit shift (if the number is larger than two bytes, apply the first step again):
X = 19444 >> 8
(The following assumes C notation). In general, to access the value in bits N through M, where N is the smaller value and the bits are numbered from 0, use:
(value >> N) & ((1U << (M - N + 1)) - 1)
So for bits 0..7, use:
(value >> 0) & ((1U << 8) - 1)
and for bits 8..15, use:
(value >> 8) & ((1U << 8) - 1)
Note that for the case where "N through M" is the entire width of the type, you can't use the shift as written: shifting by the full width of the type is undefined behavior in C.
Also, mind the byte order (whether the most significant byte comes first).
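Wrapped up as a small helper (a sketch following the formula above; it assumes M - N + 1 is smaller than the width of unsigned int):

#include <stdio.h>

/* Extract bits n..m (inclusive), where bit 0 is the least significant. */
unsigned getBits(unsigned value, unsigned m, unsigned n)
{
    return (value >> n) & ((1U << (m - n + 1)) - 1U);
}

int main(void)
{
    printf("0x%02X\n", getBits(19444, 15, 8));  /* prints 0x4B */
    printf("0x%02X\n", getBits(19444, 7, 0));   /* prints 0xF4 */
    return 0;
}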
When given bit positions (like "15 through 8"), by convention bit 0 is the least significant bit of the binary number. If you're dealing with a 16-bit number, then bit 15 is the most significant bit.
One hexadecimal digit corresponds to 4 binary digits. So hex FF is 11111111 in binary. Bitwise AND is often used to "mask out" a certain collection of bits.
Nearly all processors provide some form of bitwise shifting. For example, shifting 1010001 right by 4 bits gives you 101.
Combining all this, in C you would typically do something like this:
unsigned short int num;
unsigned char x, y;
num = 19444;
y = num & 0xff; //use bitwise AND to get 8 least-sig bits
x = num >> 8; //right-shift by 8 bits to get 8 most-sig bits
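For num = 19444 this gives x = 0x4B (75) and y = 0xF4 (244), matching the hex breakdown above.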