Two's complement on hex

Given:
int number = 0xFFFFFF87;
number = ~number + 1;
printf ("%x", number);
Why does 'number' become '79' instead of '87'? How can I make it '87'?

It is 0x79 because ~0xFFFFFF87 = 0x00000078 and when 1 is added you get 0x00000079.
To get 0x87, you should use:
int number = 0xFFFFFF87 & 0xFF;
which selects only the least significant byte and masks the other bytes to zero.

~ inverts every bit, not just the hex digits that are 0xF.
To make the result 0x87, just reverse the operation, i.e. start from:
int number = ~(0x87 - 1); // which is 0xFFFFFF79
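Putting the two answers together, here is a minimal sketch (assuming a 32-bit int, as in the question) showing the three expressions side by side:
#include <stdio.h>

int main(void) {
    int number = 0xFFFFFF87;
    printf("%x\n", ~number + 1);    // 79: two's complement negation flips all 32 bits, then adds 1
    printf("%x\n", number & 0xFF);  // 87: keep only the least significant byte
    printf("%x\n", ~(0x87 - 1));    // ffffff79: reversing the operation
    return 0;
}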

Related

Parsing an 8 bit char array to integer

I am new to Arduino and all I want to do is parse a String of binary numbers to an exact integer representation.
char* byte1 = "11111111";
int binary1 = atoi(byte1);
Serial.print(binary1);
However this prints out: -19961
Can anyone explain why? I am coming from a Java and JavaScript perspective.
atoi converts a decimal (base 10) string to int. If you want to convert a binary string to int, you can use strtol:
char *byte1 = "11111111";
int val1 = strtol(byte1, 0, 2);
std::cout << val1 << std::endl;
strtol can convert any base -- the 3rd argument is the base to use.
You get -19961 because on Arduino int is 16 bits wide and cannot hold any number bigger than 32767. To hold the integer representation of 11111111 you have to use long (which on Arduino is 32 bits) and strtol.
long val = strtol(byte1, NULL, 10);
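Combining the two answers, here is a minimal sketch (not the original poster's code) that parses the string in base 2; the result, 255, fits even Arduino's 16-bit int:
#include <cstdlib>
#include <iostream>

int main() {
    const char *byte1 = "11111111";
    long val1 = std::strtol(byte1, nullptr, 2);  // base 2: treat the characters as binary digits
    std::cout << val1 << std::endl;              // prints 255
    return 0;
}
On the Arduino itself the same strtol call works; just print the result with Serial.println instead of std::cout.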

Why does this binary math fail when adding 00000001, but work correctly otherwise?

I've tried everything I can think of and cannot seem to get the below binary math logic to work. I'm not sure why this is failing, but it probably indicates my misunderstanding of binary math or C. The ultimate intent is to store large integers (unsigned long) directly to an 8-bit FRAM memory module as 4-byte words so that a micro-controller (Arduino) can recover the values after a power failure. Thus the unsigned long has to be assembled from its four byte parts as it is pulled from memory, and the arithmetic of assembling these bytes is not working correctly.
In the below snippet of code, the long value is defined as four bytes A, B, C, and D (simulating being pulled from four 8-bit memory blocks), which get translated to decimal notation to be used as an unsigned long in the arrangement DDDDDDDDCCCCCCCCBBBBBBBBAAAAAAAA. If A < 256 and B, C, D all == 0, the math works correctly. The math also works correctly for any values of B, C, and D if A == 0. But if B, C, or D > 0 and A == 1, the 1 value of A is not added during the arithmetic. A value of 2 works, but not a value of 1. Is there any reason for this? Or am I doing binary math wrong? Is this a known issue that needs a workaround?
// ---- FUNCTIONS
unsigned long fourByte_word_toDecimal(uint8_t byte0 = B00000000, uint8_t byte1 = B00000000, uint8_t byte2 = B00000000, uint8_t byte3 = B00000000){
    return (byte0 + (byte1 * 256) + (byte2 * pow(256, 2)) + (byte3 * pow(256, 3)));
}

// ---- MAIN
void setup() {
    Serial.begin(9600);
    uint8_t addressAval = B00000001;
    uint8_t addressBval = B00000001;
    uint8_t addressCval = B00000001;
    uint8_t addressDval = B00000001;
    uint8_t addressValArray[4];
    addressValArray[0] = addressAval;
    addressValArray[1] = addressBval;
    addressValArray[2] = addressCval;
    addressValArray[3] = addressDval;
    unsigned long decimalVal = fourByte_word_toDecimal(addressValArray[0], addressValArray[1], addressValArray[2], addressValArray[3]);
    // Print out resulting decimal value
    Serial.println(decimalVal);
}
In the code above, the binary value should result as 00000001000000010000000100000001, AKA a decimal value of 16843009. But the code evaluates the decimal value to 16843008. Changing the value of addressAval to 00000000 also evaluates (correctly) to 16843008, and changing addressAval to 00000010 also correctly evaluates to 16843010.
I'm stumped.
The problem is that you're using pow(). This is causing everything to be calculated as a binary32, which doesn't have enough precision to hold 16843009.
>>> numpy.float32(16843009)
16843008.0
The fix is to use integers, specifically 65536 and 16777216UL.
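A sketch of that fix (my reading of the suggestion, not the poster's exact code): the powers of 256 become unsigned long constants, so the whole sum is computed with exact integer arithmetic even on a 16-bit-int board:
unsigned long fourByte_word_toDecimal(uint8_t byte0, uint8_t byte1,
                                      uint8_t byte2, uint8_t byte3) {
    return byte0
         + byte1 * 256UL        // 256^1
         + byte2 * 65536UL      // 256^2
         + byte3 * 16777216UL;  // 256^3
}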
Do not use pow() for this.
The usual way to do this is with the shift operator, casting each byte to uint32_t first so the shifts are not evaluated in Arduino's 16-bit int:
uint32_t result = (uint32_t)byte3 << 24 | (uint32_t)byte2 << 16 | (uint32_t)byte1 << 8 | (uint32_t)byte0;

Large Decimal Value to Binary in Arduino

I am trying to convert the following decimal value to binary in Arduino:
51043465443420856213
The result should be this.
10 1100 0100 0101 1110 1101 0001 0110 0001 1000 0100 0000 0110 1101 1111 1001 0101
Below is a code template to get you going as I did not want to give all the fun away.
void print_bin(char *s) {
    // Keep track of carry from previous digit - what should this be initialized to?
    int carry = ...;
    // Divide the "number string" by 2, one digit/character at a time
    // for each array element in `s` starting with the first character
    for (... ; ... ; ...) {
        // Extract the string element (character), convert to a decimal digit & add 10x carry
        int digit = ...
        // Divide digit by 2 and then save as a character
        array element = ...
        // Mod the digit by 2 as the new carry
        carry = ...
    }
    // If first digit is a `0`, then move `s` to the next position.
    if (*s == '0') {
        ...
    }
    // If `s` is not at the end, recursively call this routine on s
    if (...) {
        print_bin(s);
    }
    // Since the more significant digits have been printed, now print
    // this digit (which is "mod 2" of the original "number string") as a character
    fputc('0' + carry, stdout);
}
int main(void) {
    char s[] = "51043465443420856213";
    print_bin(s);
}
Result
101100010001011110110100010110000110000100000001101101111110010101
Apparently your value cannot be stored as a C unsigned long long because it is bigger than 64 bits. Arduino has Python. Here is a Python function that does what you want. The example uses 80 bits. You can add spaces if you really want them.
def int2bin(n, count=24):
    """returns the binary of integer n, using count number of digits"""
    return "".join([str((n >> y) & 1) for y in range(count - 1, -1, -1)])

print(int2bin(51043465443420856213, 80))
Output:
00000000000000101100010001011110110100010110000110000100000001101101111110010101

ADC transfer function

I took over this project from someone who left a long time ago.
I am now looking at the ADC modules, but I don't understand what the code is doing.
MCU: LM3S9B96
ADC: AD7609 ( 18bit/8 channel)
Instrumentation Amp : INA114
Process: Reading volts(0 ~ +10v) --> Amplifier(INA114) --> AD7609.
Here is the code for that.
After a complete conversion of the 8 channels, the results are stored in data[9].
Convert the data to microvolts??
//convert to microvolts and store the readings
// unsigned long temp[], data[]
temp[0] = ((data[0]<<2)& 0x3FFFC) + ((data[1]>>14)& 0x0003);
temp[1] = ((data[1]<<4)& 0x3FFF0) + ((data[2]>>12)& 0x000F);
temp[2] = ((data[2]<<6)& 0x3FFC0) + ((data[3]>>10)& 0x003F);
temp[3] = ((data[3]<<8)& 0x3FF00) + ((data[4]>>8)& 0x00FF);
temp[4] = ((data[4]<<10)& 0x3FC00) + ((data[5]>>6)& 0x03FF);
temp[5] = ((data[5]<<12) & 0x3F000) + ((data[6]>>4)& 0x0FFF);
temp[6] = ((data[6]<<14)& 0x3FFF0) + ((data[7]>>2)& 0x3FFF);
temp[7] = ((data[7]<<16)& 0x3FFFC) + (data[8]& 0xFFFF);
I don't understand what this code is doing...? I know it shifts, but how does that become microvolts?
transfer function
//store the final value in the raw data array adstor[]
adstor[i] = (signed long)(((temp[i]*2000)/131072)*10000);
131072 = 2^(18-1), but I don't know where the other values come from.
The AD7609 datasheet says the FSR for the AD7609 is 40 V for the ±10 V range and 20 V for the ±5 V range, so I guessed he chose the 20 V described above and it somehow turned into 2000???
Does anyone have any clues??
Thanks
-------------------Updated question from here ---------------------
I don't get how the 18-bit value concatenated from data[0] and data[1] turns into microvolts after the ADC transfer function.
data[9]:
+---------------+---------+---------+---------+---------+
| analog volts  | 1.902 V | 1.921 V | 1.887 V | 1.934 V |
+---------------+---------+---------+---------+---------+
| digital value | 12,464  | 12,589  | 12,366  | 12,674  |
+---------------+---------+---------+---------+---------+
I'll just make an example from data[3:0]:
1 resolution = 20 V / (2^17 - 1) = 152.59 uV/bit, and 1.902 V / 152.59 uV = 12,464
Now go through the concatenation:
temp[0] = ((data[0]<<2)& 0x3FFFC) + ((data[1]>>14)& 0x0003) = C2C0
temp[1] = ((data[1]<<4)& 0x3FFF0) + ((data[2]>>12)& 0x000F) = 312D3
temp[2] = ((data[2]<<6)& 0x3FFC0) + ((data[3]>>10)& 0x003F) = 138C
Then put those into the transfer function to get microvolts:
adstor[i] = (signed long)(((temp[i]*2000)/131072)*10000);
adstor[0] = 7,607,421 with temp[0] (not 1.902e6)
adstor[1] = 30,735,321 with temp[1] (not 1.921e6)
adstor[2] = 763,549 with temp[2]
As you can see, they are quite different from the analog values in the table.
I don't understand why the data needs to be bit-shifted with << and >> and added up across two data[] elements??
Thanks,
Please note that the maximum 18-bit value is 2^18-1 = $3FFFF = 262143
For [2], it appears that s/he splits the concatenated 18-bit word values into longs for easier manipulation by step [3].
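A sketch of what that splitting looks like for the first channel, using the digital values from the table above (not code from the original project):
// data[] holds raw 16-bit words as read from the AD7609
unsigned long data0 = 0x30B0;  // 12,464 (the 1.902 V reading)
unsigned long data1 = 0x312D;  // 12,589 (the 1.921 V reading)
// temp[0]: the upper 16 bits come from data[0], the lowest 2 bits from data[1]
unsigned long temp0 = ((data0 << 2) & 0x3FFFC) + ((data1 >> 14) & 0x0003);
// temp0 == 0xC2C0, matching the value computed in the updated question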
[3]: Regarding adstor[i] = (signed long)(((temp[i]*2000)/131072)*10000);
To convert from a raw A/D reading to volts s/he multiplies by the expected volts and divides by the maximum possible A/D value (in this case, $3FFFF), so there seems to be an error in the code, as s/he divides by 131072 = 2^17 and not by 2^18-1. Another possibility is that s/he uses half the range of the A/D and compensates for that this way.
If you want 20 V to become microvolts you need to multiply it by 1e6. But to avoid overflowing the long, s/he splits the multiplication into two parts (*2000 and *10000). Because of the intermediate division, the number gets small enough to be multiplied by 10000 at the end without overflowing, at the expense of possibly losing some least significant bit(s) of the result.
P.S. (I use $ as equivalent to 0x due to many years of habit in certain assembly languages)
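To make the split multiplication concrete, here is a small sketch with assumed values (not project code) of why the code multiplies by 2000 and then 10000 instead of by 20000000 in one go:
// An 18-bit reading is at most 0x3FFFF = 262,143.
//   262,143 * 2000     =       524,286,000  -> still fits in a 32-bit unsigned long
//   262,143 * 20000000 = 5,242,860,000,000  -> would overflow 32 bits
unsigned long raw = 0xC2C0;  // example reading from the table above
unsigned long microvolts = ((raw * 2000UL) / 131072UL) * 10000UL;  // about 7.6e6 here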

Is overflow defined for VHDL numeric_std signed/unsigned

If I have an unsigned(MAX downto 0) containing the value 2**MAX - 1, do the VHDL (87|93|200X) standards define what happens when I increment it by one? (Or, similarly, when I decrement it by one from zero?)
Short answer:
There is no overflow handling, the overflow carry is simply lost. Thus the result is simply the integer result of your operation modulo 2^MAX.
Longer answer:
The numeric_std package is a standard package, but it is not part of the core VHDL standards (87, 93, 200X).
For reference : numeric_std.vhd
The + operator ultimately calls the ADD_UNSIGNED (L, R : unsigned; C : std_logic) function (with C = '0'). Note that any integer/natural operand is first converted to an unsigned.
The function's definition is:
function ADD_UNSIGNED (L, R : unsigned; C : std_logic) return unsigned is
    constant L_left : integer := L'length-1;
    alias XL : unsigned(L_left downto 0) is L;
    alias XR : unsigned(L_left downto 0) is R;
    variable RESULT : unsigned(L_left downto 0);
    variable CBIT : std_logic := C;
begin
    for i in 0 to L_left loop
        RESULT(i) := CBIT xor XL(i) xor XR(i);
        CBIT := (CBIT and XL(i)) or (CBIT and XR(i)) or (XL(i) and XR(i));
    end loop;
    return RESULT;
end ADD_UNSIGNED;
As you can see, an "overflow" occurs if CBIT = '1' (the carry bit) after the iteration for i = L_left. The result bit RESULT(i) is calculated normally and the final carry bit value is ignored.
I've had the problem of wanting an unsigned to overflow/underflow as in C or in Verilog, and here is what I came up with (result and delta are unsigned):
result <= unsigned(std_logic_vector(resize(('1' & result) - delta, result'length))); -- proper underflow
result <= unsigned(std_logic_vector(resize(('0' & result) + delta, result'length))); -- proper overflow
For overflow, '0' & result makes an unsigned which is one bit larger, so it can correctly accommodate the value of the addition. The MSB is then removed by the resize call, which yields the correct wrapped (overflowed) value. Same for underflow.
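For comparison, this is the behaviour being emulated: in C, converting a value back to an N-bit unsigned type reduces it modulo 2^N (a rough analogy with 8-bit values, not VHDL):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t result = 250, delta = 10;
    result = (uint8_t)(result + delta);  // 260 reduced modulo 256 -> 4   ("proper overflow")
    printf("%d\n", result);
    result = (uint8_t)(result - delta);  // -6 reduced modulo 256 -> 250  ("proper underflow")
    printf("%d\n", result);
    return 0;
}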
For a value of MAX equal to 7 adding 1 to 2**7 - 1 (127) will result in the value 2**7 (128).
The maximum unsigned value is determined by the length of an unsigned array type:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity foo is
end entity;

architecture faa of foo is
    constant MAX: natural := 7;
    signal somename: unsigned (MAX downto 0) := (others => '1');
begin
UNLABELED:
    process
    begin
        report "somename'length = " & integer'image(somename'length);
        report "somename maximum value = " & integer'image(to_integer(somename));
        wait;
    end process;
end architecture;
The aggregate (others => '1') assigns a '1' to each element of somename, which is an unsigned array type, and represents the maximum binary value possible.
This gives:
foo.vhdl:15:9:#0ms:(report note): somename'length = 8
foo.vhdl:16:9:#0ms:(report note): somename maximum value = 255
The length is 8, and the numerical range representable by the unsigned array type is 0 to 2**8 - 1 (255); the maximum possible value is greater than 2**7 (128), so there is no overflow.
This was noticed in a newer question, VHDL modulo 2^32 addition. In the context of your question, the accepted answer assumes you meant the length instead of the leftmost value.
The decrement from zero case does result in a value of 2**8 - 1 (255) (MAX = 7). An underflow or an overflow depending on your math religion.
Hat tip to Jonathan Drolet for pointing this out in the linked newer question.
