Parsing an 8-bit char array to an integer - Arduino

I am new to Arduino and all I want to do is parse a string of binary digits into the exact integer it represents.
char* byte1 = "11111111";
int binary1 = atoi(byte1);
Serial.print(binary1);
However, this prints: -19961
Can anyone explain why? I am coming from a Java and JavaScript perspective.

atoi() converts a decimal (base-10) string to an int. If you want to convert a binary string to an int, you can use strtol():
char *byte1 = "11111111";
int val1 = strtol(byte1, NULL, 2);
Serial.println(val1);
strtol() can convert any base from 2 to 36 -- the third argument is the base to use.

You get -19961 because atoi() reads "11111111" as the decimal number 11,111,111, and on Arduino (AVR boards) int is 16 bits wide and cannot hold any number bigger than 32767. To hold the decimal value 11111111 you have to use long (which on Arduino is 32 bits) and strtol():
long val = strtol(byte1, NULL, 10);
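Putting the two answers together, a minimal sketch of my own (assuming an AVR-based board where int is 16 bits and long is 32 bits):
void setup() {
  Serial.begin(9600);
  const char *byte1 = "11111111";
  // Base 2: binary 11111111 is 255, which fits comfortably in an int.
  int binary1 = strtol(byte1, NULL, 2);
  Serial.println(binary1);   // prints 255
  // Base 10: 11,111,111 overflows a 16-bit int, so store it in a long.
  long decimal1 = strtol(byte1, NULL, 10);
  Serial.println(decimal1);  // prints 11111111
}
void loop() {}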

Related

Why does this binary math fail when adding 00000001, but work correctly otherwise?

I've tried everything I can think of and cannot seem to get the below binary math logic to work. Not sure why this is failing but probably indicates my misunderstanding of binary math or C. The ultimate intent is to store large integers (unsigned long) directly to an 8-bit FRAM memory module as 4-byte words so that a micro-controller (Arduino) can recover the values after a power failure. Thus the unsigned long has to be assembled from its four byte words parts as it's pulled from memory, and the arithmetic of assembling these word bytes is not working correctly.
In the below snippet of code, the long value is defined as four bytes A, B, C, and D (simulating being pulled from four 8-bit memory blocks), which get translated to decimal notation to be used as an unsigned long in the arrangement DDDDDDDDCCCCCCCCBBBBBBBBAAAAAAAA. If A < 256 and B, C, D all == 0, the math works correctly. The math also works correctly for any values of B, C, and D if A == 0. But if B, C, or D > 0 and A == 1, the 1 value of A is not added during the arithmetic. A value of 2 works, but not a value of 1. Is there any reason for this? Or am I doing binary math wrong? Is this a known issue that needs a workaround?
// ---- FUNCTIONS
unsigned long fourByte_word_toDecimal(uint8_t byte0 = B00000000, uint8_t byte1 = B00000000, uint8_t byte2 = B00000000, uint8_t byte3 = B00000000){
  return (byte0 + (byte1 * 256) + (byte2 * pow(256, 2)) + (byte3 * pow(256, 3)));
}
// ---- MAIN
void setup() {
  Serial.begin(9600);
  uint8_t addressAval = B00000001;
  uint8_t addressBval = B00000001;
  uint8_t addressCval = B00000001;
  uint8_t addressDval = B00000001;
  uint8_t addressValArray[4];
  addressValArray[0] = addressAval;
  addressValArray[1] = addressBval;
  addressValArray[2] = addressCval;
  addressValArray[3] = addressDval;
  unsigned long decimalVal = fourByte_word_toDecimal(addressValArray[0], addressValArray[1], addressValArray[2], addressValArray[3]);
  // Print out resulting decimal value
  Serial.println(decimalVal);
}
In the code above, the binary value should result as 00000001000000010000000100000001, AKA a decimal value of 16843009. But the code evaluates the decimal value to 16843008. Changing the value of addressAval to 00000000 also evaluates (correctly) to 16843008, and changing addressAval to 00000010 also correctly evaluates to 16843010.
I'm stumped.
The problem is that you're using pow(). pow() works in floating point, and on AVR both float and double are 32-bit binary32, which doesn't have enough precision to hold 16843009 (it is just above 2^24, the largest range in which every integer is exactly representable):
>>> numpy.float32(16843009)
16843008.0
The fix is to use integer constants, specifically 256UL, 65536UL and 16777216UL, so that every multiplication is carried out in 32-bit integer arithmetic (note that byte1 * 256 can already overflow a 16-bit int).
Do not use pow() for this.
The usual way to do this is with the shift operator:
uint32_t result = ((uint32_t)byte3 << 24) | ((uint32_t)byte2 << 16) | ((uint32_t)byte1 << 8) | byte0;
Each byte has to be cast to uint32_t before it is shifted: on AVR the operands are otherwise promoted only to 16-bit int, and a shift by 16 or 24 bits would be lost.
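Applied to the function from the question, a corrected version might look like this (my sketch, not the original poster's code; the default arguments are dropped for brevity):
unsigned long fourByte_word_toDecimal(uint8_t byte0, uint8_t byte1, uint8_t byte2, uint8_t byte3){
  // Widen each byte to 32 bits *before* shifting, so that no shift is
  // performed in 16-bit int arithmetic.
  return ((uint32_t)byte3 << 24) | ((uint32_t)byte2 << 16) |
         ((uint32_t)byte1 << 8)  |  (uint32_t)byte0;
}
With the values from setup(), fourByte_word_toDecimal(1, 1, 1, 1) now evaluates to 16843009 as expected.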

Two's complement on hex

Given:
int number = 0xFFFFFF87;
number = ~number + 1;
printf ("%x", number);
Why does 'number' become '79' instead of '87'? How can I make it '87'?
It is 0x79 because ~0xFFFFFF87 = 0x00000078 and when 1 is added you get 0x00000079.
To get 0x87, you should use:
int number = 0xFFFFFF87 & 0xFF;
which will select only the least significant byte and mask the other bytes with zero.
~ flips every bit, not only the hex digits that are 0xF.
To end up with 0x87 after the negation, just reverse the operation and start from the value whose two's complement is 0x87, i.e.:
int number = ~(0x87 - 1); // which is 0xFFFFFF79
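A short self-contained demonstration of both suggestions (my own sketch, assuming 32-bit int):
#include <stdio.h>

int main(void) {
    int number = 0xFFFFFF87;
    number = ~number + 1;               // two's complement negation
    printf("%x\n", number);             // 79
    printf("%x\n", 0xFFFFFF87 & 0xFF);  // 87: mask off everything but the low byte
    int start = ~(0x87 - 1);            // 0xFFFFFF79
    printf("%x\n", ~start + 1);         // 87: negating that value yields 0x87
    return 0;
}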

About pointers and ASCII code

I'm learning more about the C language and I have a question about some code that I have seen.
#include <stdio.h>

int main(void){
    int i = (65*256+66)*256+67;
    int* pi;
    char* pc;
    pi = &i;
    pc = (char*)pi;
    printf("%c %c %c \n", *pc, *(pc+1), *(pc+2));
    return 0;
}
Output is: C B A
I know that the ASCII code of A is 65, B is 66, and C is 67, but the variable i is none of them.
If I set i = 65, the output is just A and doesn't show B or C -- why?
I would like to know why this code has that output. Thanks for any help.
The line
int i = (65*256+66)*256+67;
turns i into the following
00000000 01000001 01000010 01000011
int = 4 bytes or 4 groups of 8 bits
char = 1 byte or 1 group of 8 bits.
What happens is that a char pointer is used to point to one byte of the original int at a time.
At first the pointer points to the byte at the lowest address; on a little-endian machine such as the x86, that byte holds the 8 least significant bits (the group on the right). So the letter C is printed. Then the pointer itself is incremented by 1, which makes it point to the next byte in memory, which happens to hold B. And once more for the A.
(This also answers the second question: with i = 65, the three upper bytes are all zero, so *(pc+1) and *(pc+2) are NUL characters, which print nothing visible.)
Multiplying by 256 is a left shift by 8 bits (1 byte), so the line
int i = (65*256+66)*256+67;
actually puts A, B, and C in three adjacent bytes in memory.
Then the pointer pi is made to point to the address of the integer i, and the same address is cast down to the char pointer pc, so pc actually holds the address of a byte that contains 'C' (the least significant byte comes first on a little-endian machine). Adding 1 and 2 to that address means the adjacent 'B' and 'A' get pointed to and printed out.
EDIT: just to clarify a bit more, int is 32 bits long here but char is 8 bits; that's why you need a char pointer to get an address that refers to just 8 bits.
Characters are stored as bytes, as you probably know. The initialization of the variable i has the following meaning:
65*256 // store 65 ('A') and left shift it by 8 bits (= '*256')
(65*256+66)*256 // add 66 ('B') and shift the whole thing again
(65*256+66)*256+67 // add 67 ('C')
pi is initialized as an int pointer to i.
pc is initialized as a char pointer to the same address.
So pc then holds the address of the first byte of i; on a little-endian machine that is the least significant byte, which holds 'C'.
By adding 1 and 2 to the address in pc, you get the second and third bytes (containing 'B' and 'A'), as follows:
printf("%c %c %c \n", *pc, *(pc+1), *(pc+2));
Working on the bits here ;D
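The byte order is the machine-dependent part of all this. A tiny sketch of my own (assuming a 32-bit int) that prints each byte of i with its offset makes it visible:
#include <stdio.h>

int main(void) {
    int i = (65*256+66)*256+67;   /* 0x00414243 */
    unsigned char *pc = (unsigned char *)&i;
    /* On a little-endian machine this prints 43 42 41 00,
       i.e. 'C', 'B', 'A', then the unused zero byte. */
    for (size_t k = 0; k < sizeof i; k++)
        printf("byte %zu: %02x\n", k, pc[k]);
    return 0;
}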

Qt: Float to QString

I am new to Qt. I am facing a strange problem with floating-point values. The following code displays a message box WITH decimal points, i.e., 10.53:
QMessageBox Msgbox;
float num = 10.53;
QString str = QString::number(num, 'g', 4);
Msgbox.setText(str);
Msgbox.exec();
Whereas the following code displays a message box WITHOUT decimal points, i.e., 1:
QMessageBox Msgbox;
float num = 120/77;
QString str = QString::number(num, 'g', 4);
Msgbox.setText(str);
Msgbox.exec();
Why are the digits after the decimal point ignored in the second code snippet? I changed the data type to double and qreal; nothing worked.
Because 120/77 divides two integers (resulting in an integer, 1) and only then converts the result to float.
You need to make the operands floating point before dividing:
float a = 120, b = 77;
float num = a/b;
Casting the first operand to float also solves the issue, i.e., float num = (float)120/77;
float num = 120.0/77.0;
would also work; this is standard C behavior.
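A minimal console sketch of my own contrasting the two divisions (assuming a QtCore project):
#include <QString>
#include <QDebug>

int main() {
    float truncated = 120 / 77;      // integer division happens first: 1
    float exact = 120.0f / 77.0f;    // floating-point division: ~1.5584
    qDebug() << QString::number(truncated, 'g', 4);  // "1"
    qDebug() << QString::number(exact, 'g', 4);      // "1.558"
    return 0;
}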

What does "{0,2:X2}" mean in the statement String.Format("{0,2:X2}", b)?

Code:
SHA1 sha = new SHA1CryptoServiceProvider();
string hashedValue = string.Empty;
//hash the data
byte[] hashedData = sha.ComputeHash(Encoding.Unicode.GetBytes(str));
//loop through each byte in the byte array
foreach (byte b in hashedData)
{
    //convert each byte and append
    hashedValue += String.Format("{0,2:X2}", b);
}
I searched for the arguments passed to String.Format() but wasn't able to understand them exactly.
Thanks in advance!
Formatting the byte in hexadecimal format...
X = uppercase hexadecimal format
2 = pad to at least 2 digits with leading zeros
It's basically just formatting the string in uppercase hexadecimal format - see the docs.
The hexadecimal ("X") format specifier converts a number to a string of hexadecimal digits. The case of the format specifier indicates whether to use uppercase or lowercase characters for hexadecimal digits that are greater than 9.
This particular format is known as Composite Formatting, so to break it down:
{0 = parameter index, ,2 = alignment (a minimum field width of 2), :X2 = format string}
For example, with b == 10 this produces "0A", and with b == 255 it produces "FF".
