About pointers and ASCII code - pointers

I'm learning more about the C language, and I have a question about some code I've seen.
#include <stdio.h>

int main(void) {
    int i = (65 * 256 + 66) * 256 + 67;
    int *pi;
    char *pc;

    pi = &i;
    pc = (char *)pi;
    printf("%c %c %c \n", *pc, *(pc + 1), *(pc + 2));
    return 0;
}
Output is: C B A
I know that the ASCII code of A is 65, B is 66, and C is 67, but the variable i is none of them.
If I set i = 65, the output is just A and doesn't show B or C. Why?
And I would like to know why this code has that output. Thanks for any help.

The line
int i = (65*256+66)*256+67;
turns i into the following
00000000 01000001 01000010 01000011
int = 4 bytes, or 4 groups of 8 bits (typically)
char = 1 byte, or 1 group of 8 bits.
What happens is that a char pointer is used to point at one byte of the original int at a time.
On a little-endian machine, the pointer initially points to the least significant byte (the group on the right), so the letter C is printed first.
Then the pointer itself is incremented by 1, which makes it point to the next byte in memory, which happens to hold B. And once more for the A.
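You can see this byte order for yourself by printing each byte of i in hex; this is a small sketch assuming a 4-byte int, and on a little-endian machine it prints 43 42 41 00 (C, B, A, then the zero byte), while a big-endian machine would print 00 41 42 43:

#include <stdio.h>

int main(void) {
    int i = (65 * 256 + 66) * 256 + 67;        /* value 0x00414243 */
    unsigned char *p = (unsigned char *)&i;

    /* walk the int one byte at a time, lowest address first */
    for (size_t k = 0; k < sizeof i; k++)
        printf("%02X ", (unsigned)p[k]);
    printf("\n");
    return 0;
}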

Multiplying by 256 is the same as a left shift by 8 bits (1 byte), so the line
int i = (65*256+66)*256+67;
actually puts 'A', 'B' and 'C' into three adjacent bytes of i in memory.
The pointer pi is then made to point at the address of the integer i, and that same address is converted to the char pointer pc. On a little-endian machine (which is what the output C B A shows) pc therefore holds the address of the byte that contains 'C', and adding 1 and 2 to that address points at the adjacent bytes holding 'B' and 'A', which is what gets printed.
EDIT: just to clarify a bit more, int is 32 bits here but char is 8 bits; that's why you need a char pointer to get an address that refers to a single byte.
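As a quick illustration of that equivalence (a minimal sketch, the values follow from the question's code):

#include <stdio.h>

int main(void) {
    int a = (65 * 256 + 66) * 256 + 67;    /* multiply-and-add form from the question */
    int b = (65 << 16) | (66 << 8) | 67;   /* the same value built with shifts */

    /* both print 4276803, i.e. 0x00414243 */
    printf("%d %d 0x%X\n", a, b, (unsigned)a);
    return 0;
}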

Characters are stored as bytes, as you probably know. The initialization of the variable 'i' has the following meaning:
65*256 // take 65 ('A') and shift it left by 8 bits, i.e. 1 byte (that is what '*256' does)
(65*256+66)*256 // add 66 ('B') and shift the whole thing again
(65*256+66)*256+67 // add 67 ('C')
'pi' is initialized as an int pointer to 'i'
'pc' is initialized as a char pointer holding the same address as 'pi'
So 'pc' holds the address of the first byte of 'i' in memory; on a little-endian machine that is the byte containing 'C'.
By adding 1 and 2 to the address in pc, you get the second and third bytes (containing 'B' and 'A'), as follows:
printf("%c %c %c \n", *pc, *(pc+1), *(pc+2));
Working on the bits here ;D
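If you want the letters in a fixed A B C order no matter how the machine lays the int out in memory, you can work on the value with shifts instead of walking the bytes; a minimal sketch:

#include <stdio.h>

int main(void) {
    int i = (65 * 256 + 66) * 256 + 67;

    /* shifts operate on the value, so this prints "A B C" regardless of endianness */
    printf("%c %c %c\n", (i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF);
    return 0;
}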

Related

When incrementing double pointer position by 1 the resultant is zero but when incremented by 2 it's the actual value initialised

So I need some help understanding the behaviour of pointers. I have code like this below:
double d1 = 3.5;
double *d1ptr = &d1;
When I increase the pointer position by 1, i.e. d1ptr = d1ptr + 1, I get zero, but when I increase it by 2, i.e. d1ptr = d1ptr + 2, I get the value it was first initialised with, that is 3.5.
However, what is weird is that when I increase it by any number greater than 2, the value I print out is 0, i.e. when I say printf("%d", *d1ptr).
Can you please explain this behaviour?
You are trying to print a double using %d, which results in undefined behavior. Use %f or %lf so that you get the expected output.
double d1 = 3.5;
double *d1ptr = &d1;
printf("%lf %u\n", *d1ptr, d1ptr); //prints value and address
d1ptr = d1ptr+2;
printf("%lf %u", *d1ptr, d1ptr); //prints value and address
Output:
3.500000 2255690064
0.000000 2255690080
The address has been increased by 2 * sizeof(double). (Strictly speaking, a pointer should be printed with %p and a cast to void *, and dereferencing d1ptr + 2 reads past the single double object d1, so the 0.000000 shown above is not guaranteed either.)
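Pointer arithmetic like d1ptr + 1 is only well defined while the pointer stays within an array (or one element past its end), so a safer way to see the scaling is a sketch with a hypothetical array of doubles:

#include <stdio.h>

int main(void) {
    double d[3] = {3.5, 1.25, 7.0};    /* hypothetical values for illustration */
    double *p = d;

    /* each +1 advances the address by sizeof(double), i.e. to the next element */
    printf("%lf at %p\n", *p,       (void *)p);
    printf("%lf at %p\n", *(p + 1), (void *)(p + 1));
    printf("%lf at %p\n", *(p + 2), (void *)(p + 2));
    return 0;
}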

Why does this binary math fail when adding 00000001, but work correctly otherwise?

I've tried everything I can think of and cannot seem to get the below binary math logic to work. Not sure why this is failing but probably indicates my misunderstanding of binary math or C. The ultimate intent is to store large integers (unsigned long) directly to an 8-bit FRAM memory module as 4-byte words so that a micro-controller (Arduino) can recover the values after a power failure. Thus the unsigned long has to be assembled from its four byte words parts as it's pulled from memory, and the arithmetic of assembling these word bytes is not working correctly.
In the below snippet of code, the long value is defined as four bytes A, B, C, and D (simulating being pulled from four 8-bit memory blocks), which get translated to decimal notation to be used as an unsigned long in the arrangement DDDDDDDDCCCCCCCCBBBBBBBBAAAAAAAA. If A < 256 and B, C, D all == 0, the math works correctly. The math also works correctly for any values of B, C, and D if A == 0. But if B, C, or D > 0 and A == 1, the 1 value of A is not added during the arithmetic. A value of 2 works, but not a value of 1. Is there any reason for this? Or am I doing binary math wrong? Is this a known issue that needs a workaround?
// ---- FUNCTIONS
unsigned long fourByte_word_toDecimal(uint8_t byte0 = B00000000, uint8_t byte1 = B00000000, uint8_t byte2 = B00000000, uint8_t byte3 = B00000000){
  return (byte0 + (byte1 * 256) + (byte2 * pow(256, 2)) + (byte3 * pow(256, 3)));
}

// ---- MAIN
void setup() {
  Serial.begin(9600);

  uint8_t addressAval = B00000001;
  uint8_t addressBval = B00000001;
  uint8_t addressCval = B00000001;
  uint8_t addressDval = B00000001;

  uint8_t addressValArray[4];
  addressValArray[0] = addressAval;
  addressValArray[1] = addressBval;
  addressValArray[2] = addressCval;
  addressValArray[3] = addressDval;

  unsigned long decimalVal = fourByte_word_toDecimal(addressValArray[0], addressValArray[1], addressValArray[2], addressValArray[3]);

  // Print out resulting decimal value
  Serial.println(decimalVal);
}
In the code above, the binary value should result as 00000001000000010000000100000001, AKA a decimal value of 16843009. But the code evaluates the decimal value to 16843008. Changing the value of addressAval to 00000000 also evaluates (correctly) to 16843008, and changing addressAval to 00000010 also correctly evaluates to 16843010.
I'm stumped.
The problem is that you're using pow(). On an AVR-based Arduino, double is only 32 bits wide (the same as float), so the whole expression is calculated as binary32, which doesn't have enough precision to represent 16843009:
>>> numpy.float32(16843009)
16843008.0
The fix is to use integers, specifically 65536 and 16777216UL.
Do not use pow() for this.
The usual way to do this is with the shift operator; the casts below are needed so that each shift is carried out in 32 bits rather than in a (possibly 16-bit) int:
uint32_t result = ((uint32_t)byte3 << 24) | ((uint32_t)byte2 << 16) | ((uint32_t)byte1 << 8) | byte0;
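Put together, a corrected helper might look like the plain-C sketch below (the Arduino default arguments and Serial calls are left out so it compiles anywhere; on the board you would print with Serial.println as before):

#include <stdio.h>
#include <stdint.h>

/* Rebuild a 32-bit value from four bytes using integer shifts only;
   the casts make sure each shift happens in 32-bit arithmetic. */
static uint32_t fourByte_word_toDecimal(uint8_t byte0, uint8_t byte1, uint8_t byte2, uint8_t byte3) {
    return (uint32_t)byte0
         | ((uint32_t)byte1 << 8)
         | ((uint32_t)byte2 << 16)
         | ((uint32_t)byte3 << 24);
}

int main(void) {
    /* 0x01010101 == 16843009, the value the question expects */
    printf("%lu\n", (unsigned long)fourByte_word_toDecimal(1, 1, 1, 1));
    return 0;
}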

Sprintf Does not work properly

Hello, I am working on a PIC18F46K22 with the XC8 compiler. The sprintf function does not work properly.
My code is:
const char *DATA[4] = {"xxxxxx","yyyyyy","zzzzzz","aaaa"};
unsigned char Data1=2;
unsigned char Data2=3;
char L1Buffer[6];
char L2Buffer[6];
char TotalBuffer[20];
for(int i=0;i<6;i++){L1Buffer[i]=0;L2Buffer[i]=0;}
for(int i=0;i<20;i++){TotalBuffer[i]=0;}
sprintf (L1Buffer,"%s", DATA[Data1]);
sprintf (L2Buffer,"%s%d", DATA[Data2],Data2);
sprintf(TotalBuffer,"L1:%s L2:%s",L1Buffer,L2Buffer);
Lcd_Set_Cursor(2,1);
printf("%s",TotalBuffer);
Lcd_Set_Cursor(3,1);
printf("%s",L2Buffer);
Output :
L1:zzzzzzaaaa3 L2:aa
aaaa3
Expected output :
L1:zzzzzz L2:aaaa3
aaaa3
You are putting 7 characters (six 'z's plus one '\0') into a six-character array. You need to take the space for the null terminator into account.
You need to declare L1Buffer to hold 7 characters:
char L1Buffer[7];
In your case, L1Buffer and L2Buffer are placed adjacent in memory. Writing "zzzzzz" into L1Buffer places six 'z's in L1Buffer and '\0' into L2Buffer[0], as it happens to be located right next to it:
z z z z z z\0 . . . . .
`-L1Buffer-'`-L2Buffer-'
Then, L2Buffer is overwritten:
z z z z z z a a a a 3\0
`-L1Buffer-'`-L2Buffer-'
Note there is no terminator after the 'z's, so sprintf(TotalBuffer,"L1:%s L2:%s",L1Buffer,L2Buffer); reads characters from L1Buffer until it encounters the null terminator at the end of L2Buffer. That's why you get zzzzzzaaaa3.
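A minimal sketch of the fix, with the buffers sized to include the terminators and snprintf used as an extra guard against overruns (standard C; on XC8, plain sprintf with the corrected sizes works the same way):

#include <stdio.h>

int main(void) {
    const char *DATA[4] = {"xxxxxx", "yyyyyy", "zzzzzz", "aaaa"};
    unsigned char Data1 = 2;
    unsigned char Data2 = 3;

    char L1Buffer[7];                  /* "zzzzzz" is 6 chars + '\0' */
    char L2Buffer[7];                  /* "aaaa" + "3" is 5 chars + '\0' */
    char TotalBuffer[20];

    snprintf(L1Buffer, sizeof L1Buffer, "%s", DATA[Data1]);
    snprintf(L2Buffer, sizeof L2Buffer, "%s%d", DATA[Data2], Data2);
    snprintf(TotalBuffer, sizeof TotalBuffer, "L1:%s L2:%s", L1Buffer, L2Buffer);

    printf("%s\n", TotalBuffer);       /* L1:zzzzzz L2:aaaa3 */
    printf("%s\n", L2Buffer);          /* aaaa3 */
    return 0;
}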

How to convert a group of Hexadecimal to Decimal (Visual Studio )

I want to retrieve the values in decimal, like in Pic2 (hardcoded for visual understanding).
This is the code to convert hex to decimal for 16 bits:
string H;
int D;
H = txtHex.Text;
D = Convert.ToInt16(H, 16);
txtDec.Text = Convert.ToString(D);
However, it doesn't work for a whole group.
So the hex you are looking at does not represent a single decimal number. If it did, that number would be far too large to store in any integral type, and it might actually be too large to store in floating-point types.
That hex you are looking at represents the binary data of a file. Each set of two characters represents one byte (because 16^2 = 2^8).
Take each pair of hex characters and convert it to a value between 0 and 255. You can accomplish this easily by converting each character to its numerical value. In case you don't have a complete understanding of what hex is, here's a map.
'0' = 0
'1' = 1
'2' = 2
'3' = 3
'4' = 4
'5' = 5
'6' = 6
'7' = 7
'8' = 8
'9' = 9
'A' = 10
'B' = 11
'C' = 12
'D' = 13
'E' = 14
'F' = 15
If the character on the left evaluates to n and the character on the right evaluates to m, then the decimal value of the hex pair is (n x 16) + m. For example, "B9" is (11 x 16) + 9 = 185.
You can use this method to get your values between 0 and 255. You then need to store each value in an unsigned char (this is a C/C++/ObjC term - I have no idea what the C# or VBA equivalent is, sorry) and concatenate these unsigned chars to recreate the binary of the file. It is very important that you use an 8-bit type to store these values. You should not store them in 16-bit integers, as you do above, or you will get corrupted data.
I don't know what you're meant to output in your program but this is how you get the data. If you provide a little more information I can probably help you use this binary.
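Since the description above is phrased in C terms, here is a minimal C sketch of that pair-by-pair conversion; the input string and buffer size are placeholders for illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char hex[] = "B9 D1 4F 00";                   /* example input, not taken from the question */
    unsigned char bytes[64];
    size_t count = 0;

    char *p = hex;
    while (*p != '\0' && count < sizeof bytes) {
        char *end;
        unsigned long v = strtoul(p, &end, 16);   /* parses one hex pair, e.g. "B9" -> 185 */
        if (end == p)
            break;                                /* no more hex digits */
        bytes[count++] = (unsigned char)v;
        p = end;
    }

    for (size_t i = 0; i < count; i++)
        printf("%u ", bytes[i]);                  /* prints 185 209 79 0 */
    printf("\n");
    return 0;
}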
You will need to split the contents into separate hex-number pairs ("B9", "D1" and so on). Then you can convert each into their "byte" value and add it to a result list.
Something like this, although you may need to adjust the "Split" (it currently uses single spaces, carriage returns, newlines and tabs as separators):
var byteList = new List<byte>();
foreach(var bytestring in txtHex.Text.Split(new[] {' ', '\r', '\n', '\t'},
StringSplitOptions.RemoveEmptyEntries))
{
byteList.Add(Convert.ToByte(bytestring, 16));
}
byte[] bytes = byteList.ToArray(); // further processing usually needs a byte-array instead of a List<byte>
What you then do with those "bytes" is up to you.

Referencing individual pins, in Hex, Bit masking

I'm using a Netduino Plus 2 and need to understand how to convert an individual pin's number into hex for bit masking, e.g.:
PSEUDOCODE
if counter_value_bit_1 is 1, do:
write 1 to D0 pin
else
write 0 to D0 pin
..... counting from bit_1 through bit_9.
if counter_value_bit_9 is 1, do:
write 1 to D0 pin
else
write 0 to D0 pin
Answer
if (counter_value & 0x01) { //bit_1
...}
if (counter_value & 0x200) { //bit_9
...}
My question: how do you get 0x200 = bit 9, etc.?
An example or two for bits in between 1 and 9 would be great.
THANKS
Which language do you use?
In C, byte ordering does not matter as long as you mask the value itself, e.g. counter_value & 0x200; that works the same on every machine. Byte order only becomes an issue if you access the raw bytes of the counter. If you know the byte order, you can extract the byte containing bit 9 like this:
#define BYTE_WITH_BIT_9 ...
int counter_value = 42;
((char*)&counter_value)[BYTE_WITH_BIT_9]
Cast to char* to access the raw bytes, then choose the byte and do your bit operations on it.
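As for how the mask values come about: the mask for bit n is 1u << n if you count bits from 0, or 1u << (n - 1) if you count from 1, so 0x01 is the lowest bit and 0x200 is 1u << 9. A small sketch that prints the masks and tests one of them (the counter value is an arbitrary example):

#include <stdio.h>

int main(void) {
    /* Mask for bit n: 1u << n when counting from 0, 1u << (n - 1) when counting from 1. */
    for (unsigned n = 0; n < 10; n++)
        printf("1u << %u = 0x%03X\n", n, 1u << n);

    unsigned counter_value = 0x2AA;        /* arbitrary example value */
    if (counter_value & (1u << 9))         /* 0x200: bit 9 counting from 0 */
        printf("the 0x200 bit is set\n");
    if (counter_value & 0x01)              /* lowest bit */
        printf("the 0x01 bit is set\n");
    return 0;
}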
