What is the difference/use for these 2 types? I have a basic understanding regarding pointers but I just can't wrap my head around this.
uint8_t* address_at_eeprom_location = (uint8_t*)10;
This line found in an Arduino example makes me feel so dumb. :)
So basically this is a double pointer?
The uint8_t is the unsigned 8-bit integer, i.e. data stored directly in memory. A uint8_t * is a pointer to the memory in which such a number is stored. The (uint8_t*) is a cast of the 10 (a literal, translated to the binary representation of the number ten) to the pointer type. No storage is created for the 10; the cast simply interprets the value ten as a memory address, which is then stored in the address_at_eeprom_location variable.
uint8_t is an unsigned 8-bit integer
uint8_t* is a pointer to an 8-bit integer in RAM
(uint8_t*)10 is a pointer to a uint8_t at address 10 in RAM
So basically this line saves the address of the location of a uint8_t in address_at_eeprom_location by setting it to 10. Most likely, later in the code this address is used to write/read an actual uint8_t value to/from there.
Instead of pointing to a single value, this can also be used as a starting point for an array later in the code:
uint8_t x = address_at_eeprom_location[3];
This would read the uint8_t at index 3 counting from address 10 (so at address 13) in RAM into the variable x.
I have declared a signed multidimensional array as follows:
typedef logic signed [3:0][31:0] hires_frame_t;
typedef hires_frame_t [3:0] hires_capture_t;
hires_capture_t rndpacket;
I want to randomize this array such that each element has a value between -32768 and 32767, in 32-bit two's complement.
I've tried the following:
assert(std::randomize(rndpacket) with
{foreach (rndpacket[channel])
foreach (rndpacket[channel][subsample])
{rndpacket[channel][subsample] < signed'(32768);
rndpacket[channel][subsample] >= signed'(-32768);}});
This compiles fine, but (Mentor Graphics) ModelSim fails in simulation, claiming:
randomize() failed due to conflicts between the following constraints:
# clscummulativedata.sv(56): (rndpacket[3][3] < 32768);
# cummulativedata.sv(57): (rndpacket[3][3] >= 32'hffff8000);
This is clearly something linked to the usage of signed vectors. I had a feeling that everything should be fine, as the array is declared as signed, as well as the thresholds in the randomize call, but apparently not. If I replace the range with 0-65535, everything works as expected.
What is the correct way to randomize such a signed array?
Your problem is that hires_frame_t is a signed 128-bit 2-dimensional packed array, and selecting a part of a packed array is unsigned. A way of keeping the part-select of a packed dimension signed is using a separate typedef for the dimension you want signed:
typedef bit signed [31:0] int32_t;
typedef int32_t [3:0] hires_frame_t;
typedef hires_frame_t [3:0] hires_capture_t;
Another option is putting the signed cast on the LHS of the comparisons. Your signed cast on the RHS is not doing anything because bare decimal numbers are already treated as signed. A comparison is unsigned if one or both sides are unsigned.
assert(std::randomize(rndpacket) with {
foreach (rndpacket[channel,subsample])
{signed'(rndpacket[channel][subsample]) < 32768;
signed'(rndpacket[channel][subsample]) >= -32768;}});
BTW, I'm showing the LRM-compliant way of using a 2-D foreach loop.
I found these in the Arduino.h library, and was confused about the lowByte macro:
#define lowByte(w) ((uint8_t) ((w) & 0xff))
#define highByte(w) ((uint8_t) ((w) >> 8))
At lowByte: wouldn't the conversion from WORD to uint8_t just take the low byte anyway? I know they do w & 0x00ff to get the low byte, but wouldn't the cast alone take the low byte?
At both low/high: why wouldn't they use little endians, and read with size/offset?
I.e., if w is 0x12345678, high is 0x1234 and low is 0x5678; they write it to memory as 78 56 34 12 at, say, offset x:
to read w, you read a word-sized value at location x
to read the high part, you read a byte/uint8_t at location x
to read the low part, you read a byte/uint8_t at location x + 2
At lowByte: wouldn't the conversion from WORD to uint8_t just take the low byte anyway? I know they do w & 0x00ff to get the low byte, but wouldn't the cast alone take the low byte?
Yes. Some people like to be extra explicit in their code anyway, but you are right.
at both the low/high : why wouldn't they use little endians, and read with size/offset
I don't know what that means, "use little endians".
But simply aliasing a WORD as a uint8_t and using pointer arithmetic to "move around" the original object generally has undefined behaviour. You can't alias objects like that. I know your teacher probably said you can because it's all just bits in memory, but your teacher was wrong; C and C++ are abstractions over computer code, and have rules of their own.
Bit-shifting is the conventional way to achieve this.
In the case of lowByte, yes, the cast to uint8_t is equivalent to ((w) & 0xff).
Regarding "using little endians", you don't want to access individual bytes of the value because you don't necessarily know whether your system is using big endian or little endian.
For example:
uint16_t n = 0x1234;
unsigned char *p = (unsigned char *)&n;
printf("0x%02x 0x%02x\n", p[0], p[1]);
If you ran this code on a little endian machine it would output:
0x34 0x12
But if you ran it on a big endian machine you would instead get:
0x12 0x34
By using shifts and bitwise operators you operate on the value which must be the same on all implementations instead of the representation of the value which may differ.
So don't operate on individual bytes unless you have a very specific reason to.
In GCC (Ubuntu 12.04), the following is the program I need to understand regarding the sizes of integer, character, and float pointers.
#include <stdio.h>

int main(void)
{
    int i = 20, *p;
    char ch = 'a', *cp;
    float f = 22.3f, *fp;
    printf("%zu %zu %zu\n", sizeof(p), sizeof(cp), sizeof(fp));
    printf("%zu %zu %zu\n", sizeof(*p), sizeof(*cp), sizeof(*fp));
    return 0;
}
Here is the output I get when I run the above code on Ubuntu 12.04:
Output:
8 8 8
4 1 4
As per this line, "Irrespective of data types, the size of a pointer for an address will be 4 bytes BY DEFAULT."
Then what is the reason behind getting sizeof(p) = 8 instead of sizeof(p) = 4?
Please explain.
sizeof(x) will return the size of x. A pointer is like any other variable, except that it holds an address. On your 64 bit machine, the pointer takes 64 bits or 8 bytes, and that is what sizeof will return. All pointers on your machine will be 8 bytes long, regardless of what data they point to.
The data they point to may be of a different length.
int x = 5;   // x is a 32-bit int, takes up 4 bytes
int *y = &x; // y holds the address of x; the pointer itself takes 8 bytes
float *z;    // z holds the address of a float; an address is still 8 bytes long
You're probably getting confused because you previously have done this on a 32 bit computer. You see, the 32 / 64 bit indicates the size of a machine address. So, on a 32 bit computer, a pointer holds an address that is at most 32 bits long, or four bytes. Your current machine must be a 64 bit machine, which is why the pointer needs to be 8 bytes long.
Heck, it's not just the address length. The size of other data types is also platform- and implementation-dependent. For example, an int may be 16 bits on one platform and 32 bits on another. A third implementation might go crazy and have 128-bit ints. The only guarantee in the spec is that an int will be at least 16 bits long. When in doubt, always check. The Wikipedia page on C data types would be helpful.
sizeof(p) returns the size of the pointer p, and you are most likely running on a 64-bit machine, so your addresses will be 64 bits (8 bytes) in length.
The value dereferenced by p is a 32-bit integer (4 bytes).
You can verify this by seeing that:
All pointers have sizeof 8
Your char value is size 1 (typical of many implementations of C)
Print p and *p (for all variables). You will see the actual address length this way.
I'm not sure which documentation you're using, but my guess is that it's talking about pointers on 32-bit systems.
On a 64-bit system the size of a pointer becomes 8 bytes.
QImage has a constructor QImage (uchar *data, int width, int height, int bytesPerLine, Format format) that creates a QImage from an existing memory buffer.
Is the order of bytes (uchars) platform-dependent? If I put the values for alpha, red, green, and blue in it with increasing indices, alpha is swapped with blue and red is swapped with green. This indicates a problem with endian-ness.
I now wonder whether the endian-ness is platform-dependent or not. The Qt documentation does not say anything about this.
If it is NOT platform-dependent, I would just change the order of storing the values:
texture[ startIndex + 0 ] = pixelColor.blue();
texture[ startIndex + 1 ] = pixelColor.green();
texture[ startIndex + 2 ] = pixelColor.red();
texture[ startIndex + 3 ] = pixelColor.alpha();
If it is platform-dependent, I would create an array of uint32, store values computed as alpha << 24 | red << 16 | green << 8 | blue, and reinterpret_cast the array before passing it to the QImage() constructor.
Best regards,
Jens
It depends on the format. Formats that state the total number of bits in a pixel are endian-dependent. For example, Format_ARGB32 indicates a 32-bit integer whose highest 8 bits are alpha; on a little-endian machine, those same 8 bits are the last byte in the byte sequence.
Formats that specify a byte sequence, like Format_RGB888, are not endian-dependent: Format_RGB888 says the bytes are arranged in memory in R, G, B order regardless of endianness.
To access bytes in the buffer, I would use the Q_BYTE_ORDER macro to conditionally compile the corresponding byte-access code, instead of using shifts.
I personally use Format_RGB888 since I don't deal with alpha directly in the image. That saves me the problem of dealing with endian difference.
From the Qt Docs:
Warning: If you are accessing 32-bpp image data, cast the returned pointer to QRgb* (QRgb has a 32-bit size) and use it to read/write the pixel value. You cannot use the uchar* pointer directly, because the pixel format depends on the byte order on the underlying platform. Use qRed(), qGreen(), qBlue(), and qAlpha() to access the pixels.
Greetings, everybody. I have seen examples of this operation so many times that I am beginning to think I am getting something wrong with binary arithmetic. Is there any sense in performing the following:
byte value = someAnotherByteValue & 0xFF;
I don't really understand this, because it does not change anything anyway. Thanks for help.
P.S.
I was trying to search for information both elsewhere and here, but unsuccessfully.
EDIT:
Well, of course I assume that someAnotherByteValue is 8 bits long; the problem is that I don't get why so many people (I mean professionals) use such things in their code. For example, in SharpZlib there is:
buffer_ |= (uint)((window_[windowStart_++] & 0xff |
(window_[windowStart_++] & 0xff) << 8) << bitsInBuffer_);
where window_ is a byte buffer.
The most likely reason is to make the code more self-documenting. In your particular example, it is not the size of someAnotherByteValue that matters, but rather the fact that value is a byte. This makes the & redundant in every language I am aware of. But, to give an example of where it would be needed, if this were Java and someAnotherByteValue was a byte, then the line int value = someAnotherByteValue; could give a completely different result than int value = someAnotherByteValue & 0xff. This is because Java's long, int, short, and byte types are signed, and the rules for conversion and sign extension have to be accounted for.
If you always use the idiom value = someAnotherByteValue & 0xFF then, no matter what the types of the variable are, you know that value is receiving the low 8 bits of someAnotherByteValue.
uint s1 = (uint)(initial & 0xffff);
There is a point to this because uint is 32 bits, while 0xffff is 16 bits. The line selects the 16 least significant bits from initial.
Nope, there is no use in doing this. If the value were wider than 8 bits, the statement would have some meaning; otherwise, the result is the same as the input.
If someAnotherByteValue is wider than 8 bits and you want to extract its least significant 8 bits, then it makes sense. Otherwise, there is no use.
No, there is no point as long as you are dealing with a byte. If value were a long, then its lower 8 bits would be the lower 8 bits of someAnotherByteValue and the rest would be zero.
In a language like C++ where operators can be overloaded, it's possible but unlikely that the & operator has been overloaded. That would be pretty unusual and bad practice though.
EDIT: Well, off course i assume that someAnotherByteValue is 8 bits long, the problem is that i don't get why so many people ( i mean professionals ) use such things in their code. For example in Jon Skeet's MiscUtil there is:
uint s1 = (uint)(initial & 0xffff);
where initial is int.
In this particular case, the author might be trying to convert an int to a uint. The & with 0xffff ensures that only the lowest 2 bytes are kept, even on a system whose int type is not 2 bytes wide.
To be picky, there is no guarantee regarding a machine's byte size. There is no reason to assume, in an extremely portable program, that the architecture's byte is 8 bits wide. To the best of my memory, according to the C standard (for example), a char is one byte, a short is wider than or the same as char, an int is wider than or the same as short, a long is wider than or the same as int, and so on. Hence, theoretically there could be a compiler where a long is actually one byte wide, and that byte is, say, 10 bits wide. To ensure your program behaves the same on such a machine, you need to use this (seemingly redundant) coding style.
"Byte" # Wikipedia gives examples for such peculiar architectures.