Guarantees on sizeof(int) in Single UNIX or POSIX - unix

What are the size guarantees on int in Single UNIX or POSIX? This sure is a FAQ, but I can't find the answer...

With icecrime's answer and a bit of further searching on my side, I got a complete picture:
ANSI C and C99 both mandate that INT_MAX be at least +32767 (i.e. 2^15 - 1). POSIX doesn't go beyond that. Single UNIX v1 has the same guarantee, while Single UNIX v2 states that the minimum acceptable value is 2 147 483 647 (i.e. 2^31 - 1).

The C99 standard specifies the content of the <limits.h> header in the following way:
Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
minimum value for an object of type int
INT_MIN  -32767  // -(2^15 - 1)
maximum value for an object of type int
INT_MAX  +32767  // 2^15 - 1
maximum value for an object of type unsigned int
UINT_MAX  65535  // 2^16 - 1
There are no size requirements expressed on the int type.
However, the <stdint.h> header offers the additional exact-width integer types int8_t, int16_t, int32_t, int64_t and their unsigned counterparts:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.

POSIX doesn't cover that. The ISO C standard guarantees that types will be able to handle at least a certain range of values but not that they'll be of a particular size.
The <stdint.h> header introduced with C99 gives you access to types like int16_t that do have an exact width.
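To make the distinction concrete, here is a minimal C sketch (assuming a C99 compiler) that prints the implementation's actual limits from <limits.h> next to an exact-width type from <stdint.h>:
#include <limits.h>
#include <inttypes.h>   /* pulls in <stdint.h> plus the PRI* format macros */
#include <stdio.h>

int main(void)
{
    /* The standard only guarantees INT_MAX >= 32767; the actual value
       is implementation-defined. */
    printf("INT_MAX     = %d\n", INT_MAX);
    printf("sizeof(int) = %zu bytes\n", sizeof(int));

    /* int32_t, where it exists, is exactly 32 bits wide regardless of
       what plain int happens to be. */
    int32_t fixed = INT32_MAX;
    printf("INT32_MAX   = %" PRId32 "\n", fixed);
    return 0;
}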

Related

Adding 0x80 + 0x80

As a preparation for my exam in Microcontrollers, I have this question:
How are the condition bits set when the Byte operation 0x80 + 0x80 is executed?
I understand how to add those 2, but I get 256 and I don't know which condition bits are set in this case.
First, the highest value one byte can hold is 255 (0xFF), so I do not think the result would be 256, but rather, overflow would cause the resulting value to be 0 (0x00).
Secondly, the condition bits would depend on your processor, but going by some ARM notes, I might reasonably expect:
Z: Zero
The Z flag is set if the result of the flag-setting instruction is zero.
C: Carry (or Unsigned Overflow)
The C flag is set if the result of an unsigned operation overflows the 32-bit result register. This bit can be used to implement 64-bit unsigned arithmetic, for example.
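To see the byte-wide result, the carry, and the zero condition in plain C (a minimal sketch of the arithmetic only, not of any particular processor's flag logic):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t a = 0x80, b = 0x80;
    uint8_t sum = (uint8_t)(a + b);         /* byte-wide result wraps to 0x00 */
    int carry = ((unsigned)a + b) > 0xFF;   /* carry out of bit 7 */
    int zero = (sum == 0);                  /* Z flag condition */
    printf("sum=0x%02X carry=%d zero=%d\n", sum, carry, zero);   /* sum=0x00 carry=1 zero=1 */
    return 0;
}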

How do those bit-wise operations work, and why wouldn't it use little endian instead?

I found these in the Arduino.h library, and was confused about the lowByte macro
#define lowByte(w) ((uint8_t) ((w) & 0xff))
#define highByte(w) ((uint8_t) ((w) >> 8))
At lowByte: wouldn't the conversion from WORD to uint8_t just take the low byte anyway? I know they use w & 0x00ff to get the low byte, but wouldn't the cast alone take the low byte?
At both the low/high: why wouldn't they use little endians, and read with size/offset?
i.e. if w is 0x12345678, the high half is 0x1234 and the low half is 0x5678; they write it to memory as 78 56 34 12 at, say, offset x
to read w, you read a word-sized value at location x
to read the high, you read a byte/uint8_t at location x
to read the low, you read a byte/uint8_t at location x + 2
At lowByte: wouldn't the conversion from WORD to uint8_t just take the low byte anyway? I know they use w & 0x00ff to get the low byte, but wouldn't the cast alone take the low byte?
Yes. Some people like to be extra explicit in their code anyway, but you are right.
At both the low/high: why wouldn't they use little endians, and read with size/offset?
I don't know what that means, "use little endians".
But simply aliasing a WORD as a uint8_t and using pointer arithmetic to "move around" the original object generally has undefined behaviour. You can't alias objects like that. I know your teacher probably said you can because it's all just bits in memory, but your teacher was wrong; C and C++ are abstractions over computer code, and have rules of their own.
Bit-shifting is the conventional way to achieve this.
In the case of lowByte, yes, the cast to uint8_t is equivalent to (w) & 0xff.
Regarding "using little endians", you don't want to access individual bytes of the value because you don't necessarily know whether your system is using big endian or little endian.
For example:
uint16_t n = 0x1234;
unsigned char *p = (unsigned char *)&n;   /* look at the object representation byte by byte */
printf("0x%02x 0x%02x\n", p[0], p[1]);
If you ran this code on a little endian machine it would output:
0x34 0x12
But if you ran it on a big endian machine you would instead get:
0x12 0x34
By using shifts and bitwise operators you operate on the value, which must be the same on all implementations, instead of on the representation of the value, which may differ.
So don't operate on individual bytes unless you have a very specific reason to.
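As a small, self-contained C sketch of the portable approach (the macros are the Arduino ones quoted above; the recombination line is my own assumption about how you might put the bytes back together):
#include <stdint.h>
#include <stdio.h>

/* Shift/mask versions operate on the value, so they behave the same
   on little-endian and big-endian machines. */
#define lowByte(w)  ((uint8_t) ((w) & 0xff))
#define highByte(w) ((uint8_t) ((w) >> 8))

int main(void)
{
    uint16_t w = 0x1234;
    uint8_t lo = lowByte(w);    /* 0x34 on every implementation */
    uint8_t hi = highByte(w);   /* 0x12 on every implementation */

    /* Recombining with shifts is just as portable. */
    uint16_t again = (uint16_t)(((uint16_t)hi << 8) | lo);
    printf("lo=0x%02X hi=0x%02X again=0x%04X\n", lo, hi, again);
    return 0;
}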

#define returns a wrong number in C

I'm trying to do this very basic task in C, where I want to define a number of ints in a header file. I've done this like so:
#define MINUTE (60)
#define HOUR (60 * MINUTE)
#define DAY (24 * HOUR)
The problem is that while MINUTE and HOUR return the correct answer, DAY returns something weird.
Serial.println(MINUTE); // 60
Serial.println(HOUR); // 3600
Serial.println(DAY); // 20864
Can someone explain why this happens?
Assuming you have something like
int days = DAY;
or
unsigned days = DAY;
You seem to have 16-bit integers. The maximum representable positive value for (signed) two's complement integers with 16 bits is 32767; for unsigned it is 65535.
So, as 24 * 3600 == 86400, you invoke undefined behaviour for the signed int and wrap for the unsigned (the int will likely wrap, too, but that is not guaranteed).
This results in 86400 modulo 65536 (which is 2 to the power of 16), which happens to be 20864.
Solution: use stdint.h types: uint32_t or int32_t to get defined sized integers.
Edit: Using function arguments follows basically the same principle as the initialisers above.
Update: As you claimed, when directly passing the integer constant 86400 to the function, it will have type long, because the compiler will automatically choose the smallest type which can hold the value. It is very likely that the println methods are overloaded for long arguments, so they will print the correct value.
However, for the expression the original types are relevant. All of the values 24, 60 and 60 have type int, so the result will also be int. The compiler will not use a larger type just because the result might overflow. Use 24L and you will get a long result for the macros, too.
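A minimal sketch of that fix, assuming you want the macros themselves to force long arithmetic (the L suffixes are the key; the printf call merely stands in for Serial.println):
#include <stdio.h>

/* The L suffix makes the whole constant expression long, so the
   intermediate products cannot overflow a 16-bit int. */
#define MINUTE (60L)
#define HOUR   (60L * MINUTE)
#define DAY    (24L * HOUR)

int main(void)
{
    long day = DAY;
    printf("%ld\n", day);   /* 86400, even where int is only 16 bits */
    return 0;
}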
It looks like you actually managed to dig up an ancient 16-bit compiler (where did you find it?). Otherwise I'd like to see the code that produces these numbers.
20864 = 86400 % 65536
Try storing the value in an int instead of a short.

Nested as_type Casting

When I nest OpenCL's as_type operators, I get some strange errors. For example, this line works:
a = as_uint(NAN)&4290772991;
But these lines do not work:
a = as_float(as_uint(NAN)&4290772991);
a = as_uint(as_float(as_uint(NAN)&4290772991));
The error reads:
invalid reinterpretation: sizes of 'float' and 'long' must match
This error message is confusing, because it seems like no long is created by this code. All values here appear to be 32 bits, so it should be possible to reinterpret-cast anything.
So why is this error happening?
In C99, undecorated decimal constants are assumed to be signed integers, and the compiler will automagically give the constant the smallest signed integer type which can hold the value, using the progression int, then long int, and so on (never an unsigned type).
The smallest signed integer type which can hold 4290772991 is a 64-bit signed type (because of the sign bit requirement). Thus, the as_type calls you have where the reinterpret type is a 32-bit type fail because of the size mismatch between the 64-bit long int the compiler selects for your constant and the target float type.
You should be able to get around the problem by changing 4290772991 to 4290772991u. The suffix will explicitly denote the value as unsigned, and the compiler should select a 32 bit unsigned integer. Alternatively, you could also use 0xFFBFFFFF - there are different rules for hexadecimal constants and it should be assigned a type from the progression int, then unsigned int, then long int, then finally unsigned long int.
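You can check how such constants are typed with a small host-side C99 sketch (ordinary C here rather than kernel code; the sizes in the comments are what you would typically see on a platform with a 32-bit int):
#include <stdio.h>

int main(void)
{
    /* 4290772991 does not fit in a 32-bit int, so the unsuffixed decimal
       constant gets a larger signed type (typically 64 bits wide). */
    printf("%zu\n", sizeof(4290772991));    /* likely 8 */

    /* The u suffix and the hexadecimal form can both take unsigned int. */
    printf("%zu\n", sizeof(4290772991u));   /* likely 4 */
    printf("%zu\n", sizeof(0xFFBFFFFF));    /* likely 4 */
    return 0;
}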

What is the maximum integer value in Flex?

I was trying to display a number: 2893604342.00. But when I display it, it shows up as -2893604342.
Following is the code snippet ...
avg += int(totalData[i][col.dataField]);
I have even replaced it with Number, but it's still showing the same negative number.
Please let me know whether there is any problem with int or Number!
The maximum values are accessible through each numeric type's static properties:
Number.MAX_VALUE
uint.MAX_VALUE
int.MAX_VALUE
(Just trace 'em.)
Integers in Flash are 32 bits, so an unsigned int's max value is (2^32)-1, i.e. 0xffffffff or 4294967295. A signed int's max positive value is (2^31)-1 or 2147483647 (one of the bits is used for the sign). The Number type is 64 bits.
In order to guarantee space for your result, type the variable as Number and cast the result to Number (or don't cast it at all).
var avg : Number = 0;
...
avg += totalData[i][col.dataField] as Number;
The largest exact integral value is 2^53; remember, ActionScript is ECMA at heart. Look up the ToInt32 operation for more info on that.
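Number is an IEEE-754 double, so the 2^53 limit can be demonstrated in any language that uses doubles; here is a small C sketch of it (an illustration only, not ActionScript):
#include <stdio.h>

int main(void)
{
    double big = 9007199254740992.0;   /* 2^53: still exactly representable */
    printf("%.0f\n", big);             /* 9007199254740992 */
    printf("%.0f\n", big + 1.0);       /* still 9007199254740992: 2^53 + 1 rounds back down */
    return 0;
}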
Try casting it to a uint instead of an int
