When I nest OpenCL's as_type operators, I get some strange errors. For example, this line works:
a = as_uint(NAN)&4290772991;
But these lines do not work:
a = as_float(as_uint(NAN)&4290772991);
a = as_uint(as_float(as_uint(NAN)&4290772991));
The error reads:
invalid reinterpretation: sizes of 'float' and 'long' must match
This error message is confusing, because it seems like no long is created by this code. All values here appear to be 32 bits wide, so it should be possible to reinterpret-cast anything.
So why is this error happening?
In C99, undecorated decimal constants are assumed to be signed integers, and the compiler will automagically give the constant the smallest signed integer type which can hold the value, using the progression int, then long int, then finally long long int.
The smallest signed integer type which can hold 4290772991 is a 64-bit signed type (because of the sign bit requirement). Thus, the as_type calls you have where the target type is a 32-bit type fail because of the size mismatch between the 64-bit long int the compiler selects for your constant and the target float type.
You should be able to get around the problem by changing 4290772991 to 4290772991u. The suffix explicitly marks the value as unsigned, and the compiler should select a 32-bit unsigned integer. Alternatively, you could use 0xFFBFFFFF: there are different rules for hexadecimal constants, and it should be assigned a type from the progression int, then unsigned int, then long int, then unsigned long int, and so on.
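For instance, either of the following should satisfy the size check (a sketch of the two suggested fixes, untested):
a = as_float(as_uint(NAN) & 4290772991u);  // u suffix: constant fits in and gets type unsigned int (32 bits)
a = as_float(as_uint(NAN) & 0xFFBFFFFF);   // hex constant: typed unsigned int under the hexadecimal progression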
Related
I have declared a signed multidimensional array as follows:
typedef logic signed [3:0][31:0] hires_frame_t;
typedef hires_frame_t [3:0] hires_capture_t;
hires_capture_t rndpacket;
I want to randomize this array such that each element has a value between -32768 and 32767, in 32-bit two's complement.
I've tried the following:
assert(std::randomize(rndpacket) with
  {foreach (rndpacket[channel])
    foreach (rndpacket[channel][subsample])
      {rndpacket[channel][subsample] < signed'(32768);
       rndpacket[channel][subsample] >= signed'(-32768);}});
This compiles fine, but (Mentor Graphics) ModelSim fails in simulation, claiming:
randomize() failed due to conflicts between the following constraints:
# clscummulativedata.sv(56): (rndpacket[3][3] < 32768);
# cummulativedata.sv(57): (rndpacket[3][3] >= 32'hffff8000);
This is clearly something linked to the use of signed vectors. I had a feeling that everything should be fine, since the array is declared as signed and so are the thresholds in the randomize call, but apparently not. If I replace the range with 0-65535, everything works as expected.
What is the correct way to randomize such a signed array?
Your problem is that hires_frame_t is a signed 128-bit, 2-dimensional packed array, and a part-select of a packed array is unsigned. A way of keeping the part-select of a packed dimension signed is to use a separate typedef for the dimension you want signed:
typedef bit signed [31:0] int32_t;
typedef int32_t [3:0] hires_frame_t;
typedef hires_frame_t [3:0] hires_capture_t;
Another option is putting the signed cast on the LHS of the comparisons. Your signed cast on the RHS is not doing anything because bare decimal numbers are already treated as signed. A comparison is unsigned if one or both sides are unsigned.
assert(std::randomize(rndpacket) with {
foreach (rndpacket[channel,subsample])
{signed'(rndpacket[channel][subsample]) < 32768;
signed'(rndpacket[channel][subsample]) >= -32768;}});
BTW, I'm showing the LRM-compliant way of using a 2-D foreach loop.
Could someone please explain the code below to me? It takes an Integer and converts it to a Single floating-point number, but if someone could break this down and elaborate, that would be helpful.
singleVar := PSingle(@intVar)^
This doesn't convert the integer to a float. It reinterprets the bytes of the 32-bit integer as a single (a floating-point data type that also has 32 bits).
@intVar is the address of the integer data in memory. The type is pointer to integer (PInteger). By writing PSingle(@intVar), you tell the compiler to pretend that it is a pointer to a single; in effect, you tell the compiler that it should interpret the data at this place in memory as a single. Finally, PSingle(@intVar)^ is simply dereferencing the pointer. Hence, it is the "single" value at this location in memory, that is, the original bytes now interpreted as a single.
Interpreting the bytes of an integer as a single doesn't give you the same numerical value in general. For instance, if the integer value is 123, the bytes are 7B 00 00 00. If you interpret this sequence of bytes as a single, you obtain 1.72359711111953E-43, which is not numerically equivalent.
To actually convert an integer to a single, you would write singleVar := intVar.
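For comparison, here is a rough C analogue of the two operations (my own sketch, not from the original answer, assuming int and float are both 32 bits); memcpy plays the role of the pointer cast to avoid aliasing issues:
#include <stdio.h>
#include <string.h>

int main(void) {
    int intVar = 123;
    float reinterpreted, converted;

    /* Reinterpretation: copy the raw bits, like PSingle(@intVar)^ in the Delphi code. */
    memcpy(&reinterpreted, &intVar, sizeof reinterpreted);

    /* Conversion: produce the numerically equivalent value, like singleVar := intVar. */
    converted = (float)intVar;

    printf("%g\n%g\n", reinterpreted, converted);  /* roughly 1.7236e-43 and 123 */
    return 0;
}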
julia> typeof(-0b111)
Uint64
julia> typeof(-0x7)
Uint64
julia> typeof(-7)
Int64
I find this result a bit surprising. Why does the numeric base of the number determine signed or unsigned-ness?
Looks like this is expected behavior:
This behavior is based on the observation that when one uses unsigned hex literals for integer values, one typically is using them to represent a fixed numeric byte sequence, rather than just an integer value.
http://docs.julialang.org/en/latest/manual/integers-and-floating-point-numbers/#integers
...seems like a bit of an odd choice.
This is a subjective call, but I think it's worked out pretty well. In my experience, when you use hex or binary you're interested in a specific pattern of bits, and you generally want it to be unsigned. When you're just interested in a numeric value, you use decimal because that's what we're most familiar with. In addition, when you're using hex or binary the number of digits you use for input is typically significant, whereas in decimal it isn't. So that's how literals work in Julia: decimal gives you a signed integer of a type the value fits in, while hex and binary give you an unsigned value whose storage size is determined by the number of digits.
What are the size guarantees on int in Single UNIX or POSIX? This sure is a FAQ, but I can't find the answer...
With icecrime's answer, and a bit further searching on my side, I got a complete picture:
ANSI C and C99 both mandate that INT_MAX be at least +32767 (i.e. 2^15-1). POSIX doesn't go beyond that. Single UNIX v1 has the same guarantee, while Single UNIX v2 states that the minimum acceptable value is 2 147 483 647 (i.e. 2^31-1).
The C99 standard specifies the content of the <limits.h> header in the following way:
Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
minimum value for an object of type int
INT_MIN -32767 // -(2^15 - 1)
maximum value for an object of type int
INT_MAX +32767 // 2^15 - 1
maximum value for an object of type unsigned int
UINT_MAX 65535 // 2^16 - 1
There are no size requirements expressed on the int type.
However, the <stdint.h> header offers additional exact-width integer types int8_t, int16_t, int32_t, int64_t and their unsigned counterparts:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two’s complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
POSIX doesn't cover that. The ISO C standard guarantees that types will be able to handle at least a certain range of values but not that they'll be of a particular size.
The <stdint.h> header introduced with C99 will get you access to types like int16_t that do.
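For completeness, here's a small C99 snippet (my own illustration, not taken from either standard) that prints what a given platform actually provides:
#include <inttypes.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* C99 only guarantees INT_MAX >= +32767; the actual range is implementation-defined. */
    printf("int: %d .. %d (%zu bytes)\n", INT_MIN, INT_MAX, sizeof(int));

    /* int32_t, where it exists, is exactly 32 bits wide with a two's complement representation. */
    printf("int32_t: %" PRId32 " .. %" PRId32 "\n", INT32_MIN, INT32_MAX);
    return 0;
}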
I have the following code:
NSUInteger one = 1;
CGPoint p = CGPointMake(-one, -one);
NSLog(@"%@", NSStringFromCGPoint(p));
Its output:
{4.29497e+09, 4.29497e+09}
On the other hand:
NSUInteger one = 1;
NSLog(@"%i", -one); // prints -1
I know there’s probably some kind of overflow going on, but why do the two cases differ, and why doesn’t it work the way I want? Should I always remind myself of the particular numeric type of my variables and expressions, even when doing trivial arithmetic?
P.S. Of course I could use unsigned int instead of NSUInteger, makes no difference.
When you apply the unary minus to an unsigned value, the value is negated and then forced back into unsigned garb by having the maximum value of the unsigned type plus one repeatedly added to it. When you pass that to CGPointMake(), that (very large) unsigned value is then assigned to a CGFloat.
You don't see this in your NSLog() statement because you are logging it as a signed integer. Convert that back to a signed integer and you indeed get -1. Try using NSLog(@"%u", -one) and you'll find you're right back at 4294967295.
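Here's the same effect reproduced in plain C (a minimal sketch of my own, assuming a 32-bit unsigned int as in the 32-bit runtime the question appears to target):
#include <stdio.h>

int main(void) {
    unsigned int one = 1;
    unsigned int negated = -one;   /* unsigned negation wraps modulo 2^32: 4294967295 */
    float f = -one;                /* the large unsigned value is then converted to floating point */

    printf("%u\n", negated);       /* prints 4294967295 */
    printf("%d\n", (int)negated);  /* commonly prints -1 (conversion to signed is implementation-defined) */
    printf("%g\n", f);             /* prints about 4.29497e+09, matching the CGPoint output */
    return 0;
}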
unsigned int versus NSUInteger DOES make a difference: unsigned int is half the size of NSUInteger under an LP64 architecture (x86_64, ppc64) or when you compile with NS_BUILD_32_LIKE_64 defined. NSUInteger happens to always be pointer-sized (but use uintptr_t if you really need an integer that's the size of a pointer!); unsigned is not when you're using the LP64 model.
OK, without actually knowing, but reading around on the net about all of these datatypes, I'd say the issue is with the conversion from an NSUInteger (which resolves to either an unsigned int (32-bit) or an unsigned long (64-bit)) to a CGFloat (which resolves to either a float (32-bit) or a double (64-bit)).
In your second example that same conversion is not happening. The other thing that may be affecting it is that, from my reading, NSUInteger is not designed to contain negative numbers, only positive ones. So that is likely to be where things start to go wrong.