GCC pointer cast warning - pointers

I am wondering why GCC is giving me this warning:
test.h: In function TestRegister:
test.h:12577: warning: cast to pointer from integer of different size
Code:
#define Address 0x1234

int TestRegister(unsigned int BaseAddress)
{
    unsigned int RegisterValue = 0;
    RegisterValue = *((unsigned int *)(BaseAddress + Address));
    if ((RegisterValue & 0xffffffff) != (0x0 << 0))
    {
        return(0);
    }
    else
    {
        return(1);
    }
}

Probably because you're on a 64-bit platform, where pointers are 64-bit but ints are 32-bit.
Rule-of-thumb: Don't try to use integers to store addresses.

If you include <stdint.h> and compile for the C99 standard using gcc -Wall -std=c99, you could cast to and from intptr_t, which is an integer type of the same size as a pointer.
RegisterValue = *((unsigned int *)((intptr_t)(BaseAddress + Address))) ;
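For instance, the whole function could be reworked along these lines (a minimal sketch, assuming you are free to change the parameter type; uintptr_t is the unsigned counterpart of intptr_t and is the usual choice for holding addresses):

#include <stdint.h>

#define Address 0x1234

int TestRegister(uintptr_t BaseAddress)
{
    /* Form the address in an integer type wide enough for a pointer,
       then convert it to a pointer. volatile is worth adding when the
       target is a hardware register. */
    unsigned int RegisterValue = *(volatile unsigned int *)(BaseAddress + Address);
    return (RegisterValue != 0) ? 0 : 1;
}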

Among other things, you're assuming that a pointer will fit into an unsigned int, where C gives no such guarantee… there are a number of platforms in use today where this is untrue, apparently including yours.
A pointer to data can be stored in a (void*) or (type*) safely. Pointers can be added to (or subtracted to yield) a size_t or ssize_t. There's no guaranteed relationship between sizeof(int), sizeof(size_t), sizeof(ssize_t), and (void*) or (type*)…
(Also, in this case, there's no real point in initializing the var and overwriting it on the next line…)
Also unrelated, but you realise that != (0x0 << 0) is just != 0, which can be omitted entirely, since if (x) means if (x != 0)…? Perhaps that's because this is cut down from a larger sample, but that entire routine could be presented as
int TestRegister (unsigned int* BaseAddress)
{
    return ((0xffffffff & *(BaseAddress + Address)) ? 0 : 1);
}
(Edited: changed to unsigned int* as it seems far more likely he wants to skip through at int-sized offsets?)


Can I skip eva's assertion on signed overflow?

Sample code:
void main(){
    unsigned int x;
    x = 1U << 31; // OK
    x = 1 << 31;  // Sign overflowed
    return;
}
frama-c-gui -eva main.c:
void main(void)
{
    unsigned int x;
    x = 1U << 31;
    /*@ assert Eva: signed_overflow: 1 << 31 ≤ 2147483647; */
    x = (unsigned int)(1 << 31);
    return;
}
I get a red alarm because of the signed overflow on line 4. I have existing code with a ton of hardware registers defined with mask bits and shifts like this. It's unreasonable to modify the code to add "U" to all the mask bits. Is there an option in the Eva plugin to treat these constants as unsigned integers?
There are some options in the kernel to control which kinds of alarms should be emitted (see frama-c -kernel-h or the manual, especially its section 6.3, for more information).
In your particular case, you are probably interested in -no-warn-signed-overflow, which will disable alarms related to overflows in signed arithmetic. Eva will then assume 2's-complement arithmetic, and emit a warning about that if the situation occurs, but only once for the whole analysis.
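For example, reusing the invocation from the question, the analysis could be run as follows (a sketch; only the option mentioned above is added):
frama-c-gui -eva -no-warn-signed-overflow main.c
With that option, the assertion on x = 1 << 31 is no longer emitted; Eva assumes wrap-around behaviour instead and reports it once for the whole analysis.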

What could be the reason for not being able to use the math built-in functions in OpenCL? Should I use some directive to activate them?

The build returns error -11. Removing the pow call compiles fine. I'm not using the embedded profile.
__kernel void VectorAdd(__global int* a)
{
    unsigned int n = get_global_id(0);
    a[n] = pow(2, 2);
}
I'm catching the error, but the string is empty:
int err = clBuildProgram(OpenCLProgram, 0, NULL, NULL, NULL, NULL);
if (err != CL_SUCCESS)
{
    size_t len;
    char buffer[2048];
    printf("Error: Failed to build program executable!\n");
    clGetProgramBuildInfo(OpenCLProgram, cdDevice, CL_PROGRAM_BUILD_LOG, sizeof(buffer), buffer, &len);
    printf("%s\n", buffer);
    exit(1);
}
Some useful info:
CL_DEVICE_NAME: AMD Radeon HD - FirePro D300 Compute Engine
CL_DRIVER_VERSION: 1.2 (Jan 10 2017 22:25:08)
If you look at the OpenCL documentation for pow you will notice that it is defined as gentype pow(gentype x, gentype y). The document also states that
The generic type name gentype is used to indicate that the function can take float, float2, float3, float4, float8, float16, double, double2, double3, double4, double8, or double16 as the type for the arguments.
So pow() takes two float or two double values or vectors thereof and returns a value of the same type. Since the compiler cannot determine whether you wanted to call pow(2.0, 2.0) (double precision) or pow(2.0f, 2.0f) (single precision), you get an error instead.
Note that there is also the similar-named function float pown(float x, int y) which takes an integer value for the exponent (e.g. pown(2.0f, 2)) and may provide an optimized implementation of this case.
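For example, the kernel from the question could be made to build by calling pown with a float base and an int exponent, converting the result back for the int buffer (a sketch, not tested against your driver):
__kernel void VectorAdd(__global int* a)
{
    unsigned int n = get_global_id(0);
    a[n] = (int)pown(2.0f, 2);   /* float base, int exponent */
}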
What does clGetProgramBuildInfo() with param_name=CL_PROGRAM_BUILD_LOG say? This should give you a much more detailed error message. Update the question with this and I might be able to expand this answer.
What version of OpenCL is this? Note that prior to 1.2, the pow() function was only defined for floating-point types; you're expecting it to work with integers.

Get ints (of various sizes) from boolean array

OK, say I have a boolean array called bits, and an int called cursor
I know I can access individual bits by using bits[cursor], and that I can use bit logic to get larger datatypes from bits, for example:
short result = (bits[cursor] << 3) |
(bits[cursor+1] << 2) |
(bits[cursor+2] << 1) |
bits[cursor+3];
This is going to result in lines and lines of code when reading larger types like int32 and int64 though.
Is it possible to do a cast of some kind and achieve the same result? I'm not concerned about safety at all in this context (these functions will be wrapped into a class that handles that)
Say I wanted to get a uint64_t out of bits, starting at an arbitrary position specified by cursor, where cursor isn't necessarily a multiple of 64; is this possible with a cast? I thought this
uint64_t result = (uint64_t *)(bits + cursor)[0];
would work, but it doesn't want to compile.
Sorry I know this is a dumb question, I'm quite inexperienced with pointer math. I'm not looking just for a short solution, I'm also looking for a breakdown of the syntax if anyone would be kind enough.
Thanks!
You could try something like this and cast the result to your target data size.
uint64_t bitsToUint64(bool *bits, unsigned int bitCount)
{
    uint64_t result = 0;
    uint64_t tempBits = 0;
    if (bitCount > 0 && bitCount <= 64)
    {
        /* walk the array MSB-first: bits[0] ends up in the highest bit */
        for (unsigned int i = 0, j = bitCount - 1; i < bitCount; i++, j--)
        {
            tempBits = (bits[i]) ? 1 : 0;
            result |= (tempBits << j);
        }
    }
    return result;
}
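Usage could then look like this (a sketch; the bit pattern is just illustrative, and <stdint.h>/<stdbool.h> are assumed to be included):
bool bits[16] = {1,0,1,1, 0,0,0,0, 1,1,1,1, 0,1,0,1};
uint16_t value = (uint16_t)bitsToUint64(bits, 16);   /* 0xB0F5 */
For an arbitrary start position, pass bits + cursor as the first argument.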

how to convert double between host and network byte order?

Could somebody tell me how to convert double precision into network byte ordering.
I tried
uint32_t htonl(uint32_t hostlong);
uint16_t htons(uint16_t hostshort);
uint32_t ntohl(uint32_t netlong);
uint16_t ntohs(uint16_t netshort);
functions and they worked well, but none of them handles double (float) conversion, because these types differ between architectures. Through XDR I found the double-precision format representation (http://en.wikipedia.org/wiki/Double_precision), but nothing about byte ordering there.
So, I would much appreciate if somebody helps me out on this (C code would be great!).
NOTE: OS is Linux kernel (2.6.29), ARMv7 CPU architecture.
You could look at IEEE 754 and its interchange formats for floating-point numbers.
But the key is to define a network order yourself, e.g. byte 1 holds the sign and exponent, bytes 2 to n the mantissa in MSB order.
Then you can declare your functions
uint64_t htond(double hostdouble);
double ntohd(uint64_t netdouble);
The implementation only depends on your compiler/platform.
The best approach is to use some natural definition,
so that on the ARM platform you can get away with simple transformations.
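A minimal sketch of that pair of functions, assuming both ends use IEEE 754 doubles and only the byte order differs (it relies on GCC's __builtin_bswap64 and byte-order macros, which fit the stated Linux/ARM target, and uses memcpy to avoid pointer-aliasing problems):

#include <stdint.h>
#include <string.h>

uint64_t htond(double hostdouble)
{
    uint64_t bits;
    memcpy(&bits, &hostdouble, sizeof bits);   /* reinterpret the 8 bytes */
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    bits = __builtin_bswap64(bits);            /* swap to big-endian network order */
#endif
    return bits;
}

double ntohd(uint64_t netdouble)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    netdouble = __builtin_bswap64(netdouble);  /* swap back to host order */
#endif
    double hostdouble;
    memcpy(&hostdouble, &netdouble, sizeof hostdouble);
    return hostdouble;
}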
EDIT:
From the comment
static void htond (double &x)
{
    int *Double_Overlay;
    int Holding_Buffer;
    Double_Overlay = (int *) &x;
    Holding_Buffer = Double_Overlay[0];
    Double_Overlay[0] = htonl(Double_Overlay[1]);
    Double_Overlay[1] = htonl(Holding_Buffer);
}
This could work, but obviously only if both platforms use the same encoding for double and if int has the same size as long.
Btw. the way the value is returned (through the reference parameter) is a bit odd.
But you could write a more stable version, like this (pseudo code)
void htond (const double hostDouble, uint8_t result[8])
{
    result[0] = signOf(hostDouble);
    result[1] = exponentOf(hostDouble);
    result[2..7] = mantissaOf(hostDouble);
}
This might be hacky (the char* hack), but it works for me:
double Buffer::get8AsDouble(){
    double little_endian = *(double*)this->cursor;
    double big_endian;
    int x = 0;
    char *little_pointer = (char*)&little_endian;
    char *big_pointer = (char*)&big_endian;
    while( x < 8 ){
        big_pointer[x] = little_pointer[7 - x];
        ++x;
    }
    return big_endian;
}
For brevity, I've not included the range guards. Though, you should include range guards when working at this level.

Cuda: using global memory to store contiguous data of various sizes

I have a problem using a buffer of bytes in global memory to store integers of various sizes (8 bits, 16 bits, 32 bits, 64 bits).
If I store an integer at a pointer value that is not a multiple of 4 bytes (for instance because I just stored an 8-bit integer), the address is rounded down, erasing the previous data.
__global__ void kernel(char* pointer)
{
    *(int*)(pointer+3) = 3300000;
}
In this example code, using any of (pointer), (pointer+1), (pointer+2), or (pointer+3), the integer is stored at (pointer), assuming pointer is a multiple of 4.
Is cuda memory organised in 32 bit blocks at the hardware level ?
Is there any way to make this work ?
The word size alignment is non-negotiable in CUDA. However, if you're willing to take the performance hit for some reason, you could pack your data into char * and then just write your own custom storage function, e.g.
__inline__ __device__ void Assign(int val, char *arr, int len)
{
    /* store val into arr one byte at a time, least-significant byte first */
    for (int i = 0; i < len; i++)
        arr[i] = (char)((val >> (i * 8)) & 0xFF);
}
__inline__ __device__ int Get(char *arr, int idx, int len)
{
    /* rebuild a value from len bytes starting at byte offset idx */
    int val = 0;
    for (int i = 0; i < len; i++)
        val |= (int)((unsigned char)arr[idx + i]) << (i * 8);
    return val;
}
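The kernel from the question could then use these helpers, e.g. (a sketch; the offset and length are just those from the example):
__global__ void kernel(char* pointer)
{
    Assign(3300000, pointer + 3, 4);     /* write 4 bytes at byte offset 3 */
    int value = Get(pointer, 3, 4);      /* read them back */
    Assign(value + 1, pointer + 3, 4);   /* ...and use the result */
}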
Hope that helps!
