I am asking because there is a strange bug in my code and I suspect it could be an aliasing problem:
__shared__ float x[32];
__shared__ unsigned int xsum[32];
int idx = threadIdx.x;
unsigned char *xchar = (unsigned char *)x;
//...do something
if (threadIdx.x < 32)
{
    xchar[4*idx]   &= somestring[0];
    xchar[4*idx+1] &= somestring[1];
    xchar[4*idx+2] &= somestring[2];
    xchar[4*idx+3] &= somestring[3];
    xsum[idx] += *((unsigned int *)(x + idx)); // <- Looks like the compiler sometimes fails to recognize this as aliasing xchar
}
The compiler only has to honour aliasing between compatible types. float and unsigned int are not compatible, so when you read the float through an unsigned int *, the compiler is free to assume the pointers never alias.
If you want to do bitwise operations on a float, first convert it to an integer (via __float_as_int()), operate on that, and finally convert back to float (using __int_as_float()).
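For illustration, here is a hedged sketch of that approach applied to the snippet in the question (it reuses x, xsum, idx and somestring from there, and assumes somestring holds byte values and the usual little-endian device byte order, so somestring[0] masks the lowest-addressed byte):
// Build an integer view of x[idx], mask it, and write it back - no char aliasing involved.
if (threadIdx.x < 32)
{
    unsigned int bits = (unsigned int)__float_as_int(x[idx]);
    unsigned int mask =  (unsigned int)somestring[0]
                      | ((unsigned int)somestring[1] << 8)
                      | ((unsigned int)somestring[2] << 16)
                      | ((unsigned int)somestring[3] << 24);
    bits &= mask;
    x[idx] = __int_as_float((int)bits);   // masked value stored back as float
    xsum[idx] += bits;                    // accumulate the integer view directly
}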
I think you have a race condition here. But I don't know what somestring is. If it is the same for all threads, you can do something like this:
__shared__ float x[32];
unsigned char *xchar = (unsigned char *)x;
//...do something
if (threadIdx.x < 4) {
    xchar[threadIdx.x] &= somestring[threadIdx.x];
}
__syncthreads();
unsigned int xsum = *((unsigned int *)x);
This means that every thread shares the same array, and therefore xsum is the same for all threads. If you want each thread to have its own array, you have to allocate an array of 32*number_of_threads_in_block and use an offset.
PS: the code above works only in a 1D block. In a 2D or 3D block you have to compute your own thread ID and make sure that only 4 threads execute the code.
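A hedged sketch of both points (the flat thread ID and the per-thread private slice); THREADS_PER_BLOCK is an assumed compile-time constant, not something from the original post:
#define THREADS_PER_BLOCK 128   // assumption: the block contains this many threads in total

__global__ void example_kernel()
{
    // Flat thread ID, so the logic also works for 2D/3D blocks.
    int tid = threadIdx.x
            + threadIdx.y * blockDim.x
            + threadIdx.z * blockDim.x * blockDim.y;

    // One private 32-float slice per thread in the block.
    __shared__ float x_all[32 * THREADS_PER_BLOCK];
    float *x = x_all + 32 * tid;

    // ... each thread can now use x[0..31] as its own array ...
}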
I have the following example code:
int compute_stuff(int *array)
{
    /* do stuff with array */
    ...
    return x;
}

__kernel void my_kernel()
{
    __local int local_mem_block[LENGTH*MY_LOCAL_WORK_SIZE];
    int result;
    /* do stuff with local memory block */
    result = compute_stuff(local_mem_block + (LENGTH*get_local_id(0)));
    ...
}
The above example compiles and executes fine on my NVIDIA card (RTX 2080).
But when I try to compile on a Macbook with AMD card, I get the following error:
error: passing '__local int *' to parameter of type '__private int *' changes address space of pointer
OK, so then I change the "compute_stuff" function to the following:
int compute_stuff(__local int *array)
Now both NVIDIA and AMD compile it fine, no problem...
But then I have one more test, to compile it on the same Macbook using WINE (rather than boot to Windows in bootcamp), and it gives the following error:
error: parameter may not be qualified with an address space
So it seems as though one is not supposed to qualify a function parameter with an address space. Fair enough. But if I do not do that, then the AMD compiler on native Windows thinks that I am trying to change the address space of the pointer to private (I guess because it assumes that all function arguments will be private?).
What is a good way to handle this so that all three environments are happy to compile it? As a last resort, I am thinking of simply having the program check to see if the build failed without qualifier, and if so, substitute in the "__local" qualifier and build a second time... Seems like a hack, but it could work.
I agree with ProjectPhysX that it appears to be a bug with the WINE implementation. I also found the following appears to satisfy all three environments:
int compute_stuff(__local int * __private array)
{
    ...
}

__kernel void my_kernel()
{
    __local int local_mem_block[LENGTH*MY_LOCAL_WORK_SIZE];
    __local int * __private samples;
    samples = local_mem_block + (LENGTH*get_local_id(0));
    int result = compute_stuff(samples);
}
The above explicitly states that the pointer itself is private while the memory it points to is in the local address space, which removes any ambiguity.
The int* in int compute_stuff(int *array) is in the __generic address space. The call result = compute_stuff(local_mem_block+...); implicitly converts it to __local, which is allowed according to the OpenCL 2.0 Khronos specification.
It could be that AMD defaults to OpenCL 1.2. Maybe explicitly set -cl-std=CL2.0 in clBuildProgram() or clCompileProgram().
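For example, a minimal sketch of the host-side call (the program and device objects are assumed to come from your existing setup code):
#include <CL/cl.h>

// Build an already-created program while explicitly requesting the OpenCL 2.0 language.
cl_int build_cl20(cl_program program, cl_device_id device)
{
    return clBuildProgram(program, 1, &device, "-cl-std=CL2.0", NULL, NULL);
}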
To keep the code compatible with OpenCL 1.2, you can explicitly set the pointer in the function to __local: int compute_stuff(__local int *array). OpenCL allows function parameters to be qualified with the __global and __local address spaces. WINE seems to have a bug here. Maybe inlining the function can solve it: int __attribute__((always_inline)) compute_stuff(__local int *array).
As a last resort, you can use your proposed method. You can also detect at runtime whether the program is running under WINE; with that, you could switch between the two code variants without compiling twice and checking for the error.
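One common way to do the WINE check is to look for Wine's extra export in ntdll; here is a sketch, assuming a Windows build where windows.h is available (a native ntdll does not export wine_get_version):
#include <windows.h>

// Returns non-zero when running under WINE.
static int running_under_wine(void)
{
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    return ntdll != NULL && GetProcAddress(ntdll, "wine_get_version") != NULL;
}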
I have the following struct:
typedef union
{
    struct
    {
        unsigned char  ID;
        unsigned short Vdd;
        unsigned char  B1State;
        unsigned short B1FloatV;
        unsigned short B1ChargeV;
        unsigned short B1Current;
        unsigned short B1TempC;
        unsigned short B1StateTimer;
        unsigned short B1DutyMod;
        unsigned char  B2State;
        unsigned short B2FloatV;
        unsigned short B2ChargeV;
        unsigned short B2Current;
        unsigned short B2TempC;
        unsigned short B2StateTimer;
        unsigned short B2DutyMod;
    } bat_values;
    unsigned char buf[64];
} BATTERY_CHARGE_STATUS;
and I am stuffing it from an array as follows:
for(unsigned char ii = 0; ii < 64; ii++) usb_debug_data.buf[ii]=inBuffer[ii];
I can see that the array has the following (arbitrary) values:
inBuffer[0] = 80;
inBuffer[1] = 128;
inBuffer[2] = 12;
inBuffer[3] = 0;
inBuffer[4] = 23;
...
Now I want to display these values by changing the text of a QLineEdit:
str = QString::number((int)usb_debug_data.bat_values.ID);
ui->batID->setText(str);
str = QString::number((int)usb_debug_data.bat_values.Vdd);
ui->Vdd->setText(str);
str = QString::number((int)usb_debug_data.bat_values.B1State);
ui->B1State->setText(str);
...
However, the QLineEdit text values are not turning up as expected. I see the following:
usb_debug_data.bat_values.ID = 80 (correct)
usb_debug_data.bat_values.Vdd = 12 (incorrect)
usb_debug_data.bat_values.B1State = 23 (incorrect)
It seems like 'usb_debug_data.bat_values.Vdd', which is a short, is not taking its value from inBuffer[1] and inBuffer[2]. Likewise, 'usb_debug_data.bat_values.B1State' should get its value from inBuffer[3] but for some reason is picking up its value from inBuffer[4].
Any idea why this is happening?
C and C++ implementations are free to insert padding between elements of a structure, and beyond the last element, for whatever purpose they desire (usually efficiency, but sometimes because the underlying architecture does not allow unaligned access at all).
So you'll probably find that items of two-bytes length are aligned to two-byte boundaries, so you'll end up with something like:
unsigned char ID; // 1 byte
// 1 byte filler, aligns following short
unsigned short Vdd; // 2 bytes
unsigned char B1State; // 1 byte
// 3 bytes filler, aligns following int
unsigned int myVar; // 4 bytes
Many compilers will allow you to specify how to pack structures, such as with:
#pragma pack(1)
or, with gcc, the __attribute__((packed)) attribute.
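For example, a sketch of how the union from the question might look with GCC-style packing applied (renamed so it does not clash with the original; the remaining fields are elided):
typedef union
{
    struct __attribute__((packed))
    {
        unsigned char  ID;
        unsigned short Vdd;
        unsigned char  B1State;
        unsigned short B1FloatV;
        /* ... remaining fields exactly as in the original ... */
    } bat_values;
    unsigned char buf[64];
} BATTERY_CHARGE_STATUS_PACKED;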
If you don't want to (or can't) pack your structures, you can revert to field-by-field copying (probably best in a function):
void copyData (BATTERY_CHARGE_STATUS *bsc, unsigned char *debugData) {
    memcpy (&(bsc->bat_values.ID), debugData, sizeof (bsc->bat_values.ID));
    debugData += sizeof (bsc->bat_values.ID);
    memcpy (&(bsc->bat_values.Vdd), debugData, sizeof (bsc->bat_values.Vdd));
    debugData += sizeof (bsc->bat_values.Vdd);
    : : :
    memcpy (&(bsc->bat_values.B2DutyMod), debugData, sizeof (bsc->bat_values.B2DutyMod));
    debugData += sizeof (bsc->bat_values.B2DutyMod); // Not really needed
}
It's a pain that you have to keep the structure and function synchronised but hopefully it won't be changing that much.
Structs are not packed by default, so the compiler is free to insert padding between members. The most common reason is to ensure some machine-dependent alignment. The Wikipedia entry on data structure alignment is a pretty good place to start. You essentially have two choices:
insert compiler-specific pragmas to force packing (e.g., #pragma pack(1) or __attribute__((packed)))
write explicit serialization and deserialization functions to transform your structures into and from byte arrays
I usually prefer the latter since it doesn't make my code ugly with little compiler specific adornments everywhere.
The next thing that you are likely to discover is that the byte order for multi-byte integers is also platform-specific. Look up endianness for more details.
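To make the second option concrete, here is a hedged sketch of a deserializer for the union in the question; it assumes the device sends each multi-byte field least-significant byte first (swap the two bytes if yours is big-endian):
// Explicit deserialization of the first few fields; extend in the same pattern.
void unpack_status(BATTERY_CHARGE_STATUS *s, const unsigned char *in)
{
    s->bat_values.ID       = in[0];
    s->bat_values.Vdd      = (unsigned short)(in[1] | (in[2] << 8));
    s->bat_values.B1State  = in[3];
    s->bat_values.B1FloatV = (unsigned short)(in[4] | (in[5] << 8));
    /* ... and so on for the remaining fields ... */
}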
Which of the following two approaches is more efficient on an ATmega328P?
unsigned int value;
unsigned char char_high, char_low;
char_high = value>>8;
value = value<<8;
char_low = value>>8;
OR
unsigned int value;
unsigned char char_high, char_low;
char_high = value>>8;
char_low = value & 0xff;
You really should measure. I won't answer your question (since you'd benefit more from measuring than I would), but I'll give you a third option:
struct {
    union {
        uint16_t big;
        uint8_t small[2];
    };
} nums;
(be aware of the difference between big endian and little endian here)
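For illustration, a small sketch of how that union would be used to split a value (the AVR is little-endian, so small[0] is the low byte and small[1] the high byte; the function name is just for the example):
#include <stdint.h>

void split(uint16_t value, uint8_t *high, uint8_t *low)
{
    union { uint16_t big; uint8_t small[2]; } nums;
    nums.big = value;
    *low  = nums.small[0];   // least significant byte
    *high = nums.small[1];   // most significant byte
}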
One option would be to measure it (as has already been said).
Or, compile both and see what the assembly language output looks like.
But actually, the second snippet you have won't work: if you take value << 8 and assign it to a char, all you get is zero in the char, and a subsequent >> 8 will still leave you with zero.
I have C sources that must compile as both 32-bit and 64-bit for multiple platforms.
A structure takes the address of a buffer - I need to fit the address into a 32-bit value.
Obviously, where possible these structures will use natural-sized void * or char * pointers.
However, for some parts an API specifies the size of these pointers as 32-bit.
On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2 GB range. Data on the stack, however, still starts in high memory.
So given a small utility _to_32() such as:
int _to_32( long l ) {
    int i = l & 0xffffffff;
    assert( i == l );
    return i;
}
then:
char *cp = malloc( 100 );
int a = _to_32( (long)cp );
will work reliably, as would:
static char buff[ 100 ];
int a = _to_32( (long)buff );
but:
char buff[ 100 ];
int a = _to_32( (long)buff );
will fail the assert().
Does anyone have a solution for this without writing custom linker scripts?
Or any ideas on how to arrange the linker section for stack data? It would appear it is being put in this section in the linker script:
.lbss :
{
    *(.dynlbss)
    *(.lbss .lbss.* .gnu.linkonce.lb.*)
    *(LARGE_COMMON)
}
thanks!
The stack location is most likely specified by the operating system and has nothing to do with the linker.
I can't imagine why you are trying to force a pointer on a 64 bit machine into 32 bits. The memory layout of structures is mainly important when you are sharing the data with something which may run on another architecture and saving to a file or sending across a network, but there are almost no valid reasons that you would send a pointer from one computer to another. Debugging is the only valid reason that comes to mind.
Even storing a pointer to be used later by another run of your program on the same machine would almost certainly be wrong, since where your program is loaded can differ. Making any use of such a pointer would be undefined and unpredictable.
The short answer appears to be that there is no easy answer; at least, there is no easy way to reassign the range/location of the stack pointer.
The loader 'ld-linux.so', at a very early stage in process activation, gets the address in the hurd loader - in the glibc sources, under elf/ and sysdeps/x86_64/, search out elf_machine_load_address() and elf_machine_runtime_setup().
This happens in the preamble of calling your _start() entry and the related setup to call your main(). It is not for the faint-hearted; even I couldn't convince myself this was a safe route.
As it happens, the resolution presents itself in some other old-school tricks: pointer deflation/inflation.
With -mcmodel=small, automatic variables, alloca() addresses, and things like argv[] and envp are assigned from high memory, from where the stack grows down. Those addresses are verified in this example code:
#include <stdlib.h>
#include <stdio.h>
#include <alloca.h>

extern char etext, edata, end;

char global_buffer[128];

int main( int argc, const char *argv[], const char *envp[] )
{
    char stack_buffer[128];
    static char static_buffer[128];
    char *cp = malloc( 128 );
    char *ap = alloca( 128 );
    char *xp = "STRING CONSTANT";

    printf("argv[0] %p\n",argv[0]);
    printf("envp %p\n",envp);
    printf("stack %p\n",stack_buffer);
    printf("global %p\n",global_buffer);
    printf("static %p\n",static_buffer);
    printf("malloc %p\n",cp);
    printf("alloca %p\n",ap);
    printf("const %p\n",xp);
    printf("printf %p\n",printf);
    printf("First address past:\n");
    printf(" program text (etext) %p\n", &etext);
    printf(" initialized data (edata) %p\n", &edata);
    printf(" uninitialized data (end) %p\n", &end);
}
produces this output:
argv[0] 0x7fff1e5e7d99
envp 0x7fff1e5e6c18
stack 0x7fff1e5e6a80
global 0x6010e0
static 0x601060
malloc 0x602010
alloca 0x7fff1e5e69d0
const 0x400850
printf 0x4004b0
First address past:
program text (etext) 0x400846
initialized data (edata) 0x601030
uninitialized data (end) 0x601160
All access to/from the 32-bit parts of structures must be wrapped with inflate() and deflate() routines, e.g.:
void *inflate( unsigned long );
unsigned int deflate( void *);
deflate() tests for bits set in the range 0x7fff00000000 and marks the pointer so that inflate() will recognize how to reconstitute the actual pointer.
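A minimal sketch of what such a pair could look like; the constants and the single-flag-bit encoding are assumptions for illustration, not the poster's actual implementation:
#include <assert.h>
#include <stdint.h>

#define HIGH_REGION 0x7fff00000000UL   /* where the stack-side addresses live     */
#define HIGH_FLAG   0x80000000U        /* marker bit in the deflated 32-bit form  */

unsigned int deflate( void *p )
{
    uintptr_t v = (uintptr_t)p;
    if (v & HIGH_REGION) {                              /* stack / high address */
        assert((v & ~(uintptr_t)0x7fffffff) == HIGH_REGION);
        return (unsigned int)(v & 0x7fffffff) | HIGH_FLAG;
    }
    assert(v <= 0x7fffffff);                            /* static or malloc'd   */
    return (unsigned int)v;
}

void *inflate( unsigned long d )
{
    if (d & HIGH_FLAG)
        return (void *)(HIGH_REGION | (d & 0x7fffffff));
    return (void *)(uintptr_t)d;
}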
Hope that helps anyone who similarly must support structures with 32-bit storage for 64-bit pointers.
I need to send floating point numbers using a UDP connection to a Qt application. Now in Qt the only function available is
qint64 readDatagram ( char * data, qint64 maxSize, QHostAddress * address = 0, quint16 * port = 0 )
which accepts data in the form of a signed character buffer. I can convert my float into a string and send it, but it will obviously not be very efficient, converting a 4-byte float into a much larger character buffer.
I got hold of these 2 functions to convert a 4-byte float into an unsigned 32-bit integer for transfer over the network, which works fine for a simple C++ UDP program, but for Qt I need to receive the data as unsigned char.
Is it possible to avoid converting the floating point data into a string and then sending it?
uint32_t htonf(float f)
{
    uint32_t p;
    uint32_t sign;
    if (f < 0) { sign = 1; f = -f; }
    else { sign = 0; }
    p = ((((uint32_t)f)&0x7fff)<<16) | (sign<<31); // Whole part and sign.
    p |= (uint32_t)(((f - (int)f) * 65536.0f))&0xffff; // Fraction.
    return p;
}

float ntohf(uint32_t p)
{
    float f = ((p>>16)&0x7fff); // Whole part.
    f += (p&0xffff) / 65536.0f; // Fraction.
    if (((p>>31)&0x1) == 0x1) { f = -f; } // Sign bit set.
    return f;
}
Have you tried using readDatagram? Or converting the data to a QByteArray after reading? In many cases a char* is really just a byte array. This is one of those cases. Note that the writeDatagram can take a QByteArray.
Generally, everything sent across sockets is bytes, not strings; layers on either end do the conversions. Take a look at the Qt network examples, especially the broadcaster examples. They show how to create a QByteArray for broadcast and receive.
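Along the lines of those examples, a minimal sketch of reading a pending datagram into a QByteArray (the socket is assumed to already be bound to the right port):
#include <QUdpSocket>
#include <QByteArray>

void readPending(QUdpSocket &socket)
{
    while (socket.hasPendingDatagrams()) {
        QByteArray datagram;
        datagram.resize(int(socket.pendingDatagramSize()));
        socket.readDatagram(datagram.data(), datagram.size());
        // datagram now holds the raw bytes of the payload, e.g. 4 bytes per float
    }
}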
Not sure why the downvote, since the question is vague in requirements.
A 4-byte float is simply a 4-character buffer, if cast as one. If the systems are homogeneous, the float can be sent as a signed char *, and bit for bit it will be the same when read into a signed char * on the receiver directly, with no conversion needed. If the systems are heterogeneous, then this won't work and you need to convert it to a portable format anyway. IEEE format is often used, but my question is still: what are the requirements? Is the float format the same between systems?
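For the homogeneous case, a sketch of sending the raw bytes directly (the function and variable names are just for the example):
#include <QUdpSocket>
#include <QHostAddress>

void sendRawFloat(QUdpSocket &socket, const QHostAddress &addr, quint16 port, float value)
{
    // Reinterpret the float's 4 bytes as a char buffer and send them unchanged.
    socket.writeDatagram(reinterpret_cast<const char *>(&value), sizeof value, addr, port);
}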
If I read it correctly, your primary question seems to be how to receive data of type unsigned char with Qt's readDatagram function, which uses a pointer to a buffer of type char.
The short answer is to use a cast along these lines:
const size_t MAXSIZE = 1024;
unsigned char* data = (unsigned char *)malloc(MAXSIZE);
readDatagram((char *)data, MAXSIZE, address, port);
I'm going to assume you have multiple machines which use the same IEEE floating point format but some of which are big endian and some of which are little endian. See this SO post for a good discussion of this issue.
In that case you could do something a bit simpler like this:
const size_t FCOUNT = 256;
float* data = (float *)malloc(FCOUNT * sizeof(*data));
readDatagram((char *)data, FCOUNT * sizeof(*data), address, port);
for (size_t i = 0; i != FCOUNT; ++i)
    data[i] = ntohf(*((uint32_t*)&data[i]));
The thing to remember is that as far as networking functions like readDatagram are concerned, the data is just a bunch of bits and it doesn't care what type those bits are interpreted as.
If both ends of your UDP connection use Qt, I would suggest looking at QDataStream. You can create this from a QByteArray each time you read a datagram, and then read whatever values you require - floats, maps, lists, QVariants, and of course strings.
Similarly, on the sending side, you'd create a data stream, push data into it, then send the resulting QByteArray over writeDatagram.
Obviously this only works if both ends use Qt - the data encoding is well-defined, but non-trivial to generate by hand.
(If you want stream-oriented behaviour, you could use the fact that QUdpSocket is a QIODevice with a data stream, but it sounds as if you want per-datagram behaviour.)
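A minimal sketch of both directions with QDataStream (the names are illustrative and the socket is assumed to be set up elsewhere):
#include <QUdpSocket>
#include <QHostAddress>
#include <QByteArray>
#include <QDataStream>
#include <QIODevice>

// Sending side: serialise the float into a QByteArray, then send it as one datagram.
void sendValue(QUdpSocket &socket, const QHostAddress &addr, quint16 port, float value)
{
    QByteArray buffer;
    QDataStream out(&buffer, QIODevice::WriteOnly);
    out << value;
    socket.writeDatagram(buffer, addr, port);
}

// Receiving side: read one datagram and decode the float back out.
float receiveValue(QUdpSocket &socket)
{
    QByteArray buffer;
    buffer.resize(int(socket.pendingDatagramSize()));
    socket.readDatagram(buffer.data(), buffer.size());
    QDataStream in(buffer);
    float value = 0.0f;
    in >> value;
    return value;
}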