(Implementation?) error with value noise causing strange artefacts across image

I am doing experiments with terrain generation using heightmaps, and I just wrote a simple value noise. I cannot see anything wrong in my code, other than my own stupidity or a floating-point error occurring:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
long jenkins( long a ){
a = (a+0x7ed55d16) + (a<<12);
a = (a^0xc761c23c) ^ (a>>19);
a = (a+0x165667b1) + (a<<5);
a = (a+0xd3a2646c) ^ (a<<9);
a = (a+0xfd7046c5) + (a<<3);
a = (a^0xb55a4f09) ^ (a>>16);
return a;
}
float random(long x, long y, long seed1, long seed2, long seed3){
long state = jenkins( x ) - jenkins(y);
state = jenkins(state - seed1);
state -= jenkins(state ^ seed2);
state -= jenkins(state - seed2);
state ^= (x*y) - jenkins( (seed1-seed2) ^ seed3 );
float ret = ((float)(state&0xFFFF))/65535.0f;
return ret;
}
float valueNoise( float x, float y, long seed1, long seed2, long seed3 ){
float fx = x-floor(x);
float fy = y-floor(y);
long flx = (long) x-fx;
long fly = (long) y-fy;
fx = fx*fx*(3 - 2*fx);
fy = fy*fy*(3 - 2*fy);
float r00 = random(flx,fly,seed1,seed2,seed3);
float r10 = random(flx+1,fly,seed1,seed2,seed3);
float r01 = random(flx,fly+1,seed1,seed2,seed3);
float r11 = random(flx+1,fly+1,seed1,seed2,seed3);
float i0 = (1-fx)*r00 + fx*r10;
float i1 = (1-fx)*r01 + fx*r11;
return ( (1-fy)*i0 + fy*i1 );
}
void writeOutPGM(char *filename, unsigned char *data){
FILE *file = fopen(filename,"w");
char header[] = "P5 512 512 255\n";
fwrite(header,sizeof(header),1,file);
fwrite(data,262144,1,file);
}
extern float valueNoise( float x, float y, long seed1, long seed2, long seed3 );
int main(){
unsigned char *data = malloc(512*512);
float fx,fy;
int x,y;
long s1 = 123,s2 = 456,s3 = 789;
for( y = 0; y < 512; ++y ){
for( x = 0; x < 512; ++x ){
fx = floor( ((float)x)/10.0f );
fy = floor( ((float)y)/10.0f );
data[x + y*512] = (unsigned char) 255.0f*valueNoise(fx,fy,s1,s2,s3);
}
}
writeOutPGM("noise.pgm",data);
}
I put the floor() in to show the grid lines more clearly; otherwise they are hard to see (though the noise is still REALLY messed up).
It outputs good noise when I don't divide x and y. I guess it's either the division or my interpolation, but I am really clueless as to what is happening.
Thanks for any help, Erkling

The error was not with the noise, but rather with the PGM writing code. The file was opened in text mode on Windows, so all the UNIX newlines (0x0A bytes) in the output were converted to Windows ones (CR/LF), causing the weird glitches.
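A corrected writer just needs the file opened in binary mode; a minimal sketch (which also skips the header's terminating '\0' and closes the file):
void writeOutPGM(const char *filename, const unsigned char *data){
FILE *file = fopen(filename,"wb"); /* "wb": no newline translation on Windows */
if(!file) return;
const char header[] = "P5 512 512 255\n";
fwrite(header,sizeof(header)-1,1,file); /* -1 so the '\0' is not written into the file */
fwrite(data,512*512,1,file);
fclose(file);
}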

How to print a number with HT1632 when the library only accepts text

I just bought an 8x32 lattice board (LED matrix) and I control it with an Arduino. The problem is that I can only display text with the library I got on GitHub, not numbers. How can I do it?
I'm putting the code below: the scrolling-text sketch and the part of the library that defines the function used to draw the text.
The Arduino sketch that programs the scrolling text is here:
#include <HT1632.h>
#include <font_5x4.h>
#include <images.h>
int i = 0;
int wd;
char disp[] = "Hello, how are you?";
int x = 10;
void setup() {
HT1632.begin(A5, A4, A3);
wd = HT1632.getTextWidth(disp, FONT_5X4_END, FONT_5X4_HEIGHT);
}
void loop() {
HT1632.renderTarget(1);
HT1632.clear();
HT1632.drawText(disp, OUT_SIZE - i, 2, FONT_5X4, FONT_5X4_END,
FONT_5X4_HEIGHT);
HT1632.render();
i = (i + 1) % (wd + OUT_SIZE);
delay(100);
}
The library code that implements the text drawing is this:
void HT1632Class::drawText(const char text[], int x, int y, const byte font[],
int font_end[], uint8_t font_height,
uint8_t gutter_space) {
int curr_x = x;
char i = 0;
char currchar;
// Check if string is within y-bounds
if (y + font_height < 0 || y >= COM_SIZE)
return;
while (true) {
if (text[i] == '\0')
return;
currchar = text[i] - 32;
if (currchar >= 65 &&
currchar <=
90) // If character is lower-case, automatically make it upper-case
currchar -= 32; // Make this character uppercase.
if (currchar < 0 || currchar >= 64) { // If out of bounds, skip
++i;
continue; // Skip this character.
}
// Check to see if character is not too far right.
if (curr_x >= OUT_SIZE)
break; // Stop rendering - all other characters are no longer within the
// screen
// Check to see if character is not too far left.
int chr_width = getCharWidth(font_end, font_height, currchar);
if (curr_x + chr_width + gutter_space >= 0) {
drawImage(font, chr_width, font_height, curr_x, y,
getCharOffset(font_end, currchar));
// Draw the gutter space
for (char j = 0; j < gutter_space; ++j)
drawImage(font, 1, font_height, curr_x + chr_width + j, y, 0);
}
curr_x += chr_width + gutter_space;
++i;
}
}
You need to look at snprintf. This allows you to format a string of characters just like printf, so it lets you convert something like an int into part of a string.
An example:
int hour = 10;
int minutes = 50;
char buffer[60];
int status = snprintf(buffer, 60, "the current time is: %i:%i\n", hour, minutes);
buffer now contains "the current time is: 10:50", followed by the newline and the terminating \0 (the remaining bytes of the array are unused).
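Putting it together with the library calls shown in the question (only a sketch; the value and coordinates are made-up examples, and the drawText arguments mirror the loop above):
int temperature = 23; // hypothetical number to display
char disp[16];
snprintf(disp, sizeof(disp), "T: %d", temperature); // turn the int into text
HT1632.renderTarget(1);
HT1632.clear();
HT1632.drawText(disp, 2, 2, FONT_5X4, FONT_5X4_END, FONT_5X4_HEIGHT);
HT1632.render();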

Different results on GPU & CPU when more than 8 work items per group

I'm new to OpenCL. As my first project I tried to write code that checks intersections between many polylines and a single polygon.
I'm running the code on both CPU and GPU and get different results.
First I passed NULL as the local work size when calling clEnqueueNDRangeKernel:
clEnqueueNDRangeKernel(command_queue, kIntersect, 1, NULL, &global, NULL, 2, &evtCalcBounds, &evtKernel);
After trying many things I saw that if I pass 1 as the local size, it works correctly and returns the same results for the CPU and GPU:
size_t local = 1;
clEnqueueNDRangeKernel(command_queue, kIntersect, 1, NULL, &global, &local, 2, &evtCalcBounds, &evtKernel);
Playing a bit more, I found that the CPU returns wrong results when I run the kernel with a local size of 8 or more (for some reason).
I'm not using any local memory, just globals and privates.
I didn't add the code at first because I thought it was irrelevant to the problem (note that with a single work item per group it works fine), and it is long. If needed, I will try to simplify it.
The code flow is going like this:
I have the polyline coordinates stored in one big buffer and the single polygon in another. In addition I provide another buffer with a single int that holds the current result count. All buffers are __global arguments.
In the kernel I simply check for intersection between all the lines of polylines[get_global_id(0)] and the lines of the polygon. If there is one,
I use atomic_inc on the result count. There is no reading and writing from the same buffer, no barriers or mem fences; the atomic_inc is the only thread-safe mechanism I'm using.
-- UPDATE --
Added my code:
I know that I could probably make better use of OpenCL functions for some of the vector calculations, but for now I'm simply converting code from my old single-threaded CPU program to CL, so that is not my concern right now.
bool isPointInPolygon(float x, float y, __global float* polygon) {
bool blnInside = false;
uint length = convert_uint(polygon[4]);
int s = 5;
uint j = length - 1;
for (uint i = 0; i < length; j = i++) {
uint realIdx = s + i * 2;
uint realInvIdx = s + j * 2;
if (((polygon[realIdx + 1] > y) != (polygon[realInvIdx + 1] > y)) &&
(x < (polygon[realInvIdx] - polygon[realIdx]) * (y - polygon[realIdx + 1]) / (polygon[realInvIdx + 1] - polygon[realIdx + 1]) + polygon[realIdx]))
blnInside = !blnInside;
}
return blnInside;
}
bool isRectanglesIntersected(float p_dblMinX1, float p_dblMinY1,
float p_dblMaxX1, float p_dblMaxY1,
float p_dblMinX2, float p_dblMinY2,
float p_dblMaxX2, float p_dblMaxY2) {
bool blnResult = true;
if (p_dblMinX1 > p_dblMaxX2 ||
p_dblMaxX1 < p_dblMinX2 ||
p_dblMinY1 > p_dblMaxY2 ||
p_dblMaxY1 < p_dblMinY2) {
blnResult = false;
}
return blnResult;
}
bool isLinesIntersects(
double Ax, double Ay,
double Bx, double By,
double Cx, double Cy,
double Dx, double Dy) {
double distAB, theCos, theSin, newX, ABpos;
// Fail if either line is undefined.
if (Ax == Bx && Ay == By || Cx == Dx && Cy == Dy)
return false;
// (1) Translate the system so that point A is on the origin.
Bx -= Ax; By -= Ay;
Cx -= Ax; Cy -= Ay;
Dx -= Ax; Dy -= Ay;
// Discover the length of segment A-B.
distAB = sqrt(Bx*Bx + By*By);
// (2) Rotate the system so that point B is on the positive X axis.
theCos = Bx / distAB;
theSin = By / distAB;
newX = Cx*theCos + Cy*theSin;
Cy = Cy*theCos - Cx*theSin; Cx = newX;
newX = Dx*theCos + Dy*theSin;
Dy = Dy*theCos - Dx*theSin; Dx = newX;
// Fail if the lines are parallel.
return (Cy != Dy);
}
bool isPolygonInersectsPolyline(__global float* polygon, __global float* polylines, uint startIdx) {
uint polylineLength = convert_uint(polylines[startIdx]);
uint start = startIdx + 1;
float x1 = polylines[start];
float y1 = polylines[start + 1];
float x2;
float y2;
int polygonLength = convert_uint(polygon[4]);
int polygonLength2 = polygonLength * 2;
int startPolygonIdx = 5;
for (int currPolyineIdx = 0; currPolyineIdx < polylineLength - 1; currPolyineIdx++)
{
x2 = polylines[start + (currPolyineIdx*2) + 2];
y2 = polylines[start + (currPolyineIdx*2) + 3];
float polyX1 = polygon[0];
float polyY1 = polygon[1];
for (int currPolygonIdx = 0; currPolygonIdx < polygonLength; ++currPolygonIdx)
{
float polyX2 = polygon[startPolygonIdx + (currPolygonIdx * 2 + 2) % polygonLength2];
float polyY2 = polygon[startPolygonIdx + (currPolygonIdx * 2 + 3) % polygonLength2];
if (isLinesIntersects(x1, y1, x2, y2, polyX1, polyY1, polyX2, polyY2)) {
return true;
}
polyX1 = polyX2;
polyY1 = polyY2;
}
x1 = x2;
y1 = y2;
}
// No intersection found till now so we check containing
return isPointInPolygon(x1, y1, polygon);
}
__kernel void calcIntersections(__global float* polylines, // My flat points array - [pntCount, x,y,x,y,...., pntCount, x,y,... ]
__global float* pBounds, // The rectangle bounds of each polyline - set of 4 values [top, left, bottom, right....]
__global uint* pStarts, // The start index of each polyline in the polylines array
__global float* polygon, // The polygon i want to intersect with - first 4 items are the rectangle bounds [top, left, bottom, right, pntCount, x,y,x,y,x,y....]
__global float* output, // Result array for saving the intersections polylines indices
__global uint* resCount) // The result count
{
int i = get_global_id(0);
uint start = convert_uint(pStarts[i]);
if (isRectanglesIntersected(pBounds[i * 4], pBounds[i * 4 + 1], pBounds[i * 4 + 2], pBounds[i * 4 + 3],
polygon[0], polygon[1], polygon[2], polygon[3])) {
if (isPolygonInersectsPolyline(polygon, polylines, start)){
int oldVal = atomic_inc(resCount);
output[oldVal] = i;
}
}
}
Can anyone explain this to me?

OpenCV captures only a grayscale image with a video capture card API

I am developing a Qt application where I have to capture video images from different video capture cards (different models) for a project at work.
I've captured from a few cards successfully using OpenCV and DirectShow drivers (as a standard method): I read the images into a cv::Mat and then convert them to a QImage. Then I emit a signal with the QImage ready, and the MainWindow receives this signal and paints the captured image into a QLabel (like many examples I saw here :P).
But now I need to capture images from a card with a custom manufacturer API, without DirectShow.
In summary: with the API, you assign a window handle (HWND) associated with a component (a widget, for example) and register a callback that is invoked when the driver receives a captured image; the driver renders the images and paints them into the associated handle. The callback invoked for rendering and painting is:
int CALLBACK OnVideoRawStreamCallback( BYTE* buffer, ULONG bufLen, unsigned __int64 timeStamp, void* context, ULONG channel, ULONG boardID, ULONG productID );
Then it calls ret = AvVideoRender( handle, buffer, bufLen );, which renders the image and paints it into the handle.
Well, I'm trying to replace that AvVideoRender with an OpenCV conversion. I think converting the received BYTE* into a cv::Mat and then converting that cv::Mat into a QImage could work, right?
The problem is that I can't get a color image, only grayscale. I mean, if I do this:
int CALLBACK OnVideoRawStreamCallback( BYTE* buffer, ULONG bufLen, unsigned __int64 timeStamp, void* context, ULONG channel, ULONG boardID, ULONG productID )
{
// Original syntax
//ret = AvVideoRender( m_videoRender[0], buffer, bufLen );
// => New
// Converting in a OpenCV Matrix
cv::Mat mMatFrame(IMAGE_HEIGHT, IMAGE_WIDTH, CV_8U , buffer);
// Converting cv::Mat in a QImage
QImage qVideoCam((uchar*)mMatFrame.data, mMatFrame.cols, mMatFrame.rows, mMatFrame.step, QImage::Format_Indexed8);
// Emit a SIGNAL with the new QImage
emit imageReady(qVideoCam);
}
It works correctly and I can see the captured video... but in grayscale.
I think I have to build the cv::Mat with CV_8UC3 instead of CV_8U, but I get an unhandled exception when the application tries to convert the cv::Mat to a QImage. Here's my sample code trying to convert it into a color image:
int CALLBACK OnVideoRawStreamCallback( BYTE* buffer, ULONG bufLen, unsigned __int64 timeStamp, void* context, ULONG channel, ULONG boardID, ULONG productID )
{
// Original syntax
//ret = AvVideoRender( m_videoRender[0], buffer, bufLen );
// => New
// Converting in a OpenCV Matrix
cv::Mat mMatFrame(IMAGE_HEIGHT, IMAGE_WIDTH, CV_8UC3 , buffer);
// Converting cv::Mat in a QImage
QImage qVideoCam((uchar*)mMatFrame.data, mMatFrame.cols, mMatFrame.rows, mMatFrame.step, QImage::Format_RGB888);
// Emit a SIGNAL with the new QImage
emit imageReady(qVideoCam);
}
The video specs are the following:
Resolution: 720x576 (PAL)
FrameRate: 25 fps
Color: YV12
So I would like to know if, with these parameters, I can convert the BYTE* into a color image. I think it's possible... I'm sure I'm doing something wrong, but I don't know what :S
I've tested with the original AvVideoRender and I can see color video in the QLabel, so I know I'm receiving color images. But that solution causes some problems for my project (for example, it isn't a general solution) and I have no control over the handle (I can't get the pixmap and scale it keeping the aspect ratio, for example).
Thanks for reading and sorry for the inconvenience!
I got the solution :). In the end I had to convert the YV12 array into a three-channel RGB array myself. I don't know why, but the cv::cvtColor conversions didn't work for me (I tried many combinations).
I found this yv12 to rgb conversion (the yv12torgb function):
http://sourcecodebrowser.com/codeine/1.0/capture_frame_8cpp_source.html
I made some modifications to get a cv::Mat as the return value (instead of an unsigned char*). Here is the solution:
cv::Mat yv12ToRgb( uchar *pBuffer, const int w, const int h )
{
#define clip_8_bit(val) \
{ \
if( val < 0 ) \
val = 0; \
else if( val > 255 ) \
val = 255; \
}
cv::Mat result(h, w, CV_8UC3);
long ySize=w*h;
long uSize;
uSize=ySize>>2;
uchar *output=result.data;
uchar *pY=pBuffer;
uchar *pU=pY+ySize;
uchar *pV=pU+uSize;
int y, u, v;
int r, g, b;
int sub_i_uv;
int sub_j_uv;
const int uv_width = w / 2;
const int uv_height = h / 2;
// Note: the original snippet allocated a separate 32-bit-aligned rgb buffer here for Qt;
// it is unused (and would leak) now that the pixels are written directly into result.data.
for( int i = 0; i < h; ++i ) {
// calculate u & v rows
sub_i_uv = ((i * uv_height) / h);
for( int j = 0; j < w; ++j ) {
// calculate u & v columns
sub_j_uv = (j * uv_width) / w;
/***************************************************
*
* Colour conversion from
* http://www.inforamp.net/~poynton/notes/colour_and_gamma/ColorFAQ.html#RTFToC30
*
* Thanks to Billy Biggs <vektor#dumbterm.net>
* for the pointer and the following conversion.
*
* R' = [ 1.1644 0 1.5960 ] ([ Y' ] [ 16 ])
* G' = [ 1.1644 -0.3918 -0.8130 ] * ([ Cb ] - [ 128 ])
* B' = [ 1.1644 2.0172 0 ] ([ Cr ] [ 128 ])
*
* Where in xine the above values are represented as
*
* Y' == image->y
* Cb == image->u
* Cr == image->v
*
***************************************************/
y = pY[(i * w) + j] - 16;
u = pU[(sub_i_uv * uv_width) + sub_j_uv] - 128;
v = pV[(sub_i_uv * uv_width) + sub_j_uv] - 128;
r = (int)((1.1644 * (double)y) + (1.5960 * (double)v));
g = (int)((1.1644 * (double)y) - (0.3918 * (double)u) - (0.8130 * (double)v));
b = (int)((1.1644 * (double)y) + (2.0172 * (double)u));
clip_8_bit( b );
clip_8_bit( g );
clip_8_bit( r );
/*rgb[(i * w + j) * 4 + 0] = r;
rgb[(i * w + j) * 4 + 1] = g;
rgb[(i * w + j) * 4 + 2] = b;
rgb[(i * w + j) * 4 + 3] = 0;*/
*output++=b;
*output++=g;
*output++=r;
}
}
return result;
}
And then, my call is:
mMatFrame = yv12ToRgb( buffer, IMAGE_WIDTH, IMAGE_HEIGHT );
QImage qVideoCam((uchar*)mMatFrame.data, mMatFrame.cols, mMatFrame.rows, mMatFrame.step, QImage::Format_RGB888);
emit imageReady(qVideoCam);
Thanks to all for the help, and sorry for the inconvenience :)
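For reference, recent OpenCV builds also ship a built-in YV12 conversion. This is only a sketch of that route (it assumes the buffer is planar YV12 of IMAGE_WIDTH x IMAGE_HEIGHT, i.e. 1.5 bytes per pixel, the same assumption the manual converter above makes), and as noted above cvtColor did not work in this particular setup:
// Wrap the planar YV12 buffer as a single 8-bit channel of height h*3/2,
// then let OpenCV expand it to packed RGB.
cv::Mat yv12(IMAGE_HEIGHT + IMAGE_HEIGHT/2, IMAGE_WIDTH, CV_8UC1, buffer);
cv::Mat rgb;
cv::cvtColor(yv12, rgb, cv::COLOR_YUV2RGB_YV12);
QImage qVideoCam(rgb.data, rgb.cols, rgb.rows, rgb.step, QImage::Format_RGB888);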

Higher radix (or better) formulation for Stockham FFT

Background
I've implemented this algorithm from Microsoft Research for a radix-2 FFT (Stockham auto sort) using OpenCL.
I use floating-point textures (256 columns x N rows) for input and output in the kernel, because I will need to sample at non-integral points and I thought it better to delegate that to the texture sampling hardware. Note that my FFTs are always of 256-point sequences (every row in my texture). At this point, my N is 16384 or 32768, depending on the GPU I'm using and the maximum 2D texture size allowed.
I also need to perform the FFT of 4 real-valued sequences at once, so the kernel performs the FFT(a, b, c, d) as FFT(a + ib, c + id) from which I can extract the 4 complex sequences out later using an O(n) algorithm. I can elaborate on this if someone wishes - but I don't believe it falls in the scope of this question.
Kernel Source
const sampler_t fftSampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
__kernel void FFT_Stockham(read_only image2d_t input, write_only image2d_t output, int fftSize, int size)
{
int x = get_global_id(0);
int y = get_global_id(1);
int b = floor(x / convert_float(fftSize)) * (fftSize / 2);
int offset = x % (fftSize / 2);
int x0 = b + offset;
int x1 = x0 + (size / 2);
float4 val0 = read_imagef(input, fftSampler, (int2)(x0, y));
float4 val1 = read_imagef(input, fftSampler, (int2)(x1, y));
float angle = -6.283185f * (convert_float(x) / convert_float(fftSize));
// TODO: Convert the two calculations below into lookups from a __constant buffer
float tA = native_cos(angle);
float tB = native_sin(angle);
float4 coeffs1 = (float4)(tA, tB, tA, tB);
float4 coeffs2 = (float4)(-tB, tA, -tB, tA);
float4 result = val0 + coeffs1 * val1.xxzz + coeffs2 * val1.yyww;
write_imagef(output, (int2)(x, y), result);
}
The host code simply invokes this kernel log2(256) times, ping-ponging the input and output textures.
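For illustration, that driver loop looks roughly like this (a sketch only; queue, kernel, texA, texB and N are assumed names for the command queue, the compiled kernel, the two ping-pong images and the row count, and error checking is omitted):
cl_mem src = texA, dst = texB; /* the two 256 x N float4 images */
size_t global[2] = { 256, N }; /* one work-item per output texel */
int size = 256;
for (int fftSize = 2; fftSize <= 256; fftSize <<= 1) { /* log2(256) = 8 passes */
clSetKernelArg(kernel, 0, sizeof(cl_mem), &src);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &dst);
clSetKernelArg(kernel, 2, sizeof(int), &fftSize);
clSetKernelArg(kernel, 3, sizeof(int), &size);
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);
cl_mem tmp = src; src = dst; dst = tmp; /* swap roles for the next pass */
}
clFinish(queue);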
Note: I tried removing the native_cos and native_sin to see if that impacted timing, but it doesn't seem to change things by very much. Not the factor I'm looking for, in any case.
Access pattern
Knowing that I am probably memory-bandwidth bound, here is the memory access pattern (per-row) for my radix-2 FFT.
X0 - element 1 to combine (read)
X1 - element 2 to combine (read)
X - element to write to (write)
Question
So my question is - can someone help me with/point me toward a higher-radix formulation for this algorithm? I ask because most FFTs are optimized for large cases and single real/complex valued sequences. Their kernel generators are also very case dependent and break down quickly when I try to muck with their internals.
Are there other options better than simply going to a radix-8 or 16 kernel?
Some of my constraints are - I have to use OpenCL (no cuFFT). I also cannot use clAmdFft from ACML for this purpose. It would be nice to also talk about CPU optimizations (this kernel SUCKS big time on the CPU) - but getting it to run in fewer iterations on the GPU is my main use-case.
Thanks in advance for reading through all this and trying to help!
I tried several versions, but the one with the best performance on CPU and GPU was a radix-16 kernel for my specific case.
Here is the kernel for reference. It was taken from Eric Bainville's (most excellent) website and used with full attribution.
// #define M_PI 3.14159265358979f
//Global size is x.Length/2, Scale = 1 for direct, 1/N to inverse (iFFT)
__kernel void ConjugateAndScale(__global float4* x, const float Scale)
{
int i = get_global_id(0);
float temp = Scale;
float4 t = (float4)(temp, -temp, temp, -temp);
x[i] *= t;
}
// Return a*EXP(-I*PI*1/2) = a*(-I)
float2 mul_p1q2(float2 a) { return (float2)(a.y,-a.x); }
// Return a^2
float2 sqr_1(float2 a)
{ return (float2)(a.x*a.x-a.y*a.y,2.0f*a.x*a.y); }
// Return the 2x DFT2 of the four complex numbers in A
// If A=(a,b,c,d) then return (a',b',c',d') where (a',c')=DFT2(a,c)
// and (b',d')=DFT2(b,d).
float8 dft2_4(float8 a) { return (float8)(a.lo+a.hi,a.lo-a.hi); }
// Return the DFT of 4 complex numbers in A
float8 dft4_4(float8 a)
{
// 2x DFT2
float8 x = dft2_4(a);
// Shuffle, twiddle, and 2x DFT2
return dft2_4((float8)(x.lo.lo,x.hi.lo,x.lo.hi,mul_p1q2(x.hi.hi)));
}
// Complex product, multiply vectors of complex numbers
#define MUL_RE(a,b) (a.even*b.even - a.odd*b.odd)
#define MUL_IM(a,b) (a.even*b.odd + a.odd*b.even)
float2 mul_1(float2 a, float2 b)
{ float2 x; x.even = MUL_RE(a,b); x.odd = MUL_IM(a,b); return x; }
float4 mul_1_F4(float4 a, float4 b)
{ float4 x; x.even = MUL_RE(a,b); x.odd = MUL_IM(a,b); return x; }
float4 mul_2(float4 a, float4 b)
{ float4 x; x.even = MUL_RE(a,b); x.odd = MUL_IM(a,b); return x; }
// Return the DFT2 of the two complex numbers in vector A
float4 dft2_2(float4 a) { return (float4)(a.lo+a.hi,a.lo-a.hi); }
// Return cos(alpha)+I*sin(alpha) (3 variants)
float2 exp_alpha_1(float alpha)
{
float cs,sn;
// sn = sincos(alpha,&cs); // sincos
//cs = native_cos(alpha); sn = native_sin(alpha); // native sin+cos
cs = cos(alpha); sn = sin(alpha); // sin+cos
return (float2)(cs,sn);
}
// Return cos(alpha)+I*sin(alpha) (3 variants)
float4 exp_alpha_1_F4(float alpha)
{
float cs,sn;
// sn = sincos(alpha,&cs); // sincos
// cs = native_cos(alpha); sn = native_sin(alpha); // native sin+cos
cs = cos(alpha); sn = sin(alpha); // sin+cos
return (float4)(cs,sn,cs,sn);
}
// mul_p*q*(a) returns a*EXP(-I*PI*P/Q)
#define mul_p0q1(a) (a)
#define mul_p0q2 mul_p0q1
//float2 mul_p1q2(float2 a) { return (float2)(a.y,-a.x); }
__constant float SQRT_1_2 = 0.707106781186548; // cos(Pi/4)
#define mul_p0q4 mul_p0q2
float2 mul_p1q4(float2 a) { return (float2)(SQRT_1_2)*(float2)(a.x+a.y,-a.x+a.y); }
#define mul_p2q4 mul_p1q2
float2 mul_p3q4(float2 a) { return (float2)(SQRT_1_2)*(float2)(-a.x+a.y,-a.x-a.y); }
__constant float COS_8 = 0.923879532511287; // cos(Pi/8)
__constant float SIN_8 = 0.382683432365089; // sin(Pi/8)
#define mul_p0q8 mul_p0q4
float2 mul_p1q8(float2 a) { return mul_1((float2)(COS_8,-SIN_8),a); }
#define mul_p2q8 mul_p1q4
float2 mul_p3q8(float2 a) { return mul_1((float2)(SIN_8,-COS_8),a); }
#define mul_p4q8 mul_p2q4
float2 mul_p5q8(float2 a) { return mul_1((float2)(-SIN_8,-COS_8),a); }
#define mul_p6q8 mul_p3q4
float2 mul_p7q8(float2 a) { return mul_1((float2)(-COS_8,-SIN_8),a); }
// Compute in-place DFT2 and twiddle
#define DFT2_TWIDDLE(a,b,t) { float2 tmp = t(a-b); a += b; b = tmp; }
// T = N/16 = number of threads.
// P is the length of input sub-sequences, 1,16,256,...,N/16.
__kernel void FFT_Radix16(__global const float4 * x, __global float4 * y, int pp)
{
int p = pp;
int t = get_global_size(0); // number of threads
int i = get_global_id(0); // current thread
////// y[i] = 2*x[i];
////// return;
int k = i & (p-1); // index in input sequence, in 0..P-1
// Inputs indices are I+{0,..,15}*T
x += i;
// Output indices are J+{0,..,15}*P, where
// J is I with four 0 bits inserted at bit log2(P)
y += ((i-k)<<4) + k;
// Load
float4 u[16];
for (int m=0;m<16;m++) u[m] = x[m*t];
// Twiddle, twiddling factors are exp(_I*PI*{0,..,15}*K/4P)
float alpha = -M_PI*(float)k/(float)(8*p);
for (int m=1;m<16;m++) u[m] = mul_1_F4(exp_alpha_1_F4(m * alpha), u[m]);
// 8x in-place DFT2 and twiddle (1)
DFT2_TWIDDLE(u[0].lo,u[8].lo,mul_p0q8);
DFT2_TWIDDLE(u[0].hi,u[8].hi,mul_p0q8);
DFT2_TWIDDLE(u[1].lo,u[9].lo,mul_p1q8);
DFT2_TWIDDLE(u[1].hi,u[9].hi,mul_p1q8);
DFT2_TWIDDLE(u[2].lo,u[10].lo,mul_p2q8);
DFT2_TWIDDLE(u[2].hi,u[10].hi,mul_p2q8);
DFT2_TWIDDLE(u[3].lo,u[11].lo,mul_p3q8);
DFT2_TWIDDLE(u[3].hi,u[11].hi,mul_p3q8);
DFT2_TWIDDLE(u[4].lo,u[12].lo,mul_p4q8);
DFT2_TWIDDLE(u[4].hi,u[12].hi,mul_p4q8);
DFT2_TWIDDLE(u[5].lo,u[13].lo,mul_p5q8);
DFT2_TWIDDLE(u[5].hi,u[13].hi,mul_p5q8);
DFT2_TWIDDLE(u[6].lo,u[14].lo,mul_p6q8);
DFT2_TWIDDLE(u[6].hi,u[14].hi,mul_p6q8);
DFT2_TWIDDLE(u[7].lo,u[15].lo,mul_p7q8);
DFT2_TWIDDLE(u[7].hi,u[15].hi,mul_p7q8);
// 8x in-place DFT2 and twiddle (2)
DFT2_TWIDDLE(u[0].lo,u[4].lo,mul_p0q4);
DFT2_TWIDDLE(u[0].hi,u[4].hi,mul_p0q4);
DFT2_TWIDDLE(u[1].lo,u[5].lo,mul_p1q4);
DFT2_TWIDDLE(u[1].hi,u[5].hi,mul_p1q4);
DFT2_TWIDDLE(u[2].lo,u[6].lo,mul_p2q4);
DFT2_TWIDDLE(u[2].hi,u[6].hi,mul_p2q4);
DFT2_TWIDDLE(u[3].lo,u[7].lo,mul_p3q4);
DFT2_TWIDDLE(u[3].hi,u[7].hi,mul_p3q4);
DFT2_TWIDDLE(u[8].lo,u[12].lo,mul_p0q4);
DFT2_TWIDDLE(u[8].hi,u[12].hi,mul_p0q4);
DFT2_TWIDDLE(u[9].lo,u[13].lo,mul_p1q4);
DFT2_TWIDDLE(u[9].hi,u[13].hi,mul_p1q4);
DFT2_TWIDDLE(u[10].lo,u[14].lo,mul_p2q4);
DFT2_TWIDDLE(u[10].hi,u[14].hi,mul_p2q4);
DFT2_TWIDDLE(u[11].lo,u[15].lo,mul_p3q4);
DFT2_TWIDDLE(u[11].hi,u[15].hi,mul_p3q4);
// 8x in-place DFT2 and twiddle (3)
DFT2_TWIDDLE(u[0].lo,u[2].lo,mul_p0q2);
DFT2_TWIDDLE(u[0].hi,u[2].hi,mul_p0q2);
DFT2_TWIDDLE(u[1].lo,u[3].lo,mul_p1q2);
DFT2_TWIDDLE(u[1].hi,u[3].hi,mul_p1q2);
DFT2_TWIDDLE(u[4].lo,u[6].lo,mul_p0q2);
DFT2_TWIDDLE(u[4].hi,u[6].hi,mul_p0q2);
DFT2_TWIDDLE(u[5].lo,u[7].lo,mul_p1q2);
DFT2_TWIDDLE(u[5].hi,u[7].hi,mul_p1q2);
DFT2_TWIDDLE(u[8].lo,u[10].lo,mul_p0q2);
DFT2_TWIDDLE(u[8].hi,u[10].hi,mul_p0q2);
DFT2_TWIDDLE(u[9].lo,u[11].lo,mul_p1q2);
DFT2_TWIDDLE(u[9].hi,u[11].hi,mul_p1q2);
DFT2_TWIDDLE(u[12].lo,u[14].lo,mul_p0q2);
DFT2_TWIDDLE(u[12].hi,u[14].hi,mul_p0q2);
DFT2_TWIDDLE(u[13].lo,u[15].lo,mul_p1q2);
DFT2_TWIDDLE(u[13].hi,u[15].hi,mul_p1q2);
// 8x DFT2 and store (reverse binary permutation)
y[0] = u[0] + u[1];
y[p] = u[8] + u[9];
y[2*p] = u[4] + u[5];
y[3*p] = u[12] + u[13];
y[4*p] = u[2] + u[3];
y[5*p] = u[10] + u[11];
y[6*p] = u[6] + u[7];
y[7*p] = u[14] + u[15];
y[8*p] = u[0] - u[1];
y[9*p] = u[8] - u[9];
y[10*p] = u[4] - u[5];
y[11*p] = u[12] - u[13];
y[12*p] = u[2] - u[3];
y[13*p] = u[10] - u[11];
y[14*p] = u[6] - u[7];
y[15*p] = u[14] - u[15];
}
Note that I have modified the kernel to perform the FFT of 2 complex-valued sequences at once instead of one. Also, since I only need the FFT of 256 elements at a time in a much larger sequence, I perform only 2 runs of this kernel, which leaves me with 256-length DFTs in the larger array.
Here's some of the relevant host code as well.
var ev = new[] { new Cl.Event() };
var pEv = new[] { new Cl.Event() };
int fftSize = 1;
int iter = 0;
int n = distributionSize >> 5;
while (fftSize <= n)
{
Cl.SetKernelArg(fftKernel, 0, memA);
Cl.SetKernelArg(fftKernel, 1, memB);
Cl.SetKernelArg(fftKernel, 2, fftSize);
Cl.EnqueueNDRangeKernel(commandQueue, fftKernel, 1, null, globalWorkgroupSize, localWorkgroupSize,
(uint)(iter == 0 ? 0 : 1),
iter == 0 ? null : pEv,
out ev[0]).Check();
if (iter > 0)
pEv[0].Dispose();
Swap(ref ev, ref pEv);
Swap(ref memA, ref memB); // ping-pong
fftSize = fftSize << 4;
iter++;
Cl.Finish(commandQueue);
}
Swap(ref memA, ref memB);
Hope this helps someone!

OpenCL pass by reference across different address spaces

Short story:
I have a function with pass-by-reference output variables
void acum( float2 dW, float4 dFE, float2 *W, float4 *FE )
which is supposed to increment the variables *W and *FE by dW and dFE if some conditions are fulfilled.
I want to make this function general: the output variables can be either local or global.
acum( dW, dFE, &W , &FE ); // __local acum
acum( W, FE, &Wout[idOut], &FEout[idOut] ); // __global acum
When I try to compile it I get the error
error: illegal implicit conversion between two pointers with different address spaces
Is it possible to make this work somehow? I was thinking of using a macro instead of a function (but I'm not very familiar with macros in C).
Other possibilities would probably be:
to use a function which returns a struct{Wnew, FEnew}
or (float8)(Wnew, FEnew, 0, 0), but I don't like that because it makes the code more confusing.
Naturally, I also don't want to just copy the body of "acum" all over the place (like manually inlining it :) ).
Background (not necessary to read):
I'm trying to program some thermodynamic sampling using OpenCL. Because the statistical weight W = exp(-E/kT) can easily overflow a float (2^64) for low temperatures, I made a composite datatype W = float2(W_digits, W_exponent) and defined functions such as "acum" to manipulate these numbers.
I try to minimize the number of global memory operations, so I let work items iterate over Vsurf rather than FEout, because I expect that only a few points in Vsurf will have considerable statistical weight, so the accumulation into FEout is called only a few times per work item. Iterating over FEout instead would require sizeof(FEout)*sizeof(Vsurf) memory operations.
The whole code is here (any recommendations on how to make it more efficient are welcome):
// ===== function :: FF_vdW - Lenard-Jones Van Der Waals forcefield
float4 FF_vdW ( float3 R ){
const float C6 = 1.0;
const float C12 = 1.0;
float ir2 = 1.0/ dot( R, R );
float ir6 = ir2*ir2*ir2;
float ir12 = ir6*ir6;
float E6 = C6*ir6;
float E12 = C12*ir12;
return (float4)(
( 6*E6 - 12*E12 ) * ir2 * R
, E12 - E6
);}
// ===== function :: FF_spring - harmonic forcefield
float4 FF_spring( float3 R){
const float3 k = (float3)( 1.0, 1.0, 1.0 );
float3 F = k*R;
return (float4)( F,
0.5*dot(F,R)
);}
// ===== function :: EtoW - compute statistical weight
float2 EtoW( float EkT ){
float Wexp = floor( EkT);
return (float2)( exp(EkT - Wexp)
, Wexp
); }
// ===== procedure : addExpInplace -- acumulate F,E with statistical weight dW
void acum( float2 dW, float4 dFE, float2 *W, float4 *FE )
{
float dExp = dW.y - (*W).y; // log(dW)-log(W)
if(dExp>-22){ // e^22 = 2^32 , single_float = 2^+64
float fac = exp(dExp);
if (dExp<0){ // log(dW)<log(W)
dW.x *= fac;
(*FE) += dFE*dW.x;
(*W ).x += dW.x;
}else{ // log(dW)>log(W)
(*FE) = dFE + fac*(*FE);
(*W ).x = dW.x + fac*(*W).x;
(*W ).y = dW.y;
}
}
}
// ===== __kernel :: sampler
__kernel void sampler(
__global float * Vsurf, // in : surface potential (including vdW) // may be faster to precomputed endpoints positions like float8
__global float4 * FEout, // out : Fx,Fy,Fy, E
__global float2 * Wout, // out : W_digits, W_exponent
int3 nV ,
float3 dV ,
int3 nOut ,
int3 iOut0 , // shift of Fout in respect to Vsurf
int3 nCopy , // number of copies of
int3 nSample , // dimension of sampling in each dimension around R0 +nSample,-nSample
float3 RXe0 , // postion Xe relative to Tip
float EcutSurf ,
float EcutTip ,
float logWcut , // accumulate only when log(W) > logWcut
float kT // maximal energy which should be sampled
) {
int id = get_global_id(0); // loop over potential grid points
int idx = id/nV.x;
int3 iV = (int3)( idx/nV.y
, idx%nV.y
, id %nV.x );
float V = Vsurf[id];
float3 RXe = dV*iV;
if (V<EcutSurf){
// loop over tip position
for (int iz=0;iz<nOut.z;iz++ ){
for (int iy=0;iy<nOut.y;iy++ ){
for (int ix=0;ix<nOut.x;ix++ ){
int3 iTip = (int3)( iz, iy, ix );
float3 Rtip = dV*iTip;
float4 FE = 0;
float2 W = 0;
// loop over images of potential
for (int ix=0;ix<nCopy.x;ix++ ){
for (int iy=0;iy<nCopy.y;iy++ ){
float3 dR = RXe - Rtip;
float4 dFE = FF_vdW( dR );
dFE += FF_spring( dR - RXe0 );
dFE.w += V;
if( dFE.w < EcutTip ){
float2 dW = EtoW( - FE.w / kT );
acum( dW, dFE, &W , &FE ); // __local acum
}
}
}
if( W.y > logWcut ){ // accumulate force
int idOut = iOut0.x + iOut0.y*nOut.x + iOut0.z*nOut.x*nOut.y;
acum( W, FE, &Wout[idOut], &FEout[idOut] ); // __global acum
}
}}}}
}
I'm using PyOpenCL on Ubuntu 12.04 64-bit, but I don't think that has anything to do with the problem.
OK, here's what's happening. From the OpenCL man pages:
http://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/global.html
"If the type of an object is qualified by an address space name, the object is allocated in the specified address name; otherwise, the object is allocated in the generic address space"
...
"The generic address space name for arguments to a function in a program, or local variables of a function is __private. All function arguments shall be in the __private address space. "
So your acum(... ) function args are in the __private address space.
So the compiler is correctly saying that
acum( ..&Wout[idOut], &FEout[idOut] )
is being called with &Wout and &FEout in the global address space, while the function arguments must be in the private address space.
The solution is to convert between global and private:
Create two private temporary vars to receive the results.
Call acum( ... ) with these vars.
Assign the temporary private values back to the global values after you've called acum( ... ).
The code will look a bit messy.
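A minimal sketch of that pattern, reusing the names from the kernel above (W and FE are already __private, and Wout, FEout, idOut come from the __global accumulation branch):
float2 Wtmp = Wout[idOut]; // copy __global -> __private
float4 FEtmp = FEout[idOut];
acum( W, FE, &Wtmp, &FEtmp ); // the pointers now refer to __private variables
Wout[idOut] = Wtmp; // copy the accumulated values back __private -> __global
FEout[idOut] = FEtmp;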
Remember, on a GPU you have many address spaces; you can't magically jump between them by casting. You have to move data between address spaces explicitly, by assignment.
