I am getting the error "implicit declaration of function 'printf' is invalid in OpenCL" when I try to build an OpenCL kernel. The kernel code is this:
__kernel void conj_grad(int dim, int num_vals, __local float *r,
                        __local float *x, __local float *A_times_p, __local float *p,
                        __global int *rows, __global int *cols, __global float *A,
                        __global float *b, __global float *result) {

    local float alpha, r_length, old_r_dot_r, new_r_dot_r;
    local int iteration;

    int id = get_local_id(0);
    int start_index = -1;
    int end_index = -1;
    float Ap_dot_p;

    printf("OpenCL Kernel ID: %d\n", id);
This gives me the error below
input.cl:14:4: error: implicit declaration of function 'printf' is invalid in OpenCL
input.cl:14:4: note: did you mean 'rint'?
/usr/include/clc/math/unary_decl.inc:1:39: note: 'rint' declared here
/usr/include/clc/math/rint.h:2:24: note: expanded from macro '__CLC_FUNCTION'
input.cl:46:45: warning: double precision constant requires cl_khr_fp64, casting to single precision
I am also getting a negative return code from this call: err = clBuildProgram(program, 0, NULL, NULL, NULL, NULL);
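For reference, the compiler messages above come from the build log, which can be fetched after a failed clBuildProgram with clGetProgramBuildInfo. A minimal sketch, assuming the program and device_id handles from my host code:

if (err != CL_SUCCESS) {
    size_t log_size = 0;
    /* query the size of the build log first, then fetch it */
    clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
    char *log = (char *)malloc(log_size + 1);
    clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
    log[log_size] = '\0';
    fprintf(stderr, "Build log:\n%s\n", log);
    free(log);
}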
I have already tried the solutions from the questions "Using printf() in OpenCL kernel" and "printf function doesn't work in OpenCL kernel", and neither of them fixes the problem. When I try them, I see a warning such as: input.cl:1:26: warning: unknown OpenCL extension 'cl_amd_printf' - ignoring
Conclusion
It looks like my system does not support any of the printf extensions. The code below (adapted from pmdj's answer) gives me the output that follows it. A classic case of why you shouldn't depend on vendor-specific extensions to a standard.
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void) {
    char ext_str[1024] = "";
    size_t ext_str_len = sizeof(ext_str) - 1;
    cl_device_id device_id;
    cl_int err;
    cl_platform_id platform;

    err = clGetPlatformIDs(1, &platform, NULL);
    if(err < 0) {
        perror("Couldn't identify a platform");
        exit(1);
    }

    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device_id, NULL);
    if(err < 0) {
        perror("Couldn't access any devices");
        exit(1);
    }

    err = clGetDeviceInfo(device_id, CL_DEVICE_EXTENSIONS, sizeof(ext_str), ext_str, &ext_str_len);
    if(err < 0) {
        perror("Couldn't get device info");
        exit(1);
    }

    printf("CL extensions (%lu): '%s'\n", ext_str_len, ext_str);
    return 0;
}
CL extensions (248): 'cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_fp64 cl_khr_fp16'
As you've found, printf isn't part of standard OpenCL before version 1.2, but some implementations offer extensions which enable it.
To check for extensions supported by your implementation, try something like this:
char ext_str[1024] = "";
size_t ext_str_len = sizeof(ext_str) - 1;
err = clGetDeviceInfo(device_id, CL_DEVICE_EXTENSIONS, sizeof(ext_str), ext_str, &ext_str_len);
printf("CL extensions (%lu): '%s'\n", ext_str_len, ext_str);
If there aren't any cl_*_printf extensions listed, you are probably out of luck. If one is listed, you'll need to enable it as described in the other answers you linked. You might also want to check the specification for the specific extension yours supports in case it exhibits any particular quirks.
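For illustration, a minimal sketch of how such an extension is enabled at the top of the kernel source; cl_amd_printf is just an example name here and only applies if the device actually reports it (on OpenCL 1.2 or later, printf is core and needs no pragma):

// Only meaningful if the device lists cl_amd_printf among CL_DEVICE_EXTENSIONS.
#pragma OPENCL EXTENSION cl_amd_printf : enable

__kernel void debug_ids(__global const float *data)
{
    size_t id = get_global_id(0);
    printf("work-item %d sees %f\n", (int)id, data[id]);
}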
Related
From this question and this question I managed to put together a minimal example of summing a vector into a single double in OpenCL 1.2.
/* https://suhorukov.blogspot.com/2011/12/opencl-11-atomic-operations-on-floating.html */
inline void AtomicAdd(volatile __global double *source, const double operand) {
union { unsigned int intVal; double floatVal; } prevVal, newVal;
do {
prevVal.floatVal = *source;
newVal.floatVal = prevVal.floatVal + operand;
} while( atomic_cmpxchg((volatile __global unsigned int *)source, prevVal.intVal, newVal.intVal) != prevVal.intVal );
}
void kernel cost_function(__constant double* inputs, __global double* outputs){
int index = get_global_id(0);
if(0 == index){ outputs[0] = 0.0; }
barrier(CLK_GLOBAL_MEM_FENCE);
AtomicAdd(&outputs[0], inputs[index]); /* (1) */
//AtomicAdd(&outputs[0], 5.0); /* (2) */
}
However, this solution is incorrect: the result is always 0 when the buffer is read back. What might the problem be?
The code at /* (1) */ doesn't work, and neither does the code at /* (2) */, which is only there to test the logic independently of any inputs.
Is barrier(CLK_GLOBAL_MEM_FENCE); used correctly here to reset the output before any calculations are done to it?
According to the OpenCL 1.2 specs, single-precision floating point numbers are supported by atomic operations; is this (AtomicAdd) a feasible method of extending that support to double-precision numbers, or am I missing something?
The device I am testing with does, of course, support cl_khr_fp64.
Your AtomicAdd is incorrect. The two errors are:
In the union, intVal must be a 64-bit integer, not a 32-bit integer.
Use the 64-bit atom_cmpxchg function, not the 32-bit atomic_cmpxchg function.
The correct implementation is:
#pragma OPENCL EXTENSION cl_khr_int64_base_atomics : enable
inline void AtomicAdd(volatile __global double *source, const double operand) {
    union { ulong u64; double f64; } prevVal, newVal;
    do {
        prevVal.f64 = *source;
        newVal.f64 = prevVal.f64 + operand;
    } while(atom_cmpxchg((volatile __global ulong*)source, prevVal.u64, newVal.u64) != prevVal.u64);
}
barrier(CLK_GLOBAL_MEM_FENCE); is used correctly here. Note that a barrier must not be in an if- or else-branch.
UPDATE: According to STREAMHPC, the original implementation you use is not guaranteed to produce correct results. There is an improved implementation:
void __attribute__((always_inline)) atomic_add_f(volatile global float* addr, const float val) {
    union {
        uint  u32;
        float f32;
    } next, expected, current;
    current.f32 = *addr;
    do {
        next.f32 = (expected.f32 = current.f32) + val; // ...*val for atomic_mul_f()
        current.u32 = atomic_cmpxchg((volatile global uint*)addr, expected.u32, next.u32);
    } while(current.u32 != expected.u32);
}

#ifdef cl_khr_int64_base_atomics
#pragma OPENCL EXTENSION cl_khr_int64_base_atomics : enable
void __attribute__((always_inline)) atomic_add_d(volatile global double* addr, const double val) {
    union {
        ulong  u64;
        double f64;
    } next, expected, current;
    current.f64 = *addr;
    do {
        next.f64 = (expected.f64 = current.f64) + val; // ...*val for atomic_mul_d()
        current.u64 = atom_cmpxchg((volatile global ulong*)addr, expected.u64, next.u64);
    } while(current.u64 != expected.u64);
}
#endif
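As a usage sketch (my own addition, not part of the STREAMHPC code): the cost_function kernel from the question could then simply call the double variant, assuming outputs[0] has been zeroed by the host before the launch (for example with clEnqueueFillBuffer or by writing a zero-initialized buffer):

__kernel void cost_function(__constant double* inputs, __global double* outputs)
{
    int index = get_global_id(0);
    // outputs[0] is assumed to already be 0.0 when the kernel starts
    atomic_add_d(&outputs[0], inputs[index]);
}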
I have this Mandelbrot kernel written for an OpenCL program. As a test I've decided to keep my whole complex plane in a single vector. My problem is that when I print the output I obtain a list of 1s (the initialization value of the results array) and not the result of the kernel's work.
Where could the problem be?
#include <iostream>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif
int main(){
using namespace std;
int xPixel=100;
int yPixel=100;
float ics[xPixel];
for(int i=0;i<xPixel;++i)
ics[i]=-2+i*((float)4/xPixel);
float ypsilon[yPixel];
for(int i=0;i<yPixel;++i)
ypsilon[i]=-2+i*((float)4/yPixel);
int results[xPixel*yPixel];
for(int i=0;i<xPixel*yPixel;++i)
results[i]=1;
cl_context context;
cl_context_properties properties[3];
cl_kernel kernel;
cl_command_queue command_queue;
cl_program program;
cl_int err;
cl_uint num_of_platforms=0;
cl_platform_id platform_id;
cl_device_id device_id;
cl_uint num_of_devices=0;
cl_mem memX, memY, memOutput;
size_t global;
const char *KernelSource =
"__kernel void mandelbrot(__global float *ics, __global float *ypsilon, __global int *output){\n"\
"size_t id=get_global_id(0);\n"\
"int yPixel=100;\n"\
"for(int i=0;i<yPixel;i++){\n"\
"float x=0;\n"\
"float y=0;\n"\
"int counter=0;\n"\
"while(counter<1000){\n"\
"if(x*x+y*y>2*2){\n"\
"output[(id*yPixel)+i]=counter;\n"\
"break;\n"\
"}\n"\
"float xTemp=x*x-y*y+ics[id];\n"\
"y=2*x*y+ypsilon[i];\n"\
"x=xTemp;\n"\
"counter++;\n"\
"}\n"\
"}\n"\
"}\n";
// retrieves a list of available platforms
if (clGetPlatformIDs(1, &platform_id, &num_of_platforms)!= CL_SUCCESS){
cout<<"Unable to get platform_id\n"<<endl;;
return 1;
}
// try to get a supported GPU device
if (clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 1, &device_id,&num_of_devices) != CL_SUCCESS){
cout<<"Unable to get device_id\n"<<endl;
return 1;
}
//context properties list - must be terminated with 0
properties[0]=CL_CONTEXT_PLATFORM;
properties[1]=(cl_context_properties)platform_id;
properties[2]=0;
//create a context with the GPU device
context=clCreateContext(properties,1,&device_id,NULL,NULL,&err);
//create a command queue using the context and device
command_queue=clCreateCommandQueue(context,device_id,0,&err);
//create a program from the kernel source code
program=clCreateProgramWithSource(context,1,(const char**)&KernelSource,NULL,&err);
//compile the program
if(clBuildProgram(program,0,NULL,NULL,NULL,NULL)!=CL_SUCCESS){
cout<<"Error building program"<<endl;
return 1;
}
//specify which kernel from the program to execute
kernel=clCreateKernel(program,"mandelbrot",&err);
//create buffers for input and output
memX=clCreateBuffer(context,CL_MEM_READ_ONLY,sizeof(float)*xPixel,NULL,NULL);
memY=clCreateBuffer(context,CL_MEM_READ_ONLY,sizeof(float)*yPixel,NULL,NULL);
memOutput=clCreateBuffer(context,CL_MEM_WRITE_ONLY,sizeof(int)*(xPixel*yPixel),NULL,NULL);
//load data into the input buffer
clEnqueueWriteBuffer(command_queue,memX,CL_TRUE,0,sizeof(float)*xPixel,ics,0,NULL,NULL);
clEnqueueWriteBuffer(command_queue,memY,CL_TRUE,0,sizeof(float)*yPixel,ypsilon,0,NULL,NULL);
//set the argument list for the kernel command
clSetKernelArg(kernel,0,sizeof(cl_mem),&memX);
clSetKernelArg(kernel,1,sizeof(cl_mem),&memY);
clSetKernelArg(kernel,2,sizeof(cl_mem),&memOutput);
global=xPixel*yPixel;
//enqueue the kernel command for execution
clEnqueueNDRangeKernel(command_queue,kernel,1,NULL,&global,NULL,0,NULL,NULL);
clFinish(command_queue);
//copy the results from out of the output buffer
clEnqueueReadBuffer(command_queue,memOutput,CL_TRUE,0,sizeof(int)*(xPixel*yPixel),results,0,NULL,NULL);
//print output
for(int i=0;i<xPixel;++i){
for(int j=0;j<yPixel;++j){
cout<<results[(i*yPixel)+j]<<" ";
}
cout<<endl;
}
//cleanup - release OpenCL resources
clReleaseMemObject(memX);
clReleaseMemObject(memY);
clReleaseMemObject(memOutput);
clReleaseProgram(program);
clReleaseKernel(kernel);
clReleaseCommandQueue(command_queue);
clReleaseContext(context);
}
I'm not seeing the exact reason, but I do have a question: if you're running this on every element, then what is the "i" loop over "yPixel" for? It seems like you're doing X*Y*Y work instead of X*Y work (your global size is X*Y, and then the kernel loops over Y again).
If you add "output[(id*yPixel)+i]=42" at the top of the "i" loop, what does your output buffer hold afterwards? That will tell you whether the problem lies in your kernel or in your host code.
To help anyone else looking at this, I've reformatted the kernel code:
__kernel void mandelbrot(__global float *ics, __global float *ypsilon, __global int *output)
{
    size_t id = get_global_id(0);
    int yPixel = 100;
    for(int i = 0; i < yPixel; i++)
    {
        float x = 0;
        float y = 0;
        int counter = 0;
        while(counter < 1000)
        {
            if(x*x + y*y > 2*2)
            {
                output[(id*yPixel)+i] = counter;
                break;
            }
            float xTemp = x*x - y*y + ics[id];
            y = 2*x*y + ypsilon[i];
            x = xTemp;
            counter++;
        }
    }
}
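One way to reconcile the host and the kernel (a sketch of my own, not part of the answer above): either keep the loop over yPixel and launch only xPixel work-items, or drop the loop and launch a 2D NDRange with one work-item per pixel. The 2D variant could look roughly like this, with the host passing size_t global[2] = {xPixel, yPixel}; and work_dim = 2 to clEnqueueNDRangeKernel:

__kernel void mandelbrot(__global float *ics, __global float *ypsilon, __global int *output)
{
    // one work-item per pixel: no inner loop over yPixel
    size_t ix = get_global_id(0);
    size_t iy = get_global_id(1);
    size_t yPixel = get_global_size(1);

    float x = 0.0f;
    float y = 0.0f;
    int counter = 0;
    while(counter < 1000)
    {
        if(x*x + y*y > 4.0f)
        {
            output[ix*yPixel + iy] = counter;
            break;
        }
        float xTemp = x*x - y*y + ics[ix];
        y = 2.0f*x*y + ypsilon[iy];
        x = xTemp;
        counter++;
    }
}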
I am having an issue with my OpenCL kernel: the input arguments are corrupt when they are passed to the kernel. What makes this strange is that the exact same kernel executes flawlessly on Mac OS X. Once I started porting my code over to Windows (Windows 8, 64-bit) I started having this issue.
I have provided an example using my camera struct. The x, y, z coordinates are defined as <0, 0, 200>. However, when they make it to my kernel they show up as <0, -0.00132704, -0.00132704>.
I have a kernel that accepts two structs.
typedef struct{
cl_float d;
cl_float3 eye;
cl_float3 lookat;
cl_float3 u;
cl_float3 v;
cl_float3 w;
cl_float3 up;
}rt_cl_camera;
typedef struct {
float r;
float g;
float b;
} rt_cl_rgb;
I have slimmed down my kernel for the sake of testing. After tracking down the issues I noticed that my input parameters were not coming over correctly. However, I have determined that my output is being passed back correctly.
__kernel void ray_trace_scene( __global rt_cl_rgb* output,
__global rt_cl_camera* camera,
const unsigned int pcount)
{
int pixel = get_global_id(0);
if(pixel < pcount){
output[pixel].r = camera->eye.x;
output[pixel].g = camera->eye.y;
output[pixel].b = camera->eye.z;
}// End Pixel computation
}//End kernel
I am creating my input buffer with the following:
cl_mem cam_input;
cl_uint cam_error;
cam_input = clCreateBuffer(context, CL_MEM_READ_ONLY, sizeof(rt_cl_camera), NULL, &cam_error);
I am also checking to make sure my buffer was created successfully with
if (cam_error != CL_SUCCESS || !cam_input) {
throw std::runtime_error(CLERROR_FAILED_DEVBUFF);
}
I then write my data into my buffer with the following.
cl_uint err = 0;
err = clEnqueueWriteBuffer(commands, cam_input, CL_TRUE, 0, sizeof(rt_cl_camera), cam_ptr, 0, NULL, NULL);
if (err != CL_SUCCESS) {
throw std::runtime_error("Failed to write camera");
}
and finally setting the kernel argument in the appropriate slot. Please note that slot zero is being used for my output.
err |= clSetKernelArg(kernel, 1, sizeof(cl_mem), &cam_input);
and checking that everything was successful:
if (err != CL_SUCCESS) {
throw std::runtime_error(CLERROR_FAILED_CMDARGS);
}
I am not receiving any error messages from openCL at any step of the process. Has anyone run into this? Any help is greatly appreciated.
Side note: at each step of the way I am printing out my local variables to make sure they are correct and valid before I pass them over to the GPU.
Looks like an alignment/packing issue. Try using float4 instead of float3 in the struct, and move the float d to the end.
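A sketch of what that change could look like on both sides (my own illustration; in the kernel, float3 is stored with the same 16-byte footprint as float4, so making the layout explicit on both sides and keeping the lone scalar at the end avoids the mismatch):

/* Host side (C/C++ header): use cl_float4 and keep the scalar at the end. */
typedef struct {
    cl_float4 eye;
    cl_float4 lookat;
    cl_float4 u;
    cl_float4 v;
    cl_float4 w;
    cl_float4 up;
    cl_float  d;
} rt_cl_camera;

/* Device side (kernel source): mirror the same layout with float4. */
typedef struct {
    float4 eye;
    float4 lookat;
    float4 u;
    float4 v;
    float4 w;
    float4 up;
    float  d;
} rt_cl_camera;

Alternatively, keeping cl_float3 but adding explicit padding would also work, as long as the host and device layouts end up byte-for-byte identical.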
I am trying to implement atomic functions in my OpenCL kernel. The multiple work-items I am creating are all trying to write to a single memory location in parallel. I want them to execute that particular line of code serially. I have never used an atomic function before.
I found similar problems on many blogs and forums, and I am trying one solution, i.e. the use of two different functions, 'acquire' and 'release', for locking and unlocking the semaphore. I have included the necessary OpenCL extensions, which are all definitely supported by my device (NVIDIA GeForce GTX 630M).
My kernel execution configuration:
global_item_size2 = 8;
ret = clEnqueueNDRangeKernel(command_queue2, kernel2, 1, NULL, &global_item_size2, &local_item_size2, 0, NULL, NULL);
Here is my code: reducer.cl
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
#pragma OPENCL EXTENSION cl_khr_global_int32_base_atomics : enable
#pragma OPENCL EXTENSION cl_khr_local_int32_base_atomics : enable
#pragma OPENCL EXTENSION cl_khr_global_int32_extended_atomics : enable
#pragma OPENCL EXTENSION cl_khr_local_int32_extended_atomics : enable
typedef struct data
{
double dattr[10];
int d_id;
int bestCent;
}Data;
typedef struct cent
{
double cattr[5];
int c_id;
}Cent;
void acquire(__global int* mutex)
{
int occupied;
do {
occupied = atom_xchg(mutex, 1);
} while (occupied>0);
}
void release(__global int* mutex)
{
atom_xchg(mutex, 0); //the previous value, which is returned, is ignored
}
__kernel void reducer(__global int *keyMobj, __global int *valueMobj,__global Data *dataMobj,__global Cent *centMobj,__global int *countMobj,__global double *sumMobj, __global int *mutex)
{
__local double sum[2][2];
__local int cnt[2];
int i = get_global_id(0);
int n,j;
if(i<2)
cnt[i] = countMobj[i];
barrier(CLK_GLOBAL_MEM_FENCE);
n = keyMobj[i];
for(j=0; j<2; j++)
{
barrier(CLK_GLOBAL_MEM_FENCE);
acquire(mutex);
sum[n][j] += dataMobj[i].dattr[j];
release(mutex);
}
if(i<2)
{
for(j=0; j<2; j++)
{
sum[i][j] = sum[i][j]/countMobj[i];
centMobj[i].cattr[j] = sum[i][j];
}
}
}
Unfortunately the solution doesn't seem to be working for me. When I read centMobj back into host memory using
ret = clEnqueueReadBuffer(command_queue2, centMobj, CL_TRUE, 0, (sizeof(Cent) * 2), centNode, 0, NULL, NULL);
ret = clEnqueueReadBuffer(command_queue2, sumMobj, CL_TRUE, 0, (sizeof(double) * 2 * 2), sum, 0, NULL, NULL);
it gives me an error with error code = -5 (CL_OUT_OF_RESOURCES) for both centMobj and sumMobj.
I cannot tell whether the problem is in my atomic function code or in reading the data back into host memory. If I am using the atomic functions incorrectly, please correct me.
Thank you in advance.
In OpenCL, synchronization between work items can be done only inside a work-group. Code trying to synchronize work-items across different work-groups may work in some very specific (and implementation/device dependent) cases, but will fail in the general case.
The solution is either to use atomics to serialize accesses to the same memory location (without blocking any work-item), or to redesign the code.
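As an illustration of the first option (a sketch of my own, reusing the compare-exchange pattern from the AtomicAdd answer earlier on this page; the helper name atomic_add_double and the reduced argument list are mine, and Data is the struct defined in the question): accumulate the per-cluster sums directly in the __global sumMobj buffer, which the host zeroes before the launch, instead of serialising writes to a __local array with a mutex.

#pragma OPENCL EXTENSION cl_khr_fp64 : enable
#pragma OPENCL EXTENSION cl_khr_int64_base_atomics : enable

// Same compare-exchange idea as the corrected AtomicAdd shown further up this page.
inline void atomic_add_double(volatile __global double *addr, double val)
{
    union { ulong u64; double f64; } expected, next;
    do {
        expected.f64 = *addr;
        next.f64 = expected.f64 + val;
    } while(atom_cmpxchg((volatile __global ulong *)addr,
                         expected.u64, next.u64) != expected.u64);
}

__kernel void reducer(__global int *keyMobj, __global Data *dataMobj,
                      __global double *sumMobj /* 2*2 doubles, zeroed by the host */)
{
    int i = get_global_id(0);
    int n = keyMobj[i];
    for(int j = 0; j < 2; j++)
        atomic_add_double(&sumMobj[n*2 + j], dataMobj[i].dattr[j]);
}

The averaging into centMobj could then be done in a second, smaller launch (or on the host) once all sums are complete.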
This is a part of some sort of parallel reduction/extremum kernel. I have reduced it to the minimum code that still gets clBuildProgram crashing (note that it really crashes, and doesn't just return an error code):
EDIT: It seems like this also happens when local_value is declared global instead of local.
EDIT2 / SOLUTION: The problem was that there was an infinite loop. I should have written remaining_items >>= 1 instead of remaining_items >> 1. As has been said in the answers, the NVIDIA compiler doesn't seem very robust when it comes to compile/optimization errors.
kernel void testkernel(local float *local_value)
{
size_t thread_id = get_local_id(0);
int remaining_items = 1024;
while (remaining_items > 1)
{
// throw away the right half of the threads
remaining_items >> 1; // <-- SPOTTED THE BUG
if (thread_id > remaining_items)
{
return;
}
// look for a greater value in the right half of the memory space
int right_index = thread_id + remaining_items;
float right_value = local_value[right_index];
if (right_value > local_value[thread_id])
{
local_value[thread_id] = right_value;
}
barrier(CLK_GLOBAL_MEM_FENCE);
}
}
Removing the lines return; and/or local_value[thread_id] = right_value; causes clBuildProgram to finish successfully.
I can reproduce this problem on all of my computers (NVIDIA GTX 560, GT 555M, GT 540M; they are all of the Fermi 2.1 architecture). It occurs with NVIDIA CUDA Toolkit SDK versions 4.0, 4.1 and 4.2, using either the x64 or x86 libraries.
Does anyone have an idea what could be the problem?
Is it possible that local (aka shared) memory is automatically assumed to be (WORK_GROUP_SIZE) * sizeof(its_base_type)? That would explain why it works when the lines I mentioned above are removed.
Minimal host code (C99 compatible) for reproduction:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif
#define RETURN_THROW(expression) do { cl_int ret = expression; if (ret) { printf(#expression " FAILED: %d\n" , ret); exit(1); } } while (0)
#define REF_THROW(expression) do { cl_int ret; expression; if (ret) { printf(#expression " FAILED: %d\n" , ret); exit(1); } } while (0)
int main(int argc, char **argv)
{
// Load the kernel source code into the array source_str
FILE *fp;
fp = fopen("testkernel.cl", "rb");
if (!fp)
{
fprintf(stderr, "Failed to load kernel.\n");
exit(1);
}
fseek(fp, 0, SEEK_END);
int filesize = ftell(fp);
rewind(fp);
char *source_str = (char*)calloc(filesize + 1, sizeof(char));
size_t bytes_read = fread(source_str, 1, filesize, fp);
source_str[bytes_read] = 0;
fclose(fp);
// Get platform information
cl_uint num_platforms;
RETURN_THROW(clGetPlatformIDs(0, NULL, &num_platforms));
cl_platform_id *platform_ids = (cl_platform_id *)calloc(num_platforms, sizeof(cl_platform_id));
RETURN_THROW(clGetPlatformIDs(num_platforms, platform_ids, NULL));
cl_device_id selected_device_id = NULL;
printf("available platforms:\n");
for (cl_uint i = 0; i < num_platforms; i++)
{
char platform_name[50];
RETURN_THROW(clGetPlatformInfo(platform_ids[i], CL_PLATFORM_NAME, 50, platform_name, NULL));
printf("%s\n", platform_name);
// get devices for this platform
cl_uint num_devices;
RETURN_THROW(clGetDeviceIDs(platform_ids[i], CL_DEVICE_TYPE_GPU, 0, NULL, &num_devices));
cl_device_id *device_ids = (cl_device_id *)calloc(num_devices, sizeof(cl_device_id));
RETURN_THROW(clGetDeviceIDs(platform_ids[i], CL_DEVICE_TYPE_GPU, num_devices, device_ids, NULL));
// select first nvidia device
if (strstr(platform_name, "NVIDIA")) // ADAPT THIS ACCORDINGLY
{
selected_device_id = device_ids[0];
}
}
if (selected_device_id == NULL)
{
printf("No NVIDIA device found\n");
exit(1);
}
// Create an OpenCL context
cl_context context;
REF_THROW(context = clCreateContext(NULL, 1, &selected_device_id, NULL, NULL, &ret));
// Create a program from the kernel source
cl_program program;
REF_THROW(program = clCreateProgramWithSource(context, 1, (const char **)&source_str, NULL, &ret));
// Build the program
cl_int ret = clBuildProgram(program, 1, &selected_device_id, NULL, NULL, NULL);
if (ret)
{
printf("BUILD ERROR\n");
// build error - get build log and display it
size_t build_log_size;
ret = clGetProgramBuildInfo(program, selected_device_id, CL_PROGRAM_BUILD_LOG, 0, NULL, &build_log_size);
char *build_log = (char *)malloc(build_log_size);
ret = clGetProgramBuildInfo(program, selected_device_id, CL_PROGRAM_BUILD_LOG, build_log_size, build_log, NULL);
printf("%s\n", build_log);
exit(1);
}
printf("build finished successfully\n");
return 0;
}
In my experience the nvidia compiler isn't very robust when it comes to handling build errors, so you probably have a compile error somewhere.
I think your problem is indeed the return, or more to the point its combination with barrier. According to the opencl spec about barriers:
All work-items in a work-group executing the kernel on a processor must execute this function before any are allowed to continue execution beyond the barrier. This function must be encountered by all work-items in a work-group executing the kernel.
If barrier is inside a conditional statement, then all work-items must enter the conditional if any work-item enters the conditional statement and executes the barrier.
If barrier is inside a loop, all work-items must execute the barrier for each iteration of the loop before any are allowed to continue execution beyond the barrier.
So I think your problem is probably that a lot of threads would return before getting to the barrier, making this code invalid. Maybe you should try something like this:
kernel void testkernel(local float *local_value) {
    size_t thread_id = get_local_id(0);
    int remaining_items = 1024;
    while (remaining_items > 1) {
        remaining_items >>= 1; // throw away the right half of the threads
        if (thread_id <= remaining_items) {
            // look for a greater value in the right half of the memory space
            int right_index = thread_id + remaining_items;
            float right_value = local_value[right_index];
            if (right_value > local_value[thread_id])
                local_value[thread_id] = right_value;
        }
        barrier(CLK_GLOBAL_MEM_FENCE);
    }
}
Edit: Furthermore, as noted in the comments, it needs to be remaining_items >>= 1 instead of remaining_items >> 1 in order to avoid producing an infinite loop.
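Regarding the side question about local memory sizing: it is not assumed automatically; the host has to set the size of a __local kernel argument explicitly with clSetKernelArg, passing NULL as the argument value. A sketch, assuming a work-group size of 1024 to match the hard-coded remaining_items, and kernel/queue handles created elsewhere in the host code:

// __local arguments get a size but no data pointer.
size_t local_bytes = 1024 * sizeof(float);
clSetKernelArg(kernel, 0, local_bytes, NULL);

size_t global_size = 1024;
size_t local_size = 1024;  // must not exceed CL_DEVICE_MAX_WORK_GROUP_SIZE
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, &local_size, 0, NULL, NULL);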