clBuildProgram failed with error code -11 and without build log - OpenCL

I have worked a little with OpenCL, but recently clBuildProgram failed in one of my programs. My code excerpt is below:
cl_program program;
program = clCreateProgramWithSource(context, 1, (const char**) &kernel_string, NULL, &err);
if (err != CL_SUCCESS)
{
    cout << "Unable to create Program Object. Error code = " << err << endl;
    exit(1);
}
if (clBuildProgram(program, 0, NULL, NULL, NULL, NULL) != CL_SUCCESS)
{
    cout << "Program Build failed\n";
    size_t length;
    char buffer[2048];
    clGetProgramBuildInfo(program, device_id[0], CL_PROGRAM_BUILD_LOG, sizeof(buffer), buffer, &length);
    cout << "--- Build log ---\n " << buffer << endl;
    exit(1);
}
Earlier, whenever clBuildProgram failed, I would get syntax and other errors from the kernel file here with the help of clGetProgramBuildInfo(), but when this program runs, the console only prints:
Program Build failed
--- Build log ---
When I print the error code returned by clBuildProgram, it is -11.
What can be the problem with my kernel file, given that I don't get any build failure information?

You can learn the meaning of OpenCL error codes by searching in cl.h. In this case, -11 is just what you'd expect, CL_BUILD_PROGRAM_FAILURE. It's certainly curious that the build log is empty. Two questions:
1.) What is the return value from clGetProgramBuildInfo?
2.) What platform are you on? If you are using Apple's OpenCL implementation, you could try setting CL_LOG_ERRORS=stdout in your environment. For example, from Terminal:
$ CL_LOG_ERRORS=stdout ./myprog
It's also pretty easy to set this in Xcode (Edit Scheme -> Arguments -> Environment Variables).
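If the log really is empty, it is also worth querying CL_PROGRAM_BUILD_STATUS to confirm the build actually failed on that device. A minimal sketch, assuming device_id[0] is the device the program was built for:
cl_build_status status;
cl_int info_err = clGetProgramBuildInfo(program, device_id[0], CL_PROGRAM_BUILD_STATUS, sizeof(status), &status, NULL);
// CL_BUILD_SUCCESS = 0, CL_BUILD_NONE = -1, CL_BUILD_ERROR = -2, CL_BUILD_IN_PROGRESS = -3
cout << "clGetProgramBuildInfo returned " << info_err << ", build status = " << status << endl;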

If you are using C instead of C++:
err = clBuildProgram(program, 0, NULL, NULL, NULL, NULL);
//////////////// Add the following lines to see the build log ///////////
if (err != CL_SUCCESS) {
    char *buff_erro;
    cl_int errcode;
    size_t build_log_len;
    // First call: query the required size of the build log
    errcode = clGetProgramBuildInfo(program, devices[0], CL_PROGRAM_BUILD_LOG, 0, NULL, &build_log_len);
    if (errcode) {
        printf("clGetProgramBuildInfo failed at line %d\n", __LINE__);
        exit(-1);
    }
    buff_erro = malloc(build_log_len);
    if (!buff_erro) {
        printf("malloc failed at line %d\n", __LINE__);
        exit(-2);
    }
    // Second call: fetch the build log itself
    errcode = clGetProgramBuildInfo(program, devices[0], CL_PROGRAM_BUILD_LOG, build_log_len, buff_erro, NULL);
    if (errcode) {
        printf("clGetProgramBuildInfo failed at line %d\n", __LINE__);
        exit(-3);
    }
    fprintf(stderr, "Build log: \n%s\n", buff_erro); // pass the log as an argument, not as the format string
    free(buff_erro);
    fprintf(stderr, "clBuildProgram failed\n");
    exit(EXIT_FAILURE);
}
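This query-the-size-then-fetch pattern (a first call with a NULL buffer to get the required length, then a second call to fill an allocated buffer) is the standard way to use all of the clGet*Info entry points, and it avoids silently truncating long build logs the way a fixed 2048-byte buffer can.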

I encountered the same problem with an empty build log. I was testing my OpenCL kernel on a different computer. It had two platforms instead of one: one Intel GPU and one AMD GPU, and I only had the AMD OpenCL SDK installed. Installing the Intel OpenCL SDK fixed the problem; selecting the AMD platform instead of the Intel GPU platform also fixed it.
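If you are unsure which platforms are visible on a machine, a small sketch like the following (error checking omitted) prints each platform's name so you can pick the right one:
#include <iostream>
#include <vector>
#include <CL/cl.h>

int main()
{
    cl_uint n = 0;
    clGetPlatformIDs(0, NULL, &n);
    std::vector<cl_platform_id> platforms(n);
    clGetPlatformIDs(n, platforms.data(), NULL);
    for (cl_uint i = 0; i < n; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        std::cout << "Platform " << i << ": " << name << std::endl;
    }
    return 0;
}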

I've seen this happen on OSX 10.14.6 when the OpenCL kernel source is missing the __kernel attribute tag. If both the __kernel tag and the return type are missing, it seems to crash the system OpenCL compiler daemon, which then takes a few seconds to restart before new kernels will compile again.
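For reference, a well-formed kernel needs both the __kernel qualifier and a void return type, as in this minimal example (vecadd is just an illustrative name):
__kernel void vecadd(__global const float* a, __global const float* b, __global float* out)
{
    size_t i = get_global_id(0);  // one work-item per element
    out[i] = a[i] + b[i];
}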

Related

Can I link two separate executables with MPI_open_port and share port information in a text file?

I'm trying to create a shared MPI COMM between two executables which are both started independently, e.g.
mpiexec -n 1 ./exe1
mpiexec -n 1 ./exe2
I use MPI_Open_port to generate port details and write these to a file in exe1 and then read with exe2. This is followed by MPI_Comm_connect/MPI_Comm_accept and then send/recv communication (minimal example below).
My question is: can we write port information to a file in this way, or is MPI_Publish_name/MPI_Lookup_name required for this to work, as in this, this and this? As supercomputers usually share a file system, this file-based approach seems simpler and perhaps avoids having to establish a server.
It seems this should work according to the MPI_Open_port documentation in the MPI 3.1 standard,
port_name is essentially a network address. It is unique within the communication universe to which it belongs (determined by the implementation), and may be used by any client within that communication universe. For instance, if it is an internet (host:port) address, it will be unique on the internet. If it is a low level switch address on an IBM SP, it will be unique to that SP
In addition, according to documentation on the MPI forum:
The following should be compatible with MPI: The server prints out an address to the terminal, the user gives this address to the client program.
MPI does not require a nameserver
A port_name is a system-supplied string that encodes a low-level network address at which a server can be contacted.
By itself, the port_name mechanism is completely portable ...
Writing the port information to file does work as expected, i.e. it creates a shared communicator and exchanges information using MPICH (3.2), but it hangs at the MPI_Comm_connect/MPI_Comm_accept line when using OpenMPI versions 2.0.1 and 4.0.1 (on my local workstation running Ubuntu 12.04, though it eventually needs to work on a tier 1 supercomputer). I have raised this as an issue here, but I welcome a solution or workaround in the meantime.
Further Information
If I use the MPMD mode with OpenMPI,
mpiexec -n 1 ./exe1 : -n 1 ./exe2
this works correctly, so it must be an issue with allowing the jobs to share ompi_global_scope, as in this question. I've also tried adding,
MPI_Info info;
MPI_Info_create(&info);
MPI_Info_set(info, "ompi_global_scope", "true");
with info passed to all commands, with no success. I'm not running a server/client model, as both codes run simultaneously, so sharing a URL/PID from one is not ideal; however, I cannot get this to work even using the suggested approach, which for OpenMPI 2.0.1 is,
mpirun -n 1 --report-pid + ./OpenMPI_2.0.1 0
1234
mpirun -n 1 --ompi-server pid:1234 ./OpenMPI_2.0.1 1
gives,
ORTE_ERROR_LOG: Bad parameter in file base/rml_base_contact.c at line 161
This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
pmix server init failed
--> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
and with OpenMPI 4.0.1,
mpirun -n 1 --report-pid + ./OpenMPI_4.0.1 0
1234
mpirun -n 1 --ompi-server pid:1234 ./OpenMPI_4.0.1 1
gives,
ORTE_ERROR_LOG: Bad parameter in file base/rml_base_contact.c at line 50
...
A publish/lookup server was provided, but we were unable to connect
to it - please check the connection info and ensure the server
is alive:
Using 4.0.1 means the error should not be related to this bug in OpenMPI.
Minimal code
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <iostream>
#include <fstream>
using namespace std;
int main( int argc, char *argv[] )
{
int num_errors = 0;
int rank, size;
char port1[MPI_MAX_PORT_NAME];
char port2[MPI_MAX_PORT_NAME];
MPI_Status status;
MPI_Comm comm1, comm2;
int data = 0;
char *ptr;
int runno = strtol(argv[1], &ptr, 10);
for (int i = 0; i < argc; ++i)
printf("inputs %d %d %s \n", i,runno, argv[i]);
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (runno == 0)
{
printf("0: opening ports.\n");fflush(stdout);
MPI_Open_port(MPI_INFO_NULL, port1);
printf("opened port1: <%s>\n", port1);
//Write port file
ofstream myfile;
myfile.open("port");
if( !myfile )
cout << "Opening file failed" << endl;
myfile << port1 << endl;
if( !myfile )
cout << "Write failed" << endl;
myfile.close();
printf("Port %s written to file \n", port1); fflush(stdout);
printf("Attempt to accept port1.\n");fflush(stdout);
//Establish connection and send data
MPI_Comm_accept(port1, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &comm1);
printf("sending 5 \n");fflush(stdout);
data = 5;
MPI_Send(&data, 1, MPI_INT, 0, 0, comm1);
MPI_Close_port(port1);
}
else if (runno == 1)
{
//Read port file
size_t chars_read = 0;
ifstream myfile;
//Wait until file exists and is avaialble
myfile.open("port");
while(!myfile){
myfile.open("port");
cout << "Opening file failed" << myfile << endl;
usleep(30000);
}
while( myfile && chars_read < 255 ) {
myfile >> port1[ chars_read ];
if( myfile )
++chars_read;
if( port1[ chars_read - 1 ] == '\n' )
break;
}
printf("Reading port %s from file \n", port1); fflush(stdout);
remove( "port" );
//Establish connection and recieve data
MPI_Comm_connect(port1, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &comm1);
MPI_Recv(&data, 1, MPI_INT, 0, 0, comm1, &status);
printf("Received %d 1\n", data); fflush(stdout);
}
//Barrier on intercomm before disconnecting
MPI_Barrier(comm1);
MPI_Comm_disconnect(&comm1);
MPI_Finalize();
return 0;
}
The 0 and 1 arguments simply specify whether the code writes the port file or reads it. The example is then run with,
mpiexec -n 1 ./a.out 0
mpiexec -n 1 ./a.out 1
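For comparison, the MPI_Publish_name/MPI_Lookup_name route the question asks about would look roughly like the sketch below ("my_service" is an arbitrary name made up here; with OpenMPI this still needs a running ompi-server or equivalent name service):
// In exe1 (runno == 0), instead of writing the port to a file:
MPI_Publish_name("my_service", MPI_INFO_NULL, port1);
MPI_Comm_accept(port1, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &comm1);
// ... after sending, clean up:
MPI_Unpublish_name("my_service", MPI_INFO_NULL, port1);

// In exe2 (runno == 1), instead of reading the file:
MPI_Lookup_name("my_service", MPI_INFO_NULL, port1);
MPI_Comm_connect(port1, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &comm1);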

SGX Local Attestation sample returning 0x3002 in simulator

I cannot for the life of me get the LocalAttestation sample to run correctly on a fresh Linux install, even though I followed the instructions successfully. Given that this is being built in simulation mode, I would have thought there were no additional dependencies?
I modified the demo to provide extra output, and this line returns 0x3002 (SGX_ERROR_INVALID_ATTRIBUTE):
printf("creating enclave 1\n");
ret = sgx_create_enclave(ENCLAVE1_PATH, SGX_DEBUG_FLAG, &launch_token, &launch_token_updated, &e1_enclave_id, NULL);
if (ret != SGX_SUCCESS) {
printf("Failed. Return value is: %X\n", ret);
return ret;
}
This is the sample, right out of the Linux SDK: https://github.com/intel/linux-sgx and this error isn't even mentioned as a possibility in Intel's documentation for the function: https://software.intel.com/en-us/sgx-sdk-dev-reference-sgx-create-enclave
Any help would be greatly appreciated!
-- Henry
Turns out this was an issue with the sample code. By initializing the launch_token zeroed out, everything works as expected:
sgx_launch_token_t launch_token = {0};
ret = sgx_create_enclave(_T(ENCLAVE_PATH), 1, &launch_token, &launch_token_update, &enclave_id, NULL);
if (ret != SGX_SUCCESS)
{
    printf("Failed to create enclave");
}
else
{
    printf("Successfully created enclave.");
    printf("\nEnclaveID %llx", enclave_id);
}

breakpad does not generate a minidump on erasing an iterator twice

I find that breakpad sometimes does not handle SIGSEGV.
I wrote a simple example to reproduce it:
#include <vector>
#include <breakpad/client/linux/handler/exception_handler.h>
int InitBreakpad()
{
char core_file_folder[] = "/tmp/cores/";
google_breakpad::MinidumpDescriptor descriptor(core_file_folder);
auto exception_handler_ =
new google_breakpad::ExceptionHandler(descriptor,
nullptr,
nullptr,
nullptr,
true,
-1);
}
int main()
{
InitBreakpad();
// int* ptr = nullptr;
// *ptr = 1;
std::vector<int> sum;
sum.push_back(1);
auto it = sum.begin();
sum.erase(it);
sum.erase(it);
return 0;
}
gcc is 4.8.5 and my compile command is
g++ test_breakpad.cpp -I./include -I./include/breakpad -L./lib -lbreakpad -lbreakpad_client -std=c++11 -lpthread
Running a.out, I get "Segmentation fault" but no minidump is generated.
If I uncomment the nullptr write, breakpad works!
What should I do to correct it?
GDB debug output:
(gdb) b google_breakpad::ExceptionHandler::~ExceptionHandler()
Breakpoint 2 at 0x402ed0: file src/client/linux/handler/exception_handler.cc, line 264.
(gdb) c
The program is not being run.
(gdb) r
Starting program: /home/zen/tmp/a.out
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Breakpoint 1, google_breakpad::ExceptionHandler::ExceptionHandler (this=0x619040, descriptor=..., filter=0x0, callback=0x0, callback_context=0x0, install_handler=true, server_fd=-1) at src/client/linux/handler/exception_handler.cc:224
224 ExceptionHandler::ExceptionHandler(const MinidumpDescriptor& descriptor,
Missing separate debuginfos, use: debuginfo-install glibc-2.17-157.el7_3.1.x86_64 libgcc-4.8.5-11.el7.x86_64 libstdc++-4.8.5-11.el7.x86_64
(gdb) c
Continuing.
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff712f19d in __memmove_ssse3_back () from /lib64/libc.so.6
(gdb) c
Continuing.
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff712f19d in __memmove_ssse3_back () from /lib64/libc.so.6
(gdb) c
Continuing.
Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.
I also tried breakpad's out-of-process dumping, but still got nothing (the nullptr write works).
After some debugging, I think the reason sum.erase(it) does not create a minidump in your example is stack corruption.
While debugging you can see that the variable g_handler_stack_ in src/client/linux/handler/exception_handler.cc is correctly initialized and the google_breakpad::ExceptionHandler instance is correctly added to the vector. However when google_breakpad::ExceptionHandler::SignalHandler is called the vector is reported empty despite no calls to google_breakpad::ExceptionHandler::~ExceptionHandler or any of the std::vector methods that would change the vector.
Some further data points that point to stack corruption are that the code works with clang++, and that as soon as we change the std::vector<int> sum; to a std::vector<int>*, which ensures we don't corrupt the stack, the minidump is written to disk.
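A minimal sketch of that workaround (the double erase is still undefined behaviour, but it moves the vector's bookkeeping off the stack, so the handler's state survives and the dump gets written):
std::vector<int>* sum = new std::vector<int>();
sum->push_back(1);
auto it = sum->begin();
sum->erase(it);
sum->erase(it);  // still UB, but the corruption now lands on the heap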

OpenCL cannot find GPU device: NVIDIA GPU (Quadro K4000) + Visual Studio 2015

Just started to learn OpenCL and set up a Visual Studio project using VS2015. Somehow, the code can find only 1 platform (I guess it should be the CPU) and cannot find the GPU device. Can someone please help? The detailed information is as follows:
GPU: Nvidia Quadro K4000
CUDA Installation
CUDA is at: “C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5”
OpenCL related files are located at "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\include\CL" and "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\lib\Win32" (assuming 32bit system)
The installer created two environment variables “CUDA_PATH” and “CUDA_PATH_V7_5”. They both point to the above location.
In Visual Studio, the project is set up as
"Project Properties" -> "C/C++" -> "Additional Include Directories" -> "$(CUDA_PATH)\include"
"Project Properties" -> "Linker" -> "Additional Library Directories" -> "$(CUDA_PATH)\lib\Win32"
"Project Properties" -> "Linker" -> "Input" -> "Additional Dependencies" -> "OpenCL.lib"
The code is very simple:
#include "stdafx.h"
#include <iostream>
#include <CL/cl.h>
using namespace std;
int main()
{
cl_int err;
cl_uint numPlatforms;
err = clGetPlatformIDs(0, NULL, &numPlatforms);
if (CL_SUCCESS == err)
cout << "Detected OpenCL platforms: " << numPlatforms << endl;
else
cout << "Error calling clGetPlatformIDs. Error code:" << err << endl;
cl_device_id device = NULL;
err = clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
if (err == CL_SUCCESS)
cout << device << endl;
return 0;
}
The code compiles and runs, but it cannot find the GPU device. Specifically, the returned value of the variable device is device = 0x00000000 <NULL>. What would be the problem? Thanks for the help.
This is not the way you use the OpenCL API.
You need to obtain a valid cl_platform_id, which is then used to retrieve a cl_device_id. You are always passing NULL; this can't work.
The first time you invoke clGetPlatformIDs, you do it to obtain the number of platforms in the system. After that, you need to invoke it again to retrieve the actual cl_platform_ids:
cl_uint numPlatforms;
err = clGetPlatformIDs(0, NULL, &numPlatforms);
assert(numPlatforms > 0);
cl_platform_id platform_ids[numPlatforms];  // variable-length array: a GCC/Clang extension in C++
err = clGetPlatformIDs(numPlatforms, platform_ids, NULL);
However, if you already know there is going to be only one platform in the system, then you can speed things up as follows, but make sure to check for errors:
cl_platform_id platform_id;
err = clGetPlatformIDs(1, &platform_id, NULL);
assert(err == CL_SUCCESS);
After you have obtained a platform, you need to follow the same procedure to first obtain the number of devices and then retrieve the list of OpenCL devices (which you will then need to build a cl_context, queues, etc.):
// Note: this has to be done for each `cl_platform_id`
// until you find the device you were looking for
cl_uint numDevices;
err = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 0, NULL, &numDevices);
assert(numDevices > 0);
cl_device_id devices[numDevices];
err = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, numDevices, devices, NULL);
I guess you understand the procedure now. If, as above, you already know that there is only 1 GPU device in the system, you can directly get its cl_device_id as follows:
cl_device_id device;
err = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
assert(err == CL_SUCCESS);
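Putting it all together, a common pattern is to loop over every platform until a GPU turns up. A sketch, assuming <vector> is included and omitting error checks:
cl_uint numPlatforms = 0;
clGetPlatformIDs(0, NULL, &numPlatforms);
std::vector<cl_platform_id> platforms(numPlatforms);
clGetPlatformIDs(numPlatforms, platforms.data(), NULL);

cl_device_id device = NULL;
for (cl_uint i = 0; i < numPlatforms; ++i) {
    if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 1, &device, NULL) == CL_SUCCESS)
        break;       // found a GPU on this platform
    device = NULL;   // CL_DEVICE_NOT_FOUND here is expected for CPU-only platforms
}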

clGetPlatformIDs error when running an OpenCL program

I've been trying to make OpenCL work on my laptop. I followed this link for that. I have an NVIDIA GT525M video card and Windows 8.1.
The steps I followed were:
Installing the up-to-date NVIDIA drivers.
Installing Visual Studio 2013 Community Edition.
Adding the paths for the linker and the compiler.
and then I tried to run the following code as given on that page:
#include <stdio.h>
#include <CL/cl.h>
int main(void)
{
cl_int err;
cl_uint* numPlatforms=NULL;
err = clGetPlatformIDs(0, NULL,numPlatforms);
if (CL_SUCCESS == err)
printf("\nDetected OpenCL platforms: %d", numPlatforms);
else
printf("\nError calling clGetPlatformIDs. Error code: %d", err);
getchar();
return 0;
}
The code builds successfully but the result I get is:
Error calling clGetPlatformIDs. Error code: -30
I get zero as the number of platforms.
I've been looking all over the internet for a solution but couldn't find one. Please help.
This:
cl_uint* numPlatforms=NULL;
err = clGetPlatformIDs(0, NULL,numPlatforms);
Is equivalent to this:
err = clGetPlatformIDs(0, NULL, NULL);
Which doesn't make sense. As per the documentation:
Returns CL_SUCCESS if the function is executed successfully. Otherwise it returns CL_INVALID_VALUE if num_entries is equal to zero and platforms is not NULL, or if both num_platforms and platforms are NULL.
CL_INVALID_VALUE is the -30 you are getting, for the reasons stated above.
What you really want is the following:
cl_uint numPlatforms = 0;
err = clGetPlatformIDs(0, NULL, &numPlatforms);
