I have altered slob.c so that it gathers stats on the last 100 small-list allocations, and I have made the necessary edits to make sure SLOB is actually being used.
I am running a test program that calls malloc() about 10,000 to 100,000 times with a size of 20 bytes.
But the SLOB stats printed immediately after the test program runs say that the average claimed size was 140 bytes, when I was expecting it to be at least somewhere near 20 bytes.
What am I doing wrong? Is there a way to accurately test SLOB?
I am pretty sure that my stat collection is accurate, as I have had a few professors check it over for me. This is my current test program:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    char *a;
    int i;

    /* claim many small blocks so the SLOB stats accumulate */
    for (i = 0; i < 1000000; i++) {
        a = (char *) malloc(20 * sizeof(char));
        if (a == NULL) printf("NULL\n");
    }

    /* Here I print the system-call-reported stats for memory claimed and free memory */
    return 0;
}
My original answer was actually correct:
char *a;
int i;

for (i = 0; i < 10000; i++) {
    a = (char *) malloc(20 * sizeof(char));
    if (a == NULL) printf("NULL\n");
}
This simple code can be used to test how the altered slob.c allocates memory (assuming you are gathering the statistics inside the slob file itself).
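If it helps, here is a minimal sketch of how the whole test might look once the stats are read back. The syscall number and the two output counters are purely hypothetical placeholders for whatever interface your modified slob.c actually exposes; substitute your own.

#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <sys/syscall.h>

// Hypothetical syscall number -- replace with the one your patched kernel uses.
#define SYS_slob_stats 333

int main()
{
    // Claim many small blocks so the last-100-allocations window is filled
    // with requests of a known size.
    for (int i = 0; i < 10000; i++) {
        char *a = (char *) malloc(20 * sizeof(char));
        if (a == NULL)
            printf("NULL\n");
    }

    long claimed = 0, freemem = 0;
    // Hypothetical: ask the kernel for the stats gathered in slob.c.
    if (syscall(SYS_slob_stats, &claimed, &freemem) == 0)
        printf("claimed = %ld, free = %ld\n", claimed, freemem);
    return 0;
}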
So after I Bcast the data (a clusters[10][5] 2D array) to every other process, each one calculates its new local values, and I want to send them back to process 0.
But some of the data is sometimes missing (depending on the number of clusters), or the received data is not equal to what the sending processes hold.
I don't know why, but the most that sendcount and recvcount can be is the array size divided by size (or by some factor); they can't be the full array size (10, or 10*5 elements).
If I pass the full size, for instance clusters.length (10), I get an index-out-of-bounds error for index 19, and if I run with more processes (mpjrun.bat -np 11 name) the out-of-bounds index gets higher; it always goes up or down by 2 with a higher or lower number of processes (for example, with 5 processes I get out-of-bounds index 9, and on the next run with 6 processes I get 11).
Can someone explain why Gather's count is connected to the number of processes, or why it can't accept the array size?
Also, the program doesn't end after the data is calculated correctly. It only ends if I use 1 process; otherwise it goes out of the loop, prints something to the terminal, reaches MPI.Finalize(), and then nothing happens, so I have to Ctrl+C the bat job to get the terminal back.
The clusterget variable is sized as number of clusters * number of processes, so that it stores all the new clusters from the other processes and I can then use them all in the first process, so the problem isn't in the clusterget variable (or maybe it is?). There isn't really anything documented about sending a 2D array of floats (and yes, I need to use MPI.OBJECT, because Java doesn't like float here; if I use float it says Float can't be cast to Float).
MPI.COMM_WORLD.Bcast(clusters, 0, clusters.length, MPI.OBJECT, 0);

// calculate, then send the results back to process 0
MPI.COMM_WORLD.Gather(clusters, 0, clusters.length / size, MPI.OBJECT,
                      clusterget, 0, clusters.length / size, MPI.OBJECT, 0);

if (me == 0) {
    // adds the clusters from every other process to the first ones
    for (int j = 0; j < clusters.length; j++) {
        for (int i = 0; i < size - 1; i++) {
            System.out.println(clusterget[j + i * cluster][4] + " tock " + clusters[j][4]);
            clusters[j][2] += clusterget[j + i * cluster][2]; // add
            clusters[j][3] += clusterget[j + i * cluster][3];
            clusters[j][4] += clusterget[j + i * cluster][4];
        }
    }
}
In summary:
The data from each process isn't the same as the data collected after Gather, and I can't pass the full size of the 2D float array as the count.
I've changed Gather to Send and Recv and it works; I needed to add a Barrier so that the data is in sync before sending. But this only works for 2 processes.
MPI.COMM_WORLD.Barrier();

if (me != 0) {
    MPI.COMM_WORLD.Send(clusters, 0, clusters.length, MPI.OBJECT, 0, MPI.ANY_TAG);
}

if (me == 0) {
    // receive the clusters from every other process and accumulate them
    for (int i = 1; i < size; i++) {
        MPI.COMM_WORLD.Recv(clusterget, 0, clusters.length, MPI.OBJECT, i, MPI.ANY_TAG);
        for (int j = 0; j < clusters.length; j++) {
            clusters[j][2] += clusterget[j][2];
            clusters[j][3] += clusterget[j][3];
            clusters[j][4] += clusterget[j][4];
        }
    }
}
I am having trouble passing values between host code and kernel code because of some vector data types. The following code/explanation is just a reference for my problem; my actual code is much bigger and more complicated. With this small example I hope I can explain where I am having a problem. If anything more is needed, please let me know.
std::vector<std::vector<double>> output;  // sized/filled elsewhere
for (int i = 0; i < 2; i++)
{
    auto& out = output[i];
    sum = 0;
    for (int l = 0; l < 3; l++)
    {
        for (int j = 0; j < 4; j++)
        {
            if (/* some condition is true */)
            {
                out[j+l] = 0.;
            }
            sum += /* ...some addition... */;
        }
        out[j+l] = sum;
    }
}
Now I want to parallelize this code, starting from the second loop. This is what I have done in the host code:
cl::Buffer out(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, output.size(), &output, NULL);
Then I have set the arguments:
kernel.setArg(0, out);
Then the loop:
for (int i = 0; i < 2; i++)
{
    auto& out = output[i];
    // sending some more arguments (which change according to the loop) for the sum operations
    queue.enqueueNDRangeKernel(.......);
    queue.enqueueReadBuffer(....., &out, ...);
}
In Kernel Code:
__kernel void sumout(__global double* out, ....)
{
    int l = get_global_id(0);
    int j = get_global_id(1);

    if (/* some condition is true */)
    {
        out[j+l] = 0.;  // here this work-item leaves the kernel
        return;
    }
    sum += /* ...some addition... */;
    out[j+l] = sum;
}
So now, in the if condition, out[j+l] is set to 0 inside the loop, so the value of out changes regularly. In the normal code, out is a reference to a vector inside output. I am not able to read the values of output through out between my kernel and host code. I want output[i] to reflect every out[j+l]. But I am confused because of this mix of buffer and vector.
Just for more clarification: output is a vector of vectors, and out is a reference to one of output's inner vectors. I need the values in output to be updated for every change in out. Since these are vectors, I passed out as a cl::Buffer. I hope it is clear.
Please let me know if more code is required; I will try to provide as much as I can.
You are sending pointers to vectors to OpenCL (of course they are contiguous at the pointer level), but the data as a whole is not contiguous in memory, since each inner vector points to a different memory area. OpenCL cannot map such nested host pointers to device memory, and there is no command for that in this API.
You could use a vector of arrays (available in recent C++ versions) or plain arrays.
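As a rough sketch of that suggestion (assuming, purely for illustration, a fixed inner dimension of 4 and an already-created cl::Context named context), a vector of std::array keeps all the data in one contiguous block that a cl::Buffer can copy directly:

#include <CL/cl.hpp>
#include <array>
#include <vector>

// Sketch: contiguous host storage that OpenCL can copy in one piece.
void make_contiguous_buffer(cl::Context& context)
{
    // Unlike vector<vector<double>>, every element here lives in one block,
    // so output.data() points at a single contiguous range of doubles.
    std::vector<std::array<double, 4>> output(2);

    cl::Buffer out(context,
                   CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                   output.size() * sizeof(output[0]),   // size must be in bytes
                   output.data());
    // Inside the kernel this is just a flat double array: element (i, j) is out[i*4 + j].
}

If the inner size is not fixed at compile time, a single flat std::vector<double> of size rows*cols with the same manual index arithmetic works just as well.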
I'm trying to allocate multi-dimensional arrays using CUDA UMA on a Power 8 system. However, I'm having issues as the size gets bigger. The code I'm using is below. When the size is 24 x 24 x 24 x 5, it works fine. When I increase it to 64 x 64 x 64 x 8, I get "out of memory" even though I have memory available on my device. As far as I know, I should be able to allocate as much memory via UMA as the GPU device physically has, so I would not expect any error. Currently my main configuration is Power 8 with a Tesla K40, where I get a seg fault at runtime. However, I tried the code piece I provided on an x86 + K40 machine, and it surprisingly worked.
By the way, if you can tell me another way to do this, apart from transforming all my code from 4D arrays to a 1D array, I would really appreciate it.
Thanks in advance
Driver: Nvidia 361
#include <iostream>
#include <cuda_runtime.h>
void* operator new[] (size_t len) throw(std::bad_alloc) {
void *ptr;
cudaMallocManaged(&ptr, len);
return ptr;
}
template<typename T>
T**** create_4d(int a, int b, int c, int d){
T**** ary = new T***[a];
for(int i = 0; i < a; ++i)
{
ary[i] = new T**[b];
for(int j = 0; j < b; ++j){
ary[i][j] = new T*[c];
for(int k = 0; k < c; ++k){
ary[i][j][k] = new T[d];
}
}
}
return ary;
}
int main() {
double ****data;
std::cout << "allocating..." << std::endl;
data = create_4d<double>(32,65,65,5);
std::cout << "Hooreey !!!" << std::endl;
//segfault here
std::cout << "allocating..." << std::endl;
data = create_4d<double>(64,65,65,5);
std::cout << "Hooreey !!!" << std::endl;
return 0;
}
There's been a considerable amount of dialog on your cross-posting here, including an answer to your main question. I'll use this answer to summarize what is there, as well as to answer this question specifically:
By the way, if you can tell me another way to do this, apart from transforming all my code from 4D arrays to a 1D array, I would really appreciate it.
One of your claims is that you are doing proper error checking ("I caught error properly."). You are not. CUDA runtime API calls (including cudaMallocManaged) do not by themselves generate C++-style exceptions, so your throw specification on the new operator definition is meaningless. CUDA runtime API calls return an error code. If you want to do proper error checking, you must collect this error code and process it. If you collect the error code, you can use it to generate an exception if you wish; an example of how you might do that is contained in the canonical proper CUDA error checking question, in one of the answers by Jared Hoberock. As a result of this oversight, when your allocations eventually fail, you ignore the failure, and then when you attempt to use those (non-)allocated areas for subsequent pointer storage, you generate a seg fault.
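For illustration only (a sketch, not the code from the cross-posted answer), the new[] overload from the question could collect that return code and turn a failed managed allocation into an exception like this:

#include <cuda_runtime.h>
#include <cstddef>
#include <new>

// Sketch: same overload as in the question, but the cudaMallocManaged return
// code is actually collected and processed instead of being ignored.
void* operator new[](std::size_t len) {
    void *ptr = nullptr;
    cudaError_t err = cudaMallocManaged(&ptr, len);
    if (err != cudaSuccess)
        throw std::bad_alloc();   // a failed allocation is no longer silently ignored
    return ptr;
}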
The proximal reason for the allocation failure is that you are in fact running out of memory, as discussed in your cross-posting. You can confirm this easily enough with proper error checking. Managed allocations have a granularity, and so when you request allocations of relatively small amounts, you are in fact using more memory than you think - the small allocations you are requesting are each being rounded up to the allocation granularity. The size of the allocation granularity varies by system type, and so the OpenPower system you are operating on has a much larger allocation granularity than the x86 system you compared it to, and as a result you were not running out of memory on the x86 system, but you were on the Power system. As discussed in your cross-posting, this is easy to verify with strategic calls to cudaMemGetInfo.
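As a quick illustration (a sketch, assuming the device reports managed allocations through cudaMemGetInfo immediately, as it does on a pre-Pascal setup like this one), the rounding effect can be made visible by sampling free memory around a deliberately tiny managed allocation:

#include <cuda_runtime.h>
#include <iostream>

// Sketch: observe how much device memory one tiny managed allocation really consumes.
int main()
{
    size_t free_before = 0, free_after = 0, total = 0;
    cudaMemGetInfo(&free_before, &total);

    double *p = nullptr;
    cudaError_t err = cudaMallocManaged(&p, 5 * sizeof(double));   // ~40-byte request
    if (err != cudaSuccess) {
        std::cerr << cudaGetErrorString(err) << std::endl;
        return -1;
    }

    cudaMemGetInfo(&free_after, &total);
    std::cout << "device memory consumed by a 40-byte managed request: "
              << (free_before - free_after) << " bytes" << std::endl;

    cudaFree(p);
    return 0;
}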
From a performance perspective, this is a pretty bad approach to multidimensional allocations for several reasons:
1. The allocations you are creating are disjoint, connected by pointers. Therefore, accessing an element by pointer dereferencing requires 3 or 4 such dereferences to go through a 4-subscripted pointer array. Each of these dereferences involves a device memory access. Compared to using simulated 4-D access into a 1-D (flat) allocation, this will be noticeably slower. The arithmetic associated with converting the 4-D simulated access into a single linear index is much faster than traversing through memory via pointer chasing.
2. Since the allocations you are creating are disjoint, the managed memory subsystem cannot coalesce them into a single transfer; therefore, under the hood, a number of transfers equal to the product of your first 3 dimensions will take place at kernel launch time (and presumably at termination, i.e. at the next cudaDeviceSynchronize() call). This data must all be transferred of course, but you will be doing a large number of very small transfers, compared to a single transfer for a "flat" allocation. The associated overhead of the large number of small transfers can be significant.
3. As we've seen, the allocation granularity can seriously impact the memory usage efficiency of such an allocation scheme. What should only be using a small percentage of system memory ends up using all of system memory.
4. Operations that work on contiguous data from "row" to "row" of such an allocation will fail, because the allocations are disjoint. For example, such a matrix, or a subsection of such a matrix, could not reliably be passed to a CUBLAS linear algebra routine, as the expectation for that matrix is that its rows are stored contiguously in memory.
The ideal solution would be to create a single flat allocation and then use simulated 4-D indexing to create a single linear index. Such an approach would address all 4 concerns above; however, it may require substantial code refactoring.
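As a minimal sketch of that idea (variable names are illustrative, not taken from the question's code), the four subscripts collapse into one linear index like this:

#include <cuda_runtime.h>

// Sketch: one flat managed allocation plus simulated 4-D indexing.
__host__ __device__ inline size_t idx4(size_t i, size_t j, size_t k, size_t l,
                                       size_t B, size_t C, size_t D)
{
    return ((i * B + j) * C + k) * D + l;   // linear index into an A x B x C x D volume
}

int main()
{
    const size_t A = 64, B = 65, C = 65, D = 5;
    double *data = nullptr;
    cudaError_t err = cudaMallocManaged(&data, A * B * C * D * sizeof(double));
    if (err != cudaSuccess) return -1;

    // data[idx4(i, j, k, l, B, C, D)] plays the role of data[i][j][k][l]
    data[idx4(1, 2, 3, 4, B, C, D)] = 42.0;

    cudaFree(data);
    return 0;
}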
We can however come up with an alternate approach, which preserves the 4-subscripted indexing, but otherwise addresses the concerns in items 2, 3, and 4 above by creating a single underlying flat allocation.
What follows is a worked example. We will actually create 2 managed allocations: one underlying flat allocation for data storage, and one underlying flat allocation (regardless of dimensionality) for pointer storage. It would be possible to combine these two into a single allocation with some careful alignment work, but that is not required to achieve any of the proposed benefits.
The basic methodology is covered in various other CUDA questions here on the SO tag, but most of those have host-side usage (only) in view, since they did not have UM in view. However, UM allows us to extend the methodology to host- and device-side usage. We will start by creating a single "base" allocation of the necessary size to store the data. Then we will create an allocation for the pointer array, and we will then work through the pointer array, fixing up each pointer to point to the correct location in the pointer array, or else to the correct location in the "base" data array.
Here's a worked example, demonstrating host and device usage, and including proper error checking:
$ cat t1271.cu
#include <iostream>
#include <assert.h>
template<typename T>
T**** create_4d_flat(int a, int b, int c, int d){
T *base;
cudaError_t err = cudaMallocManaged(&base, a*b*c*d*sizeof(T));
assert(err == cudaSuccess);
T ****ary;
err = cudaMallocManaged(&ary, (a+a*b+a*b*c)*sizeof(T*));
assert(err == cudaSuccess);
for (int i = 0; i < a; i++){
ary[i] = (T ***)((ary + a) + i*b);
for (int j = 0; j < b; j++){
ary[i][j] = (T **)((ary + a + a*b) + i*b*c + j*c);
for (int k = 0; k < c; k++)
ary[i][j][k] = base + ((i*b+j)*c + k)*d;}}
return ary;
}
template<typename T>
void free_4d_flat(T**** ary){
if (ary[0][0][0]) cudaFree(ary[0][0][0]);
if (ary) cudaFree(ary);
}
template<typename T>
__global__ void fill(T**** data, int a, int b, int c, int d){
unsigned long long int val = 0;
for (int i = 0; i < a; i++)
for (int j = 0; j < b; j++)
for (int k = 0; k < c; k++)
for (int l = 0; l < d; l++)
data[i][j][k][l] = val++;
}
void report_gpu_mem()
{
size_t free, total;
cudaMemGetInfo(&free, &total);
std::cout << "Free = " << free << " Total = " << total <<std::endl;
}
int main() {
report_gpu_mem();
unsigned long long int ****data2;
std::cout << "allocating..." << std::endl;
data2 = create_4d_flat<unsigned long long int>(64, 63, 62, 5);
report_gpu_mem();
fill<<<1,1>>>(data2, 64, 63, 62, 5);
cudaError_t err = cudaDeviceSynchronize();
assert(err == cudaSuccess);
std::cout << "validating..." << std::endl;
for (int i = 0; i < 64*63*62*5; i++)
if (*(data2[0][0][0] + i) != i) {std::cout << "mismatch at " << i << " was " << *(data2[0][0][0] + i) << std::endl; return -1;}
free_4d_flat(data2);
return 0;
}
$ nvcc -arch=sm_35 -o t1271 t1271.cu
$ cuda-memcheck ./t1271
========= CUDA-MEMCHECK
Free = 5904859136 Total = 5975900160
allocating...
Free = 5892276224 Total = 5975900160
validating...
========= ERROR SUMMARY: 0 errors
$
Notes:
This still involves pointer chasing inefficiency. I don't know of a method to avoid that without removing the multiple subscript arrangement.
I've elected to use 2 different indexing schemes in host and device code. In device code, I am using a normal 4-subscripted index, to demonstrate the utility of that. In host code, I am using a "flat" index, to demonstrate that the underlying storage is contiguous and contiguously addressable.
I have two questions about the memory fetch size of an OpenCL kernel.
In the kernel, I just put a for-loop with a printf statement inside it.
Currently, I am passing two arguments: a float array and an integer that determines the number of iterations of the for-loop.
However, when I passed 0 for the number of iterations (in other words, I made the kernel do nothing), the kernel still fetched 0.3125 kilobytes.
Why does the kernel fetch that amount of memory?
My other question is: when a cache miss occurs, why does the kernel fetch less memory than the size of a cache line?
Thanks!
Edit
Here is my kernel
__kernel void sequence(__global float *input, const unsigned int N) {
    size_t id = get_global_id(0);
    for (int i = 0; i < N; i++) {
        printf("Val: %f \n", input[i]);
    }
}
I'm currently working on a project using OpenCL on an NVIDIA Tesla C1060 (driver version 195.17). However, I'm getting some strange behaviour I can't really explain. Here is the code that puzzles me (reduced for clarity and testing purposes):
kernel void TestKernel(global const int* groupOffsets, global float* result,
                       local int* tmpData, const int itemcount)
{
    unsigned int groupid = get_group_id(0);
    unsigned int globalsize = get_global_size(0);
    unsigned int groupcount = get_num_groups(0);

    for(unsigned int id = get_global_id(0); id < itemcount; id += globalsize, groupid += groupcount)
    {
        barrier(CLK_LOCAL_MEM_FENCE);
        if(get_local_id(0) == 0)
            tmpData[0] = groupOffsets[groupid];
        barrier(CLK_LOCAL_MEM_FENCE);
        int offset = tmpData[0];
        result[id] = (float) offset;
    }
}
This code should load the offset for each work-group into local memory and then read it back and write it into the corresponding output vector entry. For most work-items this works, but in each work-group the work-items with local ids 1 to 31 read an incorrect value.
My output vector (for a work-group size of 128) is as follows:
index 0: 0
index 1- 31: 470400
index 32-127: 0
index 128: 640
index 129-159: 471040
index 160-255: 640
index 256: 1280
index 257-287: 471680
index 288-511: 1280
...
The output I expected would be:
index 0-127: 0
index 128-255: 640
index 256-511: 1280
...
The strange thing is: the problem only occurs when I use fewer than itemcount work-items (so it works as expected when globalsize >= itemcount, meaning that every work-item processes exactly one entry). So I'm guessing it has something to do with the loop.
Does anyone know what I'm doing wrong and how to fix it?
Update:
I found out that it seems to work if I change
if(get_local_id(0) == 0)
tmpData[0] = groupOffsets[groupid];
to
if(get_local_id(0) < 32)
tmpData[0] = groupOffsets[groupid];
This astonishes me even more, so while it might fix the problem, I don't feel comfortable fixing it this way (it might break again some other time).
Besides, I would rather avoid losing performance when running on GeForce 8xxx class hardware due to the additional (uncoalesced on that hardware, as far as I understand) memory accesses.
So the question still remains.
Firstly, and importantly, you need to be careful that itemcount is a multiple of the local work size to avoid divergence when executing the barrier.
All work-items in a work-group executing the kernel on a processor must execute this function before any are allowed to continue execution beyond the barrier. This function must be encountered by all work-items in a work-group executing the kernel.
You could implement this as follows:
// round itemcount up to the next multiple of the work-group size, so that every
// work-item in a group executes the same number of loop iterations (and barriers)
unsigned int itemcountrounded = get_local_size(0) * ((itemcount + get_local_size(0) - 1) / get_local_size(0));
for(unsigned int id = get_global_id(0); id < itemcountrounded; id += globalsize, groupid += groupcount)
{
    // ...
    if (id < itemcount)          // only work-items that map to real entries write a result
        result[id] = (float) offset;
}
You said the code was reduced for simplicity; what happens if you run exactly what you posted? I'm just wondering whether you need to put the barrier on global memory as well.