free() for 2D malloc array not freeing memory?

I am writing code on WSL1 and scratching my head over a very confusing problem.
First, I allocate ~10 GB of RAM using malloc:
int** megaarray;
megaarray = (int**)malloc( x*sizeof(int*) * y * count );
for(int i=0; i<x * count; i++){ // x = 3200, count = 1000, both integers
    megaarray[i] = (int*)malloc( y*sizeof(int) ); // y = 4200, integer
}
Allocation goes well, and I use that segment of RAM, then at the end of some computing I try to de-allocate:
for(int i=0; i<x * count; i++){
    free(megaarray[i]);
}
free(megaarray);
I kept getting a crash the second time the function containing the above code was used, so I put a sleep between each allocation/de-allocation, only to find that de-allocation is simply not happening! What's going on?

Such weird behaviour: on WSL1, when count is 1000, freeing doesn't quite work and the code crashes.
On a full Linux x64 machine, freeing works (judging from sampling), but the code crashes right away.
On WSL1, the code works perfectly fine when count is brought down to 715; on full Linux, the code runs perfectly, but if and only if count is brought down to 600.
I am guessing this has to do with buffering and the maximum RAM allowed per application.

Okay, I found the issue, and it is the dumbest mistake on my side. I passed the wrong size to malloc:
megaarray=(int**)malloc( x*sizeof(int*) * y * count );
is actually 4 bytes times the memory I need. I would have ended up using 4 * 1000 * 3200 * 4200 / 1000000000 ~= 54 GB!
Reducing count to 600: ~31 GB; my systems have 32 GB.
Reducing count to 715: ~35 GB; this should not have worked...
It should actually be:
megaarray=(int**)malloc( x * y * count );
and the code works across all of the platforms.
By getting rid of the extra sizeof(int*) factor, I use ~13 GB even with 1000 counts.
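For reference, here is a minimal sketch of the conventional sizing (same x, y, count as above): the pointer table only needs x * count entries of sizeof(int*), i.e. a few megabytes, with the bulk of the memory going into the rows themselves.

/* Sketch, not the original code: size the pointer table by the
 * number of rows (x * count), not by x * y * count bytes. */
int** megaarray = (int**)malloc( (size_t)x * count * sizeof(int*) );
for(int i=0; i<x * count; i++){
    megaarray[i] = (int*)malloc( (size_t)y * sizeof(int) );
}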
Now, the questions left standing:
How did this work at least half of the way on WSL?!
Why is WSL1 not "freeing" RAM back to Windows when the memory is freed, and why does it eat up more RAM instead of reusing the same pre-allocated sections when allocating more?

Related

Optimization in R: Levenberg-Marquardt using nls.lm in minpack.lm: resetting `maxiter' to 1024

I am trying to learn how to work with nls.lm in the R library minpack.lm by using the Rosenbrock function to see if the algorithm converges to the global minimum at (x, y) = (1, 1). I do so both with and without the analytic Jacobian. In both instances, I get a warning telling me that the algorithm has decided to reset the maximum number of iterations specified in the call to nls.lm to 1024:
Warning messages:
1: In nls.lm(par = initpar, fn = objective_rosenbrock, jac = gradient_rosenbrock, :
resetting `maxiter' to 1024!
2: In nls.lm(par = initpar, fn = objective_rosenbrock, jac = gradient_rosenbrock, :
lmder: info = -1. Number of iterations has reached `maxiter' == 1024.
As a result, the algorithm never quite reaches (1, 1) given my initial guess of (-1.2, 1.0). I found the source code for the library on GitHub, and the following lines of code are pertinent here:
https://github.com/cran/minpack.lm/blob/master/src/nls_lm.c
OS->maxiter = INTEGER_VALUE(getListElement(control, "maxiter"));
if(OS->maxiter > 1024) {
    OS->maxiter = 1024;
    warning("resetting `maxiter' to 1024!");
}
Is there any logic to why the maximum number of iterations is capped at 1024? Something with bits and 2^10? I would like to use the library for a different application, but this cap on iterations might prevent that. Any insight would be appreciated.
Git blame says that this code limiting the max iterations was introduced in version 1.1-0, in 2008. The NEWS file for the package only goes back as far as version 1.1-6. I can't find the code in any public repo other than the one you point to (which is only a CRAN mirror; it doesn't contain any comments/commit messages/etc. from developers that might give us clues.)
Other than contacting the maintainer I think it's going to be hard to figure out what the rationale is for this limit.
I do have some guesses though.
The only places that maxiter is actually used in the code are here and here - in R code, not Fortran or C code, so it seems extremely unlikely that we are dealing with something like a 10-bit unsigned integer type (which seems an unlikely choice in any case). I think the limitation is there because we also have a buffer defined for holding trace information here:
double rsstrace[1024];
which, as you can see, is hard-coded to a length of 1024. Presumably bad things would happen if we tried to stuff 1025 iterations'-worth of tracing information into this array ...
My suggestions:
change all instances of '1024' in the code to something larger and see what happens. There are only four:
$ find . -type f -exec grep -Hn 1024 {} \;
./src/nls_lm.c:141: if(OS->maxiter > 1024) {
./src/nls_lm.c:142: OS->maxiter = 1024;
./src/nls_lm.c:143: warning("resetting `maxiter' to 1024!");
./src/minpack_lm.h:20: double rsstrace[1024];
it would be best to #define MAXITER 2048 (or whatever) in src/minpack_lm.h and use that instead of the numerical value (see the sketch after this list).
Contact the maintainer (maintainer("minpack.lm")) and ask them about this issue.
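For instance, a rough sketch of the first suggestion (file and variable names from the CRAN mirror above; untested):

/* src/minpack_lm.h -- define the cap once and size the trace buffer from it */
#define MAXITER 2048
double rsstrace[MAXITER];

/* src/nls_lm.c -- use the macro instead of the literal 1024 */
if(OS->maxiter > MAXITER) {
    OS->maxiter = MAXITER;
    warning("resetting `maxiter' to %d!", MAXITER);
}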

Complete R Session Size

Because I constantly reach the memory size limit in my R session (8 GB Windows PC), I have started removing big objects I have loaded. However, once I reach this limit, removing objects seems not to work.
So, I was wondering if there's a way to get the R session size. I know that it's possible to retrieve an object's size (as seen in this thread). I want to know if there's a way to measure the complete R session size, though (loaded packages, objects, etc.).
Thank you!
I personally use this function to get the available memory:
getAvailMem <- function(format = TRUE) {
  gc()
  if (Sys.info()[["sysname"]] == "Windows") {
    memfree <- 1024^2 * (utils::memory.limit() - utils::memory.size())
  } else {
    # http://stackoverflow.com/a/6457769/6103040
    memfree <- 1024 * as.numeric(
      system("awk '/MemFree/ {print $2}' /proc/meminfo", intern = TRUE))
  }
  `if`(format, format(structure(memfree, class = "object_size"),
                      units = "auto"), memfree)
}
To get the total memory used by R, you may try mem_used() from the pryr package. Unlike memory.size, this one is not OS-dependent, because it uses the R function gc() underneath. Try looking at the function body, and also at pryr:::node_size and pryr:::show_bytes.
pryr::mem_used()
The help file ?pryr::mem_used describes:
R breaks down memory usage into Vcells (memory used by vectors) and
Ncells (memory used by everything else). However, neither this
distinction nor the "gc trigger" and "max used" columns are typically
important. What we're usually most interested in is the first
column: the total memory used. This function wraps around gc() to
return the total amount of memory (in megabytes) currently used by R.
You can also use pryr::mem_change to track the size of the memory used by the R code. Try the example in its documentation page.
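For instance, a quick sketch of both functions:

library(pryr)
mem_used()                 # total memory currently used by R
mem_change(x <- rnorm(1e6))  # memory allocated by creating x (~8 MB)
mem_change(rm(x))            # memory released by removing x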
The numbers such as 28L and 56L used for the node size in pryr:::node_size come from the help file of ?gc, which describes:
gc returns a matrix with rows "Ncells" (cons cells), usually 28 bytes
each on 32-bit systems and 56 bytes on 64-bit systems, and "Vcells"
(vector cells, 8 bytes each),
After removing a large object, run gc() to free memory.
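For example (big_object standing in for whatever large object you loaded):

rm(big_object)  # drop the reference
gc()            # trigger garbage collection so the memory is actually released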

XC8 builds font tables at top of ROM

I wrote a barebones program template in XC8 (1.37) that I use to develop and test new GLCD functions for the 18F family. Programming is done via a PICkit3. Since I need to quickly reprogram the code several times, it is really important that programming is as fast as possible.
Typically, the code size is around 2K and it takes less than 10 seconds to program.
Everything is fine until I need to use a font table, defined as:
const char font8[] = {....
Now, with just $400 (1024) bytes added, the compiler places the table at the end of ROM, and programming the 64K memory takes more than 1 minute.
Is there any way to avoid this?
I tried to manually limit the memory range in the MPLABX options, but this is annoying and a little unsafe (sometimes part of the code is truncated).
A while back I had to write some code for emissions testing, where I needed to copy data between extreme ends of RAM. To do that I needed to specify the exact memory addresses. You can also use the C extension __at() construct. http://ww1.microchip.com/downloads/en/DeviceDoc/50002053F.pdf#page=27
int scanMode __at(0x200);
const char keys[] __at(123) = { 'r', 's', 'u', 'd' };
int modify(int x) __at(0x1000) {
    return x * 2 + 3;
}
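Applied to the font table from the question, a sketch might look like this (0x0800 is just an assumed free address near the start of program memory; adjust it to your device's memory map so only the low pages need reprogramming):

/* Pin the table low in program memory instead of at the end of ROM.
   0x0800 is an assumption - check your linker map for a free region. */
const char font8[] __at(0x0800) = { /* font data */ };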

Parallel copy and opencl kernel execution

I would like to implement an image filtering algorithm using OpenCL but the image size is very large (4096 x 4096). I understand that the copy time to the OpenCL device may take too long.
Do you think it makes sense to address this problem by using a parallel copy in combination with OpenCL kernel execution?
E.g., below is my approach:
1) Split the full image into 2 parts.
2) Copy the first half to the device.
3) Execute the image filtering kernel on the device, then copy the 2nd half of the image to the device.
4) Block the kernel execution until the first half completes, then call the kernel again to process the 2nd part.
5) Block until the 2nd part finishes.
The OpenCL thread of execution is completely independent of your application, so there is no need to "wait" after each call. Just flush all the orders to OpenCL and it should schedule them properly.
The only requirement is to have 2 queues, in order to be able to run commands in parallel: an I/O queue and an execution queue. A single queue (even in out-of-order mode) can never run 2 operations in parallel.
Here is one example approach with events; you can call clFlush() on the queues just after doing the enqueues in order to speed them up.
//Create the 2 queues (at creation only!)
mQueueIO  = cl::CommandQueue(context, device[0], 0);
mQueueRun = cl::CommandQueue(context, device[0], 0);

//Every time you run your image filter:
//Queue the 2 writes
cl::Event wev1; //Event to know when the first write finishes
mQueueIO.enqueueWriteBuffer(ImageBufferCL, CL_FALSE, 0, size/2, imageCPU, NULL, &wev1);
cl::Event wev2; //Event to know when the second write finishes
mQueueIO.enqueueWriteBuffer(ImageBufferCL, CL_FALSE, size/2, size/2, imageCPU + size/2, NULL, &wev2);

//Queue the 2 runs (with the proper dependencies)
std::vector<cl::Event> wait;
wait.push_back(wev1);
cl::Event ev1; //Event to track the finish of the first run command
mQueueRun.enqueueNDRangeKernel(kernel, cl::NDRange(0), cl::NDRange(size/2), cl::NDRange(localsize), &wait, &ev1);
wait[0] = wev2;
cl::Event ev2; //Event to track the finish of the second run command
mQueueRun.enqueueNDRangeKernel(kernel, cl::NDRange(size/2), cl::NDRange(size/2), cl::NDRange(localsize), &wait, &ev2);

//Read each half back once its run has finished
std::vector<cl::Event> rev(2);
wait[0] = ev1;
mQueueIO.enqueueReadBuffer(ImageBufferCL, CL_FALSE, 0, size/2, imageCPU, &wait, &rev[0]);
wait[0] = ev2;
mQueueIO.enqueueReadBuffer(ImageBufferCL, CL_FALSE, size/2, size/2, imageCPU + size/2, &wait, &rev[1]);
rev[0].wait();
rev[1].wait();
Notice how I create 2 events for writing, which are the wait events of the execution, and 2 events for execution, which are the wait events for reading.
In the last part I create another 2 events for reading, but they are not really needed; you could use a blocking read instead.
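For instance, a sketch of the blocking variant (same names as in the code above):

// Blocking reads (CL_TRUE): no read events needed; each call returns
// only once its half of the data is available in imageCPU.
wait[0] = ev1;
mQueueIO.enqueueReadBuffer(ImageBufferCL, CL_TRUE, 0, size/2, imageCPU, &wait);
wait[0] = ev2;
mQueueIO.enqueueReadBuffer(ImageBufferCL, CL_TRUE, size/2, size/2, imageCPU + size/2, &wait);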
Try using out-of-order queues - most implementations' hardware should support them. You'll want to use the global offset parameter in your kernels along with global_id where applicable. At some point you will get diminishing returns with a division strategy like this, but there should exist a number of splits that gives a good payoff in latency reduction - I would guess somewhere in [2, 100] is a good interval to brute-force profile. Be aware that only one kernel can write to a memory buffer at a time, and make sure the input buffer is const (read-only). Be aware also that you must merge the results from the N buffer splits into one output in a single kernel - this means you will effectively write all pixels twice to GDS. OpenCL 2.0 may be able to save us all these divided writes with its image types, if you are able to use it.
// Out-of-order host queue (CL_QUEUE_ON_DEVICE dropped: that flag creates
// a device-side queue, which the host cannot enqueue into)
cl::CommandQueue queue(context, device, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE);
cl::Event last_event;
std::vector<cl::Event> events;
// Initialize with however many splits you have; ensure there is at least
// enough for what is written, and perhaps update the kernel to only write
// to its relative region.
std::vector<cl::Buffer> output_buffers;
// You might approach finer granularity with even more splits -
// just make sure the kernel is using the global offset,
// in which case adjust this code into a loop.
set_args(kernel, image_input, output_buffers[0]);
queue.enqueueNDRangeKernel(kernel, cl::NDRange(0, 0),
    cl::NDRange(cols * local_size[0], (rows/2) * local_size[1]),
    cl::NDRange(local_size[0], local_size[1]), NULL, &last_event);
events.push_back(last_event);
set_args(kernel, image_input, output_buffers[1]);
queue.enqueueNDRangeKernel(kernel, cl::NDRange(0, (rows/2) * local_size[1]),
    cl::NDRange(cols * local_size[0], (rows - rows/2) * local_size[1]),
    cl::NDRange(local_size[0], local_size[1]), NULL, &last_event);
events.push_back(last_event);
// The merge kernel must wait for both split kernels to finish
set_args(merge_buffers_kernel, output_buffers...);
queue.enqueueNDRangeKernel(merge_buffers_kernel, cl::NDRange(),
    cl::NDRange(cols * local_size[0], rows * local_size[1]),
    cl::NDRange(local_size[0], local_size[1]), &events, &last_event);
last_event.wait();

Queued kernels slower than expected on AMD GPUs only

I am performing a benchmark as shown below:
CHECK( context = clCreateContext(props, 1, &device, NULL, NULL, &_err); );
CHECK( queue = clCreateCommandQueue(context, device, 0, &_err); );

#define SYNC() clFinish(queue)
#define LAUNCH(glob, loc, kernel) OCL(clEnqueueNDRangeKernel(queue, kernel, 2,\
                                                             NULL, glob, loc,\
                                                             0, NULL, NULL))
/* Build program, set arguments over here */

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, plus_kernel);
}
SYNC();
STOP;
printf("Time taken (plus) : %lf\n", uSec / iter);

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, minus_kernel);
}
SYNC();
STOP;
printf("Time taken (minus): %lf\n", uSec / iter);

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, plus_kernel);
    LAUNCH(global, local, minus_kernel);
}
SYNC();
STOP;
printf("Time taken (both) : %lf\n", uSec / iter);
The results look weird:
Time taken (plus) : 31.450000
Time taken (minus): 28.120000
Time taken (both) : 2256.380000
START and STOP are just macros that start and stop a timer.
I am not sure why queuing up the kernels is slowing them down (and only on AMD GPUs)!
EDIT: I am using a Radeon 7970.
EDIT: Both kernels are operating on independent memory. Here is the system information:
OS: Ubuntu 11.10
fglrxinfo:
display: :0 screen: 0
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon HD 7900 Series
OpenGL version string: 4.2.11762 Compatibility Profile Context
I think the answer has to do with caching of data on newer GPUs (specifically the Radeon 7970, which uses the Graphics Core Next (GCN) architecture).
One of the advantages of this architecture is its caching capabilities (somewhat close to CPU caching at this point). If you perform calls like this:
PLUS
PLUS
PLUS
....
then the memory stays resident in the inner caches of the GPU. On the other hand, if you make calls like this:
PLUS
MINUS
PLUS
MINUS
...
where the two kernels have different memory objects associated with them, then the data is kicked out of the caches on each CU, and needs to be brought back in from the very sluggish global memory.
Two easy ways to test if this is the case:
Run only Pluses with varying numbers of iterations (a sketch follows this list). As the number of iterations increases, the average time will go down, because the cost of the first run (which brings the data in) is amortized. Also, you should notice that all calls after the first take relatively equal time.
Make the Plus and Minus kernels run on the same memory objects. If the reason for the slowdown is the caching of memory objects, then the overall run time should be the average of the individual running times of PLUS and MINUS (depending perhaps on experiment 1).
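A sketch of the first experiment, reusing the macros from the question (the iteration counts are arbitrary):

/* Only PLUS, with varying iteration counts: if caching is the cause,
   the per-iteration average should drop as iter grows and then level off. */
for (int iter = 1; iter <= 1024; iter *= 2) {
    START;
    for (int i = 0; i < iter; i++) {
        LAUNCH(global, local, plus_kernel);
    }
    SYNC();
    STOP;
    printf("iter = %4d, avg time: %lf\n", iter, uSec / iter);
}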
Let me know if you find out if this is actually the case!
