I am trying to write a simple vecAdd function in OpenCL for the very first time, and I am having problems with clEnqueueReadBuffer, which fails with CL_INVALID_MEM_OBJECT, an error code normally associated with clSetKernelArg. I have done some hunting and found that this is often caused by freeing the memory object prematurely. As far as I can tell I am not doing that, but the problem persists.
I am running a very long simulation, and in about 1 in every 20 iterations a model fails to converge, which brings down the whole run.
I know I can wrap the offending model in try() to skip past it in the event of an error, but I was wondering whether this could be extended into a conditional: if an error occurred, it would execute another short script instead of the code that caused the error. Something like an if statement for errors.
Thank you all.
You can use the tryCatch() function; a tutorial can be found here:
https://www.r-bloggers.com/2012/10/error-handling-in-r/
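A minimal sketch of the error-handler pattern; run_model() and fallback_model() here are hypothetical stand-ins for the failing model fit and the short replacement script from the question:

```r
# Hypothetical model that fails on every 20th iteration, and a
# fallback that supplies a placeholder result instead.
run_model <- function(i) {
  if (i %% 20 == 0) stop("model failed to converge")
  i^2
}
fallback_model <- function(i) NA_real_

results <- vapply(1:40, function(i) {
  tryCatch(
    run_model(i),                 # the code that may error
    error = function(e) {
      message("iteration ", i, " failed: ", conditionMessage(e))
      fallback_model(i)           # runs only when an error was raised
    }
  )
}, numeric(1))
```

The error = handler receives the condition object, so you can log conditionMessage(e) and return whatever value the rest of the simulation should use in place of the failed fit.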
I am making a simulation with NetLogo and its R extension. I have made a supply chain model with distributors and consumers. Consumers place orders with distributors, and distributors forecast future demand and place orders with suppliers in advance to fulfill market demand. The forecast is implemented with the R extension (https://ccl.northwestern.edu/netlogo/docs/r.html) by calling the elmNN package. The model works fine when simply using "go".
However, when I want to conduct experiments using BehaviorSpace, I keep getting errors. If I run only a few ticks in BehaviorSpace, the model works fine, but when I try to run a few hundred ticks, BehaviorSpace keeps crashing. For example: "Extension exception: Error in R-extension: error in eval, operator is invalid for atomic vector", "Extension exception: Error in R-extension: error in eval: cannot have attributes on CHARSXP". Sometimes the run simply crashes without any error.
I assume the errors are related to compatibility issues between NetLogo, R, the R extension, and Java. I am using NetLogo 5.3.1 (64-bit), R 3.3.3 (64-bit), and rJava 0.9-8.
Model example: https://www.youtube.com/watch?v=zjQpPBgj0A8
A similar question was posted previously, but it has no answer: NetLogo BehaviorSpace crashing when using R extension
The problem was a programming style that is not safe for BehaviorSpace. BehaviorSpace runs experiments in parallel, so some variables were being overwritten with new information mid-run. When I set "Simultaneous runs in parallel" to 1 in BehaviorSpace, everything worked fine.
A few times when modifying large objects (~5 GB) on a Windows machine with 30 GB of RAM, I have been receiving the error
Reached total allocation of 31249Mb: see help(memory.size)
However, the process seems to complete, i.e. I get a file with what look like the right values. Checking every bit of a large file for exactly the right returns by cutting it up and comparing it against the correct section is time consuming, but when I have done it, the returned objects appear consistent with my expectations.
What risks/side effects can I expect from this error? What should I be checking? Is the process automatically recovering (since I am getting back the returns I expect), or will the errors be more subtle? My entire analysis is written using the tidyverse; does this mean I can rely on good error handling from Hadley et al., and is that why my process warns but still completes?
N.B. I have not included an attempt at an MWE, as every machine will have different memory limitations, though I am happy to be shown an MWE for this kind of process if there are suggestions.
Use memory.limit(size = x), where x is the amount of memory in MB to make available.
See link for more details:
Increasing (or decreasing) the memory available to R processes
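For illustration, a sketch assuming a Windows build of R (memory.limit() is Windows-only, and was made defunct in R 4.2.0, where the limit is managed by the OS instead):

```r
# Query the current memory cap, in MB (Windows builds of R only).
memory.limit()

# Raise the cap to roughly 32 GB. Values above the physical RAM
# simply allow R to spill into the page file, which can be slow.
memory.limit(size = 32000)
```

On R 4.2.0 and later this function only emits a warning, so for current versions you would control memory via the operating system (or the R_MAX_VSIZE environment variable on macOS) instead.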
Does a memory warning affect my R analysis?
When running a large data analysis script in R I get a warning something like:
In '...': reached total allocation of ___Mb: see help...
But my script continues without error, just the warning. With other data sets I get an error something like:
Error: cannot allocate vector of size ___Mb:
I know the error breaks my data analysis, but is there anything wrong with just getting the warning? I have not noticed anything missing in my data set but it is very large and I have no good means to check everything. I am at 18000Mb allocated to memory and cannot reasonably allocate more.
Way back in the R 2.5.1 news I found this reference to memory allocation warnings:
malloc.c has been updated to version 2.8.3. This version has a
slightly different allocation strategy, and is likely to work a
little better close to address space limits but may give more
warnings about reaching the total allocation before successfully
allocating.
Based on this note, I hypothesize (without any advanced knowledge of the inner implementation) that the warning is given when the memory allocation call in R (malloc.c) failed an attempt to allocate memory. Multiple attempts are made to allocate memory, possibly using different methods, and possibly with calls to the garbage collector. Only when malloc is fairly certain that the allocation cannot be made will it return an error.
Warnings do not compromise existing R objects. They just inform the user that R is nearing the limits of computer memory.
(I hope a more knowledgeable user can confirm this...)
When I try to calculate Gest in spatstat I get the error:
bootstrap output matrix missing.
Does anyone know what I am doing wrong?
I think that "bootstrap output matrix missing" is a fairly generic error, and (unless someone has explicit experience with your case) I would imagine that more information is needed to solve this.
Without more information, I would suggest that you debug the Gest function. You have two good options for that:
1) Use the debug() function:
debug(Gest)
Now run your code. You can then step through the Gest function and see where it breaks. Before that point, look at the environment variables (for instance, using ls()) and see if any assumptions are broken. Presumably something isn't being set correctly.
2) Use recover:
options(error = recover)
Then you will go into browser mode whenever the error occurs, and you can explore the workspace at that point.
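Putting the two options together as a sketch (assuming spatstat is installed; the undebug() and options(error = NULL) calls restore the defaults afterwards):

```r
library(spatstat)  # assumes spatstat is installed and Gest is the failing call

# Option 1: single-step through Gest line by line.
debug(Gest)
# ...run the code that triggers the error; inside the debugger,
# inspect the local workspace with ls(), str(), etc.
undebug(Gest)          # turn single-stepping off again

# Option 2: drop into the browser at the moment any error is raised.
options(error = recover)
# ...run the failing code; recover() lists the call frames, and you
# pick a frame number to browse the workspace at that point.
options(error = NULL)  # restore the default error behaviour
```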
This error message does not originate from the spatstat package.
To identify the location of the error, type traceback() immediately after the error has occurred. This gives you the list of nested calls that were being executed when the error was raised; the top entry is the location that raised it.
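A toy illustration of what traceback() reports, using hypothetical functions f() and g():

```r
# f() calls g(), and g() raises the error.
g <- function() stop("something went wrong")
f <- function() g()

f()           # Error in g() : something went wrong
traceback()   # lists the nested calls, innermost first: g(), then f()
```

Reading the list from the top tells you which function actually raised the error, which you can then inspect with debug() or options(error = recover).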