I was running a spatstat envelope to generate simulation samples; however, it got stuck and did not finish running. So I attempted to close the application, but that failed.
RStudio diagnostic log
Additional error message:
This application has requested the Runtime to terminate it in an
unusual way. Please contact the application's support team for more
information
There are several typing errors in the command shown in the question. The argument rank should be nrank and the argument glocal should be global. I will assume that these were typed correctly when you ran the command.
Since global=TRUE this command will generate 2 * nsim = 198 realisations of a completely random pattern and calculate the L function for each one of them. In my experience it should take only a minute or two to compute this, unless the geometry of the window is very complicated. One hour is really extraordinary.
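For reference, a corrected call would look roughly like the following. Here X stands in for whatever point pattern was used in the question, nsim = 99 is inferred from the 198 realisations mentioned above, and nrank = 1 is simply the default value:

    library(spatstat)

    # X is a placeholder for the point pattern from the question
    E <- envelope(X, Lest, nsim = 99, nrank = 1, global = TRUE)
    plot(E)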
So I'm guessing either you have a very complicated window (so that the edge correction calculation is taking a long time) or that RStudio is hanging somehow.
Try setting correction="border" or correction="none" and see if that makes it run faster. (These are the fastest choices.) If that works, then read the help for Lest or Kest about edge corrections, and choose an edge correction that you like. If not, then try running the same command in R instead of RStudio.
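To try the border correction, with the same placeholder pattern X as above, the faster variant would be:

    # the border correction is the cheapest edge correction to compute;
    # the correction argument is passed through to Lest
    E <- envelope(X, Lest, nsim = 99, nrank = 1, global = TRUE,
                  correction = "border")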
Related
I'm trying to figure out whether it is possible in R to run different lines of code depending on the situation you have. In my case we take in new data from a third party, and I would like to run a quality check on it at the midpoint of our calculations.
I would like to do the initial calculations I need, then check whether the data fits our needs and no irregularities are present. If the check detects something, I would like to move to lines 100-150 (calculate and print an error report); if everything is fine, go to lines 151-200 and finish the calculations.
I could of course run the code in separate pieces, but since errors are rare I'm afraid that people will just skip the quality checks. My plan is to make it really simple to run all the code, but if an error is present they can't get an end report; they will get an error report instead, and then they will check it.
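A minimal sketch of the control flow being described, with hypothetical helper functions (run_initial_calculations, passes_quality_check, build_error_report, finish_calculations) standing in for the real steps:

    # hypothetical helpers standing in for the actual calculation steps
    results <- run_initial_calculations(new_data)

    if (!passes_quality_check(results)) {
      # the "lines 100-150" branch: build and print an error report, then stop
      print(build_error_report(results))
      stop("Quality check failed; see the error report above.")
    } else {
      # the "lines 151-200" branch: finish the calculations and produce the end report
      final_report <- finish_calculations(results)
      print(final_report)
    }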
Does mxnet store data/models somewhere outside of R? I keep running into scenarios where the first NN run of the day will produce good results, and every following run (even of the exact same code) will produce NA/NaN for all training steps.
Example: https://github.com/xup6fup/MxNetR-examples/blob/master/1.%20Basic%20models/3.%20softmax%20regression/1.%20Standard%20example.R
I copied and pasted the code as is, ran it, and got about 70% accuracy. I noticed that the device was set to CPU, and I have the GPU version compiled, so I changed it to GPU and reran: all NaN. I cleared the R session workspace and reran the original code with CPU: all NA.
I restarted RStudio Server and reran the exact same code: all NA. It seems like SOMETHING is being stored outside of RStudio Server that interferes with subsequent FeedForward runs. I have this issue with multiple mxnet tutorials: they will often work the first time, but fail on subsequent runs, even with identical code.
If the library was compiled before Nov 12, 2017, then a bug that was present in the random initialisation for some time may be the cause: it resulted in the initialization weights all being nearly 0.
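If recompiling a newer build isn't an option, one thing worth trying (a sketch, not a confirmed fix) is passing an explicit initializer to the FeedForward call so the weights do not start out at nearly zero. The symbol and data names below are placeholders to be replaced with the tutorial's own variables, and the scale 0.07 is only an example:

    library(mxnet)

    model <- mx.model.FeedForward.create(
      symbol = softmax,                     # placeholder: the network symbol
      X = train.x, y = train.y,             # placeholder: the training data
      ctx = mx.cpu(),
      num.round = 10,
      array.batch.size = 100,
      learning.rate = 0.07,
      initializer = mx.init.uniform(0.07),  # explicit weight initialisation
      eval.metric = mx.metric.accuracy
    )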
I am making a simulation with NetLogo and the R extension. I have built a supply chain model with distributors and consumers. Consumers place orders with distributors, and distributors forecast future demand and place orders with suppliers in advance to fulfill market demand. The forecast is implemented with the R extension (https://ccl.northwestern.edu/netlogo/docs/r.html) by calling the elmNN package. The model works fine when simply using "go".
However, when I try to conduct experiments using BehaviorSpace, I keep getting errors. If I set only a few ticks in BehaviorSpace, the model works fine, but when I launch a few hundred ticks, BehaviorSpace keeps crashing. For example: "Extension exception: Error in R-extension: error in eval, operator is invalid for atomic vector", "Extension exception: Error in R-extension: error in eval: cannot have attributes on CHARSXP". Sometimes BehaviorSpace simply crashes without any error.
I assume that the errors are related to compatibility issues between NetLogo, R, the R extension, and Java. I am using NetLogo 5.3.1 64-bit; R 3.3.3 64-bit; rJava 0.9-8.
Model example: https://www.youtube.com/watch?v=zjQpPBgj0A8
A similar question was posted previously, but it has no answer: NetLogo BehaviorSpace crashing when using R extension
The problem was with a programming style that is not suitable for BehaviorSpace. BehaviorSpace runs experiments in parallel, so some variables were being overwritten with new information mid-run. When I set "Simultaneous runs in parallel" to 1 in BehaviorSpace, everything worked fine.
A few times, when modifying large objects (5 GB) on a Windows machine with 30 GB of RAM, I have been receiving the error
Reached total allocation of 31249Mb: see help(memory.size). However, the process seems to complete, i.e. I get a file with what look like the right values. Checking every bit of a large file for exactly the right output by cutting it up and comparing it to the correct section is time-consuming, but when I have done it, the returned objects appear to match my expectations.
What risks/side effects can I expect from this error? What should I be checking? Is the process automatically recovering because I'm getting back the returns I'm expecting, or are the errors going to be more subtle? My entire analysis process is written using the tidyverse; does this mean I can rely on good error handling from Hadley et al., and is that why my process warns but still completes?
N.B. I have not included an MWE, as every machine will have different memory limitations, though I am happy to be shown an MWE for this kind of process if there are suggestions.
Use memory.limit(x), where x is the amount of memory (in MB) that R is allowed to use.
See link for more details:
Increasing (or decreasing) the memory available to R processes
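For example (memory.limit is Windows-only, and the 40000 below is just an illustrative value, not a recommendation):

    # check the current limit (in MB)
    memory.limit()

    # raise the limit to roughly 40 GB, assuming enough RAM/page file is available
    memory.limit(size = 40000)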
I am running some large regression models in R in a grid computing environment. As far as I know, the grid just gives me more memory and faster processors, so I think this question would also apply to those using R on a powerful computer.
The regression models I am running have lots of observations, and several factor variables that have many (10s or 100s) of levels each. As a result, the regression can get computationally intensive. I have noticed that when I line up 3 regressions in a script and submit it to the grid, it exits (crashes) due to memory constraints. However, if I run it as 3 different scripts, it runs fine.
I'm doing some cleanup: after each model runs, I save the model object to a separate file, call rm(list=ls()) to clear all memory, then run gc() before the next model is run. Still, running all three in one script seems to crash, while breaking the job up works fine.
The sysadmin says that breaking it up is important, but I don't see why, if I'm cleaning up after each run; three in one script runs them in sequence anyway. Does anyone have an idea why running three individual scripts works, but running all the models in one script causes R to have memory issues?
thanks! EXL
Similar questions that are worth reading through:
Forcing garbage collection to run in R with the gc() command
Memory Usage in R
My experience has been that R isn't superb at memory management. You can try putting each regression in a function in the hope that letting variables go out of scope works better than gc(), but I wouldn't hold your breath. Is there a particular reason you can't run each in its own batch? More information as Joris requested would help as well.
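A rough sketch of that idea (the model formulas, data, and file names below are placeholders):

    # wrap each regression in a function so its temporaries go out of scope
    run_and_save <- function(formula, data, outfile) {
      fit <- lm(formula, data = data)  # or whatever model function you are using
      saveRDS(fit, outfile)
      invisible(NULL)                  # avoid returning the large model object
    }

    run_and_save(y ~ x1 + f1, dat, "model1.rds")
    gc()
    run_and_save(y ~ x2 + f2, dat, "model2.rds")
    gc()
    run_and_save(y ~ x3 + f3, dat, "model3.rds")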