NetLogo: create a plot of the average of various runs

I have been trying to create an average plot of various runs (if possible with their variation).
So far the only way I have found is to export the xls from BehaviorSpace and do it externally.
Is there a way to do this in NetLogo?
Many thanks for your help!

It is possible, but it is not entirely convenient. As a starting point, look at the "Simple Birth Rates" model from the NetLogo Models Library. In that model, the setup procedure is split in two: a basic setup, executed once when the model is initialized for the first time, and a second procedure, setup-experiment, executed between runs. This lets you control which things get cleared between runs (turtles, patches, plots, ...).
To perform multiple runs, the model uses a second go procedure, named go-experiment. This procedure runs the model (go) until a stop condition is true, then calls setup-experiment and continues with the next simulation run (go).
To store data for a plot, you only need to append the final result of interest from each run to a global list (right after the stop condition becomes true and right before setup-experiment prepares the next run). A plot on your interface can then summarize the data from the various runs. Just make sure that setup-experiment does not clear that global list, while still resetting all other globals (if any) to their initial state. A minimal sketch of this pattern follows.
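Here is a minimal, hypothetical sketch of that pattern. It assumes go is the model's usual step procedure; the stop condition (run-steps >= 500), the recorded measure (count turtles), and the plot name "Average of runs" are placeholders to substitute with your own:

globals [ run-results   ; one final result per completed run
          run-steps ]   ; steps taken in the current run

to setup                ; executed once, when the model is first loaded
  clear-all
  set run-results []
  setup-experiment
  reset-ticks
end

to setup-experiment     ; executed between runs; must NOT clear run-results
  clear-turtles
  set run-steps 0
  ; re-create turtles and reset any other run-specific globals here
end

to go-experiment
  go
  set run-steps run-steps + 1
  if run-steps >= 500 [                              ; placeholder stop condition
    set run-results lput count turtles run-results   ; placeholder measure
    set-current-plot "Average of runs"               ; plot defined on the interface
    plotxy length run-results mean run-results       ; running mean across runs
    setup-experiment
  ]
end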

Related

Trying to automate an R script that always runs against one dataset and conditionally against another

Very new to R and trying to modify a script to help my end users.
Every week a group of files is produced, and my modified script reaches out to the network, makes the necessary changes, and puts the files back, all nice and tidy. However, every quarter there is a second set of files that needs the EXACT same transformation. My thought was to check whether those files exist on the network with a file.exists statement, run through the script for them, and then continue with the normal weekly one. But my limited experience can only think of writing it this way ("lots of stuff" is a couple hundred lines), and I'm sure there's something I can do other than double the size of the program:
if (file.exists("quarterly.txt")) {
  # do lots of stuff
} else {
  # do lots of stuff (the same couple hundred lines again)
}
Both starja and lemonlin were correct: my solution was basically to turn my program into a function and create a program that calls the function with each dataset. I also skipped the 'else' portion of my if statement, which works perfectly (for me).
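A hypothetical sketch of that refactor (the function name process_files and the file names are placeholders for the real script):

process_files <- function(path) {
  # ... the couple hundred lines of transformation, parameterized by path ...
}

process_files("weekly.txt")        # the normal weekly run, always executed
if (file.exists("quarterly.txt")) {
  process_files("quarterly.txt")   # the quarterly run, only when the files exist
}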

Using Predefined Splits in PCR function R PLS package

In order to ensure a good population representation, I have created custom validation sets from my training data. However, I am not sure how to interface this with pcr in the R pls package.
I have tried to add a list in the segments argument with each index, similar to what you do with the predefined-splits CV iterator in Python's scikit-learn. It runs, but takes forever, so I feel I must be making an error somewhere:
pcr(y ~ X, scale = FALSE, data = tdata, validation = "CV", segments = test_fold)
where test_fold is a vector indicating which validation segment each sample belongs to.
For example, if the training data is composed of 9 samples and I want to use the first three as the first validation set, and so on:
test_fold <- c(1, 1, 1, 2, 2, 2, 3, 3, 3)
This runs, but it is very slow, whereas regular "CV" finishes in minutes. So far the results look okay, but I have over a thousand runs to do and it took 1 hr to get through one. So if anybody knows how I can speed this up, I would be grateful.
So the segments parameter needs to be a list of vectors, one vector per validation segment. Going again with 9 samples: if I want the first three in the first validation set, the next three in the second validation set, and so on, it should be
test_vec <- list(c(1, 2, 3), c(4, 5, 6), c(7, 8, 9))
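A minimal, self-contained sketch of the corrected call (the toy data below is made up to stand in for the question's tdata, which holds the real response y and predictor matrix X):

library(pls)

# Toy stand-in for the real training data: 9 samples, 5 predictors
set.seed(1)
tdata <- data.frame(y = rnorm(9))
tdata$X <- matrix(rnorm(9 * 5), nrow = 9)

# Each list element is one validation segment (row indices)
test_vec <- list(c(1, 2, 3), c(4, 5, 6), c(7, 8, 9))

fit <- pcr(y ~ X, scale = FALSE, data = tdata,
           validation = "CV", segments = test_vec)
summary(fit)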

Particle Swarm Optimization Calling Counts

While trying to use the pso or hydroPSO package from CRAN, I need to access the counts (current iteration, current number of function evaluations, and current restart) from within the function I've been writing. However, I can't seem to wrap my head around this. Any suggestions on how to access the current iteration/evaluation/restart within the objective function would be great. A piece of example code would be appreciated, as I seem to fail to fully understand the documentation.
Background:
My function requires the iteration number because it is a wrapper around some code written in FORTRAN, where the input files are generated and the output files are read back into R. I want the iteration number to survive so that I can return to the previous output files for further analysis. An example of this would be:
~/runs/<restart #>/<iteration>/<particle>/input/
~/runs/<restart #>/<iteration>/<particle>/output/
The wrapper function accepts the parameters, automatically generates the input files, runs the FORTRAN model, then parses the output and post-processes it (e.g. performance index calculations).
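The pso API does not pass these counters into the objective function, but a common R workaround is to count evaluations yourself via a closure and derive the iteration and particle from the swarm size. This is only a sketch: it assumes psoptim evaluates each particle exactly once per iteration, in order, which should be verified against the control settings you use (make_objective, base_dir, and the placeholder objective are all hypothetical):

library(pso)

make_objective <- function(swarm_size, base_dir = "~/runs/1") {
  n_evals <- 0                     # lives in the enclosing environment
  function(par) {
    n_evals <<- n_evals + 1        # update the counter across calls
    iteration <- ceiling(n_evals / swarm_size)
    particle  <- (n_evals - 1) %% swarm_size + 1
    run_dir   <- file.path(base_dir, iteration, particle)
    # ... write input files under file.path(run_dir, "input"),
    #     run the FORTRAN model, read file.path(run_dir, "output") ...
    sum(par ^ 2)                   # placeholder objective value
  }
}

obj <- make_objective(swarm_size = 12)
res <- psoptim(rep(NA, 3), obj, lower = -5, upper = 5,
               control = list(s = 12, maxit = 20))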

Appending to a text file from parallel simulation output

I am running an R simulation on a multi-core system. The simulation result I am monitoring is a vector of length 900. My plan was to append this vector (row-wise) to a text file (using write.table) once each simulation ends. My simulations run from 1:1000. While I was working on my laptop the results were fine, because the work is sequential. When I work on a cluster, the simulations are divided up and there might be a conflict over who writes first. The reason for my claim is that I am even getting impossible values in the first column of my text file (this column is used to store the simulation index). If you need sample code, I can attach it.
There is no way to write to a text file from parallel threads that respects order. Each thread has its own buffer and no indication of when it is appropriate to write, because there is no cross-communication. So they will all try to write to the same file at the same time, even in the middle of another thread's write, which is why you're getting impossible values in your first column.
The solution is to write to a separate file for each thread, or to return the value as the output of the multithreaded apply loop, and then combine the results sequentially at the end.
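A minimal sketch of the second approach using the parallel package (run_sim and the output file name are placeholders for the real simulation):

library(parallel)

run_sim <- function(i) {
  result <- rnorm(900)   # placeholder for the real simulation
  c(i, result)           # keep the simulation index as the first column
}

cl  <- makeCluster(detectCores() - 1)
out <- parLapply(cl, 1:1000, run_sim)   # each worker returns its row
stopCluster(cl)

# Combine and write once, sequentially, so rows cannot interleave
write.table(do.call(rbind, out), "results.txt",
            row.names = FALSE, col.names = FALSE)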

Avoiding using global objects when building an R package with multiple separate functions

I have built an R package that runs a complex Bayesian model (a Dirichlet process mixture model on spatial data), including MCMC, thinning, validation, and an interface with Google Maps. I'm very happy with its performance and it runs without problems. The only issue is that I would like to get it onto CRAN, and it will be rejected because I make extensive use of global variables.
The package is built around the use of 8 core functions (which the user interacts with):
1) LoadData: Loads in data, extracts key information and sets up a series of global matrices as well as other small list objects.
2) ModelParameters: Sets model parameters, option to plot prior on parameter sigma on Googlemap. Calculates a hyper-prior at this point and saves a large matrix to the global environment
3) GraphicParameters: Sets graphic parameters of maps and plots (see code below)
4) CreateMaps: Creates the prior surface on source location tau and plots the data on a Google map. Keeps a number of global objects saved for repeated plotting of this map.
5) RunMCMC: Runs the bulk of the analysis using MCMC (a time intensive step), creates many global objects.
6) ThinandAnalsye: Thins the posterior samples and constructs the geoprofile (a time intensive step)
7) PlotGP: Plots the data and overlays the geoprofile onto a Google map
8) reporthitscores: OPTIONAL if source data is imported, calculates the hit scores of potential sources
Each one is run in turn before the next, and I pass global variables out which are used by one or more of the other functions.
I built it this way for a reason, as the user must stop and evaluate the results of these functions before rushing ahead to the future ones.
Each of these functions passes not just fixed parameters, but also large map objects, lists and matrices as global objects. I thought it was a nice simple solution with a smooth workflow (you can check the results in your main working environment before moving on, possibly applying transformations etc) and I have given all the objects unique and informative names.
How do I get around this and pass CRAN's checks while keeping my user-friendly workflow of a series of interacting functions?
I don't want to post a lot of code (just the MCMC part is several hundred lines long), but I will include one of the simple examples. GraphicParameters is one of my parameter-setting functions, which comes with default values set. This is a simple example; there are much more complex ones in the package. There is a model-parameters function that pulls many of its variables from an existing data-loading function, for example.
GraphicParameters <- function(Guardrail = 0.05, nring = 20, transp = 0.4,
                              gridsize = 640, gridsize2 = 300,
                              MapType = "roadmap", Location = getwd(),
                              pointcol = "black") {
  # each argument is assigned into the global environment
  Guardrail <<- Guardrail
  nring     <<- nring
  transp    <<- transp
  gridsize  <<- gridsize
  gridsize2 <<- gridsize2
  MapType   <<- MapType
  Location  <<- Location
  pointcol  <<- pointcol
}
Most of the material I have seen about avoiding global objects revolves around a single function that does all the work. I want to keep my step-by-step, multi-function approach, but lose the global objects.
Any help would be greatly appreciated.
I understand this may be a major reworking of the code (which is currently several thousand lines), so I would also love solutions that minimally affect the overall structure of the package.
P.S. I wish I had known about CRAN's displeasure with global objects before I started!!!
Your problem is very amenable to an OOP-style design. You can use reference classes or S4 to export a single global, e.g. a MapAnalysis class generator. The idea is that someone creates an analysis object using
ma <- new('MapAnalysis', option1 = ..., option2 = ..., ...) # S4
# or
ma <- MapAnalysis$new(option1 = ..., ...) # refClass
and can then call your methods with
ma$loadData(...)
ma$setParameters(...)
with the object doing all the bookkeeping of options and auxiliary objects internally. It should not be that much work to refactor: if you read up on reference classes, you should see that it is probably possible to wrap all your existing functions in a setRefClass("MapAnalysis", fields = list(...), methods = list(...)) call with few further modifications. (Although it would do you a lot of good down the road to re-think the architecture in OOP terms.) A sketch of what this could look like follows.
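A hypothetical sketch of the refactor: state that used to live in the global environment becomes fields of the reference class, and the existing functions become methods. The field names, method bodies, and file name are placeholders standing in for the real package code:

MapAnalysis <- setRefClass("MapAnalysis",
  fields = list(
    data      = "data.frame",   # was a global matrix/data frame
    params    = "list",         # was a set of global parameter variables
    posterior = "matrix"        # was a global object created by RunMCMC
  ),
  methods = list(
    loadData = function(path) {
      data <<- read.csv(path)   # <<- assigns to the field, not to a global
    },
    setGraphicParameters = function(Guardrail = 0.05, nring = 20) {
      params <<- list(Guardrail = Guardrail, nring = nring)
    },
    runMCMC = function(niter = 1000) {
      posterior <<- matrix(rnorm(niter), ncol = 1)   # placeholder for the MCMC
    }
  )
)

ma <- MapAnalysis$new()
ma$loadData("sources.csv")     # hypothetical input file
ma$setGraphicParameters()
ma$runMCMC()

The step-by-step workflow survives intact: the user still calls one method, inspects the object (e.g. ma$posterior), and only then moves on to the next step.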
