Cluster job management in R via future package

I want to use the R package future (supports asynchronous calculations) to make a cluster-jobserver that can dynamically add/remove jobs to/from a queue.
One specific functionality that I would like to add to my jobserver is to distribute memory-demanding jobs to the more powerful machines in my cluster. However, since I have no experience with the package, I am not quite sure whether my approach (given below) has any pitfalls. Specifically, do the subsequent calls of plan have any side effects that might mess things up? Please see the comments in the code for more details.
Thanks in advance!
library(parallel)
library(future)
slaveIPs=c("172.16.2.10","172.16.2.21")
masterIP="172.16.2.33"
workers=makePSOCKcluster(slaveIPs,master=masterIP)
#check whether PSOCK cluster was correctly set up
unlist(clusterCall(workers, function(x) unname(Sys.info()["nodename"])))
#[1] "ip-172-16-2-10" "ip-172-16-2-21"
#now the first important part that I am not sure about
#as you can see, I only use workers[1] for the first task
#is it OK to use workers[1] like that?
plan(cluster,workers=workers[1])
f=future({
  # do memory-hungry work
  unname(Sys.info()["nodename"])
})
message(value(f))
#ip-172-16-2-10
#now I am only using workers[2] for the second task
#Is this ok? Does the previous call to 'plan' need some cleaning before?
plan(cluster,workers=workers[2])
f=future({
  # do low-memory work
  unname(Sys.info()["nodename"])
})
message(value(f))
#ip-172-16-2-21
stopCluster(workers)

Author of future here:
Yes, it is alright to change future strategies like that, i.e. by using plan(). An alternative is to use:
f <- cluster({
  # do low-memory work
  unname(Sys.info()["nodename"])
}, workers = workers[2])
which is basically what is happening internally.
The downside of explicitly specifying future strategies like this is that your code will be hard coded to use cluster futures.
FYI, I'm planning to add some kind of mechanism for specifying preferred or required "resources" per future. This is just conceptual for now and will not exist anytime soon, but I'm thinking of something along the lines of:
f <- future({ ... }, needs = "himem")
where one can query workers for the himem tag / property, e.g. attr(workers[2], "provides") <- c("himem", "superfast"). I'm sharing these thoughts just so you know that I'm aware of needs like yours. Again, it will be quite some time before such mechanisms are available, so in the meantime you need to explicitly specify the future strategy as above.
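Until such a mechanism exists, you can hand-roll that routing today. A minimal sketch, where the workers_with() helper and the "provides" tags are just a convention of this example, not a future API:
# Tag each node with the resources it provides (plain R attributes):
attr(workers[[1]], "provides") <- "himem"
attr(workers[[2]], "provides") <- "lowmem"
# Hypothetical helper: subset the cluster to the nodes carrying a given tag.
workers_with <- function(cl, need) {
  has <- vapply(cl, function(w) need %in% attr(w, "provides"), logical(1))
  cl[which(has)]
}
plan(cluster, workers = workers_with(workers, "himem"))
f <- future({
  # memory-hungry work
  unname(Sys.info()["nodename"])
})
message(value(f))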
BTW, instead of:
slaveIPs=c("172.16.2.10","172.16.2.21")
masterIP="172.16.2.33"
workers=makePSOCKcluster(slaveIPs,master=masterIP)
you can try:
slaveIPs <- c("172.16.2.10", "172.16.2.21")
workers <- makeClusterPSOCK(slaveIPs)
provided by the future package - this avoids having to know/specify the IP address of master.

Related

Is there a way of "chunking" drake outputs to speed up plan verification and display?

I'm conducting simulations over a range of models and parameter values. At this point in time my drake workflow involves over 3,000 simulated data.frames and corresponding stanfit objects.
Trying to run make currently incurs a delay of ~2 minutes before plan execution begins. I assume that this is because drake is going through its cache to verify which steps in the plan will need updating. I would like to have some way of letting it know that it can represent all of these models as a single monolithic chunk of output. What I could do is make a function that writes all my output objects as a side-effect and then outputs a hash of sorts so that drake is "fooled" as to what needs to be checked but I can't restructure my code at this point in time given an upcoming deadline and the processing time involved.
Similarly, for purposes of using the dependency graph, having 3k+ objects show up makes it unusable. It would be nice to be able to collapse certain objects under a single "output type" group.
Great question. I know what you are saying, and I think about this problem all the time. In fact, trying to get rid of the delay is one of my top two priorities for drake for 2019.
Unfortunately, drake does not have a solution right now that will allow you to keep your targets up to date. The long-term solution will probably be speed improvements + https://github.com/ropensci/drake/issues/304 + https://github.com/ropensci/drake/issues/233. These are important areas of development, but also huge undertakings.
For new projects, you could have each target be a list of fitted stan models.
drake_plan(
  data1 = generate_data(...),
  data2 = generate_data(...),
  models_data1 = fit_models(data1),
  models_data2 = fit_models(data2)
)
fit_models <- function(data) {
  list(
    run_stan(data, "normal_priors"),
    run_stan(data, "t_priors")
  )
}
And for the graph visualizations, there is support for target clusters. See https://ropenscilabs.github.io/drake-manual/vis.html#clusters
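A hedged sketch of what that looks like; the plan, the custom "stage" column, and the cluster value are illustrative, not from your project:
library(drake)
# Suppose the plan carries a custom column "stage" marking each target, e.g. "fit".
config <- drake_config(plan)
# Collapse every target whose stage is "fit" into a single node in the graph:
vis_drake_graph(config, group = "stage", clusters = "fit")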
EDIT: parallel computing and verbosity
If you run make(jobs = c(imports = 4, targets = 6)), drake will use 4 processes on your local machine to do the preprocessing. And make(verbose = 4) shows more progress messages than with the default setting.

How to check that a user-defined function works in r?

This is probably a very silly question, but how can I check if a function I have written will work or not?
I'm writing a not very simple function involving many other functions and loops and was wondering if there are any ways to check for errors/bugs, or simply just check if the function will work. Do I just create a simple fake data frame and test on it?
As suggested by other users in the comment, I have added the part of the function that I have written. So basically I have a data frame with good and bad data, and bad data are marked with flags. I want to write a function that allows me to produce plots as usual (with the flag points) when user sets flag.option to 1, and remove the flag points from the plot when user sets flag.option to 0.
AIR.plot <- function(mydata, flag.option) {
  if (flag.option == 1) {
    par(mfrow = c(2, 1))
    conc <- tapply(mydata$CO2, format(mydata$date, "%Y-%m-%d %T"), mean)
    dates <- seq(mydata$date[1], mydata$date[nrow(mydata)], length = length(conc))
    tryCatch(plot(dates, conc,
                  type = "p",
                  col = "blue",
                  xlab = "day",
                  ylab = "CO2"),
             error = function(e) plot.new())
    barplot(mydata$lines, horiz = TRUE, col = c("red", "blue")) # this is just a small bar plot on the bottom that specifies which sample-taking line (red or blue) is providing the samples
  } else if (flag.option == 0) {
    # I haven't figured out how to write this part yet, but essentially I want
    # to remove all of the rows with flags on
  }
}
Thanks in advance, I'm not an experienced R user yet so please help me.
Before we (meaning, at my workplace) release any code to our production environment we run through a series of testing procedures to make sure our code behaves the way we want it to. It usually involves several people with different perspectives on the code.
Ideally, such verification should start before you write any code. Some questions you should be able to answer are:
What should the code do?
What inputs should it accept? (including type, ranges, etc)
What should the output look like?
How will it handle missing values?
How will it handle NULL values?
How will it handle zero-length values?
If you prepare a list of requirements and write your documentation before you begin writing any code, the probability of success goes up pretty quickly. Naturally, as you begin writing your code, you may find that your requirements need to be adjusted, or the function arguments need to be modified. That's okay, but document those changes when they happen.
While you are writing your function, use a package like assertthat or checkmate to write as many argument checks as you need in your code. Some of the best, most reliable code where I work consists of about 100 lines of argument checks and 3-4 lines of what the code actually is intended to do. It may seem like overkill, but you prevent a lot of problems from bad inputs that you never intended for users to provide.
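For instance, a minimal sketch of such checks with checkmate (assertthat works similarly); the required column names and allowed flag values are illustrative:
library(checkmate)
AIR.plot <- function(mydata, flag.option) {
  assert_data_frame(mydata, min.rows = 1)
  assert_names(names(mydata), must.include = c("date", "CO2"))
  assert_choice(flag.option, choices = c(0, 1))
  # ... the actual plotting code ...
}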
When you've finished writing your function, you should at this point have a list of requirements and clearly documented expectations of your arguments. This is where you make use of the testthat package.
Write tests that verify all of the requirements you wrote are met.
Write tests that verify that unintended inputs fail loudly instead of quietly returning results.
Write tests that verify you get the output you intended on your test data.
Write tests that test any edge cases you can think of.
It can take a long time to write all of these tests, but once it is done, any further development is easier to check since anything that violates your existing requirements should fail the test.
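A minimal sketch of such tests with testthat; odds() here is a stand-in for your own function under test:
library(testthat)
odds <- function(vec) vec[vec %% 2 != 0]  # stand-in function under test
test_that("odds keeps only odd numbers", {
  expect_equal(odds(1:6), c(1L, 3L, 5L))
})
test_that("odds handles edge cases", {
  expect_equal(odds(integer(0)), integer(0))
  expect_equal(odds(c(2L, 4L)), integer(0))
})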
That being said, I'm really bad at following this process in my own work. I have the tendency to write code, then document what I did. But the best code I've written has been where I've planned it out conceptually, wrote my documentation, coded, and then tested against my documentation.
As @antoine-sac pointed out in the links, some things cannot be checked programmatically; for example, if your function terminates.
Looking at it pragmatically, have a look at the packages assertthat and checkmate. assertthat will help you insert checks of results "in between", while testthat is for writing proper tests. Yes, the usual way of writing tests is creating a small test example including test data.

What to do with imperfect-but-useful functions?

I could equally have titled this question, "Is it good enough for CRAN?"
I have a collection of functions that I've built up for specific tasks. Some of these are convenience functions:
# Returns odds/evens from a vector
odds <- function(vec) {
  stopifnot(is.integer(vec))
  vec[vec %% 2 != 0]
}
evens <- function(vec) {
  stopifnot(is.integer(vec))
  vec[vec %% 2 == 0]
}
Some are minor additions that have proven useful in answering common SO questions:
# Shift a vector over by n spots
# wrap adds the entry at the beginning to the end
# pad does nothing unless wrap is false, in which case it specifies whether to pad with NAs
shift <- function(vec, n = 1, wrap = TRUE, pad = FALSE) {
  if (length(vec) < abs(n)) {
    stop("Length of vector must be greater than the magnitude of n \n")
  }
  if (n == 0) {
    return(vec)
  } else if (length(vec) == n) {
    # return empty
    length(vec) <- 0
    return(vec)
  } else if (n > 0) {
    returnvec <- vec[seq(n + 1, length(vec))]
    if (wrap) {
      returnvec <- c(returnvec, vec[seq(n)])
    } else if (pad) {
      returnvec <- c(returnvec, rep(NA, n))
    }
  } else if (n < 0) {
    returnvec <- vec[seq(1, length(vec) - abs(n))]
    if (wrap) {
      returnvec <- c(vec[seq(length(vec) - abs(n) + 1, length(vec))], returnvec)
    } else if (pad) {
      returnvec <- c(rep(NA, abs(n)), returnvec)
    }
  }
  return(returnvec)
}
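For example, the wrap and pad behaviours:
shift(1:5, 2)                            # 3 4 5 1 2
shift(1:5, 2, wrap = FALSE, pad = TRUE)  # 3 4 5 NA NA
shift(1:5, -1)                           # 5 1 2 3 4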
The most important are extensions to existing classes that can't be found anywhere else (e.g. a CDF panel function for lattice plots, various xtable and LaTeX output functions, classes for handling and converting between geospatial object types and performing various GIS-like operations such as overlays).
I would like to make these available somewhere on the internet in R-ized form (e.g. posting them on a blog as plain text functions is not what I'm looking for), so that maintenance is easier and so that I and others can access them from any computer that I go to. The logical thing to do is to make a package out of them and post them to CRAN--and indeed I already have them packaged up. But is this collection of functions suitable for a CRAN package?
I have two main concerns:
The functions don't seem to have any coherent theme. It's just a collection of functions that do lots of different things.
My code isn't always the prettiest. I've tried to clean it up as I learned better coding practices, but producing R Core-worthy beautiful code is not in the cards.
The CRAN webpage is surprisingly bereft of guidelines on posting. Should I post to CRAN, given that some people will find it useful but that it will in some sense forever lock R into having some pretty basic function names taken up? Or is there another place I can use an install.packages-like command to install from? Note I'd rather avoid posting the package to a webpage and having people have to memorize the URL to install the package (not least for version control issues).
I would use http://r-forge.r-project.org/. From the top of the page:
R-Forge offers a central platform for the development of R packages, R-related software and further projects. It is based on FusionForge offering easy access to the best in SVN, daily built and checked packages, mailing lists, bug tracking, message boards/forums, site hosting, permanent file archival, full backups, and total web-based administration.
Most packages should be collections of related functions with an obvious purpose, so a useful thing to do would be to try and group what you have together, and see if you can classify them. Several smaller packages are better than one huge incoherent package.
That said, there are some packages that are collections of miscellaneous utility functions, most notably Hmisc and gregmisc, so it is okay to do that sort of thing. If you just have a few functions like that, it might be worth contacting the author of some of the misc packages and seeing if they'll let you include your code in their package.
As for writing pretty code, the most important thing you can do is to use a style guide.
In my opinion it is not a good idea to make this type of material into packages.
Misc packages do exist, but mostly for historical reasons and/or due to their authoritative contributors; see Frank Harrell's Hmisc.
I see three main reasons why this choice does not fit a disparate collection of functions.
There are by and large 7,000 packages on CRAN alone. It is unlikely that your package will be chosen if it does not target a specific field and, even when it does, it is very possible that other established packages do the same. Therefore your package should also sport an original/better solution to the problem it deals with.
Repositories, and CRAN in particular, are task-oriented, which suggests packages' functions should address a coherent task. And for a good reason: there is no point in downloading a whole package with, say, 50 autonomous functions when I need just a couple of them. Instead, if a package solves a specific data problem of mine, then I will most likely need most (if not all) of them.
R repositories tend to mask the content. Contrary to tech blogs, you do not immediately see the functions' source. You need to download a separate source package, and there is a lot of overhead due to the package structure, which buries the actual functions you want to show and others need to read.
In my opinion, the best place for general convenience functions is a site like GitHub. In fact:
One immediately reads them with the comfort of syntax highlighting. If they are interesting, they can be pasted into R to give them a try and possibly keep them; otherwise one simply steps over to read the next function.
There is the possibility of organising code, but without all the constraints of an actual package. Similar functions might go in the same file and coherent files in the same subfolder.
You can show your ideas to others in a simple way. The readme file can immediately become a sort of mini webpage (via markdown). In comparison, CRAN is quite rigid.
There are a lot of other benefits (revision history, accepting contributions, GitHub pages), which may or may not interest you.
Of course, once several functions grow in a stable, coherent direction, you can turn them into an actual CRAN package, not least because the copy-and-paste method of trying them out becomes inconvenient by then.
EDIT: Nowadays there are alternatives to GitHub that can be taken into consideration too, and GitHub has become a common way to distribute packages that are not yet ready for CRAN or to complement the official CRAN distribution page.
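As an aside, installing straight from GitHub is now an install.packages-like one-liner ("user/repo" below is a placeholder for the actual repository):
# install.packages("remotes")
remotes::install_github("user/repo")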

Strategies for repeating large chunk of analysis

I find myself in the position of having completed a large chunk of analysis and now need to repeat the analysis with slightly different input assumptions.
The analysis, in this case, involves cluster analysis, plotting several graphs, and exporting cluster ids and other variables of interest. The key point is that it is an extensive analysis, and needs to be repeated and compared only twice.
I considered:
Creating a function. This isn't ideal, because then I have to modify my code to know whether I am evaluating in the function or parent environments. This additional effort seems excessive, makes it harder to debug and may introduce side-effects.
Wrap it in a for-loop. Again, not ideal, because then I have to create indexing variables, which can also introduce side-effects.
Creating some preamble code, wrapping the analysis in a separate file and sourcing it. This works, but seems very ugly and sub-optimal.
The objective of the analysis is to finish with a set of objects (in a list, or in separate output files) that I can analyse further for differences.
What is a good strategy for dealing with this type of problem?
Making code reusable takes some time and effort, and holds a few extra challenges, as you mention yourself.
The question of whether to invest is probably the key issue in informatics (if not in a lot of other fields): do I write a script to rename 50 files in a similar fashion, or do I go ahead and rename them manually?
The answer, I believe, is highly personal and, even then, differs case by case. If programming comes easily to you, you may decide sooner to go the reuse route, as the effort for you will be relatively low (and even then, programmers typically like to learn new tricks, so that's a hidden, often counterproductive motivation).
That said, in your particular case: I'd go with the sourcing option: since you plan to reuse the code only 2 times more, a greater effort would probably go wasted (you indicate the analysis to be rather extensive). So what if it's not an elegant solution? Nobody is ever going to see you do it, and everybody will be happy with the swift results.
If it turns out in a year or so that the reuse is higher than expected, you can then still invest. And by that time, you will also have (at least) three cases for which you can compare the results from the rewritten and funky reusable version of your code with your current results.
If/when I do know up front that I'm going to reuse code, I try to keep that in mind while developing it. Either way I hardly ever write code that is not in a function (well, barring the two-liners for SO and other out-of-the-box analyses): I find this makes it easier for me to structure my thoughts.
If at all possible, set parameters that differ between sets/runs/experiments in an external parameter file. Then, you can source the code, call a function, even utilize a package, but the operations are determined by a small set of externally defined parameters.
For instance, JSON works very well for this, and the RJSONIO and rjson packages allow you to load the file into a list. Suppose you name your files parametersNN.json. An example is as follows:
{
  "Version": "20110701a",
  "Initialization":
  {
    "indices": [1,2,3,4,5,6,7,8,9,10],
    "step_size": 0.05
  },
  "Stopping":
  {
    "tolerance": 0.01,
    "iterations": 100
  }
}
Save that as "parameters01.json" and load as:
library(RJSONIO)
Params <- fromJSON("parameters01.json")
and you're off and running. (NB: I like to use unique version #s within my parameters files, just so that I can identify the set later, if I'm looking at the "parameters" list within R.) Just call your script and point to the parameters file, e.g.:
Rscript --vanilla MyScript.R parameters01.json
then, within the program, identify the parameters file from the commandArgs() function.
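A minimal sketch of that lookup; the fallback file name for interactive use is illustrative:
args <- commandArgs(trailingOnly = TRUE)
paramFile <- if (length(args) >= 1) args[1] else "parameters01.json"
library(RJSONIO)
Params <- fromJSON(paramFile)
Params$Initialization$step_size  # 0.05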
Later, you can break out code into functions and packages, but this is probably the easiest way to make a vanilla script generalizable in the short term, and it's good practice for the long term, as code should be separated from the specification of run/dataset/experiment-dependent parameters.
Edit: to be more precise, I would even specify input and output directories or files (or naming patterns/prefixes) in the JSON. This makes it very clear how one set of parameters led to one particular output set. Everything in between is just code that runs with a given parametrization, but the code shouldn't really change much, should it?
Update:
Three months, and many thousands of runs, wiser than my previous answer, I'd say that the external storage of parameters in JSON is useful for 1-1000 different runs. When the parameters or configurations number in the thousands and up, it's better to switch to using a database for configuration management. Each configuration may originate in a JSON (or XML), but being able to grapple with different parameter layouts requires a larger scale solution, for which a database like SQLite (via RSQLite) is a fine solution.
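A hedged sketch of that scale-up with RSQLite; the configs table and its columns are illustrative, not a fixed schema:
library(RSQLite)
con <- dbConnect(SQLite(), "configs.sqlite")
# Store each configuration (here still carrying its JSON payload) as a row:
dbWriteTable(con, "configs",
             data.frame(id = 1L,
                        version = "20110701a",
                        json = '{"step_size": 0.05}',
                        stringsAsFactors = FALSE),
             overwrite = TRUE)
row <- dbGetQuery(con, "SELECT json FROM configs WHERE id = 1")
Params <- RJSONIO::fromJSON(row$json)
dbDisconnect(con)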
I realize this answer is overkill for the original question - how to repeat work only a couple of times, with a few parameter changes - but when scaling up to hundreds or thousands of parameter changes in ongoing research, more extensive tools are necessary. :)
I like to work with a combination of a little shell script, a PDF cropping program and Sweave in those cases. That gives you back nice reports and encourages you to source. Typically I work with several files, almost like creating a package (at least I think it feels like that :)). I have a separate file for the data juggling and separate files for different types of analysis, such as descriptiveStats.R and regressions.R for example.
btw here's my little shell script,
#!/bin/sh
R CMD Sweave docSweave.Rnw
for file in `ls pdfs`;
do pdfcrop pdfs/"$file" pdfs/"$file"
done
pdflatex docSweave.tex
open docSweave.pdf
The Sweave file typically sources the R files mentioned above when needed. I am not sure whether that's what you're looking for, but that's my strategy so far. At least I believe that creating transparent, reproducible reports is what helps to follow at least A strategy.
Your third option is not so bad. I do this in many cases. You can build a bit more structure by putting the results of your preamble code in environments and attaching the one you want to use for further analysis.
An example:
setup1 <- local({
  x <- rnorm(50, mean = 2.0)
  y <- rnorm(50, mean = 1.0)
  # ...
  environment()
})
setup2 <- local({
  x <- rnorm(50, mean = 1.8)
  y <- rnorm(50, mean = 1.5)
  # ...
  environment()
})
attach(setup1) and run/source your analysis code
plot(x, y)
t.test(x, y, paired = T, var.equal = T)
...
When finished, detach(setup1) and attach the second one.
Now, at least you can easily switch between setups. Helped me a few times.
I tend to push such results into a global list.
I use Common Lisp but then R isn't so different.
Too late for you here, but I use Sweave a lot, and most probably I'd have used a Sweave file from the beginning (e.g. if I know that the final product needs to be some kind of report).
For repeating parts of the analysis a second and third time, there are then two options:
if the results are rather "independent" (i.e. the job should produce 3 reports, and comparison means the reports are inspected side by side), and the changed input comes in the form of new data files, then each gets its own directory together with a copy of the Sweave file, and I create separate reports (similar to source, but it feels more natural for Sweave than for plain source).
if I instead need to do exactly the same thing once or twice again inside one Sweave file, I'd consider reusing code chunks (see the sketch after this list). This is similar to the ugly for-loop.
The reason is that then of course the results are together for the comparison, which would then be the last part of the report.
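A minimal sketch of chunk reuse in Sweave (chunk and object names are illustrative): define a named chunk once, then splice it into later chunks by reference.
<<fit-and-plot, eval=FALSE>>=
fit <- lm(y ~ x, data = d)
plot(d$x, d$y); abline(fit)
@
<<first-parameter-set>>=
d <- data_run1
<<fit-and-plot>>
@
<<second-parameter-set>>=
d <- data_run2
<<fit-and-plot>>
@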
If it is clear from the beginning that there will be some parameter sets and a comparison, I write the code in a way that, as soon as I'm fine with each part of the analysis, it is wrapped into a function (i.e. I'm actually writing the function in the editor window, but evaluating the lines directly in the workspace while writing the function).
Given that you are in the described situation, I agree with Nick - nothing wrong with source, and everything else means much more effort now that you already have it as a script.
I can't make a comment on Iterator's answer so I have to post it here. I really like his answer so I made a short script for creating the parameters and exporting them to external JSON files. And I hope someone finds this useful: https://github.com/kiribatu/Kiribatu-R-Toolkit/blob/master/docs/parameter_configuration.md

What is the best way to avoid passing a data frame around?

I have 12 data.frames to work with. They are similar, and I have to do the same processing to each one, so I wrote a function that takes a data.frame, processes it, and then returns a data.frame. This works. But I am afraid that I am passing around a very big structure. I may be making temporary copies (am I?). This can't be efficient. What is the best way to avoid passing a data.frame around?
doSomething <- function(df) {
  # do something with the data frame, df
  return(df)
}
You are, indeed, passing the object around and using some memory. But I don't think you can do an operation on an object in R without passing the object around. Even if you didn't create a function and did your operations outside of the function, R would behave basically the same.
The best way to see this is to set up an example. If you are in Windows open Windows Task Manager. If you are in Linux open a terminal window and run the top command. I'm going to assume Windows in this example. In R run the following:
col1<-rnorm(1000000,0,1)
col2<-rnorm(1000000,1,2)
myframe<-data.frame(col1,col2)
rm(col1)
rm(col2)
gc()
This creates a couple of vectors called col1 and col2, then combines them into a data frame called myframe. It then drops the vectors and forces garbage collection to run. Watch the mem usage for the Rgui.exe task in Windows Task Manager. When I start R it uses about 19 meg of mem. After I run the above commands, my machine is using just under 35 meg for R.
Now try this:
myframe<-myframe+1
Your memory usage for R should jump to over 144 meg. If you force garbage collection using gc(), you will see it drop back to around 35 meg. To try this using a function, you can do the following:
doSomething <- function(df) {
  df <- df + 1 - 1
  return(df)
}
myframe<-doSomething(myframe)
When you run the code above, memory usage will jump up to 160 meg or so. Running gc() will drop it back to 35 meg.
So what to make of all this? Well, doing an operation outside of a function is not much more efficient (in terms of memory) than doing it in a function. Garbage collection cleans things up very nicely. Should you force gc() to run? Probably not, as it will run automatically as needed; I just ran it above to show how it impacts memory usage.
I hope that helps!
I'm no R expert, but R, like many languages, uses a copy-on-modify scheme for big objects. A copy of the object data will not be made until you modify the copy of the object. If your functions only read the data (i.e. for analysis), then no copy should be made.
I came across this question looking for something else, and it's old - so I'll just provide a brief answer for now (leave a comment if you'd like more explanation).
You can pass around environments in R which contain anywhere from 1 to all of your variables. But probably you don't need to worry about it.
[You might also be able to do something similar with classes. I only currently understand how to use classes for polymorphic functions - and note there's more than 1 class system kicking around.]
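To make the environment idea concrete, here is a minimal sketch (all names are illustrative): environments are passed by reference, so handing one to a function copies nothing.
e <- new.env()
e$df <- data.frame(x = rnorm(10))
addOne <- function(env) {
  env$df$x <- env$df$x + 1  # modifies df inside the shared environment
  invisible(env)
}
addOne(e)
head(e$df)  # the change is visible to the caller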
