Parallelizing random forests in R

Through searching and asking around, I've found many packages that can use all the cores of my server, and many packages that can fit random forests.
I'm quite new at this, and I'm getting lost among all the ways to parallelize the training of my random forest. Could you give some advice on reasons to use and/or avoid each of them, or on specific combinations (with or without caret?) that have proven themselves?
Packages for parallelization:
doParallel,
doSNOW,
doSMP (discontinued?),
doMC
(and what about mclapply?)
Packages for random forest:
[caret + some of the following]
rf,
parRF,
randomForest,
ranger,
Rborist,
parallelRandomForest (crashes my RStudio session...)
Thanks

There are a few answers on SO, such as "parallel execution of random forest in R" and "Suggestions for speeding up Random Forests", that are worth a look.
Those posts are helpful, but a bit older. The ranger package is an especially fast implementation of random forest, so if you are new to this it may be the easiest way to speed up your model training. The ranger paper discusses the trade-offs among several of the available packages; which package gives you the best performance will vary with your data size and number of features.
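If you do go with ranger, a minimal sketch might look like the following (the data set, tree count, and thread count are placeholders): ranger is multithreaded out of the box, so the num.threads argument is often all the parallelization you need.

library(ranger)

# Fit a forest on an example data set; num.threads controls how many cores
# are used (ranger defaults to all available cores if it is left unset).
fit <- ranger(
  Species ~ .,          # replace with your own outcome and predictors
  data        = iris,   # example data set
  num.trees   = 500,
  num.threads = 4
)
fit$prediction.error    # out-of-bag prediction error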

Related

How to implement regularization / weight decay in R

I'm surprised at the number of R neural network packages that don't appear to have a parameter for regularization/lambda/weight decay. I'm assuming I'm missing something obvious. When I use a package like mlr and look at the integrated learners, I don't see parameters for regularization.
For example, nnTrain from the deepnet package: its list of parameters covers just about everything, even dropout, but not lambda or anything else that looks like regularization.
My understanding of both caret and mlr is that they basically organize other ML packages and try to provide a consistent way to interact with them. I'm not finding L1/L2 regularization in any of them.
I've also done 20 Google searches looking for R packages with regularization but found nothing. What am I missing? Thanks!
I looked through more of the models within mlr (a daunting task) and eventually found the h2o package learners. In mlr, the classif.h2o.deeplearning model has every parameter I could think of, including L1 and L2.
Installing h2o is as simple as:
install.packages('h2o')
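To make the answer concrete, here is a minimal sketch (the hidden-layer sizes and the l1/l2 values are placeholders; it assumes both mlr and h2o are installed):

library(mlr)
library(h2o)

h2o.init()   # start (or connect to) a local h2o instance

# h2o's deep learning exposes explicit L1/L2 penalties, which mlr passes through
lrn <- makeLearner(
  "classif.h2o.deeplearning",
  hidden = c(32, 32),   # two hidden layers
  l1     = 1e-4,        # L1 regularization
  l2     = 1e-4         # L2 regularization
)

task  <- makeClassifTask(data = iris, target = "Species")
model <- train(lrn, task)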

Has anyone tried to parallelize multiple imputation in 'mice' package?

I'm aware that the Amelia R package provides some support for parallel multiple imputation (MI). However, preliminary analysis of my study's data revealed that the data is not multivariate normal, so, unfortunately, I can't use Amelia. Consequently, I've switched to the mice R package for MI, as it can handle data that is not multivariate normal.
Since the MI process via mice is very slow (currently I'm using an AWS m3.large 2-core instance), I've started wondering whether it's possible to parallelize the procedure to save processing time. Based on my review of the mice documentation, the corresponding JSS paper, and the package's source code, it appears that mice doesn't currently support parallel operations. This is a pity, because IMHO the MICE algorithm is naturally parallel; a parallel implementation should be relatively easy and would yield significant savings in both time and resources.
Question: Has anyone tried to parallelize MI in the mice package, either externally (via R's parallel facilities) or internally (by modifying the source code), and what were the results, if any? Thank you!
Recently, I tried parallelizing multiple imputation (MI) in the mice package externally, that is, by using R's multiprocessing facilities, in particular the parallel package, which ships with the base R distribution. Basically, the solution is to use the mclapply() function to distribute a pre-calculated share of the total number of needed imputations to each core and then combine the resulting imputed data into a single object. Performance-wise, the results of this approach are beyond my most optimistic expectations: the processing time decreased from 1.5 hours to under 7 minutes(!), and that's on only two cores. I did remove one multilevel factor, but that shouldn't have much effect. Regardless, the result is unbelievable!
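A minimal sketch of that approach (the data set, number of imputations, and seeds are placeholders): each core runs an independent mice() call for its share of the imputations, and the resulting mids objects are merged with ibind().

library(mice)
library(parallel)

n_cores    <- 2
m_total    <- 10                    # total number of imputations wanted
m_per_core <- m_total / n_cores     # share of imputations per worker

# mclapply() forks the R session, so this works on Linux/macOS (not Windows)
imp_list <- mclapply(seq_len(n_cores), function(i) {
  mice(nhanes, m = m_per_core, seed = 100 + i, printFlag = FALSE)
}, mc.cores = n_cores)

# Combine the per-core mids objects into one object with m_total imputations
imp <- Reduce(ibind, imp_list)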

How can I tell if R is still estimating my SVM model or has crashed?

I am using the library e1071. In particular, I'm using the svm function. My dataset has 270 fields and 800,000 rows. I've been running this program for 24+ hours now, and I have no idea if it's hung or still running properly. The command I issued was:
svmmodel <- svm(V260 ~ ., data=traindata);
I'm on Windows, and in Task Manager the status of Rgui.exe is "Not Responding". Did R crash already? Are there any tips or tricks to better gauge what's happening inside R or the SVM learning process?
If it helps, here are some additional things I noticed using Resource Monitor (in Windows):
CPU usage is at 13% (stable)
Number of threads is at 3 (stable)
Memory usage is at 10,505.9 MB +/- 1 MB (fluctuates)
As I'm writing this question, I also see the "similar questions" suggestions and am clicking through them. It seems that SVM training time is quadratic or cubic in the number of examples. Still, after 24+ hours: if it's reasonable to wait, I will wait, but if not, I will have to eliminate SVM as a viable predictive model.
As mentioned in the answer to this question, "SVM training can be arbitrarily long" depending on the parameters selected.
If I remember correctly from my ML class, running time is roughly proportional to the square of the number of training examples, so for 800k examples you probably do not want to wait: if, say, 10,000 rows took ten minutes, quadratic scaling would put 800,000 rows at roughly 6,400 times that.
Also, as an anecdote, I once ran e1071 in R for more than two days on a smaller data set than yours. It eventually completed, but the training took too long for my needs.
Keep in mind that most ML algorithms, including SVM, will usually not achieve the desired result out of the box. Therefore, when you are thinking about how fast you need it to run, keep in mind that you will have to pay the running time every time you tweak a tuning parameter.
Of course you can reduce this running time by sampling down to a smaller training set, with the understanding that you will be learning from less data.
By default the svm function from e1071 uses a radial basis kernel, which makes SVM training computationally expensive. You might want to consider a linear kernel (argument kernel = "linear") or a specialized library like LiblineaR, which is built for large datasets. But your dataset is really large, and if a linear kernel does not do the trick then, as suggested by others, you can use a subset of your data to build the model.
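A minimal sketch of both suggestions (traindata and V260 come from the question; the subsample size, cost, and model type are placeholders, and it assumes the predictors are numeric):

library(e1071)

# Gauge runtime on a subsample before committing to all 800k rows
idx   <- sample(nrow(traindata), 50000)
small <- traindata[idx, ]

svmmodel <- svm(V260 ~ ., data = small, kernel = "linear", cost = 1)

# LiblineaR wants a numeric feature matrix and a separate target vector;
# type = 2 is an L2-regularized SVC; use a regression type (11-13) if V260
# is continuous
library(LiblineaR)
x   <- as.matrix(small[, setdiff(names(small), "V260")])
fit <- LiblineaR(data = x, target = small$V260, type = 2, cost = 1)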

Parallel Random forest in R utilizing CARET package

I am using the parallel random forest regression method, method = "parRF", in R's caret package; it seems to run faster than a regular random forest. Could someone explain the differences in implementation that speed up the process?
Any link to a document explaining the parallel random forest algorithm and its implementation would be of great help.
It is a parallel implementation using your machine's multiple cores and an MPI package.
Check out the page on parallel implementations at http://caret.r-forge.r-project.org/parallel.html and, of course, the package's CRAN page. I hope these provide enough detail for you.
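For reference, a minimal sketch of how parRF is typically used (the core count, tuning grid, and example data are placeholders; it assumes the randomForest, foreach, and doParallel packages are installed):

library(caret)
library(doParallel)

# Register a parallel backend; caret/foreach spreads work across the workers
cl <- makePSOCKcluster(4)
registerDoParallel(cl)

fit <- train(
  Species ~ .,
  data      = iris,
  method    = "parRF",
  tuneGrid  = data.frame(mtry = 2),
  trControl = trainControl(method = "cv", number = 5)
)

stopCluster(cl)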

Parallelize rfcv() function for feature selection in randomForest package

I wonder if anyone knows how to parallelize the rfcv() function implemented in the R package randomForest. Sorry if the question sounds very basic, but I tried to do this using foreach without any results.
Have a look at the caret package and its documentation.
It is not only more general (allowing for more models than "just" random forests) but also integrates pre- and post-processing, while giving you parallel execution where feasible, particularly for evaluation and cross-validation, which is an "embarrassingly parallel" problem.
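As a sketch of what that can look like (the sizes, fold count, and example data are placeholders): caret's rfe() with rfFuncs is a parallel-friendly analogue of randomForest::rfcv(), and its resampling iterations are farmed out to whatever backend is registered, e.g. via doParallel.

library(caret)
library(doParallel)

cl <- makePSOCKcluster(4)
registerDoParallel(cl)

# Recursive feature elimination with random forest, cross-validated in parallel
ctrl <- rfeControl(functions = rfFuncs, method = "cv", number = 5,
                   allowParallel = TRUE)

result <- rfe(x = iris[, 1:4], y = iris$Species,
              sizes = c(1, 2, 3), rfeControl = ctrl)

stopCluster(cl)
result$optVariables   # features retained by the best subset size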
