How can I make SVM and RandomForest run fast in R? - r

I have a dataset with 30k rows and 12 columns. I tried to apply SVM and RandomForest to my training data (20k rows and 11 columns), but it's taking a long time to get a result.
I have a MacBook with a 1.1 GHz Dual-Core Intel Core M processor and 8 GB of 1600 MHz DDR3 memory.
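No answer is preserved in this snippet, but a common first step (a generic sketch, not from the original thread; the data and column names are made up) is to switch to a multi-threaded random forest implementation such as the ranger package and to cap the model size:

```r
# Sketch: a faster random forest via the multi-threaded 'ranger' package.
# The data frame here is synthetic; substitute your 20k x 11 training set.
library(ranger)

set.seed(1)
train <- data.frame(matrix(runif(20000 * 10), ncol = 10),
                    label = factor(sample(c("yes", "no"), 20000, replace = TRUE)))

rf <- ranger(label ~ ., data = train,
             num.trees   = 200,  # fewer trees than randomForest's default 500
             num.threads = 2)    # use both cores of the dual-core CPU
rf$prediction.error
```

For the SVM side, kernel SVM training scales roughly quadratically with the number of rows, so training on a subsample or switching to a linear-kernel solver is usually the quickest fix.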

Related

Why is H2O autoencoder so slow for one data set but not the other?

When I run the H2O autoencoder on two different data sets of about the same size (see below), I can finish one data set (A) within 5 minutes, but the other data set (B) is really slow: it takes more than 30 minutes to complete only 1% for data set B. I tried restarting the R session and H2O a couple of times, but that didn't help. There are about the same number of parameters (or coefficients) in the model for both data sets.
Data set A: 4 * 1,000,000 in size (<5 minutes)
Data set B: 8 * 477,613 in size (very slow)
The model below is used for both data sets:
model.dl = h2o.deeplearning(x = x, training_frame = data.hex, autoencoder = TRUE, activation = "Tanh", hidden = c(25,25,25), variable_importances = TRUE)
The memory of the H2O cluster is 15GB for both data sets. The same computer is used (OS X 10.14.6, 16 GB memory). Below is some information about the versions of H2O and R.
H2O cluster version: 3.30.0.1
H2O cluster total nodes: 1
H2O cluster total memory: 15.00 GB
H2O cluster total cores: 16
H2O cluster allowed cores: 16
H2O cluster healthy: TRUE
R Version: R version 3.6.3 (2020-02-29)
Please let me know if there is any other information I can provide to get this issue resolved.
This problem has been resolved.
The cause is that data set B has many more columns after one-hot encoding during the model run. Please see below.
Data set A:
There are 4 categorical features. The number of unique values for these categorical features is 12, 14, 25, and 10, respectively.
Data set B:
There are 7 categorical features and 1 numerical feature. The number of unique values for the categorical features is 17, 49, 52, 85, 5032 (!), 18445 (!!) and 392124 (!!!), respectively. This explains why it's so slow.
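A quick way to spot this before training (a generic base-R sketch, not code from the original post) is to count the distinct values in every column:

```r
# Count distinct values per column to flag high-cardinality categoricals
# before one-hot encoding blows up the column count (synthetic example data).
df <- data.frame(cat_small = sample(letters[1:5], 1000, replace = TRUE),
                 cat_big   = sample(1:500,        1000, replace = TRUE),
                 num       = runif(1000))

cardinality <- sapply(df, function(col) length(unique(col)))
sort(cardinality, decreasing = TRUE)
```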

Parallel processing in R with H2O

I am setting up a piece of code to parallel-process some computations for N groups in my data using foreach.
I have a computation that involves a call to h2o.gbm.
In my current, sequential set-up, I use up to about 70% of my RAM.
How do I correctly set up my h2o.init() within the parallel piece of code? I am afraid that I might run out of RAM when I use multiple cores.
My Windows 10 machine has 12 cores and 128GB of RAM.
Would something like this pseudo-code work?
library(foreach)
library(doParallel)

# set up parallel backend to use 12 processors
cl <- makeCluster(12)
registerDoParallel(cl)

# loop
df4 <- foreach(i = seq(1, 999), .combine = rbind) %dopar% {
  df4 <- data.frame()
  # bunch of computations
  h2o.init(nthreads = 1, max_mem_size = "10G")
  gbm <- h2o.gbm(train_some_model)
  df4 <- data.frame(someoutput)
}
fwrite(df4, append = TRUE)
stopCluster(cl)
stopCluster(cl)
The way your code is currently set up won't be the best option. I understand what you are trying to do -- execute a bunch of GBMs in parallel (each on a single-core H2O cluster) so you can maximize CPU usage across the 12 cores on your machine. However, what your code will actually do is try to run all the GBMs in your foreach loop in parallel on the same single-core H2O cluster: you can only connect to one H2O cluster at a time from a single R instance, and each foreach worker is a new R instance connecting to that same cluster.
Unlike most machine learning algos in R, the H2O algos are all multi-core enabled so the training process will already be parallelized at the algorithm level, without the need for a parallel R package like foreach.
You have a few options (#1 or #3 is probably best):
Set h2o.init(nthreads = -1) at the top of your script to use all 12 of your cores. Change the foreach() loop to a regular loop and train each GBM (on a different data partition) sequentially. Although the different GBMs are trained sequentially, each single GBM will be fully parallelized across the H2O cluster.
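Option #1 might look like this (a sketch; the iris splits stand in for whatever data partitions you are training on):

```r
# Option 1 sketch: one multi-core H2O cluster, models trained sequentially
library(h2o)
h2o.init(nthreads = -1)  # use all 12 cores

# stand-in partitions: three random splits of iris
data(iris)
partitions <- split(iris, sample(1:3, nrow(iris), replace = TRUE))

models <- list()
for (i in seq_along(partitions)) {
  train <- as.h2o(partitions[[i]])
  # each GBM is itself parallelized across the whole cluster
  models[[i]] <- h2o.gbm(x = 1:4, y = "Species", training_frame = train)
}
```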
Set h2o.init(nthreads = -1) at the top of your script, but keep your foreach() loop. This should run all your GBMs at once, with each GBM parallelized across all cores. This could overwhelm the H2O cluster a bit (this is not really how H2O is meant to be used) and could be a bit slower than #1, but it's hard to say without knowing the size of your data and the number of partitions you want to train on. If you are already using 70% of your RAM for a single GBM, then this might not be the best option.
You can update your code to do the following (which most closely resembles your original script). This preserves your foreach loop, but creates a new one-core H2O cluster on a different port for each worker. See below.
Updated R code example which uses the iris dataset and returns the predicted class for iris as a data.frame:
library(foreach)
library(doParallel)
library(h2o)
h2o.shutdown(prompt = FALSE)

# set up parallel backend to use 12 processors
cl <- makeCluster(12)
registerDoParallel(cl)

# loop
df4 <- foreach(i = seq(20), .combine = rbind) %dopar% {
  library(h2o)
  port <- 54321 + 3 * i
  print(paste0("http://localhost:", port))
  h2o.init(nthreads = 1, max_mem_size = "1G", port = port)
  df4 <- data.frame()
  data(iris)
  data <- as.h2o(iris)
  ss <- h2o.splitFrame(data)
  gbm <- h2o.gbm(x = 1:4, y = "Species", training_frame = ss[[1]])
  df4 <- as.data.frame(h2o.predict(gbm, ss[[2]]))[, 1]
}
In order to judge which option is best, I would try running this on a few data partitions (maybe 10-100) to see which approach seems to scale the best. If your training data is small, it's possible that #3 will be faster than #1, but overall, I'd say #1 is probably the most scalable/stable solution.
Following Erin LeDell's answer, I just wanted to add that in many cases a decent practical solution can be something in between #1 and #3. To increase CPU utilization and still save RAM you can use multiple H2O instances in parallel, but they each can use multiple cores without much performance loss relative to running more instances with only one core.
I ran an experiment using a relatively small 40MB dataset (240K rows, 22 columns) on a 36 core server.
Case 1: Use all 36 cores (nthreads = 36) to estimate 120 GBM models (with default hyper-parameters) sequentially.
Case 2: Use foreach to run 4 H2O instances on this machine, each using 9 cores to estimate 30 default GBM models sequentially (total = 120 estimations).
Case 3: Use foreach to run 12 H2O instances on this machine, each using 3 cores to estimate 10 default GBM models sequentially (total = 120 estimations).
Using all 36 cores to estimate a single GBM model on this dataset is very inefficient: CPU utilization in Case 1 jumps around a lot but averages below 50%. So there is definitely something to gain from using more than one H2O instance at a time.
Runtime Case 1: 264 seconds
Runtime Case 2: 132 seconds
Runtime Case 3: 130 seconds
Given the small improvement from 4 to 12 H2O instances, I did not even run 36 H2O instances each using one core in parallel.
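The middle-ground setup is the same port-per-worker pattern shown in the earlier answer, with the per-instance thread count as a parameter. A sketch of Case 2 (the 4-instance x 9-core split is from the experiment; the iris data and memory size are illustrative):

```r
library(foreach)
library(doParallel)

n_instances <- 4   # H2O instances running in parallel
cores_each  <- 9   # threads per instance (4 * 9 = 36 cores)

cl <- makeCluster(n_instances)
registerDoParallel(cl)

res <- foreach(i = seq(n_instances), .combine = c) %dopar% {
  library(h2o)
  h2o.init(nthreads = cores_each, max_mem_size = "8G", port = 54321 + 3 * i)
  hex <- as.h2o(iris)
  # each instance estimates its share of the models sequentially
  sapply(seq(30), function(j) {
    h2o.mse(h2o.gbm(x = 1:4, y = "Species", training_frame = hex))
  })
}
stopCluster(cl)
```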

optimization, lpSolve lp.transport: computational time?

I am trying to run an optimization using lp.transport from the package lpSolve in R, using the generic form
lp.transport (cost, "min", row.signs, row.rhs, col.signs, col.rhs)
The cost matrix is large, 6791 x 15594. The rows correspond to food producers and the columns to consumers, and obviously the sum of all values of row.rhs is equal to that of col.rhs.
The optimization has been running for about 12 hours now (using about 30 MB of memory, in 64-bit R). Is there any way to estimate the time it will take? Any advice on how to modify the inputs to reduce the computational time?
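No answer is preserved here, but for reference, this is what lp.transport looks like on a toy problem (a 2-producer x 3-consumer sketch with made-up numbers):

```r
# Toy transportation problem: 2 producers, 3 consumers
library(lpSolve)

costs <- matrix(c(4, 6, 9,
                  5, 3, 8), nrow = 2, byrow = TRUE)

row.signs <- rep("<=", 2); row.rhs <- c(50, 60)      # producer capacities
col.signs <- rep(">=", 3); col.rhs <- c(30, 40, 20)  # consumer demands

sol <- lp.transport(costs, "min", row.signs, row.rhs, col.signs, col.rhs)
sol$objval    # minimal total cost: 400
sol$solution  # optimal shipment matrix
```

At 6791 x 15594 the simplex-based lpSolve can take a very long time; pure transportation problems are solved far faster by dedicated min-cost network-flow solvers, which may be worth considering.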

parallel execution of random forest in R

I am running random forest in R in parallel
library(doMC)
registerDoMC()
x <- matrix(runif(500), 100)
y <- gl(2, 50)
Parallel execution (took 73 sec)
rf <- foreach(ntree = rep(25000, 6), .combine = combine, .packages = 'randomForest') %dopar%
  randomForest(x, y, ntree = ntree)
Sequential execution (took 82 sec)
rf <- foreach(ntree = rep(25000, 6), .combine = combine) %do%
  randomForest(x, y, ntree = ntree)
In parallel execution, the tree generation is pretty quick (3-7 seconds), but the rest of the time is consumed in combining the results (the combine option). So parallel execution is only worthwhile when the number of trees is really high. Is there any way I can tweak the "combine" option to avoid calculations at each node that I don't need, and make it faster?
PS. The above is just example data. In reality I have some 100,000 features for about 100 observations.
Setting .multicombine to TRUE can make a significant difference:
rf <- foreach(ntree = rep(25000, 6), .combine = randomForest::combine,
              .multicombine = TRUE, .packages = 'randomForest') %dopar% {
  randomForest(x, y, ntree = ntree)
}
This causes combine to be called once rather than five times. On my desktop machine, this runs in 8 seconds rather than 19 seconds.
Are you aware that the caret package can do a lot of the hand-holding for parallel runs (as well as data prep, summaries, ...) for you?
Ultimately, of course, if there are costly operations left in the random forest computation itself, there is little you can do, as Andy spent quite a few years improving it. I would expect few if any low-hanging fruit to be left for the picking...
The H2O package can be used to solve your problem.
According to the H2O documentation page, H2O is "the open source math engine for big data that computes parallel distributed machine learning algorithms such as generalized linear models, gradient boosting machines, random forests, and neural networks (deep learning) within various cluster environments."
Random Forest implementation using H2O:
https://www.analyticsvidhya.com/blog/2016/05/h2o-data-table-build-models-large-data-sets/
I wonder if the parallelRandomForest code would be helpful to you?
According to the author it ran about 6 times faster on his data set with 16 times less memory consumption.
SPRINT also has a parallel implementation here.
Depending on your CPU, you could probably get a 5%-30% speed-up by choosing a number of jobs that matches the number of registered cores, which in turn should match the number of logical cores on your system (sometimes it is more efficient to match the number of physical cores instead).
If you have a generic Intel dual-core laptop with Hyper-Threading (4 logical cores), then doMC probably registered a cluster of 4 cores. Thus 2 cores will idle while iterations 5 and 6 are computed, plus there is the extra time spent starting and stopping two extra jobs. It would be more efficient to run only 2-4 jobs with more trees each.
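To look up the core counts this refers to (a generic sketch using the base parallel package):

```r
# Logical vs. physical core counts, for sizing the number of parallel jobs
library(parallel)

logical_cores  <- detectCores(logical = TRUE)   # e.g. 4 on a dual-core with Hyper-Threading
physical_cores <- detectCores(logical = FALSE)  # e.g. 2 (NA where R cannot tell)

# then register that many workers, e.g. registerDoMC(physical_cores)
c(logical = logical_cores, physical = physical_cores)
```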

R 64 running out of memory during SIMPROF

I'm relatively new to R and am currently trying to run a SIMPROF analysis (clustsig package) on a small dataset of 1000 observations and 24 variables. After ~30 iterations I receive the following error:
Error: cannot allocate vector of size 1.3 Mb.
In addition: There were 39 warnings (use warnings() to see them)
All the additional warnings relate to R reaching a total allocation of 8183Mb
The method I'm using to run the analysis is below.
Data <- read.csv(file.choose(), header=T, colClasses="numeric")
Matrix <- function(Data) vegan::vegdist(Data, method="gower")
SimprofOutput <- simprof(Data, num.expected=1000, num.simulated=999, method.cluster="average", method.distance=Matrix, alpha = 0.10, silent=FALSE, increment=100)
I'm wondering if anybody else has had trouble running the SIMPROF analysis or has any ideas on how to stop R from running out of RAM. I'm running 64-bit Windows 7 Enterprise and R 2.15.1 on a machine with 8 GB of RAM.
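As a sanity check on scale (a generic sketch, not from the original post): a single dissimilarity object for 1,000 observations is only a few megabytes, so the ~8 GB total allocation is likely driven by the 999 simulated data sets and repeated clustering runs rather than the input itself. The footprint of one distance object can be measured with base R:

```r
# Memory footprint of one 1000-observation distance object
m <- matrix(runif(1000 * 24), nrow = 1000)
d <- dist(m)  # stand-in for vegan::vegdist(m, method = "gower")
print(object.size(d), units = "Mb")  # choose(1000, 2) doubles, ~3.8 Mb
```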
