Adding priority to rows in lpSolveAPI in R

I have an optimised time-allocation LP coded in R using lpSolveAPI. The code works fine with the given constraints, which are:
The time allotted per job must be met
The time a worker works cannot exceed the paid work time
But I need to add a "priority" while the time is allocated to each job. Meaning, there are jobs that need to be completed first / must be given first priority while allocating workers.
The code I have is below.
The time must be allotted to job4, job2, job3 and finally job1 (as per priority_matrix).
If no workers are free, then lower-priority jobs can remain unallocated.
With the below code (without priority) no time is allocated to job 4.
library(lpSolveAPI)
jobs <- c(1,1,1,1)
workermax <- c(8,6,9)
jobmax <- c(6,6,5,6)
priority_matrix <- matrix(c(4,2,3,1), ncol = 1) ## priority of each job
rownames(priority_matrix) <- c("Job1","Job2","Job3","Job4")
scheduler_input2 <- matrix(c(8, 0, 0, 0,6,0,8,0,9,0,6,0), nrow=4 , byrow =T)
obj.vals <- scheduler_input2
colnames(obj.vals) <- c("worker1","worker2","worker3")
rownames(obj.vals) <- c("Job1","Job2","Job3","Job4")
nworkers <- ncol(scheduler_input2)
njobs <- nrow(scheduler_input2) #4
ncol = nworkers*njobs
distribution<- make.lp(ncol=ncol)
set.type(distribution, columns=1:ncol, type = c("integer"))
set.objfn(distribution, obj.vals)
lp.control(distribution,sense='max')
# constraint 1: time allotted per job
time_per_job <- function (job_index) {
skill_cols <- (0:(nworkers-1))*njobs + job_index
add.constraint(distribution, rep(1,nworkers), indices=skill_cols,type="<=", rhs=jobmax[job_index])
}
lapply(1:njobs, time_per_job)
# constraint 2: hours per worker
max_hrs_by_worker <- function (DA_index) {
DA_cols <- (DA_index-1)*njobs + (1:njobs) # relevant columns for a given worker
add.constraint(distribution, xt=jobs, indices=DA_cols,type="<=",rhs=workermax[DA_index])
}
lapply(1:nworkers, max_hrs_by_worker)
solve(distribution)
get.objective(distribution)
distribution<-matrix(get.variables(distribution), njobs,nworkers)
Thanks in advance.

The description is not precise enough to give a definite answer. Also, write the problem down in mathematical notation before starting to code. The code is not very easy to read and not at all structure-revealing. This is partly a problem with lpSolveAPI, which has a somewhat medieval way of representing optimization models.
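As far as I can read the code, the current model is something like (with x[j,i] the hours of worker i assigned to job j, and c[j,i] the entries of obj.vals):
max   sum over j,i of c[j,i] * x[j,i]
s.t.  sum over i of x[j,i] <= jobmax[j]      for every job j
      sum over j of x[j,i] <= workermax[i]   for every worker i
      x[j,i] >= 0 and integer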
If you want to enforce that job2 can only be executed if job4 is executed, then introduce binary variables:
y[j] = 1 if job j is executed
     = 0 otherwise
and the precedence constraint:
y[job4] >= y[job2]
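A minimal sketch of what that could look like with lpSolveAPI (not tested end to end; the objective and the two original constraint sets still have to be added over the extended column layout):
library(lpSolveAPI)

nx <- nworkers * njobs   # original x[j,i] columns, laid out as in the question
ny <- njobs              # one binary indicator y[j] per job

distribution <- make.lp(ncol = nx + ny)
set.type(distribution, columns = 1:nx, type = "integer")
set.type(distribution, columns = nx + (1:ny), type = "binary")

for (j in 1:njobs) {
  x_cols <- (0:(nworkers - 1)) * njobs + j
  # x can only be positive if y[j] = 1:  sum_i x[j,i] - jobmax[j]*y[j] <= 0
  add.constraint(distribution, xt = c(rep(1, nworkers), -jobmax[j]),
                 indices = c(x_cols, nx + j), type = "<=", rhs = 0)
  # an "executed" job must get at least one hour:  sum_i x[j,i] - y[j] >= 0
  add.constraint(distribution, xt = c(rep(1, nworkers), -1),
                 indices = c(x_cols, nx + j), type = ">=", rhs = 0)
}

# precedence: job2 may only be executed if job4 is executed (y4 - y2 >= 0)
add.constraint(distribution, xt = c(1, -1), indices = c(nx + 4, nx + 2),
               type = ">=", rhs = 0)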

Related

R script runs on external cluster without performance improvement

I'm using a topic modeling approach that works well on my computer in RStudio, except that it takes ages. So I'm using a Linux cluster. However, although I seem to request a lot of capacity, it doesn't really speed up:
I'm sorry, I'm a greenhorn... So this is what I'm using in the Pageant shell:
salloc -N 240 --mem=61440 -t 06:00:00 -p med
#!/bin/sh
#SBATCH --nodes=200
#SBATCH --time=06:00:00
#SBATCH --partition=med
#SBATCH --mem=102400
#SBATCH --job-name=TestJobUSERNAME
#SBATCH --mail-user=username@ddomain.com
#SBATCH --mail-type=ALL
#SBATCH --cpus-per-task=100
squeue -u username
cd /work/username/data
module load R
export OMP_NUM_THREADS=100
echo "sbatch: START SLURM_JOB_ID $SLURM_JOB_ID (SLURM_TASK_PID $SLURM_TASK_PID) on $SLURMD_NODENAME"
echo "sbatch: SLURM_JOB_NODELIST $SLURM_JOB_NODELIST"
echo "sbatch: SLURM_JOB_ACCOUNT $SLURM_JOB_ACCOUNT"
Rscript myscript.R
I'm pretty sure there's something wrong with my inputs because:
it isn't really faster (though of course my R code could also just be slow - so I tried various R codes with different calculation types)
whether I'm using 1 or 200 nodes, the calculation of the same R script takes almost exactly the same time (there should be at least 244 nodes, though)
the echo results do not give complete information and I do not receive e-mail notifications
So these are my typical outcomes:
#just very small request to copy/paste the results, usually I request the one above
[username@gw02 ~]$ salloc -N 2 --mem=512 -t 00:10:00 -p short
salloc: Granted job allocation 1234567
salloc: Waiting for resource configuration
salloc: Nodes cstd01-[218-219] are ready for job
Disk quotas for user username (uid 12345):
-- disk space --
Filesystem limit used avail used
/home/user 32G 432M 32G 2%
/work/user 1T 219M 1024G 0%
[username@gw02 ~]$ squeue -u username
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
1234567 short bash username R 2:14 2 cstd01-[218-219]
#(directory, module load, etc.)
# missing outcomes for SLURM_TASK_PID and SLURMD_NODENAME:
[username@gw02 data]$ echo "sbatch: START SLURM_JOB_ID $SLURM_JOB_ID (SLURM_TASK_PID $SLURM_TASK_PID) on $SLURMD_NODENAME"
sbatch: START SLURM_JOB_ID 1314914 (SLURM_TASK_PID ) on
Can anybody help? Thank you so much!
EDIT:
As Ralf Stubner points out in his comment, I don't do any parallelization in the R code. I have absolutely no idea how to do that.
Here is one example calculation:
# Create the data frame
col1 <- runif (12^5, 0, 2)
col2 <- rnorm (12^5, 0, 2)
col3 <- rpois (12^5, 3)
col4 <- rchisq (12^5, 2)
df <- data.frame (col1, col2, col3, col4)
# Original R code: Before vectorization and pre-allocation
system.time({
for (i in 1:nrow(df)) { # for every row
if ((df[i, "col1"] + df[i, "col2"] + df[i, "col3"] + df[i, "col4"]) > 4) { # check if > 4
df[i, 5] <- "greater_than_4" # assign 5th column
} else {
df[i, 5] <- "lesser_than_4" # assign 5th column
}
}
})
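For comparison, the same check can be written without the row-by-row loop; this is just an illustrative sketch, not part of the original script:
system.time({
  df$V5 <- ifelse(rowSums(df[, c("col1", "col2", "col3", "col4")]) > 4,
                  "greater_than_4", "lesser_than_4")
})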
... and a shortened "real code":
library(NLP)
library(tm)
library(SnowballC)
library(topicmodels)
library(lda)
library(textclean)
# load data and create corpus
filenames <- list.files(getwd(),pattern='*.txt')
files <- lapply(filenames,readLines)
docs <- Corpus(VectorSource(files))
# clean data (shortened, just two examples)
docs.adj <- tm_map(docs.adj, removeWords, stopwords('english'))
docs.adj <-tm_map(docs.adj,content_transformer(tolower))
# create document-term matrix
dtm <- DocumentTermMatrix(docs.adj)
dtm_stripped <- removeSparseTerms(dtm, 0.8)
rownames(dtm_stripped) <- filenames
freq <- colSums(as.matrix(dtm_stripped))
ord <- order(freq,decreasing=TRUE)
### find optimal number of k
burnin <- 10000
iter <- 250
thin <- 50
seed <-list(3)
nstart <- 1
best <- TRUE
seq_start <- 2
seq_end <- length(files)
iteration <- floor(length(files)/5)
best.model <- lapply(seq(seq_start,seq_end, by=iteration), function(k){LDA(dtm_stripped, k, method = 'Gibbs',control=list(nstart=nstart, seed = seed, best=best, burnin = burnin, iter = iter, thin=thin))})
best.model.logLik <- as.data.frame(as.matrix(lapply(best.model, logLik)))
best.model.logLik.df <- data.frame(topics=c(seq(seq_start,seq_end, by=iteration)), LL=as.numeric(as.matrix(best.model.logLik)))
optimal_k <- best.model.logLik.df[which.max(best.model.logLik.df$LL),]
print(optimal_k)
### do topic modeling with more iterations on optimal_k
burnin <- 4000
iter <- 1000
thin <- 100
seed <-list(2003,5,63)
nstart <- 3
best <- TRUE
ldaOut <-LDA(dtm_stripped,optimal_k, method='Gibbs', control=list(nstart=nstart, seed = seed, best=best, burnin = burnin, iter = iter, thin=thin))
From a quick look at your R script, it looks like it is in:
best.model <- lapply(seq(seq_start,seq_end, by=iteration), function(k){
LDA(dtm_stripped, k, method = 'Gibbs', control=list(nstart=nstart, seed = seed, best=best, burnin = burnin, iter = iter, thin=thin))
})
where most of the processing time takes place. Here you could try to parallelize the code using future_lapply() instead of lapply(), i.e.
best.model <- future_lapply(seq(seq_start,seq_end, by=iteration), function(k){
LDA(dtm_stripped, k, method = 'Gibbs', control=list(nstart=nstart, seed = seed, best=best, burnin = burnin, iter = iter, thin=thin))
}, future.seed = TRUE)
I've also added future.seed = TRUE to make sure your random number generation is statistically sound when done in parallel. The future_lapply() function is in the future.apply package (*) so you need to do:
library(future.apply)
at the top of your script. Now there's one final thing you need to do - you need to tell it to run in parallel (the default is sequential) by adding:
plan(multiprocess)
also at the top (after attaching future.apply). The default is to use whatever cores are "available", where "available" also takes into account the number of cores the HPC scheduler (e.g. Slurm) allocates to your job. If you try the above on your local machine, it will default to using the number of cores it has. That is, you can verify your code on your local machine and you should see some speedup. When you know it works, you can rerun it on the cluster via your Slurm allocation and it should work there out of the box - just with more parallel processes.
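If you want to double-check how many workers will actually be used under a given allocation, something like this right after the plan() call reports it (a small sketch; availableCores() and nbrOfWorkers() are helpers from the future package):
library(future.apply)
plan(multiprocess)        # parallel; the default would be sequential
future::availableCores()  # cores made available by the machine / scheduler
future::nbrOfWorkers()    # workers the current plan will actually use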
You might find my blog post on future.apply from 2018-06-23 useful - it has some FAQ at the end.
(*) Disclaimer: I'm the author of future.apply.

Only one processor being used while running NetLogo models using parApply

I am using the 'RNetLogo' package to run sensitivity analyses on my NetLogo model. My model has 24 parameters I need to vary - so parallelising this process would be ideal! I've been following along with the example in Thiele's "Parallel processing with the RNetLogo package" vignette, which uses the 'parallel' package in conjunction with 'RNetLogo'.
I've managed to get R to initialise the NetLogo model across all 12 of my processors, which I've verified using gui=TRUE. The problem comes when I try to run the simulation code across the 12 processors using 'parApply'. This line runs without error, but it only runs on one of the processors (using around 8% of my total CPU power). Here's a mock up of my R code file - I've included some commented-out code at the end, showing how I run the simulation without trying to parallelise:
### Load packages
library(parallel)
### Set up initialisation function
prepro <- function(dummy, gui, nl.path, model.path) {
library(RNetLogo)
NLStart(nl.path, gui=gui)
NLLoadModel(model.path)
}
### Set up finalisation function
postpro <- function(x) {
NLQuit()
}
### Set paths
# For NetLogo
nl.path <- "C:/Program Files/NetLogo 6.0/app"
nl.jarname <- "netlogo-6.0.0.jar"
# For the model
model.path <- "E:/Model.nlogo"
# For the function "sim" code
sim.path <- "E:/sim.R"
### Set base values for parameters
base.param <- c('prey-max-velocity' = 25,
'prey-agility' = 3.5,
'prey-acceleration' = 20,
'prey-deceleration' = 25,
'prey-vision-distance' = 10,
'prey-vision-angle' = 240,
'time-to-turn' = 5,
'time-to-return-to-foraging' = 300,
'time-spent-circling' = 2,
'predator-max-velocity' = 35,
'predator-agility' = 3.5,
'predator-acceleration' = 20,
'predator-deceleration' = 25,
'predator-vision-distance' = 20,
'predator-vision-angle' = 200,
'time-to-give-up' = 120,
'number-of-safe-zones' = 1,
'number-of-target-patches' = 5,
'proportion-obstacles' = 0.05,
'obstacle-radius' = 2.0,
'obstacle-radius-range' = 0.5,
'obstacle-sensitivity-for-prey' = 0.95,
'obstacle-sensitivity-for-predators' = 0.95,
'safe-zone-attractiveness' = 500
)
## Get names of parameters
param.names <- names(base.param)
### Load the code of the simulation function (name: sim)
source(file=sim.path)
### Convert "base.param" to a matrix, as required by parApply
base.param <- matrix(base.param, nrow=1, ncol=24)
### Get the number of simulations we want to run
design.combinations <- length(base.param[[1]])
already.processed <- 0
### Initialise NetLogo
processors <- detectCores()
cl <- makeCluster(processors)
clusterExport(cl, 'sim')
gui <- FALSE
invisible(parLapply(cl, 1:processors, prepro, gui=gui, nl.path=nl.path, model.path=model.path))
### Run the simulation across all processors, using parApply
sim.result.base <- parApply(cl, base.param, 1, sim,
param.names,
no.repeated.sim = 100,
trace.progress = FALSE,
iter.length = design.combinations,
function.name = "base parameters")
### Run the simulation on a single processor
#sim.result.base <- sim(base.param,
# param.names,
# no.repeated.sim = 100,
# my.nl1,
# trace.progress = TRUE,
# iter.length = design.combinations,
# function.name = "base parameters")
Here's a mock up for the 'sim' function (adapted from Thiele's paper "Facilitating parameter estimation and sensitivity analyses of agent-based models - a cookbook using NetLogo and R"):
sim <- function(param.set, parameter.names, no.repeated.sim, trace.progress, iter.length, function.name) {
# Some security checks
if (length(param.set) != length(parameter.names))
{ stop("Wrong length of param.set!") }
if (no.repeated.sim <= 0)
{ stop("Number of repetitions must be > 0!") }
if (length(parameter.names) <= 0)
{ stop("Length of parameter.names must be > 0!") }
# Create an empty list to save the simulation results
eval.values <- NULL
# Run the repeated simulations (to control stochasticity)
for (i in 1:no.repeated.sim)
{
# Create a random-seed for NetLogo from R, based on min/max of NetLogo's random seed
NLCommand("random-seed",runif(1,-2147483648,2147483647))
## This is the stuff for one simulation
cal.crit <- NULL
# Set NetLogo parameters to current parameter values
lapply(seq(1:length(parameter.names)), function(x) {NLCommand("set ",parameter.names[x], param.set[x])})
NLCommand("setup")
# This should run "go" until prey-win =/= 5, i.e. when the pursuit ends
NLDoCommandWhile("prey-win = 5", "go")
# Report a value
prey <- NLReport("prey-win")
# Report another value
pred <- NLReport("predator-win")
## Extract the values we are interested in
cal.crit <- rbind(cal.crit, c(prey, pred))
# append to former results
eval.values <- rbind(eval.values,cal.crit)
}
## Make sure eval.values has column names
colnames(eval.values) <- c("PreySuccess", "PredSuccess")
# Return the mean of the repeated simulation results
if (no.repeated.sim > 1) {
return(colMeans(eval.values))
}
else {
return(eval.values)
}
}
I think the problem might lie in the "nl.obj" string that RNetLogo uses to identify the NetLogo instance you want to run the code on - however, I've tried several different methods of fixing this, and I haven't been able to come up with a solution that works. When I initialise NetLogo across all the processors using the code provided in Thiele's example, I don't set an "nl.obj" value for each instance, so I'm guessing RNetLogo uses some kind of default list? However, in Thiele's original code, the "sim" function requires you to specify which NetLogo instance you want to run it on - so R will spit an error when I try to run the final line (Error in checkForRemoteErrors(val) : one node produced an error: argument "nl.obj" is missing, with no default). I have modified the "sim" function code so that it doesn't require this argument and just accepts the default setting for nl.obj - but then my simulation only runs on a single processor. So, I think that by default, "sim" must only be running the code on a single instance of NetLogo. I'm not certain how to fix it.
This is also the first time I've used the 'parallel' package, so I could be missing something obvious to do with 'parApply'. Any insight would be much appreciated!
Thanks in advance!
I am still in the process of applying a similar technique to perform a Morris Elementary Effects screening with my NetLogo model. For me the parallel execution works fine. I compared your script to mine and noticed that in my version the 'parApply' call of the simulation function (simfun) is embedded in a function statement (see below). Maybe including the function already solves your issue.
sim.results.morris <- parApply(cl, mo$X, 1, function(x) {simfun(param.set=x,
no.repeated.sim=no.repeated.sim,
parameter.names=input.names,
iter.length=iter.length,
fixed.values=fixed.values,
model.seed=new.model.seed,
function.name="Morris")})

How could this custom process be done in parallel, or on multiple cores?

I am trying to figure out how I could use any of the parallel processing packages like foreach or doParallel in this random forest loop I have created:
ModelInfo <- data.frame ( model=as.numeric()
,Nodesize=as.numeric()
,Mrty=as.numeric()
,Maxdepth=as.numeric()
,Cp=as.numeric()
,Accuracy_Training=as.numeric()
,AUC_Training=as.numeric())
w=1
set.seed(1809)
NumberOfSamples=1
# Number of iterations
rfPred=list()
pred=list()
roundpred=list()
cTab=list()
Acc=list()
pred.to.roc=list()
pred.rocr=list()
perf.rocr=list()
AUC=list()
Var_imp=list()
rf_model_tr = list()
length(rf_model_tr) <- NumberOfSamples
for (i in 1:NumberOfSamples)
{
rf_model_tr[[i]] = list()
rfPred[[i]]=list()
pred[[i]]=list()
roundpred[[i]]=list()
cTab[[i]]=list()
Acc[[i]]=list()
pred.to.roc[[i]]=list()
pred.rocr[[i]]=list()
perf.rocr[[i]]=list()
AUC[[i]]=list()
Var_imp[[i]]=list()
## Tune nodesize
nodesize =c(10,20,50,80,100,200)
n=length(nodesize)
length(rf_model_tr[[i]]) <- n
for ( j in 1: length (nodesize))
{
rf_model_tr[[i]][[j]] = list()
rfPred[[i]][[j]]=list()
pred[[i]][[j]]=list()
roundpred[[i]][[j]]=list()
cTab[[i]][[j]]=list()
Acc[[i]][[j]]=list()
pred.to.roc[[i]][[j]]=list()
pred.rocr[[i]][[j]]=list()
perf.rocr[[i]][[j]]=list()
AUC[[i]][[j]]=list()
Var_imp[[i]][[j]]=list()
## Tune mrty
mrtysize =c(2,3,4)
m=length(mrtysize)
length(rf_model_tr[[i]][[j]]) <- m
for ( k in 1: length (mrtysize))
{
rf_model_tr[[i]][[j]][[k]] = list()
rfPred[[i]][[j]][[k]]=list()
pred[[i]][[j]][[k]]=list()
roundpred[[i]][[j]][[k]]=list()
cTab[[i]][[j]][[k]]=list()
Acc[[i]][[j]][[k]]=list()
pred.to.roc[[i]][[j]][[k]]=list()
pred.rocr[[i]][[j]][[k]]=list()
perf.rocr[[i]][[j]][[k]]=list()
AUC[[i]][[j]][[k]]=list()
Var_imp[[i]][[j]][[k]]=list()
## Tune maxdepth
maxdep =c(10,20,30)
z=length(maxdep)
length(rf_model_tr[[i]][[j]][[k]]) <- z
for (l in 1:length (maxdep))
{
rf_model_tr[[i]][[j]][[k]][[l]] = list()
rfPred[[i]][[j]][[k]][[l]]=list()
pred[[i]][[j]][[k]][[l]]=list()
roundpred[[i]][[j]][[k]][[l]]=list()
cTab[[i]][[j]][[k]][[l]]=list()
Acc[[i]][[j]][[k]][[l]]=list()
pred.to.roc[[i]][[j]][[k]][[l]]=list()
pred.rocr[[i]][[j]][[k]][[l]]=list()
perf.rocr[[i]][[j]][[k]][[l]]=list()
AUC[[i]][[j]][[k]][[l]]=list()
Var_imp[[i]][[j]][[k]][[l]]=list()
## Tune cp
cp =c(0,0.01,0.001)
p=length(cp)
length(rf_model_tr[[i]][[j]][[k]][[l]]) <- p
for (m in 1:length (cp))
{
rf_model_tr[[i]][[j]][[k]][[l]][[m]] <- randomForest(as.factor(class) ~ .,
  data = train, mtry = mrtysize[[k]], maxDepth = maxdep[[l]], replace = F,
  importance = T, do.trace = 10, ntree = 200, nodesize = nodesize[j], cp = cp[[m]])
#Accuracy
rfPred[[i]][[j]][[k]][[l]][[m]] <- predict(rf_model_tr[[i]][[j]][[k]][[l]][[m]], train, type = "prob")
pred[[i]][[j]][[k]][[l]][[m]] <- colnames(rfPred[[i]][[j]][[k]][[l]][[m]] )[apply(rfPred[[i]][[j]][[k]][[l]][[m]] ,1,which.max)]
cTab[[i]][[j]][[k]][[l]][[m]] = table(pred[[i]][[j]][[k]][[l]][[m]],train$class)
Acc[[i]][[j]][[k]][[l]][[m]]<- sum(diag(cTab[[i]][[j]][[k]][[l]][[m]])) / sum(cTab[[i]][[j]][[k]][[l]][[m]])
#AUC
pred.to.roc[[i]][[j]][[k]][[l]][[m]]<-rfPred[[i]][[j]][[k]][[l]][[m]][,2]
pred.rocr[[i]][[j]][[k]][[l]][[m]]<-prediction(pred.to.roc[[i]][[j]][[k]][[l]][[m]],as.factor(train$class))
perf.rocr[[i]][[j]][[k]][[l]][[m]]<-performance(pred.rocr[[i]][[j]][[k]][[l]][[m]],measure="auc",x.measure="cutoff")
AUC[[i]][[j]][[k]][[l]][[m]]<-as.numeric(perf.rocr[[i]][[j]][[k]][[l]][[m]]@y.values)
#Variable Importance
Var_imp[[i]][[j]][[k]][[l]][[m]]<-(importance(rf_model_tr[[i]][[j]][[k]][[l]][[m]],type=2))
ModelInfo[w,1]<-w
ModelInfo[w,2]<-nodesize[[j]]
ModelInfo[w,3]<-mrtysize[[k]]
ModelInfo[w,4]<-maxdep[[l]]
ModelInfo[w,5]<-cp[[m]]
ModelInfo[w,6]<-Acc[[i]][[j]][[k]][[l]][[m]]
ModelInfo[w,7]<-AUC[[i]][[j]][[k]][[l]][[m]]
w=w+1
}
}
}
}
}
Basically, what I am doing is creating all possible model variations from one dataset based on the available tuning parameters for a random forest (nodesize, cp, etc.) and storing that information in the ModelInfo table as every iteration goes by. In addition, I add measures like accuracy and AUC so I can compare the different models at the end and make a pick.
The reason I am looking for an alternative is that the caret package only lets me tune mtry. There I do have the chance to run parRF, which could solve my problem, but I would prefer to incorporate something here - how would that be possible?
I have read about the foreach and doParallel packages, but I don't quite see how the syntax would work here.
If the initial data is needed please let me know; at this point I just thought to show the part that needs to be computed in parallel.
Thank you in advance
Hi, I normally just code everything manually. On Linux/Mac I use the parallel package and mclapply(), which can use memory forking. Forked processes use less memory and are faster to start up. Windows does not support forking, so there I use the doParallel package (other packages would also do). The foreach() function is a user-friendly parallel mapper. I find myself spending more time setting up single-PC parallel computing than I save from the speed-up. Still fun :)
If you work at a university, you may have access to a large cluster. The BatchJobs package is another mapper which can use many different back-ends, e.g. a Torque/PBS queue system. I can borrow 80 nodes with 4 CPUs each, giving me a potential 320-times speed-up (more like 150 times in practice). I learned about BatchJobs from this great introduction. I like that BatchJobs can also run single- or multi-core locally, which is much easier to debug.
The code below shows how to create a list of jobs and run them with either foreach or BatchJobs. Each job is a set of arguments. The job arguments are fused with the standard arguments and a model is trained. Some statistics are returned, and all results and arguments are combined into a data.frame.
useForeach = FALSE #If FALSE, will run as batchjobs. Only faster for cluster computing.
library(randomForest)
#load a data set
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv"
download.file(url,destfile="winequality-white.csv",mode="w")
wwq = read.csv(file="winequality-white.csv",header=T,sep=";")
X = wwq[,names(wwq) != "quality"]
y = wwq[,"quality"]
#2 - make jobs
pars = expand.grid(
mtry = c(1:3),
sampsize = floor(seq(1000,1898,length.out = 3)),
nodesize = c(1,3)
)
jobs = lapply(1:dim(pars)[1], function(i) pars[i,])
#3 - make node function that will execute a number of jobs
test.pars = function(someJobs,useForeach=TRUE) {
#if running cluster, global environment imported manually
if(!useForeach) load(file="thisState.rda")
do.call(rbind,lapply(someJobs,function(aJob){ #do jobs and bind results by rows
print(aJob)
merged.args = c(alist(x=X,y=y),as.list(aJob)) #merge std. and job args
run.time = system.time({rfo = do.call(randomForest,merged.args)}) #run a job
data.frame(accuracy=tail(rfo$rsq,1),run.time=run.time[3],mse=tail(rfo$mse,1))
}))
}
##test function single core
jobsToTest = 1:5
out = test.pars(jobs[jobsToTest])
print(cbind(out,do.call(rbind,jobs[jobsToTest])))
#4a execute jobs with foreach package:
if(useForeach) {
library(foreach)
library(doParallel)
CPUs=4
cl = makeCluster(CPUs)#works both for windows and linux, otherwise forking is better
registerDoParallel(cl)
nodes=min(CPUs,length(jobs)) #how many splits of jobList, not so important for foreach...
job.array = suppressWarnings(split(jobs,1:nodes)) #split warns if each core cannot get same amount of jobs
out = foreach(i=job.array,.combine=rbind,.packages="randomForest") %dopar% test.pars(i)
stopCluster(cl)
} else {
library(BatchJobs)
#4b - execute jobs with BatchJobs package (read manual how to set up on cluster)
nodes=min(80,length(jobs)) # how many nodes to split job onto
job.array = split(jobs,1:nodes)
save(list=ls(),file="thisState.rda") #export this state(global environment) to every node
#initiate run
reg = makeRegistry(id ="myFirstBatchJob",packages="randomForest")
batchMap(reg,fun=test.pars,someJobs = job.array,more.args=list(useForeach=FALSE))
submitJobs(reg)
waitForJobs(reg)
out = loadResults(reg)
#6 - wrap up, save final results for the user
finalResult = cbind(do.call(rbind,jobs),do.call(rbind,out))
save(out,file="finalResult.rda")
removeRegistry(reg,ask="no")
}
#7- print final result
print(cbind(do.call(rbind,jobs),out))
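On Linux/macOS the same job list could also be run with forked workers via mclapply(), as mentioned at the top of this answer (a sketch; mc.cores = 4 is just an example value):
library(parallel)
out.list <- mclapply(jobs, function(aJob) test.pars(list(aJob)), mc.cores = 4)
out <- do.call(rbind, out.list)
print(cbind(do.call(rbind, jobs), out))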

Using ifelse to create a running tally in R

I am trying to do some quantitative modeling in R. I'm not getting an error message, but the results are not what I actually need.
I am a newbie, but here is my complete code sample.
library(quantmod)
#Building the data frame and xts to show dividends, splits and technical indicators
getSymbols(c("AMZN"))
Playground <- data.frame(AMZN)
Playground$date <- as.Date(row.names(Playground))
Playground$wday <- as.POSIXlt(Playground$date)$wday #day of the week
Playground$yday <- as.POSIXlt(Playground$date)$mday #day of the month
Playground$mon <- as.POSIXlt(Playground$date)$mon #month of the year
Playground$RSI <- RSI(Playground$AMZN.Adjusted, n = 5, maType="EMA") #can add Moving Average Type with maType =
Playground$MACD <- MACD(AMZN, nFast = 12, nSlow = 26, nSig = 9)
Playground$Div <- getDividends('AMZN', from = "2007-01-01", to = Sys.Date(), src = "google", auto.assign = FALSE)
Playground$Split <- getSplits('AMZN', from = "2007-01-01", to = Sys.Date(), src = "google", auto.assign = FALSE)
Playground$BuySignal <- ifelse(Playground$RSI < 30 & Playground$MACD < 0, "Buy", "Hold")
All is well up until this point when I start using some logical conditions to come up with decision points.
Playground$boughts <- ifelse(Playground$BuySignal == "Buy", lag(Playground$boughts) + 1000, lag(Playground$boughts))
It will execute but the result will be nothing but NA. I suppose this is because you are trying to add NA to a number, but I'm not 100% sure. How do you tell the computer I want you to keep a running tally of how much you have bought?
Thanks so much for the help.
So we want to buy 1000 shares every time a buy signal is generated?
Your problem stems from the MACD indicator. It actually generates two columns, macd and signal. You have to decide which one you want to keep.
Playground$MACD <- MACD(AMZN, nFast = 12, nSlow = 26, nSig = 9)$signal
This should solve the problem at hand.
Also, please check the reference for ifelse. The class of the return value can be tricky at times, so the approach suggested by Floo0 is preferable.
Also, I'd advocate using 1 and 0 instead of buy and sell to show whether you are holding. It makes the math much easier.
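For example, with a 0/1 signal the running tally becomes a simple cumulative sum (an illustrative sketch, assuming the columns from the question and treating the leading NA rows as 0):
Playground$BuySignal <- ifelse(Playground$RSI < 30 & Playground$MACD < 0, 1, 0)
Playground$boughts <- cumsum(ifelse(is.na(Playground$BuySignal), 0, Playground$BuySignal)) * 1000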
And I'd strongly suggest reading some beginner tutorial on backtesting with PerformanceAnalytics. They make the going much much easier.
BTW, you missed this line in the code:
Playground$boughts<- 0
Hope it helps.
EDIT: And I forgot to mention the obvious: discard the first few rows where MACD will be NA.
Something like:
Playground<- Playground[-c(1:26),]
Whenever you want to do an ifelse of the form "if ... do something, else stay the same", do not use ifelse.
Try this instead:
ind <- which(Playground$BuySignal == "Buy")
Playground$boughts[ind] <- lag(Playground$boughts)[ind] + 1000

Relative Performance loop in R

I'm not a programmer by any means and have been trying to learn R to code various trading strategies. I'm trying to calculate the relative performance of a list of stocks versus the S&P 500 and save it to a matrix. It appears that what I've written only goes through the first symbol and then stops. Below is the code that I've come up with. I appreciate any help, input and advice on how to proceed. Thank you.
library(quantmod)
library(PerformanceAnalytics)
Sys.setenv(TZ = "UTC")
symbols <- c('IBM', 'GE', '^GSPC')
getSymbols(symbols, src = "yahoo", from = "2010-12-31", to = Sys.Date())
symadj <- cbind(IBM[,6], GE[,6])
sp5adj <- GSPC[,6]
# Calculate Relative Performance vs S&P and save data
for (i in length(symadj)) {
rp <- matrix(symadj[,1]/sp5adj, nrow = 1070, ncol = 3)
print(tail(rp))
}
You are not looping over an array but over a single number:
for (i in length(symadj))
Try this instead (note the added seq and watch the parentheses; also be careful with length - the iteration should be over ncol, i.e. the columns):
for (i in seq(1,ncol(rp),1))
Also, you are always going through the same column:
rp <- matrix(symadj[,1]/sp5adj, nrow = 1070, ncol = 3)
A thing I skipped: you should build your matrix before the loop:
rp <- matrix(0,nrow=1071,ncol=2)
And then assign without overwriting your previous matrix - you have already built it (also, note the i where the 1 was; now you are iterating):
rp[,i] <- symadj[,i]/sp5adj #This inside the loop
Your for loop should end up looking something like this:
rp <- matrix(0,nrow=1071,ncol=2)
for (i in seq(1,ncol(rp),1)) {
rp[,i] <- symadj[,i]/sp5adj #This inside the loop
print(tail(rp))
}
Note: there are 1071 days in that period, so the matrix should have one more row - that's why the 1071.
