Large Monitor List for runjags

With runjags, I am trying to monitor a very large number of values. The format for the monitor list is a vector of strings; in this case I am asking to monitor just three: Y[14], Y[15], and Y[3].
run.jags(model="model.MC.txt",
         data=list(Y=Y.NA.Rep, sizes=sizesB, cumul=cumul),
         monitor=c("thetaj", "Y[14]", "Y[15]", "Y[3]"))
Suppose I wanted to monitor hundreds of values. I can create this vector, but the console just returns the continuation prompt "+" and fails to run.
Is there some upper limit on the size of strings that can be created and passed in as arguments?
Is there a better way (non string) to pass this list into run.jags?
The only way I have been able to get it to run is to paste the string literal into the function call; a variable containing the string does not work.
The longer run list looks something like this:
run.jags(model="model.MC.txt",
         data=list(Y=Y.NA.Rep, sizes=sizesB, cumul=cumul),
         monitor=c('Y[14]', 'Y[15]', 'Y[18]', 'Y[26]', 'Y[41]',
                   'Y[55]', 'Y[62]', 'Y[72]', 'Y[80]', 'Y[81]', 'Y[128]', 'Y[138]',
                   'Y[180]', 'Y[188]', 'Y[191]', 'Y[209]', 'Y[224]', 'Y[244]',
                   'Y[255]', 'Y[263]', 'Y[282]', 'Y[292]', 'Y[303]', 'Y[324]',
                   'Y[349]', 'Y[358]', 'Y[359]', 'Y[365]', 'Y[384]',
                   ... many lines deleted
                   'Y[1882]', 'Y[1895]', 'Y[1899]', 'Y[1903]', 'Y[1918]', 'Y[1922]',
                   'Y[1929]', 'Y[1942]', 'Y[1953]', 'Y[1990]'))
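Ideally I would build this vector programmatically rather than typing it out, along the lines of this sketch (the index vector is an illustrative subset of the real one):
idx <- c(14, 15, 18, 26, 41, 55)   # illustrative subset of the real indices
monitors <- paste0("Y[", idx, "]")
...but passing the resulting variable as the monitor argument is exactly what fails for me.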

I'm not sure that this is a problem with runjags - the following code has 1002 monitors and runs just fine:
model <- "model {
for(i in 1 : N){ #data# N
Y[i] ~ dnorm(true.y[i], precision) #data# Y
true.y[i] <- (m * X[i]) + c #data# X
}
m ~ dnorm(0, 10^-3)
c ~ dnorm(0, 10^-3)
precision ~ dgamma(10^-3, 10^-3)
}"
X <- 1:1000
Y <- rnorm(length(X), 2*X + 10, 1)
N <- length(X)
monitors <- c('m','c',paste0('Y[',1:1000,']'))
results <- run.jags(model, n.chains=2, monitor=monitors, sample=100, method='rjags')
results <- run.jags(model, n.chains=2, monitor=monitors, sample=100, method='inter')
I have also tried writing the string directly into the function call by using:
cat('monitor = c("'); cat(monitors, sep='", "'); cat('")\n')
...and copy/pasting the resulting text as the monitor argument - that still works for me in R.app but when pasting into RStudio I get:
> results <- run.jags(model, n.chains=2, monitor = c("m", "c", "Y[1]", "Y[2]", "Y[3]", "Y[4]", "Y[5]", "Y[6]", "Y[7]", "Y[8]", "Y[9]", "Y[10]", "Y[11]", "Y[12]", "Y[13]", "Y[14]", "Y[15]", "Y[16]", "Y[17]", "Y[18]", "Y[19]", "Y[20]", "Y[21]", "Y[22]", "Y[23]", "Y[24]", "Y[25]", "Y[26]", "Y[27]", "Y[28]", "Y[29]", "Y[30]", "Y[31]", "Y[32]", "Y[33]", "Y[34]", "Y[35]", "Y[36]", "Y[37]", "Y[38]", "Y[39]", "Y[40]", "Y[41]", "Y[42]", "Y[43]", "Y[44]", "Y[45]", "Y[46]", "Y[47]", "Y[48]", "Y[49]", "Y[50]", "Y[51]", "Y[52]", "Y[53]", "Y[54]", "Y[55]", "Y[56]", "Y[57]", "Y[58]", "Y[59]", "Y[60]", "Y[61]", "Y[62]", "Y[63]", "Y[64]", "Y[65]", "Y[66]", "Y[67]", "Y[68]", "Y[69]", "Y[70]", "Y[71]", "Y[72]", "Y[73]", "Y[74]", "Y[75]", "Y[76]", "Y[77]", "Y[78]", "Y[79]", "Y[80]", "Y[81]", "Y[82]", "Y[83]", "Y[84]", "Y[85]", "Y[86]", "Y[87]", "Y[88]", "Y[89]", "Y[90]", "Y[91]", "Y[92]", "Y[93]", "Y[94]", "Y[95]", "Y[96]", "Y[97]", "Y[98]", "Y[99]", "Y[100]", "Y[101]", "Y[102]", "Y[103]", "Y[104]", "Y[105]... <truncated>
+
+
That is somewhat similar to your description, so I'm guessing that you are using RStudio and that the problem is to do with the maximum length of a line of code that RStudio can interpret.
If so, the fix is simply to hard-wrap the command so it is broken over multiple lines - I tried this with a 72-character width (100+ lines) and it works fine in RStudio. If my assumption is incorrect, please edit your question to give more details of how you are running R and of your system, using:
> sessionInfo()
R version 3.4.0 (2017-04-21)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Sierra 10.12.5
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] runjags_2.0.4-2
loaded via a namespace (and not attached):
[1] compiler_3.4.0 tools_3.4.0 parallel_3.4.0 coda_0.19-1 grid_3.4.0 rjags_4-6 lattice_0.20-35

Related

glmmTMB_phylo: Error in Matrix::rankMatrix(TMBStruc$data.tmb[[whichX]]) : length(d <- dim(x)) == 2 is not TRUE

I am trying to run the following model:
mod1<- phylo_glmmTMB(response ~ sv1 + # sampling variables
sv2 + sv3 + sv4 + sv5 +
sv6 + sv7 +
(1|phylo) + (1|reference_id), #random effects
ziformula = ~ 0,
#ar1(pos + 0| group) # spatial autocorrelation structure; group is a dummy variable
phyloZ = supertreenew,
phylonm = "phylo",
family = "binomial",
data = data)
But I keep getting the error:
Error in Matrix::rankMatrix(TMBStruc$data.tmb[[whichX]]) :
length(d <- dim(x)) == 2 is not TRUE
This error also occurs with another reproducible example (data) that I found.
Before running the model, I just loaded my data (data and supertree) and computed a Z matrix from supertree:
# Compute Z matrix
# supertreenew <- vcv.phylo(supertreenew)
# or
supertreenew <- phylo.to.Z(supertreenew)
# enforce match between the rows of the Z matrix and the data
supertreenew <- supertreenew[levels(factor(data$phylo)), ]
I have installed the development version via:
remotes::install_github("wzmli/phyloglmm/pkg")
But no success.
The dimensions of my supertree are:
[[1]]
... [351]
[[2]]
... [645]
Any guesses?
My session info:
R version 4.2.2 (2022-10-31 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 22621)
Matrix products: default
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] phyloglmm_0.1.0.9001 brms_2.18.0 cpp_1.0.9 performance_0.10. DHARMa_0.4.6
[6] phytools_1.2-0 maps_3.4.0 ape_5.6-2 lme4_1.1-31 Matrix_1.5-1
[11] TMB_1.9.1 glmmTMB_1.1.5.9000 remotes_2.4.2
(First error, "Error in Matrix::rankMatrix") This is a consequence of the addition of a check of the rank of the fixed-effects matrix in recent versions of glmmTMB. For now, adding
control = glmmTMB::glmmTMBControl(rank_check = "skip")
to your phylo_glmmTMB call should work around the problem.
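For example, a sketch of the call from the question with that argument added:
mod1 <- phylo_glmmTMB(response ~ sv1 + sv2 + sv3 + sv4 + sv5 + sv6 + sv7 +
                        (1|phylo) + (1|reference_id),
                      ziformula = ~ 0,
                      phyloZ = supertreenew,
                      phylonm = "phylo",
                      family = "binomial",
                      data = data,
                      # skip the rank check introduced in recent glmmTMB
                      control = glmmTMB::glmmTMBControl(rank_check = "skip"))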
(Second error, "Error in getParameterOrder(data, parameters, new.env(), DLL = DLL) ...") I just updated the refactor branch to handle this problem [caused by internal changes in glmmTMB]. Use remotes::install_github("wzmli/phyloglmm/pkg@refactor") to install this version, then try your example again.

Plot Line Types in R

Update:
This has been confirmed as a current bug on Apple OS as of Feb 28, 2022.
Update:
Below is my sessionInfo. I have restarted RStudio and tried dev.off(), but neither works; I am still getting the odd dashed line. I have also tried the code in plain R (not RStudio) and the dashed line is still wrong.
> sessionInfo()
R version 4.1.2 (2021-11-01)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Monterey 12.2.1
Matrix products: default
LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_4.1.2 tools_4.1.2
The following code will produce Plot 1.
zeta.ppt <- function(v){
  ppt <- function(i){
    result <- numeric(length(i))
    for (j in i){
      if (j < 11) {result[j] <- (11-j)/110}
      else {result[j] <- 3/pi^2/(j-10)^2}
    }
    result
  }
  p <- ppt(1:10000)
  printout <- numeric(length(v))
  for (k in 1:length(v)) {
    printout[k] <- sum(p*(1-p)^v[k])
  }
  printout
}
zeta.sept <- function(v){
  sept <- function(i){
    result <- numeric(length(i))
    for (j in i){
      if (j < 11) {result[j] <- (11-j)/110}
      else {result[j] <- 0.5/1.670407*exp(-sqrt(j-10))}
    }
    result
  }
  p <- sept(1:10000)
  printout <- numeric(length(v))
  for (k in 1:length(v)) {
    printout[k] <- sum(p*(1-p)^v[k])
  }
  printout
}
tau.ppt <- function(v){
  v*zeta.ppt(v)
}
tau.sept <- function(v){
  v*zeta.sept(v)
}
plot(log(tau.ppt(1:20000)) ~ log(1:20000), xlim = c(0, 10), ylim = c(0, 5),
     axes = F, ylab = "", xlab = "", type = "l")
lines(log(tau.sept(1:20000)) ~ log(1:20000), lty = 2, type = "l")
box()
If you look at the dashed line, its right-hand portion is not evenly spaced. How can I make it an evenly spaced dashed line like Plot 2?
Thanks!
I have plotted it in RStudio and the dashes are evenly spaced (the author asked me to show my image). You could also try running dev.off() before you plot your image.
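If the on-screen device itself is at fault (the updates above point to an Apple OS rendering bug), writing to a file device may sidestep it; a minimal sketch reusing the functions above (file name illustrative):
png("tau_plot.png", width = 800, height = 600)   # render off-screen instead
plot(log(tau.ppt(1:20000)) ~ log(1:20000), xlim = c(0, 10), ylim = c(0, 5),
     axes = F, ylab = "", xlab = "", type = "l")
lines(log(tau.sept(1:20000)) ~ log(1:20000), lty = 2)
box()
dev.off()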
My session info:
R version 4.1.2 (2021-11-01)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 22000)

Parallelization of Rcpp without inline/ creating a local package

I am creating a package that I hope to eventually put onto CRAN. I have coded much of the package in C++ with the help of Rcpp and now would like to enable parallelization of this C++ code. I am using the foreach package; however, I am open to switching to snow or a different library if that would work better.
I started by trying to parallelize a simple function:
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp;

// [[Rcpp::export]]
arma::vec rNorm_c(int length) {
  return arma::vec(length, arma::fill::randn);
}

/*** R
n_workers <- parallel::detectCores(logical = F)
cl <- parallel::makeCluster(n_workers)
doParallel::registerDoParallel(cl)
n <- 10
library(foreach)
foreach(j = rep(n, n),
        .noexport = c("rNorm_c"),
        .packages = "Rcpp") %dopar% {rNorm_c(j)}
*/
I added the .noexport argument because without it I get the error Error in { : task 1 failed - "NULL value passed as symbol address". This led me to this SO post, which suggested doing so.
However, I now receive the error Error in { : task 1 failed - "could not find function "rNorm_c"", presumably because I have not followed the top answer's instructions to load the function separately at each node. I am unsure of how to do this.
This SO post demonstrates how to do this by writing the C++ code inline; however, since the C++ code for my package spans multiple functions, this is likely not the best solution. Another SO post advises creating a local package for the workers to load and call, but since I am hoping to make this code available in a CRAN package, a local package would not be possible unless I wanted to attempt to publish two CRAN packages.
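For what it's worth, one way to "load the function separately at each node" without a package is to compile the C++ on each worker; a minimal sketch, assuming the code above is saved as rNorm_c.cpp (the file name is illustrative):
library(parallel)
cl <- makeCluster(2)
# compile rNorm_c into each worker's global environment
clusterEvalQ(cl, Rcpp::sourceCpp("rNorm_c.cpp"))
res <- parLapply(cl, rep(10, 10), function(j) rNorm_c(j))
stopCluster(cl)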
Any suggestions for how to approach this or references to resources for parallelization of Rcpp code would be appreciated.
EDIT:
I used the above function to create a package called rnormParallelization. In this package, I also included a couple of R functions, one of which made use of the snow package to parallelize a for loop using the rNorm_c function:
rNorm_samples_for <- function(num_samples, length){
  sample_mat <- matrix(NA, length, num_samples)
  for (j in 1:num_samples){
    sample_mat[ , j] <- rNorm_c(length)
  }
  return(sample_mat)
}
rNorm_samples_snow1 <- function(num_samples, length){
  clus <- snow::makeCluster(3)
  snow::clusterExport(clus, "rNorm_c")
  out <- snow::parSapply(clus, rep(length, num_samples), rNorm_c)
  snow::stopCluster(clus)
  return(out)
}
Both functions work as expected:
> rNorm_samples_for(2, 3)
            [,1]       [,2]
[1,] -0.82040308 -0.3284849
[2,] -0.05169948  1.7402912
[3,]  0.32073516  0.5439799
> rNorm_samples_snow1(2, 3)
            [,1]       [,2]
[1,] -0.07483493  1.3028315
[2,]  1.28361663 -0.4360829
[3,]  1.09040771 -0.6469646
However, the parallelized version runs considerably more slowly:
> microbenchmark::microbenchmark(
+ rnormParallelization::rNorm_samples_for(1e3, 1e4),
+ rnormParallelization::rNorm_samples_snow1(1e3, 1e4)
+ )
Unit: milliseconds
                                                    expr       min        lq      mean    median        uq       max neval
   rnormParallelization::rNorm_samples_for(1000, 10000)  217.0871  249.3977  320.5456  285.9787  325.3447  802.7488   100
 rnormParallelization::rNorm_samples_snow1(1000, 10000) 1242.8315 1397.7643 1527.0406 1482.5867 1563.0916 3411.5774   100
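I suspect (though have not verified) that per-task overhead is to blame: with 1000 tiny tasks, serialization and communication dominate the actual computation. A sketch of a chunked variant that dispatches one substantial task per worker instead:
rNorm_samples_snow2 <- function(num_samples, length, workers = 3){
  clus <- snow::makeCluster(workers)
  snow::clusterExport(clus, "rNorm_c")
  # one large chunk per worker; may yield a few extra columns when
  # num_samples is not divisible by workers
  per_worker <- rep(ceiling(num_samples / workers), workers)
  out <- snow::parLapply(clus, per_worker,
                         function(k) sapply(rep(length, k), rNorm_c))
  snow::stopCluster(clus)
  do.call(cbind, out)
}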
Here is my session info:
> sessionInfo()
R version 4.1.1 (2021-08-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19043)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252
[2] LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] rnormParallelization_1.0
loaded via a namespace (and not attached):
[1] microbenchmark_1.4-7 compiler_4.1.1 snow_0.4-4
[4] parallel_4.1.1 tools_4.1.1 Rcpp_1.0.7
GitHub repo with both of these scripts

Slow dot product in R

I am trying to take the dot product of a 331x23152 and a 23152x23152 matrix.
In Python and Octave this is a trivial operation, but in R it seems to be incredibly slow.
N <- 331
M <- 23152
mat_1 = matrix( rnorm(N*M, mean=0, sd=1), N, M)
mat_2 = matrix( rnorm(M*M, mean=0, sd=1), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)
The output is
   user  system elapsed 
 101.95    0.04  101.99 
In other words, this dot product takes over 100 seconds to execute.
I am running R-3.4.0 64-bit, with RStudio v1.0.143 on a i7-4790 with 16 GB RAM. As such, I did not expect this operation to take so long.
Am I overlooking something? I have started looking into the packages bigmemory and bigalgebra, but I can't help but think there's a solution without having to resort to packages.
EDIT
To give you an idea of the time difference, here's a script for Octave:
n = 331;
m = 23152;
mat_1 = rand(n,m);
mat_2 = rand(m,m);
tic
mat_3 = mat_1*mat_2;
toc
The output is
Elapsed time is 3.81038 seconds.
And in Python:
import numpy as np
import time
n = 331
m = 23152
mat_1 = np.random.random((n,m))
mat_2 = np.random.random((m,m))
tm_1 = time.time()
mat_3 = np.dot(mat_1,mat_2)
tm_2 = time.time()
tm_3 = tm_2 - tm_1
print(tm_3)
The output is
2.781277894973755
As you can see, these numbers are not even in the same ballpark.
EDIT 2
At Zheyuan Li's request, here are toy examples for dot products.
In R:
mat_1 = matrix(c(1,2,1,2,1,2), nrow = 2, ncol = 3)
mat_2 = matrix(c(1,1,1,2,2,2,3,3,3), nrow = 3, ncol = 3)
mat_3 = mat_1 %*% mat_2
print(mat_3)
The output is:
     [,1] [,2] [,3]
[1,]    3    6    9
[2,]    6   12   18
In Octave:
mat_1 = [1,1,1;2,2,2];
mat_2 = [1,2,3;1,2,3;1,2,3];
mat_3 = mat_1*mat_2
The output is:
mat_3 =
   3    6    9
   6   12   18
In Python:
import numpy as np
mat_1 = np.array([[1,1,1],[2,2,2]])
mat_2 = np.array([[1,2,3],[1,2,3],[1,2,3]])
mat_3 = np.dot(mat_1, mat_2)
print(mat_3)
The output is:
[[ 3 6 9]
[ 6 12 18]]
For more information on matrix dot products: https://en.wikipedia.org/wiki/Matrix_multiplication
EDIT 3
The output for sessionInfo() is:
> sessionInfo()
R version 3.4.0 (2017-04-21)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Matrix products: default
locale:
[1] LC_COLLATE=Dutch_Netherlands.1252 LC_CTYPE=Dutch_Netherlands.1252 LC_MONETARY=Dutch_Netherlands.1252
[4] LC_NUMERIC=C LC_TIME=Dutch_Netherlands.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_3.4.0 tools_3.4.0
EDIT 4
I tried the bigalgebra package but this did not seem to speed things up:
library('bigalgebra')
library('bigmemory')   # provides as.big.matrix()
N <- 331
M <- 23152
mat_1 = matrix( rnorm(N*M, mean=0, sd=1), N, M)
mat_1 <- as.big.matrix(mat_1)
mat_2 = matrix( rnorm(M*M, mean=0, sd=1), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)
The output is:
   user  system elapsed 
 101.79    0.00  101.81 
EDIT 5
James suggested altering my randomly generated matrices:
N <- 331
M <- 23152
mat_1 = matrix( runif(N*M), N, M)
mat_2 = matrix( runif(M*M), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)
The output is:
   user  system elapsed 
 102.46    0.05  103.00 
A trivial operation?? Matrix multiplication is always an expensive operation in linear algebra computations.
Actually I think it is quite fast. A matrix multiplication at this size involves
2 * 23152 * 23152 * 331 FLOPs = 354.8 GFLOP.
With 100 seconds, your performance is about 3.5 GFLOPS. Note that on most machines the performance is at most 0.8 - 2 GFLOPS, unless you have an optimized BLAS library.
If you think the implementation elsewhere is faster, check for the use of an optimized BLAS or parallel computing. R is doing this with the reference BLAS and no parallelism.
Important
From R 3.4.0, more tools are available for working with BLAS.
First of all, sessionInfo() now returns the full path of the linked BLAS library. This does not point to the symbolic link, but to the final shared object! The other answer here shows exactly this: it has OpenBLAS.
The timing result in that answer implies that parallel computing (via multi-threading in OpenBLAS) is in place. It is hard for me to tell the number of threads used, but it looks like hyperthreading is on, as the "system" slot is quite big!
Second, options() can now select the matrix multiplication method via matprod. Although this was introduced to deal with NA / NaN, it offers a way to test performance, too.
"internal" is a non-optimized triple loop nest implemented in C; it has performance equal to the standard (reference) BLAS written in F77.
"default", "blas" and "default.simd" all use the linked BLAS for the computation, but differ in how NA and NaN are checked. If R is linked to the standard BLAS then, as said, they have the same performance as "internal"; otherwise we see a significant boost. Also note that the R team says "default.simd" might be removed in the future.
Based on the replies from knb and Zheyuan Li, I started investigating optimized BLAS packages. I came across GotoBLAS, OpenBLAS, and MKL, e.g. here.
My conclusion was that MKL should outperform the default BLAS by far.
It seems R has to be built from source in order to incorporate MKL. Instead, I found R Open, which has MKL (optionally) built in, so installation is a breeze.
With the following code:
N <- 331
M <- 23152
mat_1 = matrix( rnorm(N*M, mean=0, sd=1), N, M)
mat_2 = matrix( rnorm(M*M, mean=0, sd=1), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)
The output is:
   user  system elapsed 
  10.61    0.10    3.12 
As such, one solution to this problem is to use MKL instead of the default BLAS.
However, upon investigation my real-life matrices turned out to be highly sparse. I was able to take advantage of that fact by using the Matrix package, e.g. Matrix(mat_1, sparse = TRUE), where mat_1 is a highly sparse matrix. This brought the execution time down to around 3 seconds.
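A minimal sketch of that sparse route (the gain depends entirely on actual sparsity; the dense random matrices above would not benefit):
library(Matrix)
# convert to a sparse representation before multiplying
mat_1_sp <- Matrix(mat_1, sparse = TRUE)
mat_2_sp <- Matrix(mat_2, sparse = TRUE)
print(system.time(mat_3 <- mat_1_sp %*% mat_2_sp))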
I have a similar machine: Linux PC, 16 GB RAM, Intel 4770K.
Relevant output from sessionInfo()
R version 3.4.0 (2017-04-21)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.2 LTS
Matrix products: default
BLAS: /usr/lib/openblas-base/libblas.so.3
LAPACK: /usr/lib/libopenblasp-r0.2.18.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=de_DE.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=de_DE.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_PAPER=de_DE.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] knitr_1.15.1 clipr_0.3.2 tibble_1.3.0 colorout_1.1-2
loaded via a namespace (and not attached):
[1] compiler_3.4.0 tools_3.4.0 Rcpp_0.12.10
On my machine, your code snippet takes ~5 seconds (I started RStudio, created an empty .R file, and ran the snippet; output below):
   user  system elapsed 
 27.608   5.524   4.920 
Snippet:
N <- 331
M <- 23152
mat_1 = matrix( rnorm(N*M, mean=0, sd=1), N, M)
mat_2 = matrix( rnorm(M*M, mean=0, sd=1), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)

Quantstrat WFA with intraday Data

I've been getting WFA to run on the full set of intraday GBPUSD 30-min data, and have come across a couple of things that need addressing. The first is that I believe the save function needs changing to remove the time from the string (as shown here in a pull request on the R-Finance/quantstrat repo on GitHub). The walk.forward function throws this error:
Error in gzfile(file, "wb") : cannot open the connection
In addition: Warning message:
In gzfile(file, "wb") :
cannot open compressed file 'wfa.GBPUSD.2002-10-21 00:30:00.2002-10-23 23:30:00.RData', probable reason 'Invalid argument'
The second is a rare-case scenario where it ends up calling runSum on a data set with fewer rows than the period you are testing (n). This is the traceback():
8: stop("Invalid 'n'")
7: runSum(x, n)
6: runMean(x, n)
5: (function (x, n = 10, ...)
   {
       ma <- runMean(x, n)
       if (!is.null(dim(ma))) {
           colnames(ma) <- "SMA"
       }
       return(ma)
   })(x = Cl(mktdata)[, 1], n = 25)
4: do.call(indFun, .formals)
3: applyIndicators(strategy = strategy, mktdata = mktdata, parameters = parameters,
       ...)
2: applyStrategy(strategy, portfolios = portfolio.st, mktdata = symbol[testing.timespan]) at custom.walk.forward.R#122
1: walk.forward(strategy.st, paramset.label = "WFA", portfolio.st = portfolio.st,
       account.st = account.st, period = "days", k.training = 3,
       k.testing = 1, obj.func = my.obj.func, obj.args = list(x = quote(result$apply.paramset)),
       audit.prefix = "wfa", anchored = FALSE, verbose = TRUE)
The extended GBPUSD data used in the creation of the Luxor demo includes an erroneous date (2002/10/27) with only one observation, which causes this problem. I can also foresee this being an issue when testing longer signal periods on instruments like crude, which have only a few trading hours on Sunday evenings (UTC).
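As a stopgap, such days can be filtered out of the series before running the WFA; a sketch, assuming GBPUSD is the xts object of 30-minute bars (the threshold of 25 matches the SMA period in the traceback and is illustrative):
library(xts)
# count bars per day, then drop days with fewer bars than the indicator period
bars_per_day <- apply.daily(GBPUSD, nrow)
bad_days <- as.Date(index(bars_per_day)[coredata(bars_per_day) < 25])
GBPUSD <- GBPUSD[!(as.Date(index(GBPUSD)) %in% bad_days)]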
Given that I have purely been following the Luxor demo with the same (extended) intraday data set, are these genuine issues, or have they been caused by package updates etc.?
What is the preferred way to report these things to the authors of quantstrat, and to find out if/when fixes are likely to be made?
sessionInfo():
R version 3.3.0 (2016-05-03)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_Australia.1252 LC_CTYPE=English_Australia.1252 LC_MONETARY=English_Australia.1252 LC_NUMERIC=C LC_TIME=English_Australia.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] quantstrat_0.9.1739 foreach_1.4.3 blotter_0.9.1741 PerformanceAnalytics_1.4.4000 FinancialInstrument_1.2.0 quantmod_0.4-5 TTR_0.23-1
[8] xts_0.9.874 zoo_1.7-13
loaded via a namespace (and not attached):
[1] compiler_3.3.0 tools_3.3.0 codetools_0.2-14 grid_3.3.0 iterators_1.0.8 lattice_0.20-33
quantstrat is on GitHub here:
https://github.com/braverock/quantstrat
Issues and patches should be reported via GitHub issues.
