I tried to use TukeyHSD(my_anova$aov) but it gives an error:
Error in UseMethod("TukeyHSD") :
no applicable method for 'TukeyHSD' applied to an object of class "c('aovlist', 'listof')"
Google suggests there is no way to run post hoc tests on an 'aovlist'. But maybe you have an idea how to run post hoc tests on ezANOVA output.
Example:
require(ez)
data(ANT)
my_anova = ezANOVA(data = ANT[ANT$error==0,], dv = rt, wid = subnum, within = cue, return_aov = TRUE)
Trying multcomp:
require(multcomp)
glht(my_anova$aov, linfct = mcp(cue = "Tukey"))
Error in model.matrix.aovlist(model) :
‘glht’ does not support objects of class ‘aovlist’
Error in factor_contrasts(model) :
no ‘model.matrix’ method for ‘model’ found!
Trying lme:
require(nlme)
lme_velocity = lme(rt ~ cue, data=ANT[ANT$error==0,], random = ~1|subnum)
Error in .Call("La_chol", as.matrix(x), PACKAGE = "base") :
Incorrect number of arguments (1), expecting 2 for 'La_chol'
> sessionInfo()
R version 2.15.2 (2012-10-26)
Platform: i386-w64-mingw32/i386 (32-bit)
locale:
[1] LC_COLLATE=Russian_Russia.1251 LC_CTYPE=Russian_Russia.1251 LC_MONETARY=Russian_Russia.1251 LC_NUMERIC=C LC_TIME=Russian_Russia.1251
attached base packages:
[1] splines stats graphics grDevices utils datasets methods base
other attached packages:
[1] nlme_3.1-108 multcomp_1.2-15 survival_2.37-2 mvtnorm_0.9-9994 ez_4.1-1 stringr_0.6.2 scales_0.2.3 reshape2_1.2.2 plyr_1.8 memoise_0.1
[11] mgcv_1.7-22 lme4_0.999999-0 Matrix_1.0-10 lattice_0.20-13 ggplot2_0.9.3 car_2.0-15 nnet_7.3-5 MASS_7.3-23
loaded via a namespace (and not attached):
[1] colorspace_1.2-1 dichromat_2.0-0 digest_0.6.2 grid_2.15.0 gtable_0.1.2 labeling_0.1 munsell_0.4 proto_0.3-10 RColorBrewer_1.0-5
[10] stats4_2.15.0 tools_2.15.0
It's not that it's ezANOVA output but that it's a repeated measures ANOVA. The class 'aovlist' is typically for that. TukeyHSD is for independent designs. See this question and related links there.
You don't give any reproducible code, but my guess is that you need to use the package multcomp:
require(multcomp)
glht(my_anova$aov, linfct = mcp(cue= "Tukey"))
(this does not work with a repeated measures aov; see @John's answer for why)
===Update===
Your code works for me (R 2.15.2, nlme 3.1-105, multcomp 1.2-15):
> data(ANT)
> lme_velocity = lme(rt ~ cue, data=ANT[ANT$error==0,], random = ~1|subnum)
> glht(lme_velocity, linfct = mcp(cue= "Tukey"))
General Linear Hypotheses
Multiple Comparisons of Means: Tukey Contrasts
Linear Hypotheses:
Estimate
Center - None == 0 -41.872
Double - None == 0 -47.897
Spatial - None == 0 -86.040
Double - Center == 0 -6.026
Spatial - Center == 0 -44.169
Spatial - Double == 0 -38.143
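Printing the glht object shows only the estimates. A sketch of how to also get adjusted p-values and simultaneous confidence intervals (assuming the lme fit from the update above):

```r
# Store the glht object, then summarise it for adjusted p-values
posthoc <- glht(lme_velocity, linfct = mcp(cue = "Tukey"))
summary(posthoc)   # z tests with single-step multiplicity adjustment
confint(posthoc)   # simultaneous 95% confidence intervals
```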
There is strange behaviour when I use lmer: when I save a fit into an object, say fit0, I can look at the summary (output not shown):
>summary(fit0)
If I save the objects using save.image(), close the session and reopen it again, summary gives me:
>summary(fit0)
Error in diag(vcov(object, use.hessian = use.hessian)) :
error in evaluating the argument 'x' in selecting a method for function 'diag': Error in object@pp$unsc() : object 'merPredDunsc' not found
If I run the model again, I get the expected summary, but I lose it once I close the session.
What happens? How can I avoid this error?
Thanks for help.
Environment and version:
Windows 7
R version 3.1.2 (2014-10-31)
GNU Emacs 24.3.1 (i386-mingw-nt6.1.7601)/ESS
Here is a minimal example:
# j: cluster
# i[j]: i in cluster j
# yi[j] = zi[j] + N(0,1)
# zi[j] = b0j + b1*xi[j]
# b0j = g0 + u0j, u0j ~ N(0,sd0)
# b1 = const
library(lme4)
# Number of clusters (level 2)
N <- 20
# intercept
g0 <- 1
sd0 <- 2
# slope
b1 <- 3
# Number of observations (level 1) for cluster j
nj <- 10
# Vector of cluster indices: 1 repeated nj times, 2 repeated nj times, ..., N repeated nj times
j <- c(sapply(1:N, function(x) rep(x, nj)))
# Vector of random variable
uj <- c(sapply(1:N, function(x)rep(rnorm(1,0,sd0), nj)))
# Vector of fixed variable
x1 <- rep(runif(nj),N)
# linear combination
z <- g0 + uj + b1*x1
# add error
y <- z + rnorm(N*nj,0,1)
# Put all together
d0 <- data.frame(j, y=y, z=z,x1=x1, uj=uj)
head(d0)
# mixed model
fit0 <- lmer(y ~ x1 + (1|j), data = d0)
vcov(fit0)
summary(fit0)
save.image()
After restarting and loading lme4 again:
> sessionInfo()
R version 3.1.2 (2014-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
locale:
[1] LC_COLLATE=German_Switzerland.1252 LC_CTYPE=German_Switzerland.1252
[3] LC_MONETARY=German_Switzerland.1252 LC_NUMERIC=C
[5] LC_TIME=German_Switzerland.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] lme4_1.1-7 Rcpp_0.11.0 Matrix_1.1-2-2
loaded via a namespace (and not attached):
[1] compiler_3.1.2 grid_3.1.2 lattice_0.20-29 MASS_7.3-35
[5] minqa_1.2.3 nlme_3.1-118 nloptr_1.0.4 splines_3.1.2
[9] tools_3.1.2
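One possible workaround (a sketch, not a confirmed fix for this lme4 version): don't rely on save.image() to preserve the fitted merMod object; save the data instead and refit in the new session, since the fit itself is cheap here.

```r
# Sketch of a workaround: persist the data, not the fitted object,
# and refit after restarting (names follow the example above).
saveRDS(d0, "d0.rds")

## --- in the new session ---
library(lme4)
d0 <- readRDS("d0.rds")
fit0 <- lmer(y ~ x1 + (1|j), data = d0)  # refit
summary(fit0)                            # works again
```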
My code runs fine but fails in a package
I boiled it down to
wtf<-function(r)
{
require(raster)
stopifnot(class(r) == "RasterLayer")
return(as.matrix(r))
}
When sourced, everything works fine. When the function is part of a package, it fails. It nicely runs in debug mode though, step by step.
library(mypackage)
r <- raster(ncol=6, nrow=6)
r[] <- runif(ncell(r),0,1)
extent(r) <- matrix(c(0, 0, 6, 6), nrow=2)
wtf(r)
# Error in as.vector(data) :
# no method for coercing this S4 class to a vector
# Traceback
# 5 as.vector(data)
# 4 array(x, c(length(x), 1L), if (!is.null(names(x))) list(names(x),
# NULL) else NULL)
# 3 as.matrix.default(r)
# 2 as.matrix(r) at terrain.R#7
# 1 wtf(r)
I'm a bit puzzled as to why this happens and how to proceed.
The build went fine and the check came back clean, so what is going on?
What would be the next question to ask and explore in order to solve the problem?
R version 3.1.1 (2014-07-10)
Platform: x86_64-apple-darwin10.8.0 (64-bit)
locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] raster_2.3-0 spdep_0.5-77 Matrix_1.1-4 minerva_1.4.1 gdata_2.13.3 rgdal_0.9-1 sp_1.0-15
loaded via a namespace (and not attached):
[1] boot_1.3-11 coda_0.16-1 deldir_0.1-5 grid_3.1.1 gtools_3.4.1 lattice_0.20-29 LearnBayes_2.15 MASS_7.3-33 nlme_3.1-118 parallel_3.1.1 splines_3.1.1 tools_3.1.1
The traceback shows that the default as.matrix is used, rather than the raster variant. I believe this problem goes away if you add this line to your NAMESPACE file:
import(raster)
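(If the package's NAMESPACE is generated by roxygen2, which is an assumption since the question does not say, the equivalent would be an @import tag in the function's roxygen block rather than editing NAMESPACE by hand:)

```r
#' Convert a RasterLayer to a matrix
#' @import raster
wtf <- function(r) {
  stopifnot(inherits(r, "RasterLayer"))
  as.matrix(r)
}
```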
Or when you are explicit about which as.matrix you want:
wtf <- function(r) {
stopifnot(inherits(r, "RasterLayer"))
raster::as.matrix(r)
}
Rather than 'manually' testing for class membership, you might consider a more formal (S4) approach:
if (!isGeneric("wtf")) {
setGeneric("wtf", function(x, ...)
standardGeneric("wtf"))
}
setMethod("wtf", signature(x='RasterLayer'),
function(x, ...) {
raster::as.matrix(x)
}
)
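With the generic in place, dispatch happens on the raster class; a quick usage sketch (reusing the example object from the question):

```r
library(raster)
r <- raster(ncol = 6, nrow = 6)
r[] <- runif(ncell(r), 0, 1)
m <- wtf(r)   # dispatches to the RasterLayer method
dim(m)        # a 6 x 6 matrix
```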
I am applying a similar add.distribution rule as in the luxor demo, but my strategy has only a long position.
The whole strategy works, but when applying a paramset I get the following error:
TakeProfitLONG 47 0.047
TakeProfitLONG 47 0.047 result of evaluating expression:
simpleError in param.combo[[param.label]]: subscript out of bounds
got results for task 47 numValues: 47, numResults: 47, stopped: FALSE
returning status FALSE evaluation # 48: $param.combo
I am trying to run a distribution on a simple takeProfit rule (get same result from stopLoss or trailingStop):
.use.takeProfit = TRUE
.takeprofit <- 2.0/100 # actual
.TakeProfit = seq(0.1, 4.8, length.out=48)/100 # parameter set for optimization
## take-profit
add.rule(strategy.st, name = 'ruleSignal',
arguments=list(sigcol='signal.gt.zero' , sigval=TRUE,
replace=FALSE,
orderside='long',
ordertype='limit',
tmult=TRUE,
threshold=quote(.takeprofit),
TxnFees=.txnfees,
orderqty='all',
orderset='ocolong'
),
type='chain',
parent='EnterLONG',
label='TakeProfitLONG',
enabled=.use.takeProfit
)
I am adding the distribution as follows:
add.distribution(strategy.st,
paramset.label = 'TakeProfit',
component.type = 'chain',
component.label = 'TakeProfitLONG',
variable = list(threshold = .TakeProfit),
label = 'TakeProfitLONG'
)
and apply the set:
results <- apply.paramset(strategy.st, paramset.label='TakeProfit', portfolio.st=portfolio.st, account.st=account.st, nsamples=.nsamples, verbose=TRUE)
From my limited debugging it seems that the paramset is a simple vector, whereas inside apply.paramset the following call fails:
results <- fe %dopar% { ... }
I am too new to R to dig deeper (I have only been looking into this for four weeks), but possibly the call to:
install.param.combo <- function(strategy, param.combo, paramset.label)
causes the error?
Apologies again for being new to this, but has anyone encountered this, or could anyone help with how to apply a distribution to only one item in a long-only strategy?
Many thanks in advance!
EDIT 1: SessionInfo()
R version 3.1.2 (2014-10-31)
Platform: i486-pc-linux-gnu (32-bit)
locale:
[1] C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] lattice_0.20-29 iterators_1.0.7 downloader_0.3
[4] quantstrat_0.9.1665 foreach_1.4.2 blotter_0.9.1644
[7] PerformanceAnalytics_1.4.3574 FinancialInstrument_1.2.0 quantmod_0.4-3
[11] TTR_0.22-0.1 xts_0.9-7 zoo_1.7-12
loaded via a namespace (and not attached):
[1] codetools_0.2-9 compiler_3.1.2 digest_0.6.7 grid_3.1.2 tools_3.1.2
This is the same bug as # 5776. It was fixed for "signal" component types, but not for "chain". It should now be fixed as of revision 1669 on R-Forge.
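To pick up the fix, reinstall quantstrat from R-Forge at revision 1669 or later, e.g.:

```r
install.packages("quantstrat", repos = "http://R-Forge.R-project.org")
```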
I have been trying to apply recursive feature selection using the caret package. What I need is for rfe to use AUC as the performance measure. After googling for a month I cannot get the process working. Here is the code I have used:
library(caret)
library(doMC)
registerDoMC(cores = 4)
data(mdrr)
subsets <- c(1:10)
ctrl <- rfeControl(functions=caretFuncs,
method = "cv",
repeats =5, number = 10,
returnResamp="final", verbose = TRUE)
trainctrl <- trainControl(classProbs= TRUE)
caretFuncs$summary <- twoClassSummary
set.seed(326)
rf.profileROC.Radial <- rfe(mdrrDescr, mdrrClass, sizes=subsets,
rfeControl=ctrl,
method="svmRadial",
metric="ROC",
trControl=trainctrl)
When executing this script I get the following results:
Recursive feature selection
Outer resampling method: Cross-Validation (10 fold)
Resampling performance over subset size:
Variables Accuracy Kappa AccuracySD KappaSD Selected
1 0.7501 0.4796 0.04324 0.09491
2 0.7671 0.5168 0.05274 0.11037
3 0.7671 0.5167 0.04294 0.09043
4 0.7728 0.5289 0.04439 0.09290
5 0.8012 0.5856 0.04144 0.08798
6 0.8049 0.5926 0.02871 0.06133
7 0.8049 0.5925 0.03458 0.07450
8 0.8124 0.6090 0.03444 0.07361
9 0.8181 0.6204 0.03135 0.06758 *
10 0.8069 0.5971 0.04234 0.09166
342 0.8106 0.6042 0.04701 0.10326
The top 5 variables (out of 9):
nC, X3v, Sp, X2v, X1v
The process always uses Accuracy as the performance measure. Another problem arises when I try to get predictions from the model using:
predictions <- predict(rf.profileROC.Radial$fit,mdrrDescr)
I get the following message
In predictionFunction(method, modelFit, tempX, custom = models[[i]]$control$custom$prediction) :
kernlab class prediction calculations failed; returning NAs
so it turns out to be impossible to get any predictions from the model.
Here is the information obtained through sessionInfo()
R version 3.0.2 (2013-09-25)
Platform: x86_64-pc-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=es_ES.UTF-8 LC_NUMERIC=C LC_TIME=es_ES.UTF-8
[4] LC_COLLATE=es_ES.UTF-8 LC_MONETARY=es_ES.UTF-8 LC_MESSAGES=es_ES.UTF-8
[7] LC_PAPER=es_ES.UTF-8 LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=es_ES.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] grid parallel splines stats graphics grDevices utils datasets methods base
other attached packages:
[1] e1071_1.6-2 class_7.3-9 pROC_1.6.0.1 doMC_1.3.2 iterators_1.0.6 foreach_1.4.1
[7] caret_6.0-21 ggplot2_0.9.3.1 lattice_0.20-24 kernlab_0.9-19
loaded via a namespace (and not attached):
[1] car_2.0-19 codetools_0.2-8 colorspace_1.2-4 compiler_3.0.2 dichromat_2.0-0
[6] digest_0.6.4 gtable_0.1.2 labeling_0.2 MASS_7.3-29 munsell_0.4.2
[11] nnet_7.3-7 plyr_1.8 proto_0.3-10 RColorBrewer_1.0-5 Rcpp_0.10.6
[16] reshape2_1.2.2 scales_0.2.3 stringr_0.6.2 tools_3.0.2
One problem is a minor typo ('trControl=' instead of 'trainControl='). Also, you change caretFuncs after you attached it to rfe's control function. Lastly, you will need to tell trainControl to calculate the ROC curves.
This code works:
caretFuncs$summary <- twoClassSummary
ctrl <- rfeControl(functions=caretFuncs,
method = "cv",
repeats =5, number = 10,
returnResamp="final", verbose = TRUE)
trainctrl <- trainControl(classProbs= TRUE,
summaryFunction = twoClassSummary)
rf.profileROC.Radial <- rfe(mdrrDescr, mdrrClass,
sizes=subsets,
rfeControl=ctrl,
method="svmRadial",
## I also added this line to
## avoid a warning:
metric = "ROC",
trControl = trainctrl)
> rf.profileROC.Radial
Recursive feature selection
Outer resampling method: Cross-Validated (10 fold)
Resampling performance over subset size:
Variables ROC Sens Spec ROCSD SensSD SpecSD Selected
1 0.7805 0.8356 0.6304 0.08139 0.10347 0.10093
2 0.8340 0.8491 0.6609 0.06955 0.10564 0.09787
3 0.8412 0.8491 0.6565 0.07222 0.10564 0.09039
4 0.8465 0.8491 0.6609 0.06581 0.09584 0.10207
5 0.8502 0.8624 0.6652 0.05844 0.08536 0.09404
6 0.8684 0.8923 0.7043 0.06222 0.06893 0.09999
7 0.8642 0.8691 0.6913 0.05655 0.10837 0.06626
8 0.8697 0.8823 0.7043 0.05411 0.08276 0.07333
9 0.8792 0.8753 0.7348 0.05414 0.08933 0.07232 *
10 0.8622 0.8826 0.6696 0.07457 0.08810 0.16550
342 0.8650 0.8926 0.6870 0.07392 0.08140 0.17367
The top 5 variables (out of 9):
nC, X3v, Sp, X2v, X1v
For the prediction problems, you should use rf.profileROC.Radial instead of the fit component:
> predict(rf.profileROC.Radial, head(mdrrDescr))
pred Active Inactive
1 Inactive 0.4392768 0.5607232
2 Active 0.6553482 0.3446518
3 Active 0.6387261 0.3612739
4 Inactive 0.3060582 0.6939418
5 Active 0.6661557 0.3338443
6 Active 0.7513180 0.2486820
Max
I want to run an F-test on a plm model
model <- plm(y ~ a + b)
to test whether
# a = b
and whether
# a = 0 and b = 0
I tried linearHypothesis like this:
linearHypothesis(ur.model, c("a", "b"))  # to test a = 0 and b = 0
but got the error
Error in constants(lhs, cnames_symb) :
The hypothesis "sgp1" is not well formed: contains bad coefficient/variable names.
Calls: linearHypothesis ... makeHypothesis -> rbind -> Recall -> makeHypothesis -> constants
In addition: Warning message:
In constants(lhs, cnames_symb) : NAs introduced by coercion
Execution halted
The example above is slightly simplified, in case the problem turns out to be easy. In case the problem lies in the details, here is the actual code:
model3 <- formula(balance.agr ~ sgp1 + sgp2 + cp + eu + election + gdpchange.imf + ue.ameco)
ur.model<-plm(model3, data=panel.l.fullsample, index=c("country","year"), model="within", effect="twoways")
linearHypothesis(ur.model, c("sgp1", "sgp2"), vcov.=vcovHC(plmmodel1, method="arellano", type = "HC1", clustering="group"))
I can't reproduce your error with one of the inbuilt data sets, even after quite a bit of fiddling.
Does this work for you?
require(plm)
require(car)
data(Grunfeld)
form <- formula(inv ~ value + capital)
re <- plm(form, data = Grunfeld, model = "within", effect = "twoways")
linearHypothesis(re, c("value", "capital"),
vcov. = vcovHC(re, method="arellano", type = "HC1"))
Note also that you seem to have an error in the more complex code you showed: you are using linearHypothesis() on the object ur.model, yet call vcovHC() on the object plmmodel1. Not sure whether that is the problem, but check it just in case.
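For what it's worth, here is a sketch of the corrected call with a consistent object name, plus the equality restriction from your question (assuming your objects):

```r
# Joint test sgp1 = 0 and sgp2 = 0, using ur.model throughout:
linearHypothesis(ur.model, c("sgp1", "sgp2"),
                 vcov. = vcovHC(ur.model, method = "arellano", type = "HC1"))

# The equality test sgp1 = sgp2 can be expressed directly:
linearHypothesis(ur.model, "sgp1 = sgp2",
                 vcov. = vcovHC(ur.model, method = "arellano", type = "HC1"))
```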
Is it possible to provide the data? Finally, edit your Question to include output from sessionInfo(). Mine is (from quite a busy R instance):
> sessionInfo()
R version 2.11.1 Patched (2010-08-25 r52803)
Platform: x86_64-unknown-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_GB.utf8 LC_NUMERIC=C
[3] LC_TIME=en_GB.utf8 LC_COLLATE=en_GB.utf8
[5] LC_MONETARY=C LC_MESSAGES=en_GB.utf8
[7] LC_PAPER=en_GB.utf8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_GB.utf8 LC_IDENTIFICATION=C
attached base packages:
[1] splines grid stats graphics grDevices utils datasets
[8] methods base
other attached packages:
[1] car_2.0-2 nnet_7.3-1 plm_1.2-6 Formula_1.0-0
[5] kinship_1.1.0-23 lattice_0.19-11 nlme_3.1-96 survival_2.35-8
[9] mgcv_1.6-2 chron_2.3-37 MASS_7.3-7 vegan_1.17-4
[13] lmtest_0.9-27 sandwich_2.2-6 zoo_1.6-4 moments_0.11
[17] ggplot2_0.8.8 proto_0.3-8 reshape_0.8.3 plyr_1.2.1
loaded via a namespace (and not attached):
[1] Matrix_0.999375-44 tools_2.11.1
Could it be because you are "mixing" models? You have a variance specification that starts out:
vcov. = vcovHC(plmmodel1, ...
...and yet you are working with ur.model.