Planned contrasts on glmmTMB in R

Apologies if this is a repeat question. Many have posted looking for a way to do post-hoc analyses on the conditional model (fixed factors) in glmmTMB. I want to do planned contrasts between certain groups, not test every pairwise comparison (e.g. Tukey).
The code below worked well with nlme::lme for an LMM, but with glmmTMB it returns the error below.
Error in modelparm.default(model, ...) :
dimensions of coefficients and covariance matrix don't match
Is there a way to do planned contrasts on a glmmTMB?
# filtdens is a data frame; TRT, DATE, BURN and VEG are factors
library(dplyr)
library(glmmTMB)
library(multcomp)
filtdens <- merged %>%
  filter(!BLOCK %in% c("JB2", "JB4", "JB5") & MEAS == "DENS" &
           group == "TOT" & BURN == "N" & VEG == "C")
filtdens$TD <- interaction(filtdens$TRT, filtdens$DATE)
mod2 <- glmmTMB(count ~ TD + (1|BLOCK),
                data = filtdens,
                zi = ~1,
                family = nbinom1(link = "log"))
k1 <- matrix(c(0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, -1, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, -1, 0, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, -1, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, -1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, -1, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, -1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 1), byrow = T, ncol = 12)
summary(glht(mod2, linfct=k1),test=adjusted("bonferroni"))

A reproducible example would be helpful, but: this vignette in the development version offers code that ought to enable multcomp::glht, i.e.:
glht_glmmTMB <- function(model, ..., component = "cond") {
  glht(model, ...,
       coef. = function(x) fixef(x)[[component]],
       vcov. = function(x) vcov(x)[[component]],
       df = NULL)
}
modelparm.glmmTMB <- function(model,
                              coef. = function(x) fixef(x)[[component]],
                              vcov. = function(x) vcov(x)[[component]],
                              df = NULL, component = "cond", ...) {
  multcomp:::modelparm.default(model, coef. = coef., vcov. = vcov.,
                               df = df, ...)
}
Test (this example uses Tukey contrasts, but I don't see why it shouldn't work more generally ...):
library(glmmTMB)
library(multcomp)
data("cbpp", package = "lme4")
cbpp_b1 <- glmmTMB(incidence/size ~ period + (1|herd),
                   weights = size, family = binomial,
                   data = cbpp)
g1 <- glht(cbpp_b1, linfct = mcp(period = "Tukey"))
summary(g1)
This works with the current CRAN version, but the development version of glmmTMB offers more options (e.g. emmeans(); see the vignette linked above). You'll need to install it via devtools::install_github("glmmTMB/glmmTMB/glmmTMB") (and have compilation tools installed).
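The same machinery should also accept a numeric contrast matrix for planned (a priori) comparisons, which is what the question asks for. A minimal sketch, assuming the helper functions above are defined and multcomp is loaded; the two contrasts and their labels below are made up for the cbpp example and are not the k1 matrix from the question:
# columns correspond to the conditional fixed effects:
# (Intercept), period2, period3, period4
K <- rbind("period2 vs period1" = c(0, 1,  0, 0),
           "period4 vs period3" = c(0, 0, -1, 1))
summary(glht(cbpp_b1, linfct = K), test = adjusted("bonferroni"))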

Related

How can I calculate jaccard vertex similarity with weights in igraph

I have a square matrix that represents directed interactions, with values representing the magnitude of the "flow" from row i to column j.
mat <- structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.59734154600838,
0.962276996464401, 0.996554553573577, 0.988150008522967, 0.581536975261071,
0.280105566896129, 0.0520717823071291, 0.0443864046117343, 0.0162858335588474,
0, 0, 0, 0, 0, 0, 0, 0.111900863185923, 0.289483837277475, 0.338036619790556,
0.973201117894343, 0.876145758734938, 0.280105566896129, 0.245172586054694,
0.101440228047504, 0.0136022221272776, 0, 0, 0, 0, 0, 0, 0.073088274682518,
0.21588462733217, 0.258134862678946, 0.93528472971792, 0.921844796228768,
0.318790697187933, 0.280105566896129, 0.117928032625428, 0.016073037487081,
0, 0, 0, 0, 0, 0, 0, 0.0119602547215087, 0.0174757225504163,
0.443466799224191, 0.941024455005652, 0.632609306727839, 0.57418820480725,
0.280105566896129, 0.043827579210664, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.0547471528159807, 0.884304818335752, 0.937495721370637,
0.925118019265575, 0.280105566896129, 0.055967839940851, 0.0122649398400715,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0679263578760456, 0.104884821422108,
0.569814755335506, 0.853130344409379, 0.280105566896129, 0.0728699300735904,
0.0339371561178606, 0.012188886551821, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.0219303360220489, 0.843994038605239, 0.759918325154657,
0.280105566896129, 0.143508732965731, 0.0556400089034765, 0.0296286033644999,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.421151438381493, 0.977746695038157,
0.499880491267235, 0.280105566896129, 0.116686808742586, 0.0639605586005988,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0495967410949283, 0.841406989124245,
0.85505217514437, 0.578265483357174, 0.280105566896129, 0.163154497800251,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.499941945587477, 0.993657104473566,
0.807475685951474, 0.45318772928331, 0.280105566896129), .Dim = c(15L,
15L))
I am interested in calculating the weighted linkage similarity (both in- and out-flows) of all vertices in the network, i.e. taking the magnitudes into account.
Using igraph, I can calculate the Jaccard similarity, but without considering weights:
library(igraph)
bin <- mat
bin[bin > 0] <- 1
similarity(graph_from_adjacency_matrix(bin),
           mode = "all",
           method = "jaccard")
# this gives the same result as the one above
similarity(graph_from_adjacency_matrix(mat, weighted = TRUE),
           mode = "all",
           method = "jaccard")
Using the code from this blogpost, I was able to calculate the Jaccard similarity of outflows and inflows and combine them.
# outflow similarity
sim.jac.out <- matrix(0, nrow = nrow(mat), ncol = nrow(mat))
pairs <- t(combn(1:nrow(mat), 2))
for (i in 1:nrow(pairs)) {
  num <- sum(sapply(1:ncol(mat), function(x) min(mat[pairs[i,1], x], mat[pairs[i,2], x])))
  den <- sum(sapply(1:ncol(mat), function(x) max(mat[pairs[i,1], x], mat[pairs[i,2], x])))
  sim.jac.out[pairs[i,1], pairs[i,2]] <- num/den
  sim.jac.out[pairs[i,2], pairs[i,1]] <- num/den
}
sim.jac.out[which(is.na(sim.jac.out))] <- 0
diag(sim.jac.out) <- 1
# inflow similarity (same computation on the transposed matrix)
tmat <- t(mat)
sim.jac.in <- matrix(0, nrow = nrow(tmat), ncol = nrow(tmat))
pairs <- t(combn(1:nrow(tmat), 2))
for (i in 1:nrow(pairs)) {
  num <- sum(sapply(1:ncol(tmat), function(x) min(tmat[pairs[i,1], x], tmat[pairs[i,2], x])))
  den <- sum(sapply(1:ncol(tmat), function(x) max(tmat[pairs[i,1], x], tmat[pairs[i,2], x])))
  sim.jac.in[pairs[i,1], pairs[i,2]] <- num/den
  sim.jac.in[pairs[i,2], pairs[i,1]] <- num/den
}
sim.jac.in[which(is.na(sim.jac.in))] <- 0
diag(sim.jac.in) <- 1
# total similarity
sim.jac.all <- (sim.jac.in + sim.jac.out)/2
So the general question is: does this make sense?
More specifically, I would like to know whether there is a way to incorporate link weights into the similarity calculation with igraph.
In my real dataset I need to do this iteratively (swapping individuals) for a large number of networks, so my R-level loop would take forever; I believe igraph does these computations in compiled code under the hood.
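For what it's worth, here is a more compact restatement of the same min/max ("Ruzicka") computation as a sketch: it is not an igraph built-in, but pmin()/pmax() over whole rows avoid the per-column sapply() calls and should be noticeably faster on larger networks.
weighted_jaccard <- function(m) {
  n <- nrow(m)
  sim <- matrix(1, n, n)  # diagonal stays 1, as in the loops above
  for (i in seq_len(n - 1)) {
    for (j in (i + 1):n) {
      den <- sum(pmax(m[i, ], m[j, ]))
      s <- if (den > 0) sum(pmin(m[i, ], m[j, ])) / den else 0
      sim[i, j] <- sim[j, i] <- s
    }
  }
  sim
}
sim.jac.out <- weighted_jaccard(mat)     # out-flow similarity
sim.jac.in  <- weighted_jaccard(t(mat))  # in-flow similarity
sim.jac.all <- (sim.jac.in + sim.jac.out) / 2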

ICC unused argument

I need to calculate the 95% CI of the ICC.
I'm using this code:
icc(mydata[, c(1, 2)], model = "twoway", type = "agreement", unit = "average")
I obtain this error message:
Error in icc(mydata[, c(1, 1)], model = "twoway", type = "agreement",  :
  unused arguments (model = "twoway", type = "agreement", unit = "average")
My data:
mydata= data.frame(A=c(0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 3), B=c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 2))
How can I fix it?
Thank you!
You might have competing packages loaded that both define icc(). The psych package and the irr package both use this function name; if both are attached, the psych version is likely masking the irr one. Those arguments are not valid for psych::icc(), but they are for irr::icc().
Try:
irr::icc(mydata[, c(1, 2)], model = "twoway", type = "agreement", unit = "average")
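A quick way to check which version of icc() a bare call actually resolves to (both helpers below are base/utils functions):
find("icc")        # lists every attached package that defines icc()
environment(icc)   # shows the namespace of the function that masking picks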

Why am I getting this error when using glmnet in R

When trying to predict, I am getting this error:
error in evaluating the argument 'x' in selecting a method for function 'as.matrix': Cholmod error 'X and/or Y have wrong dimensions' at file ../MatrixOps/cholmod_sdmult.c, line 90
Here is my code so far
library(data.table)
library(caret)
library(Metrics)
library(glmnet)
library(plotmo)
library(lubridate)
#Reading in the necessary data needed for this project
train <- fread("project/volume/data/processed/start_train.csv")
test<-fread("project/volume/data/processed/start_test.csv")
example_sub<-fread("project/volume/data/processed/example_submission.csv")
card_tab <- fread("project/volume/data/processed/card_tab.csv")
#Merging the card_tab dataset with both my train and test datasets to add more variables to each
train = merge(train, card_tab, by = "id")
test = merge(test, card_tab, by = "id")
train$power = as.numeric(train$power)
train$toughness = as.numeric(train$toughness)
test$power = as.numeric(test$power)
test$toughness = as.numeric(test$toughness)
train$powerovertough = train$power/train$toughness
test$powerovertough = test$power/test$toughness
train$current_date<-as_date(train$current_date)
train<-train[order(-current_date)]
test$current_date<-as_date(test$current_date)
test<-test[order(-current_date)]
#Handling NA values in both train and test. The NA values will be set to 0
train[is.na(train)] <- 0
test[is.na(test)] <- 0
# Specifying which columns of our model we will be dropping and applying it to our train and test datasets
drops<- c('id','future_date','current_date','card_name','power','loyalty','cmc','type','colors','mana_cost','subtypes', 'text','set','set_name')
train<-train[, !drops, with = FALSE]
test<-test[, !drops, with = FALSE]
#Saving the response variable in the train dataset
train_y<-train$future_price
test$future_price<-0
#Using dummies to train model
dummies <- dummyVars(future_price ~ ., data = train)
train<-predict(dummies, newdata = train)
test<-predict(dummies, newdata = test)
train<-data.table(train)
test<-data.table(test)
#Cross validating the model to find the best lambda value
train<-as.matrix(train)
test<-as.matrix(test)
gl_model<-cv.glmnet(train, train_y, alpha = 1,family="gaussian")
bestlam<-gl_model$lambda.min
# Fitting the full model
gl_model<-glmnet(train, train_y, alpha = 1,family="gaussian")
plot_glmnet(gl_model)
saveRDS(gl_model,"./project/volume/models/gl_model.model")
test<-as.matrix(test)
#use the full model
pred<-predict(gl_model,s=bestlam, newx = test)
I am trying to predict future_price for my test set. The error says my dimensions are wrong, but I don't know what is causing them to differ. I have inspected the datasets as they run through the code, and they seem to have the same variables.
Here is the dput of each:
> dput(head(train))
structure(c(0.25, 0.1, 0.1, 0.1, 0.25, 0.25, 1, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 3, 0, 2, 0, 0, 0, 0.333333333333333,
0, 1, 0), .Dim = c(6L, 20L), .Dimnames = list(NULL, c("current_price",
"typesArtifact", "typesArtifact Creature", "typesCreature", "typesEnchantment",
"typesEnchantment Artifact", "typesEnchantment Creature", "typesInstant",
"typesLand", "typesPlaneswalker", "typesSorcery", "supertypes",
"supertypesBasic", "supertypesLegendary", "rarityCommon", "rarityMythic",
"rarityRare", "rarityUncommon", "toughness", "powerovertough"
)))
> dput(head(test))
structure(c(0.15, 0.16, 2, 0.39, 0.16, 0.19, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0), .Dim = c(6L, 18L), .Dimnames = list(
NULL, c("current_price", "typesArtifact", "typesArtifact Creature",
"typesCreature", "typesEnchantment", "typesInstant", "typesLand",
"typesPlaneswalker", "typesSorcery", "supertypes", "supertypesBasic",
"supertypesLegendary", "rarityCommon", "rarityMythic", "rarityRare",
"rarityUncommon", "toughness", "powerovertough")))

What does this error mean "order(vertex_attr(g, measure), decreasing = TRUE) : argument 1 is not a vector" in R?

I am trying to calculate robustness, a graph-theory measure, using R (the brainGraph package).
Robustness = robustness(my_networkgraph, type = c("vertex"), measure = ("btwn.cent"))
I get the following error, when I use the above robustness function:
Error in order(vertex_attr(g, measure), decreasing = TRUE) : argument 1 is not a vector
Any idea what I am doing wrong here?
My network, which is a matrix, was converted to an igraph object and robustness was then calculated.
My network as a matrix:
mynetwork <- matrix(c(0, 1, 0, 1, 0, 0, 0, 0,
1, 0, 1, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 1, 1, 0, 1, 1,
0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0), nrow = 8)
The matrix was converted to an igraph object using the following code:
my_networkgraph <- graph_from_adjacency_matrix(mynetwork, mode = "undirected",
                                               weighted = NULL, diag = TRUE,
                                               add.colnames = NULL, add.rownames = NA)
Please help me understand the above error.
Thanks,
Priya
There was a bug in the above function. To run the robustness code, you will need to supply a vertex attribute for the measure to your network first:
V(network)$degree <- degree(network)
V(network)$btwn.cent <- centr_betw(network)$res
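Applied to the graph in the question, that would look something like this (a sketch; it assumes igraph and brainGraph are loaded and that btwn.cent is the measure you want):
V(my_networkgraph)$btwn.cent <- centr_betw(my_networkgraph)$res
Robustness <- robustness(my_networkgraph, type = "vertex", measure = "btwn.cent")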

Using car::Anova package for a doubly-multivariate MANOVA in R

I'm trying to run a repeated-measures MANOVA in R that also includes a number of dependent variables (key outcome measures from behavioural tasks). The repeated measures come from a cross-over design in which individuals received a drug and a placebo (in randomised order).
The code I'm running looks like this:
imatrix <- matrix(c(
1, 0, 0, 0, 0, 0, 1,
1, 0, 0, 0, 0, 0, -1,
0, 1, 0, 0, 0, 0, 1,
0, 1, 0, 0, 0, 0, -1,
0, 0, 1, 0, 0, 0, 1,
0, 0, 1, 0, 0, 0, -1,
0, 0, 0, 1, 0, 0, 1,
0, 0, 0, 1, 0, 0, -1,
0, 0, 0, 0, 1, 0, 1,
0, 0, 0, 0, 1, 0, -1,
0, 0, 0, 0, 0, 1, 1,
0, 0, 0, 0, 0, 1, -1
), 12, 7, byrow=TRUE)
colnames(imatrix) <- c("BCST", "CGT", "AST", "AGN", "DDT", "FERT", "NAC")
(imatrix <- list(measure=imatrix[,1:6], condition=imatrix[,7]))
contrasts(condition_factor) <- matrix(c(-1,1,1, -1), ncol=2)
doubly.mod <- lm(cbind(bcst_nac$totPersErr, bcst_placebo$totPersErr,
                       cantab_nac$CGT.Delay.aversion, cantab_placebo$CGT.Delay.aversion,
                       cantab_nac$AST.Switching.cost..Mean..correct.,
                       cantab_placebo$AST.Switching.cost..Mean..correct.,
                       cantab_nac$AGN.Affective.response.bias..Mean.,
                       cantab_placebo$AGN.Affective.response.bias..Mean.,
                       aucs_NAC, aucs_placebo,
                       fert_nac$FERTACCHA, fert_placebo$FERTACCHA) ~ 1)
Manova(doubly.mod, imatrix = imatrix, type = 3)
The result is this error:
Error in Anova.III.mlm(mod, SSPE, error.df, idata, idesign, icontrasts,  :
  (list) object cannot be coerced to type 'double'
However, when I change imatrix back from a list to a matrix, I get this error instead:
Error in do.call(cbind, imatrix) : second argument must be a list
I've based this on the doubly-multivariate example in the car::Anova documentation. Please let me know if you can help, or if I can add anything to make this question clearer.
