Comparing segmented models in R

I have the hypothesis that the automatically found model below is not significantly better than one where the middle segment is horizontal. How could I test that?
df<-structure(list(ageThen=c(9,10,11,12,13,14,15,16,17,
18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,
34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,
50,51,52,53,54,55,56,57,58,59,60,61,62),mTh=c(-0.057,
-0.253,-0.345,-0.185,-0.155,-0.013,0.285,0.16,0.197,0.199,
0.215,0.288,0.401,0.363,0.387,0.37,0.387,0.28,0.571,
0.383,0.297,0.366,0.36,0.25,0.269,0.235,0.273,0.336,
0.354,0.286,0.331,0.21,0.32,0.278,0.195,0.257,0.259,
0.251,0.222,0.206,0.214,-0.072,-0.123,-0.043,-0.003,0.116,
-0.193,-0.218,-0.278,-0.265,-0.218,-0.541,-0.76,-0.401
),n=c(64L,524L,20595L,2504L,795L,704L,1700L,1239L,
1273L,1149L,1011L,1122L,1031L,814L,717L,667L,462L,414L,
405L,313L,256L,305L,187L,255L,240L,221L,262L,227L,230L,
239L,199L,290L,201L,246L,217L,215L,273L,229L,213L,193L,
199L,204L,159L,207L,148L,121L,115L,89L,87L,78L,68L,
85L,55L,80L)),class=c("tbl_df","tbl","data.frame"),row.names=c(NA,
-54L))
library(segmented)
m1 <- lm(mTh ~ ageThen, data = df, weights = n) ## initial fit
s2 <- segmented(m1, psi = c(20, 50)) ## two breakpoints, estimated starting values
plot(mTh~ageThen,data=df)
lines(df$ageThen,predict(s2),col=2,lwd=2)

This workflow seems to solve my problem. The literature I have seen suggests that there is a significant increase in the predictive value of a model when the ratio elpd_diff/se_diff (in absolute terms) is larger than 2, or by other accounts 5. In my case it isn't, so the unconstrained model is not a much better predictor than the constrained (theoretical) one. A citable source to corroborate this would be much appreciated.
https://lindeloev.github.io/mcp/articles/comparison.html#what-is-loo-cv.
https://discourse.mc-stan.org/t/interpreting-elpd-diff-loo-package/1628
library(mcp)
## df exactly as defined above
modelC = list(
  mTh | weights(n) ~ 1 + ageThen, # slope
  ~ 0,                            # joined plateau (horizontal middle segment)
  ~ 0 + ageThen                   # joined slope
)
fitC<-mcp(modelC, data = df, prior = list(cp_1 = "dunif(12, 25)", cp_2 = "dunif(35, 60)"))
plot(fitC)
summary(fitC)
modelUc = list(
  mTh | weights(n) ~ 1 + ageThen, # slope
  ~ 0 + ageThen,                  # joined slope
  ~ 0 + ageThen                   # joined slope
)
fitUc<-mcp(modelUc, data = df, prior = list(cp_1 = "dunif(12, 25)", cp_2 = "dunif(35, 60)"))
plot(fitUc)
summary(fitUc)
fitUc$loo = loo(fitUc)
fitC$loo = loo(fitC)
library(tidyverse)
as.data.frame(loo::loo_compare(fitUc$loo,fitC$loo))%>%
mutate(diffRatio=elpd_diff/se_diff, .keep='used')
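As a complementary check on the original segmented fit, the segmented package provides slope(), which returns per-segment slope estimates with confidence intervals; if the middle segment's interval covers zero, that is at least consistent with a horizontal plateau. A minimal sketch using the fit from the question:
# per-segment slopes of s2 with 95% confidence intervals;
# check whether the middle segment's interval includes 0
slope(s2)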

Related

R tree doesn't use all variables (why?)

Hi, I'm working on a decision tree:
tree1 = tree(League.binary ~ TME.factor + APM.factor + Wmd.factor, starcraft)
The tree shows a partitioning based solely on APM.factor, and the leaves aren't pure (screenshot not reproduced here).
I tried creating a tree with a subset of 300 of the 3395 observations and it used more than one variable. What went wrong in the first case? Did it not need the extra two variables, so it used only one?
Try playing with the tree.control() parameters, for example setting minsize = 2 so that you end up with a single observation in each leaf (overfitting), e.g.:
model = tree(y ~ X1 + X2, data = data, control = tree.control(nobs=n, minsize = 2, mindev=0))
Also, try the same thing with the rpart package, which is the "new" version of tree, and see what results you get. You can also plot the importance of the variables. Here is a syntax example:
install.packages("rpart")
install.packages("rpart.plot")
library(rpart)
library(rpart.plot)
## fit tree
### alt1: class
model = rpart(y ~ X1 + X2, data=data, method = "class")
### alt2: reg
model = rpart(y ~ X1 + X2, data=data, control = rpart.control(maxdepth = 30, minsplit = 1, minbucket = 1, cp=0))
## show model
print(model)
rpart.plot(model, cex=0.5)
## importance
model$variable.importance
Note that since trees do binary splits, it is possible for a single variable to explain most or all of the SSR (for regression). Try plotting the response against each regressor to see if there is any significant relation to anything but the variable you're getting.
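For instance, something along these lines (a sketch, assuming the starcraft data frame and the factor names from the question, with League.binary stored as a factor) draws one panel per regressor:
# one spineplot of the response per regressor
par(mfrow = c(1, 3))
for (v in c("TME.factor", "APM.factor", "Wmd.factor")) {
  plot(starcraft[[v]], starcraft$League.binary, xlab = v, ylab = "League.binary")
}
par(mfrow = c(1, 1))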
In case you want to run the examples above, here is a data simulation (put it at the beginning of the code):
n = 12000
X1 = runif(n, -100, 100)
X2 = runif(n, -100, 100)
## 1. SQUARE DATA
# y = ifelse( (X1< -50) | (X1>50) | (X2< -50) | (X2>50), 1, 0)
## 2. CIRCLE DATA
y = ifelse(sqrt(X1^2+X2^2)<=50, 0, 1)
## 3. LINEAR BOUNDARY DATA
# y = ifelse(X2<=-X1, 0, 1)
# Create
color = ifelse(y==0,"red","green")
data = data.frame(y,X1,X2,color)
# Plot
data$color = as.character(data$color) # ensure plain character, not factor
plot(data$X2 ~ data$X1, col = data$color, type='p', pch=15)
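With the circle data, no single variable can separate the classes on its own, so a deep tree has to split on both regressors; a quick check, reusing the control settings from above:
library(tree)
# fit the overfitted tree on the simulated data and list the variables used
model_sim = tree(factor(y) ~ X1 + X2, data = data,
                 control = tree.control(nobs = n, minsize = 2, mindev = 0))
summary(model_sim) # reports the variables actually used in tree construction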

Factors in nlme: singularity in backsolve error

I am parameterizing exponential fits for some metabolic scaling models. I have done this in lmer already, without problem, using logged dependent and independent variables. However, I would now like to incorporate other parameters that aren't necessarily exponentially related to the dependent variable. Hence, I've turned to nlme (lme4::nlmer doesn't seem to handle fixed effects), but I don't have much experience with it. Apologies in advance for newbie mistakes.
With the code below, I am getting the following error. I'm guessing that it has something to do with the 'site' factor being misspecified:
Error in nlme.formula(dep ~ scaling_fun(alpha, beta, ind, site), data = scale_df, :
Singularity in backsolve at level 0, block 1
When I fit a simpler function that does not involve 'site', the model seems to work correctly.
Any thoughts would be greatly appreciated!
Thanks,
Allie
# dput for data
# copy from http://pastebin.com/WNHhi2kZ (too large to include here)
> head(scale_df)
dep ind spp site
2 0.28069471 -0.0322841 157 A
3 -0.69719050 -1.2568901 183 A
4 0.29252012 0.1592420 246 A
5 0.72030740 -0.3282789 154 A
6 -0.08601891 0.3623756 110 A
7 0.30793594 0.2230840 154 A
scaling_fun <- function(alpha, beta, ind, site) {
return(beta + ind^alpha + site*(ind^alpha))
}
# results in singularity in backsolve error
nlme(dep ~ scaling_fun(alpha, beta, ind, site),
data = scale_df,
fixed = list(alpha + beta + site ~ 1), random = alpha ~ 1|spp,
start = list(fixed = c(0.7, 0, 1)))
##############################
# simpler function converges #
##############################
scaling_fun <- function(alpha, beta, ind) {
return(beta + ind^alpha)
}
nlme(dep ~ scaling_fun(alpha, beta, ind),
data = scale_df,
fixed = list(alpha + beta ~ 1), random = alpha ~ 1|spp,
start = list(fixed = c(0.7, 0)))
Your model does not really make sense since site is a factor variable (and not a parameter). I suspect you actually want to stratify alpha by site:
library(nlme)
scaling_fun <- function(alpha, beta, ind) {
return(beta + ind^alpha)
}
nlme(dep ~ scaling_fun(alpha, beta, ind),
data = scale_df,
fixed = list(alpha ~ site, beta ~ 1), random = alpha ~ 1|spp,
start = list(fixed = c(0.487, rep(0, 19), -0.3)))
#Nonlinear mixed-effects model fit by maximum likelihood
# Model: dep ~ scaling_fun(alpha, beta, ind)
# Data: scale_df
# Log-likelihood: -716.4634
# Fixed: list(alpha ~ site, beta ~ 1)
#alpha.(Intercept) alpha.siteB alpha.siteC alpha.siteD alpha.siteE
# 0.57671912 -0.61258632 -0.59244337 -0.25793558 -0.24572998
# alpha.siteF alpha.siteG alpha.siteH alpha.siteI alpha.siteJ
# -0.23615274 -0.31015393 0.17970575 0.01286117 -0.12539377
# alpha.siteK alpha.siteL alpha.siteM alpha.siteN alpha.siteO
# 3.72445972 -0.08560994 0.13636185 0.31877456 -0.25952204
# alpha.siteQ alpha.siteR alpha.siteS alpha.siteT alpha.siteU
# 0.15663989 0.66511079 0.10785082 -0.21547379 -0.23656126
# beta
# -0.30280707
#
#Random effects:
# Formula: alpha ~ 1 | spp
# alpha.(Intercept) Residual
#StdDev: 0.6426563 0.4345844
#
#Number of Observations: 1031
#Number of Groups: 279
However, I also suspect that site should be a random effect.
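If so, a minimal sketch of that variant (assuming species can be treated as nested within site, which may or may not match the design):
# alpha varies randomly across sites and across species within site
nlme(dep ~ scaling_fun(alpha, beta, ind),
     data = scale_df,
     fixed = alpha + beta ~ 1,
     random = alpha ~ 1 | site/spp,
     start = list(fixed = c(0.7, 0)))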

Representing Parametric Survival Model in 'Counting Process' form in JAGS

I'm trying to build a survival model in JAGS that allows for time-varying covariates. I'd like it to be a parametric model — for example, assuming survival follows the Weibull distribution (but I'd like to allow the hazard to vary, so exponential is too simple). So, this is essentially a Bayesian version of what can be done in the flexsurv package, which allows for time-varying covariates in parametric models.
Therefore, I want to be able to enter the data in a 'counting-process' form, where each subject has multiple rows, each corresponding to a time interval in which their covariates remained constant (as described in this pdf or here). This is the (start, stop] formulation that the survival or flexsurv packages allow.
Unfortunately, every explanation of how to perform survival analysis in JAGS seems to assume one row per subject.
I attempted to take this simpler approach and extend it to the counting process format, but the model does not correctly estimate the distribution.
A Failed Attempt:
Here's an example. First we generate some data:
library('dplyr')
library('survival')
## Make the Data: -----
set.seed(3)
n_sub <- 1000
current_date <- 365*2
true_shape <- 2
true_scale <- 365
dat <- data_frame(person = 1:n_sub,
true_duration = rweibull(n = n_sub, shape = true_shape, scale = true_scale),
person_start_time = runif(n_sub, min= 0, max= true_scale*2),
person_censored = (person_start_time + true_duration) > current_date,
person_duration = ifelse(person_censored, current_date - person_start_time, true_duration)
)
person person_start_time person_censored person_duration
(int) (dbl) (lgl) (dbl)
1 1 11.81416 FALSE 487.4553
2 2 114.20900 FALSE 168.7674
3 3 75.34220 FALSE 356.6298
4 4 339.98225 FALSE 385.5119
5 5 389.23357 FALSE 259.9791
6 6 253.71067 FALSE 259.0032
7 7 419.52305 TRUE 310.4770
Then we split the data into 2 observations per subject. I'm just splitting each subject at time = 300 (unless they didn't make it to time = 300, in which case they get just one observation).
## Split into multiple observations per person: --------
cens_point <- 300 # <----- try changing to 0 for no split; if so, model correctly estimates
dat_split <- dat %>%
group_by(person) %>%
do(data_frame(
split = ifelse(.$person_duration > cens_point, cens_point, .$person_duration),
START = c(0, split[1]),
END = c(split[1], .$person_duration),
TINTERVAL = c(split[1], .$person_duration - split[1]),
CENS = c(ifelse(.$person_duration > cens_point, 1, .$person_censored), .$person_censored), # <— edited original post here due to bug; but problem still present when fixing bug
TINTERVAL_CENS = ifelse(CENS, NA, TINTERVAL),
END_CENS = ifelse(CENS, NA, END)
)) %>%
filter(TINTERVAL != 0)
person split START END TINTERVAL CENS TINTERVAL_CENS
(int) (dbl) (dbl) (dbl) (dbl) (dbl) (dbl)
1 1 300.0000 0 300.0000 300.00000 1 NA
2 1 300.0000 300 487.4553 187.45530 0 187.45530
3 2 168.7674 0 168.7674 168.76738 1 NA
4 3 300.0000 0 300.0000 300.00000 1 NA
5 3 300.0000 300 356.6298 56.62979 0 56.62979
6 4 300.0000 0 300.0000 300.00000 1 NA
Now we can set up the JAGS model.
## Set-Up JAGS Model -------
dat_jags <- as.list(dat_split)
dat_jags$N <- length(dat_jags$TINTERVAL)
inits <- replicate(n = 2, simplify = FALSE, expr = {
list(TINTERVAL_CENS = with(dat_jags, ifelse(CENS, TINTERVAL + 1, NA)),
END_CENS = with(dat_jags, ifelse(CENS, END + 1, NA)) )
})
model_string <-
"
model {
# set priors on reparameterized version, as suggested
# here: https://sourceforge.net/p/mcmc-jags/discussion/610036/thread/d5249e71/?limit=25#8c3b
log_a ~ dnorm(0, .001)
log(a) <- log_a
log_b ~ dnorm(0, .001)
log(b) <- log_b
nu <- a
lambda <- (1/b)^a
for (i in 1:N) {
# Estimate Subject-Durations:
CENS[i] ~ dinterval(TINTERVAL_CENS[i], TINTERVAL[i])
TINTERVAL_CENS[i] ~ dweibull( nu, lambda )
}
}
"
library('runjags')
param_monitors <- c('a', 'b', 'nu', 'lambda')
fit_jags <- run.jags(model = model_string,
burnin = 1000, sample = 1000,
monitor = param_monitors,
n.chains = 2, data = dat_jags, inits = inits)
# estimates:
fit_jags
# actual:
c(a=true_shape, b=true_scale)
Depending on where the split point is, the model estimates very different parameters for the underlying distribution. It only gets the parameters right if the data isn't split into the counting process form. It seems like this is not the way to format the data for this kind of problem.
If I am missing an assumption and my problem is less related to JAGS and more related to how I'm formulating the problem, suggestions are very welcome. I might be despairing that time-varying covariates can't be used in parametric survival models (and can only be used in models like the Cox model, which assumes proportional hazards and doesn't actually estimate the underlying distribution); however, as I mentioned above, the flexsurv package in R does accommodate the (start, stop] formulation in parametric models.
If anyone knows how to build a model like this in another language (e.g. Stan instead of JAGS), that would be appreciated too.
Edit:
Chris Jackson provides some helpful advice via email:
I think the T() construct for truncation in JAGS is needed here. Essentially for each period (t[i], t[i+1]) where a person is alive but the covariate is constant, the survival time is left-truncated at the start of the period, and possibly also right-censored at the end. So you'd write something like y[i] ~ dweib(shape, scale[i])T(t[i], )
I tried implementing this suggestion as follows:
model {
# same as before
log_a ~ dnorm(0, .01)
log(a) <- log_a
log_b ~ dnorm(0, .01)
log(b) <- log_b
nu <- a
lambda <- (1/b)^a
for (i in 1:N) {
# modified to include left-truncation
CENS[i] ~ dinterval(END_CENS[i], END[i])
END_CENS[i] ~ dweibull( nu, lambda )T(START[i],)
}
}
Unfortunately this doesn't quite do the trick. With the old code, the model was mostly getting the scale parameter right, but doing a very bad job on the shape parameter. With this new code, it gets very close to the correct shape parameter, but consistently over-estimates the scale parameter. I have noticed that the degree of over-estimation is correlated with how late the split point comes. If the split-point is early (cens_point = 50), there's not really any over-estimation; if it's late (cens_point = 350), there is a lot.
I thought maybe the problem could be related to 'double-counting' the observations: if we see a censored observation at t=300, then from that same person, an uncensored observation at t=400, it seems intuitive to me that this person is contributing two data-points to our inference about the Weibull parameters when really they should just be contributing one point. I therefore tried incorporating a random effect for each person; however, this completely failed, with huge estimates (in the 50-90 range) for the nu parameter. I'm not sure why that is, but perhaps that's a question for a separate post. Since I'm not sure whether the problems are related, you can find the code for this whole post, including the JAGS code for that model, here.
You can use the rstanarm package, which is a wrapper around Stan. It allows you to use standard R formula notation to describe survival models. The stan_surv function accepts arguments in 'counting process' form, and different baseline hazard functions, including the Weibull, can be used to fit the model.
The survival part of rstanarm (the stan_surv function) is not yet available on CRAN, so you should install the package directly from mc-stan.org.
install.packages("rstanarm", repos = c("https://mc-stan.org/r-packages/", getOption("repos")))
Please see the code below:
library(dplyr)
library(survival)
library(rstanarm)
## Make and split the data exactly as in the question above (same seed and splitting code)
# Surv() wants an event indicator (1 = event observed), so flip the censoring flag
dat_split$CENS <- as.integer(!(dat_split$CENS))
# Fit STAN survival model
mod_tvc <- stan_surv(
formula = Surv(START, END, CENS) ~ 1,
data = dat_split,
iter = 1000,
chains = 2,
basehaz = "weibull-aft")
# Print fit coefficients
mod_tvc$coefficients[2]
unname(exp(mod_tvc$coefficients[1]))
Output, which is consistent with true values (true_shape <- 2; true_scale <- 365):
> mod_tvc$coefficients[2]
weibull-shape
1.943157
> unname(exp(mod_tvc$coefficients[1]))
[1] 360.6058
You can also look at the underlying Stan code using rstan::get_stanmodel(mod_tvc$stanfit) to compare it with the attempts you made in JAGS.
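Once you have a real time-varying covariate (one value per (START, END] row of dat_split), it can go straight into the formula. A sketch with a covariate x_tv invented here purely for illustration:
# hypothetical time-varying covariate: changes value at the split point
dat_split$x_tv <- as.numeric(dat_split$START >= cens_point)
mod_tvc2 <- stan_surv(Surv(START, END, CENS) ~ x_tv,
                      data = dat_split,
                      basehaz = "weibull-aft",
                      iter = 1000, chains = 2)
summary(mod_tvc2)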

Correlation in a multivariate mixed model in R

I am running a multivariate mixed model in R using the nlme package. Suppose that x and y are response variables for longitudinal data in which the within-group errors are assumed to be correlated. (The residual error matrix was shown as an image in the original post and is not reproduced here.)
So my question is: how do I build this correlation into the lme function?
I tried corr = corCompSymm(from = ~ 1 | x) and corr = corAR1(from = ~ 1 | x), but neither worked!
Here is an example:
# visit time in months
time = rep(c(0, 3, 6, 9), length.out = 200)
# subjects
subject = rep(1:50, each = 4)
# indicator for the first response variable
x = c(rep(0, 100), rep(1, 100))
# indicator for the second response variable
y = c(rep(1, 100), rep(0, 100))
# values of both response variables (x_1, x_2)
value = c(rnorm(100, 20, 1), rnorm(100, 48, 1))
# factor identifying which response each row belongs to
variable = factor(c(rep(0, 100), rep(1, 100)), labels = c("X", "Y"))
df = data.frame(subject, time, x, y, value, variable)
library(nlme)
# fit a model in which each response variable has its own intercept and slope (time)
# in both the fixed and random effects, and its own residual variance
f= lme(value ~ -1 + x + y + x:time + y:time , random = ~ -1 + (x + y) + time:( x + y)|subject ,
weights = varIdent(form=~1| x),corr = corAR1(from = ~ 1|x), control=lmeControl(opt="optim"), data =df)
Error in corAR1(from = ~1 | x) : unused argument (from = ~1 | x)
Any suggestions?
I found the website below helpful and useful; I am posting it here in case someone has this problem in the future.
https://rpubs.com/bbolker/3336
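For the record, the immediate error above comes from the argument name: nlme's correlation constructors take form =, not from =. A minimal sketch of the corrected call, assuming the AR(1) correlation should run within subject and the residual variance should differ by response:
library(nlme)
# corAR1() takes `form =`; group the correlation by subject,
# and let the residual variance differ between the two responses
f <- lme(value ~ -1 + x + y + x:time + y:time,
         random = ~ -1 + (x + y) + time:(x + y) | subject,
         weights = varIdent(form = ~ 1 | variable),
         corr = corAR1(form = ~ 1 | subject),
         control = lmeControl(opt = "optim"), data = df)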

MCMCglmm multinomial model in R

I'm trying to create a model using the MCMCglmm package in R.
The data are structured as follows, where dyad, focal, and other are all random effects, r and present are predictor variables, and resp1-resp5 are outcome variables that capture the number of observed behaviors of different subtypes:
dyad focal other r present village resp1 resp2 resp3 resp4 resp5
1 10101 14302 0.5 3 1 0 0 4 0 5
2 10405 11301 0.0 5 0 0 0 1 0 1
…
So a model with only one outcome (teaching) is as follows:
prior_overdisp_i <- list(R=list(V=diag(2),nu=0.08,fix=2),
G=list(G1=list(V=1,nu=0.08), G2=list(V=1,nu=0.08), G3=list(V=1,nu=0.08), G4=list(V=1,nu=0.08)))
m1 <- MCMCglmm(teaching ~ trait-1 + at.level(trait,1):r + at.level(trait,1):present,
random= ~idh(at.level(trait,1)):focal + idh(at.level(trait,1)):other +
idh(at.level(trait,1)):X + idh(at.level(trait,1)):village,
rcov=~idh(trait):units, family = "zipoisson", prior=prior_overdisp_i,
data = data, nitt = nitt.1, thin = 50, burnin = 15000, pr = TRUE, pl = TRUE, verbose = TRUE, DIC = TRUE)
Hadfield's course notes (Ch 5) give an example of a multinomial model that uses only a single outcome variable with 3 levels (sheep horns of 3 types). Similar treatment can be found here: http://hlplab.wordpress.com/2009/05/07/multinomial-random-effects-models-in-r/ This is not quite right for what I'm doing, but contains helpful background info.
Another reference (Hadfield 2010) gives an example of a multi-response MCMCglmm that follows the same format but uses cbind() to predict a vector of responses, rather than a single outcome. The same model with multiple responses would look like this:
m1 <- MCMCglmm(cbind(resp1, resp2, resp3, resp4, resp5) ~ trait-1 +
at.level(trait,1):r + at.level(trait,1):present,
random= ~idh(at.level(trait,1)):focal + idh(at.level(trait,1)):other +
idh(at.level(trait,1)):X + idh(at.level(trait,1)):village,
rcov=~idh(trait):units,
family = c("zipoisson","zipoisson","zipoisson","zipoisson","zipoisson"),
prior=prior_overdisp_i,
data = data, nitt = nitt.1, thin = 50, burnin = 15000, pr = TRUE, pl = TRUE, verbose = TRUE, DIC = TRUE)
I have two programming questions here:
How do I specify a prior for this model? I've looked at the materials mentioned in this post but just can't figure it out.
I've run a similar version with only two response variables, but I only get one slope, where I thought I should get a different slope for each resp variable. Where am I going wrong, or have I misunderstood the model?
Answer to my first question, based on the HLP post and some help from a colleague/stats consultant:
# values for prior
k <- 5 # originally: length(levels(dative$SemanticClass)), so k = # of outcomes for SemanticClass aka categorical outcomes
I <- diag(k-1) #should make matrix of 0's with diagonal of 1's, dimensions k-1 rows and k-1 columns
J <- matrix(rep(1, (k-1)^2), c(k-1, k-1)) # should make k-1 x k-1 matrix of 1's
And for my model, using the multinomial5 family and 5 outcome variables, the prior is:
prior = list(
  R = list(fix = 1, V = 0.5 * (I + J), n = 4),
  G = list(
    G1 = list(V = diag(4), n = 4)))
For my second question, I need to add an interaction term to the fixed effects in this model:
m <- MCMCglmm(cbind(Resp1, Resp2...) ~ -1 + trait*predictorvariable,
...
The result gives both main effects for the Response variables and posterior estimates for the Response/Predictor interaction (the effect of the predictor variable on each response variable).
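For concreteness, a hedged sketch of what such a call could look like with two of the count outcomes and the predictor r (structure simplified relative to the question; treat it as illustrative only, with default priors):
# trait - 1 gives a separate intercept per response;
# trait:r gives each response its own slope for r
m2 <- MCMCglmm(cbind(resp1, resp2) ~ trait - 1 + trait:r,
               random = ~ idh(trait):focal,
               rcov = ~ idh(trait):units,
               family = c("poisson", "poisson"),
               data = data)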
