Error messages when running glmer in R

I am attempting to run two similar generalized linear mixed models in R. Both models have the same predictors, covariates and random factors; only the response variables differ. The models require the lme4 package. The issue I was having with the second model has since been resolved by Ben Bolker.
In the first model, the response variable is biomass weight and family = gaussian.
global.model <- glmer(ex.drywght ~ forestloss562*forestloss17*roaddenssec*nearestroadprim +
elevation + soilPC1 + soilPC2 +
(1|block/fragment),
data = RespPredComb,
family = "gaussian")
Predictors have the following units:
forestloss562 = %,
forestloss17 = %,
roaddenssec = km/km2, and
nearestroadprim = m.
Executing this model brings up the following warning messages:
Warning messages:
1: In glmer(ex.drywght ~ forestloss562 * forestloss17 * roaddenssec * :
calling glmer() with family=gaussian (identity link) as a shortcut to lmer() is deprecated; please call lmer() directly
2: Some predictor variables are on very different scales: consider rescaling
I then perform the subsequent steps, following the sequence described in Grueber et al. (2011):
I standardize predictors,
stdz.model <- standardize(global.model, standardize.y = FALSE)
(requires package arm)
use automated model selection with subsets of the supplied ‘global’ model
model.set <- dredge(stdz.model)
(requires package MuMIn)
Here I get the following warning message:
Warning message:
In dredge(stdz.model) : comparing models fitted by REML
find the top models (those within 2 AIC units of the best) and
top.models <- get.models(model.set, subset = delta < 2)
do model averaging
model.avg(model.set, subset = delta < 2)
Here, I get this error message:
Error in apply(apply(z, 2L, is.na), 2, all) :
dim(X) must have a positive length
Any advice on how to possibly fix this error would be very much appreciated.
In the second model, the response variable is richness and family = poisson.
global.model <- glmer(ex.richness ~ forestloss562*forestloss17*roaddenssec*nearestroadprim +
elevation + soilPC1 + soilPC2 +
(1|block/fragment),
data = mydata,
family = "poisson")
When I execute the above command I get the following error and warning messages:
Error: (maxstephalfit) PIRLS step-halvings failed to reduce deviance in pwrssUpdate
In addition: Warning messages:
1: Some predictor variables are on very different scales: consider rescaling
2: In pwrssUpdate(pp, resp, tolPwrss, GQmat, compDev, fac, verbose) :
Cholmod warning 'not positive definite' at file:../Cholesky/t_cholmod_rowfac.c, line 431
3: In pwrssUpdate(pp, resp, tolPwrss, GQmat, compDev, fac, verbose) :
Cholmod warning 'not positive definite' at file:../Cholesky/t_cholmod_rowfac.c, line 431
Please find a reproducible subset of my data below:
structure(list(plot.code = structure(c(1L, 3L, 2L, 4L, 5L, 6L,
7L), .Label = c("a100m56r", "b1m177r", "c100m56r", "d1f1r", "e1m177r",
"f1m17r", "lf10m56r"), class = "factor"), site.code = structure(c(1L,
3L, 2L, 4L, 5L, 6L, 7L), .Label = c("a100m56", "b1m177", "c100m56",
"d1f1", "e1m177", "f1m17", "lf10m56"), class = "factor"), block = structure(c(1L,
3L, 2L, 4L, 5L, 6L, 7L), .Label = c("a", "b", "c", "d", "e",
"f", "lf"), class = "factor"), fragment = structure(c(1L, 3L,
2L, 4L, 5L, 6L, 7L), .Label = c("a100", "b1", "c100", "d1", "e1",
"f1", "lf10"), class = "factor"), elevation = c(309L, 342L, 435L,
495L, 443L, 465L, 421L), forestloss562 = c(25.9, 56.77, 5.32,
27.4, 24.25, 3.09, 8.06), forestloss17 = c(7.47, 51.93, 79.76,
70.41, 80.55, 0, 0), roaddenssec = c(2.99, 3.92, 2.61, 1.58,
1.49, 1.12, 1.16), nearestroadprim = c(438L, 237L, 2637L, 327L,
655L, 528L, 2473L), soilPC1 = c(0.31, -0.08, 1.67, 2.39, -1.33,
-1.84, -0.25), soilPC2 = c(0.4, 0.41, -0.16, 0.15, 0.03, -0.73,
0.51), ex.richness = c(0L, 0L, 1L, 7L, 0L, 0L, 1L), ex.drywght = c(0,
0, 1.255, 200.2825, 0, 0, 0.04)), .Names = c("plot.code", "site.code",
"block", "fragment", "elevation", "forestloss562", "forestloss17",
"roaddenssec", "nearestroadprim", "soilPC1", "soilPC2", "ex.richness",
"ex.drywght"), class = "data.frame", row.names = c(NA, -7L))

tl;dr you need to standardize your variables before you fit the model, for greater numerical stability. I also have a few comments about the advisability of what you're doing, but I'll save them for the end ...
source("SO_glmer_26904580_data.R")
library("arm")
library("lme4")
library("MuMIn")
Try the first fit:
pmod <- glmer(ex.richness ~
forestloss562*forestloss17*roaddenssec*nearestroadprim +
elevation + soilPC1 + soilPC2 +
(1|block/fragment),
data = dat,
family = "poisson")
This fails, as reported above.
The warning that accompanies the error provides a clue:
## 1: Some predictor variables are on very different scales: consider rescaling
Scaling numeric parameters:
pvars <- c("forestloss562","forestloss17",
"roaddenssec","nearestroadprim",
"elevation","soilPC1","soilPC2")
datsc <- dat
datsc[pvars] <- lapply(datsc[pvars],scale)
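As a quick sanity check, the rescaled columns should now have mean ~0 and standard deviation ~1 (a minimal base-R check):
## each scaled predictor should be centered with unit variance
round(sapply(datsc[pvars], mean), 3)
sapply(datsc[pvars], sd)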
Try again:
pmod <- glmer(ex.richness ~
forestloss562*forestloss17*roaddenssec*nearestroadprim +
elevation + soilPC1 + soilPC2 +
(1|block/fragment),
data = datsc,
family = "poisson",
na.action="na.fail")
This works, although we get a warning message about a too-large gradient -- I think this is actually ignorable (we're still working on getting these error sensitivity thresholds right).
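If you want to see the gradient the warning is complaining about, something like this should work (a sketch that pokes at lme4 internals, which may change between versions):
## largest absolute scaled gradient at the fitted optimum
max(abs(pmod@optinfo$derivs$gradient))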
As far as I can tell, the following lines seem to be working:
stdz.model <- standardize(pmod, standardize.y = FALSE)
## increases max gradient -- larger warning
model.set <- dredge(stdz.model) ## slow, but running ...
Here are my comments about advisability:
Not even counting random-effects parameters, you have only 8x as many observations as predictor variables. This is pushing it (a rule of thumb is that you should have 10-20 observations per parameter).
nrow(datsc) ## 159
ncol(getME(pmod,"X")) ## 19
Dredging/multi-model-averaging over models with and without interactions can be dangerous -- at the very least, centering continuous variables is necessary in order for it to be interpretable. (I don't know whether dredge does anything to try to be sensible in this case.)
I also tried glmmLasso on this problem -- it ended up shrinking away all of the fixed effect terms ...
library("glmmLasso")
datsc$bf <- interaction(datsc$block,datsc$fragment)
glmmLasso(ex.richness ~
forestloss562+forestloss17+roaddenssec+nearestroadprim +
elevation + soilPC1 + soilPC2,
rnd=list(block=~1,bf=~1),
data = datsc,
family = poisson(),
lambda=500)

Related

Interaction effect plot with CIs and emmeans contrast

I'm having trouble creating an interaction effect plot. There is probably something fairly simple I don't yet know how to do. I'm pretty new to R and ggplot. My reprex is below. Your insight is greatly appreciated!
The data is from UCLA and I'm also adapting their example for my purposes here.
library(here)
library(emmeans)
library(tidyverse)
dat <- read.csv("https://stats.idre.ucla.edu/wp-content/uploads/2019/03/exercise.csv")
Convert prog into factor variable
dat$prog <- factor(dat$prog, labels = c("jog","swim","read"))
The model
contcat <- lm(loss ~ hours * prog, data=dat)
summary(contcat)
I create mylist with certain points on hours and the two categories in prog that I want to contrast.
(mylist <- list(hours = seq(0, 4, .5), prog=c("jog","read")))
I then pass the object contcat to emmeans(). I request predicted values for every combination of hours and prog specified in at=mylist and store the output in an object called emcontcat.
emcontcat <- emmeans(contcat, ~ hours * prog, at=mylist)
I use emmip to output a set of values using plotit=FALSE.
contcatdat <- emmip(contcat, prog ~ hours, at = mylist, CIs=TRUE, plotit=FALSE)
The output object is fed to ggplot. The interaction effect is plotted along with CI bands.
ggplot(data=contcatdat, aes(x=hours, y=yvar, color=prog)) +
geom_line() +
geom_ribbon(aes(ymax=UCL, ymin=LCL, fill=prog), alpha=0.4)
The plot looks like this:
But overlapping CIs do not always correspond to the portions of the lines where there is no significant difference in predicted values. I want to add dashed lines for the portions of the lines where there is no significant difference in predicted values. The figure below
shows the kind of figure I'm trying to create. (The figure is from a paper by Trenton Mize (2019), Fig. 14.)
To get the simple effect (i.e., the difference of two predicted values), I pass emcontcat to the contrast() function, where we can request "pairwise" differences (or simple effects). P-values are given for jog - read at each level of hours specified in mylist.
contrast(emcontcat, "pairwise", by="hours")
The output:
Where I am having trouble is how to incorporate the simple effect (i.e., the parts of hours where jog - read is significantly different or not) into ggplot as dashed or solid portions of the lines, as in the Mize (2019) figure.
We want to know where the intervals overlap and, where they do, draw dashed lines. That part is easy with a small helper function, itvl_is_l(), defined below. However, on the left-hand side of the plot there is just one point, and to draw a line we need at least two, so we first interpolate with approx() (which the plotting functions also do internally). Since we want to do everything for both levels of prog, we use by().
Preprocessing
## merge interpolations by prog
aux <- by(contcatdat, contcatdat$prog, \(x) {
x <- merge(x, data.frame(hours=with(x, seq.int(min(hours), max(hours),
length.out=1e3))), all=TRUE)
x$prog <- unique(na.omit(x$prog))
u <- c('yvar', 'LCL', 'UCL')
x[u] <- lapply(x[u], \(x) approx(x, xout=seq_along(x))$y)
x
})
## interval disjointness: TRUE where the two intervals do not overlap
itvl_is_l <- \(a, b) {unname(as.vector(ifelse(b[, 1] > a[, 2] | a[, 1] > b[, 2], TRUE, FALSE)))}
## flag hours where the jog and read CIs do not intersect
its <- itvl_is_l(aux$jog[c('LCL', 'UCL')], aux$read[c('LCL', 'UCL')])
aux <- lapply(aux, `[<-`, 'its', val=its) ## add as variable
aux <- lapply(aux, \(x) transform(x, itsn=cumsum(c(0, diff(x$its)) != 0) + 1)) ## number the consecutive runs of overlap/non-overlap
contcatdat <- do.call(rbind, aux) ## combine back as contcatdat
Plot
clr <- c('#FF0000', '#0000FF', '#0000001A') ## some colors
png('foo.png', 600, 400) ## open .png device
plot(yvar ~ hours, contcatdat, type='n')
grid()
## lines left
lines(yvar ~ hours, contcatdat, subset=prog == 'jog' & itsn > 2, lwd=2, col=clr[1])
lines(yvar ~ hours, contcatdat, subset=prog == 'read' & itsn > 2, lwd=2, col=clr[2])
## lines middle, dashed
lines(yvar ~ hours, contcatdat, subset=prog == 'jog' & itsn == 2, lwd=2, col=clr[1], lty=2)
lines(yvar ~ hours, contcatdat, subset=prog == 'read' & itsn == 2, lwd=2, col=clr[2], lty=2)
## lines right
lines(yvar ~ hours, contcatdat, subset=prog == 'jog' & itsn < 2, lwd=2, col=clr[1])
lines(yvar ~ hours, contcatdat, subset=prog == 'read' & itsn < 2, lwd=2, col=clr[2])
## CIs
with(subset(contcatdat, prog == 'jog'),
polygon(c(hours, rev(hours)), c(UCL, rev(LCL)), border=NA, col=clr[3]))
with(subset(contcatdat, prog == 'read'),
polygon(c(hours, rev(hours)), c(UCL, rev(LCL)), border=NA, col=clr[3]))
## legend
legend('topleft', legend=unique(contcatdat$prog), title='Group', col=clr[1:2], lty=1, lwd=2)
dev.off() ## close .png device
You could also try to plot the polygons first and opaque with a border, if that might look better.
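A minimal sketch of that variant, using the same objects as above: draw the bands first with a colored border, then overlay the lines so they stay visible:
plot(yvar ~ hours, contcatdat, type='n')
with(subset(contcatdat, prog == 'jog'),
polygon(c(hours, rev(hours)), c(UCL, rev(LCL)), border=clr[1], col=clr[3]))
with(subset(contcatdat, prog == 'read'),
polygon(c(hours, rev(hours)), c(UCL, rev(LCL)), border=clr[2], col=clr[3]))
## ... then add the lines() calls exactly as above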
Data:
contcatdat <- structure(list(prog = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L,
2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), levels = c("jog",
"read"), class = "factor"), hours = c(0, 0, 0.5, 0.5, 1, 1, 1.5,
1.5, 2, 2, 2.5, 2.5, 3, 3, 3.5, 3.5, 4, 4), yvar = c(-6.78065983345649,
2.21637209230689, -3.05428518360714, 0.738291278604121, 0.672089466242214,
-0.739789535098646, 4.39846411609157, -2.21787034880141, 8.12483876594092,
-3.69595116250418, 11.8512134157903, -5.17403197620695, 15.5775880656396,
-6.65211278990971, 19.303962715489, -8.13019360361248, 23.0303373653383,
-9.60827441731525), SE = c(1.64384530410457, 1.48612021916972,
1.25520349531108, 1.14711211184156, 0.87926401607137, 0.820840725755632,
0.543079708493216, 0.531312719216624, 0.375535476484592, 0.376041650300328,
0.558013604603198, 0.501120592808483, 0.89777081499028, 0.781944232621328,
1.27470257475094, 1.1056003463909, 1.66373129934114, 1.44356083265185
), df = c(894, 894, 894, 894, 894, 894, 894, 894, 894, 894, 894,
894, 894, 894, 894, 894, 894, 894), LCL = c(-10.0069052579393,
-0.700318757711651, -5.51777400669205, -1.51305511813823, -1.05357261502514,
-2.35078883599747, 3.33260443922245, -3.26063588462286, 7.38780492844162,
-4.43397842739773, 10.7560441598055, -6.15754180868669, 13.815604150934,
-8.18677301395645, 16.8022045883112, -10.3000681349591, 19.7650632676689,
-12.4414373187615), UCL = c(-3.55441440897366, 5.13306294232543,
-0.590796360522233, 2.98963767534648, 2.39775154750957, 0.871209765800175,
5.46432379296068, -1.17510481297997, 8.86187260344022, -2.95792389761063,
12.946382671775, -4.19052214372721, 17.3395719803452, -5.11745256586298,
21.8057208426668, -5.96031907226584, 26.2956114630078, -6.77511151586902
), tvar = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), levels = c("jog", "read"), class = "factor"),
xvar = c(0, 0, 0.5, 0.5, 1, 1, 1.5, 1.5, 2, 2, 2.5, 2.5,
3, 3, 3.5, 3.5, 4, 4)), estName = "yvar", clNames = c("lower.CL",
"upper.CL"), pri.vars = c("prog", "hours"), adjust = "none", side = 0, delta = 0, type = "link", mesg = "Confidence level used: 0.95", row.names = c(NA,
18L), class = c("summary_emm", "data.frame"), labs = list(xlab = "hours",
ylab = "Linear prediction", tlab = "prog"), vars = list(byvars = character(0),
tvars = "prog"))

R confusionMatrix error data and reference factors with same levels

I'm trying to understand how to make a confusion matrix after I use the glm function for a logistic regression. Here is my code so far. I am using the caret package and the confusionMatrix function.
dput(head(wine_quality))
structure(list(fixed.acidity = c(7, 6.3, 8.1, 7.2, 7.2, 8.1),
volatile.acidity = c(0.27, 0.3, 0.28, 0.23, 0.23, 0.28),
citric.acid = c(0.36, 0.34, 0.4, 0.32, 0.32, 0.4), residual.sugar = c(20.7,
1.6, 6.9, 8.5, 8.5, 6.9), chlorides = c(0.045, 0.049, 0.05,
0.058, 0.058, 0.05), free.sulfur.dioxide = c(45, 14, 30,
47, 47, 30), total.sulfur.dioxide = c(170, 132, 97, 186,
186, 97), density = c(1.001, 0.994, 0.9951, 0.9956, 0.9956,
0.9951), pH = c(3, 3.3, 3.26, 3.19, 3.19, 3.26), sulphates = c(0.45,
0.49, 0.44, 0.4, 0.4, 0.44), alcohol = c(8.8, 9.5, 10.1,
9.9, 9.9, 10.1), quality = structure(c(4L, 4L, 4L, 4L, 4L,
4L), .Label = c("3", "4", "5", "6", "7", "8", "9", "white"
), class = "factor"), type = structure(c(3L, 3L, 3L, 3L,
3L, 3L), .Label = c("", "red", "white"), class = "factor"),
numeric_type = c(0, 0, 0, 0, 0, 0)), row.names = c(NA, 6L
), class = "data.frame")
library(tibble)
library(broom)
library(ggplot2)
library(caret)
any(is.na(wine_quality)) # this evaluates to FALSE
wine_model <- glm(type ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar + chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH + sulphates + alcohol, wine_quality, family = "binomial")
# split data into test and train
smp_size <- floor(0.75 * nrow(wine_quality))
set.seed(123)
train_ind <- sample(seq_len(nrow(wine_quality)), size = smp_size)
train <- wine_quality[train_ind, ]
test <- wine_quality[-train_ind, ]
# make prediction on train data
pred <- predict(wine_model)
train$fixed.acidity <- as.numeric(train$fixed.acidity)
round(train$fixed.acidity)
train$fixed.acidity <- as.factor(train$fixed.acidity)
pred <- as.numeric(pred)
round(pred)
pred <- as.factor(pred)
confusionMatrix(pred, wine_quality$fixed.acidity)
After this final line of code, I get this error:
Error: `data` and `reference` should be factors with the same levels.
This error doesn't make sense to me. I've tested that the lengths of pred and fixed.acidity are the same (6497), and that both are of factor type.
length(pred)
length(wine_quality$fixed.acidity)
class(pred)
class(train$fixed.acidity)
Is there any obvious reason why this confusion matrix is not working? I'm trying to find a hit ratio for the model. I would appreciate simple explanations; I really don't know what I'm doing here.
The error from confusionMatrix() tells us that the two variables passed to the function need to be factors with the same levels. We can see why we received the error when we run str() on both variables.
> str(pred)
Factor w/ 5318 levels "-23.6495182533792",..: 310 339 419 1105 310 353 1062 942 594 1272 ...
> str(wine_quality$fixed.acidity)
num [1:6497] 7.4 7.8 7.8 11.2 7.4 7.4 7.9 7.3 7.8 7.5 ...
pred is a factor, while wine_quality$fixed.acidity is a numeric vector. The confusionMatrix() function is used to compare predicted and actual values of a dependent variable. It is not intended to cross-tabulate a predicted variable and an independent variable.
The code in the question uses fixed.acidity in the confusion matrix, when it should compare predicted values of type against actual values of type from the testing data.
Also, the code in the question creates the model prior to splitting the data into test and training data. The correct procedure is to split the data before building a model on the training data, make predictions with the testing (hold back) data, and compare actuals to predictions in the testing data.
Finally, the result of the predict() function as coded in the original post is the linear predicted values from the GLM model (equivalent to wine_model$linear.predictors in the output model object). These values must be further transformed before they are suitable for use in confusionMatrix().
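For illustration, a minimal sketch of that transformation (it assumes type is a clean two-level factor with levels red and white, as in the rebuilt data below, and uses a hypothetical 0.5 cutoff):
## predicted probabilities on the response scale, then hard class labels
prob <- predict(wine_model, newdata = test, type = "response")
pred_class <- factor(ifelse(prob > 0.5, "white", "red"), levels = levels(test$type))
confusionMatrix(data = pred_class, reference = test$type)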
In practice, it's easier to use caret::train() with the GLM method and binomial family, where predict() will generate results that are usable in confusionMatrix(). We'll illustrate this with the UCI wine quality data.
First, we download the data from the UCI Machine Learning Repository to make the example reproducible.
download.file("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv",
"./data/wine_quality_red.csv")
download.file("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv",
"./data/wine_quality_white.csv")
Second, we load the data, assign type as either red or white depending on the data file, and bind the data into a single data frame.
red <- read.csv("./data/wine_quality_red.csv",header = TRUE,sep=";")
white <- read.csv("./data/wine_quality_white.csv",header = TRUE,sep=";")
red$type <- "red"
white$type <- "white"
wine_quality <- rbind(red,white)
wine_quality$type <- factor(wine_quality$type)
Next, we split the data into training and testing sets based on values of type, so each data frame gets a proportional number of red and white wines, and then train a model with the default caret::train() settings and the GLM method.
library(caret)
set.seed(123)
inTrain <- createDataPartition(wine_quality$type, p = 3/4)[[1]]
training <- wine_quality[ inTrain,]
testing <- wine_quality[-inTrain,]
aModel <- train(type ~ .,data = training, method="glm", family = "binomial")
Finally, we use the model to make predictions on the hold back data frame, and run a confusion matrix.
testLM <- predict(aModel,testing)
confusionMatrix(data=testLM,reference=testing$type)
...and the output:
> confusionMatrix(data=testLM,reference=testing$type)
Confusion Matrix and Statistics
Reference
Prediction red white
red 393 3
white 6 1221
Accuracy : 0.9945
95% CI : (0.9895, 0.9975)
No Information Rate : 0.7542
P-Value [Acc > NIR] : <2e-16
Kappa : 0.985
Mcnemar's Test P-Value : 0.505
Sensitivity : 0.9850
Specificity : 0.9975
Pos Pred Value : 0.9924
Neg Pred Value : 0.9951
Prevalence : 0.2458
Detection Rate : 0.2421
Detection Prevalence : 0.2440
Balanced Accuracy : 0.9913
'Positive' Class : red
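Since the question asks for a hit ratio: with factor predictions and actuals, it is simply the proportion of matches, the same number confusionMatrix() reports as Accuracy.
## hit ratio = overall accuracy
mean(testLM == testing$type)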

How can I compute the median absolute deviation (MAD) for generalized linear mixed-effects models

I know my question is linked to stats but I'm looking for a solution in R, so I believe it's suited for SO.
I built a generalized linear mixed-effects model (GLMM) with the glmer function from the lme4 package in R to model species richness around aquaculture sites from significant explanatory variables, following Zuur et al. (2009), Mixed Effects Models and Extensions in Ecology with R. The model is:
Mod1 <- glmer(Richness ~ Distance + Depth + Substrate + Beggiatoa +
Distance*Beggiatoa + (1|Site/transect), family = poisson, data = mydata)
Now I have a full data set collected at different sites and I want to assess how this model performs on the new data set.
Following a question on CV, someone suggested computing the median absolute deviation (MAD) on the new data set. I tried the mad function from the stats package in R, but I get the following error message:
Error in x[!is.na(x)] : object of type 'S4' is not subsettable
In addition: Warning messages:
1: In is.na(x) : is.na() applied to non-(list or vector) of type 'S4'
2: In is.na(x) : is.na() applied to non-(list or vector) of type 'S4'
Does anybody know what's going wrong here? Is it that mad from stats can't be applied directly to GLMMs? If so, is there another R package to calculate the MAD for GLMMs?
Edit:
To give you an idea of my data, here's the output from dput(head(mydata)); note also that there is no "Substrate" category in the new data set and that "S" refers to "Richness":
structure(list(S = c(0, 1, 2, 3, 3, 2), Site = structure(c(1L,
1L, 1L, 1L, 1L, 1L), .Label = c("BC", "BH", "GC", "IS", "Ref"
), class = "factor"), Transect = structure(c(4L, 4L, 4L, 4L,
4L, 4L), .Label = c("10GC", "10IS", "10N", "10S", "11IS", "12IS",
"13E", "1GC", "1N", "1W", "2E", "2GC", "2IS", "2N", "2W", "2WA",
"3E", "3GC", "3IS", "3N", "3S", "4E", "4GC", "4IS", "4S", "4W",
"5GC", "5IS", "5S", "6GC", "6IS", "6N", "6S", "6W", "7E", "7GC",
"7IS", "8GC", "8IS", "8W", "9E", "9GC", "9IS", "9N", "RefBC1",
"RefBC10", "RefBC11", "RefBC12", "RefBC2", "RefBC3", "RefBC4",
"RefBC5", "RefBC6", "RefBC7", "RefBC8", "RefBC9", "X1", "X2"), class = "factor"),
Distance = c(2, 20, 40, 80, 120, 160), Depth = c(40L, 40L,
50L, 40L, 40L, 40L), Beggiatoa = c(2, 1, 1, 0, 0, 0)), .Names = c("S",
"Site", "Transect", "Distance", "Depth", "Beggiatoa"), row.names = c(NA,
6L), class = "data.frame")
For within-sample error, the median absolute deviation computation would just be
mad(residuals(fitted_model))
... you might want residuals(fitted_model,type="response"), since residuals will give you deviance residuals by default (see ?residuals.merMod)
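A minimal sketch of the difference, assuming Mod1 is the fitted model from the question:
head(residuals(Mod1))                     # deviance residuals (the default)
head(residuals(Mod1, type = "response"))  # observed minus fitted, on the response scale
mad(residuals(Mod1, type = "response"))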
If you want to look at out-of-sample error, you could do something like this:
pred <- predict(fitted_model,
newdata = newdf,
type = "response",
re.form=~0)
mad(pred, center=newdf$S)
(re.form=~0 specifies that you want to omit random effects from the prediction, which is your only choice unless you're predicting at sites/transects where you've also got training data)

Implementing Tabu Search in R

I am trying to implement Tabu Search on a classification dataset, namely the Indian Liver Patient Dataset, available in the UCI repository at https://archive.ics.uci.edu/ml/datasets/ILPD+(Indian+Liver+Patient+Dataset), but I am facing issues.
Following is the code I've used:
NF <- 10
NTR <- 193
NTE <- 193
library(class)
library(e1071)
library(caret)
library(party)
library(nnet)
ILPD <- read.csv("C:/Users/Dell/Desktop/Codes and Datasets/ILPD.csv")
nrow(ILPD)
set.seed(9850)
gp<-runif(nrow(ILPD))
ILPD<-ILPD[order(gp),]
idx <- createDataPartition(y = ILPD$Class, p = 0.7, list = FALSE)
train<-ILPD[idx,]
test<-ILPD[-idx,]
ver<-test[,11]
evaluate <- function(th){
if (sum(th) == 0)return(0)
model <- svm(train[ ,th==1], train[,11] , gamma = 0.1, kernel ="sigmoid", na.action = na.omit)
pred <- predict(model, test[ ,th==1])
csRate <- sum(pred == ver)/NTE
penalty <- (NF - sum(th))/NF
return(csRate + penalty)
}
library(tabuSearch)
res <- tabuSearch(size = NF, iters = 2, objFunc = evaluate, config = matrix(1,1,NF), listSize = 5, nRestarts = 4)
plot(res)
plot(res, "tracePlot")
summary(res, verbose = TRUE)
Error:
Error in if (any(co)) { : missing value where TRUE/FALSE needed
In addition: Warning message:
In FUN(newX[, i], ...) : NAs introduced by coercion
Called from: svm.default(train[, th == 1], train[, 11], gamma = 0.1, kernel = "sigmoid", na.action = na.omit)
Some part of the data
structure(list(age = c(55L, 48L, 14L, 17L, 40L, 37L), gender = c(0L,
0L, 0L, 0L, 1L, 0L), TB = c(0.9, 2.4, 0.9, 0.9, 0.9, 0.7), DB = c(0.2,
1.1, 0.3, 0.2, 0.3, 0.2), Alkphos = c(116L, 554L, 310L, 224L,
293L, 235L), SGPT = c(36L, 141L, 21L, 36L, 232L, 96L), sgot = c(16L,
73L, 16L, 45L, 245L, 54L), TP = c(6.2, 7.5, 8.1, 6.9, 6.8, 9.5
), ALB = c(3.2, 3.6, 4.2, 4.2, 3.1, 4.9), AG = c(1, 0.9, 1, 1.55,
0.8, 1), Class = structure(c(2L, 1L, 2L, 1L, 1L, 1L), .Label = c("One",
"Two"), class = "factor")), .Names = c("age", "gender", "TB",
"DB", "Alkphos", "SGPT", "sgot", "TP", "ALB", "AG", "Class"), row.names = c(216L,
405L, 316L, 103L, 20L, 268L), class = "data.frame")
I would appreciate it if anyone could help me with this.
I wanted to see how tabu search worked anyway, so this seemed a good place to start.
Basically you need to test your code better; evaluate() simply did not work. It is easy to test by hand by creating values of th and then calling evaluate on them.
Also, use high-level comments to organize your code and keep track of what you are doing, especially when posting to SO for help, so as to save us time figuring out what you intend.
I'm not sure whether these results are good; the amount of data is so minimal that it is hard to tell.
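For example, a quick hand test might look like this (hypothetical configurations; with the original evaluate() the all-ones case should reproduce the error above):
th <- rep(1, NF)               # all 10 features selected
evaluate(th)                   # should return a single numeric score
th <- c(1, 1, rep(0, NF - 2))  # only the first two features
evaluate(th)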
Anyway here is the changed code:
NF <- 10
NTR <- 193
NTE <- 193
library(class)
library(e1071)
library(caret)
library(party)
library(nnet)
ILPD1 <- structure(
list(
age = c(55L,48L,14L,17L,40L,37L),
gender = c(0L,0L,0L,0L,1L,0L),
TB = c(0.9,2.4,0.9,0.9,0.9,0.7),
DB = c(0.2,1.1,0.3,0.2,0.3,0.2),
Alkphos = c(116L,554L,310L,224L,293L,235L),
SGPT = c(36L,141L,21L,36L,232L,96L),
sgot = c(16L,73L,16L,45L,245L,54L),
TP = c(6.2,7.5,8.1,6.9,6.8,9.5),
ALB = c(3.2,3.6,4.2,4.2,3.1,4.9),
AG = c(1,0.9,1,1.55,0.8,1),
Class = structure(c(2L,1L,2L,1L,1L,1L),
.Label = c("One","Two"),
class = "factor")
),
.Names = c("age","gender","TB","DB","Alkphos",
"SGPT","sgot","TP","ALB","AG","Class"),
row.names = c(216L,405L,316L,103L,20L,268L),
class = "data.frame"
)
ILPD <- ILPD1
#ILPD <- read.csv("ILPD.csv")
nrow(ILPD)
set.seed(9850)
# setup test and training data
gp <- runif(nrow(ILPD))
ILPD <- ILPD[order(gp),]
idx <- createDataPartition(y = ILPD$Class,p = 0.7,list = FALSE)
train <- ILPD[idx,]
test <- ILPD[ - idx,]
ver <- test[,11]
evaluate <- function(th) {
# evaluate the tabu for a value of th
# tabuSearch will use this function to evaluate points in its search space
#
# if everything is turned off just return zero as we are not interested
if(sum(th) == 0) return(0)
# we just want to train our svm on the columns for which th==1
svmtrn <- train[,th==1]
# but we need to keep the Class variable as our label
if (is.null(svmtrn$Class)) return(0)
# Train up an svm now
# Note that the first argument is the formula we are training
model <- svm(Class~.,svmtrn,gamma = 0.1,kernel = "sigmoid",na.action = na.omit)
pred <- predict(model,test)
# now evaluate how well our prediction worked
csRate <- sum(pred == ver) / NTE
penalty <- (NF - sum(th)) / NF
return(csRate + penalty)
}
library(tabuSearch)
evaluate(matrix(1,1,NF))
res <- tabuSearch(size = NF,iters = 2,objFunc = evaluate,
config = matrix(1,1,NF),listSize = 5,nRestarts = 4)
plot(res)
plot(res,"tracePlot")
summary(res,verbose = TRUE)
Here are the output results:
[1] 6
[1] 0.005181347
Tabu Settings
Type = binary configuration
No of algorithm repeats = 1
No of iterations at each prelim search = 2
Total no of iterations = 12
No of unique best configurations = 8
Tabu list size = 5
Configuration length = 10
No of neighbours visited at each iteration = 10
Results:
Highest value of objective fn = 0.70518
Occurs # of times = 1
Optimum number of variables = 3
Optimum configuration:
[1] 1 0 0 0 0 1 0 0 0 1
And here is your plot:
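To map the best configuration back to feature names, something like the following should work; it is a sketch that relies on the configKeep and eUtilityKeep fields that summary(res) is built from:
## columns of ILPD corresponding to the best configuration found
best <- res$configKeep[which.max(res$eUtilityKeep), ]
names(ILPD)[seq_len(NF)][best == 1]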

How to plot asymptote of a curve in R?

I have this data called mydf, which compares hybrid samples for efficiency. There are seven efficiency columns for the intermixing of sample_A and sample_B. I want to plot these seven efficiencies to see at which measurement they drop significantly relative to the first few columns.
mydf<-structure(list(sample_A = structure(c(1L, 2L, 2L, 2L, 3L, 4L), .Label = c("2568",
"2669", "2670", "2671", "2946", "LPH-001-10_AK1", "LPH-001-12_AK2",
"LPH-001-9"), class = "factor"), sample_B = structure(c(1L, 2L,
3L, 4L, 3L, 4L), .Label = c("2568", "2669", "2670", "2671", "2946",
"LPH-001-10_AK1", "LPH-001-12_AK2", "LPH-001-9"), class = "factor"),
efficiency = c(1.02, 0.964, 0.415, 0.422, 0.98, 0.986), efficiency2 = c(1,
0.944, 0.395, 0.402, 0.96, 0.966), efficiency3 = c(0.9, 0.844,
0.295, 0.302, 0.86, 0.866), efficiency4 = c(0.32, 0.264,
-0.285, -0.278, 0.28, 0.286), efficiency5 = c(0.02, -0.0360000000000001,
-0.585, -0.578, -0.0200000000000001, -0.0140000000000001),
efficiency6 = c(0.12, 0.0639999999999999, -0.485, -0.478,
0.08, 0.086), efficiency7 = c(0.02, -0.036, -0.585, -0.578,
-0.02, -0.014)), .Names = c("sample_A", "sample_B", "efficiency",
"efficiency2", "efficiency3", "efficiency4", "efficiency5", "efficiency6",
"efficiency7"), row.names = c(NA, 6L), class = "data.frame")
Here's one way to plot your data:
effCis <- grep('^efficiency',names(mydf));
xlim <- c(1,length(effCis));
ylim <- range(mydf[,effCis],na.rm=T);
ylim[1L] <- floor(ylim[1L]/0.1)*0.1;
ylim[2L] <- ceiling(ylim[2L]/0.1)*0.1;
xticks <- seq_along(effCis);
yticks <- seq(ylim[1L],ylim[2L],0.1);
plot(NA,xlim=xlim,ylim=ylim,xlab='measurement',ylab='efficiency',xaxs='i',yaxs='i',axes=F);
abline(v=xticks,col='lightgrey');
abline(h=yticks,col='lightgrey');
abline(h=0,lwd=2);
axis(1L,xticks,xticks,font=2L,cex.axis=0.7);
axis(2L,yticks,sprintf('%.1f',yticks),las=1L,font=2L,cex.axis=0.7);
hybrid.col <- data.frame(hybrid=seq_len(nrow(mydf)),col=c('red','green','blue','gold','cyan','magenta'),stringsAsFactors=F);
splineN <- 200L;
for (ri in seq_len(nrow(hybrid.col))) {
    hybrid <- hybrid.col$hybrid[ri];
    col <- hybrid.col$col[ri];
    x <- xticks;
    y <- c(as.matrix(mydf[hybrid,effCis]));
    points(x,y,pch=16L,col=col,xpd=NA);
    with(spline(x,y,splineN),{
        lines(x,y,col=col,lwd=2,xpd=NA);
        localwin <- which(x>2 & x<3);
        tp <- which.min(abs(diff(y[localwin])));
        if (length(tp)>0L) points(x[localwin[tp]],y[localwin[tp]],col=col,pch=4L);
        localwin <- which(x>2 & x<5);
        tp <- which.min(diff(y[localwin]));
        if (length(tp)>0L) {
            m <- diff(y[localwin[seq(tp,len=2L)]])/diff(x[localwin[seq(tp,len=2L)]]);
            if (is.finite(m)) abline(y[localwin[tp]]-m*x[localwin[tp]],m,col=col,lty=2L);
        };
    });
};
legend(5.5,0.95,paste0(mydf$sample_A,' / ',mydf$sample_B),fill=hybrid.col$col,cex=0.7,title='hybrid');
I wasn't 100% sure what you meant by the asymptote. I initially thought maybe you wanted the local maxima of the curves just prior to where they begin to drop, which is why I marked the local maxima with points (symbol X, i.e. pch=4L). But then I realized maybe you meant the tangent line along the drop, so I added lines tangent to the points of steepest slope.
This is the definition of asymptote:
a straight line approached by a given curve as one of the variables in the equation of the curve approaches infinity.
I don't think that's applicable here; plotting this data does not involve taking anything to infinity. I think you want either the local maxima or tangent lines.
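If you want a number rather than a picture, a minimal check with base R (using the effCis column indices defined above) locates the steepest drop for each hybrid directly:
eff <- as.matrix(mydf[, effCis])
drops <- t(apply(eff, 1, diff))   # change between consecutive efficiency columns
apply(drops, 1, which.min)        # measurement at which each hybrid drops fastest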
