Predicted values with MuMIn throwing an error when full = FALSE

I am running into an error when I try to compute predicted values from a model-averaged object using the MuMIn package's predict.averaging. I have been assured that when the full argument is set to FALSE, the function should return predicted values based on the conditional average coefficients. However, it returns an error. See the example below using the mtcars dataset; it is very similar to my actual setup.
library(MuMIn)
options(na.action = "na.fail")
global.model <- glm(mpg ~ hp + drat + wt, data = mtcars)
dr <- dredge(global.model)
mod.avg <- model.avg(dr, subset = delta < 2, fit = TRUE)
summary(mod.avg)
predict(mod.avg, se.fit = TRUE, full = FALSE)
The error indicates that full is ignored, meaning that the full-model coefficients are used for the predicted values (not what I want). I have confirmed this by some simple manual checking of the values. It is also evident from examining the predict() output: notice how the values jump, suggesting that a coefficient is set to zero or something. It has also been suggested that changing glm to lm will fix the issue, but it does not, at least for me.
Thanks!

Comparing the predictions from the component models with the averaged ones, you can see that the "full averaged" predictions fall within the range of the component predictions (as they should).
The "subset averaged" coefficients, on the other hand, produce predictions that are quite biased: the effects are inflated because the zero coefficients are ignored when calculating the mean.
# Full/subset averaged predictions
pyfa <- predict(mod.avg, full = TRUE)
pysa <- predict(mod.avg, full = FALSE)
# Note: full=FALSE works only with se.fit=FALSE
# Predictions from component models
pycm <- do.call("cbind", lapply(get.models(mod.avg, TRUE), predict))
n <- ncol(pycm)
k <- rep(1:3, c(n, 1, 1))
lty <- c(2,1,1); lwd <- c(1,2,2); col <- c(3,1,2)
matplot(cbind(pycm, pyfa, pysa), type = "l",
lty = lty[k], lwd = lwd[k], col = col[k],
ylab = "predicted")
legend("topleft", legend = c("component", "full average", "subset average"),
lty = lty, lwd = lwd, col = col)

ROC Curve Ranger

I am trying to calculate a ROC curve and the AUC using ranger for a binary classification problem (0 and 1), where the response variable is called BiClass.
Suppose I split a data frame into Train_Set and Test_Set (75% and 25%, respectively) and compute binary class probabilities using:
library(ranger)
library(ROCR)
library(mlr)
library(pROC)
library(tidyverse)
Biclass.ranger <- ranger(BiClass ~ ., data = Train_Set, num.trees = 500, importance = "impurity", save.memory = TRUE, probability = TRUE)
pred <- predict(Biclass.ranger, data = Test_Set, num.trees = 500, type = 'response', verbose = TRUE)
My intention is now to compute the ROC curve (and the AUC). I tried the following code, which gives me a ROC curve (using the ROCR and mlr packages):
pred_object <- prediction(pred$predictions[,2], Test_Set$BiClass)
per_measure <- performance(pred_object, "tnr", "fnr")
plot(per_measure, col="red", lwd=1)
abline(a=0,b=1,lwd=1,lty=1,col="gray")
Or, alternatively, using the pROC package:
probabilities <- as.data.frame(predict(Biclass.ranger, data = Test_Set, num.trees = 500, type='response', verbose = TRUE)$predictions)
probabilities$predic <- colnames(probabilities)[max.col(probabilities,ties.method="first")] # For each row, return the column name of the largest value from 0 and 1 columns (prediction column). This will be a character type
probabilities$prednum <- as.numeric(as.character(probabilities$predic)) # create prednum as a numeric data type in probabilities
probabilities <- dplyr::mutate_if(probabilities, is.character, as.factor) # convert character to factor
probabilities <- cbind(probabilities,BiClass=Test_Set$BiClass) # append BiClass. This data frame contains the response variable from the Test_Data, along with prediction (prednum) and probability classes (0 and 1)
ROC_ranger <- pROC::roc(Test_Set$BiClass, pred$predictions[, 2])
plot(ROC_ranger, col = "blue", main = "ROC - Ranger")
paste("Accuracy % of ranger: ", mean(Test_Set$BiClass == round(pred$predictions[,2], digits = 0))) # print the performance of each model
The ROC curve obtained is given below:
I have the following questions:
1) How can I set a threshold value and plot confusion matrix for the set threshold?
I compute the confusion matrix presently using:
library(caret)  # provides confusionMatrix()
probabilities <- as.data.frame(predict(Biclass.ranger, data = Test_Set, num.trees = 500, type = 'response', verbose = TRUE)$predictions)
max.col(probabilities) - 1  # predicted class (0/1): the larger probability wins, i.e. a 0.5 cutoff
confusionMatrix(table(Test_Set$BiClass, max.col(probabilities) - 1))
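For illustration, setting an explicit cutoff instead could look like the sketch below (the 0.6 threshold is purely hypothetical, and the second column of pred$predictions is assumed to hold the class-1 probabilities, as above):
thresh <- 0.6
pred_class <- ifelse(pred$predictions[, 2] > thresh, 1, 0)
confusionMatrix(table(Test_Set$BiClass, pred_class))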
2) How do I calculate the optimal threshold value (the global value at which I get more true positives or true negatives) through optimization?
Again, referring to pROC and the guidelines proposed by its author:
myroc <- pROC::roc(probabilities$BiClass, probabilities$`1`)
mycoords <- pROC::coords(myroc, "all", transpose = FALSE)
plot(mycoords$threshold, mycoords$specificity, type="l", col="red", xlab="Cutoff", ylab="Performance")
lines(mycoords$threshold, mycoords$sensitivity, type="l", col="blue")
legend(0.23,0.2, c("Specificity", "Sensitivity"), col=c("red", "blue"), lty=1)
best.coords <- coords(myroc, "best", best.method="youden", transpose = FALSE)
abline(v=best.coords$threshold, lty=2, col="grey")
abline(h=best.coords$specificity, lty=2, col="red")
abline(h=best.coords$sensitivity, lty=2, col="blue")
I was able to draw this curve using the Youden index:
Does it mean there isn't much freedom to vary the threshold and trade off specificity against sensitivity, since the dashed blue and red lines are not far from each other?
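As a cross-check (a sketch reusing best.coords and probabilities from above), the confusion matrix at the Youden-optimal cutoff can be tabulated directly:
opt <- best.coords$threshold
pred_opt <- ifelse(probabilities$`1` > opt, 1, 0)
table(observed = probabilities$BiClass, predicted = pred_opt)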
3) How do I evaluate the AUC?
I calculated AUC using pROC again following the guidelines of its author. See below:
ROC_ranger <- pROC::roc(probabilities$BiClass, probabilities$`1`)
ROC_ranger_auc <- pROC::auc(ROC_ranger)
paste("Area under curve of random forest: ", ROC_ranger_auc) # AUC of the model
The final goal is to increase the true negatives (defined by 1 in BiClass) and, of course, the true positives (defined by 0 in BiClass) in the confusion matrix. At present, the accuracy of my classification algorithm is 0.74 and the AUC is 0.81.

Calculating piecewise quantile linear regression with segmented package R

I am looking for a way to fit a piecewise quantile linear regression in R. I have been able to compute a quantile regression with the quantreg package. However, I don't want just one single slope; I want to check for breakpoints in my dataset. I have seen that the segmented package can do so. While it works well if the fit is carried out with lm or glm (as shown in the example below), I can't get it to work for quantile regression.
In the segmented package documentation I read that there is a segmented.default method that can be used with other regression models, such as quantile regression. However, when I apply it to my quantile fit it gives me the following errors:
Error in diag(vv) : invalid 'nrow' value (too large or NA)
In addition: Warning message:
cannot compute the covariance matrix
If instead of using K=2 I use, for example, psi, I get a different error:
Error in rq.fit.br(x, y, tau = tau, ...) : Singular design matrix
I have created an example with the mtcars data so you can see the errors that I get.
library(quantreg)
library(segmented)
data(mtcars)
out.rq <- rq(mpg ~ wt, data= mtcars)
out.lm <- lm(mpg ~ wt, data= mtcars)
# Plotting the results
plot(mpg ~ wt, data = mtcars, pch = 1, main = "mpg ~ wt")
abline(out.lm, col = "red", lty = 2)
abline(out.rq, col = "blue", lty = 2)
legend("topright", legend = c("linear", "quantile"), col = c("red", "blue"), lty = 2)
#Generating segmented LM
o <- segmented(out.lm, seg.Z= ~wt, npsi=2, control=seg.control(display=FALSE))
plot(o, lwd=2, col=2:6, main="Segmented regression", res=FALSE) #lwd: line width #col: from 2 to 6 #RES: show datapoints
#Generating segmented Quantile
#using K=2
o.quantile <- segmented.default(out.rq, seg.Z= ~wt, control=seg.control(display=FALSE, K=2))
# using psi
o.quantile <- segmented.default(out.rq, seg.Z= ~wt, psi=list(wt=c(2,4)), control=seg.control(display=FALSE))
I came across this post after a long time because I have the same issue. Just in case others might be stuck with the problem in the future, I wanted to point out what the problem is.
I examined "segmented.default". There is a line in the source code as follows:
Cov <- try(vcov(objF), silent = TRUE)
vcov is used to calculate the covariance matrix, but it does not work for the quantile regression object objF. To get the covariance matrix for a quantile regression, you need:
summary(objF,se="boot",cov=TRUE)$cov
Here I used the bootstrap method to compute the covariance matrix by selecting se="boot", but you should choose the method that is appropriate for your case. See the "se" section of ?summary.rq for the available methods.
Additionally, you need to assign the row/column names as follows:
dimnames(Cov)[[1]] <- dimnames(Cov)[[2]] <- unlist(attributes(objF$coef))
After modifying the function, it worked for me.
The other answer isn't particularly clean, as it requires modifying a package function.
Additionally, maybe boot isn't such a good idea for SEs, according to this answer.
To get it working a bit easier, add a function to your workspace:
vcov.rq <- function(object, ...) {
  # covariance matrix of an rq fit, with dimnames attached so segmented() can use it
  result <- summary(object, se = "nid", covariance = TRUE)$cov
  rownames(result) <- colnames(result) <- names(coef(object))
  return(result)
}
Caveats from the Cross-Validated link apply.
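With vcov.rq() defined in the workspace, the idea is that vcov(out.rq) now dispatches to it and segmented.default() can build the covariance matrix it needs. A minimal sketch, reusing out.rq from the question:
o.quantile <- segmented.default(out.rq, seg.Z = ~wt,
                                psi = list(wt = c(2, 4)),
                                control = seg.control(display = FALSE))
summary(o.quantile)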

Copula result in R

I have a table with two columns; it consists of an already computed index for two variables. A sample is quoted below:
V1, V2
0.46,1.08
0.84,1.05
-0.68,0.93
-0.99,0.68
-0.87,0.30
-1.08,-0.09
-1.16,-0.34
-0.61,-0.43
-0.65,-0.48
0.73,-0.48
To find out the correlation between the two variables, I am using the copula package in R.
I used the following VineCopula code to figure out which copula family to use:
library(VineCopula)
u <- pobs(mydata$V1)  # pseudo-observations of V1
v <- pobs(mydata$V2)  # pseudo-observations of V2
selectedCopula <- BiCopSelect(u, v, familyset = NA)
selectedCopula
It suggested the survival Gumbel, the rotated version of the Gumbel copula according to the copula R manual (Link).
However, I chose the Frank copula, since it offers a symmetric dependence structure and permits modeling both positive and negative dependence in the data. How plausible is that?
One more thing: after running the following, self-explanatory copula code:
# Estimate V1 distribution parameters and visually compare simulated vs observed data
x_mean <- mean(mydata$V1)
#Normal Distribution
hist(mydata$V1, breaks = 20, col = "green", density = 30)
hist(rnorm( nrow(mydata), mean = x_mean, sd = sd(mydata$V1)),
breaks = 20,col = "blue", add = T, density = 30, angle = -45)
# Same for V2
y_mean <- mean(mydata$V2)
#Normal Distribution
hist(mydata$V2, breaks = 20, col = "green", density = 30)
hist(rnorm(nrow(mydata), mean = y_mean,sd = sd(mydata$V2)),
breaks = 20, col = "blue", add = T, density = 30, angle = -45)
# Measure association using Kendall's Tau
cor(mydata, method = "kendall")
#Fitting process with copula choice
# Estimate copula parameters
cop_model <- frankCopula(dim = 2)
m <- pobs(as.matrix(mydata))
fit <- fitCopula(cop_model, m, method = 'ml')
coef(fit)
# Check Kendall's tau for the Frank copula with parameter 3.236104
tau(frankCopula(param = 3.23))
#Building the bivariate distribution using frank copula
# Build the bivariate distribution
sdx <- sd(mydata$V1)
sdy <- sd(mydata$V2)
my_dist <- mvdc(frankCopula(param = 3.23, dim = 2), margins = c("norm", "norm"),
                paramMargins = list(list(mean = x_mean, sd = sdx),
                                    list(mean = y_mean, sd = sdy)))
# Generate 439 random sample observations from the multivariate distribution
v <- rMvdc(439, my_dist)
# Compute the density
pdf_mvd <- dMvdc(v, my_dist)
# Compute the CDF
cdf_mvd <- pMvdc(v, my_dist)
# Sample 439 observations from the distribution
sim <- rMvdc(439,my_dist)
# Plot the data for a visual comparison
plot(mydata$V1, mydata$V2, main = 'Test dataset x and y', col = "blue")
points(sim[,1], sim[,2], col = 'red')
legend('bottomright', c('Observed', 'Simulated'), col = c('blue', 'red'), pch=21)
The plotted data set shows a good fit, even for extreme values.
Here, I want to present the correlated values obtained from the Frank copula together with my original data in the same line graph, but I could not figure out how to extract the Frank copula results. (Ideally as a single column, so I can plot it with the original data and have a visual comparison.)
I am not sure I correctly understand your questions. However, if you want the copula data (generated from the Frank copula), they are stored in sim. If you are asking for Kendall's tau, it is stored in the fitted copula object (fit). You cannot have the Frank copula data as a single column, since it must be a matrix with one column per variable. Also, the pobs function already returns a matrix, so you do not need to wrap it in as.matrix. If you need more help, I am happy to help.
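If the goal is a line-graph comparison rather than a scatter plot, one possibility (a sketch assuming mydata and sim from the question) is to sort both the observed and the simulated pairs by the first variable and overlay them:
obs <- mydata[order(mydata$V1), ]
simu <- sim[order(sim[, 1]), ]
plot(obs$V1, obs$V2, type = "l", col = "blue", xlab = "V1", ylab = "V2",
     main = "Observed vs simulated (Frank copula)")
lines(simu[, 1], simu[, 2], col = "red")
legend("bottomright", c("Observed", "Simulated"), col = c("blue", "red"), lty = 1)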

R smooth.spline(): smoothing spline is not smooth but overfitting my data

I have several data points that seem suitable for fitting a spline through. When I do this, I get a rather bumpy fit, as if it were overfitting, which is not what I understand as smoothing.
Is there a special option / parameter for getting back a really smooth spline function, like here?
Using the penalty parameter of smooth.spline didn't have any visible effect. Maybe I did it wrong?
Here are data and code:
results <- structure(
list(
beta = c(
0.983790622281964, 0.645152464354322,
0.924104713597375, 0.657703886566088, 0.788138034115623, 0.801080207252363,
1, 0.858337365965949, 0.999687052533693, 0.666552625121279, 0.717453633245958,
0.621570152961453, 0.964658181346544, 0.65071758770312, 0.788971505000918,
0.980476054183113, 0.670263506919246, 0.600387040967624, 0.759173403408052,
1, 0.986409675965, 0.982996471134736, 1, 0.995340781899163, 0.999855895958986,
1, 0.846179233381267, 0.879226324448832, 0.795820998892035, 0.997586607285667,
0.848036806290156, 0.905320944437968, 0.947709125535428, 0.592172373022407,
0.826847031044922, 0.996916006944244, 0.785967729206612, 0.650346929853076,
0.84206351833549, 0.999043126652724, 0.936879214753098, 0.76674066557003,
0.591431233516217, 1, 0.999833445117791, 0.999606223666537, 0.6224971799303,
1, 0.974537160571494, 0.966717133936379
), inventoryCost = c(
1750702.95138889,
442784.114583333, 1114717.44791667, 472669.357638889, 716895.920138889,
735396.180555556, 3837320.74652778, 872873.4375, 2872414.93055556,
481095.138888889, 538125.520833333, 392199.045138889, 1469500.95486111,
459873.784722222, 656220.486111111, 1654143.83680556, 437511.458333333,
393295.659722222, 630952.170138889, 4920958.85416667, 1723517.10069444,
1633579.86111111, 4639909.89583333, 2167748.35069444, 3062420.65972222,
5132702.34375, 838441.145833333, 937659.288194444, 697767.1875,
2523016.31944444, 800903.819444444, 1054991.49305556, 1266970.92013889,
369537.673611111, 764995.399305556, 2322879.6875, 656021.701388889,
458403.038194444, 844133.420138889, 2430700, 1232256.68402778,
695574.479166667, 351348.524305556, 3827440.71180556, 3687610.41666667,
2950652.51736111, 404550.78125, 4749901.64930556, 1510481.59722222,
1422708.07291667
)
), .Names = c("beta", "inventoryCost"), class = c("data.frame")
)
plot(results$beta,results$inventoryCost)
mySpline <- smooth.spline(results$beta,results$inventoryCost, penalty=999999)
lines(mySpline$x, mySpline$y, col="red", lwd = 2)
Transform your data sensibly before modelling
Based on the scale of your results$inventoryCost, a log transform is appropriate. For simplicity, in the following I use x and y. I also reorder your data so that x is ascending:
x <- results$beta; y <- log(results$inventoryCost)
reorder <- order(x); x <- x[reorder]; y <- y[reorder]
par(mfrow = c(1,2))
plot(x, y, main = "take log transform")
hist(x, main = "x is skewed")
Doesn't the left figure look better? It is also highly recommended to further transform x, because it is skewed (see the right figure).
The following transform is appropriate:
x1 <- -(1-x)^(1/3)
The cube root of (1 - x) spreads the data out around x = 1. I added a minus sign so that x and x1 have a positively monotonic relationship rather than a negative one. Now let's check the relationship:
par(mfrow = c(1,2))
plot(x1, y, main = expression(y %~% ~ x1))
hist(x1, main = "x1 is well spread out")
Fitting a spline
Now we are ready for statistical modelling. Try the following call:
fit <- smooth.spline(x1, y, nknots = 10)
pred <- stats:::predict.smooth.spline(fit, x1)$y ## predict at all x1
## or you can simply call: pred <- predict(fit, x1)$y
plot(x1, y) ## scatter plot
lines(x1, pred, lwd = 2, col = 2) ## fitted spline
Does it look nice? Note that nknots = 10 tells smooth.spline to place 10 interior knots (at quantiles); we are therefore fitting a penalized regression spline rather than a smoothing spline. In fact, smooth.spline() almost never fits a true smoothing spline unless you set all.knots = TRUE (see the later example).
I also dropped penalty = 999999, as that has nothing to do with smoothness control. If you really want to control the smoothness yourself, rather than letting smooth.spline pick the optimal amount by GCV, use the df or spar argument; I give an example later.
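For reference, a minimal sketch of what such manual control could look like (the particular values are arbitrary, not recommendations):
fit_df   <- smooth.spline(x1, y, nknots = 10, df = 6)       # fix the effective degrees of freedom
fit_spar <- smooth.spline(x1, y, nknots = 10, spar = 0.6)   # larger spar gives a smoother fit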
To transform fit back to original scale, do:
plot(x, exp(y), main = expression(Inventory %~%~ beta))
lines(x, exp(pred), lwd = 2, col = 2)
As you can see, the fitted spline is as smooth as you had expected.
Explanation on fitted spline
Let's see the summary of your fitted spline:
> fit
Smoothing Parameter spar= 0.4549062 lambda= 0.0008657722 (11 iterations)
Equivalent Degrees of Freedom (Df): 6.022959
Penalized Criterion: 0.08517417
GCV: 0.004288539
We used 10 knots, ending up with about 6 degrees of freedom, so the penalization suppresses roughly 4 parameters. The smoothing parameter chosen by GCV, after 11 iterations, is lambda = 0.0008657722.
Why do we have to transform x to x1
A spline is penalized by its 2nd derivative, but the penalty acts on the averaged/integrated 2nd derivative over all data points. Now look at your data (x, y). For x below 0.98 the relationship is relatively steady; as x approaches 1 it quickly becomes steeper. The "change point" at 0.98 has a very high second derivative, much higher than the second derivatives elsewhere.
y0 <- as.numeric(tapply(y, x, mean)) ## remove tied values
x0 <- unique(x) ## remove tied values
dy0 <- diff(y0)/diff(x0) ## 1st order difference
ddy0 <- diff(dy0)/diff(x0[-1]) ## 2nd order difference
plot(x0[1:43], abs(ddy0), pch = 19)
Look at that huge spike in 2nd order difference/derivative! Now, if we fit a spline directly, the spline curve around this change point will be heavily penalized.
bad <- smooth.spline(x, y, all.knots = TRUE)
bad.pred <- predict(bad, x)$y
plot(x, exp(y), main = expression(Inventory %~% ~ beta))
lines(x, exp(bad.pred), col = 2, lwd = 3)
abline(v = 0.98, lwd = 2, lty = 2)
You can see clearly that the spline is having some difficulty in approximating data after x = 0.98.
There are of course ways to achieve a better approximation after this change point, for example by manually setting a smaller smoothing parameter or a higher degree of freedom. But then we go to the other extreme. Remember, both the penalization and the degrees of freedom are global measures. Increasing model complexity gives a better approximation after x = 0.98, but also makes the other parts more bumpy. Now let's try a model with 45 degrees of freedom:
worse <- smooth.spline(x, y, all.knots = TRUE, df = 45)
worse.pred <- predict(worse, x)$y
plot(x, exp(y), main = expression(Inventory %~% ~ beta))
lines(x, exp(worse.pred), col = 2, lwd = 2)
As you can see, the curve is bumpy. Of course it is: we have overfitted our dataset of 50 points with 45 degrees of freedom.
In fact, your original misuse of smooth.spline() is doing the same thing:
> mySpline
Call:
smooth.spline(x = results$beta, y = results$inventoryCost, penalty = 999999)
Smoothing Parameter spar= -0.8074624 lambda= 3.266077e-19 (17 iterations)
Equivalent Degrees of Freedom (Df): 45
Penalized Criterion: 5.598386
GCV: 0.03824885
Oops, 45 degrees of freedom: overfitting!
I don't think you should use / want splinefun. I would suggest fitting a GAM instead:
library(mgcv)
fit <- gam(inventoryCost ~ s(beta, bs = "cr", k = 20), data = results)
summary(fit)
gam.check(fit)
plot(fit)
plot(inventoryCost ~ beta, data = results, col = "dark red", pch = 16)
curve(predict(fit, newdata = data.frame(beta = x)), add = TRUE,
      from = min(results$beta), to = max(results$beta), n = 1e3, lwd = 2)

R: Plotting "Actual vs. Fitted"

I have a question about plotting the actual data of a time series together with the values from a fitted model. In particular, my questions relate to this paper:
https://static.googleusercontent.com/media/www.google.com/en//googleblogs/pdfs/google_predicting_the_present.pdf
In the appendix of the document you can find an R script. Here I have two initial questions: (1) What does
##### Define Predictors - Time Lags;
dat$s1 = c(NA, dat$sales[1:(nrow(dat)-1)]);
dat$s12 = c(rep(NA, 12), dat$sales[1:(nrow(dat)-12)]);
do and what is the function of:
##### Divide data by two parts - model fitting & prediction
dat1 = mdat[1:(nrow(mdat)-1), ]
dat2 = mdat[nrow(mdat), ]
Final and main question: let's say I fit the following model to my data:
fit = lm(log(sales) ~ log(s1) + log(s12) + trends1, data=dat1);
summary(fit)
The adjusted R-squared value is 0.342, so I'd argue that the model above explains roughly 34% of the variance in the actual data. Now, how can I plot this fitted model so that I get something like the figure in the paper?
I assume the second graph's "fitted" line is actually the data from the estimated model, right? If so, then this part seems to be missing from the script.
Thanks a lot!
EDIT 1:
Tried this:
# Actual values and fitted values
plot(sales ~ month, data= dat1, col="blue", lwd=1, type="l", xaxt = "n", xaxs="r",yaxs="r", xlab="", ylab="Total Sales");
par(new=TRUE)
plot(fitted(fit) ~ month, data= dat1, col="red", lwd=1, type="l", xaxs="r", yaxs="r", yaxt = "n", xlab="Month", ylab="Index", xaxt="n");
axis(4)
Output: Error in (function (formula, data = NULL, subset = NULL, na.action = na.fail, : variable lengths differ (found for 'month')
dat$s1 = c(NA, dat$sales[1:(nrow(dat)-1)])
This creates a new column s1 containing the sales values shifted by one position: the first element is NA and the last value of sales is dropped. In other words, s1 is sales lagged by one period.
dat$s12 = c(rep(NA, 12), dat$sales[1:(nrow(dat)-12)])
Create the s12 column with 12 NAs followed by the first nrow(dat) - 12 values of dat$sales, i.e. sales lagged by 12 periods.
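To make the lag construction concrete, here is a tiny illustration on a hypothetical five-value series:
sales <- c(10, 12, 15, 11, 13)               # hypothetical example data
s1 <- c(NA, sales[1:(length(sales) - 1)])    # lag 1: previous period's value
rbind(sales, s1)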
dat1 = mdat[1:(nrow(mdat)-1), ]
dat2 = mdat[nrow(mdat), ]
dat1 is everything except the last observation (row); dat2 is only the last row. When predicting the response (sales), you only need to supply a data.frame containing at least the columns that appear on the right-hand side of the formula (the explanatory variables), here s1, s12 and trends1, as the newdata argument to the predict() function. This is where dat2 is used.
predict.fit = predict(fit, newdata=dat2, se.fit=TRUE)
This next line fits a model using dat1.
fit = lm(log(sales) ~ log(s1) + log(s12) + trends1, data=dat1)
fitted(fit) will give you the fitted values. Try predict(fit) and compare whether it is any different.
The semicolons at the end of each statement are redundant.
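Regarding the plotting error in EDIT 1: lm() drops the rows where the lagged predictors are NA, so fitted(fit) is shorter than dat1$month. A sketch of one way to align and overlay them (column names follow the question; exp() back-transforms the fitted values from the log scale, ignoring retransformation bias):
used <- rownames(model.frame(fit))           # rows actually used in the fit (NA rows dropped)
plot(sales ~ month, data = dat1, type = "l", col = "blue", xlab = "Month", ylab = "Total Sales")
lines(dat1[used, "month"], exp(fitted(fit)), col = "red", lwd = 2)
legend("topleft", c("Actual", "Fitted"), col = c("blue", "red"), lty = 1)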
