I am currently fitting a binomial probit GLM in R.
For the analysis of interaction effects, I use the effects package. I want to plot different interactions, with one of the interacting variables held constant at a fixed level. I do this for several values of that variable to see how the effect evolves.
I use the following calls:
plot(effect("varL", hx1, given.values = c("varP" = 0.7)))
plot(effect("varL", hx1, given.values = c("varP" = 0.1)))
However, to compare the plots graphically, the y-axis should be the same for all of them, which is not the case: when plotting for different varP values, the y-axis changes its range.
When I specify ylim, the plot is also incorrect and shows a completely different segment than the one specified.
I tried what was recommended in this post (Scaling axis in a GLM plot with library "effects"), but it resulted in an error message:
plot(effect("varL",hx1,given.values=c("varP"=0.7)), ylim = qlogis(c(0, 0.20)))
Error in valid.viewport(x, y, width, height, just, gp, clip, xscale, yscale, :
invalid 'yscale' in viewport
Now my question: how can I set the y-axis when plotting interaction effects with the effects package for a probit GLM? I am fairly sure the problem is that ylim takes the values as specified, without transforming them onto the model's link scale. qlogis is the right transformation for a logit link but not for probit, where the analogue would be qnorm; in addition, qlogis(0) is -Inf, which would explain the invalid 'yscale' error above.
Below is some code to replicate the issue. You can see that the y-axis "jumps around", which is what I want to avoid.
install.packages("effects")
require(effects)
varL <- rnorm(100, mean = 1000, sd = 10)
varP <- rnorm(100, mean = 5)
entry <- as.factor(sample(0:1, 100, replace = TRUE))
dat <- data.frame(varL, varP, entry)
hx1 <- glm(entry ~ varL*varP, data = dat, family = binomial(link = "probit"))
plot(effect("varL", hx1, given.values = c("varP" = min(dat$varP))))
plot(effect("varL", hx1, given.values = c("varP" = max(dat$varP))))
Here are the plots with the "jumping" y-axes:
I had a similar problem with logistic regression, actually, and I used the advice from Change the y axis on Effect plot in R.
Basically, all I needed to do was add the argument rescale.axis = FALSE in addition to ylim = c(0, 1).
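A rough sketch of how that advice applies to the probit example above (using the hx1 model and dat from the reproducible code; rescale.axis is the argument name in older versions of effects and may have moved or been renamed in newer ones):
plot(effect("varL", hx1, given.values = c("varP" = min(dat$varP))),
     ylim = c(0, 1), rescale.axis = FALSE)
plot(effect("varL", hx1, given.values = c("varP" = max(dat$varP))),
     ylim = c(0, 1), rescale.axis = FALSE)
With rescale.axis = FALSE the y-axis is drawn directly on the probability scale, so both plots share the same 0-1 range and can be compared.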
Related
I've plotted the response curves for each of my predictors against all of the predicted values to determine how each predictor influences my counts. However, I also want to plot the binary part of my zero-inflated model to see how the predictors in that part help explain the probability of false zeroes. I am trying to get a plot similar to the one at the bottom of the page linked below; however, they don't provide reproducible code for that example.
https://fukamilab.github.io/BIO202/04-C-zero-data.html#sketch_fitted_and_predicted_values
I've included some code below with my zero-inflated model and the predictors used. I then use the predict function to predict estimates for a much larger raster grid (new.data), and I want to see the relationship between those predicted values and the predictors across the entire raster grid.
library(pscl)  # for zeroinfl()
mod1 <- zeroinfl(Response ~ x1 + x2 | x1, link = "logit", dist = "negbin", data = data)
modpred <- predict(mod1, new.data, se.fit = TRUE, type = "response")
response1 <- ggplot(data, aes(x = x1, y = modpred)) +
  geom_point() +
  geom_smooth(data = data, aes(x = x1, y = modpred))
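A rough sketch of one way to look at the zero (binary) component, assuming new.data contains the x1 column used in the model and relying on pscl's predict method accepting type = "zero" for the zero-component probability:
library(ggplot2)
zero_prob <- predict(mod1, new.data, type = "zero")   # predicted probability of a (false) zero
zero_df <- data.frame(x1 = new.data$x1, p0 = zero_prob)
ggplot(zero_df, aes(x = x1, y = p0)) +
  geom_point(alpha = 0.3) +
  geom_smooth() +
  labs(y = "Predicted probability of a false zero")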
I am trying to learn gam() in R for a logistic regression using a spline on a predictor. The two methods of plotting in my code give the same shape but different ranges of response on the logit scale; it seems like an intercept is missing in one. Both are supposed to be correct, so why the difference in range?
library(ISLR)
attach(Wage)
library(gam)
gam.lr <- gam(I(wage > 250) ~ s(age), family = binomial(link = "logit"), data = Wage)
agelims <- range(age)
age.grid <- seq(from = agelims[1], to = agelims[2])
pred <- predict(gam.lr, newdata = list(age = age.grid), type = "link")
par(mfrow = c(2, 1))
plot(gam.lr)
plot(age.grid, pred)
I expected both methods to give exactly the same plot. plot(gam.lr) plots the additive effect of each component, and since there is only one component here, it should give the predicted logit function. The predict method also gives estimates on the link scale. But the actual outputs are on different ranges: the minimum of the first method is about -4 while that of the second is less than -7.
The first plot is of the estimated smooth function s(age) only. Smooths are subject to identifiability constraints because, in the basis expansion used to parametrise the smooth, there is a function (or combination of functions) that is entirely confounded with the intercept. As such, you can't fit the smooth and an intercept in the same model: you could subtract some value from the intercept and add it back to the smooth and have the same fit but different coefficients. Since you can add and subtract an infinity of values, you have an infinite supply of models, which isn't helpful.
Hence identifiability constraints are applied to the basis expansions, and the one that is most useful is to ensure that the smooth sums to zero over the range of the covariate. This involves centering the smooth at 0, with the intercept then representing the overall mean of the response.
So, the first plot is of the smooth, subject to this sum to zero constraint, so it straddles 0. The intercept in this model is:
> coef(gam.lr)[1]
(Intercept)
-4.7175
If you add this to the values in the first plot, you get the values in the second plot, which is the application of the full model to the data you supplied: intercept + f(age).
This is all also happening on the link scale, the log odds scale, hence all the negative values.
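As a quick check (a sketch using the objects defined in the question), subtracting the intercept from the link-scale predictions should reproduce the centred smooth that plot(gam.lr) draws:
intercept <- coef(gam.lr)[1]                  # about -4.72 here
par(mfrow = c(2, 1))
plot(gam.lr)                                  # centred smooth s(age), straddles 0
plot(age.grid, pred - intercept, type = "l",  # same curve once the intercept is removed
     xlab = "age", ylab = "s(age)")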
I'm pretty new to R and am trying to analyse some data and fit a Gaussian to it using the ggplot2 package.
I am able to plot a smooth curve using geom_smooth and the results are as expected. However, using geom_density (see code below) the result is not as expected.
ggplot(All_Wavelengths_LabVIEW_selected_)+
geom_smooth(mapping = aes(Actual_Wavelength, B), se = FALSE)+
geom_density(kernel = "gaussian", Actual_Wavelength, B)
Instead of a Gaussian fit, I get:
'Error in fortify(data) : object 'B' not found'
I don't understand how this can occur, given that it uses B to plot the smooth curve without any issue.
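For comparison, wrapping the variables in aes() avoids the error, although geom_density then only describes the distribution of a single variable, so it does not seem to fit a Gaussian through B versus Actual_Wavelength:
library(ggplot2)
ggplot(All_Wavelengths_LabVIEW_selected_, aes(Actual_Wavelength, B)) +
  geom_smooth(se = FALSE)            # works: both variables are mapped inside aes()
ggplot(All_Wavelengths_LabVIEW_selected_, aes(x = B)) +
  geom_density()                     # a density of B alone, not a fit of B ~ wavelength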
In addition, I would like to do the following:
Extract FWHM value of the peak
Overlay multiple of these Gaussian fits for other sets of data (similar to B) with the same X axis
Is this possible?
Any help on this would be greatly appreciated.
I'm trying to plot the fitted curve from a non-linear mixed model. It should look something like a normal-distribution curve, but skewed to the right. I followed previous links here and here, but with my data I cannot make it work because of several difficulties (see below).
Here is the dataset
and code
s <- read.csv("GRVMAX tadpoles.csv")
t <- s[s$SPP == levels(s$SPP)[1], ]
head(t)
vmax <- t[t$PERFOR == "VMAX", ]
colnames(vmax)[6] <- "vmax"
vmax$TEM <- as.numeric(as.character(vmax$TEM))
require(lme4)
start <- c(TEM = 25)
is.numeric(start)
nm1 <- nlmer(vmax ~ deriv(TEM) ~ TEM | INDIVIDUO, nlpars = start, nAGQ = 0, data = vmax)
# this gives an error suggesting nlpars is not numeric, even though start is numeric
After that, I want to plot the curve over the original data
with(vmax, plot(vmax ~ TEM))
x <- vmax$TEM
lines(x, predict(nm1, newdata = data.frame(TEM = x, INDIVIDUO = "ACI5")))
Any hint?
Thanks in advance
I have a scatter plot of a dataset and I am interested in calculating the upper bound of the data. I don't know whether there is a standard statistical approach for this, so what I was considering was splitting the x-axis data into small ranges, calculating the max for each range, and then trying to identify a function to describe these points. Is there already a function in R to do this?
If it's relevant, there are 92,611 points.
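For concreteness, a rough sketch of the binned-maximum idea just described (with a hypothetical data frame dat containing columns x and y, and an arbitrary choice of 50 windows):
breaks <- seq(min(dat$x), max(dat$x), length.out = 51)
bins <- cut(dat$x, breaks, include.lowest = TRUE)
bin_max <- tapply(dat$y, bins, max)                    # maximum y within each x window
bin_mid <- (head(breaks, -1) + tail(breaks, -1)) / 2   # window mid-points
plot(y ~ x, data = dat, cex = 0.3)
points(bin_mid, bin_max, col = "red", pch = 19)        # candidate upper-bound points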
You might like to look into quantile regression, which is available in the quantreg package. Whether this is useful will depend on whether you want the absolute maximum within your "windows" or whether some extreme quantile, say the 95th or 99th, is acceptable. If you are not familiar with quantile regression, consider that ordinary linear regression fits a model for the expectation, or mean response, conditional upon the model covariates; quantile regression for the middle quantile (0.5) would fit a model to the median response, conditional upon the model covariates.
Here is an example using the quantreg package, to show you what I mean. First, generate some dummy data similar to the data you show:
set.seed(1)
N <- 5000
DF <- data.frame(Y = rev(sort(rlnorm(N, -0.9))) + rnorm(N),
                 X = seq_len(N))
plot(Y ~ X, data = DF)
Next, fit the model to the 99th percentile (or the 0.99 quantile):
library(quantreg)
mod <- rq(Y ~ log(X), data = DF, tau = 0.99)
To generate the "fitted line", we predict from the model at 100 equally spaced values of X:
pDF <- data.frame(X = seq(1, 5000, length = 100))
pDF <- within(pDF, Y <- predict(mod, newdata = pDF))
and add the fitted model to the plot:
lines(Y ~ X, data = pDF, col = "red", lwd = 2)
This should give you this:
I would second Gavin's nomination of quantile regression. Your data might be simulated with X and Y each log-normally distributed. You can see what a plot of the joint distribution of two independent log-normal variates looks like (no imposed correlation, though not necessarily cor(x, y) == 0) if you run:
x <- rlnorm(1000, log(300), sdlog = 1)
y <- rlnorm(1000, log(7), sdlog = 1)
plot(x, y, cex = 0.3)
You might consider looking at their individual distributions with qqplot (in the base plotting functions), remembering that the tails of such distributions can behave in surprising ways. You should be more interested in how well the bulk of the values fits a particular distribution than in the extremes ... unless, of course, your applications are in finance or insurance. We don't want another global financial crisis because of poor modelling assumptions about tail behaviour, now do we?
qqplot(x, rlnorm(10000, log(300), sdlog = 1))