Issues plotting count distribution with distplot() - r

I have count data. I'm trying to document my decision to use a negative binomial distribution rather than Poisson (I couldn't fit a quasi-Poisson model in lme4), and I'm running into graphical issues (the vector is linked at the end of the post).
I've been trying to use the distplot() function (from the vcd package) to inform my decision about which distribution to fit. Here's the outcome variable (physician count):
plot(d1.2$totalmds)
This might look Poisson, but the mean and variance aren't close (the variance is doubled by two extreme values, yet even so it is nowhere near the mean):
> var(d1.2$totalmds, na.rm = T)
[1] 114240.7
> mean(d1.2$totalmds, na.rm = T)
[1] 89.3121
My outcome is partly population driven, so I'm using the total population as an offset variable in preliminary models. As I understand it, this adds the natural log of the offset to the linear predictor, so what is effectively being modeled is the rate totalmds/poptotal rather than the raw count.
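For concreteness, here is a minimal sketch of what I mean (glm.nb() from MASS, purely for illustration; 'predictor' is a hypothetical stand-in for my real covariates):

library(MASS)
# The offset enters the linear predictor as log(poptotal), so the
# exponentiated coefficients describe effects on the rate totalmds/poptotal
m_nb <- glm.nb(totalmds ~ predictor + offset(log(poptotal)), data = d1.2)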
But when I try to check the fit of the raw counts using:
plot 1: distplot(x = d1.2$totalmds, type = "poisson")
plot 2: distplot(x = d1.2$totalmds, type = "nbinomial") # looks way off
plot 3: plot(fitdist(data = d1.2$totalmds, distr = "pois", method = "mle"))
plot 4: plot(fitdist(data = d1.2$totalmds, distr = "nbinom", method = "mle")) # throws warnings
plot 5: qqcomp(fitdist(data = d1.2$totalmds, distr = "pois", method = "mle"))
plot 6: qqcomp(fitdist(data = d1.2$totalmds, distr = "nbinom", method = "mle")) # throws warnings
Does anyone have suggestions for why these plots look a little screwy/inconsistent?
As I mentioned, I'm using another variable as an offset in my actual analysis, if that makes a difference.
Here's the vector:
https://gist.github.com/timothyslau/f95a777b713eb33a2fe6
I'm fairly sure NB is better than Poisson, since the variance-to-mean ratio (VMR), var(d1.2$totalmds) / mean(d1.2$totalmds), is well above 1.
But if NB is appropriate, the plots should look a lot cleaner (I think, unless I'm doing something wrong with these plotting functions/packages).
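One way to compare the two candidate fits numerically is with fitdistrplus (a sketch; the NA removal and integer coercion are my guesses at what triggers the warnings, not something established above):

library(fitdistrplus)
x <- as.integer(na.omit(d1.2$totalmds))  # fitdist() needs complete, integer counts
fit_pois <- fitdist(x, distr = "pois", method = "mle")
fit_nb   <- fitdist(x, distr = "nbinom", method = "mle")
gofstat(list(fit_pois, fit_nb),
        fitnames = c("Poisson", "negative binomial"))  # AIC/BIC and chi-squared GOF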


Error with svyglm function in survey package in R: "all variables must be in design=argument"

I'm new to Stack Overflow. I'm working on a project with NHIS data, but I cannot get the svyglm function to work even for a simple, unadjusted logistic regression with a binary predictor and binary outcome variable (ultimately I'd like to use multiple categorical predictors, but one step at a time).
El_under_glm <- svyglm(ElUnder ~ SO2, design = SAMPdesign, subset = NULL,
                       family = binomial(link = "logit"), rescale = FALSE,
                       correlation = TRUE)
Error in eval(extras, data, env) :
object '.survey.prob.weights' not found
I changed the variables to 0 and 1 instead:
Under_narm$SO2REG<-ifelse(Under_narm$SO2=="Heterosexual", 0, 1)
Under_narm$ElUnderREG<-ifelse(Under_narm$ElUnder=="No", 0, 1)
But then I get a different issue:
El_under_glm <- svyglm(ElUnderREG ~ SO2REG, design = SAMPdesign, subset = NULL,
                       family = binomial(link = "logit"), rescale = FALSE,
                       correlation = TRUE)
Error in svyglm.survey.design(ElUnderREG ~ SO2REG, design = SAMPdesign, :
  all variables must be in design= argument
This is the design I'm using to account for the weights -- I'm pretty sure it's correct:
SAMPdesign=svydesign(data=Under_narm, id= ~NHISPID, weight= ~SAMPWEIGHT)
Any and all assistance appreciated! I've got a good grasp of stats but am a slow coder. Let me know if I can provide any other information.
Using some make-believe sample data, I was able to get your model to run by setting rescale = TRUE. The documentation states:
Rescaling of weights, to improve numerical stability. The default
rescales weights to sum to the sample size. Use FALSE to not rescale
weights.
So one solution may be simply to set rescale = TRUE.
library(survey)

# sample data
Under_narm <- data.frame(SO2 = factor(rep(1:2, 1000)),
                         ElUnder = sample(0:1, 1000, replace = TRUE),
                         NHISPID = paste0("id", 1:1000),
                         SAMPWEIGHT = sample(c(0.5, 2), 1000, replace = TRUE))

# with 'rescale' = TRUE
SAMPdesign <- svydesign(ids = ~NHISPID,
                        data = Under_narm,
                        weights = ~SAMPWEIGHT)

El_under_glm <- svyglm(formula = ElUnder ~ SO2,
                       design = SAMPdesign,
                       family = quasibinomial(),  # this family avoids warnings
                       rescale = TRUE)            # weights rescaled to sum to the sample size

summary(El_under_glm, correlation = TRUE)  # use correlation with summary()
Otherwise, looking at the code for this function's method via survey:::svyglm.survey.design, it seems like there may be a bug. I could be wrong, but on my reading, when 'rescale' is FALSE, .survey.prob.weights never gets assigned a value:
if (is.null(g$weights))
    g$weights <- quote(.survey.prob.weights)
else g$weights <- bquote(.survey.prob.weights * .(g$weights))  # bug?
g$data <- quote(data)
g[[1]] <- quote(glm)
if (rescale)
    data$.survey.prob.weights <- (1/design$prob)/mean(1/design$prob)
There may be a workaround if you assign a vector of numeric values to .survey.prob.weights in the global environment. I have no idea what these values should be, but your error goes away if you do something like the following (.survey.prob.weights needs one value per row of the data, and the sample data above has 2,000 rows):
SAMPdesign <- svydesign(ids = ~NHISPID,
                        data = Under_narm,
                        weights = ~SAMPWEIGHT)

.survey.prob.weights <- rep(1, 2000)

El_under_glm <- svyglm(formula = ElUnder ~ SO2,
                       design = SAMPdesign,
                       family = quasibinomial(),
                       rescale = FALSE)

summary(El_under_glm, correlation = TRUE)
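As a rough sanity check on either route (my own suggestion, not part of the original answer): with only sampling weights in the design, the point estimates should essentially match a plain weighted glm(), although the design-based standard errors from svyglm() will differ and are the ones to report:

# weighted glm for comparing point estimates only
m_check <- glm(ElUnder ~ SO2, data = Under_narm,
               family = quasibinomial(), weights = SAMPWEIGHT)
coef(m_check)
coef(El_under_glm)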

R One-Class SVM - Get Probabilistic outputs

I am trying to find a way to derive probabilistic outputs when predicting from a one-class SVM in R. I know this is not supported in libsvm, and I also know this question has been asked before, including here on SO a couple of years ago, but packages were not available at that time. I'm hoping things have changed now! This question is also still valid, as no approach implemented in R was given as a solution.
I could not find a package to do this, so I tried two approaches myself to get around it:
Get the decision values and transform them through the use of the sigmoid activation function. This is described in this paper. Note the paragraph:
Furthermore, SVMs can also produce class probabilities as output instead of class labels. This is done by an improved implementation (Lin, Lin, and Weng 2001) of Platt's a posteriori probabilities (Platt 2000) where a sigmoid function is fitted to the decision values f of the binary SVM classifiers, A and B being estimated by minimizing the negative log-likelihood function.
Use a logistic regression function on the predicted output and derive the probabilities from it. This approach was first described by Platt, and an outline of it is given here (see also the sketch below).
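For reference, textbook Platt scaling fits the logistic model to the true labels against the decision values, not to the feature matrix. A minimal sketch of that idea (y_true and dec_vals are placeholder names, not objects from this post):

# Platt scaling: logistic regression of the true labels on the SVM decision values
# y_true: factor or 0/1 vector of true classes; dec_vals: numeric decision values
platt_glm <- glm(y_true ~ dec_vals, family = binomial())
platt_probs <- predict(platt_glm, type = "response")  # calibrated probabilities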
My problem is that, to check whether either of my two solutions is plausible, I tested both approaches on a two-class SVM problem: e1071 (which uses libsvm) gives probabilities for two-class problems, so these were taken as the 'truth'. I found that neither of my approaches aligned closely with libsvm.
Here are three graphs showing the resulting probabilities versus the known decision values (image omitted).
I think my Platt approach is theoretically more sound, but as can be seen from the graph, the logistic regression was somehow too good: the probabilities for either classification are extremely close to 1 for positive cases and 0 for negative ones.
My code for the Platt implementation is:
platt_scale <- function(oc_svm, X){
  # Get SVM predictions
  y_pred <- predict(oc_svm$best.model, X)
  # y_pred <- as.factor(ifelse(y_pred == T, "pos", "neg"))

  # Train using logistic regression with cross-validation
  require(caret)
  model <- train(x = X,
                 y = y_pred,
                 method = "glm",
                 family = binomial(),
                 trControl = trainControl(method = "cv",
                                          number = 5),
                 control = list(maxit = 50))  # brought in to stop warning messages
  return(predict(model,
                 newdata = X,
                 type = "prob")[, 1])
}
I get the following warning when this runs
glm.fit: fitted probabilities numerically 0 or 1 occurred
So I am clearly doing something wrong! I feel like fixing this function is probably the best approach, but I don't see where I have gone wrong. I am following the approach I mentioned earlier, here.
I get the sigmoid of the decision values as follows
sig_mult <-e1071::sigmoid(decision_values)
The examples were done using the iris dataset; the full code is below:
library(e1071)
data(iris)
two_class <- iris[iris$Species %in% c("setosa", "versicolor"), ]

# Make a two-class SVM
svm_mult <- e1071::tune(svm,
                        train.x = two_class[, 1:4],
                        train.y = factor(two_class[, 5], levels = c("setosa", "versicolor")),
                        type = "C-classification",
                        kernel = "radial",
                        gamma = 0.05,
                        cost = 1,
                        probability = TRUE,
                        tunecontrol = tune.control(cross = 5))

# Get the decision values (the scores)
dec_vals_mult <- attr(predict(svm_mult$best.model,
                              two_class[, 1:4],
                              decision.values = TRUE),
                      "decision.values")

# Get the corresponding probabilities
prob_mult <- attr(predict(svm_mult$best.model,
                          two_class[, 1:4],
                          probability = TRUE),
                  "probabilities")[, 1]

# Transform the decision values with the sigmoid
sig_mult <- e1071::sigmoid(dec_vals_mult)

# Use the Platt implementation function to derive probabilities
platt_imp <- platt_scale(svm_mult, two_class[, 1:4])
require(ggplot2)

data2 <- as.data.frame(cbind(dec_vals_mult, sig_mult))
names(data2) <- c("Decision.Values", "Sigmoid.Decision.Values(Prob)")
sig <- ggplot(data = data2, aes(x = Decision.Values,
                                y = `Sigmoid.Decision.Values(Prob)`,
                                colour = ifelse(Decision.Values < 0, "neg", "pos"))) +
  geom_point() +
  ylim(0, 1) +
  theme(legend.position = "none")

data3 <- as.data.frame(cbind(dec_vals_mult, prob_mult))
names(data3) <- c("Decision.Values", "Probabilities")
actual <- ggplot(data = data3, aes(x = Decision.Values,
                                   y = Probabilities,
                                   colour = ifelse(Decision.Values < 0, "neg", "pos"))) +
  geom_point() +
  ylim(0, 1) +
  theme(legend.position = "none")

data4 <- as.data.frame(cbind(dec_vals_mult, platt_imp))
names(data4) <- c("Decision.Values", "Platt")
plat_imp <- ggplot(data = data4, aes(x = Decision.Values,
                                     y = Platt,
                                     colour = ifelse(Decision.Values < 0, "neg", "pos"))) +
  geom_point() +
  ylim(0, 1)

require(ggpubr)
ggarrange(actual, plat_imp, sig,
          labels = c("Actual", "Platt Implementation", "Sigmoid Transformation"),
          ncol = 3,
          label.x = -.05,
          label.y = 1.001,
          font.label = list(size = 8.5, color = "black", face = "bold", family = NULL),
          common.legend = TRUE, legend = "bottom")

How to simulate the posterior filtered estimates of a Kalman Filter using the DSE package in R

How do I obtain the posterior (filtered) state estimates from a Kalman filter simulation in R using the dse package?
I have added an example below. Assume that I have created a simple random-walk state space model where the errors are standard normal. The model is created using the SS() function, with the initial state and covariance estimates set to zero. The theoretical model form is thus:
X(t) = X(t-1) + e(t),  e(t) ~ N(0,1)   (state evolution)
Y(t) = X(t) + w(t),    w(t) ~ N(0,1)   (measurement)
We now implement this in R by following the instructions on pages 6 and 7 of the "Kalman Filtering in R" article in the Journal of Statistical Software. First we create the state space model using the SS() function and store it in the variable kalman.filter:
kalman.filter = dse::SS(F = matrix(1, 1, 1),
                        Q = matrix(1, 1, 1),
                        H = matrix(1, 1, 1),
                        R = matrix(1, 1, 1),
                        z0 = matrix(0, 1, 1),
                        P0 = matrix(0, 1, 1))
Then we simulate 100 observations from the model using simulate() and store them in a variable called simulate.kalman.filter:
simulate.kalman.filter=simulate(kalman.filter, start = 1, freq = 1, sampleT = 100)
Then we run the Kalman filter against the simulated measurements using l() and store the result in the variable test:
test=l(kalman.filter, simulate.kalman.filter)
From the outputs, which ones are my filtered estimates?
I have found the answer to this question.
Firstly, the filtered estimates of the model are not given by the l() function; it gives only the one-step-ahead predictions. Keeping the same model, simulation, and l() call as above, the one-step-ahead predictions are given by:
predictions = test$estimates$pred
A quick way to visualize this is given by:
tfplot(test)
This lets you quickly plot the one-step-ahead predictions against the actual data. To get smoothed estimates, you need the smoother() function from the same dse package. It takes the estimated model object and the data, here test and simulate.kalman.filter respectively. The output contains smoothed estimates for all time points, but note that these are computed from the full data set, not sequentially as each observation arrives. The first line of the code below extracts the smoothed estimates; the remaining lines plot the example:
smooth = smoother(test, simulate.kalman.filter)
plot(test$estimates$pred,
     ylim = range(test$estimates$pred, smooth$filter$track,
                  simulate.kalman.filter$output))
points(smooth$smooth$state, col = 3)            # smoothed state estimates
points(simulate.kalman.filter$output, col = 4)  # actual (simulated) data
This plots the actual data, the one-step-ahead predictions, and the smoothed estimates against one another. Note that the object returned by smoother() also exposes smooth$filter$track (used for the axis limits above), which appears to be the filtered track, i.e. estimates that use only the data up to each time point, in contrast to the full-sample smoothed state in smooth$smooth$state.
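If you want those two series as plain vectors, they can be pulled out directly (a sketch based on the object structure referenced in the plot above):
filtered <- smooth$filter$track   # filtered: uses data up to each time point
smoothed <- smooth$smooth$state   # smoothed: uses the full sample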

vegan::ordiR2step() doesn't find best-fit model

The vegan package includes the ordiR2step() function for model building, which can be used to identify the most important variables using R2 and p-values as goodness-of-fit measures. However, for the dataset I was recently working with, the function doesn't return the best-fit model.
# data
library(vegan)
RIKZ <- read.table("http://www.uni-koblenz-landau.de/en/campus-landau/faculty7/environmental-sciences/landscape-ecology/Teaching/RIKZ_data/at_download/file", header = TRUE)

# data preparation
Species <- RIKZ[, 2:5]
ExplVar <- RIKZ[, 9:15]
Species_fin <- Species[rowSums(Species) > 0, ]
ExplVar_fin <- ExplVar[rowSums(Species) > 0, ]

# rda
RIKZ_rda <- rda(Species_fin ~ ., data = ExplVar_fin, scale = TRUE)

# stepwise model building: ordiR2step()
step_both_R2 <- ordiR2step(rda(Species_fin ~ salinity, data = ExplVar_fin, scale = TRUE),
                           scope = formula(RIKZ_rda),
                           direction = "both", R2scope = TRUE, Pin = 0.05,
                           steps = 1000)
Why does ordiR2step() not add the variable exposure to the model, although it would increase the explained variance?
If R2scope is set to FALSE and the p-value criterion is relaxed (Pin = 0.15), it adds the variable exposure correctly but throws the following error:
Error in terms.formula(tmp, simplify = TRUE) :
invalid model formula in ExtractVars
If R2scope is set to TRUE (Pin = 0.15), exposure is not added.
Note: This might seem more like a statistics question and therefore more suitable for Cross Validated. However, I think the problem is rather technical and better off here on SO.
Please read the ordiR2step() documentation: it will tell you why exposure is not added to the model. The help page says that ordiR2step() has three stopping criteria. The second criterion is that "the adjusted R2 of the 'scope' is exceeded". This happens with exposure, and therefore it was not added. This second criterion is ignored if you set R2scope = FALSE (also documented). So the function works as documented.
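A quick way to see that criterion in action (my own sketch, not part of the original answer) is to compare adjusted R2 values directly with RsquareAdj():

# adjusted R2 of the full scope model; the stepwise search must stay below this
RsquareAdj(RIKZ_rda)$adj.r.squared
# adjusted R2 after adding exposure; if it exceeds the scope's adjusted R2,
# ordiR2step() stops rather than adding the term
RsquareAdj(rda(Species_fin ~ salinity + exposure,
               data = ExplVar_fin, scale = TRUE))$adj.r.squared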

The right way to use R reliability plot

For classification problems, I was using Balanced Accuracy, Sensitivity, and Specificity to evaluate my models. Recently, I saw that calibration can capture what accuracy and AUC cannot. So I want to give it a try, and a reliability plot is the standard way to visualize calibration.
I am using the reliability.plot() function from the R verification package. However, the result looks weird.
Maybe the variables I put into the function are wrong, but I am not sure how to fix this. Here is my code:
Train the model and get predicted probabilities:
library(verification)
library(mlr)

svm_learner <- makeLearner("classif.ksvm", predict.type = "prob")
svm_param <- makeParamSet(
  makeDiscreteParam("C", values = 2^c(-8, -4, -2, 0)),    # cost parameter
  makeDiscreteParam("sigma", values = 2^c(-8, -4, 0, 4))  # RBF kernel parameter
)
ctrl <- makeTuneControlRandom()
cv_svm <- makeResampleDesc("CV", iters = 5L)
svm_tune <- tuneParams(svm_learner, task = train_task, resampling = cv_svm,
                       par.set = svm_param, control = ctrl, measures = acc)
svm_tune$x
svm_tune$y
t.svm <- setHyperPars(svm_learner, par.vals = svm_tune$x)
svm_model <- mlr::train(t.svm, train_task)  # train with the tuned hyperparameters
svmpredict <- predict(svm_model, test_task)
svmpredict
I am trying to calculate the observed frequency and the forecasted frequency, and pass them to the function:
library(data.table)
xy <- data.table(Truth = svmpredict$data$truth, Response = svmpredict$data$response)
summary(xy$Truth)
summary(xy$Response)
xy[, ObservedFreq := ifelse(Truth == 0, 1806/(1806 + 48), 48/(1806 + 48))]
xy[, ForecastedFreq := ifelse(Truth == 0, 1807/(1807 + 47), 47/(1807 + 47))]
reliability.plot(svmpredict$data$prob.1, xy$ObservedFreq, xy$ForecastedFreq, positive = "1")
I guess the problem may be caused by the variables I put into the function, but what else could the observed and forecasted frequencies be? Do you know how to produce the right reliability plot?
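For what it's worth, the usual pattern with the verification package is to hand verify() the binary outcomes and the predicted probabilities and plot the resulting object, rather than precomputing frequencies. A minimal sketch, assuming truth is coded with levels "0"/"1" and prob.1 holds P(class = 1):

library(verification)
obs  <- as.numeric(as.character(svmpredict$data$truth))  # observed 0/1 outcomes
pred <- svmpredict$data$prob.1                           # forecast probabilities
A <- verify(obs, pred, frcst.type = "prob", obs.type = "binary")
reliability.plot(A)  # bins the forecasts and plots observed frequency per bin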
