I am generating ecological niche models for a set of species, and I would like to use AUC as a quality metric for the models. Steven Phillips, who developed Maxent, provides code in his Maxent manual for calculating the AUC in R. However, I have been reading papers that report partial AUC ratios as a more robust and conceptually sound metric. I think I understand how to calculate the partial AUC using the ROCR R package, but how does one calculate the AUC ratio?
Here is the tutorial script from Phillips:
library(ROCR) # for prediction() and performance()
library(boot) # for bootstrap standard errors
presence <- read.csv("bradypus_variegatus_samplePredictions.csv")
background <- read.csv("bradypus_variegatus_backgroundPredictions.csv")
pp <- presence$Logistic.prediction
testpp <- pp[presence$Test.or.train == "test"]
trainpp <- pp[presence$Test.or.train == "train"]
bb <- background$logistic
combined <- c(testpp, bb)
label <- c(rep(1, length(testpp)), rep(0, length(bb)))
pred <- prediction(combined, label)
perf <- performance(pred, "tpr", "fpr")
plot(perf, colorize = TRUE)
performance(pred, "auc")@y.values[[1]] # returns the AUC
AUC <- function(p, ind) {
  pres <- p[ind]
  combined <- c(pres, bb)
  label <- c(rep(1, length(pres)), rep(0, length(bb)))
  predic <- prediction(combined, label)
  return(performance(predic, "auc")@y.values[[1]])
}
b1 <- boot(testpp, AUC, 100) # returns the AUC with a bootstrap standard error
b1
Any advice or suggestions would be greatly appreciated! Thank you.
Without knowing the specifics of your dataset and application, here are the general definitions:
Partial AUC: the area under only a portion of the curve (usually chosen because it is more robust or otherwise desirable, as you said).
AUC ratio: the ratio of one AUC to another (usually some kind of reference).
So:
Partial AUC ratio: the ratio of one partial AUC to another.
The ROCR package can calculate partial AUC values via the fpr.stop= parameter of performance(). As John said, the ratio is just this value divided by the same calculation for your reference model.
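For instance, a minimal sketch building on the pred object from the script above; pred_ref (a prediction object for whatever reference or null model you choose) and the 0.1 cutoff are assumptions for illustration:
# partial AUC of the fitted model up to a false-positive-rate cutoff
pauc <- performance(pred, "auc", fpr.stop = 0.1)@y.values[[1]]
# the same calculation for the reference model
pauc_ref <- performance(pred_ref, "auc", fpr.stop = 0.1)@y.values[[1]]
pauc_ratio <- pauc / pauc_ref # partial AUC ratio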
I have a data set where observations come from highly distinct groups. Each group may have a wildly different distribution, so I am trying to find the best distribution using fitdist from the fitdistrplus package, and then using gamlssML from the gamlss package to find the best parameters.
My issue is with transforming the data after this step. For some of the distributions, like the Box-Cox t, I can find the equation for normalizing the data using the BCT coefficients, but for many of these distributions I cannot.
Does gamlss have a function that normalizes the data after fitting? The documentation only provides the transformations for a small number of distributions: https://www.gamlss.com/wp-content/uploads/2018/01/DistributionsForModellingLocationScaleandShape.pdf
Thanks a lot
The normalised data values (for any distribution) are exactly equal to the normalised quantile residuals from a gamlss fit, e.g.
m1 <- gamlss(y ~ 1, family = BCT, data = mydata) # formula, family and data are placeholders
which can be accessed with
residuals(m1) or
m1$residuals
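For illustration, a minimal self-contained sketch (the simulated data and the choice of the BCT family are assumptions for the example):
library(gamlss)
set.seed(1)
d <- data.frame(y = rgamma(500, shape = 2, rate = 0.5)) # example data
m1 <- gamlss(y ~ 1, family = BCT, data = d) # fit a BCT distribution to one group
z <- residuals(m1) # normalised quantile residuals
qqnorm(z); qqline(z) # should look approximately standard normal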
I am doing a counterfactual impact evaluation on survival data. More precisely, I am trying to evaluate the impact of vocational training on time spent in unemployment. I use the Kaplan-Meier estimator of the survival curve (package survival).
Before running Kaplan-Meier, I use coarsened exact matching (the aim is the ATT) to make the control and treatment groups comparable in terms of pretreatment covariates (package MatchIt).
For the Kaplan-Meier estimator, I have to use the weights from the matching, which works well using the weights option and robust standard errors of survfit:
library(survival)
library(survminer)
kp_cem <- survfit(Surv(time = time_cem, event = status_cem) ~ treatment_cem, data = data_impact_cem, robust = TRUE, weights = weights)
However, when I try to use a log-rank test to test for a difference in survival curves between the treatment and control groups, I cannot take the frequency weights from the matching into account, so the test statistic is not correct.
log_rank <- survdiff(Surv(time = time_cem, event = status_cem) ~ treatment_cem, data = data_impact_cem, rho = 0)
I tried the pval = TRUE option of ggsurvplot (package survminer), but the problem is the same: the frequency weights are not taken into account.
How can I include frequency weights in survdiff? Are there other packages that compute a log-rank test taking frequency weights (obtained after matching) into account?
There are at least two ways to do this:
First, you can use the survey::svylogrank function, as @IRTFM suggests. This will treat the weights as sampling weights, but I think that's OK with the robust standard errors that svylogrank uses.
Second, you can use survival::coxph. The log-rank test is the score test in a Cox model, and coxph takes frequency weights. Use robust=TRUE if you want a robust score test; it will be at the bottom of the output of summary(your_cox_model), and you can extract it as summary(your_cox_model)$robscore.
Thank you very much @Thomas Lumley and @IRTFM for your answers.
Here is how I apply your two suggestions (I added some comments and references).
1. Using survey::svylogrank
I don't feel very comfortable using sampling weights when what I really have are frequency weights.
How should I specify the survey design? The weights come from coarsened exact matching (matchit with method = "cem"), which is a class of stratum matching.
Should I specify the strata as well as the weights in the survey design? The MatchIt vignette Estimating Effects After Matching (p. 27) suggests using only the weights and robust standard errors in the survival analysis, not the strata.
Here is how I specify the design and obtain the log-rank test with the survey package, taking the weights from the matching into account:
library(survey)
design_weights <- svydesign(id = ~ibis, strata = ~subclass, weights = ~weights, data = data_impact_cem)
log_rank <- svylogrank(Surv(time = time_cem, event = status_cem) ~ treatment_cem, design = design_weights, rho = 0)
2. Using survival::coxph
Thank you for this piece of information. Being quite new to survival analysis, I had overlooked this nice property, the equivalence between the score test of a Cox model and the log-rank test. For those wanting more detail on this subject, I found this book very instructive: Moore, D. (2016). Applied Survival Analysis Using R. New York, NY: Springer (p. 58).
I find this second option more attractive than the first one involving survey. Here is how I apply it:
library(survival)
cox_cem <- coxph(Surv(time = time_cem, event = status_cem) ~ treatment_cem, data = data_impact_cem, robust = TRUE, weights = weights)
sum_cox_cem <- summary(cox_cem)
# robscore is a named vector with elements "test", "df" and "pvalue"
score_test <- round(sum_cox_cem$robscore[["test"]], 3)
pvalue <- sum_cox_cem$robscore[["pvalue"]]
pvalue <- if (pvalue < 0.001) "<0.001" else round(pvalue, 3)
Comparing the two approaches, the two test statistics turn out to be quite close in the end.
I still wonder, though, why survdiff does not have a weights option.
I'm working on a classification problem (predicting three classes) and I'm comparing SVM against Random Forest in R.
For evaluation and comparison I want to calculate the bias and variance of the models. I have looked up the two terms in many machine learning books and I would say I understand the meaning of variance and bias (the easiest explanation being the bullseye diagram). But I cannot really figure out how to apply them in my case.
Let's say I predict the results for a test set with 4 SVM models that were trained on 4 different training sets. Each time I get a total error rate (i.e., all wrong predictions divided by all predictions).
Do I then get the bias for SVM by calculating
bias = (error_1 + error_2 + error_3 + error_4) / 4
which would mean that the bias is more or less the mean of the errors?
I hope you can help me with a not-too-complicated formula, because I have already seen many of them.
I would like to calculate a summary odds ratio value for two or more papers where the only information I have is the individual odds ratios with their 95% confidence intervals. Is this possible? I have been poking around in the meta package and have only figured out how to do it from crude counts.
Thanks so much!
It is quite simple.
You just need the natural logarithm of each odds ratio (logOR) and its standard error (and corresponding variance). These can easily be back-calculated from the 95% confidence intervals under the normality assumption: SE = (log(upper CI) - log(lower CI)) / (2 * 1.96). Finally, pool the logORs using inverse-variance weighting.
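For example, a minimal sketch of the back-calculation (the column names OR, ci_lower and ci_upper are assumptions; rename to match your data):
# back-calculate logOR and its variance from the reported 95% CIs
mydata$logOR <- log(mydata$OR)
mydata$SE <- (log(mydata$ci_upper) - log(mydata$ci_lower)) / (2 * qnorm(0.975)) # 2 * 1.96
mydata$variance <- mydata$SE^2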
For instance, after you have built a data frame (e.g. called mydata) with the logOR and variance for each study, you can easily proceed with a random-effects meta-analysis using the metafor package in R as follows:
library(metafor)
res <- rma(yi = logOR, vi = variance, data = mydata, method = "DL") # DerSimonian-Laird random effects
forest(res)
In the future, you may consider posting similar questions on Cross Validated.
I plan to use the precision-recall plot (PR plot) to compare models. See the attached figure (partial screenshot, sorry!) below. Obviously I have the true positives, true negatives, false positives and false negatives at hand, and I need a single summary quantity for each model. Here are my questions:
1. The first quantity is the area under the PR curve (AUC), but I don't know how to calculate it in R. I do NOT want to use any package like ROCR, because all the code is written by myself and I would like to keep writing my own code using the quantities I already have. There seem to be many ways to compute it; I would like to know which one is easiest to implement.
2. Another quantity is the F-measure: the traditional F-measure or balanced F-score, i.e. the harmonic mean of precision and recall. However, I am curious whether this is better than the AUC in #1, or whether the two describe different things. Moreover, since I have a whole set of recall and precision values, how can I calculate a single F-measure in this case (see the figure below)?
Thank you!
To calculate the AUC of a curve you can use a numerical integration function such as trapz() from the caTools package:
library(caTools)
# recall on the x-axis, precision on the y-axis (sorted by increasing recall)
auc <- trapz(recall, precision)
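Since you would rather not depend on a package, the trapezoidal rule is also easy to hand-roll in base R; a minimal sketch, assuming recall and precision are numeric vectors of equal length:
# trapezoidal rule in base R, equivalent to caTools::trapz
pr_auc <- function(recall, precision) {
  ord <- order(recall) # integrate along increasing recall
  x <- recall[ord]
  y <- precision[ord]
  sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2) # sum of trapezoid areas
}
auc <- pr_auc(recall, precision)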
The F-score is the harmonic mean of precision and recall for a given cutoff value. In your case you would get many F-scores along each curve, so it would not summarize the curve the way you want.
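For completeness, a minimal sketch of the F-score at a single cutoff, assuming tp, fp and fn hold the counts at that cutoff (placeholder names):
precision <- tp / (tp + fp) # positive predictive value
recall <- tp / (tp + fn) # true positive rate
f1 <- 2 * precision * recall / (precision + recall) # harmonic mean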
The AUC describes the performance of the model across the possible cutoffs of its continuous output. The F-score describes the model at one particular cutpoint; it is more a way of combining recall and precision into a single statistic.
Be careful when explaining it, though. Usually AUC is discussed in the context of sensitivity and specificity, i.e. the ROC curve.