ROC curve based on means and variances of controls and cases

Does anyone know of an R package (or any other statistical freeware, or just a piece of code) that lets you plot a smooth ROC curve knowing only the means and variances of the control and case groups? That is, one that doesn't require a dataset with specific classifier values and test outcomes. I found a couple of online graph plotters that do just that:
https://kennis-research.shinyapps.io/ROC-Curves/
http://arogozhnikov.github.io/2015/10/05/roc-curve.html
Any help is appreciated.

I don't think you need any fancy package for this. You can just use simple probability functions in base R: for any decision threshold, the true-positive rate is the probability that a case scores above the threshold and the false-positive rate is the probability that a control does, so sweeping the threshold traces out the curve.
# Means and variances of the control (group 1) and case (group 2) distributions
m1 <- 0
m2 <- 2
v1 <- 4
v2 <- 4
# Sweep a decision threshold over the range of the classifier
threshold <- seq(-10, 10, length.out=200)
# P(X <= threshold) under each group's normal distribution
d1 <- pnorm(threshold, m1, sd=sqrt(v1))
d2 <- pnorm(threshold, m2, sd=sqrt(v2))
tpr <- 1 - d2   # cases above the threshold
fpr <- 1 - d1   # controls above the threshold
plot(fpr, tpr, xlim=0:1, ylim=0:1, type="l")
abline(0, 1, lty=2)   # chance line
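If you also want the area under that curve, the same binormal assumption gives a closed form, AUC = pnorm((m2 - m1)/sqrt(v1 + v2)); a one-line check using the values above:
# Closed-form AUC for the binormal ROC model (same m1, m2, v1, v2 as above)
auc <- pnorm((m2 - m1) / sqrt(v1 + v2))
auc   # roughly 0.76 for these example values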

Related

Convolution of two mixture of densities in R

I am currently trying to compute the convolution of two mixtures of lognormal densities in R, using the package "distr". The mixtures are given by:
library(distr)
M1 <- UnivarMixingDistribution(Lnorm(meanlog=8.1, sdlog=0.02),
                               Lnorm(meanlog=8.15, sdlog=0.035),
                               mixCoeff=c(0.45, 0.55))
M2 <- UnivarMixingDistribution(Lnorm(meanlog=8.17, sdlog=0.035),
                               Lnorm(meanlog=8.15, sdlog=0.03),
                               mixCoeff=c(0.65, 0.35))
Both of these distributions should have positive density on the interval [3000, 4000]. However, when I do the convolution:
conv <- convpow(M1+M2,1)
M3 <- d(conv)
M3(seq(3000,4000,by=1))
I get probabilities of zero everywhere in that interval, and in fact the resulting PDF is located around 7000. Can someone please help me do this convolution correctly?

How to update different types of distribution estimating same value in R?

We are trying to create a distribution that estimates pathogen presence on vegetables. This was done using three different methods, each providing a distribution:
- method S (from sludge concentration) is best fitted by weibull(1.55, 8.57)
- method SO (from soil) is best fitted by logN(0.68, 0.63)
- method F (from field data) is best fitted by PERT(0.093, 0.34, 0.52)
Theoretically, the three methods should estimate the same value. What would be the best way to combine them?
I have searched online, but I could only find and understand how to do it for normal distributions, where the posterior normal distribution has a mean that is a weighted average of the individual means (see page 3 of https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading15a.pdf); a sketch of that normal case is shown after the code below.
How do I update different types of distributions?
Thank you for your help.
library(mc2d)   # for rpert()
# 10,000 Monte Carlo draws of soil concentration from each of the three methods
soil.df <- matrix(data=0, nrow=10000, ncol=3)
colnames(soil.df) <- c("from sludge", "soil sample", "field data")
migration <- 0.27
for (i in 1:10000) {
  application <- rpert(1, 0.01, 0.02, 0.25)
  C <- rweibull(1, 1.57, 85.79)
  soil.df[i, 1] <- C * application * migration          ## from sludge
  soil.df[i, 2] <- 10^rnorm(1, 0.68, 0.63) * migration  ## from soil concentration
  soil.df[i, 3] <- rpert(1, 0.093, 0.34, 0.52)          ## from field data
}
par(mfrow=c(1,1))
plot(density(soil.df[,1]), col="red", xlim=c(0,15), ylim=c(0,1), main="Ova/gr soil")
lines(density(soil.df[,2]), col="black")
lines(density(soil.df[,3]), col="green")
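For the normal case referenced above (the MIT 18.05 reading), independent estimates are combined by precision weighting: the pooled mean is the precision-weighted average of the individual means, and the pooled precision is the sum of the individual precisions. A minimal sketch with made-up means and standard deviations (illustrative only, not the Weibull/lognormal/PERT fits from the question):
# Precision-weighted combination of independent normal estimates
# (illustrative values only, not the distributions fitted above)
mu    <- c(1.8, 2.1, 1.6)   # means from three hypothetical methods
sigma <- c(0.4, 0.6, 0.5)   # their standard deviations
w <- 1 / sigma^2            # precisions
mu.pooled    <- sum(w * mu) / sum(w)
sigma.pooled <- sqrt(1 / sum(w))
c(mean = mu.pooled, sd = sigma.pooled)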

R: How to read Nomograms to predict the desired variable

I am using RStudio. I have created a nomogram using the function nomogram from the package rms, with the following code (copied from the example in the documentation):
library(rms)
n <- 1000 # define sample size
set.seed(17) # so can reproduce the results
age <- rnorm(n, 50, 10)
blood.pressure <- rnorm(n, 120, 15)
cholesterol <- rnorm(n, 200, 25)
sex <- factor(sample(c('female','male'), n,TRUE))
# Specify population model for log odds that Y=1
L <- .4*(sex=='male') + .045*(age-50) +
     (log(cholesterol - 10)-5.2)*(-2*(sex=='female') + 2*(sex=='male'))
# Simulate binary y to have Prob(y=1) = 1/[1+exp(-L)]
y <- ifelse(runif(n) < plogis(L), 1, 0)
ddist <- datadist(age, blood.pressure, cholesterol, sex)
options(datadist='ddist')
f <- lrm(y ~ lsp(age,50)+sex*rcs(cholesterol,4)+blood.pressure)
nom <- nomogram(f, fun=function(x) 1/(1+exp(-x)),  # or fun=plogis
                fun.at=c(.001,.01,.05,seq(.1,.9,by=.1),.95,.99,.999),
                funlabel="Risk of Death")
#Instead of fun.at, could have specified fun.lp.at=logit of
#sequence above - faster and slightly more accurate
plot(nom, xfrac=.45)
Result: (nomogram plot not shown)
This code produces a nomogram, but there is no line connecting the scales (called an isopleth) to help predict the desired variable ("Risk of Death") from the plot. Nomograms usually have an isopleth for prediction (see the example on Wikipedia). But here, how do I predict the value of the variable?
EDIT:
From the documentation:
The nomogram does not have lines representing sums, but it has a
reference line for reading scoring points (default range 0--100). Once
the reader manually totals the points, the predicted values can be
read at the bottom.
I don't understand this. It seems that the prediction is supposed to be done without an isopleth, using the Points scale, but how? Can someone please use this example to elaborate on how to read the nomogram and predict the desired variable? Thanks a lot!
EDIT 2 (FYI):
In the description of the bounty I am talking about the isopleth. When starting the bounty, I did not know that the nomogram function does not provide an isopleth and has a points scale instead.
From the documentation, the nomogram is used to manually obtain a prediction:
- In the top part of the plot (above Total Points), draw a vertical line for each of your patient's variables (for example age=40, cholesterol=220 and sex=male, blood.pressure=172) up to the Points scale and read off the points.
- Sum the values you read on the Points scale (40+60+3=103) to obtain the Total Points.
- Finally, draw a vertical line down from the Total Points scale (103) to read the Risk of Death (0.55).
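If you want to check the value read off the nomogram, you can also compute the model's prediction directly for the same example patient using the fitted object f from the code above (the exact probability will differ slightly from the hand-read 0.55):
# Model-based probability for the example patient read off the nomogram
new.patient <- data.frame(age=40, cholesterol=220, sex="male", blood.pressure=172)
predict(f, new.patient, type="fitted")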
These are regression nomograms, which work in a different way from classic nomograms. A classic nomogram performs the full calculation graphically; with these regression nomograms you drop a line from each predictor to the points scale, add up the points, and read the prediction from the total.
The only way to have a classic 'isopleth' nomogram for a regression model would be to (1) have just two predictors, or (2) build a complex multi-step nomogram.

Fit distribution to given frequency values in R

I have frequency values changing with time (x-axis units), as shown in the plot produced below. After some normalization, these values may be seen as data points of a density function for some distribution.
Q: Assuming that these frequency points come from a Weibull distribution T, how can I best fit a Weibull density function to the points so as to infer the parameters of T from it?
sample <- c(7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
611,1037,727,489,432,371,1125,69,595,624)
plot(1:length(sample), sample, type = "l")
points(1:length(sample), sample)
Update.
To prevent being misunderstood, I would like to add a little more explanation. By saying I have frequency values changing with time (x-axis units), I mean I have data which say that I have:
- 7787 realizations of value 1
- 3056 realizations of value 2
- 2359 realizations of value 3 ... etc.
One way towards my goal (an incorrect one, I think) would be to create a set of these realizations:
# Loop to expand the frequencies into individual realizations
set.values <- c()
for (i in 1:length(sample)) {
  set.values <- c(set.values, rep(i, times = sample[i]))
}
hist(set.values)
lines(1:length(sample), sample)
points(1:length(sample), sample)
and use fitdistr on set.values:
library(MASS)   # for fitdistr
f2 <- fitdistr(set.values, 'weibull')
f2
Why do I think this is an incorrect way, and why am I looking for a better solution in R?
- In the distribution-fitting approach presented above, it is assumed that set.values is a complete set of my realizations from the distribution T.
- In my original question I only know the points from the first part of the density curve - I do not know its tail, and I want to estimate the tail (and the whole density function).
Here is a better attempt; like before, it uses optim to find the best value constrained to a box (defined by the lower and upper vectors in the optim call). Notice that it scales x and y as part of the optimization, in addition to the Weibull shape parameter, so there are three parameters to optimize over.
Unfortunately, when using all the points it pretty much always finds something on the edges of the constraining box, which indicates to me that a Weibull may not be a good fit for all of the data. The problem is the first two points - they are just too large. You can see the attempted fit to all the data in the first plot.
If I drop those first two points and just fit the rest, we get a much better fit. You can see this in the second plot. I think this is a good fit; it is in any case a local minimum in the interior of the constraining box.
# base R optim() is used below, so no extra optimization package is needed
sample <- c(60953,7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
            611,1037,727,489,432,371,1125,69,595,624)
t.sample <- 0:22
s.fit <- sample[3:23]     # drop the first two (very large) points
t.fit <- t.sample[3:23]
# Scaled Weibull density: param[1] = shape, param[2] scales the height,
# param[3] scales the time axis
wx <- function(param) {
  param[2] * dweibull(t.fit * param[3], shape = param[1])
}
# Root of the sum of squared errors between the data and the scaled density
minwx <- function(param) {
  v <- s.fit - wx(param)
  sqrt(sum(v * v))
}
p0 <- c(1, 200, 1/20)
paramopt <- optim(p0, minwx, gr = NULL, method = "L-BFGS-B",
                  lower = c(0.1, 100, 0.01), upper = c(1.1, 5000, 1))
popt <- paramopt$par
popt
rms <- paramopt$value
tit <- sprintf("Weibull - Shape:%.3f xscale:%.1f yscale:%.5f rms:%.1f",
               popt[1], popt[2], popt[3], rms)
plot(t.sample[2:23], sample[2:23], type = "p", col = "darkred")
lines(t.fit, wx(popt), col = "blue")
title(main = tit)
You can directly calculate the maximum likelihood parameters, as described here.
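For reference, the frequency-weighted Weibull maximum-likelihood equations being solved here, with $x_i$ the values $1, 2, \dots$ and $w_i$ the observed frequencies, are, as I understand them:
$$\frac{1}{k} = \frac{\sum_i w_i x_i^k \ln x_i}{\sum_i w_i x_i^k} - \frac{\sum_i w_i \ln x_i}{\sum_i w_i}, \qquad \lambda = \left(\frac{\sum_i w_i x_i^k}{\sum_i w_i}\right)^{1/k}.$$
The first, implicit equation is solved numerically for the shape k; the second then gives the scale lambda.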
# Defining the error of the implicit equation for the shape k
k.diff <- function(k, vec){
  x2 <- seq(length(vec))
  abs(k^-1 + weighted.mean(log(x2), w = vec) -
        weighted.mean(log(x2), w = x2^k * vec))
}
# Setting the error to "quite zero", fulfilling the equation
k <- optimize(k.diff, vec=sample, interval=c(0.1,5), tol=10^-7)$min
# Calculate the scale lambda, given k (note the 1/k power)
l <- weighted.mean(seq(length(sample))^k, w = sample)^(1/k)
# Plot
plot(density(rep(seq(length(sample)), sample)))
x <- 1:25
lines(x, dweibull(x, shape=k, scale=l))
Assuming the data are from a Weibull distribution, you can get an estimate of the shape and scale parameter like this:
library(MASS)   # for fitdistr
sample <- c(7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
            611,1037,727,489,432,371,1125,69,595,624)
f <- fitdistr(sample, 'weibull')
f
If you are not sure whether the data are Weibull distributed, I would recommend ks.test, which tests whether your data come from a hypothesised distribution. Given your knowledge of the nature of the data, you could test a few selected distributions and see which one works best.
For your example this would look like this:
ks <- ks.test(sample, "pweibull", shape=f$estimate[1], scale=f$estimate[2])
ks
The p-value is not significant, so you do not reject the hypothesis that the data come from a Weibull distribution.
Update: The histograms of either the Weibull or the exponential fit look like a good match to your data. I think the exponential distribution gives you a better fit; a Pareto distribution is another option.
f <- fitdistr(sample, 'weibull')
z <- rweibull(10000, shape=f$estimate[1], scale=f$estimate[2])
hist(z)
f <- fitdistr(sample, 'exponential')
z <- rexp(10000, f$estimate[1])
hist(z)

Programming a QQ plot

I have a sample of math test scores for male and female students. I want to draw a QQ plot for each gender to see whether each group is normally distributed. I know how to draw a QQ plot for the overall sample, but how can I draw one for each gender separately?
Here is a simple solution using base graphics:
# Simulated scores and a gender factor of matching length (2 groups of 100)
scores <- rnorm(200, mean=12, sd=2)
gender <- gl(2, 100, labels=c("M","F"))
opar <- par(mfrow=c(1,2))
for (g in levels(gender))
  qqnorm(scores[gender==g], main=paste("Gender =", g))
par(opar)
A more elegant solution uses the lattice package:
library(lattice)
qqmath(~ scores | gender, data=data.frame(scores, gender), type=c("p", "g"))
See the online help for qqmath for more discussion and examples of possible customization.
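As one example of such customization (my own sketch, not taken from the qqmath help page), a panel function can overlay a reference line through the quartiles in each panel:
# Same display, with a reference line through the quartiles added per panel
qqmath(~ scores | gender, data = data.frame(scores, gender),
       panel = function(x, ...) {
         panel.qqmathline(x, ...)   # reference line through the quartiles
         panel.qqmath(x, ...)       # the QQ points themselves
       })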
In Python, you have a QQ-plot method offered by the OpenTURNS library (see the doc here). Here is an example.
In a first step, we generate a random sample of size 300 from a Uniform distribution.
In a second step, we pretend that we do not know where this sample comes from and try to fit both a Normal distribution and a Uniform distribution to it.
In a third step, we draw the QQ plot of the sample against each of the fitted distributions in order to "see" which one is the best.
1st step:
import openturns as ot
from openturns.viewer import View
distribution = ot.Uniform(-1, 1)
sample = distribution.getSample(300)
2nd step:
fitted_normal = ot.NormalFactory().build(sample)
fitted_uniform = ot.UniformFactory().build(sample)
3rd step:
QQ_plot1 = ot.VisualTest.DrawQQplot(sample, fitted_normal)
QQ_plot2 = ot.VisualTest.DrawQQplot(sample,fitted_uniform)
View(QQ_plot1)
View(QQ_plot2)
As expected, the fitted Uniform is better adapted to the sample than the Normal, which has larger errors at both ends of the domain.
