Programming a QQ plot in R

I have a sample of math test scores for male and female students. I want to draw QQ plot for each gender to see if each of them is normally distributed. I know how to draw the QQ plot for the overall sample, but how can I draw them separately?

Here is a simple solution using base graphics:
scores <- rnorm(200, mean=12, sd=2)
gender <- gl(2, 100, labels=c("M","F"))  # 2 levels x 100 reps = 200 values, matching scores
opar <- par(mfrow=c(1,2))
for (g in levels(gender))
  qqnorm(scores[gender==g], main=paste("Gender =", g))
par(opar)
A more elegant solution uses the lattice package:
library(lattice)
qqmath(~ scores | gender, data=data.frame(scores, gender), type=c("p", "g"))
See the online help for qqmath for more discussion and examples of possible customizations.
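For instance, one possible customization (a sketch following the standard lattice panel-function idiom, not part of the original answer) adds a reference line through the quartiles of each panel:
# overlay a reference line through the quartiles in each panel
qqmath(~ scores | gender, data = data.frame(scores, gender),
       type = c("p", "g"),
       panel = function(x, ...) {
         panel.qqmathline(x, distribution = qnorm)  # line through the quartiles
         panel.qqmath(x, ...)                       # the QQ points themselves
       })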

In Python, the OpenTURNS library offers a QQ-plot method (see its documentation). Here is an example.
In a first step, we generate a random sample of size 300 from a Uniform distribution.
In a second step, we consider that we do not know where this sample comes from and try to fit a Normal distribution and a Uniform distribution.
In a third step, we draw the QQ-plot of the sample against each of the fitted distributions in order to "see" which one is the best.
1st step:
import openturns as ot
from openturns.viewer import View
distribution = ot.Uniform(-1, 1)
sample = distribution.getSample(300)
2nd step:
fitted_normal = ot.NormalFactory().build(sample)
fitted_uniform = ot.UniformFactory().build(sample)
3rd step:
QQ_plot1 = ot.VisualTest.DrawQQplot(sample, fitted_normal)
QQ_plot2 = ot.VisualTest.DrawQQplot(sample, fitted_uniform)
View(QQ_plot1)
View(QQ_plot2)
As expected, the fitted Uniform distribution matches the sample better than the Normal, which shows larger errors at both ends of the domain.

Related

How do I find a true mixture density in R?

I don't understand how to plot the true mixture density. Below I have provided code that creates a function for simulating from a normal mixture. I understand how to create each individual density plot, but I do not know how to obtain the true density.
NormalMix <- function(n, omega, mu1, mu2, sigma1, sigma2) {
  # pick a component for each draw, then sample from that component
  z <- sample(2, n, replace=TRUE, prob=c(omega, 1-omega))
  n1 <- sum(z==1)
  n2 <- sum(z==2)
  z[z==1] <- rnorm(n1, mu1, sigma1)
  z[z==2] <- rnorm(n2, mu2, sigma2)
  z
}
My professor and I wrote this paper: D. S. Young, X. Chen, D. C. Hewage, and R. N. Poyanco (2018), "Finite Mixture-of-Gamma Distributions: Estimation, Inference, and Model-Based Clustering."
He also maintains an R package called mixtools: https://cran.r-project.org/web/packages/mixtools/vignettes/mixtools.pdf
For example, you can use wait1 <- normalmixEM(waiting, lambda = .5, mu = c(55, 80), sigma = 5) (see page 6 of the vignette) to estimate the parameters.
The package does not require you to supply the exact mean and standard deviation; it estimates them from the starting values you provide. You can then use the native R plot function to generate nice plots.
I hope it helps.
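Regarding the true density itself: a two-component normal mixture has density omega*dnorm(x, mu1, sigma1) + (1-omega)*dnorm(x, mu2, sigma2), so you can plot it directly. A minimal sketch (my addition, reusing the question's NormalMix for the simulated sample):
# true mixture density: weighted sum of the component densities
dNormalMix <- function(x, omega, mu1, mu2, sigma1, sigma2) {
  omega * dnorm(x, mu1, sigma1) + (1 - omega) * dnorm(x, mu2, sigma2)
}
# example: overlay the true density on a histogram of simulated draws
z <- NormalMix(1000, omega=0.3, mu1=0, mu2=4, sigma1=1, sigma2=1)
hist(z, freq=FALSE, breaks=30)
curve(dNormalMix(x, 0.3, 0, 4, 1, 1), add=TRUE, col="red")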

ROC curve based on means and variances of controls and cases

Does anyone know of an R package (or any other statistical freeware or just a piece of code) that lets you plot a smooth ROC curve knowing only the means and variances of the control and case groups? That is, one that doesn't require a dataset with specific classifier values and test outcomes. I found a couple of online graph plotters that do just that:
https://kennis-research.shinyapps.io/ROC-Curves/ ,
http://arogozhnikov.github.io/2015/10/05/roc-curve.html
Any help is appreciated.
I don't think you need any fancy package for this. You can just use simple probability functions in base R.
# group means and variances for controls (1) and cases (2)
m1 <- 0
m2 <- 2
v1 <- 4
v2 <- 4
# sweep a decision threshold across the range of the classifier
range <- seq(-10, 10, length.out=200)
d1 <- pnorm(range, m1, sd=sqrt(v1))  # CDF of the controls
d2 <- pnorm(range, m2, sd=sqrt(v2))  # CDF of the cases
tpr <- 1 - d2  # sensitivity at each threshold
fpr <- 1 - d1  # 1 - specificity at each threshold
plot(fpr, tpr, xlim=0:1, ylim=0:1, type="l")
abline(0, 1, lty=2)  # chance line
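Incidentally, for this binormal model the area under the curve also has a closed form, which you can use to check the plot (my addition, using the variables defined above):
# AUC for two normal groups: pnorm((m2 - m1) / sqrt(v1 + v2))
pnorm((m2 - m1) / sqrt(v1 + v2))  # about 0.76 for the values above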

R: How to read Nomograms to predict the desired variable

I am using RStudio. I have created nomograms with the nomogram function from the rms package using the following code (copied from the example code in the documentation):
library(rms)
n <- 1000 # define sample size
set.seed(17) # so can reproduce the results
age <- rnorm(n, 50, 10)
blood.pressure <- rnorm(n, 120, 15)
cholesterol <- rnorm(n, 200, 25)
sex <- factor(sample(c('female','male'), n,TRUE))
# Specify population model for log odds that Y=1
L <- .4*(sex=='male') + .045*(age-50) +
(log(cholesterol - 10)-5.2)*(-2*(sex=='female') + 2*(sex=='male'))
# Simulate binary y to have Prob(y=1) = 1/[1+exp(-L)]
y <- ifelse(runif(n) < plogis(L), 1, 0)
ddist <- datadist(age, blood.pressure, cholesterol, sex)
options(datadist='ddist')
f <- lrm(y ~ lsp(age,50)+sex*rcs(cholesterol,4)+blood.pressure)
nom <- nomogram(f, fun=function(x)1/(1+exp(-x)), # or fun=plogis
fun.at=c(.001,.01,.05,seq(.1,.9,by=.1),.95,.99,.999),
funlabel="Risk of Death")
#Instead of fun.at, could have specified fun.lp.at=logit of
#sequence above - faster and slightly more accurate
plot(nom, xfrac=.45)
Result: (nomogram plot not shown)
This code produces a nomogram, but there is no line connecting the scales (called an isopleth) to help predict the desired variable ("Risk of Death") from the plot. Nomograms usually have an isopleth for prediction (see the example from Wikipedia). Here, how do I predict the variable's value?
EDIT:
From the documentation:
The nomogram does not have lines representing sums, but it has a reference line for reading scoring points (default range 0-100). Once the reader manually totals the points, the predicted values can be read at the bottom.
I don't understand this. It seems that prediction is supposed to be done without the isopleth, using the points scale. But how? Can someone please elaborate, using this example, on how to read the nomogram to predict the desired variable? Thanks a lot!
EDIT 2 (FYI):
In the description of the bounty I was talking about the isopleth. When starting the bounty, I did not know that the nomogram function does not provide an isopleth and uses a points scale instead.
As the documentation explains, the nomogram is used to obtain a prediction manually:
At the top of the plot (above the Total Points axis):
1. Draw a vertical line up to the Points scale for each of your patient's variables (for example age=40, cholesterol=220 with sex=male, and blood.pressure=172).
2. Sum the three values you read off the Points scale (40+60+3=103) to obtain the Total Points.
3. Finally, draw a vertical line on the Total Points scale (103) to read off the Risk of Death (0.55).
These are regression nomograms, and they work differently from classic nomograms. A classic nomogram performs the full calculation graphically. For these nomograms, you drop a line from each predictor to the points scale at the bottom and add up your results.
The only ways to have a classic 'isopleth' nomogram for a regression model would be (1) to have just two predictors, or (2) to build a complex multi-step nomogram.
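One way to check a manual reading, assuming the lrm fit f from the question's code is still in the workspace, is to ask the fitted model for the same probability directly:
# check the manual nomogram reading against the model's own prediction
newpat <- data.frame(age=40, cholesterol=220, sex="male", blood.pressure=172)
predict(f, newpat, type="fitted")  # predicted Risk of Death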

Fit distribution to given frequency values in R

I have frequency values changing over time (x-axis units), as plotted by the code below. After some normalization, these values can be seen as data points of a density function for some distribution.
Q: Assuming that these frequency points come from a Weibull distribution T, how can I best fit a Weibull density function to the points so as to infer the parameters of T from it?
sample <- c(7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
611,1037,727,489,432,371,1125,69,595,624)
plot(1:length(sample), sample, type = "l")
points(1:length(sample), sample)
Update.
To avoid being misunderstood, I would like to add a little more explanation. By saying I have frequency values changing over time (x-axis units), I mean I have data which says that I have:
7787 realizations of value 1
3056 realizations of value 2
2359 realizations of value 3 ... etc.
One way towards my goal (an incorrect one, I think) would be to create the full set of these realizations:
# Expand the frequencies into individual realizations
set.values <- c()
for (i in 1:length(sample)) {
  set.values <- c(set.values, rep(i, times = sample[i]))
}
# (equivalently: set.values <- rep(seq_along(sample), times = sample))
hist(set.values)
lines(1:length(sample), sample)
points(1:length(sample), sample)
and use fitdistr from the MASS package on set.values:
library(MASS)
f2 <- fitdistr(set.values, 'weibull')
f2
Why do I think this is the incorrect way, and why am I looking for a better solution in R?
- the distribution-fitting approach presented above assumes that set.values is a complete set of my realizations from the distribution T
- in my original question I only know the points from the first part of the density curve; I do not know its tail, and I want to estimate the tail (and the whole density function)
Here is a better attempt. Like before, it uses optim to find the best parameters constrained to a box (defined by the lower and upper vectors in the optim call). Notice that it scales x and y as part of the optimization, in addition to the Weibull shape parameter, so we have 3 parameters to optimize over.
Unfortunately, when using all the points it pretty much always finds something on the edges of the constraining box, which indicates to me that a Weibull may not be a good fit for all of the data. The problem is the first two points: they are just too large. You can see the attempted fit to all the data in the first plot.
If I drop those first two points and just fit the rest, we get a much better fit. You can see this in the second plot. I think this is a good fit; it is in any case a local minimum in the interior of the constraining box.
sample <- c(60953,7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
            611,1037,727,489,432,371,1125,69,595,624)
t.sample <- 0:22
# drop the first two (outlying) points for the fit
s.fit <- sample[3:23]
t.fit <- t.sample[3:23]
# scaled Weibull density: param = (shape, y-scale, x-scale)
wx <- function(param) {
  param[2] * dweibull(t.fit * param[3], shape = param[1])
}
# root-mean-square error between the data and the scaled density
minwx <- function(param) {
  v <- s.fit - wx(param)
  sqrt(sum(v * v))
}
p0 <- c(1, 200, 1/20)
paramopt <- optim(p0, minwx, method = "L-BFGS-B",
                  lower = c(0.1, 100, 0.01), upper = c(1.1, 5000, 1))
popt <- paramopt$par
popt
rms <- paramopt$value
tit <- sprintf("Weibull - Shape:%.3f yscale:%.1f xscale:%.5f rms:%.1f",
               popt[1], popt[2], popt[3], rms)
plot(t.sample[2:23], sample[2:23], type = "p", col = "darkred")
lines(t.fit, wx(popt), col = "blue")
title(main = tit)
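Because the result is a full parametric density, you can also use it to extrapolate the tail asked about in the question; a small illustrative addition (not in the original answer) using the fitted popt:
# extrapolate the fitted, scaled Weibull density beyond the observed range
t.ext <- 23:60
lines(t.ext, popt[2] * dweibull(t.ext * popt[3], shape = popt[1]),
      col = "blue", lty = 2)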
You can directly calculate the maximum likelihood parameters, as described here.
# Error of the implicit maximum-likelihood equation for the shape k
k.diff <- function(k, vec){
  x2 <- seq(length(vec))
  abs(k^-1 + weighted.mean(log(x2), w = vec) -
      weighted.mean(log(x2), w = x2^k * vec))
}
# Drive the error to (nearly) zero, fulfilling the equation
k <- optimize(k.diff, vec=sample, interval=c(0.1, 5), tol=10^-7)$minimum
# Calculate lambda, given k: lambda = (weighted mean of x^k)^(1/k)
l <- weighted.mean(seq(length(sample))^k, w = sample)^(1/k)
# Plot the empirical density and the fitted Weibull density
plot(density(rep(seq(length(sample)), sample)))
x <- 1:25
lines(x, dweibull(x, shape=k, scale=l))
Assuming the data are from a Weibull distribution, you can get estimates of the shape and scale parameters like this:
library(MASS)
sample <- c(7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
            611,1037,727,489,432,371,1125,69,595,624)
f <- fitdistr(sample, 'weibull')
f
If you are not sure whether the data are Weibull-distributed, I would recommend using ks.test. It tests whether your data come from a hypothesized distribution. Given your knowledge of the nature of the data, you could test a few selected distributions and see which one works best.
For your example this would look like this:
ks = ks.test(sample, "pweibull", shape=f$estimate[1], scale=f$estimate[2])
ks
The p-value is not significant, hence you do not reject the hypothesis that the data come from a Weibull distribution.
Update: the histograms of both the Weibull and the exponential fits look like a good match to your data. I think the exponential distribution gives you a better fit. The Pareto distribution is another option.
f <- fitdistr(sample, 'weibull')
z <- rweibull(10000, shape = f$estimate[1], scale = f$estimate[2])
hist(z)
f <- fitdistr(sample, 'exponential')
z <- rexp(10000, f$estimate[1])
hist(z)

specifying degrees of freedom for b-spline fit using bs function in splines package

I am using the bs function of the splines package to create a b-spline smoothing curve for graphical purposes. (There is at least one report that Excel uses a third-order b-spline for its smooth line graphs, and I would like to be able to duplicate those curves.) I am having trouble understanding the arguments required by bs. Representative code, adapted from the bs documentation, follows below:
require(splines)
require(ggplot2)
n <- 10
x <- 1:10
y <- rnorm(n)
d <- data.frame(x=x, y=y)
summary(fm1 <- lm(y ~ bs(x, degree=3)), data=d)
x.spline <- seq(1, 10, length.out=n*10)
spline.data <- data.frame(x=x.spline, y=predict(fm1, data.frame(x=x.spline)))
ggplot(d, aes(x,y)) + geom_point + geom_line(aes(x,y), data=spline.data)
The example code in the bs documentation specifies df=5 in the call to bs and does not specify degree. I have no idea how many degrees of freedom I have. All I know is that I want a third-order b-spline. I have experimented with specifying different values of df instead of, or in addition to, degree, and I get dramatically different results. This is why I suspect that the specification of df is the issue. How would I calculate df in this context?
The help file suggests df = length(knots) + degree. If I treat the interior points as knots, this gives me df=11 for this example, which generates error messages and a nonsensical spline fit.
Thank you in advance.
I was apparently not clear in my intentions. I am trying to do this:
How can I use spline() with ggplot?, but with b-splines.
You should not be trying to fit every point. The goal is to find a summary that is an acceptable fit but depends on a limited number of knots. There is not much value in increasing the degree of the polynomial above the default of three. With only 10 points you surely do not want df=11. Try df=5 and the results should be reasonably flat; see the one-liner below. The rms/Hmisc package author, Frank Harrell, prefers restricted cubic splines because the predictions at the extremes are linear and thus less wild than would occur with other polynomial bases.
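For example, the question's model could be refit with an explicit df (an illustrative one-liner, assuming the d data frame from the question is loaded):
summary(fm1 <- lm(y ~ bs(x, df = 5), data = d))  # df=5 with degree 3 implies 2 interior knots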
I corrected a couple of misspellings and added a knots argument to make your code work:
require(splines)
require(ggplot2); set.seed(trunc(100000*pi))
n <- 10
x <- 1:10
y <- rnorm(n)
d <- data.frame(x=x, y=y)
summary(fm1 <- lm(y ~ bs(x, degree=3, knots=2), data=d))
x.spline <- seq(1, 10, length.out=n*10)
spline.data <- data.frame(x=x.spline, y=predict(fm1, data.frame(x=x.spline)))
ggplot(d, aes(x,y)) + geom_point() + geom_line(aes(x,y), data=spline.data)
I came away from the exercise of varying the random seed with the opinion that Frank Harrell knows what he is talking about. I don't get the same sort of behavior at the extremes when using his packages.
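For reference, a minimal sketch of that restricted-cubic-spline alternative using rms::ols with rcs (my assumption of how it would look, not code from the original answer; it reuses d and x.spline from the code above):
library(rms)
fm2 <- ols(y ~ rcs(x, 4), data = d)  # restricted cubic spline with 4 knots
spline.data3 <- data.frame(x = x.spline,
                           y = predict(fm2, data.frame(x = x.spline)))
ggplot(d, aes(x, y)) + geom_point() + geom_line(data = spline.data3)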
I did a little more work and came up with the following. First, an apology: what I was looking for was a smoothing spline rather than a regression spline; I did not have the vocabulary to phrase the question properly. While the example in the help file for bs() appears to provide this, the function does not provide the same behavior for my sample data. There is another function, smooth.spline, in the stats package, which offers what I needed.
set.seed(trunc(100000*pi))
n <- 10
x <- 1:n
xx <- seq(1, n, length.out=200)
y <- rnorm(n)
d <- data.frame(x=x, y=y)
# first approach: set the smoothing parameter manually
spl <- smooth.spline(x, y, spar=0.1)
spline.data <- data.frame(predict(spl, xx))  # columns x and y
ggplot(d, aes(x,y)) + geom_point() + geom_line(aes(x,y), spline.data)
# second approach: let smooth.spline search for spar within constrained bounds
spl2 <- smooth.spline(x, y, control.spar=
  list(trace=TRUE, tol=1e-6, low=-1.5, high=0.3))
spline.data2 <- data.frame(predict(spl2, xx))
ggplot(d, aes(x,y)) + geom_point() + geom_line(aes(x,y), spline.data2)
The two calls to smooth.spline represent two approaches: the first specifies the smoothing parameter manually, and the second iterates to an optimal solution. I found that I had to constrain the optimization properly to get the type of solution I was after.
The result is intended to match the b-spline used by the Excel line plot. I have collaborators who consider Excel graphics to be the standard, and I need to at least match that performance.
