How to format p-values in ggplot2 `annotate()`—APA style?

I would like to add a p-value to a scatter-plot, while respecting APA style. This entails two elements: (a) an italicized p, and (b) stripping the leading zero (but also: formatting values smaller than .001 as < .001).
We can strip the leading zero with a custom formatting function:
# Formatting function
format.p <- function(p, precision = 0.001) {
  digits <- -log(precision, base = 10)
  p <- formatC(p, format = 'f', digits = digits)
  p[p == formatC(0, format = 'f', digits = digits)] <- paste0('< ', precision)
  sub("0", "", p)
}
# Get p-value
(p = cor.test(mtcars$wt, mtcars$mpg)$p.value)
1.293959e-10
# Format p-value
(p = format.p(p))
"< .001"
# Make plot
library(ggplot2)
ggplot(mtcars,aes(x=wt,y=mpg)) +
stat_smooth(geom="line",method="lm")+
annotate(geom="text",label=paste0("p = ", p),x=4.5,y=25,size=8)
We can also achieve the italicized p like this:
ggplot(mtcars,aes(x=wt,y=mpg)) +
stat_smooth(geom="line",method="lm") +
(geom="text",label=paste0("italic('p')~'='",p),parse=T,x=4.5,y=25,size=8)
But notice then that we lost the stripped zero (the leading zero is back while we don't want it). Any idea how to fix this?

Solution provided by @rawr in the comments (thank you!)
The key was to change label=paste0("italic('p')~'='", p) to label=sprintf("italic('p')~'%s'", p).
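To see what this buys, here is a quick side-by-side of the two label strings (a small check of my own, using the formatted p from above):
p <- "< .001"
paste0("italic('p')~'='", p)    # "italic('p')~'='< .001"  -- '< .001' sits outside the quotes, so parse() re-reads the number and the leading zero returns
sprintf("italic('p')~'%s'", p)  # "italic('p')~'< .001'"   -- the whole '< .001' stays inside one quoted string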
Furthermore, in order to avoid having situations where the function would simultaneously output equal and smaller than signs (e.g., p = < .001), I have also modified the format.p() function to choose either < or = depending on the situation.
Here's the final solution:
# Formatting function
format.p <- function(p, precision = 0.001) {
  digits <- -log(precision, base = 10)
  if (p < precision) {
    p <- paste0('< ', precision)
  } else {
    p <- paste0('= ', formatC(p, format = 'f', digits = digits))
  }
  sub("0", "", p)
}
# Get p-value
(p = cor.test(mtcars$wt, mtcars$mpg)$p.value)
1.293959e-10
# Format p-value
(p = format.p(p))
"< .001"
# Make plot
library(ggplot2)
ggplot(mtcars,aes(x=wt,y=mpg)) +
stat_smooth(geom="line",method="lm")+
annotate(geom="text",label=sprintf("italic('p')~'%s'",p),parse=TRUE,x=4.5,y=25,size=8)


Area Under the Curve using Simpson's rule in R

I would like to compute the Area Under the Curve defined by a set of experimental values. I created a function to calculate an approximation of the AUC using Simpson's rule, as I saw in this post. However, the function only works when it receives a vector of odd length. How can I modify the code to add the area of the last trapezoid when the input vector has an even length?
AUC <- function(x, h=1){
  # AUC function computes the Area Under the Curve of a time series using
  # Simpson's Rule (numerical method).
  # https://link.springer.com/chapter/10.1007/978-1-4612-4974-0_26
  # Arguments
  # x: (vector) time series values
  # h: (int) temporal resolution of the time series. default h=1
  n = length(x)-1
  xValues = seq(from=1, to=n, by=2)
  sum <- list()
  for(i in 1:length(xValues)){
    n_sub <- xValues[[i]]-1
    n <- xValues[[i]]
    n_add <- xValues[[i]]+1
    v1 <- x[[n_sub+1]]
    v2 <- x[[n+1]]
    v3 <- x[[n_add+1]]
    s <- (h/3)*(v1+4*v2+v3)
    sum <- append(sum, s)
  }
  sum <- unlist(sum)
  auc <- sum(sum)
  return(auc)
}
Here a data example:
smoothed = c(0.3,0.317,0.379,0.452,0.519,0.573,0.61,0.629,0.628,0.613,0.587,0.556,0.521,
0.485,0.448,0.411,0.363,0.317,0.273,0.227,0.185,0.148,0.12,0.103,0.093,0.086,
0.082,0.079,0.076,0.071,0.066,0.059,0.053,0.051,0.052,0.057,0.067,0.081,0.103,
0.129,0.165,0.209,0.252,0.292,0.328,0.363,0.398,0.431,0.459,0.479,0.491,0.494,
0.488,0.475,0.457,0.43,0.397,0.357,0.316,0.285,0.254,0.227,0.206,0.189,0.181,
0.171,0.157,0.151,0.162,0.192,0.239)
One recommended way to handle an even number of points while keeping the extra precision is to combine Simpson's 1/3 rule with Simpson's 3/8 rule: the 3/8 rule covers the segments that the 1/3 rule cannot. This approach can be found in engineering textbooks on numerical methods.
However, as a practical matter, you can write a code chunk to check the data length and add a single trapezoid at the end, as was suggested in the last comment of the post to which you linked. I wouldn't assume that it is necessarily as precise as combining Simpson's 1/3 and 3/8 rules, but it is probably reasonable for many applications.
I would double-check my code edits below, but this is the basic idea.
AUC <- function(x, h=1){
  # AUC function computes the Area Under the Curve of a time series using
  # Simpson's Rule (numerical method).
  # https://link.springer.com/chapter/10.1007/978-1-4612-4974-0_26
  # Arguments
  # x: (vector) time series values
  # h: (int) temporal resolution of the time series. default h=1
  # jh edit: check for even data length
  # and chop off last data point if even
  nn = length(x)
  if(length(x) %% 2 == 0){
    xlast = x[length(x)]
    x = x[-length(x)]
  }
  n = length(x)-1
  xValues = seq(from=1, to=n, by=2)
  sum <- list()
  for(i in 1:length(xValues)){
    n_sub <- xValues[[i]]-1
    n <- xValues[[i]]
    n_add <- xValues[[i]]+1
    v1 <- x[[n_sub+1]]
    v2 <- x[[n+1]]
    v3 <- x[[n_add+1]]
    s <- (h/3)*(v1+4*v2+v3)
    sum <- append(sum, s)
  }
  sum <- unlist(sum)
  auc <- sum(sum)
  ## jh edit: add trapezoid for last two data points to result
  if(nn %% 2 == 0){
    auc <- auc + (x[length(x)] + xlast)/2 * h
  }
  return(auc)
}
sm = smoothed[-length(smoothed)]
length(sm)
[1] 70
#even data as an example
AUC(sm)
[1] 20.17633
#original odd data
AUC(smoothed)
[1] 20.389
There may be a good reason for you to prefer using Simpson's rule, but if you're just looking for a quick and efficient estimate of AUC, the trapezoid rule is far easier to implement, and does not require an even number of breaks:
AUC <- function(x, h = 1) sum((x[-1] + x[-length(x)]) / 2 * h)
AUC(smoothed)
#> [1] 20.3945
Here, I show example code that uses Simpson's 1/3 and 3/8 rules in tandem for the numerical integration of data. As always, the usual caveats about the possibility of coding errors or compatibility issues apply.
The output at the end compares the numerical estimates from this algorithm and from the trapezoidal rule against the reference value returned by R's integrate() function.
#Algorithm adapted from:
#Numerical Methods for Engineers, Seventh Edition,
#By Chapra and Canale, page 623
#Modified to accept data instead of functional values
#Modified by: Jeffrey Harkness, M.S.
##Begin Simpson's rule function code
simp13 <- function(dat, h = 1){
  ans = 2*h*(dat[1] + 4*dat[2] + dat[3])/6
  return(ans)
}
simp13m <- function(dat, h = 1){
  summ <- dat[1]
  n <- length(dat)
  nseq <- seq(2,(n-2),2)
  for(i in nseq){
    summ <- summ + 4*dat[i] + 2*dat[i+1]
  }
  summ <- summ + 4*dat[n-1] + dat[n]
  result <- (h*summ)/3
  return(result)
}
simp38 <- function(dat, h = 1){
  ans <- 3*h*(dat[1] + 3*sum(dat[2:3]) + dat[4])/8
  return(ans)
}
simpson = function(dat, h = 1){
  hin = h
  len = length(dat)
  comp <- len %% 2
  ##number of segments
  if(len == 2){
    ans = sum(dat)/2*h  ##n = 2 is the trapezoidal rule
  }
  if(len == 3){
    ans = simp13(dat, h = hin)
  }
  if(len == 4){
    ans = simp38(dat, h = hin)
  }
  if(len == 6){
    ans <- simp38(dat[1:4], h = hin) + simp13(dat[4:len], h = hin)
  }
  if(len > 6 & comp == 0){
    ans = simp38(dat[1:4], h = hin) + simp13m(dat[4:len], h = hin)
  }
  if(len >= 5 & comp == 1){
    ans = simp13m(dat, h = hin)
  }
  return(ans)
}
##End Simpson's rule function code
This next section of code shows the performance comparison. This code can easily be altered for different test functions and cases.
The precision difference tends to change with the sample size and test function used; this example is not intended to imply that the difference is always this pronounced.
#other algorithm for comparison purposes, from Allan Cameron above
oa <- function(x, h = 1) sum((x[-1] + x[-length(x)]) / 2 * h)
#Testing and algorithm comparison code
simans = NULL; oaans = NULL; simerr = NULL; oaerr = NULL; mp = NULL
for( j in 1:10){
  n = j
  #f = function(x) cos(x) + 2 ##Test functions
  f = function(x) 0.2 + 25*x - 200*x^2 + 675*x^3 - 900*x^4 + 400*x^5
  a = 0; b = 10
  h = (b-a)/n
  datain = seq(a,b,by = h)
  preans = integrate(f,a,b)$value #precise numerical estimate of test function
  simans[j] = simpson(f(datain), h = h)
  oaans[j] = oa(f(datain), h = h)
  (simerr[j] = abs(simans[j] - preans)/preans * 100)
  (oaerr[j] = abs(oaans[j] - preans)/preans * 100)
  mp[j] = simerr[j] < oaerr[j]
}
(outframe = data.frame("simpsons percent diff" = simerr,"trapezoidal percent diff" = oaerr, "more precise?" = mp, check.names = F))
simpsons percent diff trapezoidal percent diff more precise?
1 214.73489738 214.734897 FALSE
2 15.07958148 64.993410 TRUE
3 6.70203621 29.816799 TRUE
4 0.94247384 16.955208 TRUE
5 0.54830021 10.905620 TRUE
6 0.18616767 7.593825 TRUE
7 0.12051767 5.588209 TRUE
8 0.05890462 4.282980 TRUE
9 0.04087107 3.386525 TRUE
10 0.02412733 2.744500 TRUE
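And applied to the question's own data (a usage sketch; smoothed, simpson(), and oa() are as defined above):
simpson(smoothed, h = 1)   # 71 points (odd), so the composite 1/3 branch is used
oa(smoothed, h = 1)        # trapezoidal estimate, about 20.39 as shown earlier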

R substitute function prints c() concatenated into equation

I had an old function that worked like a charm:
lm_eqn = function(m) {
l <- list(a = format(coef(m)[1], digits = 2),
b = format(abs(coef(m)[2]), digits = 2),
r2 = format(summary(m)$r.squared, digits = 3));
eq <- substitute(italic(C)[i] == a + b %.% italic(I)[i]*","~~italic(r)^2~"="~r2,l)
as.character(as.expression(eq));
}
where m was an lm model. This would produce an equation like the following:
y = 0.3 + 4.4x, r^2 = 0.67
which could then be used in a ggplot to show the model formula with its graph. The problem is that the same equation now incorporates uncalled-for symbols:
y = c(0.3) + c(4.4)x, r^2 = 0.67
The concatenated c() is now included for each variable from the list I am accruing - and I don't know why. Does anyone know how to
a) prevent this, or
b) correct it?
Note: the problem seems to emerge in the substitution; the output of eq is:
"italic(y) == c(`(Intercept)` = \"0.3\") + c(x = \"4.4\") %.% italic(x) * \",\" ~ ~italic(r)^2 ~ \"=\" ~ \"0.67\""
It looks like substitute's output includes the c() for the intercept and slope.
edit
m in this case is a generic lm element. For example
x <- c(5,3,6,8,2,6)
y <- c(2,6,3,7,4,9)
test.lm <- lm(y~x)
lm_eqn(test.lm)
[1] "italic(C)[i] == c(`(Intercept)` = \"3.3\") + c(x = \"0.37\") %.% italic(I)[i] * \",\" ~ ~italic(r)^2 ~ \"=\" ~ \"0.0969\""
You apparently need to unname the coef() values:
lm_eqn = function(m) {
l <- list(a = format(unname(coef(m))[1], digits = 2),
b = format(abs(unname(coef(m))[2]), digits = 2),
r2 = format(summary(m)$r.squared, digits = 3));
eq <- bquote( italic(C)[i] == .(l$a) + .(l$b) %.% italic(I)[i]*","~~italic(r)^2~"="~.(l$r2))
as.character(as.expression(eq));
}
I also think you need to clarify exactly what you are hoping to see. At the moment you are creating an expression vector with two elements and then converting that to a character. The fact that ggplot requires character values for its "expressions" makes it quite difficult to look at a character value and figure out what will be displayed, so you should probably expand your test code to include the manner in which this value will be delivered. (It's much easier to look at a real R expression.) I think there are mechanisms that allow unevaluated expressions to be passed to ggplot annotations and titles, but they seem incredibly convoluted to my eyes.
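To see where the c() comes from in the first place, here is a small check of my own using the asker's test.lm: coef() returns a named vector, a named vector deparses as a c(...) call, and unname() strips the names.
format(coef(test.lm)[1], digits = 2)
# (Intercept)
#       "3.3"
format(unname(coef(test.lm))[1], digits = 2)
# [1] "3.3"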
Could also use substitute which requires specifying a list that has named elements.
lm_eqn = function(m) {
l <- list(a = format(unname(coef(m))[1], digits = 2),
b = format(abs(unname(coef(m))[2]), digits = 2),
r2 = format(summary(m)$r.squared, digits = 3));
eq <- substitute( italic(C)[i] == a + b %.% italic(I)[i]*","~~italic(r)^2 == r2, env=l)
as.character(as.expression(eq));
}
lm_eqn(test.lm)
[1] "italic(C)[i] == \"3.3\" + \"0.37\" %.% italic(I)[i] * \",\" ~ ~italic(r)^2 == \"0.0969\""

Plotting a sum in R Studio

I am trying to plot the following function in R Studio using the curve function as follows:
loglikelihood.func = function(x, mu){
n = length(x)
n*mu - sum(x) - sum(exp(mu)/(exp(x)))
}
curve(expr = loglikelihood.func(x = data, mu), xname = "mu", from = 0, to = 15)
Now, I have a vector of data that contains 50 data points, and in the function it is currently summing over both mu and x (my data), i.e. sum(exp(mu)/(exp(x))) is equivalent to sum(exp(mu))/sum(exp(x)).
I want the function to compute the sum of the fractions over each data point in x while keeping mu constant, i.e. exp(mu)/exp(x1) + exp(mu)/exp(x2) + exp(mu)/exp(x3) + ..., and to repeat this for each separate mu when it plots in the curve function.
If I change my function to exp(mu)/sum(exp(x)), it does exp(mu)/[exp(x1)+exp(x2)+...], which is not what I want. Can someone offer some advice here?
Edit:
This is a subset of my data,
data = c(8.5,8.9,9.1,8.9,8.4,9.7,9.1,9.6,8.7,9.3,9.6,9.3,8.7,9.0,8.8,8.9,8.9,12.2)
I'm not sure I get this right... but curve() can't do this all by itself. You can define your function:
loglikelihood.func = function(x, mu) {
length(x) * mu - sum(x) - sum(exp(mu)/(exp(x)))
}
Then define a range (for x) over which you'd like to plot it, and specify the limits (xlim, ylim) on a first plot. If you don't want to have a sequence, you can use your data instead.
xrange <- seq(from=1, to=10, by = 0.1)
plot(x=xrange, y = sapply(xrange, function(x) loglikelihood.func(x, mu=0)),
xlim = c(1, 10),
ylim = c(-10, 0),
type = "l")
Then add other curves, specifying different mu's:
lines(x=xrange, y = sapply(xrange, function(x) loglikelihood.func(x, mu=1)))
lines(x=xrange, y = sapply(xrange, function(x) loglikelihood.func(x, mu=2)))
(This is more practical to do with a loop if you don't need extra graph parameters; see the sketch below.)
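A possible loop version (a sketch of my own; it assumes xrange and loglikelihood.func as defined above and reuses the same axis limits):
plot(NULL, xlim = c(1, 10), ylim = c(-10, 0), xlab = "x", ylab = "log-likelihood")
for (mu in 0:2) {
  lines(x = xrange, y = sapply(xrange, function(x) loglikelihood.func(x, mu = mu)))
}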

Setting up a Horner polynomial in R

I am trying to set up a function in R that computes a polynomial
P(x) = c_1 + c_2*x + c_3*x^2 + ... + c_(n-1)*x^(n-2) + c_n*x^(n-1)
for various values of x and a given set of coefficients c.
Horner's method is to
Set b_n = c_n
For i = n-1, n-2, ..., 2, 1, set b_i = b_(i+1)*x + c_i
Return b_1 as the output (the value of the polynomial at x)
What I have so far:
hornerpoly1 <- function(x, coef, output = tail(coef,n=1), exp = seq_along(coef)-1) {
for(i in 1:tail(exp,n=1)) {
(output*x)+head(tail(coef,n=i),n=1)
}
}
hornerpoly <- function(x, coef) {
exp<-seq_along(coef)-1
output<-tail(coef,n=1)
if(length(coef)<2) {
stop("Must be more than one coefficient")
}
sapply(x, hornerpoly1, coef, output,exp)
}
I also need to error-check the length of coef; that's what the if statement is for, but that part I'm not struggling with. When I try to compute this function for x = 1:3 and coef = c(4,16,-1), I get three NULL values, and I can't figure out why. Any help on how to better construct this function or remedy the NULL output is appreciated. Let me know if I can make anything clearer.
How about the following:
Define a function that takes x as the argument at which to evaluate the polynomial, and coef as the vector of coefficients in increasing order of degree (the same convention as in the question). So the vector coef = c(-1, 16, 4) corresponds to P(x) = -1 + 16*x + 4*x^2.
The Horner algorithm is implemented in the following function:
f.horner <- function(x, coef) {
  n <- length(coef);
  b <- rep(0, n);
  b[n] <- coef[n];
  while (n > 0) {
    n <- n - 1;
    b[n] <- coef[n] + b[n + 1] * x;
  }
  return(b[1]);
}
We evaluate the polynomial at x = 1:3 for coef = c(-1, 16, 4):
sapply(1:3, f.horner, c(-1, 16, 4))
#[1] 19 47 83
Some final comments:
Note that the check on the length of coef is realised in the statement while (n > 0) {...}, i.e. we go through the coefficients starting from the last and stop when we reach the first coefficient.
You don't need to save the intermediate b values as a vector in the function; this is purely for (my) educational/trouble-shooting purposes. It's easy to rewrite the code to store only b's last value and update it every iteration. You could then also vectorise f.horner to take a vector of x values instead of only a scalar.
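Here is a sketch of that streamlined, vectorised version (my own code, not the answer's; it assumes coef is in increasing order of degree and has at least two elements, which is the check the question asked for anyway):
f.horner.vec <- function(x, coef) {
  if (length(coef) < 2) stop("Must be more than one coefficient")
  b <- rep(coef[length(coef)], length(x))  # seed with the leading coefficient
  for (i in (length(coef) - 1):1) {
    b <- b * x + coef[i]                   # Horner update, vectorised over x
  }
  b
}
f.horner.vec(1:3, c(-1, 16, 4))
# [1] 19 47 83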

fminsearch in R is worse than in Matlab

There is my data (x and y columns are relevant):
https://www.dropbox.com/s/b61a7enhoa0p57p/Simple1.csv
What I need is to fit the data with the polyline. Matlab code that does this is:
spline_fit.m:
function [score, params] = spline_fit (points, x, y)
    min_f = min(x)-1;
    max_f = max(x);
    points = [min_f points max_f];
    params = zeros(length(points)-1, 2);
    score = 0;
    for i = 1:length(points)-1
        in = (x > points(i)) & (x <= points(i+1));
        if sum(in) > 2
            p = polyfit(x(in), y(in), 1);
            pred = p(1)*x(in) + p(2);
            score = score + norm(pred - y(in));
            params(i, :) = p;
        else
            params(i, :) = nan;
        end
    end
test.m:
%Find the parameters
r = [100,250,400];
p = fminsearch('spline_fit', r, [], x, y)
[score, param] = spline_fit(p, x, y)
%Plot the result
y1 = zeros(size(x));
p1 = [-inf, p, inf];
for i = 1:size(param, 1)
    in = (x > p1(i)) & (x <= p1(i+1));
    y1(in) = x(in)*param(i,1) + param(i,2);
end
[x1, I] = sort(x);
y1 = y1(I);
plot(x,y,'x',x1,y1,'k','LineWidth', 2)
And this does work fine, producing the following optimized break points: [102.9842, 191.0006, 421.9912]
I've implemented the same idea in R:
library(pracma);
spline_fit <- function(x, xx, yy) {
  min_f = min(xx)-1;
  max_f = max(xx);
  points = c(min_f, x, max_f)
  params = array(0, c(length(points)-1, 2));
  score = 0;
  for( i in 1:length(points)-1)
  {
    inn <- (xx > points[i]) & (xx <= points[i+1]);
    if (sum(inn) > 2)
    {
      p <- polyfit(xx[inn], yy[inn], 1);
      pred <- p[1]*xx[inn] + p[2];
      score <- score + norm(as.matrix(pred - yy[inn]),"F");
      params[i,] <- p;
    }
    else
      params[i,] <- NA;
  }
  score
}
But I get very bad results:
> fminsearch(spline_fit,c(100,250,400), xx = Simple1$x, yy = Simple1$y)
$xval
[1] 100.1667 250.0000 400.0000
$fval
[1] 4452.761
$niter
[1] 2
As you can see, it stops after 2 iterations and doesn't produce good points.
I'll be very glad for any help in resolving this issue.
Also, if anyone knows how to implement this in C# using any free library, that would be even better. I know where to get polyfit, but not fminsearch.
The problem here is that the likelihood surface is very badly behaved -- there are both multiple minima and discontinuous jumps -- which will make the results you get with different optimizers almost arbitrary. I will admit that MATLAB's optimizers are remarkably robust, but I would say that it's pretty much a matter of chance (and where you start) whether an optimizer will get to the global minimum for this case, unless you use some form of stochastic global optimization such as simulated annealing.
I chose to use R's built-in optimizer (which uses Nelder-Mead by default) rather than fminsearch from the pracma package.
spline_fit <- function(x, xx = Simple1$x, yy = Simple1$y) {
  min_f = min(xx)-1
  max_f = max(xx)
  points = c(min_f, x, max_f)
  params = array(0, c(length(points)-1, 2))
  score = 0
  for( i in 1:(length(points)-1))
  {
    inn <- (xx > points[i]) & (xx <= points[i+1]);
    if (sum(inn) > 2)
    {
      p <- polyfit(xx[inn], yy[inn], 1);
      pred <- p[1]*xx[inn] + p[2];
      score <- score + norm(as.matrix(pred - yy[inn]),"F");
      params[i,] <- p;
    }
    else
      params[i,] <- NA;
  }
  score
}
library(pracma) ## for polyfit
Simple1 <- read.csv("Simple1.csv")
opt1 <- optim(fn = spline_fit, par = c(100, 250, 400), xx = Simple1$x, yy = Simple1$y)
opt1$par
## [1] 102.4365 201.5835 422.2503
This is better than the fminsearch results, but still different from the MATLAB results, and worse than them:
## Matlab results:
matlab_fit <- c(102.9842, 191.0006, 421.9912)
spline_fit(matlab_fit, xx = Simple1$x, yy = Simple1$y)
## 3724.3
opt1$value
## 3755.5 (worse)
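If you want to act on the simulated-annealing suggestion above, optim() also offers a "SANN" method. Here is a sketch (my own addition; SANN is stochastic, so the result depends on the seed and run length, and it is not guaranteed to beat either fit above):
set.seed(101)  # SANN is stochastic; fix the seed for repeatability
opt_sann <- optim(par = opt1$par, fn = spline_fit, method = "SANN",
                  xx = Simple1$x, yy = Simple1$y)
opt_sann$par
opt_sann$value   # compare with 3755.5 (optim above) and 3724.3 (the MATLAB point)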
The bbmle package offers an experimental/not very well documented set of tools for exploring optimization surfaces:
library(bbmle)
ss <- slice2D(fun=spline_fit,opt1$par,nt=51)
library(lattice)
A 2D "slice" around the optim-estimated parameters. The circles show the optim fit (solid) and the minimum value within each slice (open).
png("splom1.png")
print(splom(ss))
dev.off()
A 'slice' between the matlab and optim fits shows that the surface is quite rugged:
ss2 <- bbmle:::slicetrans(matlab_fit,opt1$par,spline_fit)
png("slice1.png")
print(plot(ss2))
dev.off()
