Can we access all data columns in a custom ggplot2 stat? - r

I would like to implement diagnostics for Cox proportional hazards model with ggplot2, by creating new stats functions and ggproto objects. The idea is to benefit from grouping (by color, facet_grid, etc.) for conditional computation of desired statistics (e.g. martingale residuals).
In the example below, the null model is refitted and martingales are computed for each cofactor level. The issues are:
Fit is done based on the data argument provided internally to the ggproto's compute_group method, not the actual dataset. This means the data frame is stripped of columns not explicitly defined in aes(), so it's up to the user to provide at least the "time to event" and "event indicator" columns each time (unless they're hardcoded in a wrapper for the stat_... function). Can compute_group somehow access the corresponding rows from the original dataset?
Row names for compute_group's data are unique per PANEL, which means they do not reflect the actual row names of the original dataset supplied to ggplot(), and we can no longer exclude e.g. incomplete cases omitted by the full model, unless some id variable is explicitly specified/hardcoded. The question is the same as above.
Can a geom layer access the y's computed by another custom stat which is also plotted? Consider a scatterplot of residuals with a smoother, such as ggplot(data, aes(x = covariate, time = time, event = event)) + stat_funcform() + geom_smooth(aes(y = ..martingales..)), where ..martingales.. are actually computed by stat_funcform.
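(A note on the third question: computed aesthetics are evaluated per layer, so one layer cannot read another layer's ..martingales... The usual workaround is to add the same stat twice with different geoms. A self-contained sketch of the pattern, using the built-in stat_smooth as a stand-in for the custom stat:)

```r
library(ggplot2)

# Both layers run the stat's computation independently; the second
# layer simply draws the same computed values with a different geom.
p <- ggplot(mpg, aes(displ, hwy)) +
  stat_smooth(geom = "line",  method = "loess", formula = y ~ x) +
  stat_smooth(geom = "point", method = "loess", formula = y ~ x)
```

With the custom stat from the question, the equivalent would hypothetically be stat_funcform() + stat_funcform(geom = "line"), at the cost of computing the residuals twice.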
# Load & Set data ---------------------------------------------------------
require(data.table)
require(dplyr)
require(ggplot2)
require(survival)
set.seed(1)
dt <- data.table(factor = sample(c(0, 2), 1e3, TRUE),
                 factor2 = sample(c(0, 2, NA), 1e3, TRUE),
                 covariate = runif(1e3, 0, 1))
dt[, `:=`(etime = rexp(1e3, .01 * (.5 + factor + covariate)),
          ctime = runif(1e3, 0, 1e2))]
dt[, `:=`(event = ifelse(etime < ctime, 1, 0),
          time = ifelse(etime < ctime, etime, ctime))]
survfit(Surv(time, event) ~ factor + I(covariate > .5), data = dt) %>%
  autoplot(fun = 'cloglog')  # autoplot() for survfit objects needs e.g. ggfortify
# Fit coxph model ---------------------------------------------------------
fit <- coxph(Surv(time, event) ~ factor + covariate, data = dt)
# Define custom stat ------------------------------------------------------
StatFuncform <- ggproto(
  "StatFuncform", Stat,
  compute_group = function(data, scales, params) {
    head(data) %>% print()
    data$y <- update(params$fit, ~ 1, data = data) %>%
      resid(type = 'martingale')
    return(data)
  }
)
stat_funcform <- function(mapping = NULL, data = NULL, geom = "point",
                          position = "identity", na.rm = FALSE, show.legend = NA,
                          inherit.aes = TRUE, ...) {
  layer(
    stat = StatFuncform, data = data, mapping = mapping, geom = geom,
    position = position, show.legend = show.legend, inherit.aes = inherit.aes,
    params = list(na.rm = na.rm, ...)
  )
}
# Plot --------------------------------------------------------------------
# computation unsuccessful - no 'time' and 'event' variables
ggplot(data = dt,
       mapping = aes(x = covariate,
                     color = factor(event))) +
  stat_funcform(params = list(fit = fit)) +
  facet_grid(~ factor, labeller = label_both, margins = TRUE)
# ok - 'time' and 'event' specified explicitly
ggplot(data = dt,
       mapping = aes(x = covariate, time = time, event = event,
                     factor2 = factor2,
                     color = factor(event))) +
  stat_funcform(params = list(fit = fit)) +
  facet_grid(~ factor, labeller = label_both, margins = TRUE)
I'm aware there are already a few packages implementing diagnostics for survival analysis (e.g. survminer) or ggplot2 autoplot methods, but they rather provide wrappers instead of taking advantage of the default ggplot() + stat_ semantics.

Related

ggplot2 custom stat not shown when facetting

I'm trying to write a custom stat_* for ggplot2, where I would like to color a 2D loess surface using tiles. When I start from the extension guide, I can write a stat_chull like they do:
stat_chull = function(mapping = NULL, data = NULL, geom = "polygon",
                      position = "identity", na.rm = FALSE, show.legend = NA,
                      inherit.aes = TRUE, ...) {
  chull = ggproto("chull", Stat,
    compute_group = function(data, scales) {
      data[chull(data$x, data$y), , drop = FALSE]
    },
    required_aes = c("x", "y")
  )
  layer(
    stat = chull, data = data, mapping = mapping, geom = geom,
    position = position, show.legend = show.legend, inherit.aes = inherit.aes,
    params = list(na.rm = na.rm, ...)
  )
}
This works for both the simple call and facet wrapping:
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  stat_chull()
# optionally + facet_wrap(~ class)
When I write my stat_loess2d, I can also visualize all classes or an individual class:
stat_loess2d = function(mapping = NULL, data = NULL, geom = "tile",
                        position = "identity", na.rm = FALSE, show.legend = NA,
                        inherit.aes = TRUE, ...) {
  loess2d = ggproto("loess2d", Stat,
    compute_group = function(data, scales) {
      dens = MASS::kde2d(data$x, data$y)
      lsurf = loess(fill ~ x + y, data = data)
      df = data.frame(x = rep(dens$x, length(dens$y)),
                      y = rep(dens$y, each = length(dens$x)),
                      dens = c(dens$z))
      df$fill = predict(lsurf, newdata = df[c("x", "y")])
      df
    },
    required_aes = c("x", "y", "fill")
  )
  layer(
    stat = loess2d, data = data, mapping = mapping, geom = geom,
    position = position, show.legend = show.legend, inherit.aes = inherit.aes,
    params = list(na.rm = na.rm, ...)
  )
}
ggplot(mpg, aes(x = displ, y = hwy, fill = year)) +
  geom_point(aes(color = year)) +
  stat_loess2d()
ggplot(mpg[mpg$class == "compact", ], aes(x = displ, y = hwy, fill = year)) +
  geom_point(aes(color = year)) +
  stat_loess2d()
However, when I try to facet the above, tiles are no longer shown:
ggplot(mpg, aes(x = displ, y = hwy, fill = year)) +
  geom_point(aes(color = year)) +
  stat_loess2d() +
  facet_wrap(~ class)
Can someone tell me what I'm doing wrong here?
Explanation
The main issue I see here actually lies beyond what you have done, & is related to how geom_tile handles tile creation across different facets, when the specific x / y axis values differ significantly. An older question demonstrated a similar phenomenon: geom_tile works fine with each facet's data on its own, but put them together, and the tiles shrink to match the smallest difference between different facets' values. This leaves vast amounts of white space in the plot layer, and usually gets progressively worse with every additional facet, until the tiles themselves become practically invisible.
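The shrinking is easy to reproduce without any custom stat, because geom_tile derives its default tile width from the smallest gap between x values pooled across all facets (toy data invented here for illustration):

```r
library(ggplot2)

# Facet "a" has x on an integer grid; facet "b" is offset by 0.05.
# Pooled together, the smallest gap between x values is 0.05, so
# geom_tile defaults to tiles only 0.05 wide in *both* facets.
toy <- data.frame(
  x = c(1, 2, 3, 1.05, 2.05, 3.05),
  y = 1,
  f = rep(c("a", "b"), each = 3)
)
resolution(toy$x)  # 0.05 - the default width geom_tile will use

p <- ggplot(toy, aes(x, y)) +
  geom_tile() +
  facet_wrap(~ f)
```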
To get around this, I would add a data processing step after the density / loess calculations for each facet, to standardize the range of x and y values across all facets.
Some elaboration in case you are not very familiar with the relationship between compute_layer, compute_panel, and compute_group (I certainly wasn't when I started messing around with ggproto objects...):
Essentially, all Stat* objects have these three functions to bridge the gap between a given dataframe (mpg in this case), and what's received by the Geom* side of things.
Of the three, compute_layer is the top-level function, and typically triggers compute_panel to calculate a separate dataframe for each facet / panel (the terminology used in exported functions is facet, but the underlying package code refers to the same as panel; I'm not sure why either). In turn, compute_panel triggers compute_group to calculate a separate dataframe for each group (as defined by the group / colour / fill / etc. aesthetic parameters).
The results from compute_group are returned to compute_panel and combined into one dataframe. Likewise, compute_layer receives one dataframe from each facet's compute_panel, and combines them together again. The combined dataframe is then passed on to Geom* to draw.
(Above is the generic set-up as defined in the top-level Stat. Other Stat* objects inheriting from Stat may override the behaviour in any of the steps. For example, StatIdentity's compute_layer returns the original dataframe as-is, without triggering compute_panel / compute_group at all, because there is no need to do so for unchanged data.)
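The dispatch order described above is easy to observe with a throwaway Stat that performs no computation but logs each compute_group call (a sketch for illustration only; StatTrace is an invented name):

```r
library(ggplot2)

# A stat that changes nothing, but prints one line per compute_group
# call, so you can see one call per PANEL x group combination.
StatTrace <- ggproto("StatTrace", Stat,
  required_aes = c("x", "y"),
  compute_group = function(data, scales) {
    cat("panel", as.character(data$PANEL[1]),
        "group", data$group[1], ":", nrow(data), "rows\n")
    data
  }
)

p <- ggplot(mpg, aes(displ, hwy, colour = drv)) +
  geom_point(stat = StatTrace) +
  facet_wrap(~ year)
# Printing p triggers the build, logging one line per panel/group pair
```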
For this use case, we can modify the code in compute_layer, after the results have been returned from compute_panel / compute_group and combined together, to interpolate values associated with each facet into common bins. Because common bins = nice large tiles without white space in between.
Modification
Here's how I would have written the loess2d ggproto object, with an additional definition for compute_layer:
loess2d = ggproto("loess2d", Stat,
  compute_group = function(data, scales) {
    dens = MASS::kde2d(data$x, data$y)
    lsurf = loess(fill ~ x + y, data = data)
    df = data.frame(x = rep(dens$x, length(dens$y)),
                    y = rep(dens$y, each = length(dens$x)),
                    dens = c(dens$z))
    df$fill = predict(lsurf, newdata = df[c("x", "y")])
    df
  },
  compute_layer = function(self, data, params, layout) {
    # no change from Stat$compute_layer in this chunk, except
    # for liberal usage of `ggplot2:::` to utilise un-exported
    # functions from the package
    ggplot2:::check_required_aesthetics(self$required_aes,
                                        c(names(data), names(params)),
                                        ggplot2:::snake_class(self))
    data <- remove_missing(data, params$na.rm,
                           c(self$required_aes, self$non_missing_aes),
                           ggplot2:::snake_class(self),
                           finite = TRUE)
    params <- params[intersect(names(params), self$parameters())]
    args <- c(list(data = quote(data), scales = quote(scales)), params)
    df <- plyr::ddply(data, "PANEL", function(data) {
      scales <- layout$get_scales(data$PANEL[1])
      tryCatch(do.call(self$compute_panel, args),
               error = function(e) {
                 warning("Computation failed in `", ggplot2:::snake_class(self),
                         "()`:\n", e$message, call. = FALSE)
                 data.frame()
               })
    })
    # define common x/y grid range across all panels
    # (length = 25 chosen to match the default value for n in MASS::kde2d)
    x.range <- seq(min(df$x), max(df$x), length = 25)
    y.range <- seq(min(df$y), max(df$y), length = 25)
    # interpolate each panel's data to a common grid,
    # with NA values for regions where each panel doesn't
    # have data (this can be changed via the extrap
    # parameter in akima::interp, but I think
    # extrapolating may create misleading visuals)
    df <- df %>%
      tidyr::nest(-PANEL) %>%
      mutate(data = purrr::map(data,
                               ~akima::interp(x = .x$x, y = .x$y, z = .x$fill,
                                              xo = x.range, yo = y.range,
                                              nx = 25, ny = 25) %>%
                                 akima::interp2xyz(data.frame = TRUE) %>%
                                 rename(fill = z))) %>%
      tidyr::unnest()
    return(df)
  },
  required_aes = c("x", "y", "fill")
)
Usage:
ggplot(mpg,
       aes(x = displ, y = hwy, fill = year)) +
  stat_loess2d() +
  facet_wrap(~ class)
# this does trigger warnings (not errors) because some of the facets contain
# very few observations; if we filter for facets with more rows of data
# in the original dataset, this wouldn't be an issue
ggplot(mpg %>% filter(!class %in% c("2seater", "minivan")),
       aes(x = displ, y = hwy, fill = year)) +
  stat_loess2d() +
  facet_wrap(~ class)
# no warnings triggered
# no warnings triggered

Creating geom / stat from scratch

I just started working with R not long ago, and I am currently trying to strengthen my visualization skills. What I want to do is create boxplots with mean diamonds as a layer on top (see the picture in the link below). I did not find any function that does this already, so I guess I have to create it myself.
What I was hoping to do was to create a geom or a stat that would allow something like this to work:
ggplot(data, aes(...)) +
  geom_boxplot(...) +
  geom_meanDiamonds(...)
I have no idea where to start in order to build this new function. I know which values are needed for the mean diamonds (mean and confidence interval), but I do not know how to build the geom / stat that takes the data from ggplot(), calculates the mean and CI for each group, and plots a mean diamond on top of each boxplot.
I have searched for detailed descriptions on how to build this type of function from scratch; however, I have not found anything that really starts from the bottom. I would really appreciate it if anyone could point me towards some useful guides.
Thank you!
I'm currently learning to write geoms myself, so this is going to be a rather long & rambling post as I go through my thought processes, untangling the Geom aspects (creating polygons & line segments) from the Stats aspects (calculating where these polygons & segments should be) of a geom.
Disclaimer: I'm not familiar with this kind of plot, and Google didn't throw up many authoritative guides. My understanding of how the confidence interval is calculated / used here may be off.
Step 0. Understand the relationship between a geom / stat and a layer function.
geom_boxplot and stat_boxplot are examples of layer functions. If you enter them into the R console, you'll see that they are (relatively) short and do not contain the actual code for calculating the box / whiskers of the boxplot. Instead, geom_boxplot contains a line that says geom = GeomBoxplot, while stat_boxplot contains a line that says stat = StatBoxplot (reproduced below).
> stat_boxplot
function (mapping = NULL, data = NULL, geom = "boxplot", position = "dodge2",
    ..., coef = 1.5, na.rm = FALSE, show.legend = NA, inherit.aes = TRUE)
{
    layer(data = data, mapping = mapping, stat = StatBoxplot,
        geom = geom, position = position, show.legend = show.legend,
        inherit.aes = inherit.aes, params = list(na.rm = na.rm,
            coef = coef, ...))
}
GeomBoxplot and StatBoxplot are ggproto objects. They are where the magic happens.
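Since ggproto objects are environments under the hood, their members can be listed and pulled out interactively, which is handy when deciding what to override (standard R tooling, not specific to this answer):

```r
library(ggplot2)

# ggproto objects are environments; ls() reveals what each one defines,
# and $ extracts an individual member (a function, a field, etc.).
ls(StatBoxplot)           # includes "compute_group", among others
StatBoxplot$required_aes  # which aesthetics the stat insists on
class(GeomBoxplot)        # the ggproto inheritance chain
```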
Step 1. Recognise that ggproto()'s _inherit parameter is your friend.
Don't reinvent the wheel. Since we want to create something that overlaps nicely with a boxplot, we can take reference from the Geom / Stat used for that, and only change what's necessary.
StatMeanDiamonds <- ggproto(
  `_class` = "StatMeanDiamonds",
  `_inherit` = StatBoxplot,
  ... # add functions here to override those defined in StatBoxplot
)
GeomMeanDiamonds <- ggproto(
  `_class` = "GeomMeanDiamonds",
  `_inherit` = GeomBoxplot,
  ... # as above
)
Step 2. Modify the Stat.
There are 3 functions defined within StatBoxplot: setup_data, setup_params, and compute_group. You can refer to the code on GitHub (link above) for the details, or view them by entering, for example, StatBoxplot$compute_group.
The compute_group function calculates the ymin / lower / middle / upper / ymax values for all the y values associated with each group (i.e. each unique x value), which are used to plot the box plot. We can override it with one that calculates the confidence interval & mean values instead:
# ci is added as a parameter, to allow the user to specify different confidence intervals
compute_group_new <- function(data, scales, width = NULL,
                              ci = 0.95, na.rm = FALSE) {
  a <- mean(data$y)
  s <- sd(data$y)
  n <- sum(!is.na(data$y))
  error <- qt(ci + (1 - ci) / 2, df = n - 1) * s / sqrt(n)
  stats <- c("lower" = a - error, "mean" = a, "upper" = a + error)
  if (length(unique(data$x)) > 1) width <- diff(range(data$x)) * 0.9
  df <- as.data.frame(as.list(stats))
  df$x <- if (is.factor(data$x)) data$x[1] else mean(range(data$x))
  df$width <- width
  df
}
(Optional) StatBoxplot has provision for the user to include weight as an aesthetic mapping. We can allow for that as well, by replacing:
a <- mean(data$y)
s <- sd(data$y)
n <- sum(!is.na(data$y))
with:
if (!is.null(data$weight)) {
  a <- Hmisc::wtd.mean(data$y, weights = data$weight)
  s <- sqrt(Hmisc::wtd.var(data$y, weights = data$weight))
  n <- sum(data$weight[!is.na(data$y) & !is.na(data$weight)])
} else {
  a <- mean(data$y)
  s <- sd(data$y)
  n <- sum(!is.na(data$y))
}
There's no need to change the other functions in StatBoxplot. So we can define StatMeanDiamonds as follows:
StatMeanDiamonds <- ggproto(
  `_class` = "StatMeanDiamonds",
  `_inherit` = StatBoxplot,
  compute_group = compute_group_new
)
Step 3. Modify the Geom.
GeomBoxplot has 3 functions: setup_data, draw_group, and draw_key. It also includes definitions for default_aes and required_aes.
Since we've changed the upstream data source (the data produced by StatMeanDiamonds contain the calculated columns "lower" / "mean" / "upper", while the data produced by StatBoxplot would have contained the calculated columns "ymin" / "lower" / "middle" / "upper" / "ymax"), do check whether the downstream setup_data function is affected as well. (In this case, GeomBoxplot$setup_data makes no reference to the affected columns, so no changes required here.)
The draw_group function takes the data produced by StatMeanDiamonds and set up by setup_data, and produces multiple data frames: "common" contains the aesthetic mappings common to all geoms, "diamond.df" the mappings that contribute towards the diamond polygon, and "segment.df" the mappings that contribute towards the horizontal line segment at the mean. The data frames are then passed to the draw_panel functions of GeomPolygon and GeomSegment respectively, to produce the actual polygons / line segments.
draw_group_new = function(data, panel_params, coord,
                          varwidth = FALSE) {
  common <- data.frame(colour = data$colour,
                       size = data$size,
                       linetype = data$linetype,
                       fill = alpha(data$fill, data$alpha),
                       group = data$group,
                       stringsAsFactors = FALSE)
  diamond.df <- data.frame(x = c(data$x, data$xmax, data$x, data$xmin),
                           y = c(data$upper, data$mean, data$lower, data$mean),
                           alpha = data$alpha,
                           common,
                           stringsAsFactors = FALSE)
  segment.df <- data.frame(x = data$xmin, xend = data$xmax,
                           y = data$mean, yend = data$mean,
                           alpha = NA,
                           common,
                           stringsAsFactors = FALSE)
  ggplot2:::ggname("geom_meanDiamonds",
                   grid::grobTree(
                     GeomPolygon$draw_panel(diamond.df, panel_params, coord),
                     GeomSegment$draw_panel(segment.df, panel_params, coord)
                   ))
}
The draw_key function is used to create the legend for this layer, should the need arise. Since GeomMeanDiamonds inherits from GeomBoxplot, the default is draw_key = draw_key_boxplot, and we don't have to change it. Leaving it unchanged will not break the code. However, I think a simpler legend such as draw_key_polygon offers a less cluttered look.
GeomBoxplot's default_aes specifications look fine. But we need to change the required_aes since the data we expect to get from StatMeanDiamonds is different ("lower" / "mean" / "upper" instead of "ymin" / "lower" / "middle" / "upper" / "ymax").
We are now ready to define GeomMeanDiamonds:
GeomMeanDiamonds <- ggproto(
  "GeomMeanDiamonds",
  GeomBoxplot,
  draw_group = draw_group_new,
  draw_key = draw_key_polygon,
  required_aes = c("x", "lower", "upper", "mean")
)
Step 4. Define the layer functions.
This is the boring part. I copied from geom_boxplot / stat_boxplot directly, removing all references to outliers in geom_meanDiamonds, changing to geom = GeomMeanDiamonds / stat = StatMeanDiamonds, and adding ci = 0.95 to stat_meanDiamonds.
geom_meanDiamonds <- function(mapping = NULL, data = NULL,
                              stat = "meanDiamonds", position = "dodge2",
                              ..., varwidth = FALSE, na.rm = FALSE, show.legend = NA,
                              inherit.aes = TRUE) {
  if (is.character(position)) {
    if (varwidth == TRUE) position <- position_dodge2(preserve = "single")
  } else {
    if (identical(position$preserve, "total") & varwidth == TRUE) {
      warning("Can't preserve total widths when varwidth = TRUE.", call. = FALSE)
      position$preserve <- "single"
    }
  }
  layer(data = data, mapping = mapping, stat = stat,
        geom = GeomMeanDiamonds, position = position,
        show.legend = show.legend, inherit.aes = inherit.aes,
        params = list(varwidth = varwidth, na.rm = na.rm, ...))
}
stat_meanDiamonds <- function(mapping = NULL, data = NULL,
                              geom = "meanDiamonds", position = "dodge2",
                              ..., ci = 0.95,
                              na.rm = FALSE, show.legend = NA, inherit.aes = TRUE) {
  layer(data = data, mapping = mapping, stat = StatMeanDiamonds,
        geom = geom, position = position, show.legend = show.legend,
        inherit.aes = inherit.aes,
        params = list(na.rm = na.rm, ci = ci, ...))
}
Step 5. Check output.
# basic
ggplot(iris,
       aes(Species, Sepal.Length)) +
  geom_boxplot() +
  geom_meanDiamonds()
# with additional parameters, to see if they break anything
ggplot(iris,
       aes(Species, Sepal.Length)) +
  geom_boxplot(width = 0.8) +
  geom_meanDiamonds(aes(fill = Species),
                    color = "red", alpha = 0.5, size = 1,
                    ci = 0.99, width = 0.3)

Constrict ggplot ellipse to realistic/possible values

When plotting an ellipse with ggplot, is it possible to constrain the ellipse to values that are actually possible?
For example, the following reproducible code and data plot Ele vs. Var for two species. Var is a positive variable and cannot be negative. Nonetheless, negative values are included in the resulting ellipse. Is it possible to bound the ellipse by 0 on the x-axis (using ggplot)?
More specifically, I am picturing a flat edge with the ellipsoids truncated at 0 on the x-axis.
library(ggplot2)
set.seed(123)
df <- data.frame(Species = rep(c("BHS", "MTG"), each = 100),
                 Ele = c(sample(1500:3000, 100), sample(2500:3500, 100)),
                 Var = abs(rnorm(200)))
ggplot(df, aes(Var, Ele, color = Species)) +
  geom_point() +
  stat_ellipse(aes(fill = Species), geom = "polygon", level = 0.95, alpha = 0.2)
You could edit the default stat to clip points to a particular value. Here we change the basic stat to trim x values less than 0 to 0:
StatClipEllipse <- ggproto("StatClipEllipse", Stat,
  required_aes = c("x", "y"),
  compute_group = function(data, scales, type = "t", level = 0.95,
                           segments = 51, na.rm = FALSE) {
    xx <- ggplot2:::calculate_ellipse(data = data, vars = c("x", "y"), type = type,
                                      level = level, segments = segments)
    xx %>% mutate(x = pmax(x, 0))
  }
)
Then we have to wrap it in a ggplot stat that is identical to stat_ellipse, except that it uses our custom Stat object:
stat_clip_ellipse <- function(mapping = NULL, data = NULL,
                              geom = "path", position = "identity",
                              ...,
                              type = "t",
                              level = 0.95,
                              segments = 51,
                              na.rm = FALSE,
                              show.legend = NA,
                              inherit.aes = TRUE) {
  layer(
    data = data,
    mapping = mapping,
    stat = StatClipEllipse,
    geom = geom,
    position = position,
    show.legend = show.legend,
    inherit.aes = inherit.aes,
    params = list(
      type = type,
      level = level,
      segments = segments,
      na.rm = na.rm,
      ...
    )
  )
}
Then you can use it to make your plot:
ggplot(df, aes(Var, Ele, color = Species)) +
  geom_point() +
  stat_clip_ellipse(aes(fill = Species), geom = "polygon", level = 0.95, alpha = 0.2)
This was inspired by the source code for stat_ellipse.
Based on my comment above, I created a less misleading option for visualization. This ignores the problem of y being uniformly distributed, since that's a somewhat less egregious problem than the heavily skewed x variable.
Both these options use the ggforce package, which is an extension of ggplot2, but just in case, I've also included the source for the particular function I used.
library(ggforce)
library(scales)
# power_trans <- function(n) {
#   scales::trans_new(name = paste0("power of ", fractions(n)),
#                     transform = function(x) x^n,
#                     inverse = function(x) x^(1/n),
#                     breaks = scales::extended_breaks(),
#                     format = scales::format_format(),
#                     domain = c(0, Inf))
# }
Option 1:
ggplot(df, aes(Var, Ele, color = Species)) +
  geom_point() +
  stat_ellipse(aes(fill = Species), geom = "polygon", level = 0.95, alpha = 0.2) +
  scale_x_sqrt(limits = c(-0.1, 3.5),
               breaks = c(0.0001, 1:4),
               labels = 0:4,
               expand = c(0.00, 0))
This option stretches the x-axis along a square-root transform, spreading out the points clustered near zero. Then it computes an ellipse over this new space.
Advantage: looks like an ellipse still.
Disadvantage: in order to get it to play nice and label the Var=0 point on the x axis, you have to use expand = c(0,0), which clips the limits exactly, and so requires a bit more fiddling with manual limits/breaks/labels, including choosing a very small value (0.0001) to be represented as 0.
Disadvantage: the x values aren't linearly distributed along the axis, which requires a bit more cognitive load when reading the figure.
Option 2:
ggplot(df, aes(sqrt(Var), Ele, color = Species)) +
  geom_point() +
  stat_ellipse() +
  coord_trans(x = ggforce::power_trans(2)) +
  scale_x_continuous(breaks = sqrt(0:4), labels = 0:4,
                     name = "Var")
This option plots the pre-transformed sqrt(Var) (notice the aes(...)). It then calculates the ellipses based on this new approximately normal value. Then it stretches out the x-axis so that the values of Var are once again linearly spaced, which distorts the ellipse in the same transformation.
Advantage: looks cool.
Advantage: values of Var are easy to interpret on the x-axis.
Advantage: you can see the density near Var=0 with the points and the wide flat end of the "egg" easily.
Advantage: the pointy end shows you how low the density is at those values.
Disadvantage: looks unfamiliar and requires explanation and additional cognitive load to interpret.

Q-Q plot with ggplot2::stat_qq, colours, single group

I'm looking for a more convenient way to get a Q-Q plot in ggplot2 where the quantiles are computed for the data set as a whole, but where I can use mappings (colour/shapes) for groups in the data.
library(dplyr)
library(ggplot2)
library(broom) ## for augment()
Make up some data:
set.seed(1001)
N <- 1000
G <- 10
dd <- data_frame(x = runif(N),
                 f = factor(sample(1:G, size = N, replace = TRUE)),
                 y = rnorm(N) + 2 * x + as.numeric(f))
m1 <- lm(y ~ x, data = dd)
dda <- cbind(augment(m1), f = dd$f)
Basic plot:
ggplot(dda)+stat_qq(aes(sample=.resid))
If I try to add colour, the groups get separated for the quantile computation (which I don't want):
ggplot(dda)+stat_qq(aes(sample=y,colour=f))
If I use stat_qq(aes(sample=y,colour=f,group=1)) ggplot ignores the colour specification and I get the first plot back.
I want a plot where the points are positioned as in the first case, but coloured as in the second case. I have a qqnorm-based manual solution that I can post but am looking for something nicer ...
You could calculate the quantiles yourself and then plot using geom_point:
dda = cbind(dda, setNames(qqnorm(dda$.resid, plot.it = FALSE),
                          c("Theoretical", "Sample")))
ggplot(dda) +
  geom_point(aes(x = Theoretical, y = Sample, colour = f))
Ah, I guess I should have read to the end of your question. This is the manual solution you were referring to, right? Although you could just package it as a function:
my_stat_qq = function(data, colour.var) {
  data = cbind(data, setNames(qqnorm(data$.resid, plot.it = FALSE),
                              c("Theoretical", "Sample")))
  ggplot(data) +
    geom_point(aes_string(x = "Theoretical", y = "Sample", colour = colour.var))
}
my_stat_qq(dda, "f")
Here's a ggproto-based approach that attempts to change StatQq, since the underlying issue here (colour specification gets ignored when group is specified explicitly) is due to how its compute_group function is coded.
Define alternate version of StatQq with modified compute_group (last few lines of code):
StatQq2 <- ggproto("StatQq2", Stat,
  default_aes = aes(y = after_stat(sample), x = after_stat(theoretical)),
  required_aes = c("sample"),
  compute_group = function(data, scales, quantiles = NULL,
                           distribution = stats::qnorm, dparams = list(),
                           na.rm = FALSE) {
    sample <- sort(data$sample)
    n <- length(sample)
    # Compute theoretical quantiles
    if (is.null(quantiles)) {
      quantiles <- stats::ppoints(n)
    } else if (length(quantiles) != n) {
      stop("length of quantiles must match length of data")
    }
    theoretical <- do.call(distribution, c(list(p = quote(quantiles)), dparams))
    res <- ggplot2:::new_data_frame(list(sample = sample,
                                         theoretical = theoretical))
    # NEW: append remaining columns from original data
    # (e.g. if there were other aesthetic variables),
    # instead of returning res directly; order() lines the
    # extra columns up with the sorted sample values
    data.new <- subset(data[order(data$sample), ],
                       select = -c(sample, PANEL, group))
    if (ncol(data.new) > 0) res <- cbind(res, data.new)
    res
  }
)
Define geom_qq2 / stat_qq2 to use modified StatQq2 instead of StatQq for their stat:
geom_qq2 <- function(mapping = NULL, data = NULL, geom = "point",
                     position = "identity", ..., distribution = stats::qnorm,
                     dparams = list(), na.rm = FALSE, show.legend = NA,
                     inherit.aes = TRUE) {
  layer(data = data, mapping = mapping, stat = StatQq2, geom = geom,
        position = position, show.legend = show.legend, inherit.aes = inherit.aes,
        params = list(distribution = distribution, dparams = dparams,
                      na.rm = na.rm, ...))
}
stat_qq2 <- function(mapping = NULL, data = NULL, geom = "point",
                     position = "identity", ..., distribution = stats::qnorm,
                     dparams = list(), na.rm = FALSE, show.legend = NA,
                     inherit.aes = TRUE) {
  layer(data = data, mapping = mapping, stat = StatQq2, geom = geom,
        position = position, show.legend = show.legend, inherit.aes = inherit.aes,
        params = list(distribution = distribution, dparams = dparams,
                      na.rm = na.rm, ...))
}
Usage:
cowplot::plot_grid(
  ggplot(dda) + stat_qq(aes(sample = .resid)),  # original
  ggplot(dda) + stat_qq2(aes(sample = .resid,   # new
                             color = f, group = 1))
)

Why is bins parameter unknown for the stat_density2d function? (ggmap)

I have been stuck with this for hours. When I run this:
library(ggmap)
set.seed(1)
n=100
df <- data.frame(x=rnorm(n, 0, 1), y=rnorm(n, 0, 1))
TestData <- ggplot(data = df) +
  stat_density2d(aes(x = x, y = y, fill = as.factor(..level..)), bins = 4, geom = "polygon") +
  geom_point(aes(x = x, y = y)) +
  scale_fill_manual(values = c("yellow", "red", "green", "royalblue", "black"))
I get this error message :
Error: Unknown parameters: bins
Does anyone know why?
Okay, adding this one as a second answer because I think the descriptions and comments in the first answer are useful and I don't feel like merging them. Basically I figured there must be an easy way to restore the regressed functionality. And after a while, and after learning some basics about ggplot2, I got this to work by overriding some ggplot2 functions:
library(ggmap)
library(ggplot2)
# -------------------------------
# start copy from stat-density-2d.R
stat_density_2d <- function(mapping = NULL, data = NULL, geom = "density_2d",
                            position = "identity", contour = TRUE,
                            n = 100, h = NULL, na.rm = FALSE, bins = 0,
                            show.legend = NA, inherit.aes = TRUE, ...) {
  layer(
    data = data,
    mapping = mapping,
    stat = StatDensity2d,
    geom = geom,
    position = position,
    show.legend = show.legend,
    inherit.aes = inherit.aes,
    params = list(
      na.rm = na.rm,
      contour = contour,
      n = n,
      bins = bins,
      ...
    )
  )
}
stat_density2d <- stat_density_2d

StatDensity2d <-
  ggproto("StatDensity2d", Stat,
    default_aes = aes(colour = "#3366FF", size = 0.5),
    required_aes = c("x", "y"),
    compute_group = function(data, scales, na.rm = FALSE, h = NULL,
                             contour = TRUE, n = 100, bins = 0) {
      if (is.null(h)) {
        h <- c(MASS::bandwidth.nrd(data$x), MASS::bandwidth.nrd(data$y))
      }
      dens <- MASS::kde2d(
        data$x, data$y, h = h, n = n,
        lims = c(scales$x$dimension(), scales$y$dimension())
      )
      df <- data.frame(expand.grid(x = dens$x, y = dens$y), z = as.vector(dens$z))
      df$group <- data$group[1]
      if (contour) {
        # StatContour$compute_panel(df, scales, bins = bins, ...) # bad dots...
        if (bins > 0) {
          StatContour$compute_panel(df, scales, bins)
        } else {
          StatContour$compute_panel(df, scales)
        }
      } else {
        names(df) <- c("x", "y", "density", "group")
        df$level <- 1
        df$piece <- 1
        df
      }
    }
  )
# end copy from stat-density-2d.R
# -------------------------------
set.seed(1)
n=100
df <- data.frame(x=rnorm(n, 0, 1), y=rnorm(n, 0, 1))
TestData <- ggplot(data = df) +
  stat_density2d(aes(x = x, y = y, fill = as.factor(..level..)), bins = 5, geom = "polygon") +
  geom_point(aes(x = x, y = y)) +
  scale_fill_manual(values = c("yellow", "red", "green", "royalblue", "black"))
print(TestData)
Which yields the result. Note that varying the bins parameter has the desired effect now, which cannot be replicated by varying the n parameter.
update:
After an extended discussion with Roland (see the comments), he determined it is probably a regression bug and filed a bug report.
As the question is "why is the bins parameter unknown?", and I spent a fair amount of time researching it, I will answer it.
Your example obviously comes from this link from Oct 2013, where the parameter is used: How to correctly interpret ggplot's stat_density2d
However it was never a documented parameter, and it is not clear that it is being used there. It is probably a parameter that is being passed through to other libraries (like MASS) that are being used by stat_density2d.
We can get the code to work by getting rid of the scale_fill_manual call and using this:
library(ggmap)
set.seed(1)
n = 100
df <- data.frame(x = rnorm(n, 0, 1), y = rnorm(n, 0, 1))
TestData <- ggplot(data = df) +
  stat_density2d(aes(x = x, y = y, fill = as.factor(..level..)), geom = "polygon") +
  geom_point(aes(x = x, y = y))
# scale_fill_manual(values = c("yellow", "red", "green", "royalblue", "black"))
print(TestData)
which yields this:
Since this looks rather different from the plot posted in the original Oct 2013 link, I would say that stat_density2d has been extensively rewritten since then (or maybe MASS::kde2d, or another MASS routine), and that the bins parameter is no longer accepted. It is also not clear that the parameter ever did anything (read the link).
You can however vary the parameters n and h, which are also undocumented from the stat_density2d point of view (as far as I can tell).
