Passing data and column names to ggplot via another function - r

I'll skip right to an example and comment afterwards:
cont <- data.frame(value = c(1:20),variable = c(1:20,(1:20)^1.5,(1:20)^2),group=rep(c(1,2,3),each=20))
value variable group
1 1 1.000000 1
2 2 2.000000 1
3 3 3.000000 1
#... etc.
#ser is shorthand for "series".
plot_scat <- function(data, x, y, ser) {
  ggplot(data, aes(x = x, y = y, color = factor(ser))) + geom_point()
}
plot_scat(cont, value, variable, group)
#This gives the error:
#Error in eval(expr, envir, enclos) : object 'x' not found
Now, I know that ggplot2 has a known bug where aes() will only look in the global environment and not in the local environment. Following advice from: Use of ggplot() within another function in R, I tried another route.
plot_scat <- function(data, x, y, ser) {
  #environment=environment() added
  ggplot(data, aes(x = x, y = y, color = factor(ser)), environment = environment()) + geom_point()
}
plot_scat(cont,value,variable,group)
#This gives the error:
#Error in eval(expr,envir,enclos) : object 'value' not found
#In addition: Warning message:
#In eval(expr,envir,enclos) : restarting interrupted promise evaluation
I don't know what that last line means. If I call:
ggplot(cont,aes(x=value,y=variable,color=group))+geom_point()
I get the graph you would expect. At the command line, aes() is looking for the variable names in ggplot(), but it is not doing this within the function call. So I tried to change my call:
plot_scat(cont,cont$value,cont$variable,cont$group)
This gets me what I want. So I add the next layer of complexity:
plot_scat <- function(data, x, y, ser) {
  #added facet_grid
  ggplot(data, aes(x = x, y = y, color = factor(ser)), environment = environment()) +
    geom_point() +
    facet_grid(. ~ ser)
}
plot_scat(cont,cont$value,cont$variable,cont$group)
#This gives the error:
#Error in layout_base(data, cols, drop = drop):
# At least one layer must contain all variables used for facetting
My thought on this is that ser is actually cont$group, which is fine for use in aes(), but when passed to facet_grid it is just a standalone vector with no information about value and variable. According to the help page, facet_grid does not take a "data=" argument, so I can't use facet_grid(data=data, .~ser) to get around this. I don't know how to proceed from here.
This is an extremely simple example, but the long term goal is to have a function I can give to non-R-literate people in my office and say "give it your data frame name, column names and the column you want to split on and it will make pretty plots for you". It will also get a lot more complex, with a very customized theme, which is irrelevant to the problems I'm having.

If you do not want to pass your variables (column names) as strings/quoted, then one approach that I tried (see also here) was to make use of match.call() and eval. It works with faceting (as in your case) as well:
library(ggplot2)
cont <- data.frame( value = c(1:20),
variable = c(1:20, (1:20) ^ 1.5, (1:20) ^ 2),
group = rep(c(1, 2, 3), each = 20))
plot_scat <- function(data, x, y, ser) {
arg <- match.call()
ggplot(data, aes(x = eval(arg$x),
y = eval(arg$y),
color = factor(eval(arg$ser)))) +
geom_point() +
facet_grid(. ~ eval(arg$ser))
}
# Call your custom function without quoting the variables
plot_scat(data = cont, x = value, y = variable, ser = group)
To get an idea what match.call() does, maybe try to run this:
plot_scat <- function(data, x, y, ser) {
str(as.list(match.call()))
}
plot_scat(cont, value, variable, group)
#> List of 5
#> $ : symbol plot_scat
#> $ data: symbol cont
#> $ x : symbol value
#> $ y : symbol variable
#> $ ser : symbol group
Created on 2019-01-10 by the reprex package (v0.2.1)
Another workaround, this time passing quoted column names to the custom plotting function, is to use get():
plot_scat <- function(data, x, y, ser) {
ggplot(data, aes(x = get(x),
y = get(y),
color = factor(get(ser)))) +
geom_point() +
facet_grid(. ~ get(ser))
}
plot_scat(data = cont, x = "value", y = "variable", ser = "group")

You could use aes_string() in place of aes() and pass the column names as strings.
plot_scat <- function(data, x, y, ser) {
  ser_col <- paste("factor(", ser, ")")
  ggplot(data, aes_string(x = x, y = y, col = ser_col)) +
    geom_point() +
    facet_grid(as.formula(sprintf('~%s', ser)))
}
plot_scat(cont, "value", "variable", "group")
facet_grid requires a formula, so you can use as.formula to parse the string into a formula.
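To illustrate just the string-to-formula conversion (an added example, using the "group" column from above):
f <- as.formula(sprintf('~%s', "group"))
f
#> ~group
class(f)
#> [1] "formula"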

Related

Error on final object when generating ggplot objects in for loop with dplyr select()

I want to make many plots using multiple pairs of variables in a dataframe, all with the same x. I store the plots in a named list. For simplicity, below is an example with only 1 variable in each plot.
Key to this function is a select() call that is clearly not necessary here but is with my actual data.
The body of the function works fine on each variable, but when I loop through a list of variables, the last one in the list always produces
Error in get(ll): object 'd' not found.
(or whatever the last variable, if not 'd'). Replacing data <- df %>% select(x,ll) with data <- df avoids the error.
## make data
df2 <- data.frame(x = 1:10,
a = 1:10,
b = 2:11,
c = 101:110,
d = 10*(1:10))
## make function
testfun <- function(df = df2, vars = letters[1:4]){
## initialize list to store plots
plotlist <- list()
for (ll in vars){
## subset data
data <- df %>% select(x, ll) ## comment out select() to get working function
# print(data) ## uncomment to check that dataframe subset works correctly
## plot variable vs. x
p <- ggplot(data,
aes(x = x, y = get(ll))) +
geom_point() +
ylab(ll)
## add plot to named list
plotlist[[ll]] <- p
# print(p) ## uncomment to see that each plot is being made
}
return(plotlist) ## unnecessary, being explicit for troubleshooting
}
## use function
pl <- testfun(df2)
## error ?
pl
I have a work-around that avoids select() by renaming variables in my actual dataframe, but I am curious why this does not work. Any ideas?
get() could work, but not with ll directly. Try y = get(!!ll) or y = {{ll}}.
ggplot (or maybe aes, it's hard to tell) waits to run this code until its plot object is referenced, as the error in the provided code demonstrates. By the time each ggplot evaluates get(ll), the for loop has already finished. So ll evaluates to the last value of the loop variable, "d", for all four ggplots. ll being "d" in the error makes it seem like it's the final ggplot object that fails, but it's actually the evaluation of the first one that triggers the error.
In the body of the loop we'd like a way to evaluate the ll variable and stick that resulting string ("a", "b", "c", or "d") into this code, the rest of which won't run until later. Changing y = get(ll) to y = get(!!ll) is one way to do this: !! performs "surgery" on the unevaluated expression (called a "blueprint for code" in Tidyverse docs) so that the expression passed into ggplot contains a literal string like "a" instead of the variable reference ll.
testfun <- function(df = df2, vars = letters[1:4]){
plotlist <- list()
for (ll in vars){
data <- df %>% select(x, ll)
p <- ggplot(data,
aes(x = x, y = get(!!ll))) +
geom_point() +
ylab(ll)
plotlist[[ll]] <- p
}
return(plotlist)
}
Read on for explanation and an alternate solution.
The loop problem: late binding
In a given function or in the global scope in R, there's just one variable of any given name. A for (x in xs) loop repeatedly rebinds that variable to a new value. That means that after a for loop has finished, that variable still exists and retains the last value it was assigned. Here's a way this can trip you up:
vars <- c("a", "b", "c", "d")
results <- list()
for (ll in vars){
message("in for loop, ll: ", ll)
func <- function () { ll }
results[[ll]] <- c(ll, func)
}
message("after for loop, ll: ", ll)
# after for loop, now ll is "d"
for (vec in results) {
message(vec[[1]], " ", vec[[2]]())
}
This outputs
in for loop, ll: a
in for loop, ll: b
in for loop, ll: c
in for loop, ll: d
after for loop, ll: d
a d
b d
c d
d d
Each of the four functions constructed here use the same outer scope variable ll which, by the time the functions are actually called after the for loop, is "d". The late binding part is that the value of the variable at function call time (late) is used when looking up its value, not the value of the variable when the function is defined (early).
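One way around that, added here as a sketch (it is not part of the original answer), is to give each iteration its own scope with local(), so every closure captures its own copy of the value:
results <- list()
for (ll in vars){
  results[[ll]] <- local({
    ll_local <- ll                      # copy the current value into this new scope
    c(ll_local, function () { ll_local })
  })
}
for (vec in results) {
  message(vec[[1]], " ", vec[[2]]())
}
# now prints: a a, b b, c c, d d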
The NSE problem
The OP isn't creating functions in a loop, though; they're calling ggplot. ggplot does something similar to creating a function: it takes some code as an argument that it doesn't evaluate until later. ggplot (or maybe aes) "captures" code from some of its arguments instead of running it. In the OP's case, get(ll) isn't evaluated until later.
When that code is finally evaluated, it runs in a new context with a "data mask" that allows the columns of a data frame to be referenced directly. This part is great, it's what we want; it's what makes get("a") work at all. But the fact that the evaluation happens later is a problem for the OP: ll evaluates to "d", so get(ll) behaves like get("d"), because the code runs after the for-loop iteration in which ll had the expected value.
Ignoring the data mask part, here's a function called run.later that, like ggplot, doesn't run one of its arguments. When we run that code later, we again find that ll evaluates to "d" for all four of the saved expressions.
vars <- c("a", "b", "c", "d")
unevaluated.exprs <- list();
run.later <- function(name, something) {
expr <- substitute(something)
unevaluated.exprs[[name]] <<- c(name, expr)
}
for (ll in vars){
run.later(ll, ll)
}
for (vec in unevaluated.exprs) {
message(c(vec[[1]], " ", eval(vec[[2]])))
}
prints
a d
b d
c d
d d
That's the ll part of the problem. The rule of thumb from languages like Python of "Don't define functions in a loop (if they reference loop variables)" could be generalized for R to "don't define functions or otherwise write code that won't be immediately evaluated in a loop (if that code references loop variables)."
Fixing the scope problem instead of metaprogramming
The !! solution provided at the top uses metaprogramming to evaluate the ll variable in the loop instead of evaluating it later.
Theoretically, one could instead dynamically create variables in each iteration of a loop, then carefully reference that dynamically created variable name with metaprogramming. But a more elegant way would be to use the same variable name but in different scopes. This is what Nithin's answer does with a function: every function creates a new scope and tada, you can use the same variable name in each. Here's another version of that, closer to OP's code:
testfun <- function(df = df2, vars = letters[1:4]){
plotlist <- list()
plot.fn <- function(var) {
data <- df %>% select(x, var)
p <- ggplot(data,
aes(x = x, y = get(var))) +
geom_point() +
ylab(var)
plotlist[[ll]] <<- p
}
for (ll in vars){
plot.fn(ll)
}
return(plotlist)
}
pl <- testfun(df2)
pl
There are 4 distinct variables called var in this code, and each iteration of the loop references a different one.
Prettier metaprogramming
I think (haven't tested) that get(!!ll) is equivalent to {{ll}} here — get() looks up a string as a variable, but that's also what sticking the symbol of the string that ll evaluates to into the expression does. Double curlies seem more common and can roughly be understood as "evaluate the result of this expression as a variable in the other context," or as "template this string into the expression."
Write a custom function like this:
plot_fn <- function(df, y){
  df %>% ggplot(aes(x = x,
                    y = get(y))) +
    geom_point() +
    ylab(y)
}
Iterate over the plots with purrr::map:
map(letters[1:4], ~ plot_fn(df = df2, y = .x))
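If you also want the plots back in a named list, the way the original testfun returned them, one option (an added note, using purrr) is to name the input vector first, since map() keeps names:
library(purrr)
plots <- map(set_names(letters[1:4]), ~ plot_fn(df = df2, y = .x))
plots[["a"]]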
The issue is that we cannot use get to access dplyr/tidyverse data in a "programming" paradigm. Instead, we should use non-standard evaluation to access the data. I offer a simplified function below (originally I thought it was a function-masking issue, as I had quickly skimmed the question).
testfun <- function(df = df2, vars = letters[1:4]){
lapply(vars, function(y) {
ggplot(df,
aes(x = x, y = .data[[y]] )) +
geom_point() +
ylab(y)
})
}
Calling
plots <- testfun(df2)
plots[[1]]
EDIT
Since OP would like to know what the issue is, I have used a traditional loop as requested
testfun2 <- function(df = df2, vars = letters[1:4]){
## initialize list to store plots
plotlist <- list()
for (ll in vars){
## subset data
d_t <- df %>% select(x, ll) ## comment out select() to get working function
# print(data) ## uncomment to check that dataframe subset works correctly
## plot variable vs. x
p <- ggplot(d_t,
aes(x = x, y = .data[[ll]])) +
geom_point() +
ylab(ll)
## add plot to named list
plotlist[[ll]] <- p
## uncomment to see that each plot is being made
}
plotlist
}
pl <- testfun2(df2)
pl[[1]]
The reason get does not work is that we need to use non-standard evaluation as the docs state. Related questions on using get may be useful.

How can I pass character strings as independent parameters after a `+`?

Before you mark as dup, I know about Use character string as function argument, but my use case is slightly different. I don't need to pass a parameter INSIDE the function; I would like to pass a dynamic number of parameters after a + (think ggplot2).
(Note: Please don't format and remove the extra-looking ####, I have left them in so people can copy paste the code into R for simplicity).
This has been my process:
#### So let's reproduce this example:
library(condformat)
condformat(iris[c(1:5,70:75, 120:125),]) +
rule_fill_discrete(Species) +
rule_fill_discrete(Petal.Width)
#### I would like to be able to pass the two rule_fill_discrete() functions dynamically (in my real use-case I have a variable number of possible inputs and it's not possible to hardcode these in).
#### First, create a function to generalize:
PlotSeries <- function(x){
  b <- NULL
  for (i in 1:length(x)){
    a <- paste('rule_fill_discrete(', x[i], ')', sep = "")
    b <- paste(paste(b, a, sep = "+"))
  }
  b <- gsub("^\\+", "", b)
  eval(parse(text = b))
}
#### Which works with one argument
condformat(iris[c(1:5,70:75, 120:125),]) +
PlotSeries("Species")
#### But not if we pass two arguments:
condformat(iris[c(1:5,70:75, 120:125),]) +
PlotSeries(c("Species","Petal.Width"))
Error in rule_fill_discrete(Species) + rule_fill_discrete(Petal.Width) :
non-numeric argument to binary operator
#### It will work if we call each individually
condformat(iris[c(1:5,70:75, 120:125),]) +
PlotSeries("Species") +
PlotSeries("Petal.Width")
#### Which gives us an indication as to what the problem is... the fact that it doesn't like when the rule_fill_discrete statements are passed in as one statement. Let's test this:
condformat(iris[c(1:5,70:75, 120:125),]) +
eval(rule_fill_discrete(Species) +
rule_fill_discrete(Petal.Width) )
Error in rule_fill_discrete(Species) + rule_fill_discrete(Petal.Width) :
non-numeric argument to binary operator
#### Fails. But:
condformat(iris[c(1:5,70:75, 120:125),]) +
eval(rule_fill_discrete(Species)) +
eval(rule_fill_discrete(Petal.Width) )
#### This works. But we need to be able to pass in a GROUP of statements (that's kinda the whole point). So let's try to get the eval statements in:
Nasty <- "eval(rule_fill_discrete(Species)) eval(rule_fill_discrete(Petal.Width))"
condformat(iris[c(1:5,70:75, 120:125),]) + Nasty #### FAIL
Error in +.default(condformat(iris[c(1:5, 70:75, 120:125), ]), Nasty) :
non-numeric argument to binary operator
condformat(iris[c(1:5,70:75, 120:125),]) + eval(Nasty) #### FAIL
Error in +.default(condformat(iris[c(1:5, 70:75, 120:125), ]), eval(Nasty)) :
non-numeric argument to binary operator
condformat(iris[c(1:5,70:75, 120:125),]) + parse(text=Nasty) #### FAIL
Error in +.default(condformat(iris[c(1:5, 70:75, 120:125), ]), parse(text = Nasty)) :
non-numeric argument to binary operator
condformat(iris[c(1:5,70:75, 120:125),]) + eval(parse(text=Nasty)) #### FAIL
Error in eval(rule_fill_discrete(Species)) + eval(rule_fill_discrete(Petal.Width)) :
non-numeric argument to binary operator
So how can we do it?
Thanks to this stackoverflow question and thanks to the bug report from @amit-kohli, I was made aware that there was a bug in the condformat package.
Update: Answer updated to reflect the new condformat API introduced in condformat 0.7.
Here I show how to (using condformat 0.7.0). Note that the syntax I use in the standard evaluation function is derived from the rlang package.
Install condformat:
install.packages("condformat)"
A simple example, asked in the question:
# Reproduce the example
library(condformat)
condformat(iris[c(1:5,70:75, 120:125),]) %>%
rule_fill_discrete(Species) %>%
rule_fill_discrete(Petal.Width)
# With variables:
col1 <- rlang::quo(Species)
col2 <- rlang::quo(Petal.Width)
condformat(iris[c(1:5,70:75, 120:125),]) %>%
rule_fill_discrete(!! col1) %>%
rule_fill_discrete(!! col2)
# Or even with character strings to give the column names:
col1 <- "Species"
col2 <- "Petal.Width"
condformat(iris[c(1:5,70:75, 120:125),]) %>%
rule_fill_discrete(!! col1) %>%
rule_fill_discrete(!! col2)
# Do it programmatically (In a function)
#' @importFrom magrittr %>%
some_color <- function(data, col1, col2) {
condformat::condformat(data) %>%
condformat::rule_fill_discrete(!! col1) %>%
condformat::rule_fill_discrete(!! col2)
}
some_color(iris[c(1:5,70:75, 120:125),], "Species", "Petal.Width")
A more general example, using an expression:
# General example, using an expression:
condformat(iris[c(1:5,70:75, 120:125),]) %>%
rule_fill_gradient(Species, expression = Sepal.Width - Sepal.Length)
# General example, using a column given as character and an
# expression given as character as well:
expr <- rlang::parse_expr("Sepal.Width - Sepal.Length")
condformat(iris[c(1:5,70:75, 120:125),]) %>%
rule_fill_gradient("Species", expression = !! expr)
# General example, in a function, everything given as a character:
two_column_difference <- function(data, col_to_colour, col1, col2) {
expr1 <- rlang::parse_expr(col1)
expr2 <- rlang::parse_expr(col2)
condformat::condformat(data) %>%
condformat::rule_fill_gradient(
!! col_to_colour,
expression = (!!expr1) - (!!expr2))
}
two_column_difference(iris[c(1:5,70:75, 120:125),],
col_to_colour = "Species",
col1 = "Sepal.Width",
col2 = "Sepal.Length")
Custom discretized scales for continuous values
Custom discrete color values can be specified with a function that preprocesses a continuous column into a discrete scale:
discretize <- function(column) {
sapply(column,
FUN = function(value) {
if (value < 4.7) {
return("low")
} else if (value < 5.0) {
return("mid")
} else {
return("high")
}
})
}
And we can specify the colors for each of the levels of the scale using colours =:
condformat(head(iris)) %>%
rule_fill_discrete(
"Sepal.Length",
expression = discretize(Sepal.Length),
colours = c("low" = "red", "mid" = "yellow", "high" = "green"))
If we want, the discretize function can return colours:
discretize_colours <- function(column) {
sapply(column,
FUN = function(value) {
if (value < 4.7) {
return("red")
} else if (value < 5.0) {
return("yellow")
} else {
return("green")
}
})
}
The code to use it:
condformat(head(iris)) %>%
rule_fill_discrete(
"Sepal.Length",
expression = discretize_colours(Sepal.Length),
colours = identity)
Note that as expression returns the colours we use colours = identity. identity is just function(x) x.
Finally, using some rlang tidy evaluation we can create a function:
colour_based_function <- function(data, col1) {
col <- rlang::parse_expr(col1)
condformat::condformat(data) %>%
condformat::rule_fill_discrete(
columns = !! col1,
expression = discretize_colours(!! col),
colours = identity)
}
colour_based_function(head(iris), "Sepal.Length")
NOTE: This answer provides a workaround for a bug in an old version of condformat. The bug has since been fixed; see @zeehio's answer for the current version.
I think you have two mostly separate questions that are all mixed together in your post. I will attempt to restate and answer them individually, and then put things together, which doesn't work all the way at this point but gets close.
First, let's save some typing by defining a couple variables:
ir = iris[c(1:5,70:75, 120:125), ]
cf = condformat(ir)
Q1: How do I use + on a vector or list of inputs?
This is the easy question. The base answer is Reduce. The following are all equivalent:
10 + 1 + 2 + 5
"+"("+"("+"(10, 1), 2), 5)
Reduce("+", c(1, 2, 5), init = 10))
More pertinent to your case, we can do this to replicate your desired output:
fills = list(rule_fill_discrete(Species), rule_fill_discrete(Petal.Width))
res = Reduce(f = "+", x = fills, init = cf)
res
Q2: How do I use string inputs with rule_fill_discrete?
This is my first time using condformat, but it looks to be written in the lazyeval paradigm, with rule_fill_discrete_ as a standard-evaluating counterpart to the non-standard-evaluating rule_fill_discrete. This example is even given in ?rule_fill_discrete, but it doesn't work as expected:
cf + rule_fill_discrete_(columns = "Species")
# bad: Species column colored entirely red, not colored by species
# possibly a bug? At the very least misleading documentation...
cf + rule_fill_discrete_(columns = "Species", expression = expression(Species))
# bad: works as expected, but still uses an unquoted Species
# other failed attempts
cf + rule_fill_discrete_(columns = "Species", expression = expression("Species"))
cf + rule_fill_discrete_(columns = "Species", expression = "Species")
# bad: still a single-color column
There is also an env environment argument in the SE function, but I had no luck with that either. Maybe someone with more lazyeval/expression experience can point out something I'm overlooking or doing wrong.
Work-around: What we can do is pass the column directly. This works because we're not doing any fancy functions of the column, just using its values directly to determine the coloring:
cf + rule_fill_discrete_(columns = c("Species"), expression = ir[["Species"]])
# hacky, but it works
Putting it together
Using the NSE version with Reduce is easy:
fills = list(rule_fill_discrete(Species), rule_fill_discrete(Petal.Width))
res = Reduce(f = "+", x = fills, init = cf)
res
# works!
Using SE with input strings, we can use the hacky workaround.
input = c("Species", "Petal.Width")
fills_ = lapply(input, function(x) rule_fill_discrete_(x, expression = ir[[x]]))
res_ = Reduce(f = "+", x = fills_, init = cf)
res_
# works!
And this, of course, you could wrap up into a custom function that takes a data frame and a string vector of column names as input.
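For example, such a wrapper might look like the sketch below (an illustration using the same old condformat API and the ir[[x]]-style workaround; cond_fill is just a made-up name):
cond_fill <- function(data, columns) {
  # one SE rule per requested column, coloured by that column's own values
  fills_ <- lapply(columns, function(x) rule_fill_discrete_(x, expression = data[[x]]))
  Reduce(f = "+", x = fills_, init = condformat(data))
}
cond_fill(ir, c("Species", "Petal.Width"))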
@Gregor's answer was perfect. A bit hacky, but it works excellently.
My use case needed a bit more complexity, so I will post it here in case it's useful to somebody else. I needed to be able to color multiple columns based on the values of one column. condformat allows us to do this already, but again we run into the parametrization problem. Here's my solution to that, based on the response by Gregor:
CondFormatForInput <- function(Table, VectorToColor, VectorFromColor) {
  cf <- condformat(Table)
  input <- data.frame(Val = VectorToColor,
                      Comp = VectorFromColor)
  # purrr::map2 builds one rule per column in Val, coloured by the matching column in Comp
  fills2_ <- map2(input$Val, .y = input$Comp,
                  .f = function(x, y) rule_fill_discrete_(x, expression = Table[[y]]))
  res_ <- Reduce(f = "+", x = fills2_, init = cf)
  res_
}
CondFormatForInput(iris,
  c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"),
  c("Sepal.Width", "Sepal.Width", "Petal.Width", "Petal.Width"))

Functions inside aes

Question: why can't I call sapply inside aes()?
Goal of following figure: Create histogram showing proportion that died/lived so that the proportion for each combination of group/type sums to 1 (example inspired by previous post).
I know you could make the figure by summarising outside of ggplot but the question is really about why the function isn't working inside of aes.
## Data
set.seed(999)
dat <- data.frame(group=factor(rep(1:2, 25)),
type=factor(sample(1:2, 50, rep=T)),
died=factor(sample(0:1, 50, rep=T)))
## Setup the figure
p <- ggplot(dat, aes(x=died, group=interaction(group, type), fill=group, alpha=type)) +
theme_bw() +
scale_alpha_discrete(range=c(0.5, 1)) +
ylab("Proportion")
## Proportions, all groups/types together sum to 1 (not wanted)
p + geom_histogram(aes(y=..count../sum(..count..)), position=position_dodge())
## Look at groups
stuff <- ggplot_build(p)
stuff$data[[1]]
## The long way works: proportions by group/type
p + geom_histogram(
aes(y=c(..count..[..group..==1] / sum(..count..[..group..==1]),
..count..[..group..==2] / sum(..count..[..group..==2]),
..count..[..group..==3] / sum(..count..[..group..==3]),
..count..[..group..==4] / sum(..count..[..group..==4]))),
position='dodge'
)
## Why can't I call sapply there?
p + geom_histogram(
aes(y=sapply(unique(..group..), function(g)
..count..[..group..==g] / sum(..count..[..group..==g]))),
position='dodge'
)
Error in get(as.character(FUN), mode = "function", envir = envir) :
object 'expr' of mode 'function' was not found
So, the issue arises because of a recursive call to ggplot2:::strip_dots for any aesthetics that include 'calculated aesthetics'. There is some discussion around the calculated aesthetics in this SO question and answer. The relevant code in layer.r is here:
new <- strip_dots(aesthetics[is_calculated_aes(aesthetics)])
i.e. strip_dots is called only if there are calculated aesthetics, defined using the regex "\\.\\.([a-zA-z._]+)\\.\\.".
strip_dots takes a recursive approach, working down through the nested calls and stripping out the dots. The code is like this:
function (expr)
{
if (is.atomic(expr)) {
expr
}
else if (is.name(expr)) {
as.name(gsub(match_calculated_aes, "\\1", as.character(expr)))
}
else if (is.call(expr)) {
expr[-1] <- lapply(expr[-1], strip_dots)
expr
}
else if (is.pairlist(expr)) {
as.pairlist(lapply(expr, expr))
}
else if (is.list(expr)) {
lapply(expr, strip_dots)
}
else {
stop("Unknown input:", class(expr)[1])
}
}
If we supply an anonymous function to this code as follows:
anon <- as.call(quote(function(g) mean(g)))
ggplot2:::strip_dots(anon)
we reproduce the error:
#Error in get(as.character(FUN), mode = "function", envir = envir) :
# object 'expr' of mode 'function' was not found
Working through this, we can see that anon is a call. For calls, strip_dots will use lapply to call strip_dots on the second and third elements of the call. For an anonymous function like this, the second element is the formals of the function. If we look at the formals of anon using dput(formals(eval(anon))) or dput(anon[[2]]) we see this:
#pairlist(g = )
For pairlists, strip_dots tries to lapply it to itself. I'm not sure why this code is there, but certainly in this circumstance it leads to the error:
expr <- anon[[2]]
lapply(expr, expr)
# Error in get(as.character(FUN), mode = "function", envir = envir) :
# object 'expr' of mode 'function' was not found
TL;DR At this stage, ggplot2 doesn't support the use of anonymous functions within aes where a calculated aesthetic (such as ..count..) is used.
Anyway, the desired end result can be achieved using dplyr; in general I think it makes for more readable code to separate out the data summarisation from the plotting:
library(dplyr)
newDat <- dat %>%
group_by(died, type, group) %>%
summarise(count = n()) %>%
group_by(type, group) %>%
mutate(Proportion = count / sum(count))
p <- ggplot(newDat, aes(x = died, y = Proportion, group = interaction(group, type), fill=group, alpha=type)) +
theme_bw() +
scale_alpha_discrete(range=c(0.5, 1)) +
geom_bar(stat = "identity", position = "dodge")
ggplot2 fix
I've forked ggplot2 and have made two changes to aes_calculated.r which fix the problem. The first was to correct the handling of pairlists to lapply strip_dots instead of expr, which I think must have been the intended behaviour. The second was that for formals with no default value (like in the examples provided here), as.character(as.name(expr)) throws an error because expr is an empty name, and while this is a valid construct, it's not possible to create one from an empty string.
Forked version of ggplot2 at https://github.com/NikNakk/ggplot2 and pull request just made.
Finally, after all that, the sapply example given doesn't work because it returns a 2 row by 4 column matrix rather than an 8 length vector. The corrected version is like this:
p + geom_histogram(
aes(y=unlist(lapply(unique(..group..), function(g)
..count..[..group..==g] / sum(..count..[..group..==g])))),
position='dodge'
)
This gives the same output as the dplyr solution above.
One other thing to note is that this lapply code assumes that the data at that stage is sorted by group. I think this is always the case, but if for whatever reason it weren't you would end up with the y data out of order. An alternative which preserves the order of the rows in the calculated data would be:
p + geom_histogram(
aes(y={grp_total <- tapply(..count.., ..group.., sum);
..count.. / grp_total[as.character(..group..)]
}),
position='dodge'
)
It's also worth being aware that these expressions are evaluated in baseenv(), the namespace of the base package. This means that any functions from other packages, even standard ones like stats and utils, need to be used with the :: operator (e.g. stats::rnorm).
After playing around a little, the problem appears to be using anonymous functions with ..group.. or ..count.. inside aes:
xy <- data.frame(x=1:10,y=1:10) #data
ggplot(xy, aes(x = x, y = sapply(y, mean))) + geom_line() #sapply is fine
ggplot(xy, aes(x = x, group = y)) +
geom_bar(aes(y = sapply(..group.., mean))) #sapply with ..group.. is fine
ggplot(xy, aes(x = x, group = y)) +
geom_bar(aes(y = sapply(..group.., function(g) {mean(g)})))
#broken, with same error
ggplot(xy, aes(x = x, group = y)) +
geom_bar(aes(y = sapply(y, function(g) {mean(g)})), stat = "identity")
#sapply with anonymous functions works fine!
It seems like a really weird bug, unless I'm missing something stupid.

Error in eval(expr, envir, enclos) : object * not found when wrapping qplot()

I don't understand why my oh-so-minimal wrapper function produces the subject error. The below should reproduce it. My goal is to do a bunch of plots from data in a single dataframe, each of which lives in a new window.
library(ggplot2)
library(datasets)
data(ChickWeight)
str(ChickWeight)
# This works fine:
qplot(x = weight, y = Time, data = ChickWeight, color = Diet)
myfun <- function(pred = weight, resp = Time, dat = ChickWeight) {
windows()
qplot(x = pred, y = resp, data = dat, color = Diet)
}
# But this returns 'Error in eval(expr, envir, enclos) : object 'weight' not found':
myfun()
# As does this
myfun(weight, Time)
Why can't R find 'weight' in my function?
I'm running R version 3.0.1, 64-bit, on Windows 8.1 64-bit.
Thanks!
-Roy
I would suggest that in the long run something like this would be a good idea:
myfun <- function(pred = "weight", resp = "Time", dat = ChickWeight) {
dev.new() ## more general than windows()
ggplot(dat,aes_string(x=pred,y=resp,color="Diet"))+geom_point()
}
myfun()
qplot does a lot of fancy evaluation which will be fragile (easy to break, hard to understand) in a context where you are passing objects in and out of functions. aes_string specifies that ggplot should base its lookup on the value of a string, rather than its usual approach of evaluating a language object (i.e. using "weight" rather than weight).
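For instance, the same helper can then be pointed at other columns simply by passing their names as strings (an added example call):
myfun(pred = "Time", resp = "weight")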
I use the quartz device instead of windows() but otherwise this mostly succeeds:
myfun <- function(pred = 'weight', resp = 'Time', dat = ChickWeight) {
  quartz()
  qplot(x = dat[[pred]], y = dat[[resp]], color = dat[["Diet"]])
}

I do not understand error "object not found" inside the function

I have roughly this function:
plot_pca_models <- function(models, id) {
library(lattice)
splom(models, groups=id)
}
and I'm calling it like this:
plot_pca_models(data.pca, log$id)
which results in this error:
Error in eval(expr, envir, enclos) : object 'id' not found
when I call it without the wrapping function:
splom(data.pca, groups=log$id)
it raises this error:
Error in log$id : object of type 'special' is not subsettable
but when I do this:
id <- log$id
splom(models, groups=id)
it behaves as expected.
Please can anybody explain why it behaves like this and how to correct it? Thanks.
btw:
I'm aware of similar questions here, eg:
Help understand the error in a function I defined in R
Object not found error with ddply inside a function
Object disappears from namespace in function
but none of them helped me.
edit:
As requested, there is full "plot_pca_models" function:
plot_pca_models <- function(data, id, sel=c(1:4), comp=1) {
# 'data' ... princomp objects
# 'id' ... list of samples id (classes)
# 'sel' ... list of models to compare
# 'comp' ... which pca component to compare
library(lattice)
models <- c()
models.size <- 1:length(data)
for(model in models.size) {
models <- c(models, list(data[[model]]$scores[,comp]))
}
names(models) <- 1:length(data)
models <- do.call(cbind, models[sel])
splom(models, groups=id)
}
edit2:
I've managed to make the problem reproducible.
require(lattice)
my.data <- data.frame(pca1 = rnorm(100), pca2 = rnorm(100), pca3 = rnorm(100))
my.id <- data.frame(id = sample(letters[1:4], 100, replace = TRUE))
plot_pca_models2 <- function(x, ajdi) {
splom(x, group = ajdi)
}
plot_pca_models2(x = my.data, ajdi = my.id$id)
which produce the same error like above.
The problem is that splom evaluates its groups argument in a nonstandard way. A quick fix is to rewrite your function so that it constructs the call with the appropriate syntax:
f <- function(data, id)
eval(substitute(splom(data, groups=.id), list(.id=id)))
# test it
ir <- iris[-5]
sp <- iris[, 5]
f(ir, sp)
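Applied to the reproducible example above, the same substitute/eval trick would look like this (an added adaptation of f, not part of the original answer):
plot_pca_models2 <- function(x, ajdi)
  eval(substitute(splom(x, groups = .id), list(.id = ajdi)))
plot_pca_models2(x = my.data, ajdi = my.id$id)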
log is a function in base R. Good practice is to not name objects after functions; it can create confusion. Type log$test into a clean R session and you'll see what's happening:
object of type 'special' is not subsettable
Here's a modification of Hong Ooi's answer. First I would recommend including id in the main data frame, i.e.
my.data <- data.frame(pca1 = rnorm(100), pca2 = rnorm(100), pca3 = rnorm(100), id = sample(letters[1:4], 100, replace = TRUE))
.. and then
plot_pca_models2 <- function(x, ajdi) {
Call <- bquote(splom(x, group = x[[.(ajdi)]]))
eval(Call)
}
plot_pca_models2(x = my.data, ajdi = "id")
The cause of the confusion is the following line in lattice:::splom.formula:
groups <- eval(substitute(groups), data, environment(formula))
... whose only point is to be able to specify groups without quotation marks, that is,
# instead of
splom(DATA, groups="ID")
# you can now be much shorter, thanks to eval and substitute:
splom(DATA, groups=ID)
But of course, this makes splom (and other functions that use "nonstandard evaluation", e.g. substitute) harder to use from within other functions, and goes against the philosophy that is "mostly" followed in the rest of R.
