I have a data frame md:
md <- data.frame(x = c(3,5,4,5,3,5), y = c(5,5,5,4,4,1), z = c(1,3,4,3,5,5),
device1 = c("c","a","a","b","c","c"), device2 = c("B","A","A","A","B","B"))
md[2,3] <- NA
md[4,1] <- NA
md
I want to calculate means by device1 / device2 combinations using dplyr:
library(dplyr)
md %>% group_by(device1, device2) %>% summarise_each(funs(mean))
However, I am getting some NAs in the result. I want the NAs to be ignored (na.rm = TRUE). I tried, but the function doesn't accept this argument.
Both these lines result in error:
md %>% group_by(device1, device2) %>% summarise_each(funs(mean), na.rm = TRUE)
md %>% group_by(device1, device2) %>% summarise_each(funs(mean, na.rm = TRUE))
The other answers showed you the syntax for passing mean(., na.rm = TRUE) into summarise/summarise_each.
Personally, I deal with this so often, and find it so annoying, that I define the following set of NA-aware basic functions (e.g. in my .Rprofile), so that you can apply them in dplyr with summarize(mean_) and no pesky argument-passing; it also keeps the source code cleaner and more readable, which is another strong plus:
mean_   <- function(...) mean(..., na.rm = TRUE)
median_ <- function(...) median(..., na.rm = TRUE)
sum_    <- function(...) sum(..., na.rm = TRUE)
# population sd; the denominator counts only non-NA values so NAs don't bias it
sd_     <- function(v) sqrt(sum_((v - mean_(v))^2) / sum(!is.na(v)))
cor_    <- function(...) cor(..., use = 'pairwise.complete.obs')
max_    <- function(...) max(..., na.rm = TRUE)
min_    <- function(...) min(..., na.rm = TRUE)
pmax_   <- function(...) pmax(..., na.rm = TRUE)
pmin_   <- function(...) pmin(..., na.rm = TRUE)
table_  <- function(...) table(..., useNA = 'ifany')
mode_   <- function(...) {
  tab <- table(...)
  names(tab[tab == max(tab)])  # the '==' implicitly excludes NA values
}
clamp_  <- function(..., minval = 0, maxval = 70) pmax(minval, pmin(maxval, ...))
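With these defined, the grouped summary from the question becomes a one-liner; a minimal sketch using the md data frame above (summarise_each() was the current dplyr verb at the time):

library(dplyr)
md %>% group_by(device1, device2) %>% summarise_each(funs(mean_))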
Really, you want to be able to flick one global switch once and for all, like na.action/na.pass/na.omit/na.fail, to tell functions what their default NA behavior should be, instead of them throwing errors or behaving inconsistently across packages, as they currently do.
There used to be a CRAN package called Defaults for setting per-function defaults, but it has not been maintained since 2014 (pre-R 3.x). For more about it, see Setting Function Defaults R on a Project Specific Basis.
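If you only need this for a handful of functions, a minimal base-R sketch (a session-level hack, e.g. for your .Rprofile, not a package feature) is to shadow a function with a copy whose default is changed via formals():

# assigning to formals() creates a modified copy of mean.default in the
# global environment, which S3 dispatch finds before the base version
formals(mean.default)$na.rm <- TRUE
mean(c(1, NA, 3))  # 2 instead of NA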
try:
library(dplyr)
md %>% group_by(device1, device2) %>%
summarise_each(funs(mean(., na.rm = TRUE)))
Simple as that:
funs(mean(., na.rm = TRUE))
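Note that summarise_each() and funs() have since been deprecated; in current dplyr (1.0+) the equivalent would use across(), a sketch:

library(dplyr)
md %>%
  group_by(device1, device2) %>%
  summarise(across(everything(), ~ mean(.x, na.rm = TRUE)))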
I have the following code, in which I have three lists and I am sort of trying to calculate the MSE.
seg_mean <- list(c(3.5,2.5),c(6.5,5.5),c(9.2,8.8),c(5.3,4.7))
mean_tot <- list(c(3,3),c(6,6),c(9,9),c(5,5))
seg_len <- list(20,18,17,15)
MSE <- mapply('-', seg_mean, mean_tot, SIMPLIFY = FALSE) #element wise difference between seg_mean and mean_tot
MSE <- lapply(MSE, function(x) x^2) #element wise squaring
MSE <- mapply('/', MSE, seg_len, SIMPLIFY = FALSE) #dividing each list element by seg_len
This works. My question is: can I make these lines of code simpler by using %>% from the dplyr package in R?
I am not sure how it will work but I am hoping it would look like
MSE <- mapply('-', seg_mean, mean_tot, SIMPLIFY = FALSE) %>% lapply(function(x) x^2) %>% mapply('/', seg_len, SIMPLIFY = FALSE)
With |> (base R's native pipe), you can do:
mapply('-', seg_mean, mean_tot, SIMPLIFY = FALSE) |>
lapply(function(x) x^2) |>
mapply(FUN = '/', seg_len, SIMPLIFY = FALSE)
It also works with %>%. The difference from your attempt is in the second mapply(), where the FUN argument needs to be named (FUN = '/'); otherwise the piped value is placed as the first argument and the call gets interpreted as mapply(., '/', seg_len, ...) instead of mapply('/', ., seg_len, ...).
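For completeness, the %>% version would then look like this (same logic, only the pipe differs):

library(magrittr)  # %>% is also re-exported by dplyr
mapply('-', seg_mean, mean_tot, SIMPLIFY = FALSE) %>%
  lapply(function(x) x^2) %>%
  mapply(FUN = '/', seg_len, SIMPLIFY = FALSE)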
I'm trying to parse the cases (rows) of a data.frame with dplyr, but to no avail. I created two functions for this:
f1 <- function(x) {
c(s = sum(x),
m = mean(x),
v = var(x))
}
f2 <- function(x) {
apply(x, 1, f1)
}
My data.frame (data_1):
for (i in 1:6) {
assign(paste('var', i, sep = '_'),
runif(30, 20, 100))
}
data_1 <- do.call(
cbind.data.frame,
mget(ls(pattern = '*v'))
)
Using dplyr functions:
library(dplyr)
data_1 %>%
mutate_at(.vars = vars (starts_with('v')),
.funs = funs(.= f2))
data_1 %>%
mutate_if(is.numeric, .funs = funs(.= f2))
Error in mutate_impl(.data, dots) : Evaluation error: dim(X) must have a positive length.
Since the analysis is done in the rows, and I have three functions (sum, mean, and variance), the expected return is three columns.
In fact, although not deprecated, rowwise() does not play well with other grouping and summary functions, so is best avoided in dplyr. A useful alternative can be to group by row number. Here is a solution to the above using this approach.
colNames <- syms(paste0("var_", 1:6))  # syms() is re-exported by dplyr from rlang

data_1 %>%
  group_by(row = row_number()) %>%
  summarize(dataMean = mean(c(!!!colNames)),  # wrap in c(): mean() only averages its first argument
            dataSum = sum(c(!!!colNames)))
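In dplyr 1.0.0 and later, rowwise() was reworked and pairs with c_across(), so the three per-row statistics the question asks for (sum, mean, variance) can be computed directly; a sketch assuming current dplyr:

library(dplyr)
data_1 %>%
  rowwise() %>%
  mutate(s = sum(c_across(starts_with('var'))),
         m = mean(c_across(starts_with('var'))),
         v = var(c_across(starts_with('var')))) %>%
  ungroup()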
Trying to get my head around Non-Standard Evaluation as used by dplyr but without success. I'd like a short function that returns summary statistics (N, mean, sd, median, IQR, min, max) for a specified set of variables.
Simplified version of my function...
my_summarise <- function(df = temp,
to.sum = 'eg1',
...){
## Summarise
results <- summarise_(df,
n = ~n(),
mean = mean(~to.sum, na.rm = TRUE))
return(results)
}
And running it with some dummy data...
set.seed(43290)
temp <- cbind(rnorm(n = 100, mean = 2, sd = 4),
rnorm(n = 100, mean = 3, sd = 6)) %>% as.data.frame()
names(temp) <- c('eg1', 'eg2')
mean(temp$eg1)
[1] 1.881721
mean(temp$eg2)
[1] 3.575819
my_summarise(df = temp, to.sum = 'eg1')
n mean
1 100 NA
N is calculated, but the mean is not; I can't figure out why.
Ultimately I'd like my function to be more general, along the lines of...
my_summarise <- function(df = temp,
group.by = 'group',
to.sum = c('eg1', 'eg2'),
...){
results <- list()
## Select columns
df <- dplyr::select_(df, .dots = c(group.by, to.sum))
## Summarise overall
results$all <- summarise_each(df,
funs(n = ~n(),
mean = mean(~to.sum, na.rm = TRUE)))
## Summarise by specified group
results$by.group <- group_by_(df, ~to.group) %>%
summarise_each(df,
funs(n = ~n(),
mean = mean(~to.sum, na.rm = TRUE)))
return(results)
}
...but before I move onto this more complex version (for which I was using this example as guidance), I need to get the evaluation working in the simple version first, as that's the stumbling block; the call to dplyr::select() works ok.
Appreciate any advice as to where I'm going wrong.
Thanks in advance
The basic idea is that you have to actually build the appropriate call yourself, most easily done with the lazyeval package.
In this case you want to programmatically create a call that looks like ~mean(eg1, na.rm = TRUE). This is how:
my_summarise <- function(df = temp,
to.sum = 'eg1',
...){
## Summarise
results <- summarise_(df,
n = ~n(),
mean = lazyeval::interp(~mean(x, na.rm = TRUE),
x = as.name(to.sum)))
return(results)
}
Here is what I do when I struggle to get things working:
Remember that, just like the ~n() you already have, the call will have to start with a ~.
Write the correct call with the actual variable and see if it works (~mean(eg1, na.rm = TRUE)).
Use lazyeval::interp to recreate that call, and check this by running only the interp to visually see what it is doing.
In this case I would probably often write interp(~mean(x, na.rm = TRUE), x = to.sum). But running that will give us ~mean("eg1", na.rm = TRUE) which is treating eg1 as a character instead of a variable name. So we use as.name, as is taught to us in vignette("nse").
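Since this was written, lazyeval has been superseded by tidy evaluation built into dplyr itself; with dplyr 0.7+ the same function can be sketched without the underscore verbs, converting the string to a symbol with rlang::sym() and unquoting it with !!:

my_summarise <- function(df = temp,
                         to.sum = 'eg1',
                         ...){
  dplyr::summarise(df,
                   n = dplyr::n(),
                   mean = mean(!!rlang::sym(to.sum), na.rm = TRUE))
}
my_summarise(df = temp, to.sum = 'eg1')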
I have a list of 5 data.frames.
Each data.frame has 3 columns:
id, T_C, SPEND
where T_C is an indicator of TEST or CONTROL.
I know how to use lapply to get a mean of T or C, but how do you do that with multiple list elements?
dfList <- lapply(tableListBase, function(t) fetch(dbSendQuery(con, paste0("SELECT * FROM ", t))))
dfList <- setNames(dfList, tableListBase)
??
For a single data.frame I can do this:
means <- tapply(NET_SPEND, TC_INDICATOR, mean)
I am learning :-)
My goal for now is to get the mean() and sd() by T/C over these 5 data.frames.
My ultimate goal is to identify values outside mean - 3*sd and mean + 3*sd and remove them from this 5-data.frame set, creating a new one with the outliers removed.
I know how to do this in a more manual form, with more lines of code, but I would like to learn how to employ more FUN() methods :-)
Here are a few approaches you can take. I've ordered them by which one I would most likely use myself:
# Make a list of 5 data frames. I'll use `mtcars` for convenience, since
# I don't have your data.
X <- lapply(1:5,
function(i) mtcars[sample(1:nrow(mtcars),
size = nrow(mtcars),
replace = TRUE), ])
library(dplyr)
# Bring all of the tables together and summarise
mapply(function(df, i){ df$tbl_id <- i; df},
X,
seq_along(X),
SIMPLIFY = FALSE) %>%
bind_rows() %>%
group_by(tbl_id, am) %>%
summarise(mean = mean(mpg),
sd = sd(mpg))
# Make a list of summaries
lapply(X,
function(df)
{
df %>%
group_by(am) %>%
summarise(mean = mean(mpg),
sd = sd(mpg))
})
# Run tapply separately for the means and sds
mean_list <-
lapply(X,
function(df)
{
tapply(df$mpg, df$am, mean)
}
)
sd_list <-
lapply(X,
function(df)
{
tapply(df$mpg, df$am, sd)
}
)
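As for the ultimate goal of dropping observations outside mean ± 3 sd within each T/C group, here is a hedged sketch building on the approaches above, assuming the dfList of data.frames and the T_C and SPEND column names from the question:

library(dplyr)
cleaned <- lapply(dfList, function(df) {
  df %>%
    group_by(T_C) %>%
    filter(abs(SPEND - mean(SPEND)) <= 3 * sd(SPEND)) %>%
    ungroup()
})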