Mapping the same function to multiple tibbles in a list - r

I have several tibbles, each of which has different numbers of columns and different column names. I want to standardize the column names for each to be all lowercase. This works for an individual tibble:
library(magrittr)
library(purrr)
colnames(tbl1) %<>% map(tolower)
The column names for the object tbl1 are now all lowercase.
If I put all my tibbles in a list, however, this doesn't work:
all_tbls <- list(tbl1, tbl2, tbl3)
all_tbls %<>% map(function(tbl) {colnames(tbl) %<>% map(tolower)})
The colnames for the objects tbl1, tbl2, and tbl3 are not changed by this. The objects in the list all_tbls are now lists of the column names for each tbl, i.e. what you'd get if you applied as.list() to the result of colnames().
Why is this happening? Is there a better approach to doing this? I'd prefer to use tidyverse functions (e.g. map instead of *apply) for consistency with other code, but am open to other solutions.
EDIT: To be clear, I'd like to be able to work with the original tibble objects, i.e. the desired outcome is for the colnames of tbl1, tbl2, and tbl3 to change.
Other Q&A I looked at and did not find illuminating include:
apply function to elements over a list
R: apply a function to a list of dataframes and save to workspace

library(magrittr)
library(purrr)
all_tbls %<>% map(~ set_names(.x, tolower(colnames(.x))))
The objects in the list all_tbls are now lists of the column names for each tbl
Because you're asking map to lowercase the column names and return them as a list.
To modify in place we can use data.table::setnames, since data.table modifies by reference rather than making a copy.
library(data.table)
library(purrr)
map(list(df1, df2), ~ setnames(., old = names(.), new = tolower(names(.))))
Data
df1 <- read.table(text='
A B
1 1',header=TRUE)
df2 <- read.table(text='
C D
2 2',header=TRUE)
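If you also need the standalone objects tbl1, tbl2, and tbl3 themselves updated (per the EDIT in the question), one sketch, assuming you build the list with names, is to push the renamed tibbles back into the global environment with list2env():
library(purrr)
all_tbls <- list(tbl1 = tbl1, tbl2 = tbl2, tbl3 = tbl3)
all_tbls <- map(all_tbls, ~ set_names(.x, tolower(colnames(.x))))
# overwrite tbl1, tbl2 and tbl3 with the renamed versions
list2env(all_tbls, envir = globalenv())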

The function you're mapping is returning the column names; you need it to return the actual tibble instead:
all_tbls %<>% map(function(tbl) {
  colnames(tbl) %<>% map(tolower)
  tbl
})

You can use purrr::map and dplyr::rename_all:
all_tbls <- list(head(iris,2), head(cars,2))
library(tidyverse)
all_tbls %>% map(rename_all, toupper)
# [[1]]
# SEPAL.LENGTH SEPAL.WIDTH PETAL.LENGTH PETAL.WIDTH SPECIES
# 1 5.1 3.5 1.4 0.2 setosa
# 2 4.9 3.0 1.4 0.2 setosa
#
# [[2]]
# SPEED DIST
# 1 4 2
# 2 4 10
#
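A side note: rename_all() is superseded in more recent dplyr releases. Assuming dplyr >= 1.0.0, rename_with() gives the same result:
all_tbls %>% map(rename_with, toupper)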

Related

Subset larger Dataframe into smaller ones in R?

I have a larger dataframe from which I would like to split up based on 2 columns and a changing 3rd column.
I am on mobile so it's hard to give a reproducible example, but I will try my best to describe.
I have a large Dataframe with 10 columns, the first 2 being ID and Year.
I would like to have smaller ones where the 3rd column will be each of the remaining 8.
So a total of 8 smaller dataframes.
I have tried:
newDF1<-select(BIGdf, c("ID", "Year", "3rdVariable"))
newDF2<-select(BIGdf, c("ID", "Year", "4thVariable"))
This achieves the result, but is there a way to avoid writing out each individual variable?
Sorry for the poor formatting any help would be appreciated.
It is usually bad practice to split up data which belongs together.
However, you can automatically create new R objects based on expressions using assign:
library(tidyverse)
columns <-
  iris %>%
  colnames() %>%
  setdiff("Species")
columns
#> [1] "Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width"
columns %>%
  walk(~ {
    data <- iris %>% head(2) %>% select_at(c("Species", .x))
    assign(.x, data, envir = globalenv())
  })
# access created objects
Sepal.Width
#> Species Sepal.Width
#> 1 setosa 3.5
#> 2 setosa 3.0
Sepal.Length
#> Species Sepal.Length
#> 1 setosa 5.1
#> 2 setosa 4.9
Created on 2021-11-25 by the reprex package (v2.0.1)
Adding to danlooos' answer: if you want a base R solution you could just use a loop:
for (col in colnames(iris)[-1:-2]) {
  assign(col, iris[, c("Sepal.Length", "Sepal.Width", col)], envir = globalenv())
}
Or do the same thing and store the resulting data frames in a list, which I personally find somewhat cleaner:
new_frames <- list()
for (col in colnames(iris)[-1:-2]) {
  new_frames <- append(new_frames, list(iris[, c("Sepal.Length", "Sepal.Width", col)]))
}
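A variant of the loop above that also names each element, a sketch using base lapply() and setNames(), makes the resulting list easier to index:
cols <- colnames(iris)[-1:-2]
new_frames <- setNames(
  lapply(cols, function(col) iris[, c("Sepal.Length", "Sepal.Width", col)]),
  cols
)
new_frames[["Petal.Length"]]  # access one small data frame by name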

Iterate sequentially over two lists in R

I have two dfs that look something like this:
library(tidyverse)
iris <- iris %>% mutate_at(1:4, ~ . + 2)
iris2 <- iris
names(iris2) <- sub(".", "_", names(iris2), fixed = TRUE)
My aim is to reduce the values of the variables in iris that are above the maximum values of the corresponding variable in iris2, to match the maximum value in iris2.
I have written a function that does this.
max(iris$Sepal.Length)
[1] 9.9
max(iris2$Sepal_Length)
[1] 7.9
# I want every value of iris that is >= the max value of iris2 to be set to the max value of iris2.
# my function:
fixmax <- function(data, data2, var1, var2) {
  data <- data %>%
    mutate("{var1}" := ifelse(get(var1) >= max(data2[[var2]], na.rm = TRUE),
                              max(data2[[var2]], na.rm = TRUE),
                              get(var1)))
  return(data)
}
# apply my function to a variable
tst_iris <- fixmax(iris,iris2,"Sepal.Length","Sepal_Length")
max(tst_iris$Sepal.Length)
7.9 # it works!
The challenge I face is that I would like to iterate my function sequentially over two lists of variables, i.e. Sepal.Length with Sepal_Length, Sepal.Width with Sepal_Width, etc.
Does anyone know how I can do this?
I tried using Map but I am doing something wrong.
lst1 <- names(iris[,1:4])
lst2 <- names(iris2[,1:4])
final_iris <- Map(fixmax, iris, iris2, lst1, lst2)
My goal is to obtain a df (final_iris) where every variable has been adjusted using the criteria specified by fixmax.
I know I can do this by running my function on every variable like so.
final_iris <- iris
final_iris <- fixmax(final_iris,iris2,"Sepal.Length","Sepal_Length")
final_iris <- fixmax(final_iris,iris2,"Sepal.Width","Sepal_Width")
final_iris <- fixmax(final_iris,iris2,"Petal.Length","Petal_Length")
final_iris <- fixmax(final_iris,iris2,"Petal.Width","Petal_Width")
But in the real data, I have to run this operation tens of times and I would like to be able to loop my function sequentially.
Does anyone know how I can loop my fixmax over lst1 and lst2 sequentially?
Rather than explicitly iterating over the different datasets and columns by name, you can take advantage of the vectorization built into R. If the dataframes have the same column/variable ordering a function mapped to both dataframes using mapply or purrr::map2 will iterate column by column without the need to specify column names.
Given two input data frames (df_small and df_big) the steps are:
1. Calculate the max of each column in df_small to create df_small_max.
2. Apply the pmin function to each column of df_big and each value of df_small_max using mapply (or purrr::map2_dfc if you prefer tidyverse mapping; see the sketch after the example).
#set up fake data
df_small <- iris[,1:4]
df_big <- df_small + 2
# find max of each col in df_small
df_small_max <- sapply(df_small, max)
# replace values of df_big which are larger than df_small_max
df_big_fixed <- mapply(pmin, df_big, df_small_max)
# sanity check -- Note the change in Sepal.Width
df_small_max
#> Sepal.Length Sepal.Width Petal.Length Petal.Width
#> 7.9 4.4 6.9 2.5
head(df_big, 3)
#> Sepal.Length Sepal.Width Petal.Length Petal.Width
#> 1 7.1 5.5 3.4 2.2
#> 2 6.9 5.0 3.4 2.2
#> 3 6.7 5.2 3.3 2.2
head(df_big_fixed, 3)
#> Sepal.Length Sepal.Width Petal.Length Petal.Width
#> [1,] 7.1 4.4 3.4 2.2
#> [2,] 6.9 4.4 3.4 2.2
#> [3,] 6.7 4.4 3.3 2.2
Created on 2021-07-31 by the reprex package (v2.0.0)
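For reference, the purrr::map2_dfc variant mentioned in the steps above is a one-liner; a sketch assuming the same df_big and df_small_max objects (it returns a tibble rather than a matrix):
library(purrr)
df_big_fixed2 <- map2_dfc(df_big, df_small_max, pmin)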
It's likely that your issue is related to the fact that dataframes are themselves lists. Map() expects the non-function arguments to be lists of the same length. Any arguments that are shorter than the longest list are "recycled" to match its length.
Currently, you have:
final_iris <- Map(fixmax, iris, iris2, lst1, lst2)
This is actually equivalent to:
final_iris <- Map(fixmax,
                  list(iris$Sepal.Length,
                       iris$Sepal.Width,
                       iris$Petal.Length,
                       iris$Petal.Width,
                       iris$Species),
                  list(iris2$Sepal_Length,
                       iris2$Sepal_Width,
                       iris2$Petal_Length,
                       iris2$Petal_Width,
                       iris2$Species),
                  lst1,
                  lst2)
(To understand why, you must remember that dataframes like iris and iris2 are, technically, under the hood, lists of [atomic] vectors.)
I suspect that you want the full iris and iris2 data frames to be supplied to each call to fixmax(). In order to have Map() recycle these two arguments, they need to be supplied as single-element lists, like so:
final_iris <- Map(fixmax, list(iris), list(iris2), lst1, lst2)
To combine a list of dataframes into a single dataframe, do:
do.call(rbind, final_iris)
Here is a mostly base way. I also renamed the variables because I had some trouble replicating the example, since the original approach saves over the iris object.
The approach: instead of mutating a data.frame object, the modified function returns only the vector of expected values, which we then re-assign back to the original data.frame.
fixmax2 = function(x, y) {
  max_y = max(y, na.rm = TRUE)
  ifelse(x >= max_y, max_y, x)  # keep x unless it exceeds the max of y
}
cols = which(sapply(df_plus, is.numeric))
df_plus[cols] = Map(fixmax2, df_plus[cols], df_iris[cols])
df_plus
Raw data:
library(dplyr)
df_plus = iris %>% mutate_at(1:4, ~ . + 2)  ## let's not save over iris
df_iris = iris
names(df_iris) <- sub(".", "_", names(df_iris), fixed = TRUE)
Is that what you're expecting?
my_a <- iris %>% mutate_at(1:4, ~ . + 2)
iris2 <- iris
names(iris2) <- sub(".", "_", names(iris2), fixed = TRUE)
my_var <- which(my_a$Sepal.Length >= max(iris2$Sepal_Length) & my_a$Sepal.Width >= max(iris2$Sepal_Width))
if (length(my_var)) {
  my_a <- my_a[my_var, ]
}
Your function seems convoluted and hard to read at first glance. We can tidy it up so it returns min(x, max_val) for each value in a column with a quick function:
#function to correct max
adjust_max <- function(x, max_val) {
  return(ifelse(x >= max_val, max_val, x))
}
Finally, we want to apply this automatically and sequentially using the two dataframes. We will use a simple for loop. Code to set up the problem is attached.
#libraries
library(tidyverse)
#set up fake data
iris_big <- iris %>% mutate_at(1:4, ~ . + 2)
iris_small <- iris
names(iris_small) <- sub(".", "_", names(iris_small), fixed = TRUE)
#check which is the bigger one and the smaller
max(iris_big$Sepal.Length) #bigger
max(iris_small$Sepal_Length) #smaller
#function to correct max
adjust_max <- function(x, max_val) {
  return(ifelse(x >= max_val, max_val, x))
}
#apply it to get a final result
iris_final <- iris_big
# iterate over columns, assuming same positions
# you can edit the 1:ncol(iris_final) to only take the columns you want
for (i in 1:ncol(iris_final)) {
  # check numeric
  if (is.numeric(iris_final[, i])) {
    # applies the function - notice we call iris_final and iris_small
    iris_final[, i] <- sapply(iris_final[, i],
                              adjust_max,
                              max_val = max(iris_small[, i]))
  }
}
#check answer is correct
apply(iris_final[,1:4], 2, max)
apply(iris_small[,1:4], 2, max)
tail(iris_final)
For a tidyverse approach you can use transmute instead of mutate. transmute returns only the newly created column in each iteration, whereas mutate would return all the columns every time.
Apart from that, to make it more tidyverse-friendly I am using .data instead of get, and pmin instead of the more complicated ifelse solution.
library(dplyr)
library(purrr)
fixmax <- function(data, data2, var1, var2) {
  data <- data %>% transmute("{var1}" := pmin(.data[[var1]], max(data2[[var2]])))
  return(data)
}
To apply the function to each pair of columns you can use map2_dfc which will also combine the results in one dataframe.
lst1 <- names(iris[,1:4])
lst2 <- names(iris2[,1:4])
Compare the max values of two dataframes before applying the function.
map_dbl(iris[lst1], max)
#Sepal.Length Sepal.Width Petal.Length Petal.Width
# 9.9 6.4 8.9 4.5
map_dbl(iris2[lst2], max)
#Sepal_Length Sepal_Width Petal_Length Petal_Width
# 7.9 4.4 6.9 2.5
Apply the function -
iris[lst1] <- map2_dfc(lst1, lst2, ~fixmax(iris, iris2, .x, .y))
Compare the max values of two dataframes after applying the function.
map_dbl(iris[lst1], max)
#Sepal.Length Sepal.Width Petal.Length Petal.Width
# 7.9 4.4 6.9 2.5
map_dbl(iris2[lst2], max)
#Sepal_Length Sepal_Width Petal_Length Petal_Width
# 7.9 4.4 6.9 2.5
You should consider using column indices; a complete (not including the data-frame construction) base R solution could look like:
# Resolve the indices of the numeric vectors in
# iris: num_cols => integer vector
num_cols <- which(
  vapply(
    iris,
    is.numeric,
    logical(1)
  ),
  arr.ind = TRUE
)
# Map the pmin function over iris to select the
# minimum of the vector element in iris and the
# maximum values of that vector in iris2:
# iris => data.frame
iris[, num_cols] <- Map(function(i) {
  pmin(
    iris[, i],
    max(
      iris2[, i],
      na.rm = TRUE
    )
  )
},
num_cols
)
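A quick sanity check after running the block above, assuming the iris and iris2 objects as set up in the question:
# no numeric column of iris should now exceed the corresponding max in iris2
all(mapply(function(i) max(iris[, i]) <= max(iris2[, i]), num_cols))
# [1] TRUE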
You can do this by creating a matrix in which each column repeats the corresponding max value, then using pmin to take the elementwise minimum of the max values in iris2 and the values in the other dataframe. I created a new fixmax function which takes only the two dataframes as arguments.
Preparing the data
library(tidyverse)
initial <- iris %>% mutate_at(1:4, ~ . + 2)
iris2 <- iris
names(iris2) <- sub(".", "_", names(iris2), fixed = TRUE)
print(max(initial$Sepal.Length))
# [1] 9.9
print(max(iris2$Sepal_Length))
# [1] 7.9
Creating the function
fixmax <- function(df, dfmax) {
  colids <- which(unlist(lapply(dfmax, is.numeric)))
  dfmax <- apply(dfmax[, colids], 2, max) %>%
    matrix(nrow = nrow(dfmax), ncol = length(colids), byrow = TRUE) %>%
    as.data.frame()
  df[, colids] <- pmin(df[, colids], dfmax)
  return(df)
}
Testing the function
newiris <- fixmax(initial, iris2)
print(max(newiris$Sepal.Length))
# [1] 7.9
assertthat::assert_that(!identical(newiris, iris2))
# [1] TRUE
imax <- apply(iris2[, 1:4], 2, max) %>%
  matrix(nrow = nrow(iris2), ncol = 4, byrow = TRUE) %>%
  as.data.frame()
# each numeric value is either kept from initial or capped at the column max of iris2
assertthat::assert_that(all((initial[, 1:4] == newiris[, 1:4]) | (newiris[, 1:4] == imax)))
# [1] TRUE
assertthat::assert_that(all(newiris[, 1:4] <= imax))
# [1] TRUE
print(head(newiris))
# Sepal.Length Sepal.Width Petal.Length Petal.Width Species
# 1 7.1 4.4 3.4 2.2 setosa
# 2 6.9 4.4 3.4 2.2 setosa
# 3 6.7 4.4 3.3 2.2 setosa
# 4 6.6 4.4 3.5 2.2 setosa
# 5 7.0 4.4 3.4 2.2 setosa
# 6 7.4 4.4 3.7 2.4 setosa

Use table() inside dataframes stored in a list to create new dataframes

I have the dataframe below:
library(lubridate)
eventtime <- c("2020-02-07 22:06:00", "2020-02-07 22:00:00", "2020-02-07 21:46:00")
eventvalue <- c("home", "work", "exit")
geof <- data.frame(eventtime, eventvalue)
Then I use a list to store them and assign the list elements to different data.frames using:
library(lubridate)
library(tidyverse)
list1 <- geof %>%
  split(.$eventvalue) %>%
  bind_rows() %>%
  mutate(EventHour = hour(eventtime)) %>%
  split(.$eventvalue)
for (i in names(list1)) setNames(assign(i, data.frame(list1[[i]])), names(list1))
I would then like to use table() on EventHour for each dataframe, to create new dataframes with the frequency of each EventHour, and then rename those columns:
#table the events by count of hours
tablegeh <- data.frame(table(home$EventHour))
colnames(tablegeh) <- c("Hour", "Frequency")
You can use lapply:
lapply(list1, function(x) table(x$EventHour))
#> $exit
#> 21
#> 1
#>
#> $home
#> 22
#> 1
#>
#> $work
#> 22
#> 1
Obviously, you only have a single entry in each data frame, so the "tables" don't look much like tables!
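If you also want each result as a small data frame with renamed columns (the tablegeh step from the question), a sketch building on the same lapply() call:
tables <- lapply(list1, function(x) {
  setNames(as.data.frame(table(x$EventHour)), c("Hour", "Frequency"))
})
tables$home
#>   Hour Frequency
#> 1   22         1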

adding hash to each row using dplyr and digest in R

I need to add a fingerprint to each row in a dataset so I can compare it with a later version of the same set and look for differences.
I know how to add hash for each row in R like below:
data.frame(iris, hash = apply(iris, 1, digest))
I am learning to use dplyr as the dataset is getting huge and I need to store it in SQL Server. I tried something like below, but the hash is not working: all rows give the same hash:
iris %>%
  rowwise() %>%
  mutate(hash = digest(.))
Any clue for row-wise hashing using dplyr? Thanks!
In the rowwise attempt, digest(.) hashes the whole data frame rather than the current row, which is why every row gets the same hash. We could use do:
res <- iris %>%
  rowwise() %>%
  do(data.frame(., hash = digest(.)))
head(res, 3)
# A tibble: 3 x 6
# Sepal.Length Sepal.Width Petal.Length Petal.Width Species hash
# <dbl> <dbl> <dbl> <dbl> <fctr> <chr>
#1 5.1 3.5 1.4 0.2 setosa e261621c90a9887a85d70aa460127c78
#2 4.9 3.0 1.4 0.2 setosa 7bf67322858048d82e19adb6399ef7a4
#3 4.7 3.2 1.3 0.2 setosa c20f3ee03573aed5929940a29e07a8bb
Note that in the apply approach, all the columns are converted to a single class, as apply converts the data frame to a matrix and a matrix can hold only a single class. There will be a warning about converting the factor to the character class.
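A quick way to see that coercion with the same iris data:
# apply() goes through as.matrix(), which forces every value to one type;
# with a factor column present, everything becomes character
class(as.matrix(iris)[1, 1])
#> [1] "character"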
Since do is being superseded, this option may be better now:
library(digest)
library(tidyverse)
# Create a tibble for practice
df <- tibble(x = rep(c(1, 2), each = 2), y = c(1, 1, 3, 4), z = c(1, 1, 6, 4))
# Note that rows 1 and 2 are equal.
# This will generate a sha1 over specific columns (column z is excluded)
df %>% rowwise() %>% mutate(m = sha1(c(x, y)))
# This will generate a hash over all columns,
# then convert the hash to an integer
# (better for joining or other data operations later)
df %>%
  rowwise() %>%
  mutate(sha =
    digest2int(                     # generates a new integer hash
      sha1(c_across(everything()))  # across all columns
    )
  )
It may be a better option to convert everything to character and paste it together to use just one hash function call. You can use unite:
df %>%
  rowwise() %>%
  unite(allCols, everything(), sep = "", remove = FALSE) %>%
  mutate(hash = digest2int(allCols)) %>%
  select(-allCols)
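Since digest2int() hashes each element of a character vector, the rowwise() step can likely be dropped from the pipeline above, letting one vectorized call handle all rows at once. A quick check of that vectorization:
library(digest)
length(digest2int(c("abc", "abd")))
#> [1] 2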

Correlation using funs in dplyr

I want to find the rank correlation of various columns in a data.frame using dplyr.
I am sure there is a simple solution to this problem, but I think the problem lies in me not being able to use two inputs in summarize_each_ in dplyr when using the cor function.
For the following df:
df <- data.frame(Universe=c(rep("A",5),rep("B",5)),AA.x=rnorm(10),BB.x=rnorm(10),CC.x=rnorm(10),AA.y=rnorm(10),BB.y=rnorm(10),CC.y=rnorm(10))
I want to get the rank correlations between all the .x and the .y combinations. My problem is in the function below, where you see ????
cor <- df %>% group_by(Universe) %>%
summarize_each_(funs(cor(.,method = 'spearman',use = "pairwise.complete.obs")),????)
I want cor to just include the correlation pairs: AA.x.AA.y, AA.x.BB.y, ... for each Universe.
Please help!
An alternative approach is to just call the cor function once since this will calculate all required correlations. Repeated calls to cor might be a performance issue for a large data set. Code to do this and extract the correlation pairs with labels could look like:
#
# calculate correlations and display in matrix format
#
cor_matrix <- df %>% group_by(Universe) %>%
  do(as.data.frame(cor(.[, -1], method = "spearman", use = "pairwise.complete.obs")))
#
# to add row names
#
cor_matrix1 <- cor_matrix %>%
  data.frame(row = rep(colnames(.)[-1], n_groups(.)))
#
# calculate correlations and display in column format
#
library(reshape2)  # provides melt()
num_col <- ncol(df[, -1])
out_indx <- which(upper.tri(diag(num_col)))
cor_cols <- df %>% group_by(Universe) %>%
  do(melt(cor(.[, -1], method = "spearman", use = "pairwise.complete.obs"), value.name = "cor")[out_indx, ])
So here follows the winning (time-wise) solution to my problem:
d <- df %>%
  gather(R1, R1v, contains(".x")) %>%
  gather(R2, R2v, contains(".y"), -Universe) %>%
  group_by(Universe, R1, R2) %>%
  summarize(ICAC = cor(x = R1v, y = R2v, method = "spearman", use = "pairwise.complete.obs")) %>%
  unite(Pair, R1, R2, sep = "_")
The difference is only about 0.005 milliseconds in this example, but it grows as you add data.
Try this:
library(data.table) # needed for fast melt
setDT(df) # sets by reference, fast
mdf <- melt(df[, id := 1:.N], id.vars = c('Universe','id'))
mdf %>%
  mutate(obs_set = substr(variable, 4, 4)) %>%              # ".x" or ".y" subgroup
  full_join(., ., by = c('Universe', 'obs_set', 'id')) %>%  # see notes
  group_by(Universe, variable.x, variable.y) %>%
  filter(variable.x != variable.y) %>%
  dplyr::summarise(rank_corr = cor(value.x, value.y,
                                   method = 'spearman', use = 'pairwise.complete.obs'))
Produces:
Universe variable.x variable.y rank_corr
(fctr) (fctr) (fctr) (dbl)
1 A AA.x BB.x -0.9
2 A AA.x CC.x -0.9
3 A BB.x AA.x -0.9
4 A BB.x CC.x 0.8
5 A CC.x AA.x -0.9
6 A CC.x BB.x 0.8
7 A AA.y BB.y -0.3
8 A AA.y CC.y 0.2
9 A BB.y AA.y -0.3
10 A BB.y CC.y -0.3
.. ... ... ... ...
Explanation:
Melt: converts the table to long form, one row per observation. To do the melt in a dplyr chain, you would have to use tidyr::gather, I believe, so pick your dependency (see the sketch after these notes). Using data.table there is faster and not hard to understand. The step also creates an id for each observation, 1 to nrow(df). The rest is in dplyr like you wanted.
Full join: joins the melted table to itself to create paired observations from all variable pairings based on common Universe and observation id (edit: and now '.x' or '.y' subgroup).
Filter: we don't need to correlate observations paired to themselves, since we know those correlations = 1. If you wanted to include them for a correlation matrix or something, comment out this step.
Summarize using Spearman correlation. Note you should use dplyr::summarise since if you have plyr also loaded you might accidentally call plyr::summarise.
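For reference, a sketch of the melt step using tidyr::gather instead of data.table, assuming the df from the question:
library(tidyverse)
mdf <- df %>%
  mutate(id = row_number()) %>%
  gather(variable, value, -Universe, -id)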
