zipping before ddply / mapply on dataframe - r

I have a dataframe:
> s <- expand.grid(c(T,F),c(T,F))
> s
Var1 Var2
1 TRUE TRUE
2 FALSE TRUE
3 TRUE FALSE
4 FALSE FALSE
and would like to duplicate each row a number of times, given by a vector:
> r <- c(2,3,4,1)
Do you know how to do that?
In functional programming terms, it would just be a map over the zipped lists: duplicate each element, then collect.
I am not sure how to do either the zip with plyr or the map with mapply...

Much easier than all that:
s[rep(1:4, times = r), ]
Var1 Var2
1 TRUE TRUE
1.1 TRUE TRUE
2 FALSE TRUE
2.1 FALSE TRUE
2.2 FALSE TRUE
3 TRUE FALSE
3.1 TRUE FALSE
3.2 TRUE FALSE
3.3 TRUE FALSE
4 FALSE FALSE
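If the row count isn't fixed at 4, the same idiom generalizes with seq_len(); tidyr's uncount() does the same expansion from a weights column. A minimal sketch (the temporary column n is just a name picked for illustration):
# base R: repeat each row index r[i] times
s[rep(seq_len(nrow(s)), times = r), ]
# tidyr: attach the counts as a column, then expand
library(tidyr)
uncount(cbind(s, n = r), n)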

Values of R standardization outputs not equal? [duplicate]

I have been trying to figure out why the standardization outputs from these methods do not compare as equal, even though they should be numerically the same.
library(vegan)
data(mite.env)  # environmental data shipped with vegan
# subset data
env.data <- mite.env[1:10, c("SubsDens", "WatrCont")]
# method 1
env.data.x <- env.data
env.data.x$SubsDens <- as.vector(scale(env.data.x$SubsDens))
env.data.x$WatrCont <- as.vector(scale(env.data.x$WatrCont))
# method 2
env.data.y <- env.data
env.data.y <- as.data.frame(decostand(as.matrix(env.data.y), method = "standardize"))
# method 3
env.data.z <- env.data
normalize <- function(x) {
  return((x - mean(x)) / sd(x))
}
env.data.z$SubsDens <- normalize(env.data.z$SubsDens)
env.data.z$WatrCont <- normalize(env.data.z$WatrCont)
# comparison
env.data.x == env.data.y
env.data.x == env.data.z
env.data.y == env.data.z
Here is the output:
> env.data.x == env.data.y
SubsDens WatrCont
1 TRUE TRUE
2 TRUE TRUE
3 TRUE TRUE
4 TRUE TRUE
5 TRUE TRUE
6 TRUE TRUE
7 TRUE TRUE
8 TRUE TRUE
9 TRUE TRUE
10 TRUE TRUE
> env.data.x == env.data.z
SubsDens WatrCont
1 FALSE TRUE
2 FALSE TRUE
3 FALSE TRUE
4 FALSE TRUE
5 FALSE TRUE
6 FALSE TRUE
7 FALSE TRUE
8 FALSE TRUE
9 FALSE TRUE
10 FALSE TRUE
> env.data.y == env.data.z
SubsDens WatrCont
1 FALSE TRUE
2 FALSE TRUE
3 FALSE TRUE
4 FALSE TRUE
5 FALSE TRUE
6 FALSE TRUE
7 FALSE TRUE
8 FALSE TRUE
9 FALSE TRUE
10 FALSE TRUE
Method 3, standardizing using the formula as a function, seems to be doing something different...
Thank you in advance for your answers!
Thank you Jonny Phelps and r2evans for your comments.
I should've just checked the difference between the columns:
env.data.x - env.data.z
The differences were on the order of 1e-16, i.e. ordinary floating-point round-off from the methods doing the same arithmetic in a slightly different order, so not at all significant for my purposes.
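For comparisons like this, a tolerance-based test is the right tool rather than ==. A minimal sketch using base R's all.equal() (dplyr::near() is an alternative if you are already in the tidyverse):
# TRUE if all values match within the default tolerance (~1.5e-8)
isTRUE(all.equal(env.data.x, env.data.z))
# column-wise check
isTRUE(all.equal(env.data.x$SubsDens, env.data.z$SubsDens))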

How to create a new column containing the names of the TRUE columns for each row in R

a<-c(TRUE,FALSE,TRUE,FALSE,TRUE,FALSE)
b<-c(TRUE,FALSE,TRUE,FALSE,FALSE,FALSE)
c<-c(TRUE,TRUE,TRUE,FALSE,TRUE,FALSE)
costumer<-c("one","two","three","four","five","six")
df<-data.frame(costumer,a,b,c)
That's some example code. Printed, it looks like this:
costumer a b c
1 one TRUE TRUE TRUE
2 two FALSE FALSE TRUE
3 three TRUE TRUE TRUE
4 four FALSE FALSE FALSE
5 five TRUE FALSE TRUE
6 six FALSE FALSE FALSE
I want to create a new column df$items that contains only the column names that are TRUE for each row in the data. Something like this:
costumer a b c items
1 one TRUE TRUE TRUE a,b,c
2 two FALSE FALSE TRUE c
3 three TRUE TRUE TRUE a,b,c
4 four FALSE FALSE FALSE
5 five TRUE FALSE TRUE a,c
6 six FALSE FALSE FALSE
I thought of using apply function or use which for selecting indexes, but couldn't figure it out. Can anyone help me?
df$items <- apply(df, 1, function(x) paste0(names(df)[x == TRUE], collapse = ","))
df
costumer a b c items
1 one TRUE TRUE TRUE a,b,c
2 two FALSE FALSE TRUE c
3 three TRUE TRUE TRUE a,b,c
4 four FALSE FALSE FALSE
5 five TRUE FALSE TRUE a,c
6 six FALSE FALSE FALSE
df$items = apply(df[2:4], 1, function(x) toString(names(df[2:4])[x]))
df
# costumer a b c items
# 1 one TRUE TRUE TRUE a, b, c
# 2 two FALSE FALSE TRUE c
# 3 three TRUE TRUE TRUE a, b, c
# 4 four FALSE FALSE FALSE
# 5 five TRUE FALSE TRUE a, c
# 6 six FALSE FALSE FALSE
You could use
df$items <- apply(df, 1, function(x) toString(names(df)[which(x == TRUE)]))
Output
# costumer a b c items
# 1 one TRUE TRUE TRUE a, b, c
# 2 two FALSE FALSE TRUE c
# 3 three TRUE TRUE TRUE a, b, c
# 4 four FALSE FALSE FALSE
# 5 five TRUE FALSE TRUE a, c
# 6 six FALSE FALSE FALSE
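One caveat about the answers that apply() over the whole data frame: apply() first coerces df to a character matrix (because of the costumer column), so inside the function the logical values are the strings "TRUE"/"FALSE", and x == TRUE only works because TRUE is itself coerced to "TRUE" for the comparison. A sketch that sidesteps the coercion by restricting apply() to the logical columns:
# rows of an all-logical sub-frame stay logical, so x can index names(x) directly
df$items <- apply(df[, c("a", "b", "c")], 1, function(x) toString(names(x)[x]))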
We can use pivot_longer to reshape to 'long' format and then do a group-by paste:
library(dplyr)
library(tidyr)
df %>%
  pivot_longer(cols = a:c) %>%
  group_by(costumer) %>%
  summarise(items = toString(name[value])) %>%
  left_join(df, ., by = "costumer")

Using any() vs | in dplyr::mutate

Why should I use | vs any() when I'm comparing columns in dplyr::mutate()?
And why do they return different answers?
For example:
library(tidyverse)
df <- tibble(x = rep(c(T,F,T), 4), y = rep(c(T,F,T,F), 3), allF = F, allT = T)
df %>%
  mutate(
    withpipe = x | y,         # returns expected results by row
    usingany = any(c(x, y))   # returns TRUE for every row
  )
What's going on here and why should I use one way of comparing values over another?
The difference between the two is how the answer is calculated:
for |, elements are compared row-wise and boolean logic is used to return the proper value. In the example above each x and y pair are compared to each other and a logical value is returned for each pair, resulting in 12 different answers, one for each row of the data frame.
any(), on the other hand, looks at the entire vector and returns a single value. In the above example, the mutate line that calculates the new usingany column is basically doing this: any(c(df$x, df$y)), which will return TRUE because there's at least one TRUE value in either df$x or df$y. That single value is then assigned to every row of the data frame.
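A two-line illustration of the difference, outside of mutate():
c(TRUE, FALSE) | c(FALSE, FALSE)    # TRUE FALSE -- element-wise, one result per pair
any(c(TRUE, FALSE, FALSE, FALSE))   # TRUE       -- one result for the whole vector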
You can see this in action using the other columns in your data frame:
df %>%
  mutate(
    usingany = any(c(x, y)),  # returns all TRUE
    allfany = any(allF)       # returns all FALSE because every value in df$allF is FALSE
  )
To answer when you should use which one: use | when you want to compare elements row-wise. Use any() when you want a single answer about the entire data frame.
TL;DR: when using dplyr::mutate(), you're usually going to want |.
You can also use rowwise().
df <- tibble(x = rep(c(T,F,T), 4), y = rep(c(T,F,T,F), 3), allF = F, allT = T)
df %>%
  rowwise() %>%
  mutate(x_or_y = any(x, y))
Output:
# A tibble: 12 x 5
x y allF allT x_or_y
<lgl> <lgl> <lgl> <lgl> <lgl>
1 TRUE TRUE FALSE TRUE TRUE
2 FALSE FALSE FALSE TRUE FALSE
3 TRUE TRUE FALSE TRUE TRUE
4 TRUE FALSE FALSE TRUE TRUE
5 FALSE TRUE FALSE TRUE TRUE
6 TRUE FALSE FALSE TRUE TRUE
7 TRUE TRUE FALSE TRUE TRUE
8 FALSE FALSE FALSE TRUE FALSE
9 TRUE TRUE FALSE TRUE TRUE
10 TRUE FALSE FALSE TRUE TRUE
11 FALSE TRUE FALSE TRUE TRUE
12 TRUE FALSE FALSE TRUE TRUE
TL;DR (update):
if_any is the cleanest replacement for any() in rowwise operations with dplyr. See below.
You can use either the OR operator | or any().
The same applies when comparing & and all().
As suggested, you must take into account that | is vectorized, while any() is not.
In order to use any() the same way, you must group the data rowwise, so you can call the equivalent of any(current_row). This can be done with purrr::pmap or dplyr::rowwise.
But dplyr::if_any looks a lot cleaner.
See the code below for a comparison of all the methods:
df %>%
  mutate(
    row_OR = x | y,
    row_pmap_any = pmap_lgl(select(., c(x, y)), any),
    with_if_any = if_any(c(x, y))
  ) %>%
  rowwise() %>%
  mutate(
    row_rowwise_any = any(c_across(c(x, y)))
  )
# A tibble: 12 × 8
# Rowwise:
x y allF allT row_OR row_pmap_any with_if_any row_rowwise_any
<lgl> <lgl> <lgl> <lgl> <lgl> <lgl> <lgl> <lgl>
1 TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE
2 FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE
3 TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE
4 TRUE FALSE FALSE TRUE TRUE TRUE TRUE TRUE
5 FALSE TRUE FALSE TRUE TRUE TRUE TRUE TRUE
6 TRUE FALSE FALSE TRUE TRUE TRUE TRUE TRUE
7 TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE
8 FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE
9 TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE
10 TRUE FALSE FALSE TRUE TRUE TRUE TRUE TRUE
11 FALSE TRUE FALSE TRUE TRUE TRUE TRUE TRUE
12 TRUE FALSE FALSE TRUE TRUE TRUE TRUE TRUE
All methods work, and I did not find much difference in performance.

finding all possible subsets of a dataframe

I am looking for a function that takes a column of a data.frame as the reference and finds all subsets formed by the level combinations of the other variables. For example, let z be a data frame with 4 columns a, b, c, d, each with 2 levels, and let a be the reference. Then z would be like
z$a : TRUE FALSE
z$b : TRUE FALSE
z$c : TRUE FALSE
z$d : TRUE FALSE
Then what I need is a LIST whose elements are named by the combination, such as
aTRUEbTRUEcTRUEdTRUE : subset of the dataframe
aTRUEbFALSEcTRUEdTRUE : subset
...
Here is an example,
set.seed(123)
z <- matrix(sample(c(TRUE, FALSE), size = 100, replace = TRUE), ncol = 4)
colnames(z) <- letters[1:4]
z <- as.data.frame(z)
output= list(
'bTRUEcTRUEdFALSE' = subset(z, b == TRUE & c == TRUE & d == FALSE),
'bTRUEcTRUEdTRUE' = subset(z, b == TRUE & c == TRUE & d == TRUE),
'bTRUEcFALSEdFALSE' = subset(z, b == TRUE & c == FALSE & d == FALSE),
'bTRUEcFALSEdTRUE' = subset(z, b == TRUE & c == FALSE & d == TRUE)
# and so on ...
)
output
$bTRUEcTRUEdFALSE
a b c d
13 FALSE TRUE TRUE FALSE
14 FALSE TRUE TRUE FALSE
$bTRUEcTRUEdTRUE
a b c d
4 FALSE TRUE TRUE TRUE
10 TRUE TRUE TRUE TRUE
16 FALSE TRUE TRUE TRUE
20 FALSE TRUE TRUE TRUE
24 FALSE TRUE TRUE TRUE
$bTRUEcFALSEdFALSE
a b c d
17 TRUE TRUE FALSE FALSE
19 TRUE TRUE FALSE FALSE
22 FALSE TRUE FALSE FALSE
$bTRUEcFALSEdTRUE
a b c d
5 FALSE TRUE FALSE TRUE
11 FALSE TRUE FALSE TRUE
15 TRUE TRUE FALSE TRUE
18 TRUE TRUE FALSE TRUE
21 FALSE TRUE FALSE TRUE
23 FALSE TRUE FALSE TRUE
However, there are issues with the example. Firstly, I do not know the number of variables (in this case 4, a to d). Secondly, the variable names must be obtained from the data itself (simply put, I cannot use subset, since I do not know the variable names to put in the condition; the a in a== could be anything).
What is the most efficient way of doing this in R?
You can use split and paste like so:
split(z, paste(z$b, z$c, z$d))
But the tricky part of your question is how to programmatically combine the variables in columns 2:end without knowing beforehand the number of columns, their names, or their values. We can use a function like the one below to paste the values by row across columns 2:end:
apply(z, 1, function(i) paste(i[-1], collapse=""))
Now combine with split:
split(z, apply(z, 1, function(i) paste(i[-1], collapse="")))
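If you also want the list names to carry the variable labels, as in the question's bTRUEcFALSEdTRUE style, one sketch is to paste the column names together with the row values before splitting (here column 1 is assumed to be the reference):
# build keys like "bTRUEcTRUEdFALSE" from every column except the reference
keys <- apply(z[-1], 1, function(i) paste0(names(i), i, collapse = ""))
split(z, keys)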

Counting Falses before Trues in R

I'm trying to use R to find the average number of attempts before a success in a dataframe with 300,000+ rows. Data is structured as below.
EventID SubjectID ActionID Success DateUpdated
a b c TRUE 2014-06-21 20:20:08.575032+00
b a c FALSE 2014-06-20 02:58:40.70699+00
I'm still learning my way through R. It looks like I can use ddply to separate the frame out based on Subject and Action (I want to see how many times a given subject tries an action before achieving a success), but I can't figure out how to write the formula I need to apply.
library(data.table)
# example data
dt = data.table(group = c(1,1,1,1,1,2,2), success = c(F,F,T,F,T,F,T))
# group success
#1: 1 FALSE
#2: 1 FALSE
#3: 1 TRUE
#4: 1 FALSE
#5: 1 TRUE
#6: 2 FALSE
#7: 2 TRUE
dt[, which(success)[1] - 1, by = group]
# group V1
#1: 1 2
#2: 2 1
Replace group with list(subject, action) or whatever is appropriate for your data (after converting it to data.table from data.frame).
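With the column names from the question (assumed from the two sample rows shown), that would look something like this sketch, where your_df is a hypothetical name for the 300,000-row data.frame:
library(data.table)
dt <- as.data.table(your_df)  # your_df: hypothetical name for the full data
# FALSEs before the first TRUE per subject/action pair (NA if a group never succeeds)
dt[, which(Success)[1] - 1, by = .(SubjectID, ActionID)]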
To follow up on Tarehman's suggestion, since I like rle,
foo <- rle(data$Success)
mean(foo$lengths[foo$values==FALSE])
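Note that this runs rle() over the whole Success column; to get the average per subject/action pair, as the question's ddply idea suggests, you could wrap it in a grouped call. A sketch (column names assumed from the question):
library(plyr)
ddply(data, .(SubjectID, ActionID), function(d) {
  foo <- rle(d$Success)
  mean(foo$lengths[!foo$values])  # average length of the FALSE runs within the group
})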
This might be an answer to a totally different question, but does this get close to what you want?
tfs <- sample(c(FALSE,TRUE),size = 50, replace = TRUE, prob = c(0.8,0.2))
tfs_sums <- cumsum(!tfs)
repsums <- tfs_sums[duplicated(tfs_sums)]
mean(repsums - c(0,repsums[-length(repsums)]))
tfs
[1] FALSE TRUE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE
[20] FALSE FALSE FALSE FALSE FALSE TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE
[39] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE
repsums
1 6 8 9 20 20 20 20 24 26 31 36
repsums - c(0,repsums[-length(repsums)])
1 5 2 1 11 0 0 0 4 2 5 5
The last vector shown gives the length of each continuous "run" of FALSE values in the vector tfs.
You could use a data.table workaround to get what you need, as follows:
library(data.table)
df <- data.frame(EventID = c("a","b","c","d"), SubjectID = c("b","a","a","a"),
                 ActionID = c("c","c","c","c"), Success = c(TRUE,FALSE,FALSE,TRUE))
dt <- data.table(df)
dt[, Index := 1:.N, by = c("SubjectID", "ActionID", "Success")]
Now this Index column holds a running count within each subject/action/success combination. You then aggregate to get the number you need (the max):
result <- aggregate(Index ~ SubjectID + ActionID, data = dt, FUN = max)
So this gives you the max index, which is the number of FALSEs before you hit a TRUE. Note that you might need to do further processing to filter out subjects that never had a TRUE.
