How to get value of last non-NA column [duplicate] - r

This question already has answers here:
Select last non-NA value in a row, by row
(3 answers)
Closed 2 years ago.
A bit difficult to explain, but I have a dataframe whose values look like a staircase: for every date, a different set of columns is NA. I want to create a new column that holds the value of the last non-NA column in each row.
Hopefully this example makes it clearer:
Sample dataframe:
test <- data.frame("date" = c(as.Date("2020-01-01"), as.Date("2020-01-02"), as.Date("2020-01-03")),
                   "a" = c(4, 3, 4),
                   "b" = c(NA, 2, 1),
                   "c" = c(NA, NA, 5))
Desired output:
date............val
2020-01-01...... 4
2020-01-02...... 2
2020-01-03...... 5
I'd also prefer not to do something like take the row number of the date and take that column number + 1, but if that's the only way to do it, that's that. Thanks!

Here's a tidyverse-based approach: convert the columns to rows using pivot_longer, then take the last non-NA value for each date:
library(dplyr)
library(tidyr)
test %>%
  pivot_longer(-date) %>%
  filter(!is.na(value)) %>%
  group_by(date) %>%
  summarize(value = tail(value, 1), .groups = "drop")
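Run on the sample data above, this gives the desired values:
# A tibble: 3 x 2
#   date       value
#   <date>     <dbl>
# 1 2020-01-01     4
# 2 2020-01-02     2
# 3 2020-01-03     5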

You can use max.col with ties.method set to "last" to get the last non-NA value in each row.
test$val <- test[cbind(1:nrow(test), max.col(!is.na(test), ties.method = 'last'))]
test
# date a b c val
#1 2020-01-01 4 NA NA 4
#2 2020-01-02 3 2 NA 2
#3 2020-01-03 4 1 5 5
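To unpack the indexing (an annotation, not part of the original answer): !is.na(test) is a logical matrix, max.col(..., ties.method = "last") returns the right-most non-NA column per row, and indexing with the two-column cbind(row, col) matrix then extracts one value per row:
max.col(!is.na(test), ties.method = "last")
# [1] 2 3 4   -- the right-most non-NA column for each row (a, b, c)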

You can also do this with dplyr's coalesce function, which returns the first non-missing value at each position across the supplied vectors.
library(dplyr)
test %>%
  mutate(val = coalesce(c, b, a))
#> date a b c val
#> 1 2020-01-01 4 NA NA 4
#> 2 2020-01-02 3 2 NA 2
#> 3 2020-01-03 4 1 5 5
Created on 2020-07-07 by the reprex package (v0.3.0)
Note that if you have many columns, @tfehring's & @Ronak's solutions will be better suited, as this method requires you to specify the columns manually. It does have the benefit of being short & sweet, though.
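If you do want a coalesce-based approach over many columns, here is a hedged sketch (not from the original answer) that avoids naming each column: reverse the non-date columns so the right-most one takes priority.
library(dplyr)
# Reverse the non-date columns, then coalesce them in that order,
# so the last (right-most) non-NA column wins in each row.
test$val <- do.call(coalesce, rev(test[-1]))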

Related

How to iterate column values to find out all possible combinations in R? [duplicate]

This question already has answers here:
Count common sets of items between different customers
(4 answers)
Intersect all possible combinations of list elements
(3 answers)
Closed 1 year ago.
Suppose you have a dataframe with ids and elements assigned to each id. For example:
example <- data.frame(id = c(1,1,1,1,1,2,2,2,3,4,4,4,4,4,4,4,5,5,5,5),
                      vals = c("a","b","c","d","e","a","b","d","c",
                               "d","f","g","h","a","k","l","m","a",
                               "b","c"))
I want to find all possible pair combinations. The main struggle here is not which R functions to use, but the logic. How can I iterate through all elements and find the patterns? For instance, a was picked together with b 3 times in my sample dataframe. But the original dataframe has more than 30k rows, so I cannot count these combinations manually. How do I automate counting the number of times each pair of elements was picked together?
I was thinking about widening my df with pivot_wider and then using map_lgl to find matches, but then I realized it would take a very long time to cover all possible combinations, applying map_lgl to every pair of elements.
I asked nearly the same question less than a month ago; fellow users answered it, but the result is not quite what I need.
Do you have any ideas how to create a dataframe with all possible combinations of values for all ids?
I understand that this code is slow, but here is another way to get the expected output, based on the tidyverse.
What I do here is first create a nested dataframe by id, then produce all pair combinations within each id, unnest the dataframe, and finally count the pairs.
library(tidyverse)
example <- data.frame(
  id = c(1,1,1,1,1,2,2,2,3,4,4,4,4,4,4,4,5,5,5,5),
  vals = c("a","b","c","d","e","a","b","d","c","d","f","g","h","a","k","l","m","a","b","c")
)
example %>%
  nest(dataset = -id) %>%
  mutate(dataset = map(dataset, function(dataset) {
    if (nrow(dataset) > 1) {
      # all pairs of vals within this id, as a two-column tibble
      combn(dataset$vals, 2) %>%
        t() %>%
        as_tibble(.name_repair = ~ c("val1", "val2"))
    } else {
      NULL
    }
  })) %>%
  unnest(cols = dataset) %>%
  group_by(val1, val2) %>%
  summarize(n = n(), .groups = "drop") %>%
  arrange(desc(n), val1, val2)
#> # A tibble: 34 x 3
#> val1 val2 n
#> <chr> <chr> <int>
#> 1 a b 3
#> 2 a c 2
#> 3 a d 2
#> 4 b c 2
#> 5 b d 2
#> 6 a e 1
#> 7 a k 1
#> 8 a l 1
#> 9 b e 1
#> 10 c d 1
#> # … with 24 more rows
Created on 2021-03-04 by the reprex package (v1.0.0)
This won't (and can't) be fast for many IDs. If it is too slow, you need to parallelize or implement it in a compiled language (e.g., using Rcpp).
We sort vals. We can then create all combinations of two items grouped by ID, excluding IDs with a single item. Finally, we tabulate the result.
library(data.table)
setDT(example)
setorder(example, id, vals)
example[, if (.N > 1) split(combn(vals, 2), 1:2), by = id][, .N, by = c("1", "2")]
# 1 2 N
# 1: a b 3
# 2: a c 2
# 3: a d 3
# 4: a e 1
# 5: b c 2
# 6: b d 2
# 7: b e 1
#<...>
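To unpack the dense data.table line: combn(vals, 2) returns a two-row matrix of pairs, and split(..., 1:2) separates its first and second rows, which is where the column names "1" and "2" come from. A small sketch of that building block:
combn(c("a", "b", "c"), 2)
#      [,1] [,2] [,3]
# [1,] "a"  "a"  "b"
# [2,] "b"  "c"  "c"
split(combn(c("a", "b", "c"), 2), 1:2)
# $`1`
# [1] "a" "a" "b"
# $`2`
# [1] "b" "c" "c"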

How to keep one instance or more of the values in one column when removing duplicate rows?

I'm trying to remove rows with duplicate values in one column of a data frame. I want every existing value in that column to stay represented: a value should appear more than once if its values in one other column are non-missing and not duplicated, and exactly once if the values in that other column are all missing. Take for example the following data frame:
toy <- data.frame(Group = c(1,1,2,2,2,3,3,4,5,5,6,7,7),
                  Class = c("a",NA,"a","b",NA,NA,NA,NA,"a","b","a","a","a"))
I would like to end up with this:
ideal <- data.frame(Group = c(1,2,2,3,4,5,5,6,7),
                    Class = c("a","a","b",NA,NA,"a","b","a","a"))
I tried transforming the data frame into a data table and following the advice here, like this:
library(data.table)
toy.dt <- as.data.table(toy)
toy.dt[, .(Class = if(all(is.na(Class))) NA_character_ else na.omit(Class)), by = Group]
but duplicates weren't handled as needed: value 7 in the column 'Group' should appear only once in the resulting data.
It would be a bonus if the solution doesn't require transforming the data into a data table.
Here is one way using base R. We first drop the NA rows from toy and keep only unique rows. We then left-join this with the unique Group values, so that groups whose Class is entirely missing still get an NA row.
df1 <- unique(na.omit(toy))
merge(unique(subset(toy, select = Group)), df1, all.x = TRUE)
# Group Class
#1 1 a
#2 2 a
#3 2 b
#4 3 <NA>
#5 4 <NA>
#6 5 a
#7 5 b
#8 6 a
#9 7 a
Same logic using dplyr functions :
library(dplyr)
toy %>%
  na.omit() %>%
  distinct() %>%
  right_join(toy %>% distinct(Group))
If you would like to try a tidyverse approach:
library(tidyverse)
toy %>%
  group_by(Group) %>%
  filter(!(is.na(Class) & sum(!is.na(Class)) > 0)) %>%
  distinct()
Output
# A tibble: 9 x 2
# Groups: Group [7]
Group Class
<dbl> <chr>
1 1 a
2 2 a
3 2 b
4 3 NA
5 4 NA
6 5 a
7 5 b
8 6 a
9 7 a

Product of several columns on a data frame by a vector using dplyr

I would like to multiply several columns of a dataframe by the values of a vector (all values within a column multiplied by the same number, which differs from column to column), while keeping the other columns as they are.
Since I'm using dplyr extensively, I thought it might be useful to use the mutate_each function so I can modify all the columns at the same time, but I am completely lost on the syntax of the funs() part.
On the other hand, I've read this solution, which is simple and works fine, but it applies to all columns rather than a selected subset.
That's what I've done so far:
Imagine that I want to multiply all columns in df except letters by the weight_df vector, as follows:
df = data.frame(
  letters = c("A", "B", "C", "D"),
  col1 = c(3, 3, 2, 3),
  col2 = c(2, 2, 3, 1),
  col3 = c(4, 1, 1, 3)
)
> df
letters col1 col2 col3
1 A 3 2 4
2 B 3 2 1
3 C 2 3 1
4 D 3 1 3
>
weight_df = c(1:3)
If I use select before applying mutate_each, I get rid of the letters column (as expected), and that's not what I want (apart from the fact that the vector is applied per row rather than per column, and I want the opposite):
df = df %>%
  select(-letters) %>%
  mutate_each(funs(. * weight_df))
> df
col1 col2 col3
1 3 2 4
2 6 4 2
3 6 9 3
4 3 1 3
But if I don't select any particular columns, all the values within letters are removed (which makes a lot of sense, by the way), and that's not what I want either (again, the vector is applied per row rather than per column, and I want the opposite):
df = df %>%
  mutate_each(funs(. * weight_df))
> df
letters col1 col2 col3
1 NA 3 2 4
2 NA 6 4 2
3 NA 6 9 3
4 NA 3 1 3
(Please note that this is a very simple dataframe; the original one has many more rows and columns, which unfortunately are not labeled in such an easy way, so no naming patterns can be exploited.)
The problem here is that you are basically trying to operate over rows rather than columns, hence methods such as mutate_* won't work. If you are not satisfied with the many vectorized approaches proposed in the linked question, then using the tidyverse (and assuming that letters is a unique identifier), one way to achieve this is to convert to long form first, multiply a single column by group, and then convert back to wide (don't expect this to be overly efficient, though):
library(tidyr)
library(dplyr)
df %>%
  gather(variable, value, -letters) %>%
  group_by(letters) %>%
  mutate(value = value * weight_df) %>%
  spread(variable, value)
#Source: local data frame [4 x 4]
#Groups: letters [4]
# letters col1 col2 col3
# * <fctr> <dbl> <dbl> <dbl>
# 1 A 3 4 12
# 2 B 3 4 3
# 3 C 2 6 3
# 4 D 3 2 9
Using sweep from base R within a dplyr pipe. This selects the numeric columns only, gives flexibility in choosing columns, and returns the new values along with all the other (non-numeric) columns:
index <- which(sapply(df, is.numeric))
df[, index] <- df[, index] %>% sweep(2, weight_df, FUN = "*")
> df
letters col1 col2 col3
1 A 3 4 12
2 B 3 4 3
3 C 2 6 3
4 D 3 2 9
Try this, using plyr's adply to apply the multiplication row by row (note that select_if drops the non-numeric letters column):
library(plyr)
library(dplyr)
df %>% select_if(is.numeric) %>% adply(., 1, function(x) x * weight_df)
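For comparison, a minimal base-R sketch (not from the original answers, starting from the unmodified df) that multiplies each numeric column by its matching weight while leaving letters untouched; Map pairs the columns of df[num] with the elements of weight_df:
# Multiply column i by weight_df[i]; non-numeric columns are left alone.
num <- sapply(df, is.numeric)
df[num] <- Map(`*`, df[num], weight_df)
df
#   letters col1 col2 col3
# 1       A    3    4   12
# 2       B    3    4    3
# 3       C    2    6    3
# 4       D    3    2    9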

Using mutate and last functions with NAs

Based on the last function in the dplyr package: if you want to take the last element of a vector excluding NAs, you can just introduce na.omit.
library(dplyr)
x <- c(1:10,NA)
last(x)
# [1] NA
last(na.omit(x))
# [1] 10
I would like to impute the last non-NA element of var1 for each id. The following is an example of the dataframe used.
id <- rep(c(1,2,3), c(3,2,2))
var1 <- c(5,1,4,2,NA,NA,NA)
df <- data.frame(id, var1)
df
# id var1
# 1 1 5
# 2 1 1
# 3 1 4
# 4 2 2
# 5 2 NA
# 6 3 NA
# 7 3 NA
Notice that id=1 contains only numeric values for var1, id=2 contains one numeric value and one NA, while id=3 contains only NAs and no numeric values.
I would like to obtain the following:
df
# id var1
# 1 1 4
# 2 1 4
# 3 1 4
# 4 2 2
# 5 2 2
# 6 3 NA
# 7 3 NA
Here is what I did to achieve what I wanted (applying the mutate by group), but I got an error:
df %>%
  group_by(id) %>%
  mutate(var1 = ifelse(length(na.omit(var1)) == 0, NA, last(na.omit(var1))))
# Error: Unsupported vector type language
EDIT1: Based on the comments, the above code works well in dplyr 0.4.3 but apparently not in dplyr 0.5.0 (my case). Additionally, I want to impute using the last element, not the element with the maximum value. I have therefore changed my data frame to make it more general.
EDIT2: I have considered a data frame that lists all possible cases. Three cases: (1) all numeric, (2) numeric + NAs, and (3) all NAs.
I was asked to explain my solution, but I actually don't fully understand why the OP's solution doesn't work. Initially I thought it was something to do with the class of the object returned by na.omit:
> na.omit(var1)
[1] 1 2 3 4
attr(,"na.action")
[1] 5
attr(,"class")
[1] "omit"
But then I noticed that nth (and I think last is just a wrapper for it) works fine:
df %>%
  group_by(id) %>%
  mutate(var1 = nth(na.omit(var1), -1L))
An alternative is to use tail rather than last:
df %>%
  group_by(id) %>%
  mutate(var1 = tail(na.omit(var1), 1))
Or to create a new function, as I initially did:
aa <- function(x) last(na.omit(x))
df %>%
  group_by(id) %>%
  mutate(var1 = aa(var1))
I was just curious about any differences in performance, so I checked them out; I would say they are equivalent.
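The benchmark call itself isn't shown in the answer; a hedged sketch of how such timings could be produced with the microbenchmark package (on the grouped data frame, with aa as defined above) is:
library(microbenchmark)
gdf <- group_by(df, id)  # time only the mutate step, on pre-grouped data
microbenchmark(
  mutate(gdf, var1 = nth(na.omit(var1), -1L)),
  mutate(gdf, var1 = tail(na.omit(var1), 1)),
  mutate(gdf, var1 = aa(var1)),
  times = 1000
)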
Unit: microseconds
                                    expr     min       lq     mean   median       uq        max neval
 mutate(var1 = nth(na.omit(var1), -1L)) 795.270 830.4880 1022.196 897.6375 1026.795   4437.483  1000
     mutate(var1 = tail(na.omit(var1))) 791.035 825.6165 1011.288 892.6270 1037.463   3406.842  1000
                mutate(var1 = aa(var1)) 788.085 825.5180 1108.872 888.9945 1036.664 102915.926  1000
Using the dplyr package, we can group by each id, take the max value of var1 within each id, and use it to replace var1:
library(dplyr)
df <- df %>%
  group_by(id) %>%
  mutate(var1 = max(var1, na.rm = TRUE))
df
id var1
<dbl> <int>
1 1 3
2 1 3
3 1 3
4 2 4
5 2 4
I had a similar issue. This worked for me:
df %>%
  group_by(id) %>%
  mutate(missing = is.na(var1)) %>%
  mutate(var1 = ifelse(any(!missing), var1[!missing][length(var1[!missing])], NA))

How to repeat empty rows so that each split has the same number

My goal is to get the same number of rows for each split (based on the column Initials). I am basically trying to pad the number of rows so that each person has the same number, while retaining the Initials column so I can tell them apart. My attempt failed completely. Does anybody have suggestions?
df <- data.frame(Initials = c("a", "a", "b"), data = c(2, 3, 4))
attach(df)
maxrows <- max(table(Initials)) + 1
arr <- split(df, Initials)
lapply(arr, function(x){
  toadd <- maxrows - dim(x)[1]
  replicate(toadd, x <- rbind(x, rep(NA, 1)))  # colnames -1 because col 1 should be the same Initial
})
Goal:
a 2
a 3
b 4
b NA
Using data.table...
my_rows <- seq.int(max(tabulate(df$Initials)))
library(data.table)
setDT(df)[ , .SD[my_rows], by=Initials]
# Initials data
# 1: a 2
# 2: a 3
# 3: b 4
# 4: b NA
.SD is the Subset of Data associated with each by= group. We can subset its rows like .SD[row_numbers], unlike a data.frame, which requires an additional comma: DF[row_numbers, ].
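A minimal illustration of that difference (a throwaway dt/DF pair, not from the original answer):
library(data.table)
dt <- data.table(g = c("a", "a", "b"), x = 1:3)
dt[, .SD[1:2], by = g]  # no comma: .SD[1:2] takes the first two rows per group
DF <- as.data.frame(dt)
DF[1:2, ]               # a data.frame needs the trailing comma for row subsetting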
The analogue in dplyr is
my_rows <- seq.int(max(tabulate(df$Initials)))
library(dplyr)
setDT(df) %>% group_by(Initials) %>% slice(my_rows)
# Initials data
# (fctr) (dbl)
# 1 a 2
# 2 a 3
# 3 b 4
# 4 b NA
Strangely, this only works if df is a data.table. I've filed a report/query with dplyr. There's a good chance that the dplyr devs will prevent this usage in a future version.
Here's a dplyr/tidyr method. We group_by Initials, add row numbers, ungroup, complete the row-number/Initials combinations, then remove our row numbers:
library(dplyr)
library(tidyr)
df %>%
  group_by(Initials) %>%
  mutate(row = row_number()) %>%
  ungroup() %>%
  complete(Initials, row) %>%
  select(-row)
Source: local data frame [4 x 2]
Initials data
(fctr) (dbl)
1 a 2
2 a 3
3 b 4
4 b NA
Interesting problem. Try:
to.add <- max(table(df$Initials)) - table(df$Initials)
rbind(df, c(rep(names(to.add), to.add), rep(NA, ncol(df)-1)))
# Initials data
#1 a 2
#2 a 3
#3 b 4
#4 b <NA>
We calculate the number of extra Initials needed, combine the extras with NA values, then rbind them to the data frame.
max(table(df$Initials)) finds the initial with the most repeats, in this case a with 2. Subtracting each initial's count, table(df$Initials), from that maximum gives a vector of the necessary additions. There's an added bonus to this method: by using table we automatically get a named vector.
We use the names of the new vector to know (1) which initials to repeat, and (2) how many times they should be repeated.
To preserve the class of the data column (rbind with a character vector coerces it to character), convert it back afterwards with newdf$data <- as.numeric(newdf$data).
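A short sketch of that fix, assuming the combined result is assigned to a (hypothetical) newdf:
# rbind()ing a character vector coerces the data column to character,
# so convert it back to numeric afterwards.
newdf <- rbind(df, c(rep(names(to.add), to.add), rep(NA, ncol(df) - 1)))
newdf$data <- as.numeric(newdf$data)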
