I want to merge the elements of several atomic vectors by the element names stored in a list. See this example:
ls = list(a = c(a = 1, b = 2, d = 2), b = c(b = 2, c = 3), c = c(a = 1, b = 2))
Now, I want to get output like this:
a b c
a 1 NA 1
b 2 2 2
c NA 3 NA
d 2 NA NA
I tried Reduce, but could not get it to work. I do not want to use any external package for this problem.
Thanks
You can use [ in sapply after you have extracted all the element names.
i <- sort(unique(unlist(lapply(ls, names))))  # all element names across the list
x <- sapply(ls, "[", i)                       # index each vector by those names
rownames(x) <- i
x
# a b c
#a 1 NA 1
#b 2 2 2
#c NA 3 NA
#d 2 NA NA
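Since the question mentions Reduce, here is a base-R sketch of how a Reduce/merge approach could look (the stacked data frames are my own scaffolding, not from the question):
# Sketch: turn each vector into a two-column data frame, then merge by name.
dfs <- Map(function(v, nm) setNames(data.frame(names(v), v), c("name", nm)),
           ls, names(ls))
merged <- Reduce(function(a, b) merge(a, b, by = "name", all = TRUE), dfs)
x2 <- as.matrix(merged[-1])
rownames(x2) <- merged$name
x2
#    a  b  c
# a  1 NA  1
# b  2  2  2
# c NA  3 NA
# d  2 NA NA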
We could also use bind_rows here
library(dplyr)
library(tibble)
bind_rows(ls, .id = 'x') %>%
  column_to_rownames('x') %>%
  t
a b c
a 1 NA 1
b 2 2 2
d 2 NA NA
c NA 3 NA
Or using base R
d <- do.call(rbind, Map(cbind, x = names(ls), lapply(ls, stack)))
xtabs(values ~ ind + x, d)
x
ind a b c
a 1 0 1
b 2 2 2
d 2 0 0
c 0 3 0
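Note that xtabs fills the empty cells with 0 rather than NA. If NA is preferred, a small follow-up sketch (my own addition, reusing d from above) masks the cells whose observation count is zero:
res <- xtabs(values ~ ind + x, d)
res[xtabs(~ ind + x, d) == 0] <- NA  # a zero count means the cell was truly empty
res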
A data.table option using rbindlist:
library(data.table)
t(rbindlist(Map(function(x) data.table(t(x)), ls), fill = TRUE))
[,1] [,2] [,3]
a 1 NA 1
b 2 2 2
d 2 NA NA
c NA 3 NA
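One thing to note: t() drops the list names here, which is why the columns print as [,1] [,2] [,3]. A small addition (not part of the original answer) restores them:
res <- t(rbindlist(Map(function(x) data.table(t(x)), ls), fill = TRUE))
colnames(res) <- names(ls)  # restore the names dropped by t()
res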
Related
I would like to aggregate a vector/matrix y by two variables a and b via the fsum function of the collapse package. fsum does not return values for empty intersections. Is there a way to keep empty intersections using the collapse package? I know that I could work through cross-joins and data.table, but as my function input is a vector and speed really matters, I would like to avoid converting the input matrix to a data.table and then converting the output back to a matrix/vector (for a data.table solution, see e.g. data.table calculate sums by two variables and add observations for "empty" groups).
Here is an example:
library(collapse)
set.seed(1)
a <- sample(1:5, 10, replace = TRUE)
b <- sample(1:3, 10, replace = TRUE)
y <- matrix(rnorm(10), 10, 1)
fsum(x = y, g = data.frame(a = a, b = b))
# [,1]
#1.1 -0.40955189
#1.2 -0.05710677
#2.2 0.50360797
#2.3 -1.28459935
#3.1 0.04672617
#3.2 -0.69095384
#3.3 -0.23570656
#4.1 0.80418951
#5.2 1.08576936
What I would like to get: the regular output above, but keeping the empty intersections of (a, b), e.g. (a = 1, b = 3), and assigning a missing value or zero:
# a b y
#1: 1 1 -0.7702614
#2: 1 2 -0.2992151
#3: 1 3 NA
#4: 2 1 NA
#5: 2 2 -0.4115108
#6: 2 3 0.4356833
#.................
As an aside: base::aggregate() has an argument drop = FALSE that achieves this:
aggregate(y, data.frame(a, b), sum, drop = FALSE)
#   a b V1
#1 1 1 -0.7702614
#2 2 1 NA
#3 3 1 -1.2375384
#4 4 1 -0.2894616
#5 5 1 NA
#6 1 2 -0.2992151
#7 2 2 -0.4115108
#8 3 2 -0.8919211
#9 4 2 NA
#10 5 2 0.2522234
#11 1 3 NA
#12 2 3 0.4356833
#13 3 3 -0.2242679
#14 4 3 NA
#15 5 3 NA
Nevertheless, in my experience both data.table and collapse are significantly faster, but collapse has the advantage that it also works with matrix objects (which do not need to be converted to data.tables).
Is there a way to achieve this via collapse?
Yes, you can do that with fsum (though other functions like fmedian will warn about it). You need to create factors and interact them using :, like so:
library(collapse)
set.seed(1)
a <- sample(1:5, 10, replace = TRUE)
b <- sample(1:3, 10, replace = TRUE)
y <- matrix(rnorm(10), 10, 1)
fsum(x = y, g = qF(a):qF(b))
# [,1]
# 1:1 -0.7702614
# 1:2 -0.2992151
# 1:3 NA
# 2:1 NA
# 2:2 -0.4115108
# 2:3 0.4356833
# 3:1 -1.2375384
# 3:2 -0.8919211
# 3:3 -0.2242679
# 4:1 -0.2894616
# 4:2 NA
# 4:3 NA
# 5:1 NA
# 5:2 0.2522234
# 5:3 NA
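If the three-column layout from the desired output is needed rather than the labeled matrix, a sketch (my own addition, assuming the default "level:level" labels of the factor interaction) splits the row names back apart:
res <- fsum(x = y, g = qF(a):qF(b))
ab <- do.call(rbind, strsplit(rownames(res), ":", fixed = TRUE))
data.frame(a = as.integer(ab[, 1]), b = as.integer(ab[, 2]), y = res[, 1])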
For the earlier example you gave, I'd also like to note that the expensive call to data.frame is not necessary; fsum(x = y, g = list(a = a, b = b)) is sufficient.
This question is a follow-up to my previous question. There, after the split.default() call below, I get a named list of data.frames called L.
Q: How could I condense each data.frame in L whose columns all consist of a constant number? (What if I already know the names of the data.frames whose columns are constant?)
My desired output is shown further below.
r <- list(
  data.frame(study.name = rep("Jacob", 6),
             X = c(2, 2, 1, 1, NA, NA),
             Y = c(1, 1, 1, 2, 1, NA),
             A = rep(1, 6),
             B = rep(4, 6)),
  data.frame(study.name = rep("Jon", 6),
             X = c(1, NA, 3, 1, NA, NA),
             G = c(1, 1, 1, 2, NA, NA),
             A = rep(3, 6),
             B = rep(7, 6)))
DATA <- do.call(cbind, r)
nm1 <- Reduce(intersect, lapply(r, colnames))[-1]
L <- split.default(DATA[names(DATA) %in% nm1], names(DATA)[names(DATA) %in% nm1])
Desired output:
# $A
# A A.1
# 1 1 3
# $B
# B B.1
# 1 4 7
# $X
# X X.1
# 1 2 1
# 2 2 NA
# 3 1 3
# 4 1 1
# 5 NA NA
# 6 NA NA
Assuming that the all-NA rows should be preserved, loop over the list and apply duplicated, also keeping a row when all of its elements are NA:
lapply(L, function(x) x[(rowSums(is.na(x)) == ncol(x)) | !duplicated(x), ])
#$A
# A A.1
#1 1 3
#$B
# B B.1
#1 4 7
#$X
# X X.1
#1 2 1
#2 2 NA
#3 1 3
#4 1 1
#5 NA NA
#6 NA NA
If we also need a check for constant values:
is_constant <- function(x) length(unique(x)) == 1L
lapply(L, function(x) if(all(sapply(x, is_constant))) x[1,, drop = FALSE] else x)
#$A
# A A.1
#1 1 3
#$B
# B B.1
#1 4 7
#$X
# X X.1
#1 2 1
#2 2 NA
#3 1 3
#4 1 1
#5 NA NA
#6 NA NA
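For the parenthetical question, if the names of the constant-column data.frames are known in advance, a minimal sketch (assuming those names are "A" and "B", as in the desired output):
const_nms <- c("A", "B")  # assumption: known ahead of time
L2 <- L
L2[const_nms] <- lapply(L2[const_nms], function(x) x[1, , drop = FALSE])
L2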
I'd like to remove the NA values from my columns and merge all columns into four columns, keeping NAs where a row has fewer than four values.
Say I have data like this,
df <- data.frame('a' = c(1, 4, NA, 3),
                 'b' = c(3, NA, 3, NA),
                 'c' = c(NA, 2, NA, NA),
                 'd' = c(4, 2, NA, NA),
                 'e' = c(NA, 5, 3, NA),
                 'f' = c(1, NA, NA, 4),
                 'g' = c(NA, NA, NA, 4))
#> a b c d e f g
#> 1 1 3 NA 4 NA 1 NA
#> 2 4 NA 2 2 5 NA NA
#> 3 NA 3 NA NA 3 NA NA
#> 4 3 NA NA NA NA 4 4
My desired outcome would be,
df.desired <- data.frame('a' = c(1, 4, 3, 3),
                         'b' = c(3, 2, 3, 4),
                         'c' = c(4, 2, NA, 4),
                         'd' = c(1, 5, NA, NA))
df.desired
#> a b c d
#> 1 1 3 4 1
#> 2 4 2 2 5
#> 3 3 3 NA NA
#> 4 3 4 4 NA
You could probably have explored a bit more on SO and tweaked two existing answers (1 & 2):
Shift all the numbers, pushing the NAs to the end.
Remove the columns where you've got all NAs.
Result:
df <- data.frame('a' = c(1, 4, NA, 3),
                 'b' = c(3, NA, 3, NA),
                 'c' = c(NA, 2, NA, NA),
                 'd' = c(4, 2, NA, NA),
                 'e' = c(NA, 5, 3, NA),
                 'f' = c(1, NA, NA, 4),
                 'g' = c(NA, NA, NA, 4))
df.new <- do.call(rbind, lapply(1:nrow(df), function(x)
  t(matrix(df[x, order(is.na(df[x, ]))]))))     # non-NA values first in each row
colnames(df.new) <- colnames(df)
df.new
df.new[, colSums(is.na(df.new)) < nrow(df.new)] # drop the all-NA columns
Output:
> df.new[,colSums(is.na(df.new))<nrow(df.new)]
a b c d
[1,] 1 3 4 1
[2,] 4 2 2 5
[3,] 3 3 NA NA
[4,] 3 4 4 NA
I believe there are more efficient ways; anyhow, here is my try:
x00 <- sapply(1:nrow(df), function(x) df[x, ][!is.na(df[x, ])])  # non-NA values per row
x01 <- lapply(x00, function(x) c(x, rep(NA, 7 - length(x) - 1))) # pad each row to length 6
x02 <- as.data.frame(do.call("rbind", x01))
x02 <- x02[, colSums(is.na(x02)) < nrow(x02)]                    # drop all-NA columns
I have the following solution:
df <- data.frame('a' = c(1, 4, NA, 3),
                 'b' = c(3, NA, 3, NA),
                 'c' = c(NA, 2, NA, NA),
                 'd' = c(4, 2, NA, NA),
                 'e' = c(NA, 5, 3, NA),
                 'f' = c(1, NA, NA, 4),
                 'g' = c(NA, NA, NA, 4))
df
x <- list()
for (i in 1:nrow(df)) {
  vals <- unname(unlist(df[i, ]))
  vals <- vals[!is.na(vals)]                    # non-NA values, original order
  x[[i]] <- c(vals, rep(NA, 4 - length(vals)))  # pad with NA to four columns
}
result <- do.call(rbind, x)
result
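For comparison, a more compact base-R sketch of the same shift-left idea (my own addition, not from the answers above): move each row's non-NA values to the front with apply(), pad with NA, and keep the first four columns.
shifted <- t(apply(df, 1, function(r) {
  v <- r[!is.na(r)]                     # non-NA values, original order
  c(v, rep(NA, ncol(df) - length(v)))   # pad the row back to full width
}))
df.desired2 <- setNames(as.data.frame(shifted[, 1:4]), letters[1:4])
df.desired2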
Assume I have a matrix like the one below, where the values above and below the diagonal are the same; in other words, element [1, 2] and element [2, 1] are both 2.
> m = cbind(c(1,2,3),c(2,4,5),c(3,5,6))
> m
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 2 4 5
[3,] 3 5 6
Then I have real names for 1, 2, and 3 as well:
> Real_name
[1] "A" "B" "C"   # A represents 1, B represents 2, and C represents 3.
I would like to convert the matrix into three columns containing the corresponding real names for each pair, where each pair must be unique: A x B is the same as B x A, so we keep A x B only. How can I achieve this in R?
A A 1
A B 2
A C 3
B B 4
B C 5
C C 6
The following is straightforward:
m <- cbind(c(1,2,3), c(2,4,5), c(3,5,6))
## read `?lower.tri` and try `v <- lower.tri(m, diag = TRUE)` to see what `v` is
## read `?which` and try `which(v, arr.ind = TRUE)` to see what it gives
ij <- which(lower.tri(m, diag = TRUE), arr.ind = TRUE)
Real_name <- LETTERS[1:3]
data.frame(row = Real_name[ij[, 1]], col = Real_name[ij[, 2]], val = c(m[ij]))
# row col val
#1 A A 1
#2 B A 2
#3 C A 3
#4 B B 4
#5 C B 5
#6 C C 6
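If the exact A-A, A-B, A-C, B-B, B-C, C-C ordering from the question is wanted, a small variant of the above (my own addition) indexes the upper triangle and sorts row-major:
ij <- which(upper.tri(m, diag = TRUE), arr.ind = TRUE)
ij <- ij[order(ij[, "row"], ij[, "col"]), ]  # row-major order
data.frame(row = Real_name[ij[, "row"]], col = Real_name[ij[, "col"]], val = m[ij])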
Another option: assign dimnames, blank out the lower triangle, and melt.
colnames(m) <- rownames(m) <- c("A", "B", "C")
m[lower.tri(m)] = NA # replace lower triangular elements with NA
data.table::melt(m, na.rm = TRUE) # melt and remove NA
# Var1 Var2 value
#1 A A 1
#4 A B 2
#5 B B 4
#7 A C 3
#8 B C 5
#9 C C 6
Or you can do it in a single line: melt(replace(m, lower.tri(m), NA), na.rm = TRUE)
This will also work:
g <- expand.grid(1:ncol(m), 1:ncol(m))
g <- g[g[, 2] >= g[, 1], ]   # keep each unordered pair once
cbind.data.frame(sapply(g, function(x) Real_name[x]), Val = m[as.matrix(g)])
Var1 Var2 Val
1 A A 1
2 A B 2
3 B B 4
4 A C 3
5 B C 5
6 C C 6
Consider the following named vector x.
( x <- setNames(c(1, 2, 0, NA, 4, NA, NA, 6), letters[1:8]) )
# a b c d e f g h
# 1 2 0 NA 4 NA NA 6
I'd like to calculate the cumulative sum of x while ignoring the NA values. Many R functions have an argument na.rm which removes NA elements prior to calculations. cumsum() is not one of them, which makes this operation a bit tricky.
I can do it this way.
y <- setNames(numeric(length(x)), names(x))
z <- cumsum(na.omit(x))
y[names(y) %in% names(z)] <- z
y[!names(y) %in% names(z)] <- x[is.na(x)]
y
# a b c d e f g h
# 1 3 3 NA 7 NA NA 13
But this seems excessive, and makes a lot of new assignments/copies. I'm sure there's a better way.
What better methods are there to return the cumulative sum while effectively ignoring NA values?
You can do this in one line; the x*0 term is NA exactly where x is NA, which restores the NAs after the cumulative sum:
cumsum(ifelse(is.na(x), 0, x)) + x*0
# a b c d e f g h
# 1 3 3 NA 7 NA NA 13
Or, similarly:
library(dplyr)
cumsum(coalesce(x, 0)) + x*0
# a b c d e f g h
# 1 3 3 NA 7 NA NA 13
It's an old question, but tidyr gives a new solution, based on the idea of replacing NA with zero.
require(tidyr)
cumsum(replace_na(x, 0))
a b c d e f g h
1 3 3 3 7 7 7 13
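Note that this fills the NA positions with the running total instead of keeping them as NA. To restore the NAs, the x * 0 trick from the first answer combines with replace_na (a small sketch, not part of the original answer):
cumsum(replace_na(x, 0)) + x * 0
# a  b  c  d  e  f  g  h
# 1  3  3 NA  7 NA NA 13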
Do you want something like this:
x2 <- x
x2[!is.na(x)] <- cumsum(x2[!is.na(x)])
x2
[edit] Alternatively, as suggested in a comment above, you can change NAs to 0s:
miss <- is.na(x)
x[miss] <- 0
cs <- cumsum(x)
cs[miss] <- NA
# cs is the requested cumsum
Here's a function I came up with from the answers to this question. Thought I'd share it, since it seems to work well so far. It calculates the cumulative FUNC of x while ignoring NA. FUNC can be any one of sum(), prod(), min(), or max(), and x is a numeric vector.
cumSkipNA <- function(x, FUNC) {
d <- deparse(substitute(FUNC))
funs <- c("max", "min", "prod", "sum")
stopifnot(is.vector(x), is.numeric(x), d %in% funs)
FUNC <- match.fun(paste0("cum", d))
x[!is.na(x)] <- FUNC(x[!is.na(x)])
x
}
set.seed(1)
x <- sample(15, 10, TRUE)
x[c(2,7,5)] <- NA
x
# [1] 4 NA 9 14 NA 14 NA 10 10 1
cumSkipNA(x, sum)
# [1] 4 NA 13 27 NA 41 NA 51 61 62
cumSkipNA(x, prod)
# [1] 4 NA 36 504 NA 7056 NA
# [8] 70560 705600 705600
cumSkipNA(x, min)
# [1] 4 NA 4 4 NA 4 NA 4 4 1
cumSkipNA(x, max)
# [1] 4 NA 9 14 NA 14 NA 14 14 14
Definitely nothing new, but maybe useful to someone.
Another option is the collapse package with the fcumsum function, like this:
( x <- setNames(c(1, 2, 0, NA, 4, NA, NA, 6), letters[1:8]) )
#> a b c d e f g h
#> 1 2 0 NA 4 NA NA 6
library(collapse)
fcumsum(x)
#> a b c d e f g h
#> 1 3 3 NA 7 NA NA 13
Created on 2022-08-24 with reprex v2.0.2