Extract unique rows with a condition in R

I have this kind of data:
x <- matrix(c(2,2,3,3,3,4,4,20,33,2,3,45,6,9,45,454,7,4,6,7,5), nrow = 7, ncol = 3)
In the real dataset, I have a huge matrix with a lot of columns.
I want to extract unique rows with respect to the first column (Id) and the minimum of the third column. For instance, for this matrix I would expect
y <- matrix(c(2,3,4,20,3,9,45,4,5), nrow = 3, ncol = 3)
I tried a lot of things but I couldn't figure it out.
Any help is appreciated.
Thanks in advance,
Zeray

Here's a version that is more complicated, but somewhat faster than Chase's ddply solution - some 200x faster :-)
uniqueMin <- function(m, idCol = 1L, minCol = ncol(m)) {
  t(vapply(split(1:nrow(m), m[, idCol]),
           function(i, x, minCol) x[i, , drop = FALSE][which.min(x[i, minCol]), ],
           m[1, ], x = m, minCol = minCol))
}
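As a quick check (my addition, not part of the original answer), applying it to the x from the question reproduces the expected y, with row names taken from the Id column:
uniqueMin(x)
#   [,1] [,2] [,3]
# 2    2   20   45
# 3    3    3    4
# 4    4    9    5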
And the following test code:
library(plyr)  # needed for the ddply comparison
nRows <- 10000
nCols <- 100
ids <- nRows/5
m <- cbind(sample(ids, nRows, TRUE), matrix(runif(nRows*nCols), nRows))
system.time(a <- uniqueMin(m, minCol = 3L))                                   # 0.07
system.time(ddply(as.data.frame(m), "V1", function(x) x[which.min(x$V3), ])) # 15.72

You can use package plyr. Convert to a data.frame so you can group on the first column, then use which.min to extract the min row by group:
library(plyr)
ddply(as.data.frame(x), "V1", function(x) x[which.min(x$V3), ])
  V1 V2 V3
1  2 20 45
2  3  3  4
3  4  9  5
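Not part of the original answers, but the same group-and-which.min idea translates directly to data.table, which should also scale well to a wide matrix; a sketch, assuming x is the matrix from the question:
library(data.table)
dt <- as.data.table(x)  # columns are named V1, V2, V3
dt[, .SD[which.min(V3)], by = V1]
#    V1 V2 V3
# 1:  2 20 45
# 2:  3  3  4
# 3:  4  9  5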

Related

Most common row in binary matrix

I have very large binary matrices with duplicated rows:
# create matrix
M = matrix(0, 10000, 50)
# randomly set up to two elements per row to one
for (ii in 1:nrow(M)) {
  M[ii, sample(1:ncol(M), 1)] <- 1
  M[ii, sample(1:ncol(M), 1)] <- 1
}
I'm trying to find the index of the most frequently occurring row. The general problem of finding the most frequently occurring row is discussed, for instance, here and here. Another viable solution is to use paste0:
# string vector for each row
row_strings = apply(M, 1, paste0, collapse="")
# tabulate the strings
count_df = data.frame(table(row_strings))
# get indices of most frequently occurring string
which(row_strings == count_df$row_strings[which.max(count_df$Freq)])
However, these solutions are either slow or (relatively) complicated. I was wondering whether there is a more convenient, fast solution to this problem for the special case of binary matrices?
Coerce each row to a string with toString and tabulate the strings:
ts <- apply(M, 1, toString)
pat <- names(sort(table(ts), decreasing=T)[1])
which(ts == pat)
# [1] 101 108 460 839 852
Data:
m <- 1e3
n <- 10
set.seed(42)
M <- matrix(sample(0:1, m*n, prob=c(1-.5, .5), replace=T), m, n)
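Another angle, not from the original answers: for a binary matrix with at most 53 columns (so each code fits exactly in a double), you can encode each row as a single number and tabulate those instead of strings. A sketch, using the M defined above:
# encode row i as sum over j of M[i,j] * 2^(j-1); exact for up to 53 columns
codes <- as.vector(M %*% 2^(seq_len(ncol(M)) - 1))
tab <- table(codes)
# indices of the most frequent code
which(codes == as.numeric(names(tab)[which.max(tab)]))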

Summarize matrix of boolean variables

I have a bunch of Boolean variables. I want to summarize them and show the percentage of positive values. The key point in this question is that the variables are logically organized in two dimensions.
The result I want should look like this (kind of):
a b
v1_1 30% 60%
v1_2 60% 50%
Here is a minimal working (self running) example.
#!/usr/bin/env Rscript
set.seed(0)
df <- data.frame(v1_1_a = sample(c(T,F), 10, replace = TRUE),
                 v1_1_b = sample(c(T,F), 10, replace = TRUE),
                 v1_2_a = sample(c(T,F), 10, replace = TRUE),
                 v1_2_b = sample(c(T,F), 10, replace = TRUE))
my_percent <- function(col) { return(100 / length(col) * sum(col)) }
p <- apply(df, 2, my_percent)
print(p)
This is the output:
v1_1_a v1_1_b v1_2_a v1_2_b
30 60 60 50
Just for information: the real data has 80 Boolean variables, logically organized in a 10 x 8 matrix.
If you don't mind having to add in the row and column names, you could use colMeans together with the matrix construction function to build a matrix with the desired structure.
myMat <- matrix(colMeans(df), 2, byrow = TRUE)
myMat
[,1] [,2]
[1,] 0.3 0.6
[2,] 0.6 0.5
If desired, you could add the names using dimnames. In this instance,
dimnames(myMat) <- list(paste0("v1_", 1:2), letters[1:2])
will do the trick.
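Multiplying by 100 then matches the percentage form shown in the question:
myMat * 100
#       a  b
# v1_1 30 60
# v1_2 60 50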
You could break up the metric names into separate columns.
With dplyr and tidyr:
library(dplyr)
library(tidyr)
p <- data.frame(p)
p$metric <- row.names(p)
p <- p %>% mutate(metric_1 = ifelse(grepl('v1_1_', metric), "v1_1", "v1_2"),
                  metric_2 = ifelse(grepl('a', metric), 'a', 'b')) %>%
  select(-metric) %>%
  spread(key = metric_2, value = p)
p
Giving...
metric_1 a b
1 v1_1 30 60
2 v1_2 60 50
You could set the row names to get exactly what you want:
row.names(p) <- p$metric_1
p <- p %>% select(-metric_1)
Resulting in...
a b
v1_1 30 60
v1_2 60 50

R Sum every k columns in matrix

I have a matrix temp1 (dimensions N x 16; generally, N x M).
I would like to sum every k columns in each row into one value.
Here is what I've got so far:
cbind(rowSums(temp1[,c(1:4)]), rowSums(temp1[,c(5:8)]), rowSums(temp1[,c(9:12)]), rowSums(temp1[,c(13:16)]))
There must be a more elegant (and generalized) method to do it.
I have noticed similar question here:
sum specific columns among rows
I couldn't make it work with Ananda's solution; I got the following error:
sapply(split.default(temp1, 0:(length(temp1)-1) %/% 4), rowSums)
Error in FUN(X[[1L]], ...) :
'x' must be an array of at least two dimensions
Please advise.
You can use by:
do.call(cbind, by(t(temp1), (seq(ncol(temp1)) - 1) %/% 4, FUN = colSums))
If the dimensions are equal for the sub-matrices, you could change the dimensions to an array and then do the rowSums:
m1 <- as.matrix(temp1)
n <- 4
dim(m1) <- c(nrow(m1), ncol(m1)/n, n)
res <- matrix(rowSums(apply(m1, 2, I)), ncol=n)
identical(res[,1],rowSums(temp1[,1:4]))
#[1] TRUE
Or if the dimensions are unequal
t(sapply(seq(1, ncol(temp2), by = 4), function(i) {
  indx <- i:(i + 3)
  rowSums(temp2[indx[indx <= ncol(temp2)]])
}))
data
set.seed(24)
temp1 <- as.data.frame(matrix(sample(1:20, 16*4, replace=TRUE), ncol=16))
set.seed(35)
temp2 <- as.data.frame(matrix(sample(1:20, 17*4, replace=TRUE), ncol=17))
Another possibility:
x1 <- sapply(1:(ncol(temp1)/4), function(x) rowSums(temp1[, 1:4 + (x-1)*4]))
## check
x0<-cbind(rowSums(temp1[,c(1:4)]), rowSums(temp1[,c(5:8)]), rowSums(temp1[,c(9:12)]), rowSums(temp1[,c(13:16)]))
identical(x1,x0)
# TRUE
Here's another approach. Convert the matrix to an array and then use apply with sum.
n <- 4
apply(array(temp1, dim=c(dim(temp1)/c(1,n), n)), MARGIN=c(1,3), FUN=sum)
Using @akrun's data
set.seed(24)
temp1 <- matrix(sample(1:20, 16*4, replace=TRUE), ncol=16)
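Not part of the original answer, but base R's rowsum offers another compact sketch of the same grouping, summing rows of the transposed matrix by group (it also copes with a ragged final group):
t(rowsum(t(temp1), group = (seq_len(ncol(temp1)) - 1) %/% 4))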
A function which sums matrix columns, with each group comprising n columns:
set.seed(1618)
mat <- matrix(rnorm(24 * 16), 24, 16)
f <- function(mat, n = 4) {
  if (ncol(mat) %% n != 0)
    stop("ncol(mat) must be a multiple of n")
  cols <- split(colSums(mat), rep(1:(ncol(mat) / n), each = n))
  ## or use this to have n mean the number of groups you want
  # cols <- split(colSums(mat), rep(1:n, each = ncol(mat) / n))
  sapply(cols, sum)
}
f(mat, 4)
# 1 2 3 4
# -17.287137 -1.732936 -5.762159 -4.371258
c(sum(mat[,1:4]), sum(mat[,5:8]), sum(mat[,9:12]), sum(mat[,13:16]))
# [1] -17.287137 -1.732936 -5.762159 -4.371258
More examples:
## first 8 and last 8 cols
f(mat, 8)
# 1 2
# -19.02007 -10.13342
## each group is 16 cols, ie, the entire matrix
f(mat, 16)
# 1
# -29.15349
sum(mat)
# [1] -29.15349

How to pairwise compare values referring to distinct elements in two matrices of different formats?

I've got a set of objects, let's say with the IDs 'A' to 'J'. And I've got two data frames which look like this (as you can see, the second data frame is symmetric):
df1 <- data.frame(ID = LETTERS[1:5], Var = c(9,13,15,11,28))
df2 <- as.data.frame(matrix(data = c(NA,42,83,74,84,42,NA,26,69,9,83,26,NA,67,95,
                                     74,69,67,NA,6,84,9,95,6,NA),
                            ncol = 5, nrow = 5, dimnames = list(df1$ID, df1$ID)))
For example, take the objects 'B' and 'E'. I want to know: Is 13+28 (from df1) less than 9 (from df2)? I'd like to know this for all pairs of objects. The output should be
(a) a logical data frame structured like df2 and
(b) the number of "TRUE" values.
Most of the time I will only need result (b), but sometimes I would also need (a). So if (b) can be calculated without (a) and if this would be significantly faster, then I'd like to have both algorithms in order to select the suitable one dependent on which output I need to answer a particular question.
I'm comparing around 2000 objects, so the algorithm should be reasonably fast. So far I've been only able to implement this with two nested for-loops which is awfully slow. I bet there is a much nicer way to do this, maybe exploiting vectorisation.
This is what it currently looks like:
df3 <- as.data.frame(matrix(data = NA, ncol = nrow(df1), nrow = nrow(df1),
                            dimnames = list(df1$ID, df1$ID)))
for (i in 2:nrow(df3)) {
  for (j in 1:(i-1)) {
    sum.val <- df1[df1$ID == rownames(df3)[i], "Var"] + df1[df1$ID == names(df3)[j], "Var"]
    df3[i, j] <- sum.val <= df2[i, j]
  }
}
Is this what you want?
df3 <- outer(df1$Var, df1$Var, "+")
df3
df4 <- df3 < df2
df4
sum(df4, na.rm = TRUE)
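Note that sum(df4, na.rm = TRUE) counts each qualifying pair twice because the matrix is symmetric (it returns 14 here, if I've computed correctly); divide by 2 for the 7 unordered pairs found in the other answers.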
Here's one way to do it...
# Get row and column indices
ind <- t( combn( df1$ID , 2 ) )
# Get totals
tot <- with( df1 , Var[ match( ind[,1] , ID ) ] + Var[ match( ind[,2] , ID ) ] )
# Make df2 a matrix
m <- as.matrix( df2 )
# Total number of values is simply
sum( m[ ind ] > tot )
#[1] 7
# Find which pairs' df2 values exceed the corresponding df1 sums (1 = TRUE);
# 'ind' is a two-column matrix of (row, column) pairs, so it can index m
# directly (assigning via upper.tri() would scramble the order of the pairs)
m[ ind ] <- m[ ind ] > tot
m
#    A  B  C  D  E
# A NA  1  1  1  1
# B 42 NA  0  1  0
# C 83 26 NA  1  1
# D 74 69 67 NA  0
# E 84  9 95  6 NA
This will do what you want.
# Generate the data
df1 <- data.frame(ID = LETTERS[1:5], Var = c(9,13,15,11,28))
df2 <- as.data.frame(matrix(data = c(NA,42,83,74,84,42,NA,26,
69,9,83,26,NA,67,95,74,69,
67,NA,6,84,9,95,6,NA),
ncol = 5, nrow = 5,
dimnames = list(df1$ID, df1$ID)))
# Define a pairwise comparison index matrix using 'combn'
idx <- combn(nrow(df1), 2)
# Create a results matrix
res <- matrix(NA, ncol = ncol(df2), nrow = nrow(df2))
# Loop through 'idx' for each possible comparison (without repeats)
for (i in 1:ncol(idx)) {
  logiTest <- (df1$Var[idx[1,i]] + df1$Var[idx[2,i]]) < df2[idx[1,i], idx[2,i]]
  res[idx[1,i], idx[2,i]] <- logiTest
  res[idx[2,i], idx[1,i]] <- logiTest
}
# Count the number of 'true' comparisons
nTrues <- sum(res, na.rm = TRUE)/2
The code simply uses a pairwise comparison index (idx) to define which elements in both df1 and df2 are to be used in each iteration of the 'for loop'. It then uses this same index to define where in the 'res' matrix the answer to the logical test is to be written.
N.B. This code will break down if the order of elements in df1 and df2 is not the same. In such cases, it would be appropriate to use the actual letters to define which values to compare.

Implementation of skyline query or efficient frontier

I know there must be an easy answer to this but somehow I can't seem to find it...
I have a data frame with 2 numeric columns.
I would like to remove from it those rows for which there exists at least one other row in the data frame with both column values bigger than the ones in this row.
So if I have
Col1 Col2
1 2 3
2 4 7
3 5 6
I would like to remove the first row, because the second one fulfills the property and keep only rows 2 and 3.
Thanks a lot!
That problem is called a "skyline query" by database administrators (they may have other algorithms) and an "efficient frontier" by economists.
Plotting the data can make it clear what we are looking for.
n <- 40
d <- data.frame(
  x = rnorm(n),
  y = rnorm(n)
)
# We want the "extreme" points in the following plot
par(mar=c(1,1,1,1))
plot(d, axes=FALSE, xlab="", ylab="")
for (i in 1:n) {
  polygon(c(-10, d$x[i], d$x[i], -10), c(-10, -10, d$y[i], d$y[i]),
          col = rgb(.9, .9, .9, .2))
}
The algorithm is as follows: sort the points along the first coordinate, then keep each observation unless its second coordinate is worse than that of the last retained one.
d <- d[order(d$x, decreasing = TRUE), ]
result <- d[1, ]
for (i in seq_len(nrow(d))[-1]) {
  if (d$y[i] > result$y[nrow(result)]) {
    result <- rbind(result, d[i, ])  # inefficient
  }
}
points(result, cex=3, pch=15)
Edit (2015-03-02): For a more efficient solution, please see Patrick Roocks' rPref, a package for "Database Preferences and Skyline Computation", (also linked to in his answer below). To show that it finds the same solution as my code here, I've appended an example using it to my original answer here.
Riffing off of Vincent Zoonekynd's enlightening response, here's an algorithm that's fully vectorized, and likely more efficient:
set.seed(100)
d <- data.frame(x = rnorm(100), y = rnorm(100))
D <- d[order(d$x, d$y, decreasing=TRUE), ]
res <- D[which(!duplicated(cummax(D$y))), ]
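# cummax(D$y) is the running maximum of y after sorting by decreasing x; a
# row lies on the frontier exactly when it raises that maximum, i.e. when
# its cummax value is not a duplicate of the previous one.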
# x y
# 64 2.5819589 0.7946803
# 20 2.3102968 1.6151907
# 95 -0.5302965 1.8952759
# 80 -2.0744048 2.1686003
# And then, if you would prefer the rows to be in
# their original order, just do:
d[sort(as.numeric(rownames(res))), ]
# x y
# 20 2.3102968 1.6151907
# 64 2.5819589 0.7946803
# 80 -2.0744048 2.1686003
# 95 -0.5302965 1.8952759
Or, using the rPref package:
library(rPref)
psel(d, high(x) | high(y))
# x y
# 20 2.3102968 1.6151907
# 64 2.5819589 0.7946803
# 80 -2.0744048 2.1686003
# 95 -0.5302965 1.8952759
Here is an sqldf solution, where DF is the data frame holding the data:
library(sqldf)
sqldf("select * from DF a
where not exists (
select * from DF b
where b.Col1 >= a.Col1 and b.Col2 > a.Col2
or b.Col1 > a.Col1 and b.Col2 >= a.Col2
)"
)
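Note how the two disjuncts inside NOT EXISTS handle ties: a row is kept unless some other row is at least as good in both columns and strictly better in at least one.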
This question is pretty old, but meanwhile there is a new solution. I hope it is OK to do some self-promotion here: I developed a package, rPref, which does efficient Skyline computation via C++ algorithms. With the rPref package installed, the query from the question can be done via (assuming that df is the name of the data set):
library(rPref)
psel(df, high(Col1) | high(Col2))
This removes only those tuples, where some other tuple is better in both dimensions.
If one requires the other tuple to be strictly better in just one dimension (and better or equal in the other dimension), use high(Col1) * high(Col2) instead.
In one line:
d <- matrix(c(2, 3, 4, 7, 5, 6), nrow=3, byrow=TRUE)
d[!apply(d,1,max)<max(apply(d,1,min)),]
[,1] [,2]
[1,] 4 7
[2,] 5 6
Edit: In light of your clarification in the comments on jbaums' answer, here's how to check both columns separately.
d <- matrix(c(2, 3, 3, 7, 5, 6, 4, 8), nrow=4, byrow=TRUE)
d[apply(d,1,min)>min(apply(d,1,max)) ,]
[,1] [,2]
[1,] 5 6
[2,] 4 8
d <- matrix(c(2, 3, 4, 7, 5, 6), nrow=3, byrow=TRUE)
d2 <- sapply(d[, 1], function(x) x < d[, 1]) &
sapply(d[, 2], function(x) x < d[, 2])
d2 <- apply(d2, 2, any)
result <- d[!d2, ]
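Which, for the example data, keeps the two undominated rows:
result
#      [,1] [,2]
# [1,]    4    7
# [2,]    5    6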
