I'm trying to paste together the characters that lie along each diagonal of an N x N matrix.
For example, consider the following 3 x 3 matrix:
#Create matrix, convert to character dataframe
matrix <- matrix(data=c('s','t','y','a','e','l','f','n','e'),nrow=3,ncol=3)
matrix <- as.data.frame(matrix)
for(i in 1:length(colnames(matrix))){
  matrix[,i] <- as.character(matrix[,i])
}
In the matrix above I need to paste the diagonals: "see","fey", "ees", and "yef". I can find these in the dataframe with the following code:
diag <- paste(matrix[1,1],matrix[2,2],matrix[3,3],sep='')
diag1 <- paste(matrix[1,3],matrix[2,2],matrix[3,1],sep='')
diag2 <- paste(matrix[3,1],matrix[2,2],matrix[1,3],sep='')
diag3 <- paste(matrix[3,3],matrix[2,2],matrix[1,1],sep='')
The problem is that I want to automate this so that it will work on any N x N matrix (I'm writing a function to find the diagonals in any N x N matrix). Is there an efficient way to do this?
Oh, that's easy if you use a matrix instead of a data.frame :) (i.e. keep the object created by matrix() rather than converting it to a data frame).
We can select matrix elements just as we select vector elements:
matrix[1:3] # First three elements == first column
n <- ncol(matrix)
(1:n-1)*n+1:n # note: 1:n-1 means (1:n) - 1, i.e. 0, 1, ..., n-1
## [1] 1 5 9
(1:n-1)*n+n:1
## [1] 3 5 7
So now we can use this:
matrix[(1:n-1)*n+1:n]
[1] "s" "e" "e"
paste0(matrix[(1:n-1)*n+1:n],collapse="")
[1] "see"
And if you want it backwards, just reverse the vector of indices using the rev function:
paste0(matrix[rev((1:n-1)*n+1:n)],collapse="")
[1] "ees"
Some benchmarks:
rotate <- function(x) t(apply(x, 2, rev))

revMat <- function(mat, dir = 0){
  x <- if(bitwAnd(dir, 1)) rev(seq(nrow(mat))) else seq(nrow(mat))
  y <- if(bitwAnd(dir, 2)) rev(seq(ncol(mat))) else seq(ncol(mat))
  mat[x, y]
}

bartek <- function(matrix){
  n <- ncol(matrix)
  c(paste0(matrix[(1:n-1)*n+1:n], collapse=""), paste0(matrix[rev((1:n-1)*n+1:n)], collapse=""),
    paste0(matrix[(1:n-1)*n+n:1], collapse=""), paste0(matrix[rev((1:n-1)*n+n:1)], collapse=""))
}

Joe <- function(matrix){
  diag0 <- diag(matrix)
  diag1 <- diag(rotate(matrix))
  diag2 <- rev(diag0)
  diag3 <- rev(diag1)
  c(paste(diag0, collapse = ""), paste(diag1, collapse = ""),
    paste(diag2, collapse = ""), paste(diag3, collapse = ""))
}

James <- function(mat){
  sapply(0:3, function(x) paste(diag(revMat(mat, x)), collapse=""))
}
library(microbenchmark)
matrix <- matrix(c('s','t','y','a','e','l','f','n','e'), ncol = 3)
microbenchmark(bartek(matrix), Joe(matrix), James(matrix))
Unit: microseconds
expr min lq mean median uq max neval
bartek(matrix) 50.273 55.2595 60.78952 59.4390 62.438 134.880 100
Joe(matrix) 167.431 176.6170 188.46908 182.8260 192.646 337.717 100
James(matrix) 321.313 334.3350 346.15230 339.7235 348.565 447.115 100
matrix <- matrix(1:10000, ncol=100)
microbenchmark(bartek(matrix), Joe(matrix), James(matrix))
Unit: microseconds
expr min lq mean median uq max neval
bartek(matrix) 314.385 326.752 336.1194 331.936 337.9805 423.323 100
Joe(matrix) 2168.141 2221.477 2460.1002 2257.439 2298.4400 8856.482 100
James(matrix) 1200.572 1250.354 1407.5943 1276.307 1323.8845 7419.931 100
For a matrix, this can be accomplished by taking the diag of the four possible rotations. If you set up a rotate function as follows (credit), this becomes straightforward:
> rotate <- function(x) t(apply(x, 2, rev))
> diag0 <- paste(diag(matrix), collapse = "")
> diag1 <- paste(diag(rotate(matrix)), collapse = "")
> diag2 <- paste(diag(rotate(rotate(matrix))), collapse = "")
> diag3 <- paste(diag(rotate(rotate(rotate(matrix)))), collapse = "")
> diag0
[1] "see"
> diag1
[1] "yef"
> diag2
[1] "ees"
> diag3
[1] "fey"
As pointed out by Frank in the comments, this could become slow for sufficiently large matrices (on my machine, rotate starts to take longer than about a second for matrices larger than 1000 x 1000). You can save some time by using rev prior to pasting, e.g.:
> diag0 <- diag(matrix)
> diag1 <- diag(rotate(matrix))
> diag2 <- rev(diag0)
> diag3 <- rev(diag1)
> paste(diag2, collapse = "")
[1] "ees"
> paste(diag3, collapse = "")
[1] "fey"
One way is to use diag on the matrix, called mat here to avoid clashing with the function name, reversing the row and/or column orders to get each diagonal and direction.
You can do it with a supplementary function that makes the reversals systematic, so you can use sapply to loop over them.
revMat <- function(mat, dir=0)
{
  # bit 1 of dir reverses the row order, bit 2 reverses the column order
  x <- if(bitwAnd(dir,1)) rev(seq(nrow(mat))) else seq(nrow(mat))
  y <- if(bitwAnd(dir,2)) rev(seq(ncol(mat))) else seq(ncol(mat))
  mat[x,y]
}
sapply(0:3,function(x) paste(diag(revMat(mat,x)),collapse=""))
[1] "see" "yef" "fey" "ees"
Convert matrix to an actual matrix m (as opposed to a data frame). Then the four diagonals are:
m <- as.matrix(matrix)
ix <- ncol(m):1
paste(diag(m), collapse = "")
paste(diag(m[ix,]), collapse = "")
paste(diag(m[,ix]), collapse = "")
paste(diag(m[ix, ix]), collapse = "")
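If you prefer a single vector of all four strings, the same indexing can be looped with sapply (a hedged variant of the code above, reusing m and ix):
sapply(list(m, m[ix, ], m[, ix], m[ix, ix]),
       function(z) paste(diag(z), collapse = ""))
## [1] "see" "yef" "fey" "ees"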
I am trying to identify the likely last name from parts of name strings in various formats in R. What is the fastest way to identify the longest string match from a dataset of last names for a given name string (I'm using the wru surnames2010 dataset)?
I need the longest possibility rather than any possibility. For example, in the data below the first string "scottcampbell" contains the possible surnames "scott" and "campbell". I want to return only the longest of the possible matches, in this case "campbell".
Reproduce example data:
library(wru)
data("surnames2010")
#filter out names under 4 characters
lnames <- surnames2010[nchar(as.character(surnames2010$surname))>3,]
testvec <- c("scottcampbell","mattbaker","tsmith","watkins","burnsmary","terri","frankrodriguez","neal")
Desired imagined function+result:
foo_longest_matches(testvec)
#Desired imagined result:
[1] "campbell" "baker" "smith" "watkins" "burns" "terri" "rodriguez" "neal")
You could use adist. Please note that you are doing more than 1 million comparisons to obtain the longest match. I would prefer that you use a different method. The best I have in mind so far is:
a <- adist(toupper(testvec), surnames2010$surname, counts = TRUE)
b <- attr(a, "trafos") # edit sequences: M = match, I = insert, D = delete, S = substitute
# Penalise matches that need substitutions/insertions or many separate runs of
# edits, then add the raw distance; a surname that appears as one contiguous
# block in the test string needs only deletions and so gets the lowest score.
d <- array(grepl("S|I", b) + nchar(gsub("(.)\\1++", "1", b, perl = TRUE)), dim(a)) * 10 + a
as.character(surnames2010$surname[max.col(-d)]) # per test string, the surname with the lowest score
[1] "CAMPBELL" "BAKER" "SMITH" "WATKINS" "BURNS" "TERRI" "RODRIGUEZ" "NEAL"
Benchmark:
longest <- function(testvec, namevec){
  a <- adist(testvec, namevec, counts = TRUE)
  b <- attr(a, "trafos")
  d <- array(grepl("S|I", b) + nchar(gsub("(.)\\1++", "1", b, perl = TRUE)), dim(a)) * 10 + a
  as.character(namevec[max.col(-d)])
}
EDIT: I was able to obtain a faster method (not necessarily the fastest):
longest2 <- function(testvec, namevec){
  # For each surname, grep() finds the test strings that contain it; stack()
  # turns that named list into (match, surname) pairs, and tapply() keeps the
  # longest matching surname per test string
  a <- stack(sapply(namevec, grep, testvec, value = TRUE, simplify = FALSE))
  tapply(as.character(a[, 2]), a[, 1], function(x) x[which.max(nchar(x))])[testvec]
}
microbenchmark::microbenchmark(longest(testvec,lnames$surname),longest2(testvec,lnames$surname),foo_longest_matches(testvec),times = 5)
Unit: seconds
expr min lq mean median uq max neval
longest(testvec, lnames$surname) 3.316550 3.984128 5.308339 6.265192 6.396348 6.579477 5
longest2(testvec, lnames$surname) 1.817059 1.917883 2.835354 3.350068 3.538278 3.553481 5
foo_longest_matches(testvec) 10.093179 10.325489 11.610619 10.756714 10.889326 15.988384 5
Not sure about fastest but here is a method to test:
library(wru)
data("surnames2010")
lnames <- surnames2010[nchar(as.character(surnames2010$surname))>3,]
testvec <- c("scottcampbell","mattbaker","tsmith","watkins","burnsmary","terri","frankrodriguez","neal")
lnames$surname <- tolower(lnames$surname)
testvec <- tolower(testvec)
foo_longest_matches <- function(string_vector) {
  outdf <- c()
  for (name in string_vector) {
    print(name)
    ting <- lnames[sapply(lnames$surname, function(x) grepl(x, name)), ]
    # you only care about the longest, remove the next line to get all matches
    ting <- ting[which.max(nchar(ting$surname)), ]
    outdf <- rbind(outdf, ting)
  }
  return(outdf)
}
get_matches <- foo_longest_matches(testvec)
get_matches
# surname p_whi p_bla p_his p_asi p_oth
# 47 campbell 0.7366 0.2047 0.02490000 0.00530000 0.02840000
# 44 baker 0.7983 0.1444 0.02280000 0.00560000 0.02890000
# 1 smith 0.7090 0.2311 0.02400000 0.00500000 0.03080000
# 240 watkins 0.6203 0.3227 0.02090000 0.00420000 0.03200000
# 155 burns 0.8026 0.1406 0.02480000 0.00590000 0.02610000
# 110133 terri 0.7453 0.1801 0.01243333 0.01243333 0.04973333
# 9 rodriguez 0.0475 0.0054 0.93770000 0.00570000 0.00360000
# 337 neal 0.6210 0.3184 0.02160000 0.00600000 0.03290000
I have an array a with some matrices in it. Now I need to efficiently check how many different matrices there are and what indices (in ascending order) they have in the array. My approach is the following: paste each matrix into a character string and look at the frequency table, like this:
n <- 10 #observations
a <- array(round(rnorm(2*2*n),1),
c(2,2,n))
paste_a <- apply(a, c(3), paste, collapse=" ") # paste each matrix into one string
names(paste_a) <- 1:n
freq <- as.numeric( table(paste_a) ) # frequencies of different matrices (in ascending order)
indizes <- as.numeric(names(sort(paste_a[!duplicated(paste_a)])))
nr <- length(freq) #number of different matrices
However, as you increase n to large numbers, this gets very inefficient (it's mainly paste() that's getting slower and slower). Does anyone have a better solution?
Here is a "real" dataset with 100 observations where some matrices are actual duplicates (as opposed to my example above): https://pastebin.com/aLKaSQyF
Thank you very much.
Since your actual data is made up of the integers 0, 1, 2, 3, why not take advantage of base 4? Integers are much faster to compare than entire matrix objects. (All occurrences of a below refer to the real data set from the link.)
Base4Approach <- function() {
  toBase4 <- sapply(1:dim(a)[3], function(x) {
    v <- as.vector(a[,,x])
    pows <- which(v > 0)
    coefs <- v[pows]
    sum(coefs*(4^pows))
  })
  myDupes <- which(duplicated(toBase4))
  a[,,-(myDupes)]
}
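To see what toBase4 computes, here is a minimal, hedged illustration on one hypothetical 2 x 2 slice with cells in 0..3; each slice maps to a single integer, so duplicated() only has to compare numbers rather than whole matrices:
v <- c(2, 0, 3, 1)        # column-major cells of one slice (hypothetical values)
pows <- which(v > 0)      # positions of the non-zero cells
sum(v[pows] * 4^pows)     # the integer key used by toBase4
## [1] 456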
And since the question is about efficiency, let's benchmark:
MartinApproach <- function() {
  ### commented this out for comparison reasons
  # dimnames(a) <- list(1:dim(a)[1], 1:dim(a)[2], 1:dim(a)[3])
  a <- a[,,!duplicated(a, MARGIN = 3)]
  nr <- dim(a)[3]
  a
}
identical(MartinApproach(), Base4Approach())
[1] TRUE
microbenchmark(Base4Approach(), MartinApproach())
Unit: microseconds
expr min lq mean median uq max neval
Base4Approach() 291.658 303.525 339.2712 325.4475 352.981 636.361 100
MartinApproach() 983.855 1000.958 1160.4955 1071.9545 1187.321 3545.495 100
The approach by #d.b. doesn't really do the same thing as the previous two approaches (it simply identifies and doesn't remove duplicates).
DBApproach <- function() {
  a[, , 9] = a[, , 1]
  # Convert to list
  mylist = lapply(1:dim(a)[3], function(i) a[1:dim(a)[1], 1:dim(a)[2], i])
  temp = sapply(mylist, function(x) sapply(mylist, function(y) identical(x, y)))
  temp2 = unique(apply(temp, 1, function(x) sort(which(x))))
  # The indices in 'a' where the matrices are the same
  temp2[lengths(temp2) > 1]
}
However, Base4Approach still dominates:
microbenchmark(Base4Approach(), MartinApproach(), DBApproach())
Unit: microseconds
expr min lq mean median uq max neval
Base4Approach() 298.764 324.0555 348.8534 338.899 356.0985 476.475 100
MartinApproach() 1012.601 1087.9450 1204.1150 1110.662 1162.9985 3224.299 100
DBApproach() 9312.902 10339.4075 11616.1644 11438.967 12413.8915 17065.494 100
Update courtesy of #alexis_laz
As mentioned in the comments by #alexis_laz, we can do much better.
AlexisBase4Approach <- function() {
  # vectorised encoding: multiply every cell by its base-4 place value and sum each slice
  toBase4 <- colSums(a * (4 ^ (0:(prod(dim(a)[1:2]) - 1))), dims = 2)
  myDupes <- which(duplicated(toBase4))
  a[,,-(myDupes)]
}
microbenchmark(Base4Approach(), MartinApproach(), DBApproach(), AlexisBase4Approach(), unit = "relative")
Unit: relative
expr min lq mean median uq max neval
Base4Approach() 11.67992 10.55563 8.177654 8.537209 7.128652 5.288112 100
MartinApproach() 39.60408 34.60546 27.930725 27.870019 23.836163 22.488989 100
DBApproach() 378.91510 342.85570 262.396843 279.190793 231.647905 108.841199 100
AlexisBase4Approach() 1.00000 1.00000 1.000000 1.000000 1.000000 1.000000 100
## Still gives accurate results
identical(MartinApproach(), AlexisBase4Approach())
[1] TRUE
My first attempt was actually really slow, so here is a slightly changed version of yours:
dimnames(a) <- list(1:dim(a)[1], 1:dim(a)[2], 1:dim(a)[3])
a <- a[,,!duplicated(a, MARGIN = 3)]
nr <- dim(a)[3] #number of different matrices
idx <- dimnames(a)[[3]] # indices of left over matrices
I don't know if this is exactly what you want, but here is a way you can extract the indices where the matrices are the same. More processing may be necessary to get exactly what you want.
#DATA
n <- 10
a <- array(round(rnorm(2*2*n),1), c(2,2,n))
a[, , 9] = a[, , 1]
temp = unique(apply(X = sapply(1:dim(a)[3], function(i)
sapply(1:dim(a)[3], function(j) identical(a[, , i], a[, , j]))),
MARGIN = 1,
FUN = function(x) sort(which(x))))
temp[lengths(temp) > 1]
#[[1]]
#[1] 1 9
I want to count the number of 1s that occur from RIGHT to LEFT across multiple columns, stopping at the first 0.
Example DF:
df<-data.frame(replicate(7,sample(0:1,30,rep=T)))
colnames(df)<-seq(1950,2010,10)
I've manually entered the desired result here under a new column "condition" as an example:
Thanks in advance for your help,
Cai
Here's a fully vectorized attempt
indx <- rowSums(df) == ncol(df) # rows that are all ones (per Jaap's comment)
df$condition <- ncol(df) - max.col(-df, ties = "last")
df$condition[indx] <- ncol(df) - 1
This basically finds the first zero from the right and counts how many columns lie to its right (which, for this binary data, are exactly the trailing 1s).
EDIT
Had to add handling for the special case where a row is all ones (the indx adjustment above).
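A minimal illustration of the trick on hypothetical data:
toy <- rbind(c(1, 0, 1, 1), c(1, 1, 1, 1))
# Row 1: the last zero is in column 2, so ncol(toy) - 2 = 2 trailing ones.
# Row 2 has no zero, so max.col() returns ncol(toy) and the result (0) has to
# be overwritten with ncol(toy), which is what the rowSums() check above does.
ncol(toy) - max.col(-toy, ties.method = "last")
## [1] 2 0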
# cumprod() of the reversed row stays 1 until the first zero is reached, so the
# sum counts the trailing ones
df$condition <- apply(df, 1, function(x) {
  y <- rev(x)
  sum(cumprod(y))
})
[Edit: now works]
Try this
df$condition <- apply(df,1,function(x){x<- rev(x);m <- match(0,x)[1]; if (is.na(m)) sum(x) else sum(x[1:m])})
We're matching the first 0 in the reversed row, then summing up to that element.
If there's no zero we sum the full row.
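A hedged walk-through on a single hypothetical row:
x <- c(1, 0, 1, 1)    # one row, years left to right
x <- rev(x)           # c(1, 1, 0, 1): scan from the right
m <- match(0, x)[1]   # first zero from the right is at position 3
sum(x[1:m])           # the two ones before it are counted; the zero adds nothing
## [1] 2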
Here's a benchmark of all the solutions:
library(microbenchmark)
library(stringr)
microbenchmark(
Moody_Mudskipper = apply(df,1,function(x){x<- rev(x);m <- match(0,x)[1]; if (is.na(m)) sum(x) else sum(x[1:m])}),
akrun = apply(df, 1, function(x) {x1 <- rle(x)
x2 <- tail(x1$lengths, 1)[tail(x1$values, 1)==1]
if(length(x2)==0) 0 else x2}),
akrun2 = str_count(do.call(paste0, df), "[1]+$"),
roland = apply(df, 1, function(x) {y <- rev(x);sum(y * cumprod(y != 0L))}),
David_Arenburg = ncol(df) - max.col(-df, ties = "last"),
times = 10)
# Unit: microseconds
# expr min lq mean median uq max neval
# Moody_Mudskipper 1437.948 1480.417 1677.1929 1536.159 1597.209 3009.320 10
# akrun 6985.174 7121.078 7718.2696 7691.053 7856.862 9289.146 10
# akrun2 1101.731 1188.793 1290.8971 1226.486 1343.099 1790.091 10
# akrun3 693.315 791.703 830.3507 820.371 884.782 1030.240 10
# roland 1197.995 1270.901 1708.5143 1332.305 1727.802 4568.660 10
# David_Arenburg 2845.459 3060.638 3406.3747 3167.519 3495.950 5408.494 10
# David_Arenburg_corrected 3243.964 3341.644 3757.6330 3384.645 4195.635 4943.099 10
For a bigger example David's solution is indeed the fastest, as said in the chosen solution's comments:
df<-data.frame(replicate(7,sample(0:1,1000,rep=T)))
# Unit: milliseconds
# expr min lq mean median uq max neval
# Moody_Mudskipper 31.324456 32.155089 34.168533 32.827345 33.848560 44.952570 10
# akrun 225.592061 229.055097 238.307506 234.761584 241.266853 271.000470 10
# akrun2 28.779824 29.261499 33.316700 30.118144 38.026145 46.711869 10
# akrun3 14.184466 14.334879 15.528201 14.633227 17.237317 18.763742 10
# roland 27.946005 28.341680 29.328530 28.497224 29.760516 33.692485 10
# David_Arenburg 3.149823 3.282187 3.630118 3.455427 3.727762 5.240031 10
# David_Arenburg_corrected 3.464098 3.534527 4.103335 3.833937 4.187141 6.165159 10
We can loop through the rows and use rle:
df$condition <- apply(df, 1, function(x) {
  x1 <- rle(x)
  x2 <- tail(x1$lengths, 1)[tail(x1$values, 1) == 1]
  if(length(x2) == 0) 0 else x2
})
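A hedged look at what rle sees for one hypothetical row; the length of the last run is the answer whenever that run consists of ones:
rle(c(1, 0, 1, 1))
## Run Length Encoding
##   lengths: int [1:3] 1 1 2
##   values : num [1:3] 1 0 1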
Or another option is str_extract
library(stringr)
v1 <- str_extract(do.call(paste0, df), "1+$")
df$condition <- ifelse(is.na(v1), 0, nchar(v1))
Or with a slightly more efficient stringi
library(stringi)
v1 <- stri_count(stri_extract(do.call(paste0, df), regex = "1+$"), regex = ".")
v1[is.na(v1)] <- 0
df$condition <- v1
Or with a more compact option
stri_count(do.call(paste0, df), regex = '(?=1+$)')
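The lookahead (?=1+$) is zero-width, so it matches once at every position from which the rest of the string is all ones; the match count therefore equals the number of trailing ones. A hedged illustration on a single pasted row:
library(stringi)
stri_count("1011011", regex = "(?=1+$)")
## [1] 2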
I would like to transpose a nested list. Assume the following nested list x is given:
a <- list(c("a","b","c","d"))
b <- list(c("d","c","b","a"))
c <- list(c("4","3","2","1"))
d <- list(c("1","2","3","4"))
x <- list(a,b,c,d)
The outcome should be a nested list in which the first "column" of the original list x becomes the first nested list element, that is "a","d","4","1"; the second column becomes the second nested list element, i.e. "b","c","3","2"; and so on. In the end the structure is a kind of transpose of the original structure. How can this be done in R?
We could also do without lapply (using matrix):
relist(matrix(unlist(x), ncol = 4, byrow = T), skeleton = x)
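A quick hedged check that this really transposes (the first element should be "a","d","4","1" as described in the question):
relist(matrix(unlist(x), ncol = 4, byrow = TRUE), skeleton = x)[[1]]
## [[1]]
## [1] "a" "d" "4" "1"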
Benchmarking
library(microbenchmark)
a <- list(c("a","b","c","d"))
b <- list(c("d","c","b","a"))
c <- list(c("4","3","2","1"))
d <- list(c("1","2","3","4"))
x <- list(a,b,c,d)
f_akrun <- function(x) {m1 <- do.call(rbind, lapply(x, function(y) do.call(rbind, y)));relist(m1, skeleton = x);}
f_m0h3n <- function(x) {relist(matrix(unlist(x), ncol = length(x[[1]][[1]]), byrow = T), skeleton = x)}
setequal(f_akrun(x), f_m0h3n(x))
# [1] TRUE
microbenchmark(f_akrun(x), f_m0h3n(x))
# Unit: microseconds
# expr min lq mean median uq max neval
# f_akrun(x) 135.591 137.301 144.3545 138.585 148.422 334.484 100
# f_m0h3n(x) 110.782 111.638 116.5477 112.493 117.412 212.153 100
We can try
m1 <- do.call(rbind, lapply(x, function(y) do.call(rbind, y)))
relist(m1, skeleton = x)
I'm trying to merge two vectors of the same length, where the NAs in vector "a" align with the numbers in vector "b" and vice versa:
a <- c(1, NA, 3, NA)
b <- c(NA, 2, NA, 4)
The output should be:
1, 2 ,3, 4
Thanks for the help!
edit: the solution I used was
a[is.na(a)] <- b[is.na(a)]
The values of a that correspond to is.na(a) should be replaced with the values of b that correspond to the negation of is.na(b). Here I define a new vector d so as to not over-write the original vectors a or b.
d <- a
d[is.na(d)] <- b[!is.na(b)]
d
# [1] 1 2 3 4
If you know the NA values begin in the second position, you could also alternate the assignment.
d <- a
d[c(FALSE, TRUE)] <- b[c(FALSE, TRUE)]
d
# [1] 1 2 3 4
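The logical index c(FALSE, TRUE) is recycled along the vector, so it selects every second position; a quick hedged check with the question's vectors:
which(rep_len(c(FALSE, TRUE), length(a)))
## [1] 2 4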
Here are some more solutions that might have more "literal" resonance. They have equivalent outputs:
m <- mapply(c, na.omit(a), na.omit(b), SIMPLIFY= FALSE) ## or,
m <- Map(c, na.omit(a), na.omit(b))
output <- unlist(m) ## or,
output <- Reduce(c, m)
What this does is first concatenate pairs across na.omit(a) and na.omit(b), and then concatenate all those pairs together.
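With the question's vectors this produces the desired output (hedged check):
unlist(Map(c, na.omit(a), na.omit(b)))
## [1] 1 2 3 4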
As far as performance goes, here is a quick benchmark:
library(microbenchmark)
gc()
a <- (1:1e4)[c(TRUE, NA)]
b <- (1:1e4)[c(NA, TRUE)]
microbenchmark(
unlist(mapply(c, na.omit(a), na.omit(b), SIMPLIFY= FALSE)),
unlist(Map(c, na.omit(a), na.omit(b))),
Reduce(c, mapply(c, na.omit(a), na.omit(b), SIMPLIFY= FALSE)),
Reduce(c, Map(c, na.omit(a), na.omit(b))),
times = 100
)
# Unit: milliseconds
#                                                            expr       min        lq    median        uq       max neval
#    unlist(mapply(c, na.omit(a), na.omit(b), SIMPLIFY = FALSE))   4.476689  5.103025  5.488113  5.723023  10.59291   100
#                          unlist(Map(c, na.omit(a), na.omit(b)))  4.475753  4.902474  5.422528  5.784764  13.04502   100
#  Reduce(c, mapply(c, na.omit(a), na.omit(b), SIMPLIFY = FALSE)) 75.974627 82.953051 86.082578 89.652660 114.94584   100
#                       Reduce(c, Map(c, na.omit(a), na.omit(b))) 75.919419 82.626217 85.761412 89.550317 158.90629   100
Unsurprisingly, Reduce is much slower than unlist. Map is only slightly slower than mapply. However Reduce is much, much more generally applicable, whereas unlist can really only handle this special case.