I am trying to permute a character vector a of variable length, picking 3 elements each time, without repetition. Ordering counts only for the first element, not for the second and third (e.g. abc != bac != cab, but abc = acb and bca = bac). Each set of 3 permuted elements should be a row in a dataframe b.
A vector with letters a,b,c,d,e would result in this expected output:
abc
abd
abe
acd
ace
ade
bac
bad
bae
bcd
bce
bde
cab
cad
cae
cbd
cbe
cde
dab
dac
dae
dbc
dbe
dce
eab
eac
ead
ebc
ebd
ecd
Using 3 nested for loops I was able to achieve this output, but it is slow when the vector is long.
a = letters[1:5]
aL = length(a)
b <- data.frame(var1 = character(),
                var2 = character(),
                var3 = character(),
                stringsAsFactors = FALSE)
# restricted permutations for moderation
pracma::tic()
for (i in 1:aL) {
  for (j in 1:(aL-1)) {
    for (k in (j+1):aL) {
      if (j != i & k != i) {
        b <- rbind(b, data.frame(a[i], a[j], a[k]))
      }
    }
  }
}
pracma::toc()
#> elapsed time is 0.070000 seconds
b
#> a.i. a.j. a.k.
#> 1 a b c
#> 2 a b d
#> 3 a b e
#> 4 a c d
#> 5 a c e
#> 6 a d e
#> 7 b a c
#> 8 b a d
#> 9 b a e
#> 10 b c d
#> 11 b c e
#> 12 b d e
#> 13 c a b
#> 14 c a d
#> 15 c a e
#> 16 c b d
#> 17 c b e
#> 18 c d e
#> 19 d a b
#> 20 d a c
#> 21 d a e
#> 22 d b c
#> 23 d b e
#> 24 d c e
#> 25 e a b
#> 26 e a c
#> 27 e a d
#> 28 e b c
#> 29 e b d
#> 30 e c d
Created on 2019-07-17 by the reprex package (v0.2.1)
How can I achieve the same outcome in less time? Is recursion faster?
Any help is greatly appreciated. Thank you.
I propose the following solution:
a = letters[1:5]
A = t(combn(a, 3))  # create all possible three-letter combinations,
                    # disregarding the order
Full = rbind(A, A[, 3:1], A[, c(2, 3, 1)])  # put each element of a combination
                                            # in first place once
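If you also want the rows ordered exactly as in the expected output, with the trailing pair alphabetical, a small finishing step can be bolted on. This is my own sketch, not part of the proposal above:
# sketch (my addition): sort the trailing pair within each row, then order the rows
Full[, 2:3] <- t(apply(Full[, 2:3], 1, sort))
b <- data.frame(Full, stringsAsFactors = FALSE)
names(b) <- c("var1", "var2", "var3")
b <- b[order(b$var1, b$var2, b$var3), ]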
Here's one option for your specific example:
library(gtools)
library(dplyr)
# example vector
vec = letters[1:5]
# vectorised function to rearrange elements (based on your restriction)
f = function(x1,x2,x3) paste0(c(x1, sort(c(x2,x3))), collapse = " ")
f = Vectorize(f)
permutations(length(vec), 3, vec) %>%       # get permutations
  data.frame(., stringsAsFactors = F) %>%   # save as data frame
  mutate(vec = f(X1, X2, X3)) %>%           # apply function to each row
  distinct(vec, .keep_all = T)              # keep distinct vec values
# X1 X2 X3 vec
# 1 a b c a b c
# 2 a b d a b d
# 3 a b e a b e
# 4 a c d a c d
# 5 a c e a c e
# 6 a d e a d e
# 7 b a c b a c
# ...
It's not clear whether you want your output to be 3 separate columns with 1 element each, or one column holding the whole vector, so I'm keeping both for you to choose from. You can keep columns {X1, X2, X3} or just vec.
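If you keep only vec and later need the three separate columns back, one option is tidyr::separate. A sketch, assuming the space separator used in f above and that the pipe result was saved as result (my name, not from the code above):
library(tidyr)
# split vec on the space separator that f() inserted
result %>% separate(vec, into = c("var1", "var2", "var3"), sep = " ")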
The following is a straightforward rewrite of the triple for loop as a triple lapply loop.
t1 <- system.time({
  for (i in 1:aL) {
    for (j in 1:(aL-1)) {
      for (k in (j+1):aL) {
        if (j != i & k != i) {
          b <- rbind(b, data.frame(a[i], a[j], a[k]))
        }
      }
    }
  }
})
t2 <- system.time({
  d <- lapply(1:aL, function(i) {
    tmp <- lapply(1:(aL-1), function(j) {
      tmp <- lapply((j+1):aL, function(k) {
        if (j != i & k != i) c(a[i], a[j], a[k])
      })
      do.call(rbind, tmp)
    })
    do.call(rbind, tmp)
  })
  d <- do.call(rbind.data.frame, d)
  names(d) <- paste("a", 1:3, sep = ".")
})
all.equal(b, d)
#[1] "Names: 3 string mismatches"
rbind(t1, t2)
# user.self sys.self elapsed user.child sys.child
#t1 0.051 0 0.051 0 0
#t2 0.017 0 0.018 0 0
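Most of the remaining cost in both versions comes from growing objects inside loops. For longer vectors, a fully vectorized build along the lines of the combn answer above should scale better still; a rough timing sketch using the same a (e is my own name for the result):
t3 <- system.time({
  A <- t(combn(a, 3))                                   # sorted 3-element combinations
  e <- data.frame(rbind(A, A[, 3:1], A[, c(2, 3, 1)]),  # each element first once
                  stringsAsFactors = FALSE)
  names(e) <- paste("a", 1:3, sep = ".")
})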
Related
I have a dataframe with multiple factors and multiple numeric variables. I would like to collapse it over the factors (say, by mean).
In my attempts I could only think of nested sapply or for loops to isolate the numeric elements to be averaged.
var <- data.frame(A = c(rep('a', 8), rep('b', 8)),
                  B = rep(c(rep('c', 2), rep('d', 2)), 4),
                  C = rep(c('e', 'f'), 8),
                  D = rnorm(16), E = rnorm(16))
> var
A B C D E
1 a c e 1.1601720731 -0.57092435
2 a c f -0.0120178626 1.05003748
3 a d e 0.5311032778 1.67867806
4 a d f -0.3399901000 0.01459940
5 a c e -0.2887561691 -0.03847519
6 a c f 0.0004299922 -0.36695879
7 a d e 0.8124655890 0.05444033
8 a d f -0.3777058654 1.34074427
9 b c e 0.7380720821 0.37708543
10 b c f -0.3163496271 0.10921373
11 b d e -0.5543252191 0.35020193
12 b d f -0.5753686426 0.54642790
13 b c e -1.9973216646 0.63597405
14 b c f -0.3728926714 -3.07669300
15 b d e -0.6461596329 -0.61659041
16 b d f -1.7902722068 -1.06761729
sapply(4:ncol(var), function(i) {
  sapply(1:length(levels(var$A)), function(j) {
    sapply(1:length(levels(var$B)), function(t) {
      sapply(1:length(levels(var$C)), function(z) {
        mean(var[var$A == levels(var$A)[j] &
                 var$B == levels(var$B)[t] &
                 var$C == levels(var$C)[z], i])
      })
    })
  })
})
[,1] [,2]
[1,] 0.435707952 -0.3046998
[2,] -0.005793935 0.3415393
[3,] 0.671784433 0.8665592
[4,] -0.358847983 0.6776718
[5,] -0.629624791 0.5065297
[6,] -0.344621149 -1.4837396
[7,] -0.600242426 -0.1331942
[8,] -1.182820425 -0.2605947
Is there a way to do this without so many sapply calls? Maybe with mapply or outer?
Maybe just,
var <- data.frame(A = c(rep('a', 8), rep('b', 8)),
                  B = rep(c(rep('c', 2), rep('d', 2)), 4),
                  C = rep(c('e', 'f'), 8),
                  D = rnorm(16), E = rnorm(16))
library(dplyr)
var %>%
  group_by(A, B, C) %>%
  summarise_if(is.numeric, mean)
(Note that the output you show isn't what I get when I run your sapply code, but the above is identical to what I get when I run your sapply version.)
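If you are on a recent dplyr (1.0.0 or later; an assumption about your setup), the same can be written with across():
var %>%
  group_by(A, B, C) %>%
  summarise(across(where(is.numeric), mean), .groups = "drop")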
For inline aggregation (keeping same number of rows of data frame), consider ave:
var$D_mean <- with(var, ave(D, A, B, C, FUN=mean))
var$E_mean <- with(var, ave(E, A, B, C, FUN=mean))
For full aggregation (collapsed to factor groups), consider aggregate:
aggregate(. ~ A + B + C, var, mean)
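As a quick shape check (the values themselves will differ because the data are random): the inline version keeps all 16 rows, repeating each group mean within its group, while the aggregate call collapses to one row per A/B/C combination:
nrow(var)                                  # 16 rows, means repeated within groups
nrow(aggregate(. ~ A + B + C, var, mean))  # 8 rows, one per factor combination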
I will complete the holy trinity with a data.table solution. Here .SD is a data.table holding all the columns not listed in the by portion. This is a near-duplicate of an existing question (the only difference being that more than one column is summarized), so see that question if you want more solutions.
library(data.table)
setDT(var)
var[, lapply(.SD, mean), by = .(A, B, C)]
# A B C D E
# 1: a c e 0.07465822 0.032976115
# 2: a c f 0.40789460 -0.944631574
# 3: a d e 0.72054938 0.039781185
# 4: a d f -0.12463910 0.003363382
# 5: b c e -1.64343115 0.806838905
# 6: b c f -1.08122890 -0.707975411
# 7: b d e 0.03937829 0.048136471
# 8: b d f -0.43447899 0.028266455
I am attempting to write a simulation that involves randomly re-assigning items to categories with some restrictions.
Let's say I have a collection of pebbles 1 to N distributed across buckets A through J:
set.seed(100)
df1 <- data.frame(pebble = 1:100,
                  bucket = sample(LETTERS[1:10], 100, T),
                  stringsAsFactors = F)
head(df1)
#> pebble bucket
#> 1 1 D
#> 2 2 C
#> 3 3 F
#> 4 4 A
#> 5 5 E
#> 6 6 E
I want to randomly re-assign pebbles to buckets. Without restrictions I could do it like so:
random.permutation.df1 <- data.frame(pebble = df1$pebble, bucket = sample(df1$bucket))
colSums(table(random.permutation.df1))
#> A B C D E F G H I J
#> 4 7 13 14 12 11 11 10 9 9
colSums(table(df1))
#> A B C D E F G H I J
#> 4 7 13 14 12 11 11 10 9 9
Importantly this re-assigns pebbles while ensuring that each bucket retains the same number (because we are sampling without replacement).
However, I have a set of restrictions such that certain pebbles cannot be assigned to certain buckets. I encode the restrictions in df2:
df2 <- data.frame(pebble = sample(1:100, 10),
                  bucket = sample(LETTERS[1:10], 10, T),
                  stringsAsFactors = F)
df2
#> pebble bucket
#> 1 33 I
#> 2 39 I
#> 3 5 A
#> 4 36 C
#> 5 55 J
#> 6 66 A
#> 7 92 J
#> 8 95 H
#> 9 2 C
#> 10 49 I
The logic here is that pebbles 33 and 39 cannot be placed in bucket I, or pebble 5 in bucket A, etc. I would like to permute which pebbles are in which bucket subject to these restrictions.
So far, I've thought of tackling it in a loop as below, but this does not result in buckets retaining the same number of pebbles:
perms <- character(0)
cnt <- 1
for (p in df1$pebble) {
  perms[cnt] <- sample(df1$bucket[!df1$bucket %in% df2$bucket[df2$pebble == p]], 1)
  cnt <- cnt + 1
}
table(perms)
#> perms
#> A B C D E F G H I J
#> 6 7 12 22 15 1 14 7 7 9
I then tried sampling positions, and then removing that position from the available buckets and the available remaining positions. This is also not working, and I suspect it is because I am sampling my way into branches of the tree that do not yield solutions.
set.seed(42)
perms <- character(0)
cnt <- 1
ids <- 1:nrow(df1)
bckts <- df1$bucket
for (p in df1$pebble) {
  id <- sample(ids[!bckts %in% df2$bucket[df2$pebble == p]], 1)
  perms[cnt] <- bckts[id]
  bckts <- bckts[-id]
  ids <- ids[ids != id]
  cnt <- cnt + 1
}
table(perms)
#> perms
#> A B C D E F G J
#> 1 1 4 1 2 1 2 2
Any thoughts or advice much appreciated (and apologies for the length).
EDIT:
I foolishly forgot to clarify that I was previously solving this by simply resampling until I got a draw that didn't violate any of the conditions in df2. However, I now have so many conditions that this approach takes too long to run. I am still open to brute-forcing it if I can figure out a way to make that faster.
I have a solution. (I managed to write it in base R, but the data.table version below is easier to understand and write.)
random.permutation.df2 <- data.frame(pebble = df1$pebble,
                                     bucket = rep(NA, length(df1$pebble)))
for (bucket in unique(df1$bucket)) {
  free <- is.na(random.permutation.df2$bucket) &
    !random.permutation.df2$pebble %in% df2$pebble[df2$bucket == bucket]
  N <- sum(free)
  random.permutation.df2$bucket[free] <-
    sample(c(rep(bucket, sum(df1$bucket == bucket)),
             rep(NA, N - sum(df1$bucket == bucket))))
}
The idea is, for each bucket, to sample among the authorised pebbles: those not excluded by df2 and those not already assigned. You then sample a vector of the right length, mixing the current bucket's value with NAs (placeholders for the buckets still to come), and voilà.
The same logic is easier to read with data.table:
library(data.table)
random.permutation.df2 <- setDT(random.permutation.df2)
df2 <- setDT(df2)
for (bucketi in unique(df1$bucket)) {
  # the loop variable must be bucketi throughout: inside the data.table call,
  # bucket refers to the column, not the loop variable
  random.permutation.df2[is.na(bucket) & !pebble %in% df2[bucket == bucketi, pebble],
                         bucket := sample(c(rep(bucketi, sum(df1$bucket == bucketi)),
                                            rep(NA, .N - sum(df1$bucket == bucketi))))]
}
It satisfies both conditions:
> colSums(table(df1))
A B C D E F G H I J
4 7 13 14 12 11 11 10 9 9
> colSums(table(random.permutation.df2))
A B C D E F G H I J
4 7 13 14 12 11 11 10 9 9
To verify that there isn't any contradiction with df2:
> df2
pebble bucket
1: 37 D
2: 95 H
3: 90 C
4: 80 C
5: 31 D
6: 84 G
7: 76 I
8: 57 H
9: 7 E
10: 39 A
> random.permutation.df2[pebble %in% df2$pebble,.(pebble,bucket)]
pebble bucket
1: 7 D
2: 31 H
3: 37 J
4: 39 F
5: 57 B
6: 76 E
7: 80 F
8: 84 B
9: 90 H
10: 95 D
Here is a brute-force approach where one simply keeps trying until a valid solution is found:
set.seed(123)
df1 <- data.frame(pebble = 1:100,
                  bucket = sample(LETTERS[1:10], 100, T),
                  stringsAsFactors = F)
df2 <- data.frame(pebble = sample(1:100, 10),
                  bucket = sample(LETTERS[1:10], 10, T),
                  stringsAsFactors = F)
random.permutation.df1 <- data.frame(pebble = df1$pebble, bucket = sample(df1$bucket))
The random permutation does not satisfy the condition, so try new ones:
merge(random.permutation.df1, df2)
#> pebble bucket
#> 1 60 J
while (TRUE) {
  random.permutation.df1 <- data.frame(pebble = df1$pebble,
                                       bucket = sample(df1$bucket))
  if (nrow(merge(random.permutation.df1, df2)) == 0)
    break
}
New permutation matches the condition:
merge(random.permutation.df1, df2)
#> [1] pebble bucket
#> <0 rows> (or 0-length row.names)
colSums(table(random.permutation.df1))
#> A B C D E F G H I J
#> 7 12 11 9 14 7 11 11 11 7
colSums(table(df1))
#> A B C D E F G H I J
#> 7 12 11 9 14 7 11 11 11 7
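If the restrictions are tight, the while loop can spin for a very long time; a capped variant fails loudly instead of hanging. This is my own sketch (max_tries and found are hypothetical names, not from the answer above):
max_tries <- 10000  # hypothetical cap; tune to your problem size
found <- FALSE
for (i in seq_len(max_tries)) {
  random.permutation.df1 <- data.frame(pebble = df1$pebble,
                                       bucket = sample(df1$bucket))
  if (nrow(merge(random.permutation.df1, df2)) == 0) {
    found <- TRUE
    break
  }
}
if (!found) stop("no valid permutation found within max_tries attempts")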
I have a table like this:
A B C D E
7 1 6 8 7
9 3 9 5 9
4 6 2 1 10
10 5 3 4 1
1 3 5 9 3
6 4 8 7 6
I am in the process of finding the correlation of each variable with every other variable in the table. This is the R code I use:
test <- read.csv("D:/AB/test.csv")
iterations <- ncol(test)
correlation <- matrix(ncol = 3, nrow = iterations * iterations)
for (k in 1:iterations) {
  for (l in 1:iterations) {
    corr <- cor(test[, k], test[, l])
    corr_string_A <- names(test[k])
    corr_string_B <- names(test[l])
    correlation[l + ((k-1) * iterations), ] <- rbind(corr_string_A, corr_string_B, corr)
  }
}
The following is the output that I received:
Var1 Var2 value
1 A A 1.00000000
2 B A 0.50018605
3 C A -0.35747393
4 D A -0.25670054
5 E A -0.02974821
6 A B 0.50018605
7 B B 1.00000000
8 C B 0.56070716
9 D B 0.46164928
10 E B 0.16813991
11 A C -0.35747393
12 B C 0.56070716
13 C C 1.00000000
14 D C 0.52094589
15 E C 0.23190036
16 A D -0.25670054
17 B D 0.46164928
18 C D 0.52094589
19 D D 1.00000000
20 E D -0.39223227
21 A E -0.02974821
22 B E 0.16813991
23 C E 0.23190036
24 D E -0.39223227
25 E E 1.00000000
However, I don't want the diagonal or the duplicate values from the upper triangle; each unique combination should appear only once. The final output should look like:
Var1 Var2 value
1 B A 0.50018605
2 C A -0.35747393
3 D A -0.25670054
4 E A -0.02974821
5 C B 0.56070716
6 D B 0.46164928
7 E B 0.16813991
8 D C 0.52094589
9 E C 0.23190036
10 E D -0.39223227
I understand that there are a few techniques, like reshape, with which the above output can be achieved, but I want to adapt the above R code to produce the results shown.
I believe the upper bound of the inner for loop should change dynamically to achieve this; however, I am not sure how to make that work.
You can convert your correlation matrix to the 3-column format with as.data.frame and as.table, and then limiting to values above or below the diagonal can be done with subset.
subset(as.data.frame(as.table(cor(dat))),
       match(Var1, names(dat)) > match(Var2, names(dat)))
# Var1 Var2 Freq
# 2 B A -0.02299154
# 3 C A 0.23155350
# 4 D A -0.28036851
# 5 E A -0.05230260
# 8 C B -0.58384036
# 9 D B -0.80175393
# 10 E B 0.00000000
# 14 D C 0.52094589
# 15 E C 0.23190036
# 20 E D -0.39223227
Note that for larger datasets this should be much more efficient than separately calling cor on pairs of variables because cor is vectorized, and further it's clearly a lot less typing.
If you really must keep the looping code, then you can achieve your desired result with small changes to the pair of for loops and some book keeping about the row of correlation that you are computing:
iterations <- ncol(test)
correlation <- matrix(ncol = 3, nrow = choose(iterations, 2))
pos <- 1
for (k in 2:iterations) {
  for (l in 1:(k-1)) {
    corr <- cor(test[, k], test[, l])
    corr_string_A <- names(test[k])
    corr_string_B <- names(test[l])
    correlation[pos, ] <- rbind(corr_string_A, corr_string_B, corr)
    pos <- pos + 1
  }
}
However, I really wouldn't suggest this looping solution; it would be better to use the one-liner provided above and handle any generated NA values afterward.
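For that NA handling, note that cor itself has a use argument, so missing values can be dealt with before any reshaping. A sketch, assuming dat contains NAs:
# pairwise.complete.obs computes each correlation on the rows complete for that pair
subset(as.data.frame(as.table(cor(dat, use = "pairwise.complete.obs"))),
       match(Var1, names(dat)) > match(Var2, names(dat)))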
From the OP's loop output, we can subset the rows:
df1[!duplicated(t(apply(df1[1:2], 1, sort))) & df1[,1]!=df1[,2],]
# Var1 Var2 value
#2 B A 0.50018605
#3 C A -0.35747393
#4 D A -0.25670054
#5 E A -0.02974821
#8 C B 0.56070716
#9 D B 0.46164928
#10 E B 0.16813991
#14 D C 0.52094589
#15 E C 0.23190036
#20 E D -0.39223227
Or as I mentioned (first) in the comments, just use
cor(test)
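That returns the full correlation matrix in one vectorized call. If you want the same three-column, lower-triangle-only output directly from it, here is a compact base sketch (my addition):
cm <- cor(test)
idx <- which(lower.tri(cm), arr.ind = TRUE)  # keep row > col entries only
data.frame(Var1 = rownames(cm)[idx[, 1]],
           Var2 = colnames(cm)[idx[, 2]],
           value = cm[idx])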
I have a chunk of sentences and I want to build an undirected edge list of word co-occurrences and see the frequency of every edge. I took a look at the tm package but didn't find similar functions. Is there some package/script I can use? Thanks a lot!
Note: A word doesn't co-occur with itself. A word which appears twice or more in the same sentence co-occurs with other words only once.
DF:
sentence_id text
1 a b c d e
2 a b b e
3 b c d
4 a e
5 a
6 a a a
OUTPUT
word1 word2 freq
a b 2
a c 1
a d 1
a e 3
b c 2
b d 2
b e 2
c d 2
c e 1
d e 1
It's convoluted so there's got to be a better approach:
dat <- read.csv(text="sentence_id, text
1, a b c d e
2, a b b e
3, b c d
4, a e", header=TRUE)
library(qdapTools); library(tidyr)
x <- t(mtabulate(with(dat, by(text, sentence_id, bag_o_words))) > 0)
out <- x %*% t(x)
out[upper.tri(out, diag=TRUE)] <- NA
out2 <- matrix2df(out, "word1") %>%
  gather(word2, freq, -word1) %>%
  na.omit()
rownames(out2) <- NULL
out2
## word1 word2 freq
## 1 b a 2
## 2 c a 1
## 3 d a 1
## 4 e a 3
## 5 c b 2
## 6 d b 2
## 7 e b 2
## 8 d c 2
## 9 e c 1
## 10 e d 1
Base only solution
out <- lapply(with(dat, split(text, sentence_id)), function(x) {
  strsplit(gsub("^\\s+|\\s+$", "", as.character(x)), "\\s+")[[1]]
})
nms <- sort(unique(unlist(out)))
out2 <- lapply(out, function(x) {
  as.data.frame(table(x), stringsAsFactors = FALSE)
})
dat2 <- data.frame(x = nms)
for (i in seq_along(out2)) {
  m <- merge(dat2, out2[[i]], all.x = TRUE)
  names(m)[i + 1] <- dat[["sentence_id"]][i]
  dat2 <- m
}
dat2[is.na(dat2)] <- 0
x <- as.matrix(dat2[, -1]) > 0
out3 <- x %*% t(x)
out3[upper.tri(out3, diag=TRUE)] <- NA
dimnames(out3) <- list(dat2[[1]], dat2[[1]])
out4 <- na.omit(data.frame(
  word1 = rep(rownames(out3), ncol(out3)),
  word2 = rep(colnames(out3), each = nrow(out3)),
  freq = c(unlist(out3)),
  stringsAsFactors = FALSE
))
row.names(out4) <- NULL
out4
This is very closely related to #TylerRinker's answer, but using different tools.
library(splitstackshape)
library(reshape2)
temp <- crossprod(
  as.matrix(
    cSplit_e(d, "text", " ", type = "character",
             fill = 0, drop = TRUE)[-1]))
temp[upper.tri(temp, diag = TRUE)] <- NA
melt(temp, na.rm = TRUE)
# Var1 Var2 value
# 2 text_b text_a 2
# 3 text_c text_a 1
# 4 text_d text_a 1
# 5 text_e text_a 3
# 8 text_c text_b 2
# 9 text_d text_b 2
# 10 text_e text_b 2
# 14 text_d text_c 2
# 15 text_e text_c 1
# 20 text_e text_d 1
The "text_" parts of "Var1" and "Var2" can be stripped easily with sub or gsub.
Here's a base R way:
d <- read.table(text='sentence_id text
1 "a b c d e"
2 "a b b e"
3 "b c d"
4 "a e"', header=TRUE, as.is=TRUE)
result.vec <- table(unlist(lapply(d$text, function(text) {
  pairs <- combn(unique(scan(text = text, what = '', sep = ' ')), m = 2)
  interaction(pairs[1, ], pairs[2, ])
})))
# a.b b.b c.b d.b a.c b.c c.c d.c a.d b.d c.d d.d a.e b.e c.e d.e
# 2 0 0 0 1 2 0 0 1 2 2 0 3 2 1 1
result <- subset(data.frame(do.call(rbind, strsplit(names(result.vec), '\\.')),
                            freq = as.vector(result.vec)),
                 freq > 0)
with(result, result[order(X1, X2), ])
# X1 X2 freq
# 1 a b 2
# 5 a c 1
# 9 a d 1
# 13 a e 3
# 6 b c 2
# 10 b d 2
# 14 b e 2
# 11 c d 2
# 15 c e 1
# 16 d e 1
Given:
df <- data.frame(rep = letters[sample(4, 30, replace=TRUE)], loc = LETTERS[sample(5:8, 30, replace=TRUE)], y= rnorm(30))
lookup <- data.frame(rep=letters[1:4], loc=LETTERS[5:8])
This will give me the rows in df that have rep,loc combinations that occur in lookup:
mdply(lookup, function(rep, loc) {
  r <- rep
  l <- loc
  subset(df, rep == r & loc == l)
})
But I've read that using subset() inside a function is poor practice due to scoping issues. So how do I get the desired result using index notation?
In this particular case, merge seems to make the most sense to me:
merge(df, lookup)
# rep loc y
# 1 a E 1.6612394
# 2 a E 1.1050825
# 3 a E -0.7016759
# 4 b F 0.4364568
# 5 d H 1.3246636
# 6 d H -2.2573545
# 7 d H 0.5061980
# 8 d H 0.1397326
A simple alternative might be to paste together the "rep" and "loc" columns from df and from lookup and subset based on that:
df[do.call(paste, df[c("rep", "loc")]) %in% do.call(paste, lookup), ]
# rep loc y
# 4 d H 1.3246636
# 10 b F 0.4364568
# 14 a E -0.7016759
# 15 a E 1.6612394
# 19 d H 0.5061980
# 20 a E 1.1050825
# 22 d H -2.2573545
# 28 d H 0.1397326
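One caveat with the paste trick (my note, not from the answer): if the pasted values could themselves contain a space, pick a separator that cannot occur in the data. A sketch, with key being a hypothetical helper of my own:
key <- function(d) do.call(paste, c(d, sep = "\r"))  # "\r" assumed absent from the data
df[key(df[c("rep", "loc")]) %in% key(lookup), ]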