I am trying to add the last value from the previous row to the subsequent ones. For example
tmat = rbind(c(1,2,3), c(1,2,3), c(1,2,5))
tmat = as.data.frame(tmat)
tmat
V1 V2 V3
1 1 2 3
2 1 2 3
3 1 2 5
changed to
V1 V2 V3
1 1 2 3
2 4 5 6
3 7 8 11
I have tried various ways, but I have a blind spot with this one.
new=list()
for(i in 2:nrow(tmat)){
new[[i]] = cumsum(tmat[i,]+tmat[i-1,3])
}
do.call(rbind, new)
Thanks for any help.
I'd use a loop since you need to compute the rows step by step...
a <- 1:3
aa <- rbind(a,a,a)
aa[3,3] <- 6
for(i in 1:(nrow(aa)-1)) {
  toadd <- aa[i, ncol(aa)]
  aa[i+1, ] <- aa[i+1, ] + toadd
}
aa
[,1] [,2] [,3]
a 1 2 3
a 4 5 6
a 7 8 12
As a matrix reduction:
do.call(rbind, Reduce(function(a0, a1) a1 + a0[length(a0)],
                      split(as.matrix(tmat), row(as.matrix(tmat))),
                      accumulate = TRUE))
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 4 5 6
[3,] 7 8 11
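A fully vectorized alternative (a sketch, not from the original answers): the offset added to row i is the running sum of the earlier rows' last-column values, so one cumsum plus R's column-wise recycling does the whole job:

```r
tmat <- data.frame(V1 = c(1, 1, 1), V2 = c(2, 2, 2), V3 = c(3, 3, 5))

# Offset for row i is the cumulative sum of the original last-column
# values of all earlier rows (0 for the first row).
off <- c(0, cumsum(head(tmat[[ncol(tmat)]], -1)))

# Adding a length-nrow vector to a data frame recycles it down each
# column, i.e. applies the offsets row-wise.
res <- tmat + off
res
#   V1 V2 V3
# 1  1  2  3
# 2  4  5  6
# 3  7  8 11
```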
I am hoping to create a matrix that shows a count of instances of overlapping values for a grouping variable based on a second variable. Specifically, I am hoping to determine the degree to which primary studies overlap across meta-analyses in order to create a network diagram.
So, in this example, I have three meta-analyses that include some portion of three primary studies.
df <- data.frame(metas = c(1,1,1,2,3,3), studies = c(1,3,2,1,2,3))
metas studies
1 1 1
2 1 3
3 1 2
4 2 1
5 3 2
6 3 3
I would like it to return:
v1 v2 v3
1 3 1 2
2 1 1 0
3 2 0 2
The value in row 1, column 1 indicates that Meta-analysis 1 had three studies in common with itself (i.e., it included three studies). Row 1, column 2 indicates that Meta-analysis 1 had one study in common with Meta-analysis 2. Row 1, column 3 indicates that Meta-analysis 1 had two studies in common with Meta-analysis 3.
I believe you are looking for a symmetric matrix of intersecting studies.
dfspl <- split(df$studies, df$metas)
out <- outer(seq_along(dfspl), seq_along(dfspl),
function(a, b) lengths(Map(intersect, dfspl[a], dfspl[b])))
out
# [,1] [,2] [,3]
# [1,] 3 1 2
# [2,] 1 1 0
# [3,] 2 0 2
If you need names on them, you can go with the names as defined by df$metas:
rownames(out) <- colnames(out) <- names(dfspl)
out
# 1 2 3
# 1 3 1 2
# 2 1 1 0
# 3 2 0 2
If you need the names defined as v plus the meta name, go with
rownames(out) <- colnames(out) <- paste0("v", names(dfspl))
out
# v1 v2 v3
# v1 3 1 2
# v2 1 1 0
# v3 2 0 2
If you need to understand what this is doing, outer creates an expansion of the two argument vectors, and passes them all at once to the function. For instance,
outer(seq_along(dfspl), seq_along(dfspl), function(a, b) { browser(); 1; })
# Called from: FUN(X, Y, ...)
# debug at #1: [1] 1
# Browse[2]>
a
# [1] 1 2 3 1 2 3 1 2 3
# Browse[2]>
b
# [1] 1 1 1 2 2 2 3 3 3
# Browse[2]>
What we ultimately want to do is find the intersection of each pair of studies.
dfspl[[1]]
# [1] 1 3 2
dfspl[[3]]
# [1] 2 3
intersect(dfspl[[1]], dfspl[[3]])
# [1] 3 2
length(intersect(dfspl[[1]], dfspl[[3]]))
# [1] 2
Granted, we are doing it twice (once for 1 and 3, once for 3 and 1, which is the same result), so this is a little inefficient; it would be better to compute only the upper (or lower) triangle and mirror it to the other half.
Edited for a more efficient process (calculating each intersection pair only once, and never calculating self-intersections):
eg <- expand.grid(a = seq_along(dfspl), b = seq_along(dfspl))
eg <- eg[ eg$a < eg$b, ]
eg
# a b
# 4 1 2
# 7 1 3
# 8 2 3
lens <- lengths(Map(intersect, dfspl[eg$a], dfspl[eg$b]))
lens
# 1 1 2 ## btw, these are just names, from eg$a
# 1 2 0
out <- matrix(nrow = length(dfspl), ncol = length(dfspl))
out[ cbind(eg$a, eg$b) ] <- lens
out
# [,1] [,2] [,3]
# [1,] NA 1 2
# [2,] NA NA 0
# [3,] NA NA NA
out[ lower.tri(out) ] <- t(out)[ lower.tri(out) ]  # mirror the upper triangle
diag(out) <- lengths(dfspl)
out
# [,1] [,2] [,3]
# [1,] 3 1 2
# [2,] 1 1 0
# [3,] 2 0 2
Same idea as @r2evans, also base R (and a bit less eloquent) (edited as required):
# Create df using sample data:
df <- data.frame(metas = c(1,1,1,2,3,3), studies = c(1,7,2,1,2,3))
# Test for equality between the values in the metas vector and the rest of
# of the values in the dataframe -- Construct symmetric matrix from vector:
v1 <- rowSums(data.frame(sapply(df$metas, `==`, unique(unlist(df)))))
m1 <- diag(v1)
m1[, 1] <- m1[1, ] <- v1
# Coerce matrix to dataframe setting the names as desired; dropping non matches:
df_2 <- setNames(data.frame(m1[which(rowSums(m1) > 0), which(colSums(m1) > 0)]),
paste0("v", 1:ncol(m1[which(rowSums(m1) > 0), which(colSums(m1) > 0)])))
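Not in either answer above, but worth noting as a sketch: when each (meta, study) pair occurs at most once, the whole overlap matrix is a single crossprod of the incidence table:

```r
df <- data.frame(metas = c(1, 1, 1, 2, 3, 3), studies = c(1, 3, 2, 1, 2, 3))

# 0/1 incidence table: rows are studies, columns are meta-analyses
inc <- table(df$studies, df$metas)

# t(inc) %*% inc counts the studies each pair of meta-analyses shares
out <- crossprod(inc)
out
#   1 2 3
# 1 3 1 2
# 2 1 1 0
# 3 2 0 2
```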
Calculate a sequence score based on a score matrix, i.e. something like sum(j[k]).
j <- matrix(1:25, ncol = 5, nrow = 5)
diag(j) <- 0
j
n <- 1:5
k <- sample(n, 5, replace = FALSE)
k <- replicate(5, sample(n, 5, replace = FALSE))
j is the score matrix; k is the sequence matrix.
Let's say k[1,] = 4 1 5 3 2 and k[2,] = 2 5 4 2 4.
Please help with two issues:
Issue 1:
Add one more column to matrix k (let's call it "score"). Based on the j matrix, the score for this first sequence should be 48:
4 1 5 3 2 48
Issue 2:
k[2,] = 2 5 4 2 4: the sample function is producing wrong permutations. I don't want any repetition in the sequence; here 4 is repeated and 1 is missing. Is there a better way to generate random permutations?
You'd better double-check the result; without a reproducible example from your end, it's difficult to confirm the values.
set.seed(1)
k <- replicate(5, sample(5))
# each column is a random permutation of 1:5
k
# [,1] [,2] [,3] [,4] [,5]
# [1,] 2 5 2 3 5
# [2,] 5 4 1 5 1
# [3,] 4 2 3 4 2
# [4,] 3 3 4 1 4
# [5,] 1 1 5 2 3
j <- matrix(1:25, 5)
diag(j) <- 0
nr <- nrow(k)
# arrange successive values as a column pair
ix <- cbind(c(k[-nr,]), c(k[-1,]))
# use the column pair to reference indices in j
jx <- j[ix]
# arrange j-values into a matrix and sum by column, producing the scores
scores <- colSums(matrix(jx, nr-1))
cbind(t(k), scores)
# scores
# [1,] 2 5 4 3 1 59
# [2,] 5 4 2 3 1 44
# [3,] 2 1 3 4 5 55
# [4,] 3 5 4 1 2 53
# [5,] 5 1 2 4 3 42
I have the information below:
coordinate <- read.table(text = " 18.915 13.462 31.598
17.898 14.453 32.160
18.220 15.420 32.853
19.208 12.313 32.573
20.393 11.524 32.110
20.344 10.809 31.085
21.595 16.610 29.912")
amnumber <- c(1,1,2,3,3,3,4)
atname <- as.data.frame(c("A","B","A","C","D","C","H"), stringsAsFactors = FALSE)
library(geometry)
tri <- delaunayn(coordinate)
tri
[,1] [,2] [,3] [,4]
[1,] 1 3 7 2
[2,] 4 1 6 2
[3,] 4 1 3 2
[4,] 4 1 3 7
[5,] 5 4 3 7
[6,] 5 1 6 7
[7,] 5 4 1 7
[8,] 5 4 1 6
I want to loop over the tri matrix so that value 1 in the first row is related to each of the following values in that row (3, 7, and 2), so in the output matrix of the loops we put a 1 at those index pairs. Then value 3 of the first row is related to the two remaining values (7 and 2), and so on. The result would be a matrix that contains only 0/1 values. To this end I wrote the loops below:
for (k in 1:nrow(tri)){
  for (i in 1:4){
    for (j in i+1){
      c <- abs(amnumber[tri[k,i]] - amnumber[tri[k,j]])
      if (c >= 1){
        if (!((atname[tri[k,i],] %in% "N") && (atname[tri[k,j],] %in% "C") && (c %in% 1) ||
              (atname[tri[k,i],] %in% "C") && (atname[tri[k,j],] %in% "N") && (c %in% 1))){
          d <- sqrt(sum((coordinate[tri[k,i],] - coordinate[tri[k,j],])^2))
          if (d <= tridist){
            adj_tri[tri[k,i], tri[k,j]] <- 1
            adj_tri[tri[k,j], tri[k,i]] <- 1
            adj_tri[is.na(adj_tri)] <- 0
          }
        }
      }
    }
  }
}
But it did not work, and I got an error: when i equals the number of columns of tri, the third loop makes j index past the last column, so I think the problem is in the third loop. However, I could not fix it. Any help would be appreciated.
Besides, this is too slow. Would you please help me change it to lapply to speed things up?
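No answer is recorded here, but as a hedged sketch of the vectorized direction the question asks about: generate all six vertex pairs of every tetrahedron with combn and filter them with vectorized tests instead of triple loops. tridist is undefined in the question, so the 2.0 cutoff below is purely illustrative, and the N/C atom-name exclusion is omitted (the sample atname contains no "N"):

```r
coordinate <- read.table(text = "18.915 13.462 31.598
17.898 14.453 32.160
18.220 15.420 32.853
19.208 12.313 32.573
20.393 11.524 32.110
20.344 10.809 31.085
21.595 16.610 29.912")
amnumber <- c(1, 1, 2, 3, 3, 3, 4)
# tetrahedra as returned by delaunayn() in the question
tri <- matrix(c(1, 4, 4, 4, 5, 5, 5, 5,
                3, 1, 1, 1, 4, 1, 4, 4,
                7, 6, 3, 3, 3, 6, 1, 1,
                2, 2, 2, 7, 7, 7, 7, 6), ncol = 4)
tridist <- 2.0  # illustrative cutoff; not given in the question

prs <- combn(ncol(tri), 2)   # the 6 vertex pairs of a tetrahedron
ii <- c(tri[, prs[1, ]])     # first vertex of every pair
jj <- c(tri[, prs[2, ]])     # second vertex of every pair

d  <- sqrt(rowSums((coordinate[ii, ] - coordinate[jj, ])^2))
ok <- abs(amnumber[ii] - amnumber[jj]) >= 1 & d <= tridist

# fill both halves of the adjacency matrix at once via index-matrix assignment
adj_tri <- matrix(0, nrow(coordinate), nrow(coordinate))
adj_tri[cbind(ii[ok], jj[ok])] <- 1
adj_tri[cbind(jj[ok], ii[ok])] <- 1
```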
Thank you for viewing this post. I am a newbie to the R language.
I want to find whether any column is a duplicate of another, and return a matrix with dimensions num.duplicates x 2, each row giving both indices of a pair of duplicated variables. The matrix is organized so that the first column holds the lower index of each pair, in increasing order.
Let's say I have a dataset
v1 v2 v3 v4 v5 v6
1 1 1 2 4 2 1
2 2 2 3 5 3 2
3 3 3 4 6 4 3
and I want this
[,1] [,2]
[1,] 1 2
[2,] 1 6
[3,] 2 6
[4,] 3 5
Please help, thank you!
Something like this I suppose:
out <- data.frame(t(combn(1:ncol(dd),2)))
out[combn(1:ncol(dd),2,FUN=function(x) all(dd[x[1]]==dd[x[2]])),]
# X1 X2
#1 1 2
#5 1 6
#9 2 6
#11 3 5
I feel like I'm missing something simpler, but this seems to work.
Here's the sample data.
dd <- data.frame(
v1 = 1:3, v2 = 1:3, v3 = 2:4,
v4 = 4:6, v5 = 2:4, v6 = 1:3
)
Now I'll assign each column to a group using ave() to look for duplicates. Then I'll count the number of columns in each group.
groups <- ave(1:ncol(dd), as.list(as.data.frame(t(dd))), FUN=min, drop=T)
Now that I have the groups, I'll split the column indexes up by those groups; for any group with more than one column, I'll grab all pairwise combinations. That creates a wide matrix, which I flip to the tall format you want with t().
morethanone <- function(x) length(x)>1
dups <- t(do.call(cbind,
lapply(Filter(morethanone, split(1:ncol(dd), groups)), combn, 2)
))
That returns
[,1] [,2]
[1,] 1 2
[2,] 1 6
[3,] 2 6
[4,] 3 5
as desired
First, generate all possible combinations with expand.grid. Second, remove duplicates and sort in the desired order. Third, use sapply to find the indexes of repeated columns:
kk <- expand.grid(1:ncol(df), 1:ncol(df))
nn <- kk[kk[, 1] > kk[, 2], 2:1]
nn[sapply(1:nrow(nn),
function(i) all(df[, nn[i, 1]] == df[, nn[i, 2]])), ]
Var2 Var1
2 1 2
6 1 6
12 2 6
17 3 5
The approach I propose is R-ish, but I suppose writing a simple double loop is justified for this case, especially if you recently started learning the language.
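For completeness, the "simple double loop" mentioned above could look like this (a sketch against the sample data, not code from the answers):

```r
dd <- data.frame(v1 = 1:3, v2 = 1:3, v3 = 2:4,
                 v4 = 4:6, v5 = 2:4, v6 = 1:3)

res <- NULL
for (i in 1:(ncol(dd) - 1)) {
  for (j in (i + 1):ncol(dd)) {   # j > i keeps the lower index first
    if (all(dd[[i]] == dd[[j]])) {
      res <- rbind(res, c(i, j))
    }
  }
}
res
#      [,1] [,2]
# [1,]    1    2
# [2,]    1    6
# [3,]    2    6
# [4,]    3    5
```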
I am a relative newbie to R, and I am now very close to finishing a rather long script, with many thanks to everyone who has helped me at various steps. I have another point I am stuck on. I have simplified the issue to this:
Dataset1
ax ay
1 3
2 4
Dataset2
bx by
5 7
6 8
A <- dataset1
B <- dataset2
a <- 2 #number of columns
b <- 1:2
(my datasets will vary in number of columns and so I need to be able to vary this factor)
I want this answer in any order (i.e., all possible combinations of two columns, one from each of the two datasets), like this or equivalent:
[[1]]
1 5
2 6
[[2]]
1 7
2 8
[[3]]
3 5
4 6
[[4]]
3 7
4 8
But I am not getting it.
I tried a bunch of things and the closest to what I want was with this:
i <- 1
for( i in 1:a )
{
e <- lapply(B, function(x) as.data.frame(cbind(A, x)))
print(e)
i <- i+1
}
Close, yes. I can take the answer and do some fiddling and subsetting, but it's not right, and there must be an easy way to do this. I have not seen anything like this in my online searches. Any help much appreciated.
Does something like this work for you?
Dataset1 <- data.frame(ax=1:2,ay=3:4)
Dataset2 <- data.frame(bx=5:6,by=7:8)
apply(
expand.grid(seq_along(Dataset1),seq_along(Dataset2)),
1,
function(x) cbind(Dataset1[x[1]],Dataset2[x[2]])
)
Result:
[[1]]
ax bx
1 1 5
2 2 6
[[2]]
ay bx
1 3 5
2 4 6
[[3]]
ax by
1 1 7
2 2 8
[[4]]
ay by
1 3 7
2 4 8
I think the easiest way to do this is very similar to what you tried: use two explicit loops. However, there are still some things I would do differently:
Pre-allocate the list space
Use an explicit counter
Use drop=FALSE
Then you can do the following.
A <- read.table(text = "ax ay
1 3
2 4", header = TRUE)
B <- read.table(text = "bx by
5 7
6 8", header = TRUE)
out <- vector("list", length = ncol(A) * ncol(B))
counter <- 1
for (i in 1:ncol(A)) {
for (j in 1:ncol(B)) {
out[[counter]] <- cbind(A[,i, drop = FALSE], B[,j, drop = FALSE])
counter <- counter + 1
}
}
out
## [[1]]
## ax bx
## 1 1 5
## 2 2 6
##
## [[2]]
## ax by
## 1 1 7
## 2 2 8
##
## [[3]]
## ay bx
## 1 3 5
## 2 4 6
##
## [[4]]
## ay by
## 1 3 7
## 2 4 8
If I understand the question, I think you can use combn to select the columns you want. For instance, if you wanted all combinations of 8 columns taken 2 at a time, you could do:
combn(1:8, 2)
Which gives (in part for readability):
combn(1:8,2)[,c(1:5, 15:18)]
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,] 1 1 1 1 1 3 3 3 3
[2,] 2 3 4 5 6 5 6 7 8
So then columns of this matrix can be used as the indices you want.
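To make that last step concrete, here is a small sketch (sample data assumed, not from the answer) of turning each combn column into the corresponding pair of data-frame columns:

```r
dd <- data.frame(v1 = 1:2, v2 = 3:4, v3 = 5:6)

idx <- combn(ncol(dd), 2)   # each column of idx is one pair of column indices
pairs <- lapply(seq_len(ncol(idx)), function(k) dd[, idx[, k]])

pairs[[1]]
#   v1 v2
# 1  1  3
# 2  2  4
```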