Project R: Variable "depth" of for-loops; generalization

Thank you for helping me out with this one. It's a problem that has been bothering me for quite a while now. I feel like I am so close to the answer, but not quite there yet.
The issue is the following:
Suppose I want to build every possible combination of the elements of n vectors and store the result of, let's say, a multiplication. For the 2D problem I need two nested for-loops:
dim2_Matrix <- matrix(0, nrow = 2, ncol = 3)
for (i in 1:2) {
  for (j in 1:3) {
    dim2_Matrix[i, j] <- i * j
  }
}
The inner loop runs through all 3 values of j, multiplying each with the current value of i from the outer loop. Once that pass is done, i is increased and the inner loop starts from j = 1 again. We get 2*3 = 6 combinations.
Now let's raise that to a 3D problem. We need a third loop for that:
dim3_Matrix <- array(0, dim = c(2, 3, 4))
for (i in 1:2) {
  for (j in 1:3) {
    for (k in 1:4) {
      dim3_Matrix[i, j, k] <- i * j * k
    }
  }
}
It runs the innermost loop 4 times, increases the middle one, runs the innermost loop 4 more times, and so on, until we get 2*3*4 = 24 combinations in a 3D array.
I could continue like this with dim4, dim5 etc.
My problem is that I want to keep the script flexible. Sometimes I'll want to combine 2 vectors, sometimes 3, sometimes n. Suppose I know how many "layers" there are before the loops are run: how do I get a generalized form of this?
I have gotten as far as this:
n_dimensions <- 3         # specify the number of dimensions
m_Elements <- c(2, 3, 4)  # 2 elements in 1st dim, 3 in 2nd, 4 in 3rd
for (i in 1:n_dimensions) {
  for (j in 1:m_Elements[i]) {
    # ...
  }
}
But this will go like:
i1 j1 --> i1 j2
i2 j1 --> i2 j2 --> i2 j3
i3 j1 --> i3 j2 --> i3 j3 --> i3 j4
so this is 2 + 3 + 4 combinations instead of 2*3*4.
Please note: multiplying is only an example. Storing the results in a matrix/tensor is not the main problem; it's how to nest the loops and generalize them.
Thanks for reading through, I hope you get what I mean!

You can try something like this.
X <- list(1:2, 1:3, 1:4)  # one entry for each dimension
Z <- expand.grid(X)
Z looks like:
Var1 Var2 Var3
1 1 1 1
2 2 1 1
3 1 2 1
4 2 2 1
5 1 3 1
6 2 3 1
7 1 1 2
8 2 1 2
9 1 2 2
10 2 2 2
11 1 3 2
12 2 3 2
13 1 1 3
14 2 1 3
15 1 2 3
16 2 2 3
17 1 3 3
18 2 3 3
19 1 1 4
20 2 1 4
21 1 2 4
22 2 2 4
23 1 3 4
24 2 3 4
So now you have every combination in a data.frame and you can use apply functions or something similar to do what you need to do. Such as:
apply(Z,1,prod)
[1] 1 2 2 4 3 6 2 4 4 8 6 12 3 6 6 12 9 18 4 8 8 16 12 24
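If you also want the products back in an n-dimensional array rather than a flat vector, here is a small follow-up sketch (it relies on expand.grid varying its first variable fastest, which matches R's column-major array layout):
X <- list(1:2, 1:3, 1:4)
Z <- expand.grid(X)
res <- array(apply(Z, 1, prod), dim = lengths(X))  # reshape the 24 products into a 2 x 3 x 4 array
res[2, 3, 4]  # 24, same as 2*3*4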

Your code is equivalent to:
dim2_Matrix <- outer(1:2, 1:3)
dim3_Matrix <- outer(dim2_Matrix, 1:4)
Which can be generalized to:
dim_n_Matrix <- function(n) {
  x <- 1:2
  if (n > 1) {
    for (d in 2:n) {
      x <- outer(x, 1:(d + 1))  # dimension d has d + 1 elements (2, 3, 4, ...)
    }
  } else {
    x <- matrix(1:2, nrow = 1)
  }
  return(x)
}
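For a version driven directly by the m_Elements vector from the question, here is a sketch: Reduce() chains outer() across however many dimensions you pass in (dim_n_array is just an illustrative name).
dim_n_array <- function(m_Elements) {
  # build 1:m for each dimension, then chain outer() over them
  Reduce(outer, lapply(m_Elements, seq_len))
}
str(dim_n_array(c(2, 3, 4)))  # a 2 x 3 x 4 array whose [i, j, k] entry is i*j*k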

Related

How do I run my loop for each simulation and create a new vector with these values?

This is my data frame:
time <- rep(c(1:5), 4)
sim1 <- rep(paste("sim", 1), 5)
sim2 <- rep(paste("sim", 2), 5)
sim3 <- rep(paste("sim", 3), 5)
sim4 <- rep(paste("sim", 4), 5)
sim <- c(sim1, sim2, sim3, sim4)
id <- as.vector(replicate(4, sample(1:5)))
df <- data.frame(time, sim, id)
df$simnu <- as.numeric(factor(df$sim))  # factor() so this also works when sim is a character column
Which should look something like this:
time sim id simnu
1 1 sim 1 1 1
2 2 sim 1 3 1
3 3 sim 1 2 1
4 4 sim 1 4 1
5 5 sim 1 5 1
6 1 sim 2 1 2
7 2 sim 2 5 2
8 3 sim 2 4 2
9 4 sim 2 2 2
10 5 sim 2 3 2
11 1 sim 3 2 3
12 2 sim 3 3 3
13 3 sim 3 4 3
14 4 sim 3 1 3
15 5 sim 3 5 3
16 1 sim 4 3 4
17 2 sim 4 5 4
18 3 sim 4 2 4
19 4 sim 4 1 4
20 5 sim 4 4 4
I have created this loop that subsets the data by simulation and then calculates the output I want:
surveillance <- 5
n <- 1
simsub <- df[which(df$simnu == 1), names(df) %in% c("time", "sim", "id")]
while (n <= surveillance) {
  print(n)
  rndid <- df[sample(nrow(simsub), 1), ]
  print(rndid)
  if (n < rndid$time) {
    n <- n + 1
  } else {
    tinf <- sum(length(df[which(simsub$time <= n), 1]))
    prev <- tinf / length(simsub[, 1])
    print(paste(prev, "prevalence"))
    break
  }
}
My question is how do I run this loop for each simulation and return the values of this as a vector?
My suggestion is to take a look at the lapply function (or sapply and vapply) and to avoid using while. To be honest, it's a bit tricky to help without really knowing what is happening in your code, but in any case here's an example of how you can use lapply. Since I don't know what your code should return, I can't be sure the output is correct.
I added comments and questions alongside your original lines; hope this helps.
# first define a function that takes one simnu and returns whatever you want it to return
my_calc_fun <- function(sim_nr) {
  ## you can subset the DF without which, names, or %in%
  # simres[[i]] <- my_df[which(my_df$simnu==i), names(my_df) %in% c("time","sim","id")]
  sim_df <- my_df[my_df$simnu == sim_nr, c("time", "sim", "id")]
  for (n in 1:surveillance) {
    ## I'm not sure that is what you meant to do:
    ## you are sampling the full DF, but you want a sample
    ## from the subset, i.e. simres[[i]]
    # rndid <- my_df[sample(nrow(simres[[i]]), 1), ]
    row_id <- sample(nrow(sim_df), 1)
    rndid <- sim_df[row_id, ]
    if (n >= rndid$time) {
      ## what are you trying to sum here?
      ## you are giving sum() a single number, length(....),
      ## and just like above you are subsetting the full DF here
      # tinf <- sum(length(my_df[which(simres[[i]]$time<=n), 1]))
      tinf <- length(sim_df[sim_df$time <= n, 1])
      # is this the value you want to return for each simnu?
      prev <- tinf / nrow(sim_df)
      break
    }
  }
  return(c("simnu" = sim_nr, "prev" = prev))
}
# apply this function on all values of simnu and save to list
result_all <- lapply(unique(my_df$simnu), my_calc_fun)
result_all
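If you would rather have the results in a single table instead of a list, one possible follow-up (assuming each call returns the named vector shown above):
result_df <- as.data.frame(do.call(rbind, result_all))  # one row per simnu, columns simnu and prev
result_df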

Building all combinations of a vector - looking for a nicer way

I have a simple case which I could solve in an ugly way, but I am sure a much cleverer (and faster) way must exist.
Let's take this vector
d <- 1:6
I want to list all the possible combinations in a "going-forward" way:
1 2
1 3
1 4
1 5
1 6
2 3
2 4
2 5
...
5 6
The working way I first came up with is the following:
n <- 6
combDF <- data.frame()
for (i in 1:(n - 1)) {
  thisVal <- rep(i, n - i)
  nextVal <- cumsum(rep(1, n - 1)) + 1
  nextVal <- nextVal[nextVal > i]
  print("---")
  print(thisVal)
  print(nextVal[nextVal > i])
  df <- data.frame(thisVal = thisVal, nextVal = nextVal)
  combDF <- rbind(combDF, df)
}
I am sure there must be a cleverer way of doing that.
Asking out loud helped: I just found this way.
as.data.frame(t(combn(d,m=2)))
V1 V2
1 1 2
2 1 3
3 1 4
4 1 5
5 1 6
6 2 3
7 2 4
8 2 5
9 2 6
10 3 4
11 3 5
12 3 6
13 4 5
14 4 6
15 5 6
One approach using expand.grid and subsetting:
d <- 1:6
foo <- expand.grid(a = d, b = d)
foo[foo[, "a"] > foo[, "b"], c("b", "a")]
This may not be the best way to go about things if you have large vectors and memory constraints as the expand.grid call generates a lot of items that are removed, but it is quite readable and communicates intent clearly.
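If you want the rows in the same order as the combn() output above, here is a small follow-up sketch continuing from the foo defined there:
res <- foo[foo[, "a"] > foo[, "b"], c("b", "a")]
res <- res[order(res$b, res$a), ]  # order the pairs the way combn() lists them
rownames(res) <- NULL
res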

Calculating molecular formulas out of mass of certain elements

For a chemistry project at school I want to calculate molecular masses of all possible combinations of molecular formulas including carbon (1 atom up to 100), oxygen (1 up to 50), hydrogen (1 up to 200), nitrogen (1 up to 20) and sulfur (1 up to 10) and save the results in one vector and the corresponding molecular formula string in another vector. The masses are numeric values: 12, 16, 1, 14 and 32. The strings are "C", "O", "H", "N", "S".
I want to delete molecular formulas that make no sense, like C1 O100 H0 N20 S10, from the string vector and the corresponding mass, too. To be more specific: only keep the ones with an O/C ratio between 0 and 1, an H/C ratio between 1 and 2, an N/C ratio between 0 and 0.2 and an S/C ratio between 0 and 0.1.
Is there an easy way to do this? Is using a for loop the only way, or is there a faster way (maybe arrays?), and how can I take the element ratios into account?
I would be very happy for some ideas or basic code to solve this.
Edit: @Gregor, so excluding the ratios that don't make sense before the whole list is created is probably better? @Barker, yes, atoms like nitrogen should go from 0 to the maximum. I am very new to R, so when I try a loop I end up with only the last value calculated (with a reduced number of dimensions):
z <- matrix(0, 1, 5 * 20 * 10 * 2 * 2)
C <- 12
O <- 16
H <- 1
N <- 14
S <- 32
for (u in 1:length(z)) {
  for (i in 1:5) {
    for (j in 1:20) {
      for (k in 1:10) {
        for (l in 0:1) {
          for (m in 0:1) {
            z[1, u] <- C * i + H * j + O * k + N * l + S * m
          }
        }
      }
    }
  }
}
Does anyone know where the mistake is here?
expand.grid is a good place to start in generating combinations. For example, to create a data.frame with combinations of H and C you could do this
mol = expand.grid(C = 1:3, H = 1:4)
mol
# C H
# 1 1 1
# 2 2 1
# 3 3 1
# 4 1 2
# 5 2 2
# 6 3 2
# 7 1 3
# 8 2 3
# 9 3 3
# 10 1 4
# 11 2 4
# 12 3 4
You can add the other elements in expand.grid as well and also adjust the inputs up to 1:200 or however many you want. Note that the full ranges in your question imply about 200 million rows (100 * 50 * 200 * 20 * 10), which is very big; if you can reduce the total number of combinations to a few million it will be much easier on your memory.
The next step would be to delete rows that don't meet your ratio criteria. Here's one example, to make sure that the number of H is between 1 and 2 times the number of C:
mol = mol[mol$H >= mol$C & mol$H <= 2 * mol$C, ]
mol
# C H
# 1 1 1
# 4 1 2
# 5 2 2
# 8 2 3
# 9 3 3
# 11 2 4
# 12 3 4
Repeat steps like that for all your conditions.
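As a sketch of what "all your conditions" could look like once all five elements are in the table (mol_all is a hypothetical, reduced-range version so it runs quickly; the full ranges from the question work the same way, just with far more rows):
mol_all <- expand.grid(C = 1:20, O = 1:10, H = 1:40, N = 1:4, S = 1:2)
mol_all <- mol_all[with(mol_all,
                        O / C <= 1 &               # O/C between 0 and 1
                        H / C >= 1 & H / C <= 2 &  # H/C between 1 and 2
                        N / C <= 0.2 &             # N/C between 0 and 0.2
                        S / C <= 0.1), ]           # S/C between 0 and 0.1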
Finally you can calculate the weights and put it in a new column:
mol$weight = with(mol, C * 12 + H * 1)
mol
# C H weight
# 1 1 1 13
# 4 1 2 14
# 5 2 2 26
# 8 2 3 27
# 9 3 3 39
# 11 2 4 28
# 12 3 4 40
You could use matrix multiplication for the weight calculation, but there's no need with a small number of possible elements. If you had 20 or more possible input elements it would make sense to do it that way.
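For completeness, a sketch of that matrix-multiplication version (masses here is just an illustrative named vector of atomic masses; it reproduces the weight column above):
masses <- c(C = 12, H = 1)  # atomic masses in the order of the columns we select
mol$weight <- as.vector(as.matrix(mol[names(masses)]) %*% masses)  # same values as C * 12 + H * 1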
Bonus! Formulas can be created with paste or paste0:
mol$formula = paste0("C", mol$C, " H", mol$H)
mol
# C H weight formula
# 1 1 1 13 C1 H1
# 4 1 2 14 C1 H2
# 5 2 2 26 C2 H2
# 8 2 3 27 C2 H3
# 9 3 3 39 C3 H3
# 11 2 4 28 C2 H4
# 12 3 4 40 C3 H4
Of course, most of these still won't make chemical sense - C1 H1 isn't something that would really exist, but maybe you can come up with even smarter conditions to get rid of more of the impossibilities!

Using two grouping designations to create one 'combined' grouping variable

Given a data.frame:
df <- data.frame(grp1 = c(1,1,1,2,2,2,3,3,3,4,4,4),
                 grp2 = c(1,2,3,3,4,5,6,7,8,6,9,10))
#> df
# grp1 grp2
#1 1 1
#2 1 2
#3 1 3
#4 2 3
#5 2 4
#6 2 5
#7 3 6
#8 3 7
#9 3 8
#10 4 6
#11 4 9
#12 4 10
Both columns are grouping variables, such that all 1's in column grp1 are known to be grouped together, and so on with all 2's, etc. The same goes for grp2: all 1's are known to be the same, all 2's the same.
Thus, if we look at the 3rd and 4th row, based on column 1 we know that the first 3 rows can be grouped together and the second 3 rows can be grouped together. Then since rows 3 and 4 share the same grp2 value, we know that all 6 rows, in fact, can be grouped together.
Based off the same logic we can see that the last six rows can also be grouped together (since rows 7 and 10 share the same grp2).
Aside from writing a fairly involved set of for() loops, is there a more straightforward approach to this? I haven't been able to think of one yet.
The final output that I'm hoping to obtain would look something like:
# > df
# grp1 grp2 combinedGrp
# 1 1 1 1
# 2 1 2 1
# 3 1 3 1
# 4 2 3 1
# 5 2 4 1
# 6 2 5 1
# 7 3 6 2
# 8 3 7 2
# 9 3 8 2
# 10 4 6 2
# 11 4 9 2
# 12 4 10 2
Thank you for any direction on this topic!
I would define a graph and label nodes according to connected components:
gmap = unique(stack(df))
gmap$node = seq_len(nrow(gmap))
oldcols = unique(gmap$ind)
newcols = paste0("node_", oldcols)
df[ newcols ] = lapply(oldcols, function(i) with(gmap[gmap$ind == i, ],
  node[ match(df[[i]], values) ]
))
library(igraph)
g = graph_from_edgelist(cbind(df$node_grp1, df$node_grp2), directed = FALSE)
gmap$group = components(g)$membership
df$group = gmap$group[ match(df$node_grp1, gmap$node) ]
grp1 grp2 node_grp1 node_grp2 group
1 1 1 1 5 1
2 1 2 1 6 1
3 1 3 1 7 1
4 2 3 2 7 1
5 2 4 2 8 1
6 2 5 2 9 1
7 3 6 3 10 2
8 3 7 3 11 2
9 3 8 3 12 2
10 4 6 4 10 2
11 4 9 4 13 2
12 4 10 4 14 2
Each unique element of grp1 or grp2 is a node and each row of df is an edge.
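A shorter construction of the same graph, as a sketch (this is not Frank's exact code, just the same idea): prefix the two columns so grp1 and grp2 values cannot collide, and let igraph build the node set itself.
library(igraph)
edges <- cbind(paste0("g1_", df$grp1), paste0("g2_", df$grp2))
g2 <- graph_from_edgelist(edges, directed = FALSE)
memb <- components(g2)$membership
df$combinedGrp <- unname(memb[paste0("g1_", df$grp1)])  # same grouping; component labels may be numbered differently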
One way to do this is via a matrix that defines links between rows based on group membership.
This approach is related to @Frank's graph answer but uses an adjacency matrix rather than an edge list to define the graph. An advantage of this approach is that it immediately handles more than two grouping columns with the same code (so long as you write the function that determines links flexibly). A disadvantage is that you need to make all pairwise comparisons between rows to construct the matrix, so for very long data it could be slow. As is, @Frank's answer would work better for very long data, or if you only ever have two columns.
The steps are:
1. compare rows based on groups and define those rows as linked (i.e., create a graph);
2. determine the connected components of the graph defined by the links in 1.
You could do step 2 a few ways. Below I show a brute-force way where you 2a) collapse links until reaching a stable link structure using matrix multiplication, and 2b) convert the link structure to a factor using hclust and cutree. You could also use igraph::clusters on a graph created from the matrix.
1. construct an adjacency matrix (a matrix of pairwise links) between rows
(i.e., if two rows are in the same group, the matrix entry is 1, otherwise it's 0). First, make a helper function that determines whether two rows are linked:
linked_rows <- function(data){
  ## helper function
  ## returns a _function_ to compare two rows of data
  ## based on group membership.
  ## Use Vectorize so it works even on vectors of indices
  Vectorize(function(i, j) {
    ## numeric: 1 = i and j have overlapping group membership
    common <- vapply(names(data), function(name)
      data[i, name] == data[j, name],
      FUN.VALUE = FALSE)
    as.numeric(any(common))
  })
}
which I use in outer to construct a matrix,
rows <- 1:nrow(df)
A <- outer(rows, rows, linked_rows(df))
2a. collapse 2-degree links to 1-degree links. That is, if rows are linked by an intermediate node but not directly linked, lump them in the same group by defining a link between them.
One iteration involves: i) matrix multiply to get the square of A, and
ii) set any non-zero entry in the squared matrix to 1 (as if it were a first degree, pairwise link)
## define as a function to use below
lump_links <- function(A) {
  A <- A %*% A
  A[A > 0] <- 1
  A
}
Repeat this until the links are stable:
oldA <- 0
while (any(oldA != A)) {
  oldA <- A
  A <- lump_links(A)
}
2b. Use the stable link structure in A to define groups (connected components of the graph). You could do this a variety of ways.
One way is to first define a distance object, then use hclust and cutree. If you think about it, we want to define linked rows (A[i,j] == 1) as being at distance 0. So the steps are a) define linked as distance 0 in a dist object, b) construct a tree from the dist object, c) cut the tree at zero height (i.e., zero distance):
df$combinedGrp <- cutree(hclust(as.dist(1 - A)), h = 0)
df
In practice you can encode steps 1 - 2 in a single function that uses the helper lump_links and linked_rows:
lump <- function(df) {
  rows <- 1:nrow(df)
  A <- outer(rows, rows, linked_rows(df))
  oldA <- 0
  while (any(oldA != A)) {
    oldA <- A
    A <- lump_links(A)
  }
  df$combinedGrp <- cutree(hclust(as.dist(1 - A)), h = 0)
  df
}
This works for the original df and also for the structure in @rawr's answer:
df <- data.frame(grp1 = c(1,1,1,2,2,2,3,3,3,4,4,4,5,5,6,7,8,9),
                 grp2 = c(1,2,3,3,4,5,6,7,8,6,9,10,11,3,12,3,6,12))
lump(df)
grp1 grp2 combinedGrp
1 1 1 1
2 1 2 1
3 1 3 1
4 2 3 1
5 2 4 1
6 2 5 1
7 3 6 2
8 3 7 2
9 3 8 2
10 4 6 2
11 4 9 2
12 4 10 2
13 5 11 1
14 5 3 1
15 6 12 3
16 7 3 1
17 8 6 2
18 9 12 3
PS
Here's a version using igraph, which makes the connection with @Frank's answer clearer:
lump2 <- function(df) {
  rows <- 1:nrow(df)
  A <- outer(rows, rows, linked_rows(df))
  cluster_A <- igraph::clusters(igraph::graph.adjacency(A))
  df$combinedGrp <- cluster_A$membership
  df
}
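Usage should be the same as lump() above (assuming linked_rows is already defined and igraph is installed):
lump2(df)  # same combinedGrp column as lump(df), up to the numbering of the groups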
Hope this solution helps you a bit:
Assumption: df is ordered on the basis of grp1.
## split dataset using values of grp1
split_df <- split.default(df$grp2, df$grp1)
parent <- vector('integer', length(split_df))

## find out which combinations have values of grp2 in common
for (i in seq(1, length(split_df) - 1)) {
  for (j in seq(i + 1, length(split_df))) {
    inter <- intersect(split_df[[i]], split_df[[j]])
    if (length(inter) > 0) {
      parent[j] <- i
    }
  }
}

ans <- vector('list', length(split_df))
index <- which(parent == 0)

## index contains the indices of elements that have no values in common with an earlier element
for (i in seq_along(index)) {
  ans[[index[i]]] <- rep(i, length(split_df[[i]]))
}

rest_index <- seq(1, length(split_df))[-index]
for (i in rest_index) {
  val <- ans[[parent[i]]][1]
  ans[[i]] <- rep(val, length(split_df[[i]]))
}

df$combinedGrp <- unlist(ans)
df
grp1 grp2 combinedGrp
1 1 1 1
2 1 2 1
3 1 3 1
4 2 3 1
5 2 4 1
6 2 5 1
7 3 6 2
8 3 7 2
9 3 8 2
10 4 6 2
11 4 9 2
12 4 10 2
Based on https://stackoverflow.com/a/35773701/2152245, I used a slightly different igraph approach because I already had an adjacency matrix of sf polygons from st_intersects():
library(igraph)
library(sf)
library(dplyr)  # for group_by() and summarize() below

# Use example data
nc <- st_read(system.file("shape/nc.shp", package = "sf"))
nc <- nc[-sample(1:nrow(nc), nrow(nc) * .75), ]  # drop some polygons

# Find intersections
b <- st_intersects(nc, sparse = FALSE)
g <- graph.adjacency(b)
clu <- components(g)
gr <- groups(clu)

# Quick loop to assign the groups
for (i in 1:nrow(nc)) {
  for (j in 1:length(gr)) {
    if (i %in% gr[[j]]) {
      nc[i, 'group'] <- j
    }
  }
}

# Make a new sfc object
nc_un <- group_by(nc, group) %>%
  summarize(BIR74 = mean(BIR74), do_union = TRUE)
plot(nc_un['BIR74'])
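A side note, not part of the original answer: since components() already returns one membership id per polygon, the double loop above can probably be replaced by a direct assignment:
nc$group <- clu$membership  # one group id per row of nc, in row order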

Divide each row by a different number

I've looked on the internet but I haven't found the answer I'm looking for, though I'm sure it's out there...
I have a data frame, and I want to divide (or apply any other operation to) every cell of a row by the value placed in the second column of that same row.
So for the first row, from col3 to the last column, divide each cell by the value of col2 of that row, and so on for every single row.
I have solved this by using a for loop: col2 (delta) is now a vector, and col3 to the end is a data.frame (mu). The results are appended to a new data frame using rbind.
The question is: I'm pretty sure this can be done using apply, sapply or similar, but so far I have not gotten the results I've been looking for (not the same correct ones I get with the for loop). How can I do it without using a for loop?
The for loop I've been using so far:
In summary:
I want to divide each row of mu by the delta value of that same row.
for (i in 1:(dim(mu)[1])) {
  RA_row <- mu[i, ] / delta[i]
  RA <- rbind(RA, RA_row)
}
transcript delta mu_5 mu_15 mu_25 mu_35 mu_45 mu_55 mu_65
1 YAL001C 0.066702720 2.201787e-01 1.175731e-01 2.372506e-01 0.139281317 0.081723456 1.835414e-01 1.678318e-01
2 YAL002W 0.106000180 3.685822e-01 1.326865e-01 2.887973e-01 0.158207858 0.193476082 1.867039e-01 1.776946e-01
3 YAL003W 0.022119345 2.271518e+00 2.390637e+00 1.651997e+00 3.802739732 2.733559839 2.772454e+00 3.571712e+00
Thanks
It appears as though you want just:
mu2 <- mu[-(1:2)] / mu[[2]]
# same as mu[-(1:2)] / mu[['delta']]
That should produce a new data frame with the division done row by row. Somewhat more dangerous would be to do the division "in place":
mu[-(1:2)] <- mu[-(1:2)] / mu[[2]]
> mu <- data.frame(a=1,b=1:10, c=rnorm(10), d=rnorm(10) )
> mu
a b c d
1 1 1 -1.91435943 0.45018710
2 1 2 1.17658331 -0.01855983
3 1 3 -1.66497244 -0.31806837
4 1 4 -0.46353040 -0.92936215
5 1 5 -1.11592011 -1.48746031
6 1 6 -0.75081900 -1.07519230
7 1 7 2.08716655 1.00002880
8 1 8 0.01739562 -0.62126669
9 1 9 -1.28630053 -1.38442685
10 1 10 -1.64060553 1.86929062
> (mu2 <- mu[-(1:2)]/mu[[2]])
c d
1 -1.914359426 0.450187101
2 0.588291656 -0.009279916
3 -0.554990812 -0.106022792
4 -0.115882600 -0.232340537
5 -0.223184021 -0.297492062
6 -0.125136500 -0.179198716
7 0.298166649 0.142861258
8 0.002174452 -0.077658337
9 -0.142922281 -0.153825205
10 -0.164060553 0.186929062
> mu[-(1:2)] <- mu[-(1:2)]/mu[[2]]
> mu
a b c d
1 1 1 -1.914359426 0.450187101
2 1 2 0.588291656 -0.009279916
3 1 3 -0.554990812 -0.106022792
4 1 4 -0.115882600 -0.232340537
5 1 5 -0.223184021 -0.297492062
6 1 6 -0.125136500 -0.179198716
7 1 7 0.298166649 0.142861258
8 1 8 0.002174452 -0.077658337
9 1 9 -0.142922281 -0.153825205
10 1 10 -0.164060553 0.186929062
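Since the question asked about apply-style alternatives: sweep() also does this row-wise division in one call. A sketch using the same toy data frame as the demo above (for the real data the divisor would be the delta column):
mu <- data.frame(a = 1, b = 1:10, c = rnorm(10), d = rnorm(10))
RA <- sweep(mu[-(1:2)], 1, mu[[2]], "/")  # divide row i of columns c and d by b[i]
RA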
