Passing vector elements to call as consecutive arguments - r

Given a function and a test vector:
multiply_stuff <- function(...) {
  arguments <- list(...)
  Reduce(f = `*`, x = arguments)
}
test_vec <- c(1, 20, 3, 40, 5, 60)
I would like to automatically create an unevaluated call that lists all elements of the passed vector as arguments. In this example that would be equivalent to the expression:
call("multiply_stuff",
test_vec[1],
test_vec[2],
test_vec[3],
test_vec[4],
test_vec[5],
test_vec[6])
Attempts
For instance, for the vector:
test_vec_B <- c(1, 5, 6, 8, 9, 11, 12, 14, 20, 11)
I would like to automatically list all elements of test_vec_B as arguments within call("multiply_stuff", ...). Naturally, this won't work:
call("multiply_stuff", test_vec_B)
call("multiply_stuff", list(test_vec_B))
Desired results
An unevaluated expression equivalent to:
call(
  "multiply_stuff",
  test_vec_B[1],
  test_vec_B[2],
  test_vec_B[3],
  test_vec_B[4],
  test_vec_B[5],
  test_vec_B[6],
  test_vec_B[7],
  test_vec_B[8],
  test_vec_B[9],
  test_vec_B[10]
)

You can create a call object and then add arguments to it:
multiply_stuff <- function(...) {
  arguments <- list(...)
  Reduce(f = `*`, x = arguments)
}
test_vec_B <- c(1, 5, 6, 8, 9, 11, 12, 14, 20, 11)

get_call <- function(f, arg_vector){
  my_call <- call(f)
  my_call[2:(length(arg_vector) + 1)] <- arg_vector
  return(my_call)
}
multiply_stuff(1, 5, 6, 8, 9, 11, 12, 14, 20, 11)
[1] 878169600
test_call <- get_call("multiply_stuff", test_vec_B)
eval(test_call)
[1] 878169600
Explanation: when you create a call object, you can access and modify the function and its arguments by index, much like a list. Index 1 is the function, indices from 2 onwards are the arguments. Run this to verify:
test_call2 <- call("sum", 1, 2)
test_call2[1]
test_call2[2]
test_call2[3]
eval(test_call2)
test_call2[3] <- 1234
eval(test_call2)
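For what it's worth, the same unevaluated call can also be built in one step with as.call() (a minimal sketch, using the same multiply_stuff and test_vec_B as above):
# Sketch: as.call() turns a list into a call; the first element is the
# function name (as a symbol), the remaining elements become the arguments,
# here the individual elements of test_vec_B.
test_call_alt <- as.call(c(as.name("multiply_stuff"), as.list(test_vec_B)))
test_call_alt
multiply_stuff(1, 5, 6, 8, 9, 11, 12, 14, 20, 11)
eval(test_call_alt)
[1] 878169600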

Related

Extract a vector from nested lists in R

I am trying to extract a vector from a nested list based on the value of another variable/element within the same nested list. Hopefully my example will explain what I'm trying to do.
To begin, I have a list of lists like so:
## Create the inner lists
# Inner list 1
listInner1 <- list(
value = c(0.25),
index = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
left = c(),
right = c()
)
listInner1$left$index <- c(1, 2, 3, 4, 5)
listInner1$left$good <- TRUE
listInner1$right$index <- c(6, 7, 8, 8, 10)
listInner1$right$good <- TRUE
# Inner list 2
listInner2 <- list(
value = c(1.5),
index = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
left = c(),
right = c()
)
listInner2$left$index <- c(1, 2, 3)
listInner2$left$good <- TRUE
listInner2$right$index <- c(4, 5, 6, 7, 8, 9, 10)
listInner2$right$good <- TRUE
# Inner list 3
listInner3 <- list(
value = c(0.5),
index = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
left = c(),
right = c()
)
listInner3$left$index <- c(1, 2, 3, 4, 5)
listInner3$right$index <- c( 6, 7, 8, 9, 10)
listInner3$left$left$index <- c(2, 4, 6, 8, 10)
listInner3$left$right$index <- c(1, 3, 5 ,7, 9)
listInner3$left$left$good <- TRUE
listInner3$left$right$good <- TRUE
# put all inner lists into single list object
listMiddle <- list(listInner1, listInner2, listInner3)
# one more list for fun
listMaster <- list(listMiddle)
As you can see, some of the left and right elements of the nested lists contain the element good = TRUE and some don't.
What I'm trying to do is: if a particular nested list contains the element good = TRUE, then extract the element index from that same nested list.
For example, manually creating my desired output for the above example would look something like this:
ans <- list(
index.1 = c(1, 2, 3, 4, 5),
index.2 = c(6, 7, 8, 8, 10),
index.3 = c(1, 2, 3),
index.4 = c(4, 5, 6, 7, 8, 9, 10),
index.5 = c(2, 4, 6, 8, 10),
index.6 = c(1, 3, 5 ,7, 9)
)
The object ans contains all the index vectors that are contained within a nested list that also contains good = TRUE.
Any suggestions as to how I could do this?
Here is an option where we bind the nested elements into a more easily approachable format with rrapply. Then we get the positions of the 'good' columns, extract the corresponding 'index' elements by looping over the pairs with map2 (based on the TRUE values), transpose the list, keep only the elements with length greater than 0, flatten the list, and set the names (if needed).
library(purrr)
library(rrapply)
library(stringr)
library(dplyr)
out <- rrapply(listMaster, how = 'bind')
i1 <- grep('good', names(out))
map2(out[i1 - 1], out[i1], `[`) %>%
  transpose %>%
  map(~ keep(.x, lengths(.x) > 0)) %>%
  flatten %>%
  setNames(str_c('index.', seq_along(.)))
Output:
$index.1
[1] 1 2 3 4 5
$index.2
[1] 6 7 8 8 10
$index.3
[1] 1 2 3
$index.4
[1] 4 5 6 7 8 9 10
$index.5
[1] 2 4 6 8 10
$index.6
[1] 1 3 5 7 9
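If you prefer to avoid the extra packages, a plain recursive walk does the same job (a minimal base R sketch; the helper name collect_good_index is made up for illustration, and it assumes the structure above where 'good' and 'index' sit side by side in the same sub-list):
# Base R sketch: recursively visit every nested list and keep x$index
# whenever the same list also carries good = TRUE.
collect_good_index <- function(x) {
  out <- list()
  if (is.list(x)) {
    if (isTRUE(x$good) && !is.null(x$index)) {
      out <- list(x$index)
    }
    for (el in x) {
      if (is.list(el)) out <- c(out, collect_good_index(el))
    }
  }
  out
}
res <- collect_good_index(listMaster)
setNames(res, paste0("index.", seq_along(res)))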

identify and quantify cluster types in igraph

I have the following graph, which is made from the given data:
library(igraph)
K01 <- cbind(c(1, 3, 4, 6, 7, 8, 9, 11, 13, 14),
             c(2, 4, 5, 7, 8, 6, 10, 12, 14, 15))
K02 <- graph_from_edgelist(K01, directed = FALSE)
I'd like to be able to quantify the number and type of all subclusters in the graph. So I'd like to be able to separate clusters of the same size by whether they're circular or not, or by other properties of the nodes.
I can get cluster size pretty easily with:
K03 <- components(K02)
K04 <- groups(K03)
But this doesn't help me sort by cluster structure.
I'm not entirely sure, but it seems that as long as there are no loops, if the number of edges in a group is equal to or greater than the number of vertices, it is circular (a connected component without a cycle is a tree, which always has exactly one fewer edge than vertices).
with(do.call(rbind, lapply(K04, function(x){
  g = induced_subgraph(K02, x)
  data.frame(n = vcount(g),
             type = ifelse(ecount(g) >= vcount(g), "circular", "simple"))
})), table(n, type))
#    type
# n   simple circular
#   2      3        0
#   3      2        1
So, thanks to d.b for his help on this. I broke down and wrote a function that I think performs the task I was trying to accomplish. It's a bit verbose and probably clunky, but it seems to work for what I need it to do.
######
# take an igraph graph
# and return a nested list the length of the number of unique cluster structures
# where each list item is a list of clusters that shared that structure
# names ?
######
CategorizeSubGraphs <- function(IGraph) {
  cat("\n")
  # get groups
  IGroups <- groups(components(IGraph))
  Structures <- vector("list", length = length(IGroups))
  StructNames <- vector("list", length = length(IGroups))
  AllSubGraphs <- vector("list", length = length(IGroups))
  # initialize a progress bar
  pBar <- txtProgressBar(style = 1L)
  # Get all graph structures, remove names for unique()
  for (i in seq_along(Structures)) {
    AllSubGraphs[[i]] <- induced_subgraph(graph = IGraph,
                                          vids = IGroups[[i]])
    Structures[[i]] <- AllSubGraphs[[i]][seq(length(AllSubGraphs[[i]][1]))]
    StructNames[[i]] <- dimnames(Structures[[i]])[[1]]
    dimnames(Structures[[i]]) <- list(NULL, NULL)
    setTxtProgressBar(pb = pBar, value = i / length(Structures))
  }
  cat("\n")
  # categorize by structure
  GraphSizes <- sapply(unique(Structures), function(x) nrow(x))
  AllSizes <- sapply(Structures, function(x) nrow(x))
  GraphTemplates <- unique(Structures)
  Result <- vector("list", length = length(GraphTemplates))
  StructureCategory <- vector("integer", length = length(AllSizes))
  # assign each subgraph to a category
  for (i in seq_along(StructureCategory)) {
    Catch <- which(GraphSizes == AllSizes[i])
    if (length(Catch) == 1L) {
      StructureCategory[i] <- Catch
    } else {
      for (j in seq_along(Catch)) {
        if (all(GraphTemplates[[Catch[j]]] == Structures[[i]])) {
          StructureCategory[i] <- Catch[j]
        }
      }
    }
    setTxtProgressBar(pb = pBar, value = i / length(StructureCategory))
  }
  Count <- rep(0L, length(Result))
  ResultSizes <- sapply(unique(StructureCategory),
                        function(x) length(which(StructureCategory == x)))
  for (i in seq_along(Result)) {
    Result[[i]] <- vector("list", length = ResultSizes[i])
  }
  # collect all subgraphs into their distinct categories
  for (i in seq_along(StructureCategory)) {
    Count[StructureCategory[i]] <- Count[StructureCategory[i]] + 1L
    Result[[StructureCategory[i]]][[Count[StructureCategory[i]]]] <- AllSubGraphs[[i]]
  }
  cat("\n")
  return(Result)
}
So this tries to, and I think successfully does, sort and summarize all the subgraphs in a given graph.
So given:
K01 <- cbind(c(1, 3, 4, 6, 7, 8, 9, 11, 13, 14, 16, 18, 18, 17, 20, 21, 21),
             c(2, 4, 5, 7, 8, 6, 10, 12, 14, 15, 18, 17, 19, 16, 21, 22, 23))
K02 <- graph_from_edgelist(K01, directed = FALSE)
Running the function:
K03 <- CategorizeSubGraphs(IGraph = K02)
length(K03) # the number of distinct subgraph types
[1] 5
lengths(K03) # the number of individual subgraphs for each type
[1] 3 2 1 1 1
And you can plot the individual subgraphs/subclusters out appropriately, and grab their names from the graph object.
I've not tested this with graphs that have interconnected subgraphs, because I only need it for a fairly specific type of data, but it's here if anyone else has this question!
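A possible shortcut for the structure comparison (a hedged sketch; the helper name categorize_by_isomorphism is made up for illustration): igraph's isomorphic() can group the components directly, instead of comparing adjacency matrices by hand.
library(igraph)

# Sketch: split the components of a graph into groups of mutually
# isomorphic subgraphs, keeping one representative per structure.
categorize_by_isomorphism <- function(g) {
  subs <- lapply(groups(components(g)), function(v) induced_subgraph(g, v))
  reps <- list()                     # one representative subgraph per category
  category <- integer(length(subs))
  for (i in seq_along(subs)) {
    hit <- 0L
    for (j in seq_along(reps)) {
      if (isomorphic(subs[[i]], reps[[j]])) { hit <- j; break }
    }
    if (hit == 0L) {                 # unseen structure: open a new category
      reps[[length(reps) + 1L]] <- subs[[i]]
      hit <- length(reps)
    }
    category[i] <- hit
  }
  split(subs, category)
}

K05 <- categorize_by_isomorphism(K02)
lengths(K05)  # number of individual subgraphs per structure type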

Distribute function computation over several cores - R

I have a function like the following one:
FXN <- function(a, b, c, d, e, f, g){
  value <- (a*b/c + d^e) - f + g
  write.table(value, paste(a, b, c, d, e, f, g, ".txt", sep = ""))
}
(But it is a lot more computationally heavy; this just demonstrates the basic idea: do some computation on a single core and write the result to a file.)
If I want to evaluate this function in parallel on 64 cores for all combinations of
a = c(1, 2, 3, 4, 5)
b = c(5, 6, 7, 8, 9)
c = c(11, 12, 13, 14, 15)
d = c(3, 4)
e = c(5, 7, 8)
f = c(4, 5, 6)
g = c(2, 4, 6)
How can I do this? There are 6750 combinations of these values, so the function needs to be run 6750 times.
I am trying the following, but I am not sure how foreach handles the function computation. Ideally, for my actual function, it just runs the function on each core, writes the table, and moves on to the next combination that hasn't already been run.
library(foreach)
library(doMC)   # provides registerDoMC()

n_cores = 64
registerDoMC(n_cores)

foreach (a = c(1, 2, 3, 4, 5)) %dopar% {
  foreach (b = c(5, 6, 7, 8, 9)) %dopar% {
    foreach (c = c(11, 12, 13, 14, 15)) %dopar% {
      foreach (d = c(3, 4)) %dopar% {
        foreach (e = c(5, 7, 8)) %dopar% {
          foreach (f = c(4, 5, 6)) %dopar% {
            foreach (g = c(2, 4, 6)) %dopar% {
              FXN(a = a, b = b, c = c, d = d, e = e, f = f, g = g)
            }
          }
        }
      }
    }
  }
}
Would this theoretically work? What other way could I distribute the 6750 combinations over 64 cores, speeding up the computations?
Thanks!
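For reference, one common way to spread such a parameter sweep over the cores without nesting foreach (a minimal sketch, assuming doMC is available and FXN is defined as above) is to flatten all 6750 combinations with expand.grid and run a single parallel loop over the rows:
library(foreach)
library(doMC)

registerDoMC(64)

# One row per parameter combination: 5 * 5 * 5 * 2 * 3 * 3 * 3 = 6750 rows.
params <- expand.grid(a = c(1, 2, 3, 4, 5),
                      b = c(5, 6, 7, 8, 9),
                      c = c(11, 12, 13, 14, 15),
                      d = c(3, 4),
                      e = c(5, 7, 8),
                      f = c(4, 5, 6),
                      g = c(2, 4, 6))

# Each iteration handles exactly one combination and writes its own file.
foreach(i = seq_len(nrow(params))) %dopar% {
  p <- params[i, ]
  FXN(a = p$a, b = p$b, c = p$c, d = p$d, e = p$e, f = p$f, g = p$g)
}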

Create conditional variable in multiple data.tables (or data.frames)

I want to execute the same action in multiple data.tables (or data.frames). For example, I want to create the same variable conditional on the same rule in all data.tables.
A simple example can be the following (df1 = df2 = df3, without loss of generality here):
df1 <- data.frame(var1 = c(1, 2, 2, 2, 1), var2 =c(20, 10, 10, 10, 20), var3 = c(10, 8, 15, 7, 9))
df2 <- data.frame(var1 = c(1, 2, 2, 2, 1), var2 =c(20, 10, 10, 10, 20), var3 = c(10, 8, 15, 7, 9))
df3 <- data.frame(var1 = c(1, 2, 2, 2, 1), var2 =c(20, 10, 10, 10, 20), var3 = c(10, 8, 15, 7, 9))
My approach was: (i) to create a list of the data frames (list.df), (ii) to loop on this list trying to create the variable:
list.df <- vector('list', 3)
for(j in 1:3){
  name <- paste('df', j, sep = '')
  list.df[j] <- name
}
My (bad) attempt:
for(i in 1:3){
  a <- get(paste(list.df[[i]], "$var1", sep=""))
  b <- get(paste(list.df[[i]], "$var2", sep=""))
  name <- paste(list.df[[i]], "$var.new", sep="")
  assign(name, ifelse(a==2 & b==10, 1, 0))
}
Clearly R cannot create this new variable the way I am doing it, as I get an error message "object not found".
Any clues on how to fix my bad code? I have a feeling that dplyr could help me but I don't know how.
We can use mget after creating the strings of object names with paste, so that we get the values, i.e. the data.frames, in a list. We loop through the list (with lapply) and transform each dataset by creating the binary variable ('varNew'). We can either use ifelse on the logical condition or just wrap it with + to coerce TRUE/FALSE to 1/0.
lst <- lapply(mget(paste0('df', 1:3)), transform,
varNew = +(var1==2 & var2==10))
If we need to update the original objects, we can use list2env.
list2env(lst, envir = .GlobalEnv)
df1
df2
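Since the question also mentions data.tables, the same rule can be applied by reference with data.table (a hedged sketch; it updates the copies held in the list dts, not the original df1/df2/df3, unless you convert those in place with setDT first):
library(data.table)

# Collect the objects, convert each to a data.table, then add the flag
# column by reference.
dts <- lapply(mget(paste0("df", 1:3)), as.data.table)
for (dt in dts) dt[, varNew := +(var1 == 2 & var2 == 10)]
dts$df1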

ga() function giving error when provided with suggestions

I am trying to get an optimized ordering of 20 elements according to the self-defined function f (see below), so I am using the GA package in R. While using the ga() function I want to provide some initial suggestions (a starting population) to the algorithm. The initial suggestions are in the same format as the function ga() demands, but I am getting an error. Here is my code.
library(GA)
f <- function(z) sum((z-c(20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3,2,1))^2)
set.seed(123)
c1 <- c(1, 3, 4, 2, 5, 7, 6, 9, 10, 8, 11, 12, 13, 15, 14, 16, 17, 18, 19, 20)
sugg <- as.data.frame(c1)
sugg$c2 <- sample(c1, 20)
sugg$c3 <- sample(c1, 20)
sugg$c4 <- sample(c1, 20)
sugg$c5 <- sample(c1, 20)
sugg$c6 <- sample(c1, 20)
sugg$c7 <- sample(c1, 20)
sugg$c8 <- sample(c1, 20)
sugg$c9 <- sample(c1, 20)
sugg$c10 <- sample(c1, 20)
sugg$c11 <- sample(c1, 20)
sugg$c12 <- sample(c1, 20)
sugg$c13 <- sample(c1, 20)
sugg$c14 <- sample(c1, 20)
sugg$c15 <- sample(c1, 20)
sugg$c16 <- sample(c1, 20)
sugg$c17 <- sample(c1, 20)
sugg$c18 <- sample(c1, 20)
sugg$c19 <- sample(c1, 20)
sugg$c20 <- sample(c1, 20)
sugg <- as.matrix(sugg)
set.seed(123)
result <- ga(type = "permutation", fitness = f, min = c(1), max = c(20), maxiter = 100, suggestions = sugg)
Iter = 1 | Mean = 1312.72 | Best = 1804
Error in crossover(object, parents) :
number of items to replace is not a multiple of replacement length
Can somebody help me understand why I am getting this error? When I don't provide the suggestions, ga() runs fine.
Old question, but might help someone.
My problem is not the same as yours (for me, suggestions don't seem to make a difference), but this might still help with understanding what's going on. From the code in the function:
# check suggestions
if(is.null(suggestions))
  { suggestions <- matrix(nrow = 0, ncol = nvars) }
else
  { if(is.vector(suggestions))
      { if(nvars > 1) suggestions <- matrix(suggestions, nrow = 1)
        else          suggestions <- matrix(suggestions, ncol = 1) }
    else
      { suggestions <- as.matrix(suggestions) }
    if(nvars != ncol(suggestions))
      stop("Provided suggestions (ncol) matrix do not match number of variables of the problem!")
  }
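One thing worth checking against the question (my reading of the GA documentation, not something stated in the thread): each row of suggestions is treated as one candidate solution, so for a 20-variable permutation problem every row must itself be a permutation of 1:20. In the question each column is a permutation, and the 20 x 20 shape happens to pass the ncol check above while leaving rows that are not valid permutations for the crossover step. A hedged fix is simply to transpose the matrix:
sugg_t <- t(sugg)  # now each row is one suggested permutation of 1:20
set.seed(123)
result <- ga(type = "permutation", fitness = f, min = c(1), max = c(20),
             maxiter = 100, suggestions = sugg_t)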
