Time complexity issue when checking the connectivity between nodes in R igraph

I'd like to see which first degree nodes are connected to which second degree nodes for every node in a given graph. Suppose I generate a graph with 1000 nodes.
library(igraph)
g <- erdos.renyi.game(1000, 0.2)
When I calculate the set of adjacent nodes and second degree nodes for every node in the graph, there's no issue.
The code runs pretty quickly, independent of the size of the network. Once I add an if statement to check whether two nodes are connected, as in:
for(j in adjacent){
  for(k in secondDegreeNodes){
    if( are.connected(g, j, k) ){
    }
  }
}
My code takes forever. What is the exact complexity issue that I am facing? Is there a better way to conduct this operation?
There is definitely something odd happening. Even the following code block does not finish, though it is the simplest of operations.
g <- erdos.renyi.game(1000, 0.3)
A <- as_adjacency_matrix(g)
from <- 1
to <- nrow(A)
for(i in from:to){
  for(j in i:to){
    if(A[i,j] == 1){
      # do nothing
    }
  }
}
EDIT: I believe there could be some issue with the igraph package. I generated the graph in R igraph and coded everything in Java, and it worked. As I expected, there is no complexity issue with the algorithm itself. However, I have no idea what is wrong in igraph.

In the example that you give, indexing matrix A repeatedly in a for-loop is rather inefficient. In this particular instance it is because A is of class dgCMatrix from the Matrix package.
When you compare the performance before and after converting A to another class, you will notice the difference. With N = 300 nodes, the duration of the for-loop on my machine decreases from 23.5 seconds to 0.1 seconds once I convert A to the standard matrix class. There are moreover (N^2 + N) / 2 comparisons to be made, and the squared term means that going from 300 nodes to 3 x 300 = 900 nodes will increase the computing time roughly ninefold (at the very least).
If you further profile the code (e.g. with Rprof), you will see that subsetting an object of class dgCMatrix (i.e. A[i, j]) calls a number of further R functions, whereas [ is implemented directly in C for the basic matrix class. In addition, dgCMatrix belongs to the S4 object system, which means, among other things, that dispatching the right [ method is a little more costly than usual.
Finally, if you rely on R, you will in general be much better off using vectorised operations. These typically avoid a deep call stack of further (inefficient) R functions and are often implemented in C. With the adjacency matrix, you can quickly find second degree nodes by inspecting A2 = A %*% A, which will be very fast (or, I suspect, especially fast) for an object of class dgCMatrix.
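A minimal sketch of that vectorised idea (the variable names below are my own; I use a dense base matrix for simplicity):
library(igraph)
g <- erdos.renyi.game(1000, 0.2)
A <- as.matrix(as_adjacency_matrix(g)) # dense base-R matrix
A2 <- A %*% A # entry (i, j) counts the walks of length 2 from i to j
# second degree neighbours: reachable in two steps but not directly adjacent
secondDegree <- (A2 > 0) & (A == 0)
diag(secondDegree) <- FALSE # a node is not its own second-degree neighbour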
Timing:
library(igraph)
N <- 300
g <- erdos.renyi.game(N, 0.3)
A <- as_adjacency_matrix(g)
from <- 1
to <- nrow(A)
class(A)
# [1] "dgCMatrix"
# attr(,"package")
# [1] "Matrix"
# run through matrix via for loop
# 23.5 seconds
system.time({
  for(i in from:to){
    for(j in i:to){
      if(A[i,j] == 1) {}
    }
  }
})
# change class
A <- as.matrix(A)
class(A)
# [1] "matrix"
# run for loop again
# 0.097 seconds
system.time({
  for(i in from:to){
    for(j in i:to){
      if(A[i,j] == 1) {}
    }
  }
})

Related

Speed of Daisy Function

I'm working on improving the speed of a function (for a dissimilarity measure) I'm writing, which is quite similar mathematically to the Euclidean distance function. However, when I time my function against the daisy function from the cluster package, I find quite a significant difference in speed, with daisy performing much better.
Given that (I'm assuming) a dissimilarity measure requires O(n^2 * p) time, since each object must be compared to every other object over all p variables (where n is the number of objects), I find it difficult to understand how the daisy function performs so well (near constant time, from the few experiments I've done) relative to my simple and direct implementation.
I present the code I have used both to implement and to test below. I have tried looking through the R source code for the implementation of the daisy function, but I found it difficult to understand; I found no nested for loop. Any help with understanding why that function performs so fast, and how I could modify my code to reach similar speed, would be very highly appreciated.
euclidean <- function(df){
  no_obj <- nrow(df)
  dist <- array(0, dim = c(no_obj, no_obj))
  for (i in 1:no_obj){
    for (j in 1:no_obj){
      dist_v <- 0
      if(i != j){
        for (v in 1:ncol(df)){
          dist_v <- dist_v + sqrt((df[i,v] - df[j,v])^2)
        }
      }
      dist[i,j] <- dist_v
    }
  }
  return(dist)
}
data("iris")
tic <- Sys.time()
dst <- euclidean(iris[,1:4])
time <- difftime(Sys.time(), tic, units = "secs")[[1]]
print(paste("Time taken [Euclidean]: ", time))
tic <- Sys.time()
dst <- daisy(iris[,1:4])
time <- difftime(Sys.time(), tic, units = "secs")[[1]]
print(paste("Time taken [Daisy]: ", time))
One option:
euclidean3 <- function(df) {
  require(data.table)
  n <- nrow(df)
  i <- CJ(1:n, 1:n) # generate all row combinations
  dl <- sapply(df, function(x) sqrt((x[i[[1]]] - x[i[[2]]])^2)) # loop over columns
  dv <- rowSums(dl) # sum values of columns
  d <- matrix(dv, n, n) # fill in matrix
  d
}
dst3 <- euclidean3(iris[,1:4])
all.equal(euclidean(iris[,1:4]), dst3) # TRUE
[1] "Time taken [Euclidean3]: 0.008"
[1] "Time taken [Daisy]: 0.002"
The largest bottleneck in your code is selecting data.frame elements inside the loop (df[j,v]). Converting the data frame to a matrix could also improve speed. There are likely more performant approaches on Stack Overflow; you just need to search with the right keywords...
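As a rough sketch of that matrix idea (euclidean4 is a hypothetical name of mine; it keeps the loops but swaps the data.frame for a matrix):
euclidean4 <- function(df) {
  m <- as.matrix(df) # matrix indexing skips the data.frame dispatch overhead
  n <- nrow(m)
  d <- matrix(0, n, n)
  for (i in 1:n) {
    for (j in 1:n) {
      d[i, j] <- sum(abs(m[i, ] - m[j, ])) # same sum of |differences| as above
    }
  }
  d
}
all.equal(euclidean(iris[,1:4]), euclidean4(iris[,1:4])) # should be TRUE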

What is the difference between stats::kmeans and "naive" k-means

I am trying to understand what stats::kmeans does differently from the simple version explained e.g. on Wikipedia. I am honestly so supremely clueless.
Reading the help on kmeans, I learned that the default algorithm is Hartigan–Wong, not the more basic Lloyd method, so there should be a difference, but playing around with some normally distributed variables I couldn't find a case where they differed substantially and predictably.
For reference, this is the (utterly horrible) code I tested it against:
## square of Euclidean metric
my_metric <- function(x=vector(), y=vector()) {
  stopifnot(length(x)==length(y))
  sum((x-y)^2)
}
## data: xy data
## k: number of groups
my_kmeans <- function(data, k, maxIt=10) {
  ## get length, check that data lengths are equal and enough data is provided
  l <- length(data[,1])
  stopifnot(l==length(data[,2]))
  stopifnot(l>k)
  ## generate the starting points
  ms <- data[sample(1:l,k),]
  ## append the data with a g column and initialize last
  data$g <- 0
  last <- data$g
  it <- 0
  repeat{
    it <- it+1
    ## iterate through each data point and assign it to a cluster
    for(i in 1:l){
      distances <- rep(Inf, k) # was c(Inf,Inf,Inf), which assumed k = 3
      for(j in 1:k){
        distances[j] <- my_metric(data[i,c(1,2)], ms[j,])
      }
      data$g[i] <- which.min(distances)
    }
    ## update cluster points
    for(i in 1:k){
      points_in_cluster <- data[data$g==i, 1:2]
      ms[i,] <- c(mean(points_in_cluster[,1]), mean(points_in_cluster[,2]))
    }
    ## break condition: nothing changed
    if(my_metric(last, data$g)==0 | it > maxIt){
      break
    }
    last <- data$g
  }
  data
}
First off, this was (as I just found out) a duplicate of this post.
But I will still try to give an example: when the clusters are well separated, Lloyd tends to leave the centers inside the clusters they start in, meaning that some clusters may end up split while others get lumped together.
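A hedged illustration of this (my own construction, not from the linked post): seed both starting centers inside the same cluster and compare total within-cluster sums of squares; with such deliberately bad starts, the two algorithms can end in different local optima.
set.seed(1)
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 10), ncol = 2)) # two well-separated blobs
centers <- x[1:2, ] # both starting centers inside the first blob
fit_lloyd <- kmeans(x, centers, algorithm = "Lloyd", iter.max = 100)
fit_hw <- kmeans(x, centers, algorithm = "Hartigan-Wong", iter.max = 100)
sum(fit_lloyd$withinss) # smaller total is the better local optimum
sum(fit_hw$withinss)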

What is going on with floating point precision here?

This question is in reference to an observation from a code-golf challenge.
The submitted R solution is a working solution, but a few of us (maybe just me) seem to be dumbfounded as to why the initial X=m reassignment is necessary.
The code is golfed down a bit by #Giuseppe, so I'll write a few comments for the reader.
function(m){
  X = m
  # Re-assign input m as X
  while(any(X - (X = X %*% m))) 0
  # Instead of doing the meat of the calculation in the code block after `while`,
  # the OP exploited its infinite looping properties to perform the
  # calculation within the condition check.
  # `-` here is an abuse of the inequality check and relies on `any` to coerce
  # the numerics to logical. See `as.logical(.Machine$double.xmin)`.
  # The code basically multiplies the matrix `X` with the starting matrix `m`
  # until the condition is met: X == X %*% m
  X
  # Return result
}
Well, as far as I can tell, multiplying X %*% m is equivalent to X %*% X, since X is just an iteratively self-multiplied version of m. Once the matrix has converged, multiplying in additional copies of m or X does not change its value. See a linear algebra textbook, or v(m)%*%v(m)%*%v(m)%*%v(m)%*%v(m)%*%m%*%m after defining the above function as v. Fun, right?
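A quick sanity check of that equivalence (binding the function above to v and borrowing the stochastic matrix used further down; this assumes, as the challenge does, that the loop terminates):
v <- function(m){X=m; while(any(X-(X=X%*%m))) 0; X}
mat <- matrix(c(7/10, 4/10, 3/10, 6/10), 2, 2, byrow = TRUE)
lim <- v(mat)
all.equal(lim, lim %*% mat) # multiplying in another copy of m changes nothing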
So the question is, why does #CodesInChaos's implementation of this idea not work?
function(m){
  while(any(m != (m = m %*% m))) 0
  m
}
Is this caused by a floating-point precision issue? Or is it caused by a function in the code, such as the inequality check or .Primitive("any")? I do not believe this is caused by as.logical, since R seems to coerce errors smaller than .Machine$double.xmin to 0.
Here is a demonstration of the above. We simply loop and take the difference between m and m %*% m. This error becomes 0 as the stochastic matrix converges, but then it eventually blows up to 0/Inf, depending on the input.
mat = matrix(c(7/10, 4/10, 3/10, 6/10), 2, 2, byrow = T)
m = mat
for (i in 1:25) {
  m = m %*% m
  cat("Mean Error:", mean(m - (m = m %*% m)),
      "\n Float to Logical:", as.logical(m - (m = m %*% m)),
      "\n iter", i, "\n")
}
Some additional thoughts on why this is a floating-point math issue:
1) The loop indicates that this is probably not a problem with any or any logical check/conversion step, but rather something to do with floating-point matrix math.
2) #user202729's comment in the original thread, that this issue persists in Jelly (a code-golf language), lends more credence to the idea that this is a floating-point issue.
The different methods iterate different functions, both starting with seed value m. Function iteration only converges to a given fixed point if that fixed point is stable and the seed is within the basin of attraction of that fixed point.
In the original code, you are iterating the function
f <- function(X) X %*% m
The limit matrix is a stable fixed point under the assumption (stated in the Code Golf problem) that a well-defined limit exists. Since the function definition depends on m, it isn't surprising that the fixed point is a function of m.
On the other hand, the proposed variation using m = m %*% m is obtained by iterating the function
g <- function(X) X %*% X
Note that all idempotent matrices are fixed points of this function, but clearly they can't all be stable fixed points. Apparently, the limiting matrix of the original iteration is not a stable fixed point of g (even though it is a fixed point).
To really nail this down, you would need to get into the theory of matrix fixed points under function iteration to show why the fixed point in the case of g is unstable.
This is indeed a floating point math issue. To see it, see the results of this function:
test2 <- function(m) {
  c <- 0
  res <- list()
  while (any(m != (m = m %*% m))) {
    c <- c + 1
    res[[c]] <- m
  }
  print(c)
  res
}
To test equality with some tolerance, you can use:
test3 <- function(m) {
  while (!isTRUE(all.equal(m, m <- m %*% m))) 0
  m
}

Output multiple vectors from for loop in R

As someone relatively new to R, I'm having an issue with creating a for loop.
I have a very large data set with 9000 observations and 25 categorical variables, which I've transformed into binary data and performed hierarchical clustering on. Now I want to try k-modes clustering to produce an elbow plot using the "within-cluster simple-matching distance for each cluster", which is output by kmodes$withindiff. I can sum this for each k in 1:8 clusters to get the elbow plot.
library(klaR)
WCSM <- numeric(8) # pre-allocate, otherwise WCSM[k] <- ... fails on first use
for(k in 1:8){
  WCSM[k] <- sum(kmodes(data, k, iter.max=100)$withindiff)
}
plot(1:8, WCSM, type="b", xlab="Number of Clusters",
     ylab="Within-Cluster Simple-Matching Distance Summed",
     main="K-modes Elbow Plot")
My issue is that I want further output from kmodes. For each k in 1:8, I would like the vector of integers indicating the cluster to which each object is allocated, given by kmodes$cluster. I need a for loop that runs through each k in 1:8 and saves each of the outputs into 8 separate vectors, but I don't know how to write such a loop. I could just run the 8 lines of code below separately, but they each take 15 minutes to run with iter.max=10, so increasing this to iter.max=100 would need to be left running overnight, and a loop would be useful.
cl.kmodes2=kmodes(data, 2,iter.max=100)
cl.kmodes3=kmodes(data, 3,iter.max=100)
cl.kmodes4=kmodes(data, 4,iter.max=100)
cl.kmodes5=kmodes(data, 5,iter.max=100)
cl.kmodes6=kmodes(data, 6,iter.max=100)
cl.kmodes7=kmodes(data, 7,iter.max=100)
cl.kmodes8=kmodes(data, 8,iter.max=100)
Ultimately I want to compare the results from the hierarchical binary clustering to the k-modes clustering by computing the Adjusted Rand Index. For example, cutting the tree at k=4 for the hierarchical clustering and comparing this to a 4-cluster solution from k-modes:
dist.binary = dist(data, method="binary")
cl.binary = hclust(dist.binary, method="complete")
hcl.4 = cutree(cl.binary, k = 4)
tab = table(hcl.4, cl.kmodes4$cluster)
library(e1071)
classAgreement(tab)
I agree with Imo, using a list is the best solution.
If you don't want to do that, you could also use assign() to create a new object in every iteration:
library(klaR)
for(k in 1:8) {
  assign(paste("cl.kmodes", k, sep = ""), kmodes(data, k, iter.max = 100))
}
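The drawback is that you then have to reconstruct the names with get() to use the objects programmatically, e.g.:
get("cl.kmodes4")$cluster # the cluster assignments for k = 4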
The best method is to put the output from your clusters into a named list:
library(klaR)
myClusterList <- list()
for(k in 1:8) {
  myClusterList[[paste0("k.", k)]] <- kmodes(data, k, iter.max = 100)
}
You can then pull out any of the contents easily:
sum(myClusterList[["k.1"]]$withindiff)
or
sum(myClusterList[[1]]$withindiff)
You can also save the list to use in future R sessions, see ?save.
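Equivalently (a sketch of mine, not from the original answer), lapply builds the same named list without growing it by hand, and sapply then gives the elbow-plot vector in one line:
library(klaR)
myClusterList <- lapply(1:8, function(k) kmodes(data, k, iter.max = 100))
names(myClusterList) <- paste0("k.", 1:8)
WCSM <- sapply(myClusterList, function(cl) sum(cl$withindiff))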

Calculating a GP correlation matrix outside of a loop

So I am trying to calculate the correlation matrix associated with a Gaussian process using R, and I was hoping for some suggestions for doing so without the triple for-loop I have written below. Mainly I want to condense the code for readability and also to speed up the calculations.
# Example data
n = 500
x1 = sample(1:100, n, replace=T)
x2 = sample(1:100, n, replace=T)
x3 = sample(1:100, n, replace=T)
X = cbind(x1, x2, x3)
R = matrix(NA, nrow=n, ncol=n)
for(i in 1:nrow(X)){
  for(j in 1:nrow(X)){
    temp = 0
    for(k in 1:ncol(X)){
      temp = -abs(X[i,k] - X[j,k])^1.99 + temp
    }
    R[i,j] = exp(temp)
  }
}
So as n gets large, the code gets much slower. Also worth noting: since this is a correlation matrix, it is symmetric and the diagonal is equal to 1.
It's much faster using this:
y <- t(X)
R <- exp(-sapply(1:ncol(y), function(i) colSums((y-y[,i])^2)))
If you want to keep your original formula:
R <- exp(-sapply(1:ncol(y), function(i) colSums(abs(y-y[,i])^1.99)))
I'm wondering if you could cut your calculation and looping times in half by changing these two lines. (Actually the timing improved by more than 50%: from 14.304 seconds to 6.234 seconds.)
1: for(j in 1:nrow(X)){
2: R[i,j] = exp(temp)
To:
1: for(j in i:nrow(X)){
2: R[i,j] = R[j,i]= exp(temp)
Tested:
> all.equal(R, R2)
[1] TRUE
That way you populate the lower triangle without doing any additional calculations. BTW, what's with the 1.99? This is perhaps a problem better suited to C. The Rcpp package supports this, and there are a lot of worked examples on SO. Perhaps a search on: [r] rcpp nested loops
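For what it's worth, here is a minimal Rcpp sketch of the same triple loop (gpCorr is a name I made up; this assumes the Rcpp package and a working C++ toolchain):
library(Rcpp)
cppFunction('
NumericMatrix gpCorr(NumericMatrix X) {
  int n = X.nrow(), p = X.ncol();
  NumericMatrix R(n, n);
  for (int i = 0; i < n; i++) {
    R(i, i) = 1.0;                        // diagonal of a correlation matrix
    for (int j = i + 1; j < n; j++) {
      double temp = 0.0;
      for (int k = 0; k < p; k++) {
        temp -= std::pow(std::abs(X(i, k) - X(j, k)), 1.99);
      }
      R(i, j) = R(j, i) = std::exp(temp); // fill both triangles at once
    }
  }
  return R;
}')
all.equal(gpCorr(X), R) # should match the loop version above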
