How to permute a network in igraph for R?

I'm trying to write code for a Monte Carlo procedure in R. My goal is to estimate the significance of a metric calculated for a weighted, unipartite, undirected network formatted for the igraph package.
So far, I have included the following steps in the code:
1. Create the weighted, unipartite, undirected network and calculate the observed Louvain modularity
library(igraph)
nodes <- read.delim("nodes.txt")
links <- read.delim("links.txt")
anurosnet <- graph_from_data_frame(d=links, vertices=nodes, directed=F)
anurosnet
modularity1 = cluster_louvain(anurosnet)
modularity1$modularity #observed value
obs=modularity1$modularity
obs
real<-data.frame(obs)
real
2. Create the empty vector
Nperm = 9 #I am starting with a low n, but intend to use at least 1000 permutations
randomized.modularity=matrix(nrow=length(obs),ncol=Nperm+1)
row.names(randomized.modularity)=names(obs)
randomized.modularity[,1]=obs
randomized.modularity
3. Permute the original network preserving its characteristics, calculate the Louvain modularity for all randomized networks, and compile the results in the vector
i<-1
while(i<=Nperm){
randomnet <- rewire(anurosnet, with=each_edge(0.5)) #rewire vertices with constant probability
E(randomnet)$weight <- sample(E(anurosnet)$weight) #shuffle initial weights and assign them randomly to edges
mod<-(cluster_louvain(randomnet))
mod$modularity
linha = mod$modularity
randomized.modularity[,i+1]=linha
print(i)
i=i+1
}
randomized.modularity #Here the result is not as expected
4. Plot the observed value against the distribution of randomized values
niveis<-row.names(randomized.modularity)
for(k in niveis)
{
if(any(is.na(randomized.modularity[k,]) == TRUE))
{
print(c(k, "metric has NA"))
} else {
nome.arq<- paste("modularity",k,".png", sep="")
png(filename= nome.arq, res= 300, height= 15, width=21, unit="cm")
plot(density(randomized.modularity[k,]), main="Observed vs. randomized")
abline(v=obs[k], col="red", lwd=2)
dev.off()
print(k)
nome.arq<- paste("Patefield_Null_mean_sd_",k,".txt", sep="")
write.table(cbind(mean(randomized.modularity[k,]),sd(randomized.modularity[k,])), file=paste(nome.arq,sep=""),
sep=" ",row.names=TRUE,col.names=FALSE)
}
}
5. Estimate the P-value (significance)
significance=matrix(nrow=nrow(randomized.modularity),ncol=3)
row.names(significance)=row.names(randomized.modularity)
colnames(significance)=c("p (rand <= obs)", "p (rand >= obs)", "p (rand=obs)")
signif.sup=function(x) sum(x>=x[1])/length(x)
signif.inf=function(x) sum(x<=x[1])/length(x)
signif.two=function(x) ifelse(min(x)*2>1,1,min(x)*2)
significance[,1]=apply(randomized.modularity,1,signif.inf)
significance[,2]=apply(randomized.modularity,1,signif.sup)
significance[,3]=apply(significance[,-3],1,signif.two)
significance
Something is going wrong in step 3. I expected the vector to be filled with 10 values, but for some reason it stops after a while.
The slot "mod$modularity" suddenly receives 2 values instead of 1.
The two TXT files read at the beginning of the code can be downloaded from here:
https://1drv.ms/t/s!AmcVKrxj94WClv8yQyqyl4IWk5mNvQ
https://1drv.ms/t/s!AmcVKrxj94WClv8z_Pow5Tg2U7mjLw
Could you please help me?

Your error is due to a mismatch between the dimensions of your randomized.modularity matrix and some of your randomized modularity results. In your example the matrix ends up being length(obs) x (Nperm + 1), but sometimes 2 modularity scores are returned during the permutations: cluster_louvain stores one modularity value per level of the multilevel algorithm in its $modularity component, and a rewired network can produce more than one level. To fix this I simply store the results in a list. The rest of your analysis will need to be adjusted, since you have a mismatch of modularity scores.
library(igraph)
nodes <- read.delim("nodes.txt")
links <- read.delim("links.txt")
anurosnet <- graph_from_data_frame(d=links, vertices=nodes, directed=F)
anurosnet
modularity1 = cluster_louvain(anurosnet)
modularity1$modularity #observed value
obs <- modularity1$modularity
obs
real<-data.frame(obs)
real
Nperm = 100 #I am starting with a low n, but intend to use at least 1000 permutations
#randomized.modularity <- matrix(nrow=length(obs),ncol=Nperm+1)
#row.names(randomized.modularity) <- names(obs)
randomized.modularity <- list()
randomized.modularity[[1]] <- obs # use [[ ]] so a multi-element obs is kept as a single list element
randomized.modularity
for(i in 1:Nperm){
randomnet <- rewire(anurosnet, with=each_edge(0.5)) #rewire vertices with constant probability
E(randomnet)$weight <- sample(E(anurosnet)$weight) #shuffle initial weights and assign them randomly to edges
mod <- (cluster_louvain(randomnet))
mod$modularity
linha = mod$modularity
randomized.modularity <- c(randomized.modularity, list(linha))
}
randomized.modularity
A better way to write the loop:
randomized.modularity <- lapply(seq_len(Nperm), function(x){
randomnet <- rewire(anurosnet, with=each_edge(0.5)) #rewire vertices with constant probability
E(randomnet)$weight <- sample(E(anurosnet)$weight) #shuffle initial weights and assign them randomly to edges
return(cluster_louvain(randomnet)$modularity)
})
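Once the permutations are in a list, one way to turn them into a null distribution (only a sketch; it assumes you compare the top-level, i.e. maximum, modularity of each run against the observed maximum) is:
# Sketch (assumption: compare the maximum modularity per permutation, i.e. the
# top level returned by cluster_louvain)
null_dist <- sapply(randomized.modularity, max) # one value per permutation
obs_max <- max(obs)                             # observed top-level modularity
p_upper <- mean(null_dist >= obs_max)           # p (rand >= obs)
p_lower <- mean(null_dist <= obs_max)           # p (rand <= obs)
c(p_lower = p_lower, p_upper = p_upper)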

Related

How to fill a non-zero coefficient/gamma plot when optimizing with CVXR package?

I'm replicating an article by Kozak, Nagel, and Santosh, "Shrinking the Cross-Section". For this I'm building a model that selects a few characteristics out of a large set of characteristics that together are a good representation of an SDF (stochastic discount factor).
In this model I use Ridge and Lasso techniques, and my supervisor advised me to use the CVXR package. I minimize my objective with two loss functions which are multiplied by two sets of gammas. The main goal of my code is to end up with a plot that has one of the gammas on the x-axis (the ridge penalty) and the number of non-zero coefficients on the y-axis (so not the lasso parameter).
However, since the number of non-zero coefficients is an outcome of the optimizer, I cannot state that I want an output with n non-zero coefficients.
Does anyone know how to produce my desired outcome? The code I used is below.
# Grid for L1 penalty
cv.gamma_1 <- seq(0.005,0.02, by = (0.0075/15) )
# Grid for L2 penalty
cv.kappa <- 10^seq(-2,0.5,(2/24))
cv.Tt <- nrow(cv.train)
cv.tau <- sum(diag(cv.Sigma.train))
cv.gamma_2 <- as.numeric(cv.tau/((cv.kappa^2)*cv.Tt))
# Create results Matrix
coef_matrix <- matrix(nrow = length(cv.gamma_2), ncol = Nn, data = 0)
for (i in 1:length(cv.gamma_1)) {
for (j in 1:length(cv.gamma_2)) {
objective <- loss_1 + cv.gamma_2[j] * loss_2 + cv.gamma_1[i] * loss_3
prob <- Problem(Minimize(objective))
result <- solve(prob)
model_betas <- result$getValue(beta)
# Compute R-squared of model with these betas
r_score <- Rsquared(Mu_OOS = cv.Mu.test, Sigma_OOS = cv.Sigma.test, betas = model_betas)
# Coef matrix
non_zeros <- sum( round(model_betas,2) != 0.00)
if (non_zeros != 0){
if (coef_matrix[j,non_zeros] < r_score){
coef_matrix[j,non_zeros] <- r_score}
}
}
}
For now, I ran my optimizer, counted the number of non-zeros, and built a matrix with the non-zero counts on one axis and gamma on the other. As a result, not every non-zero count in the matrix has a value.
My plot and the desired plot are shown in the attached images.
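One way (a sketch only, not from the original post) to turn the filled cells of coef_matrix into such a plot, assuming rows index the cv.gamma_2 grid and columns index the number of non-zero coefficients:
# Plot each filled cell: ridge penalty on the x-axis, non-zero count on the y-axis
filled <- which(coef_matrix > 0, arr.ind = TRUE)
plot(cv.gamma_2[filled[, "row"]], filled[, "col"],
     log = "x", pch = 16,
     xlab = "gamma_2 (ridge penalty)",
     ylab = "Number of non-zero coefficients")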

Adding ties in a network based on node attribute (weight)

I am simulating network change over time using igraph in R and am looking for an efficient and scalable way to code this for use in business.
The main drivers of network change are:
New nodes
New ties
New node weights
In the first stage, in the network of 100 nodes 10% are randomly connected. The node weights are also assigned at random. The network is undirected. There are 100 stages.
In each of the following stages:
Ten (10) new nodes occur randomly and are added to the model. They are unconnected in this stage.
The node weights of these new nodes are assigned at random.
The new ties between two nodes at time t+1 are a probabilistic function of the network distance between these nodes and of the node weights at the previous stage (time t). Nodes at greater network distance are less likely to connect than nodes at shorter distance. The decay function is exponential.
Nodes with greater weight attract more ties than those with smaller weights. The relationship between node weight and increased probability of tie formation should be super-linear.
In each step, ties equal to 10% of the total existing ties are added as a function of the previous point.
The network ties and nodes from previous stages are carried over (i.e. the networks are cumulative).
At each stage, the node weight can change randomly up to 10% of its current weight (i.e. a weight of 1 can change to {0.9-1.1} in t+1)
At each stage, the network needs to be saved.
How can this be written?
Edit: these networks will be examined on a number of graph-level characteristics at a later stage
This is what I have now, but it doesn't include the node weights. How can I include them efficiently?
library(igraph)
library(stringr) # str_split is used below
# number of nodes and ties to start with
n = 100
p = 0.1
r = -2
# build random network
net1 <- erdos.renyi.game(n, p, "gnp", directed = F)
#plot(net1)
write_graph(net1, paste0("D://network_sim_0.dl"), format="pajek")
for(i in seq(1,100,1)){
print(i)
time <- proc.time()
net1 <- read_graph(paste0("D://network_sim_",i-1,".dl"), format="pajek")
# how many will we build in next stage?
new_ties <- round(0.1*ecount(net1), 0) # 10% of those in net1
# add 10 new nodes
net2 <- add_vertices(net1, 10)
# get network distance for each dyad in net1 + the new nodes
spel <- data.table::melt(shortest.paths(net2))
names(spel) <- c("node_i", "node_j", "distance")
# replace inf with max observed value + 1
spel$distance[which(!is.finite(spel$distance))] <- max(spel$distance[is.finite(spel$distance)]) +1
# assign a probability (?) with a exponential decay function. Smallest distance == greatest prob.
spel$prob <- -0.5 * spel$distance^r # is this what I need?
#hist(spel$prob, freq=T, xlab="Probability of tie-formation")
#hist(spel$distance, freq=T, xlab="Network Distance")
# lets sample new ties from this probability
spel$index <- seq_along(spel$prob)
to_build <- subset(spel, index %in% sample(spel$index, size = new_ties, prob=spel$prob))
net2 <- add_edges(net2, as.numeric(unlist(str_split(paste(to_build$node_i, to_build$node_j), " "))))
# save the network
write_graph(net2, paste0("D://network_sim_",i,".dl"), format="pajek")
print(proc.time()-time)
}
I will try to answer this question as far as I understand it.
There are a couple of assumptions I made, and I should clarify them.
First, what distribution will the node weights follow?
If you are modeling an event that occurs naturally, it is most likely that the node weights follow a normal distribution. However, if the event is socially oriented and other social mechanisms influence the event or its popularity, the node weights might follow a different distribution, most likely a power-law distribution.
This is particularly likely to be true for customer-related behaviors, so it is worth considering which random distribution you will use to model the node weights.
For the following example, I draw a value for each node from a normal distribution. At the end of each iteration, I let the node weights change by up to 10%, i.e. by a factor in [0.9, 1.1].
Second, what is the probability function of tie formation?
We have two inputs for the decision: distance weights and node weights. So we will create a function from these two inputs and define probability weights. As I understand it, the smaller the distance, the higher the likelihood; and the greater the node weight, the higher the likelihood as well.
It might not be the best solution, but I did the following:
First, I calculate the decay function of the distances and call it the distance weight. Then I take the node weights and create a super-linear function using both the distance and node weights.
There are some parameters you can play with to see whether you get the result you want.
By the way, I did not change most of your code, and I did not focus much on processing time; there is still room to improve.
library(scales)
library(stringr)
library(igraph)
# number of nodes and ties to start with
n <- 100
p <- 0.2
number_of_simulation <- 100
new_nodes <- 15 ## new nodes for each iteration
## Parameters ##
## How much distance will be weighted?
## Exponential decay parameter
beta_distance_weight <- -.4
## probability function parameters for the distance and node weights
impact_of_distances <- 0.3 ## how important is the distance weights?
impact_of_nodes <- 0.7 ## how important is the node weights?
power_base <- 5.5 ## how important is having a high score? Preferential attachment or super-linear function
# build random network
net1 <- erdos.renyi.game(n, p, "gnp", directed = F)
# Assign normally distributed random weights
V(net1)$weight <- rnorm(vcount(net1))
graph_list <- list(net1)
for(i in seq(1,number_of_simulation,1)){
print(i)
time <- proc.time()
net1 <- graph_list[[i]]
# how many will we build in next stage?
new_ties <- round(0.1*ecount(net1), 0) # 10% of those in net1
# add the new nodes
net2 <- add_vertices(net1, new_nodes)
## Add random weights to new nodes from a normal distribution
V(net2)$weight[is.na(V(net2)$weight)] <- rnorm(new_nodes)
# get network distance for each dyad in net1 + the new nodes
spel <- reshape2::melt(shortest.paths(net2))
names(spel) <- c("node_i", "node_j", "distance")
# replace inf with max observed value + 1
spel$distance[which(!is.finite(spel$distance))] <- max(spel$distance[is.finite(spel$distance)]) +1
# Do not select nodes if they are self-looped or have already link
spel <- spel[!spel$distance %in% c(0,1) , ]
# Assign distance weights for each dyads
spel$distance_weight <- exp(beta_distance_weight*spel$distance)
#hist(spel$distance_weight, freq=T, xlab="Probability of tie-formation")
#hist(spel$distance, freq=T, xlab="Network Distance")
## Get the node weights for merging the data with the distances
node_weights <- data.frame(id= 1:vcount(net2),node_weight=V(net2)$weight)
spel <- merge(spel,node_weights,by.x='node_j',by.y='id')
## probability is a function of distance and node weight
spel$prob <- power_base^((impact_of_distances * spel$distance_weight) + (impact_of_nodes * spel$node_weight))
spel <- spel[order(spel$prob, decreasing = T),]
# sample new ties, weighting each dyad by the rank of its computed probability (1/index)
spel$index <- seq_along(spel$prob)
to_build <- subset(spel, index %in% sample(spel$index, new_ties, p = 1/spel$index ))
net2 <- add_edges(net2, as.numeric(unlist(str_split(paste(to_build$node_i, to_build$node_j), " "))))
# change the weights by up to 10%
V(net2)$weight <- V(net2)$weight*rescale(rnorm(vcount(net2)), to = c(0.9, 1.1))
graph_list[[i+1]] <- net2
print(proc.time()-time)
}
To get the results or write the graph to Pajek, you can use the following:
lapply(seq_along(graph_list),function(x) write_graph(graph_list[[x]], paste0("network_sim_",x,".dl"), format="pajek"))
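Since the question's edit mentions examining the simulated networks on graph-level characteristics later, here is a small sketch of how graph_list could be summarised (the metrics chosen are only examples):
# Sketch: graph-level summaries per simulated stage (the metrics are examples)
summary_df <- data.frame(
  stage = seq_along(graph_list),
  nodes = sapply(graph_list, vcount),
  edges = sapply(graph_list, ecount),
  density = sapply(graph_list, edge_density),
  transitivity = sapply(graph_list, transitivity)
)
head(summary_df)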
EDIT
To change the node weight, you can use the following syntax.
library(scales)
library(stringr)
library(igraph)
# number of nodes and ties to start with
n <- 100
p <- 0.2
number_of_simulation <- 100
new_nodes <- 10 ## new nodes for each iteration
## Parameters ##
## How much distance will be weighted?
## Exponential decay parameter
beta_distance_weight <- -.4
## Node weights for power-law dist
power_law_parameter <- -.08
## probability function parameters for the distance and node weights
impact_of_distances <- 0.3 ## how important is the distance weights?
impact_of_nodes <- 0.7 ## how important is the node weights?
power_base <- 5.5 ## how important is having a high score? Preferential attachment or super-linear function
# build random network
net1 <- erdos.renyi.game(n, p, "gnp", directed = F)
## MADE A CHANGE HERE
# Assign random weights drawn from a power-law-like transform of uniform values
V(net1)$weight <- runif(vcount(net1))^power_law_parameter
graph_list <- list(net1)
for(i in seq(1,number_of_simulation,1)){
print(i)
time <- proc.time()
net1 <- graph_list[[i]]
# how many will we build in next stage?
new_ties <- round(0.1*ecount(net1), 0) # 10% of those in net1
# add 10 new nodes
net2 <- add_vertices(net1, new_nodes)
## Add random weights to new nodes from the same power-law-like transform
V(net2)$weight[is.na(V(net2)$weight)] <- runif(new_nodes)^power_law_parameter
# get network distance for each dyad in net1 + the new nodes
spel <- reshape2::melt(shortest.paths(net2))
names(spel) <- c("node_i", "node_j", "distance")
# replace Inf with the max observed value + 2
spel$distance[which(!is.finite(spel$distance))] <- max(spel$distance[is.finite(spel$distance)]) + 2
# Do not select nodes if they are self-looped or have already link
spel <- spel[!spel$distance %in% c(0,1) , ]
# Assign distance weights for each dyads
spel$distance_weight <- exp(beta_distance_weight*spel$distance)
#hist(spel$distance_weight, freq=T, xlab="Probability of tie-formation")
#hist(spel$distance, freq=T, xlab="Network Distance")
## Get the node weights for merging the data with the distances
node_weights <- data.frame(id= 1:vcount(net2),node_weight=V(net2)$weight)
spel <- merge(spel,node_weights,by.x='node_j',by.y='id')
## probability is a function of distance and node weight
spel$prob <- power_base^((impact_of_distances * spel$distance_weight) + (impact_of_nodes * spel$node_weight))
spel <- spel[order(spel$prob, decreasing = T),]
# sample new ties, weighting each dyad by the rank of its computed probability (1/index)
spel$index <- seq_along(spel$prob)
to_build <- subset(spel, index %in% sample(spel$index, new_ties, p = 1/spel$index ))
net2 <- add_edges(net2, as.numeric(unlist(str_split(paste(to_build$node_i, to_build$node_j), " "))))
# change the weights by up to 10%
V(net2)$weight <- V(net2)$weight*rescale(rnorm(vcount(net2)), to = c(0.9, 1.1))
graph_list[[i+1]] <- net2
print(proc.time()-time)
}
Result
So, to validate that the code is working, I checked a small number of iterations with a limited number of nodes: 10 iterations with 4 nodes. In each iteration, I added 3 new nodes and one new tie.
I ran this simulation with three different settings.
The first setting uses only the distance weight function: the closer two nodes are, the more likely a new tie forms between them.
The second setting uses only the node weight function: the more weight a node has, the more likely new ties form with it.
The third setting uses both the distance and node weight functions: the heavier the nodes and the closer they are, the more likely a new tie forms between them.
Observe in the figures below how each setting produces different network behavior.
Only Distance Matters
Only Node Weight Matters
Both Node Weight and Distance Matter
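For reference, the three settings above can be expressed through the two impact parameters defined at the top of the code; the exact values below are an assumption, not necessarily the ones used for the figures:
# Sketch: possible parameter combinations for the three settings (values assumed)
settings <- list(
  distance_only = list(impact_of_distances = 1.0, impact_of_nodes = 0.0),
  node_only = list(impact_of_distances = 0.0, impact_of_nodes = 1.0),
  both = list(impact_of_distances = 0.3, impact_of_nodes = 0.7)
)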

What should be an Optimal value of K in K means Clustering for it to be implemented on ANY Dataset?

As the question says, I'm making a visualization tool that should work for any dataset provided. What optimal K value should I select, and how?
You can use the Calinski criterion from the vegan package; also, the phrasing of your question is a little debatable. I hope this is what you are expecting; please comment if it is not.
For example, you can do:
n = 100
g = 6
set.seed(g)
d <- data.frame(
x = unlist(lapply(1:g, function(i) rnorm(n/g, runif(1)*i^2))),
y = unlist(lapply(1:g, function(i) rnorm(n/g, runif(1)*i^2))))
require(vegan)
fit <- cascadeKM(scale(d, center = TRUE, scale = TRUE), 1, 10, iter = 1000)
plot(fit, sortg = TRUE, grpmts.plot = TRUE)
calinski.best <- as.numeric(which.max(fit$results[2,]))
cat("Calinski criterion optimal number of clusters:", calinski.best, "\n")
This results in a value of 5, which means you can use 5 clusters. The criterion works from the fundamentals of the within-group and between-group variance of k-means clustering. You can also write manual code based on that.
From the documentation:
criterion: The criterion that will be used to select the best partition. The default value is "calinski", which refers to the Calinski-Harabasz (1974) criterion. The simple structure index ("ssi") is also available. Other indices are available in function clustIndex (package cclust). In our experience, the two indices that work best and are most likely to return their maximum value at or near the optimal number of clusters are "calinski" and "ssi".
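For comparison, the same search can be run with the "ssi" criterion mentioned above (a sketch reusing the d object from the earlier example):
require(vegan)
fit_ssi <- cascadeKM(scale(d, center = TRUE, scale = TRUE), 1, 10, iter = 1000, criterion = "ssi")
ssi.best <- as.numeric(which.max(fit_ssi$results[2,]))
cat("SSI criterion optimal number of clusters:", ssi.best, "\n")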
A manual implementation would look something like this. At the first iteration there is no between-group sum of squares (SSB), so the total sum of squares (TSS) equals the within-group sum of squares (WSS):
wss <- (nrow(d)-1)*sum(apply(d,2,var))
# TSS = WSS at k = 1: with no between-group variance, the total sum of squares
# equals the within-group sum of squares
for (i in 2:15) wss[i] <- sum(kmeans(d, centers=i)$withinss)
# From k = 2 onward TSS stays constant while the between-group sum of squares
# increases, so the within-group sum of squares decreases.
# Plot the within-group sum of squares for 1 to 15 clusters (the upper limit of
# 15 is a choice; decide how many cluster counts you want to try).
plot(1:15, wss, type="b", xlab="Number of Clusters",
ylab="Within groups sum of squares",col="mediumseagreen",pch=12)
The output of the above can look like the plot below. The point after which the line flattens out (the "elbow") is the one to pick as the optimum cluster size; in this case it is 5:

How to predict cluster labeling using DBSCAN object and Gower distance matrix for new data in R

I'm having an issue with predicting cluster labels for test data, based on a dbscan clustering model built on the training data.
I used a Gower distance matrix when creating the model:
> gowerdist_train <- daisy(analdata_train,
metric = "gower",
stand = FALSE,
type = list(asymm = c(5,6)))
Using this gowerdist matrix, the dbscan clustering model created was:
> sb <- dbscan(gowerdist_train, eps = .23, minPts = 50)
Then I try to use predict to label a test dataset using the above dbscan object:
> predict(sb, newdata = analdata_test, data = analdata_train)
But I receive the following error:
Error in frNN(rbind(data, newdata), eps = object$eps, sort = TRUE,
...) : x has to be a numeric matrix
I can guess where this error comes from: probably the Gower distance matrix has not been created for the test data.
My question is, should I create a Gower distance matrix for all data (analdata_train + analdata_test) separately and feed it into predict? How else would the algorithm know the distance of the test data from the train data, in order to label it?
In that case, would the newdata parameter be the new Gower distance matrix that contains ALL (train + test) data, and would the data parameter in predict be the training distance matrix, gowerdist_train?
What I am not quite sure about is how the predict algorithm would distinguish between the test and train data in the newly created gowerdist_all matrix.
The two matrices (the new gowerdist for all data and gowerdist_train) would obviously not have the same dimensions. Also, it doesn't make sense to me to create a Gower distance matrix only for the test data, because the distances must be relative to the training data, not computed among the test data alone.
Edit:
I tried using gower distance matrix for all data (train + test) as my new data and received an error when fed to predict:
> gowerdist_all <- daisy(rbind(analdata_train, analdata_test),
metric = "gower",
stand = FALSE,
type = list(asymm = c(5,6)))
> test_sb_label <- predict(sb, newdata = gowerdist_all, data = gowerdist_train)
ERROR: Error in 1:nrow(data) : argument of length 0
In addition: Warning message:
In rbind(data, newdata) : number of columns of result is not a multiple of vector length (arg 1)
So, my suggested solution doesn't work.
I decided to write code that uses the kNN algorithm in dbscan to predict the cluster labeling, using the Gower distance matrix. The code is not very pretty and definitely not programmatically efficient, but it works. I'm happy for any suggestions that would improve it.
The pseudocode is:
1) calculate a new Gower distance matrix for all data, including test and train
2) use the above distance matrix in the kNN function (dbscan package) to determine the k nearest neighbours of each test data point.
3) determine the cluster labels for all those nearest points for each test point. Some of them will have no cluster label because they are test points themselves
4) create a count matrix to count the frequency of clusters among the k nearest points for each test point
5) use a very simple likelihood calculation to choose the cluster for each test point based on its neighbours' clusters (the maximum frequency). This part also considers the neighbouring test points: the cluster for a test point is chosen only when the maximum frequency remains the largest even after adding the number of neighbouring test points to the other clusters. Otherwise, the cluster for that test point is left undecided until the next iteration, when hopefully more of its neighbouring test points have had their cluster labels decided based on their own neighbours.
6) repeat the above (steps 2-5) until all clusters have been decided
** Note: this algorithm doesn't always converge (once you do the math, it's obvious why). So in the code I break out of the algorithm when the number of unclustered test points doesn't change for a while, then repeat steps 2-6 with a new kNN (change the number of nearest neighbours and run the code again). This ensures more points are involved in the decision in the next round. I've tried both larger and smaller k values and both work; it would be good to know which is better. So far I haven't had to run the code more than twice to decide the clusters for the test data points.
Here is the code:
#calculate gower distance for all data (test + train)
gowerdist_test <- daisy(all_data[rangeofdataforgowerdist],
metric = "gower",
stand = FALSE,
type = list(asymm = listofasymmvars),
weights = Weights)
summary(gowerdist_test)
Then use the code below to label clusters for test data.
#library(dbscan)
# find the k nearest neighbours of each point and order them by distance
iteration_MAX <- 50
iteration_current <- 0
maxUnclusterRepeatNum <- 10
repeatedUnclustNum <- 0
unclusteredNum <- sum(is.na(all_data$Cluster))
previousUnclustereNum <- sum(is.na(all_data$Cluster))
nn_k = 30 # number of nearest neighbours
while (anyNA(all_data$Cluster) & iteration_current < iteration_MAX)
{
if (repeatedUnclustNum >= maxUnclusterRepeatNum) {
print(paste("Max number of repetition (", maxUnclusterRepeatNum ,") for same unclustered data has reached. Clustering terminated unsuccessfully."))
invisible(gc())
break;
}
nn_test <- kNN(gowerdist_test, k = nn_k, sort = TRUE)
# for the TEST points in all data, find the closest TRAIN points and decide statistically which cluster they could belong to, based on the clusters of the nearest TRAIN points
test_matrix <- nn_test$id[1: nrow(analdata_test),] #create matrix of test data knn id's
numClusts <- nlevels(as.factor(sb_train$cluster))
NameClusts <- as.character(levels(as.factor(sb_train$cluster)))
count_clusters <- matrix(0, nrow = nrow(analdata_test), ncol = numClusts + 1) #create a count matrix that would count number of clusters + NA
colnames(count_clusters) <- c("NA", NameClusts) #name each column of the count matrix to cluster numbers
# get the cluster number of each of the k nearest neighbours of each test point
for (i in 1:nrow(analdata_test))
for (j in 1:nn_k)
{
test_matrix[i,j] <- all_data[nn_test$id[i,j], "Cluster"]
}
# populate the count matrix for the total clusters of the neighbours for each test point
for (i in 1:nrow(analdata_test))
for (j in 1:nn_k)
{
if (!is.na(test_matrix[i,j]))
count_clusters[i, c(as.character(test_matrix[i,j]))] <- count_clusters[i, c(as.character(test_matrix[i,j]))] + 1
else
count_clusters[i, c("NA")] <- count_clusters[i, c("NA")] + 1
}
# add NA's (TEST points) to the other clusters for comparison
count_clusters_withNA <- count_clusters
for (i in 2:ncol(count_clusters))
{
count_clusters_withNA[,i] <- t(rowSums(count_clusters[,c(1,i)]))
}
# This block of code decides the maximum count of cluster for each row considering the number other test points (NA clusters) in the neighbourhood
max_col_countclusters <- apply(count_clusters,1,which.max) #get the column that corresponds to the maximum value of each row
for (i in 1:length(max_col_countclusters)) #insert the maximum value of each row in its associated column in count_clusters_withNA
count_clusters_withNA[i, max_col_countclusters[i]] <- count_clusters[i, max_col_countclusters[i]]
max_col_countclusters_withNA <- apply(count_clusters_withNA,1,which.max) #get the column that corresponds to the maximum value of each row with NA added
compareCountClust <- max_col_countclusters_withNA == max_col_countclusters #compare the two count matrices
all_data$Cluster[1:nrow(analdata_test)] <- ifelse(compareCountClust, NameClusts[max_col_countclusters - 1], all_data$Cluster) #you subtract one because of additional NA column
iteration_current <- iteration_current + 1
unclusteredNum <- sum(is.na(all_data$Cluster))
if (previousUnclustereNum == unclusteredNum)
repeatedUnclustNum <- repeatedUnclustNum + 1
else {
repeatedUnclustNum <- 0
previousUnclustereNum <- unclusteredNum
}
print(paste("Iteration: ", iteration_current, " - Number of remaining unclustered:", sum(is.na(all_data$Cluster))))
if (unclusteredNum == 0)
print("Cluster labeling successfully Completed.")
invisible(gc())
}
I guess you can use this for any other type of clustering algorithm; it doesn't matter how you decided the cluster labels for the train data, as long as they are in all_data before running the code.
Hope this helps.
It's not the most efficient or rigorous code, so I'm happy to see suggestions on how to improve it.
*Note: I used t-SNE to compare the clustering of the train data with the test data, and it looks impressively clean, so it seems to be working.
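For what it's worth, a minimal sketch of the kind of t-SNE check mentioned in the note above (the Rtsne package and the perplexity value are my assumptions; the objects come from the code above):
library(Rtsne)
set.seed(1)
# Rtsne accepts a precomputed distance matrix when is_distance = TRUE
tsne_out <- Rtsne(as.matrix(gowerdist_test), is_distance = TRUE, perplexity = 30)
plot(tsne_out$Y, col = as.factor(all_data$Cluster), pch = 19,
     xlab = "t-SNE 1", ylab = "t-SNE 2")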

How do I calculate weighted degree distributions with igraph in R?

Consider a dataframe df where the first two columns are node pairs and successive columns V1, V2, ..., Vn represent flows between the nodes (potentially 0, implying no edge for that column's network). I would like to conduct analysis on degree, community detection, and other network measures using the flows as weights.
Then to analyze the graph with respect to the weights in V1 I do:
# create graph and explore unweighted degrees with respect to V1
g <- graph.data.frame( df[df$V1!=0,] )
qplot(degree(g))
x <- 0:max(degree(g))
qplot(x,degree.distribution(g))
# set weights and explore weighted degrees using V1
E(g)$weights <- E(g)$V1
qplot(degree(g))
The output from the third qplot is no different than the first. What am I doing wrong?
Update:
So graph.strength is what I am looking for, but graph.strength(g) in my case gives standard degree output followed by:
Warning message:
In graph.strength(g) :
At structural_properties.c:4928 :No edge weights for strength calculation,
normal degree
I must be setting the weights incorrectly. Is it not sufficient to do E(g)$weights <- E(g)$V1, and why can g$weights differ from E(g)$weights?
The graph.strength function can be given a weights vector via its weights argument. I think what is going wrong in your code is that the edge attribute should be called E(g)$weight, not E(g)$weights; igraph only picks up the attribute named weight automatically.
I created an equivalent of degree.distribution for weighted graphs in my own code by taking the degree.distribution source and making one change:
graph.strength.distribution <- function (graph, cumulative = FALSE, ...)
{
if (!is.igraph(graph)) {
stop("Not a graph object")
}
# graph.strength() instead of degree()
cs <- graph.strength(graph, ...)
hi <- hist(cs, -1:max(cs), plot = FALSE)$density
if (!cumulative) {
res <- hi
}
else {
res <- rev(cumsum(rev(hi)))
}
res
}
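A quick usage sketch with the question's objects (this assumes the strengths are integer-valued, since the hist() breaks above are integers):
library(ggplot2) # for qplot, as in the question
E(g)$weight <- E(g)$V1 # note: "weight", not "weights"
ws <- graph.strength(g)
wd <- graph.strength.distribution(g)
qplot(0:max(ws), wd, xlab = "Strength (weighted degree)", ylab = "Density")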
