Adding ties in a network based on node attribute (weight) - r

I am simulating network change over time using igraph in R and am looking for an efficient and scalable way to code this for use in business.
The main drivers of network change are:
New nodes
New ties
New node weights
In the first stage there is a network of 100 nodes in which 10% are randomly connected. The node weights are also assigned at random. The network is undirected. There are 100 stages.
In each of the following stages:
Ten (10) new nodes occur randomly and are added to the model. They are unconnected in this stage.
The node weights of these new nodes are assigned at random.
New ties between two nodes at time t+1 are a probabilistic function of the network distance between these nodes and of the node weights at the previous stage (time t). Nodes at greater network distance are less likely to connect than nodes at shorter distance. The decay function is exponential.
Nodes with greater weight attract more ties than those with smaller weights. The relationship between node weight and the probability of tie formation should be super-linear (a sketch of one possible functional form follows this list).
In each step, new ties equal to 10% of the total existing ties are added according to the previous two points.
The network ties and nodes from previous stages are carried over (i.e. the networks are cumulative).
At each stage, each node weight can change randomly by up to 10% of its current value (i.e. a weight of 1 can change to a value in [0.9, 1.1] at t+1).
At each stage, the network needs to be saved.
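For concreteness, here is a sketch of one possible functional form for the tie probability implied by the rules above (beta and alpha are illustrative parameters, not part of the specification):
# Illustrative tie probability for a dyad (i, j):
# exponential decay in network distance, super-linear effect of the node weight.
# beta > 0 and alpha > 1 are free parameters to tune.
tie_prob <- function(distance_ij, weight_j, beta = 0.5, alpha = 2) {
  exp(-beta * distance_ij) * weight_j^alpha
}
tie_prob(distance_ij = 3, weight_j = 1.2)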
How can this be written?
Edit: these networks will be examined on a number of graph-level characteristics at a later stage
This is what I have now, but it doesn't include the node weights. How do we include them efficiently?
library(igraph)
library(stringr)   # for str_split() below
# number of nodes and ties to start with
n = 100
p = 0.1
r = -2
# build random network
net1 <- erdos.renyi.game(n, p, "gnp", directed = F)
#plot(net1)
write_graph(net1, paste0("D://network_sim_0.dl"), format="pajek")
for(i in seq(1, 100, 1)){
  print(i)
  time <- proc.time()
  net1 <- read_graph(paste0("D://network_sim_", i-1, ".dl"), format="pajek")
  # how many ties will we build in the next stage?
  new_ties <- round(0.1*ecount(net1), 0) # 10% of those in net1
  # add 10 new nodes
  net2 <- add_vertices(net1, 10)
  # get network distance for each dyad in net1 + the new nodes
  spel <- data.table::melt(shortest.paths(net2))
  names(spel) <- c("node_i", "node_j", "distance")
  # replace Inf with max observed value + 1
  spel$distance[which(!is.finite(spel$distance))] <- max(spel$distance[is.finite(spel$distance)]) + 1
  # assign a probability (?) with an exponential decay function. Smallest distance == greatest prob.
  spel$prob <- -0.5 * spel$distance^r # is this what I need?
  #hist(spel$prob, freq=T, xlab="Probability of tie-formation")
  #hist(spel$distance, freq=T, xlab="Network Distance")
  # lets sample new ties from this probability
  spel$index <- seq_along(spel$prob)
  to_build <- subset(spel, index %in% sample(spel$index, size = new_ties, prob=spel$prob))
  net2 <- add_edges(net2, as.numeric(unlist(str_split(paste(to_build$node_i, to_build$node_j), " "))))
  # save the network
  write_graph(net2, paste0("D://network_sim_", i, ".dl"), format="pajek")
  print(proc.time()-time)
}

I will try to answer this question as far as I understand it.
I made a couple of assumptions, which I should clarify.
First, what distribution will node weights follow?
If you are modeling an event that occurs naturally, it is most likely that the node weights follow a normal distribution. However, if the event is socially oriented and other social mechanisms influence the event or its popularity, the node weights might follow a different distribution, most likely a power-law distribution.
This is especially likely to be true for customer-related behaviors. So it would be beneficial to consider which random distribution you will use for the node weights.
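For illustration, here are two common choices one could sketch (the parameter values are arbitrary):
# Two illustrative ways to draw initial node weights (arbitrary parameters)
w_normal <- rnorm(100, mean = 1, sd = 0.2)     # roughly symmetric weights around 1
w_heavy  <- (1 - runif(100))^(-1/2.5)          # heavy-tailed, Pareto-like weights
hist(w_normal); hist(w_heavy)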
For the following example, I draw each node's weight from a normal distribution. At the end of each iteration, I let the node weights change by up to 10%, i.e. by a multiplicative factor in [0.9, 1.1].
Second, what is the probability function of tie formation?
We have two inputs for making a decision: distance weights and node weights. So we will create a function that uses these two inputs to define the probability weights. As I understand it, the smaller the distance, the higher the likelihood of a tie; and the greater the node weight, the higher the likelihood as well.
It might not be the best solution, but I did the following:
First, I calculate the exponential decay function of the distances and call the result the distance weights. Then I take the node weights and create a super-linear function using both the distance and node weights.
So, there are some parameters you can play with and see whether you get a result you want.
By the way, I did not change most of your code. Also, I did not focus much on processing time; there is still room to improve.
library(scales)
library(stringr)
library(igraph)
# number of nodes and ties to start with
n <- 100
p <- 0.2
number_of_simulation <- 100
new_nodes <- 15 ## new nodes for each iteration
## Parameters ##
## How much distance will be weighted?
## Exponential decay parameter
beta_distance_weight <- -.4
## probability function parameters for the distance and node weights
impact_of_distances <- 0.3 ## how important is the distance weights?
impact_of_nodes <- 0.7 ## how important is the node weights?
power_base <- 5.5 ## how important is having a high score? Preferential attachment / super-linear function
# build random network
net1 <- erdos.renyi.game(n, p, "gnp", directed = F)
# Assign normally distributed random weights
V(net1)$weight <- rnorm(vcount(net1))
graph_list <- list(net1)
for(i in seq(1, number_of_simulation, 1)){
  print(i)
  time <- proc.time()
  net1 <- graph_list[[i]]
  # how many ties will we build in the next stage?
  new_ties <- round(0.1*ecount(net1), 0) # 10% of those in net1
  # add new nodes
  net2 <- add_vertices(net1, new_nodes)
  ## Add random weights to the new nodes from a normal distribution
  V(net2)$weight[is.na(V(net2)$weight)] <- rnorm(new_nodes)
  # get network distance for each dyad in net1 + the new nodes
  spel <- reshape2::melt(shortest.paths(net2))
  names(spel) <- c("node_i", "node_j", "distance")
  # replace Inf with max observed value + 1
  spel$distance[which(!is.finite(spel$distance))] <- max(spel$distance[is.finite(spel$distance)]) + 1
  # Do not select dyads that are self-loops or already linked
  spel <- spel[!spel$distance %in% c(0,1), ]
  # Assign distance weights to each dyad
  spel$distance_weight <- exp(beta_distance_weight*spel$distance)
  #hist(spel$distance_weight, freq=T, xlab="Probability of tie-formation")
  #hist(spel$distance, freq=T, xlab="Network Distance")
  ## Get the node weights for merging the data with the distances
  node_weights <- data.frame(id = 1:vcount(net2), node_weight = V(net2)$weight)
  spel <- merge(spel, node_weights, by.x = 'node_j', by.y = 'id')
  ## probability is a function of distance and node weight
  spel$prob <- power_base^((impact_of_distances * spel$distance_weight) + (impact_of_nodes * spel$node_weight))
  spel <- spel[order(spel$prob, decreasing = T), ]
  # lets sample new ties, favouring the highest-probability (lowest-rank) dyads
  spel$index <- seq_along(spel$prob)
  to_build <- subset(spel, index %in% sample(spel$index, new_ties, prob = 1/spel$index))
  net2 <- add_edges(net2, as.numeric(unlist(str_split(paste(to_build$node_i, to_build$node_j), " "))))
  # change the weights by up to 10%
  V(net2)$weight <- V(net2)$weight*rescale(rnorm(vcount(net2)), to = c(0.9, 1.1))
  graph_list[[i+1]] <- net2
  print(proc.time()-time)
}
To get the results or write the graph to Pajek, you can use the following:
lapply(seq_along(graph_list),function(x) write_graph(graph_list[[x]], paste0("network_sim_",x,".dl"), format="pajek"))
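Since the question mentions that the networks will later be examined on graph-level characteristics, here is a minimal sketch of how such summaries could be computed from graph_list (the metrics chosen are only examples):
# Example graph-level summaries per stage (metrics are illustrative)
graph_stats <- data.frame(
  stage        = seq_along(graph_list),
  nodes        = sapply(graph_list, vcount),
  edges        = sapply(graph_list, ecount),
  density      = sapply(graph_list, edge_density),
  transitivity = sapply(graph_list, transitivity),
  mean_weight  = sapply(graph_list, function(g) mean(V(g)$weight))
)
head(graph_stats)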
EDIT
To change the node weight distribution (for example, to a power-law-like one), you can use the following syntax.
library(scales)
library(stringr)
library(igraph)
# number of nodes and ties to start with
n <- 100
p <- 0.2
number_of_simulation <- 100
new_nodes <- 10 ## new nodes for each iteration
## Parameters ##
## How much distance will be weighted?
## Exponential decay parameter
beta_distance_weight <- -.4
## Node weights for power-law dist
power_law_parameter <- -.08
## probability function parameters for the distance and node weights
impact_of_distances <- 0.3 ## how important is the distance weights?
impact_of_nodes <- 0.7 ## how important is the node weights?
power_base <- 5.5 ## how important is having a high score? Preferential attachment / super-linear function
# build random network
net1 <- erdos.renyi.game(n, p, "gnp", directed = F)
## MADE A CHANGE HERE
# Assign power-law-like random weights
V(net1)$weight <- runif(vcount(net1))^power_law_parameter
graph_list <- list(net1)
for(i in seq(1, number_of_simulation, 1)){
  print(i)
  time <- proc.time()
  net1 <- graph_list[[i]]
  # how many ties will we build in the next stage?
  new_ties <- round(0.1*ecount(net1), 0) # 10% of those in net1
  # add 10 new nodes
  net2 <- add_vertices(net1, new_nodes)
  ## Add power-law-like random weights to the new nodes
  V(net2)$weight[is.na(V(net2)$weight)] <- runif(new_nodes)^power_law_parameter
  # get network distance for each dyad in net1 + the new nodes
  spel <- reshape2::melt(shortest.paths(net2))
  names(spel) <- c("node_i", "node_j", "distance")
  # replace Inf with max observed value + 2
  spel$distance[which(!is.finite(spel$distance))] <- max(spel$distance[is.finite(spel$distance)]) + 2
  # Do not select dyads that are self-loops or already linked
  spel <- spel[!spel$distance %in% c(0,1), ]
  # Assign distance weights to each dyad
  spel$distance_weight <- exp(beta_distance_weight*spel$distance)
  #hist(spel$distance_weight, freq=T, xlab="Probability of tie-formation")
  #hist(spel$distance, freq=T, xlab="Network Distance")
  ## Get the node weights for merging the data with the distances
  node_weights <- data.frame(id = 1:vcount(net2), node_weight = V(net2)$weight)
  spel <- merge(spel, node_weights, by.x = 'node_j', by.y = 'id')
  ## probability is a function of distance and node weight
  spel$prob <- power_base^((impact_of_distances * spel$distance_weight) + (impact_of_nodes * spel$node_weight))
  spel <- spel[order(spel$prob, decreasing = T), ]
  # lets sample new ties, favouring the highest-probability (lowest-rank) dyads
  spel$index <- seq_along(spel$prob)
  to_build <- subset(spel, index %in% sample(spel$index, new_ties, prob = 1/spel$index))
  net2 <- add_edges(net2, as.numeric(unlist(str_split(paste(to_build$node_i, to_build$node_j), " "))))
  # change the weights by up to 10%
  V(net2)$weight <- V(net2)$weight*rescale(rnorm(vcount(net2)), to = c(0.9, 1.1))
  graph_list[[i+1]] <- net2
  print(proc.time()-time)
}
Result
So, to validate that the code is working, I checked a small number of iterations with few nodes: 10 iterations starting from 4 nodes. In each iteration, I added 3 new nodes and one new tie.
I ran this simulation with three different settings.
The first setting uses only the distance weights: the closer two nodes are, the more likely a new tie will form between them.
The second setting uses only the node weights: the greater a node's weight, the more likely a new tie will form with it.
The third setting uses both the distance and the node weights: nodes that are both close and heavily weighted are the most likely to form new ties.
Observe how each setting produces different network behavior (a sketch of the corresponding parameter values follows the figures).
Only Distance Matters
Only Node Weight Matters
Both Node Weight and Distance Matter
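The three settings correspond to different values of the two impact parameters in the code above. The exact values used for the figures are not reproduced here, but schematically they look like this (illustrative values):
# Setting 1: only distance matters
impact_of_distances <- 1;   impact_of_nodes <- 0
# Setting 2: only node weight matters
impact_of_distances <- 0;   impact_of_nodes <- 1
# Setting 3: both matter
impact_of_distances <- 0.5; impact_of_nodes <- 0.5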

Related

How to permute a network in igraph for R?

I'm trying to write a code for a Monte Carlo procedure in R. My goal is to estimate the significance of a metric calculated for a weighted, unipartite, undirected network formatted for the package igraph.
So far, I included the following steps in the code:
1. Create the weighted, unipartite, undirected network and calculate the observed Louvain modularity
nodes <- read.delim("nodes.txt")
links <- read.delim("links.txt")
anurosnet <- graph_from_data_frame(d=links, vertices=nodes, directed=F)
anurosnet
modularity1 = cluster_louvain(anurosnet)
modularity1$modularity #observed value
obs=modularity1$modularity
obs
real<-data.frame(obs)
real
2. Create the empty vector
Nperm = 9 #I am starting with a low n, but intend to use at least 1000 permutations
randomized.modularity=matrix(nrow=length(obs),ncol=Nperm+1)
row.names(randomized.modularity)=names(obs)
randomized.modularity[,1]=obs
randomized.modularity
3. Permute the original network preserving its characteristics, calculate the Louvain modularity for all randomized networks, and compile the results in the vector
i<-1
while(i<=Nperm){
  randomnet <- rewire(anurosnet, with=each_edge(0.5)) #rewire vertices with constant probability
  E(randomnet)$weight <- sample(E(anurosnet)$weight) #shuffle initial weights and assign them randomly to edges
  mod <- (cluster_louvain(randomnet))
  mod$modularity
  linha = mod$modularity
  randomized.modularity[,i+1] = linha
  print(i)
  i = i+1
}
randomized.modularity #Here the result is not as expected
4. Plot the observed value against the distribution of randomized values
niveis <- row.names(randomized.modularity)
for(k in niveis)
{
  if(any(is.na(randomized.modularity[k,]) == TRUE))
  {
    print(c(k, "metrica tem NA"))
  } else {
    nome.arq <- paste("modularity", k, ".png", sep="")
    png(filename = nome.arq, res = 300, height = 15, width = 21, unit = "cm")
    plot(density(randomized.modularity[k,]), main = "Observed vs. randomized")
    abline(v = obs[k], col = "red", lwd = 2, xlab = "")
    dev.off()
    print(k)
    nome.arq <- paste("Patefield_Null_mean_sd_", k, ".txt", sep="")
    write.table(cbind(mean(randomized.modularity[k,]), sd(randomized.modularity[k,])), file = paste(nome.arq, sep=""),
                sep = " ", row.names = TRUE, col.names = FALSE)
  }
}
5. Estimate the P-value (significance)
significance=matrix(nrow=nrow(randomized.modularity),ncol=3)
row.names(significance)=row.names(randomized.modularity)
colnames(significance)=c("p (rand <= obs)", "p (rand >= obs)", "p (rand=obs)")
signif.sup=function(x) sum(x>=x[1])/length(x)
signif.inf=function(x) sum(x<=x[1])/length(x)
signif.two=function(x) ifelse(min(x)*2>1,1,min(x)*2)
significance[,1]=apply(randomized.modularity,1,signif.inf)
significance[,2]=apply(randomized.modularity,1,signif.sup)
significance[,3]=apply(significance[,-3],1,signif.two)
significance
Something is going wrong in step 3. I expected the vector to be filled with 10 values, but for some reason it stops after a while.
The slot "mod$modularity" suddenly receives 2 values instead of 1.
The two TXT files mentioned in the beginning of the code can be downloaded from here:
https://1drv.ms/t/s!AmcVKrxj94WClv8yQyqyl4IWk5mNvQ
https://1drv.ms/t/s!AmcVKrxj94WClv8z_Pow5Tg2U7mjLw
Could you please help me?
Your error is due to a mismatch in dimensions between your randomized.modularity matrix and some of your randomized modularity results. In your example the matrix ends up being [1 x (Nperm+1)], however cluster_louvain() sometimes returns 2 modularity scores (one per level of its hierarchy) during the permutations. To fix this I simply store the results in a list. The rest of your analysis will need to be adjusted since you have a varying number of modularity scores.
library(igraph)
nodes <- read.delim("nodes.txt")
links <- read.delim("links.txt")
anurosnet <- graph_from_data_frame(d=links, vertices=nodes, directed=F)
anurosnet
modularity1 = cluster_louvain(anurosnet)
modularity1$modularity #observed value
obs <- modularity1$modularity
obs
real<-data.frame(obs)
real
Nperm = 100 #I am starting with a low n, but intend to use at least 1000 permutations
#randomized.modularity <- matrix(nrow=length(obs),ncol=Nperm+1)
#row.names(randomized.modularity) <- names(obs)
randomized.modularity <- list()
randomized.modularity[1] <- obs
randomized.modularity
for(i in 1:Nperm){
  randomnet <- rewire(anurosnet, with=each_edge(0.5)) #rewire vertices with constant probability
  E(randomnet)$weight <- sample(E(anurosnet)$weight) #shuffle initial weights and assign them randomly to edges
  mod <- (cluster_louvain(randomnet))
  mod$modularity
  linha = mod$modularity
  randomized.modularity <- c(randomized.modularity, list(linha))
}
randomized.modularity
Better way to write the loop
randomized.modularity <- lapply(seq_len(Nperm), function(x){
  randomnet <- rewire(anurosnet, with=each_edge(0.5)) #rewire vertices with constant probability
  E(randomnet)$weight <- sample(E(anurosnet)$weight) #shuffle initial weights and assign them randomly to edges
  return(cluster_louvain(randomnet)$modularity)
})
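cluster_louvain() reports one modularity value per level of its multi-level optimisation, which is why the length varies. If you want a single score per permutation for the rest of the analysis, one option (a sketch, keeping the final, top level of each run) is:
# Keep one modularity value per permutation: the last (top) level of each run
final.modularity <- sapply(randomized.modularity, function(m) m[length(m)])
hist(final.modularity, main = "Randomized modularity")
abline(v = obs[length(obs)], col = "red", lwd = 2)   # observed value for comparison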

generating random x and y coordinates with a minimum distance

Is there a way in R to generate random coordinates with a minimum distance between them?
E.g. what I'd like to avoid
x <- c(0,3.9,4.1,8)
y <- c(1,4.1,3.9,7)
plot(x~y)
This is a classical problem from stochastic geometry. Completely random points in space where the number of points falling in disjoint regions are independent of each other corresponds to a homogeneous Poisson point process (in this case in R^2, but could be in almost any space).
An important feature is that the total number of points has to be random before you can have independence of the counts of points in disjoint regions.
For the Poisson process points can be arbitrarily close together. If you define a process by sampling the Poisson process until you don't have any points that are too close together you have the so-called Gibbs Hardcore process. This has been studied a lot in the literature and there are different ways to simulate it. The R package spatstat has functions to do this. rHardcore is a perfect sampler, but if you want a high intensity of points and a big hard core distance it may not terminate in finite time... The distribution can be obtained as the limit of a Markov chain and rmh.default lets you run a Markov chain with a given Gibbs model as its invariant distribution. This finishes in finite time but only gives a realisation of an approximate distribution.
In rmh.default you can also simulate conditional on a fixed number of points. Note that when you sample in a finite box there is of course an upper limit to how many points you can fit with a given hard core radius, and the closer you are to this limit the more problematic it becomes to sample correctly from the distribution.
Example:
library(spatstat)
beta <- 100; R = 0.1
win <- square(1) # Unit square for simulation
X1 <- rHardcore(beta, R, W = win) # Exact sampling -- beware it may run forever for some par.!
plot(X1, main = paste("Exact sim. of hardcore model; beta =", beta, "and R =", R))
minnndist(X1) # Observed min. nearest neighbour dist.
#> [1] 0.102402
Approximate simulation
model <- rmhmodel(cif="hardcore", par = list(beta=beta, hc=R), w = win)
X2 <- rmh(model)
#> Checking arguments..determining simulation windows...Starting simulation.
#> Initial state...Ready to simulate. Generating proposal points...Running Metropolis-Hastings.
plot(X2, main = paste("Approx. sim. of hardcore model; beta =", beta, "and R =", R))
minnndist(X2) # Observed min. nearest neighbour dist.
#> [1] 0.1005433
Approximate simulation conditional on number of points
X3 <- rmh(model, control = rmhcontrol(p=1), start = list(n.start = 42))
#> Checking arguments..determining simulation windows...Starting simulation.
#> Initial state...Ready to simulate. Generating proposal points...Running Metropolis-Hastings.
plot(X3, main = paste("Approx. sim. given n =", 42))
minnndist(X3) # Observed min. nearest neighbour dist.
#> [1] 0.1018068
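If you need plain x/y vectors like in the question, the coordinates can be pulled straight out of the simulated point patterns (coords() comes with spatstat):
xy <- coords(X2)      # data frame with columns x and y
head(xy)
plot(xy$x ~ xy$y)     # same style of plot as in the question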
OK, how about this? You just generate random number pairs without restriction and then remove the ones which are too close. This could be a great start for that:
minimumDistancePairs <- function(x, y, minDistance){
  i <- 1
  repeat{
    distance <- sqrt((x-x[i])^2 + (y-y[i])^2) < minDistance # TRUE where another point is too close (Pythagorean theorem)
    distance[i] <- FALSE # distance to oneself is always zero
    if(any(distance)) { # if too close to any other point
      x <- x[-i] # remove element from x
      y <- y[-i] # and remove element from y
    } else { # otherwise...
      i = i + 1 # repeat the procedure with the next element
    }
    if (i > length(x)) break
  }
  data.frame(x, y)
}
minimumDistancePairs(
c(0,3.9,4.1,8)
, c(1,4.1,3.9,7)
, 1
)
will lead to
x y
1 0.0 1.0
2 4.1 3.9
3 8.0 7.0
Be aware, though, that after this filtering the remaining points are no longer a simple random sample (however you solve the problem).
You can use rejection sampling: https://en.wikipedia.org/wiki/Rejection_sampling
The principle is simple: you resample until your data satisfy the condition.
> set.seed(1)
>
> x <- rnorm(2)
> y <- rnorm(2)
> (x[1]-x[2])^2+(y[1]-y[2])^2
[1] 6.565578
> while((x[1]-x[2])^2+(y[1]-y[2])^2 > 1) {
+ x <- rnorm(2)
+ y <- rnorm(2)
+ }
> (x[1]-x[2])^2+(y[1]-y[2])^2
[1] 0.9733252
>
The following is a naive hit-and-miss approach which for some choices of parameters (which were left unspecified in the question) works well. If performance becomes an issue, you could experiment with the package gpuR which has a GPU-accelerated distance matrix calculation.
rand.separated <- function(n, x0, x1, y0, y1, d, trials = 1000){
  for(i in 1:trials){
    nums <- cbind(runif(n, x0, x1), runif(n, y0, y1))
    if(min(dist(nums)) >= d) return(nums)
  }
  return(NA) # no luck
}
This repeatedly draws samples of size n in [x0,x1] x [y0,y1] and throws a sample away if it does not satisfy the minimum-distance condition. As a safety measure, trials guards against an infinite loop. If solutions are hard to find or n is large, you might need to adjust trials.
For example:
> set.seed(2018)
> nums <- rand.separated(25,0,10,0,10,0.2)
> plot(nums)
runs almost instantly and produces a plot of 25 well-separated points.
I'm not sure what you are asking.
If you want random coordinates, here is one way:
c(
  runif(1, max=y[1], min=x[1]),
  runif(1, max=y[2], min=x[2]),
  runif(1, min=y[3], max=x[3]),
  runif(1, min=y[4], max=x[4])
)

Error in NbClust: not enough objects to cluster

I am trying to use the NbClust method in R to determine the best number of clusters in a cluster analysis following the approach in the book from Manning.
However, I get an error message saying:
Error in hclust(md, method = "average"): must have n >= 2 objects to
cluster.
This happens even though the hclust call itself appears to work. Therefore, I assume the problem is (as the error message also suggests) that NbClust tries to create groups with only one object inside.
Here is my code:
mydata = read.table("PLR_2016_WM_55_5_Familienstand_aufbereitet.csv", skip = 0, sep = ";", header = TRUE)
mydata <- mydata[-1] # Drop the first column (int)
data.transformed <- t(mydata) # Transformation of matrix
data.scale <- scale(data.transformed) # Scaling of table
data.dist <- dist(data.scale) # Calculates distances between points
fit.average <- hclust(data.dist, method = "average")
plot(fit.average, hang = -1, cex = .8, main = "Average Linkage Clustering")
library(NbClust)
nc <- NbClust(data.scale, distance="euclidean",
min.nc=2, max.nc=15, method="average")
I found a similar problem here, but I was not able to adapt the code.
There are some problems in your dataset.
The last 4 rows do not contain data and must be deleted.
mydata <- read.table("PLR_2016_WM_55_5_Familienstand_aufbereitet.csv", skip = 0, sep = ";", header = TRUE)
mydata <- mydata[1:(nrow(mydata)-4),]
mydata[,1] <- as.numeric(mydata[,1])
Now rescale the dataset:
data.transformed <- t(mydata) # Transformation of matrix
data.scale <- scale(data.transformed) # Scaling of table
For some reason data.scale is not a full rank matrix:
dim(data.scale)
# [1] 72 447
qr(data.scale)$rank
# [1] 71
Hence, we delete a row from data.scale and transpose it:
data.scale <- t(data.scale[-72,])
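As an aside, if you would rather not hard-code which row to drop, the linearly dependent row can be found programmatically. This is only a sketch using base R's pivoted QR, meant as a replacement for the t(data.scale[-72,]) line above:
# Starting from the original 72 x 447 scaled matrix:
# pivoted QR on the transpose ranks the rows of data.scale by linear independence
qr_rows <- qr(t(data.scale))
keep <- sort(qr_rows$pivot[seq_len(qr_rows$rank)])   # indices of a maximal independent set of rows
data.scale <- t(data.scale[keep, ])                  # drops one dependent row, analogous to [-72, ] above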
Now the dataset is ready for NbClust.
library(NbClust)
nc <- NbClust(data=data.scale, distance="euclidean",
min.nc=2, max.nc=15, method="average")
The output is
[1] "Frey index : No clustering structure in this data set"
*** : The Hubert index is a graphical method of determining the number of clusters.
In the plot of Hubert index, we seek a significant knee that corresponds to a
significant increase of the value of the measure i.e the significant peak in Hubert
index second differences plot.
*** : The D index is a graphical method of determining the number of clusters.
In the plot of D index, we seek a significant knee (the significant peak in Dindex
second differences plot) that corresponds to a significant increase of the value of
the measure.
*******************************************************************
* Among all indices:
* 8 proposed 2 as the best number of clusters
* 4 proposed 3 as the best number of clusters
* 8 proposed 4 as the best number of clusters
* 1 proposed 5 as the best number of clusters
* 1 proposed 8 as the best number of clusters
* 1 proposed 11 as the best number of clusters
***** Conclusion *****
* According to the majority rule, the best number of clusters is 2
*******************************************************************
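If you want to use the result programmatically rather than read the printed summary, the returned object also stores the votes and the suggested partition. A short sketch (component names as documented in ?NbClust):
# Tally the votes across indices and extract the suggested partition
table(nc$Best.nc[1, ])                       # row 1 holds the proposed number of clusters per index
barplot(table(nc$Best.nc[1, ]),
        xlab = "Number of clusters", ylab = "Number of indices")
groups <- nc$Best.partition                  # cluster membership for the majority-rule solution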

MCMC in R Modify Proposal

I've been working with MCMC for population genetics and I have some doubts.
I'm not experienced in statistics, and because of that I have some difficulty.
I have code to run an MCMC with 1000 iterations. I start by creating a matrix of 0's (50 columns for 50 individuals and 1000 rows for 1000 iterations).
Then I create a random vector to substitute the first line of the matrix. This vector has 1's and 2's, representing population 1 or population 2.
I also have genotype frequencies and the genotypes of the 50 individuals.
What I want is to, according to the genotype frequencies and genotypes, determine to what population an individual belongs.
Then, I'll keep changing the population assigned to a random individual and checking if the new value should be accepted.
niter <- 1000
z <- matrix(0,nrow=niter,ncol=ncol(targetinds))
z[1,] <- sample(1:2, size=ncol(z), replace=T)
lhood <- numeric(niter)
lhood[1] <- compute_lhood_K2(targetinds, z[1,], freqPops)
accepted <- 0
priorz <- c(1e-6, 0.999999)
for(i in 2:niter) {
  z[i,] <- z[i-1,]
  # propose new vector z, by selecting a random individual, proposing a new zi value
  selind <- sample(1:nind, size=1)
  # proposal probability of selecting individual at random
  proposal_ratio_ind <- log(1/nind)-log(1/nind)
  # propose a new index for the selected individual
  if(z[i,selind]==1) {
    z[i,selind] <- 2
  } else {
    z[i,selind] <- 1
  }
  # proposal probability of changing the index of individual is 1/2
  proposal_ratio_cluster <- log(1/2)-log(1/2)
  propratio <- proposal_ratio_ind+proposal_ratio_cluster
  # compute f(x_i|z_i*, p)
  # the probability of the selected individual given the two clusters
  probindcluster <- compute_lhood_ind_K2(targetinds[,selind],freqPops)
  # likelihood ratio f(x_i|z_i*,p)/f(x_i|z_i, p)
  lhoodratio <- probindcluster[z[i,selind]]-probindcluster[z[i-1,selind]]
  # prior ratio pi(z_i*)/pi(z_i)
  priorratio <- log(priorz[z[i,selind]])-log(priorz[z[i-1,selind]])
  # accept new value according to the MH ratio
  mh <- lhoodratio+propratio+priorratio
  # reject if the random value is larger than the MH ratio
  if(runif(1)>exp(mh)) {
    z[i,] <- z[i-1,] # keep the same z
    lhood[i] <- lhood[i-1] # keep the same likelihood
  } else { # if accepted
    lhood[i] <- lhood[i-1]+lhoodratio # update the likelihood
    accepted <- accepted+1 # increase the number of accepted
  }
}
It is asked that I have to change the proposal probability so that the new proposed values are proportional to the likelihood. This leads to a Gibbs sampling MCMC algorithm, supposedly.
I don't know what to change in the code to do this. I also don't understand very well the concept of a proposal probability and how to choose the prior.
I would be grateful if someone could clarify my doubts.
Your current proposal is done here:
# propose a new index for the selected individual
if(z[i,selind]==1) {
  z[i,selind] <- 2
} else {
  z[i,selind] <- 1
}
If the individual is assigned to cluster 1, then you propose to switch the assignment deterministically by assigning them to cluster 2 (and vice versa).
You didn't show us what freqPops is, but if you want to propose according to freqPops then I believe the above code has to be replaced by
z[i,selind] <- sample(c(1,2),size=1,prob=freqPops)
(at least that is what I understand when you say you want to propose based on the likelihood - however, that statement of yours is unclear).
For this now to be a valid MCMC/Gibbs sampling algorithm, you also need to change the next line of code:
proposal_ratio_cluster <- log(freqPops[z[i-1,selind]])-log(freqPops[z[i,selind]])
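For completeness: if you go one step further and propose the new z_i from its full conditional (likelihood times prior, normalised), every proposal is accepted and the update becomes a genuine Gibbs step, so the accept/reject block is no longer needed. A minimal sketch of that update, to be used inside your loop in place of the flip-and-accept steps, assuming compute_lhood_ind_K2() returns the two log-likelihoods as in your code:
# Gibbs-style update for the selected individual:
# sample z_i directly from its conditional distribution given the data and prior
loglik  <- compute_lhood_ind_K2(targetinds[, selind], freqPops)  # log f(x_i | z_i = 1, 2)
logcond <- loglik + log(priorz)                                  # unnormalised log conditional
cond_prob <- exp(logcond - max(logcond))
cond_prob <- cond_prob / sum(cond_prob)
z[i, selind] <- sample(1:2, size = 1, prob = cond_prob)          # no accept/reject step needed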

How to predict cluster labeling using DBSCAN object and Gower distance matrix for new data in R

I'm having an issue with predicting cluster labels for test data, based on a dbscan clustering model built on the training data.
I used a Gower distance matrix when creating the model:
> gowerdist_train <- daisy(analdata_train,
metric = "gower",
stand = FALSE,
type = list(asymm = c(5,6)))
Using this gowerdist matrix, the dbscan clustering model created was:
> sb <- dbscan(gowerdist_train, eps = .23, minPts = 50)
Then I try to use predict to label a test dataset using the above dbscan object:
> predict(sb, newdata = analdata_test, data = analdata_train)
But I receive the following error:
Error in frNN(rbind(data, newdata), eps = object$eps, sort = TRUE,
...) : x has to be a numeric matrix
I can guess where this error might be coming from: no Gower distance matrix has been created for the test data.
My question is: should I separately create a Gower distance matrix for all data (analdata_train + analdata_test) and feed it into predict? How else would the algorithm know the distance of the test data from the train data in order to assign labels?
In that case, would the newdata parameter be the new Gower distance matrix that contains ALL (train + test) data? And would the data parameter in predict be the training distance matrix, gowerdist_train?
What I am not quite sure about is how the predict algorithm would distinguish between the test and train data in the newly created gowerdist_all matrix.
The two matrices (the new gowerdist for all data and gowerdist_train) would obviously not have the same dimensions. Also, it doesn't make sense to me to create a Gower distance matrix only for the test data, because the distances must be relative to the train data, not just among the test data themselves.
Edit:
I tried using gower distance matrix for all data (train + test) as my new data and received an error when fed to predict:
> gowerdist_all <- daisy(rbind(analdata_train, analdata_test),
metric = "gower",
stand = FALSE,
type = list(asymm = c(5,6)))
> test_sb_label <- predict(sb, newdata = gowerdist_all, data = gowerdist_train)
ERROR: Error in 1:nrow(data) : argument of length 0 In addition:
Warning message: In rbind(data, newdata) : number of columns of
result is not a multiple of vector length (arg 1)
So, my suggested solution doesn't work.
I decided to write code that uses the kNN algorithm from the dbscan package to predict cluster labels using the Gower distance matrix. The code is not very pretty and definitely not programmatically efficient, but it works. Happy for any suggestions that would improve it.
The pseudocode is:
1) calculate new gower distance matrix for all data, including test and train
2) use the above distance matrix in kNN function (dbscan package) to determine the k nearest neighbours to each test data point.
3) determine the cluster labels for all those nearest points for each test point. Some of them will have no cluster labeling because they are test points themselves
4) create a count matrix to count the frequency of clusters for the k nearest points for each test point
5) use a very simple likelihood calculation to choose the cluster for the test point based on its neighbours' clusters (the maximum frequency). This part also considers the neighbouring test points: the cluster for the test point is chosen only when the maximum frequency remains the largest even after the number of neighbouring test points is added to each of the other clusters. Otherwise, it doesn't decide the cluster for that test point and waits for the next iteration, when hopefully more of its neighbouring test points have had their cluster labels decided based on their own neighbours.
6) repeat above (steps 2-5) until you've decided all clusters
** Note: this algorithm doesn't always converge (once you do the math, it's obvious why). So, in the code I break out of the algorithm when the number of unclustered test points doesn't change after a while. Then I repeat steps 2-6 with a new kNN (change the number of nearest neighbours and run the code again). This ensures more points are involved in the decision in the next round. I've tried both larger and smaller values of k and both work; it would be good to know which one is better. I haven't had to run the code more than twice so far to decide the clusters for the test data points.
Here is the code:
#calculate gower distance for all data (test + train)
gowerdist_test <- daisy(all_data[rangeofdataforgowerdist],
metric = "gower",
stand = FALSE,
type = list(asymm = listofasymmvars),
weights = Weights)
summary(gowerdist_test)
Then use the code below to label clusters for test data.
#library(dbscan)
# find the k nearest neibours for each point and order them with distance
iteration_MAX <- 50
iteration_current <- 0
maxUnclusterRepeatNum <- 10
repeatedUnclustNum <- 0
unclusteredNum <- sum(is.na(all_data$Cluster))
previousUnclustereNum <- sum(is.na(all_data$Cluster))
nn_k = 30 # number of nearest neighbours
while (anyNA(all_data$Cluster) & iteration_current < iteration_MAX)
{
  if (repeatedUnclustNum >= maxUnclusterRepeatNum) {
    print(paste("Max number of repetitions (", maxUnclusterRepeatNum, ") for the same unclustered data has been reached. Clustering terminated unsuccessfully."))
    invisible(gc())
    break;
  }
  nn_test <- kNN(gowerdist_test, k = nn_k, sort = TRUE)
  # for the TEST points in all_data, find the closest TRAIN points and decide statistically which cluster they could belong to, based on the clusters of the nearest TRAIN points
  test_matrix <- nn_test$id[1:nrow(analdata_test),] # create matrix of test data kNN ids
  numClusts <- nlevels(as.factor(sb_train$cluster))
  NameClusts <- as.character(levels(as.factor(sb_train$cluster)))
  count_clusters <- matrix(0, nrow = nrow(analdata_test), ncol = numClusts + 1) # create a count matrix that counts the number of clusters + NA
  colnames(count_clusters) <- c("NA", NameClusts) # name each column of the count matrix after the cluster numbers
  # get the cluster number of each of the k nearest neighbours of each test point
  for (i in 1:nrow(analdata_test))
    for (j in 1:nn_k)
    {
      test_matrix[i,j] <- all_data[nn_test$id[i,j], "Cluster"]
    }
  # populate the count matrix with the clusters of the neighbours of each test point
  for (i in 1:nrow(analdata_test))
    for (j in 1:nn_k)
    {
      if (!is.na(test_matrix[i,j]))
        count_clusters[i, c(as.character(test_matrix[i,j]))] <- count_clusters[i, c(as.character(test_matrix[i,j]))] + 1
      else
        count_clusters[i, c("NA")] <- count_clusters[i, c("NA")] + 1
    }
  # add NA's (TEST points) to the other clusters for comparison
  count_clusters_withNA <- count_clusters
  for (i in 2:ncol(count_clusters))
  {
    count_clusters_withNA[,i] <- t(rowSums(count_clusters[,c(1,i)]))
  }
  # This block decides the maximum cluster count for each row, considering the number of other test points (NA clusters) in the neighbourhood
  max_col_countclusters <- apply(count_clusters, 1, which.max) # get the column that corresponds to the maximum value of each row
  for (i in 1:length(max_col_countclusters)) # insert the maximum value of each row in its associated column in count_clusters_withNA
    count_clusters_withNA[i, max_col_countclusters[i]] <- count_clusters[i, max_col_countclusters[i]]
  max_col_countclusters_withNA <- apply(count_clusters_withNA, 1, which.max) # get the column that corresponds to the maximum value of each row with NA added
  compareCountClust <- max_col_countclusters_withNA == max_col_countclusters # compare the two count matrices
  all_data$Cluster[1:nrow(analdata_test)] <- ifelse(compareCountClust, NameClusts[max_col_countclusters - 1], all_data$Cluster) # subtract one because of the additional NA column
  iteration_current <- iteration_current + 1
  unclusteredNum <- sum(is.na(all_data$Cluster))
  if (previousUnclustereNum == unclusteredNum) {
    repeatedUnclustNum <- repeatedUnclustNum + 1
  } else {
    repeatedUnclustNum <- 0
    previousUnclustereNum <- unclusteredNum
  }
  print(paste("Iteration: ", iteration_current, " - Number of remaining unclustered:", sum(is.na(all_data$Cluster))))
  if (unclusteredNum == 0)
    print("Cluster labeling successfully completed.")
  invisible(gc())
}
I guess you can use this for any other type of clustering algorithm; it doesn't matter how you decided the cluster labels for the train data, as long as they are in your all_data before running the code.
Hope this helps.
This is not the most efficient or rigorous code, so I am happy to see suggestions on how to improve it.
*Note: I used t-SNE to compare the clustering of the train and test data, and it looks impressively clean, so it seems to be working (a minimal sketch of that check follows).
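For reference, this is roughly the kind of t-SNE check mentioned above. It is only a sketch: it assumes the Rtsne package and the objects defined earlier (gowerdist_test, all_data, analdata_test), and the perplexity value is purely illustrative.
library(Rtsne)
# run t-SNE directly on the precomputed Gower distances
tsne_fit <- Rtsne(as.matrix(gowerdist_test), is_distance = TRUE, perplexity = 30)
plot(tsne_fit$Y,
     col = as.factor(all_data$Cluster),                                    # colour by assigned cluster
     pch = ifelse(seq_len(nrow(all_data)) <= nrow(analdata_test), 17, 1),  # triangles = test points (rows 1..n_test)
     xlab = "t-SNE 1", ylab = "t-SNE 2",
     main = "Train vs. test points coloured by assigned cluster")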
