Clustering algorithms for unweighted graphs

I have an unweighted, undirected graph as my network, which is basically a network of proteins. I want to cluster this graph and divide it into disjoint clusters. Can anyone suggest clustering algorithms that I can apply to this biological network, which is an unweighted, undirected graph?

Several graph partitioning algorithms exist; they use different paradigms to tackle the same problem.
The most common is the Louvain method, which optimizes Newman's modularity.
In Python, using NetworkX as the graph library, you can use the community module to partition your graph.
The fastest graph partitioning uses METIS. It is based on hierarchical graph coarsening.
You also have N-cut (normalized cut), originally designed to segment images.
Finally, you can partition graphs using stochastic block-model techniques. A very good Python implementation of Louvain and several block-model techniques can be found in graph-tool.
My favorite is the latter: it is fast (based on the Boost Graph Library), relatively easy to use, and tunable.
EDIT: Note that in graph-tool, what is called Louvain modularity is in fact Newman's algorithm; the docs are here.
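As a concrete starting point, here is a minimal sketch of Louvain partitioning with NetworkX (`louvain_communities` ships with NetworkX 2.8 and later); the toy graph is a made-up stand-in for a protein-interaction network:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy "protein interaction" graph: two triangles joined by one bridge edge
# (hypothetical example data, not real protein interactions).
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),   # dense group 1
    ("D", "E"), ("D", "F"), ("E", "F"),   # dense group 2
    ("C", "D"),                           # bridge between the groups
])

# Louvain returns a list of disjoint node sets (one set per community).
communities = louvain_communities(G, seed=42)
print(communities)
```

On this toy graph, Louvain recovers the two triangles as the two communities.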

Related

Algorithm for efficient identification of bipartite graph

I'm looking for an algorithm to efficiently determine whether a given graph (implemented either as an adjacency matrix or as an adjacency list, whichever makes the algorithm run faster) is bipartite or not.
I'm aware that a slight modification of the BFS algorithm can suit this purpose, but I haven't been able to improve on BFS's time complexity of O(|E|+|V|).
I'd expect to find an algorithm that is a bit faster, taking advantage of the fact that a bipartite graph lacks cycles with an odd number of edges.
Does anyone know such an algorithm, or have any suggestion for addressing this problem?
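For reference, the BFS modification mentioned above is just a 2-coloring: color each newly discovered vertex with the opposite color of its parent, and report failure when an edge joins two same-colored vertices. A sketch (the adjacency-list dict format here is an assumption):

```python
from collections import deque

def is_bipartite(adj):
    """BFS 2-coloring: a graph is bipartite iff it has no odd cycle.
    `adj` maps each node to a list of its neighbors. Runs in O(|V|+|E|)."""
    color = {}
    for start in adj:                 # loop handles disconnected graphs
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False      # same color on both ends: odd cycle
    return True

# A 4-cycle is bipartite; a triangle (odd cycle) is not.
print(is_bipartite({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))  # True
print(is_bipartite({0: [1, 2], 1: [0, 2], 2: [0, 1]}))             # False
```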

Difference between Graph Neural Networks and GraphSage

What is the difference between the basic Graph Convolutional Neural Networks and GraphSage?
Which of the methods is more suited to unsupervised learning, and in that case how is the loss function defined?
Please share the base papers for both methods.
Graph Convolutional Networks are inherently transductive, i.e. they can only generate embeddings for the nodes present in the fixed graph seen during training.
This implies that if the graph later evolves and new nodes (unseen during training) make their way into it, we need to retrain the model on the whole graph in order to compute embeddings for the new nodes. This limitation makes transductive approaches a poor fit for ever-evolving graphs (like social networks, protein-protein interaction networks, etc.) because of their inability to generalize to unseen nodes.
GraphSAGE, on the other hand, exploits both the rich node features and the topological structure of each node's neighborhood to generate representations for new nodes efficiently, without retraining. In addition, GraphSAGE performs neighborhood sampling, which gives the algorithm its unique ability to scale up to graphs with billions of nodes.
For more detail, see this blog post: https://sachinsharma9780.medium.com/a-comprehensive-case-study-of-graphsage-algorithm-with-hands-on-experience-using-pytorchgeometric-6fc631ab1067
GCN Paper
GraphSage
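To make the inductive idea concrete, here is a rough numpy sketch of a single GraphSAGE layer with a mean aggregator (all features, weights, and the neighbor map below are made up for illustration). Because the layer only needs a node's own features and its neighbors' features, it can embed a node that was never seen during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: 4 nodes with 3-dimensional features,
# plus an adjacency list of each node's neighbors.
features = rng.normal(size=(4, 3))
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

# One GraphSAGE layer, mean aggregator (sketch):
#   h_v' = ReLU(W @ [h_v ; mean of h_u for u in N(v)])
W = rng.normal(size=(2, 6))  # output dim 2, input dim 2 * 3 (self + aggregated)

def sage_layer(features, neighbors, W):
    out = []
    for v in range(len(features)):
        agg = features[neighbors[v]].mean(axis=0)      # aggregate neighborhood
        h = np.concatenate([features[v], agg])         # concat self + neighborhood
        out.append(np.maximum(W @ h, 0.0))             # linear map + ReLU
    return np.stack(out)

embeddings = sage_layer(features, neighbors, W)
print(embeddings.shape)  # (4, 2)
```

A real implementation (e.g. in PyTorch Geometric) also samples a fixed number of neighbors per node instead of using them all, which is what makes the method scale.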

Network Detection using Spectral Clustering, are there any good function in R?

I currently have an adjacency matrix that I would like to perform spectral clustering on to determine the community each node belongs to. I have looked around, but there do not appear to be implementations in igraph or other packages.
Another issue is determining how many clusters you want. Does R have any packages that might help find the optimal number of clusters to break an adjacency matrix into? Thanks.
I cannot advise for R; however, I can suggest this example implementation of spectral clustering using Python and NetworkX (which is comparable to igraph). It should not be hard to translate it into R.
For an introduction to Spectral Clustering see lectures 28-34 here and this paper.
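In case it helps, here is a minimal spectral-clustering sketch using only numpy (spectral bisection via the Fiedler vector of the graph Laplacian); the adjacency matrix is a hypothetical 6-node example, and translating this to R's `eigen()` is straightforward:

```python
import numpy as np

# Adjacency matrix of a 6-node graph: two triangles joined by one edge
# (made-up data standing in for the asker's matrix).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))
L = D - A                          # unnormalized graph Laplacian

# Eigenvectors of L, ascending eigenvalues. The second-smallest
# ("Fiedler") eigenvector splits the graph into two communities by sign.
# The eigengap (jump in eigenvalue size) is a common heuristic for
# picking the number of clusters.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)
print(labels)
```

For more than two clusters, the usual recipe is to take the first k eigenvectors as a k-dimensional embedding of the nodes and run k-means on it.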

Evaluation metric for community detection using igraph in R?

I am running community detection on graphs, using the different community detection algorithms implemented in igraph listed here:
1. edge.betweenness.community (w, -d)
2. walktrap.community (w, -d)
3. fastgreedy.community (w)
4. spinglass.community (w, d; not for unconnected graphs)
5. infomap.community (w, d)
6. label.propagation.community (w)
7. multilevel.community (w)
8. leading.eigenvector.community (w)
As I have two types of graph, one directed and weighted and the other undirected and unweighted, the ones I could use for both are four (1, 2, 4, 5). I get an error on the fourth one because my graph is unconnected, so three remain.
Now I want to compare them using the different evaluation metrics provided here: http://lab41.github.io/Circulo/ . As far as I have found, igraph provides modularity and compare.communities (the metrics listed here: http://www.inside-r.org/packages/cran/igraph/docs/compare.communities are "vi", "nmi", "split.join", "rand", and "adjusted.rand").
What I am wondering is:
Is there any other algorithm implemented in igraph that is not in this list, and which will also give me overlapping communities?
Which of these metrics can be used for weighted and directed graphs, and is there an implementation in igraph?
Also, which metric can be used with which algorithm? In one of the articles on edge betweenness, the metric used was the ground truth: they compared against the known community structure of the graph.
Thank you in advance.
Yes, there are many algorithms that are not in the igraph package; to name one: RG+, presented in "Cluster Cores and Modularity Maximization" (2010).
Modularity is by far the best measure to evaluate communities.
edge.betweenness simply gives you the betweenness centrality values of all the edges; it is not itself a measure for evaluating communities, but it can be used to build one.
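As an illustration of modularity as an evaluation metric (shown here in Python with NetworkX rather than igraph, purely as a sketch): a partition that matches the graph's dense groups scores high, while the trivial one-community partition scores zero.

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Two 4-cliques joined by a single edge (a toy stand-in for real data).
G = nx.barbell_graph(4, 0)

good = [set(range(4)), set(range(4, 8))]   # the two cliques
trivial = [set(G.nodes)]                   # everything in one community

print(modularity(G, good))     # high (about 0.42)
print(modularity(G, trivial))  # 0.0
```

The same comparison in R is `modularity(g, membership)` on the result of any `*.community` function.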

JUNG graphs: vertex similarity?

I have a JUNG graph containing about 10K vertices and 100K edges, and I'd like to get a measure of similarity between any pair of vertices.
The vertices represent concepts (e.g. dog, house, etc), and the links represent relations between concepts (e.g. related, is_a, is_part_of, etc).
The vertices are densely inter-linked, so a shortest-path approach doesn't give good results (the shortest paths are always very short).
What approaches would you recommend to rank the connectivity between vertices?
JUNG has some algorithms to score the importance of vertices, but I can't tell whether there are measures of similarity between two vertices.
SimPack also seems promising.
Any hints?
The centrality scores don't measure the similarity of pairs of vertices, but rather (depending on the method) some kind of centrality of individual nodes of the network. Therefore this approach is probably not what you want.
SimPack indeed has a nice goal set out, but for graphs it implements isomorphism-based comparisons, which compare multiple graphs for similarity rather than pairs of nodes within one given graph. So it is out of scope for now.
What you are seeking are so-called graph clustering methods (also called network module determination or network community determination methods), which divide the graph (network) into multiple partitions so that the nodes in each partition are more strongly interconnected with each other than with nodes of other partitions.
The most classic method is perhaps the betweenness centrality clustering of Newman & Girvan, where you can exploit the dendrogram for similarity calculation; it is available in JUNG. Of course, there are throngs of methods nowadays. You may want to try (shameless plug) our ModuLand method, or read the fine table of module detection algorithms at the end of its Electronic Supplementary Material. ModuLand is an overlapping graph clustering method family, i.e. its result for each node is a vector containing the strengths of that node's belonging to each cluster of the network. Pairwise node similarity is easy to derive from pairs of these node-to-cluster vectors.
Graph clustering is non-trivial, and you may well need to adapt a method to get precise domain-specific results, but that's left to the reader ;) Good luck!
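The last step, deriving pairwise node similarity from node-to-cluster membership vectors, can be as simple as a cosine similarity (the membership matrix below is a made-up illustration, not actual ModuLand output; the same arithmetic is trivial to port to Java):

```python
import numpy as np

# Hypothetical node-to-cluster membership strengths for 3 nodes over
# 2 overlapping clusters (rows: nodes, columns: clusters).
membership = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.1, 0.9],
])

def cosine_sim(a, b):
    """Cosine of the angle between two membership vectors, in [0, 1] here."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_sim(membership[0], membership[1]))  # high: mostly the same cluster
print(cosine_sim(membership[0], membership[2]))  # low: different clusters
```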
