Using the nodes argument for centrality metrics in sna - r

I am trying to calculate centrality metrics for a specific node within a graph using statnet (I can't just use igraph since it doesn't have certain metrics I'd like).
How do I use the nodes argument of these functions to specify this? For example, take prestige
# use the faux.magnolia.high dataset from ergm (1461 vertices and 974 edges)
library("ergm")
data("faux.magnolia.high")
# Try calculating for node 1
sna::prestige(faux.magnolia.high, nodes = 1, gmode = "graph")
[1] 1
# Try calculating for node 2
sna::prestige(faux.magnolia.high, nodes = 2, gmode = "graph")
[1] NA

This looks like a bug in the degree-based variants of prestige. The following workaround works, but the calculation is done on the whole graph and then indexed:
sna::prestige(faux.magnolia.high, gmode = "graph")[2]
See Skye's full response on the statnet mailing list:
http://mailman13.u.washington.edu/pipermail/statnet_help/2016/002175.html
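If you need the metric for several specific nodes, a minimal sketch of the same workaround is to compute once on the full graph and index the result (the node indices here are just for illustration):
p <- sna::prestige(faux.magnolia.high, gmode = "graph")
p[c(1, 2, 10)]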

Related

Creating weighted igraph network using two-column edge list

I'm in the process of creating a weighted igraph network object from an edge list containing two columns, from and to. It has proven to be somewhat challenging for me, because when doing a workaround, I notice changes in the network metrics and I believe I'm doing something wrong.
library(igraph)
links <- read.csv2("edgelist.csv")
vertices <- read.csv2("vertices.csv")
network <- graph_from_data_frame(d=links,vertices = vertices,directed = TRUE)
##the following step is included to remove self-loops that I have used to include all isolate nodes to the network##
network <- simplify(network,remove.multiple = FALSE, remove.loops = TRUE)
In this situation I have successfully created a network object. However, it is not weighted. Therefore I create a second network object by taking the adjacency matrix from the object created earlier and creating the new igraph object from it like this:
gettheweights <- get.adjacency(network)
network2 <- graph_from_adjacency_matrix(gettheweights,mode = "directed",weighted = TRUE)
However, when I then print both objects, I notice a difference in the number of edges. Why is this?
network2
IGRAPH ef31b3a DNW- 200 1092 --
network
IGRAPH 934d444 DN-- 200 3626 --
Additionally, I believe I've done something wrong because if they indeed would be the same network, shouldn't their densities be the same? Now it is not the case:
graph.density(network2)
[1] 0.02743719
graph.density(network)
[1] 0.09110553
I browsed and tried several different answers found here, but many of them were not 1:1 identical to my case and I failed to find a solution.
All seems to be in order. When you re-project a network so that duplicated edges are represented as a weight (the number of edges between a given pair of vertices), the density of your graph should change.
When you test graph.density(network2) and graph.density(network), they should differ if the edge duplicates were indeed reduced to single edges with the weight stored as an edge attribute, as your output for network2 and network suggests.
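You can check the reported densities directly from your output: for a directed graph without loops, density is the edge count divided by n*(n-1).
# Density = edges / (n * (n - 1)) for a directed graph with 200 vertices
1092 / (200 * 199)  # 0.02743719, matches graph.density(network2)
3626 / (200 * 199)  # 0.09110553, matches graph.density(network)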
This (over-) commented code goes through the process.
library(igraph)
# Data that should resemble yours
edges <- data.frame(from = c("A","B","C","D","E","A","A","A","B","C"),
                    to   = c("A","C","D","A","B","B","B","C","B","D"))
vertices <- unique(unlist(edges))
# Building the graph in the same way as you do
g0 <- graph_from_data_frame(d=edges, vertices=vertices, directed = TRUE)
# Note that the graph is "DN--": directed, named, but NOT weighted, since
# instead of weighted edges we have a whole lot of double edges
(g0)
plot(g0)
# We can see the double edges in the adjacency matrix as values > 1
get.adjacency(g0)
# Use simplify to remove LOOPS ONLY, as we can see in the adjacency matrix test
g1 <- simplify(g0, remove.multiple = FALSE, remove.loops = TRUE)
get.adjacency(g1) == get.adjacency(g0)
# Turn the multiple edges into edge-weights by jumping through an adjacency matrix
g2 <- graph_from_adjacency_matrix(get.adjacency(g1), mode = "directed", weighted = TRUE)
# Instead of multiple edges (like many links between "A" and "B"), there are now
# just single edges with weights (hence the density of the network's changed).
graph.density(g1) == graph.density(g2)
# The former double edges are now here:
E(g2)$weight
# And we can see that the g2 is now "Named-Directed-Weighted" where g1 was only
# "Named-Directed" and no weights.
(g1);(g2)
# Let's plot the weights
E(g2)$width = E(g2)$weight*5
plot(g2)
A shortcoming of this/your method, however, is that the adjacency matrix is able to carry only the edge-count between any given vertices. If your edge-list contains more variables than i and j, the use of graph_from_data_frame() would normally embed edge-attributes of those variables for you straight from your csv-import (which is nice).
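For instance, a minimal sketch (the column values here are made up) of how an extra column in the edge list travels straight into the graph as an edge attribute:
# A third column in the edge list becomes an edge attribute automatically
edges_w <- data.frame(from = c("A","A","B"), to = c("B","C","C"), weight = c(2, 1, 5))
gw <- graph_from_data_frame(edges_w, directed = TRUE)
E(gw)$weight  # 2 1 5, and the graph summary shows the "W" (weighted) flag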
When you convert the edges into weights, however, you would lose that information. And, come to think of it, that information would have to be "converted" too. What would we do with two edges between the same vertices that have different edge-attributes?
At this point, the answer goes slightly beyond your question, but still stays in the realm of explaining the relation between graphs with multiple edges between the same vertices and their representation as weighted graphs with only one structural edge per vertex.
To convert edge-attributes along this transformation into a weighted graph, I suggest you use dplyr to "rebuild" any edge-attributes manually, so that you keep control of how they are merged down when recasting into a weighted graph.
This picks up where the code above left off:
# Let's imagine that our original network had these two edge-attributes
E(g0)$coolness <- c(1,2,1,2,3,2,3,3,2,2)
E(g0)$hotness <- c(9,8,2,3,4,5,6,7,8,9)
# Plot the hotness
E(g0)$color <- colorRampPalette(c("green", "red"))(10)[E(g0)$hotness]
plot(g0)
# Note that the hotness between C and D are very different
# When we make your transformations for a weighted network, we lose the coolness
# and hotness information
g2 <- g0 %>% simplify(remove.multiple = FALSE, remove.loops = TRUE) %>%
get.adjacency() %>%
graph_from_adjacency_matrix(mode = "directed", weighted = TRUE)
E(g2)$hotness # Naturally, the edge-attributes were lost!
# We can use dplyr to take control over how we'd like the edge-attributes transferred
# when multiple edges in g0 with different edge attributes are supposed to merge into
# one single edge
library(dplyr)
library(tidyr)  # for unite()
recalculated_edge_attributes <-
  data.frame(name = ends(g0, E(g0)) %>% as.data.frame() %>% unite("name", V1:V2, sep = "->"),
             hotness = E(g0)$hotness) %>%
  group_by(name) %>%
  summarise(mean_hotness = mean(hotness))
# We used a string version of the names of connected vertices (like "A->B") to refer
# to the attributes of each edge. This can now be used to merge back the re-calculated
# edge-attributes onto the weighted graph in g2
g2_attributes <- data.frame(name = ends(g2, E(g2)) %>% as.data.frame() %>% unite("name", V1:V2, sep = "->")) %>%
  left_join(recalculated_edge_attributes, by = "name")
# And manually re-attach our mean attributes onto the g2 network
E(g2)$mean_hotness <- g2_attributes$mean_hotness
E(g2)$color <- colorRampPalette(c("green", "red"))(max(E(g2)$mean_hotness))[E(g2)$mean_hotness]
# Note how the link between A and B has turned into the brown mean of the two previous
# green and red hotness-edges
plot(g2)
Sometimes your analyses may benefit from either structure (weighted without duplicates, or unweighted with duplicates). Algorithms for, for example, shortest paths are able to incorporate edge weights, as described in this answer, but other analyses might not allow for, or be intuitive with, the weighted version of your network data.
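As a minimal sketch of that point (assuming the g2 object built above), shortest-path functions such as distances() can use the edge weights explicitly or ignore them:
# Weighted shortest paths vs. plain hop counts on the weighted graph g2
distances(g2, weights = E(g2)$weight)  # uses the edge weights
distances(g2, weights = NA)            # ignores weights, counts edges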
Let purpose guide your structure.

Setting Size of Detected Communities in R

I'm using igraph community-detection and the community sizes are either too small or too large. Is there any way to specify the size of the detected communities? If not, is there any way for me to manually split or merge communities detected from igraph? Thanks!
Whilst I don't think it's possible to set/specify the size of a community detected by igraph, some of the community detection algorithms allow you to specify how many communities you want (an alternative to splitting/merging).
You can either use the cluster_spinglass() function and set spins to the (maximum) number of communities desired, or use one of the hierarchical methods and then cut_at() to get the desired number of communities, using its no argument to specify how many communities you want.
Example code:
# Set up your graph object
g <- [an igraph object] # set up your graph
# Use spinglass to create a set number of communities
sg <- g %>% cluster_spinglass(spins = 10) # allows up to 10 communities using the spinglass algorithm
# Use hierarchical methods and cut_at to create a set number of communities
walk <- g %>% cluster_walktrap() %>% cut_at(no = 10)
eb <- g %>% cluster_edge_betweenness() %>% cut_at(no = 10)
Note that the spinglass method will give you back a communities object, whereas the cut_at method simply gives you back the community indices for all nodes in the graph (i.e. a simple numeric vector).
You can find more details on the communities help page.
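If you want the cut_at() result back as a communities object (so that helpers like sizes() and membership() work on it), a small sketch, reusing the g and walk objects from the example above:
# Wrap the cut_at() membership vector back into a communities object
wc <- make_clusters(g, membership = walk)
sizes(wc)       # sizes of the 10 communities
membership(wc)  # same indices as walk, now in a communities object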

After clustering in R (iGraph, etc), can you maintain nodes+edges from a cluster to do individual cluster analysis?

Basically I have tried a few different ways of clustering. I can usually get to a point in iGraph where each node is labeled with a cluster. I can then identify all the nodes within a single cluster. However, this loses their edges.
I'd have to re-iterate back over the original dataset for all the nodes in cluster 1 to get only those where both nodes+the edge are within the cluster. I'd have to do this for every cluster.
This seems like a painfully long process and there is probably a shortcut my google-fu is missing.
So, is there an easy way, after clustering or performing community detection, to maintain an individual cluster/community as its own smaller graph -- that is, retaining all nodes AND the edges between them?
You can use delete.vertices() to create a subgraph. Example:
library(igraph)
set.seed(123)
# create random graph
g <- barabasi.game(100, directed = F)
plot(g, layout=layout.fruchterman.reingold)
# do community detection
wc <- multilevel.community(g)
V(g)$community <- membership(wc)
# make community 1 subgraph
g_sub <- delete.vertices(g, V(g)[community != 1])
plot(g_sub, layout=layout.fruchterman.reingold)
An alternative:
#Create random network
d <- sample_gnm(n=50,m=40)
#Identify the communities
dc <- cluster_walktrap(d)
#Induce a subgraph out of the first community
dc_1 <- induced.subgraph(d,dc[[1]])
#plot that specific community
plot(dc_1)
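To avoid repeating that step by hand for every community, a short sketch (reusing d and dc from above) that builds one subgraph per community:
# One induced subgraph per detected community, collected in a list
community_graphs <- lapply(seq_along(dc), function(i) induced_subgraph(d, dc[[i]]))
# e.g. plot the second community on its own
plot(community_graphs[[2]])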

How to spread out community graph made by using igraph package in R

Trying to find communities in tweet data. The cosine similarity between different words forms the adjacency matrix. Then, I created graph out of that adjacency matrix. Visualization of the graph is the task here:
# Packages assumed here: tm for DocumentTermMatrix(), lsa for cosine(), igraph for the graph functions
library(tm)
library(lsa)
library(igraph)
# Document Term Matrix
dtm = DocumentTermMatrix(tweets)
### adjust threshold here
dtms = removeSparseTerms(dtm, 0.998)
dim(dtms)
# cosine similarity matrix
t = as.matrix(dtms)
# comparing two word feature vectors
#cosine(t[,"yesterday"], t[,"yet"])
numWords = dim(t)[2]
# cosine measure between all column vectors of a matrix.
adjMat = cosine(t)
r = 3
for (i in 1:numWords)
{
  highElement = sort(adjMat[i,], partial=numWords-r)[numWords-r]
  adjMat[i,][adjMat[i,] < highElement] = 0
}
# build graph from the adjacency matrix
g = graph.adjacency(adjMat, weighted=TRUE, mode="undirected", diag=FALSE)
V(g)$name
# remove loop and multiple edges
g = simplify(g)
wt = walktrap.community(g, steps=5) # default steps=2
table(membership(wt))
# set vertex color & size
nodecolor = rainbow(length(table(membership(wt))))[as.vector(membership(wt))]
nodesize = as.matrix(round((log2(10*membership(wt)))))
nodelayout = layout.fruchterman.reingold(g,niter=1000,area=vcount(g)^1.1,repulserad=vcount(g)^10.0, weights=NULL)
par(mai=c(0,0,1,0))
plot(g,
     layout = nodelayout,
     vertex.size = nodesize,
     vertex.label = NA,
     vertex.color = nodecolor,
     edge.arrow.size = 0.2,
     edge.color = "grey",
     edge.width = 1)
I just want to have some more gap between separate clusters/communities.
To the best of my knowledge, you can't lay out vertices of the same community close to each other using igraph alone. I have implemented this function in my package NetPathMiner. Since it may be a bit much to install the package just for the visualization function, I will write a simple version of it here and explain what it does.
layout.by.attr <- function(graph, wc, cluster.strength = 1, layout = layout.auto) {
  g <- graph.edgelist(get.edgelist(graph)) # create a lightweight copy of the graph w/o the attributes
  E(g)$weight <- 1
  attr <- cbind(id = 1:vcount(g), val = wc)
  g <- g + vertices(unique(attr[, 2])) + igraph::edges(unlist(t(attr)), weight = cluster.strength)
  l <- layout(g, weights = E(g)$weight)[1:vcount(graph), ]
  return(l)
}
Basically, the function adds one extra vertex per community, connected to all vertices belonging to that community. The layout is calculated on the new graph. Since the members of each community are now connected through a common vertex, they tend to cluster together.
As Gabor said in the comment, increasing edge weights will also have a similar effect. The function leverages this as well: increasing cluster.strength gives higher weights to the edges between the created vertices and their communities.
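Here is a self-contained sketch of that edge-weight idea on a toy graph (all object names below are made up for illustration): within-community edges get a higher weight, so the force-directed layout pulls community members together.
library(igraph)
set.seed(1)
g_demo  <- sample_gnm(n = 60, m = 120)   # toy graph
wc_demo <- cluster_walktrap(g_demo)      # toy communities
# Edges inside a community get a higher weight than community-crossing edges
w <- ifelse(crossing(wc_demo, g_demo), 1, 10)
l <- layout_with_fr(g_demo, weights = w)
plot(g_demo, layout = l, vertex.size = 5, vertex.label = NA,
     vertex.color = membership(wc_demo))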
If this is still not enough, you can extend this principle (calculating the layout on a more connected graph) by adding edges between all vertices of the same community (forming a clique). From my experience, this is a bit of an overkill.

Generating undirected network with pre-specified degree distribution without any self loops

I would like to generate an undirected network with 100 nodes, where half of the nodes have a degree of 10 and the other half has a degree of 3. Is such a network possible to construct without self loops?
Using the code specified below:
library(graph)
degrees=c(rep(3,50),rep(10,50))
names(degrees)=paste("node",seq_along(degrees)) # nodes must be named
x=randomNodeGraph(degrees)
I can obtain such a graph, but there are self-loops included.
Is there any way to get a graph without self loops?
It is easy to do with the graph package from Bioconductor (see here)
#install graph from Bioconductor
source("http://bioconductor.org/biocLite.R")
biocLite("graph")
#load graph and make the specified graph
library(graph)
degrees=c(rep(3,50),rep(10,50))
names(degrees)=paste("node",seq_along(degrees)) # nodes must be named
x=randomNodeGraph(degrees)
#verify graph
edges=edgeMatrix(x)
edgecount=table(as.vector(edges))
table(edgecount)
#edgecount
# 3 10
#50 50
The Erdős–Gallai theorem answers the question of whether such a graph can be constructed: a non-increasing degree sequence d1 >= d2 >= ... >= dn is realizable as a simple graph (no self-loops, no multiple edges) if and only if the degree sum is even and, for every k, the sum of the first k degrees is at most k(k-1) plus the sum of min(di, k) over the remaining degrees.
The check is based on the non-increasing degree sequence of your graph, which in your case is
ds <- c( rep( 10, 50 ), rep(3,50) )
You can calculate the right-hand side of the inequality by
rhs <- (1:100 * 0:99) + c( rev(cumsum(rev(apply( data.frame(ds, 1:100) , 1, min ))))[-1], 0 )
And the left-hand side by
lhs <- cumsum( ds )
Finally:
all( lhs <= rhs )
[1] TRUE
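As a side note beyond the original answers: since the sequence is graphical, igraph can also build a simple realization directly (no self-loops, no multiple edges); a sketch:
library(igraph)
degrees <- c(rep(3, 50), rep(10, 50))
is_graphical(degrees)               # TRUE, same conclusion as the check above
# The "vl" method samples a connected simple graph: no loops, no multi-edges
g <- sample_degseq(degrees, method = "vl")
any(which_loop(g))                  # FALSE
table(degree(g))                    # 3 and 10, fifty vertices each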

Resources