Creating a subgraph using Cypher projection

I am trying to create a subgraph of my graph using a Cypher projection because I want to use the GDS library. First, I create the subgraph with a Cypher query, which works perfectly fine. Here is the query:
// Filter for only recurrent events
WITH [path=(m:IDHcodel)--(n:Tissue)
WHERE (m.node_category = 'molecular' AND n.event_class = 'Recurrence')
AND NOT EXISTS((m)--(:Tissue{event_class:'Primary'})) | m] AS recur_events
// Obtain the sub-network with 2 or more patients in edges
MATCH p=(m1)-[r:hasIDHcodelPatients]->(m2)
WHERE (m1 IN recur_events AND m2 IN recur_events AND r.total_common_patients >= 2)
WITH COLLECT(p) AS all_paths
WITH [p IN all_paths | nodes(p)] AS path_nodes, [p IN all_paths | relationships(p)] AS path_rels
RETURN apoc.coll.toSet(apoc.coll.flatten(path_nodes)) AS subgraph_nodes, apoc.coll.flatten(path_rels) AS subgraph_rels
So far so good. Now all I am trying to do is a Cypher projection, sending the subgraph nodes and subgraph rels as parameters to the GDS create query, and this gives me a null pointer exception:
// All the above lines, except using WITH instead of RETURN in the last line, i.e.,
...
WITH apoc.coll.toSet(apoc.coll.flatten(path_nodes)) AS subgraph_nodes, apoc.coll.flatten(path_rels) AS subgraph_rels
// Call gds library to create a graph by sending subgraph_nodes and subgraph_rels as parameters
CALL gds.graph.create.cypher(
'example',
'MATCH (n) where n in $sn RETURN id(n) as id',
'MATCH ()-[r]-() where r in $sr RETURN r.start as source , r.end as target',
{parameters: {sn: subgraph_nodes, sr: subgraph_rels} }
) YIELD graphName AS graph, nodeQuery, nodeCount AS nodes, relationshipQuery, relationshipCount AS rels
RETURN graph
What could be wrong? Thanks.

To access the start and end nodes of a relationship, the syntax is slightly different from the one you are using:
WITH apoc.coll.toSet(apoc.coll.flatten(path_nodes)) AS subgraph_nodes, apoc.coll.flatten(path_rels) AS subgraph_rels
// Call gds library to create a graph by sending subgraph_nodes and subgraph_rels as parameters
CALL gds.graph.create.cypher(
'example',
'MATCH (n) where n in $sn RETURN id(n) as id',
'MATCH ()-[r]-() where r in $sr RETURN id(startNode(r)) as source , id(endNode(r)) as target',
{parameters: {sn: subgraph_nodes, sr: subgraph_rels} }
) YIELD graphName AS graph, nodeQuery, nodeCount AS nodes, relationshipQuery, relationshipCount AS rels
RETURN graph
This is what I noticed; hopefully it is the only error.

How is the number of random walks determined in GDS/Neo4j?

I am running the random walk algorithm on my Neo4j graph named 'example', with the minimum allowed walk length (2) and walks per node (1). Namely,
CALL gds.beta.randomWalk.stream(
'example',
{
walkLength: 2,
walksPerNode: 1,
randomSeed: 42,
concurrency: 1
}
)
YIELD nodeIds, path
RETURN nodeIds, [node IN nodes(path) | node.name ] AS event_name
And I get 41 walks. How is this number determined? I checked the graph and it contains 161 nodes and 574 edges. Any insights?
Added later: Here is more info on the projected graph that I am constructing. Basically, I am filtering on nodes and relationships, projecting just that subgraph, and doing nothing else. Here is the code:
// Filter for only IDH Codel recurrent events
WITH [path=(m:IDHcodel)--(n:Tissue)
WHERE (m.node_category = 'molecular' AND n.event_class = 'Recurrence')
AND NOT EXISTS((m)--(:Tissue{event_class:'Primary'})) | m] AS recur_events
// Obtain the sub-network with 2 or more patients in edges
MATCH p=(m1)-[r:hasIDHcodelPatients]-(m2)
WHERE (m1 IN recur_events AND m2 IN recur_events AND r.total_common_patients >= 2)
WITH COLLECT(p) AS all_paths
WITH [p IN all_paths | nodes(p)] AS path_nodes, [p IN all_paths | relationships(p)] AS path_rels
WITH apoc.coll.toSet(apoc.coll.flatten(path_nodes)) AS subgraph_nodes, apoc.coll.flatten(path_rels) AS subgraph_rels
// Form the GDS Cypher projection
CALL gds.graph.create.cypher(
'example',
'MATCH (n) where n in $sn RETURN id(n) as id',
'MATCH ()-[r]-() where r in $sr RETURN id(startNode(r)) as source , id(endNode(r)) as target, { LINKS: { orientation: "UNDIRECTED" } }',
{parameters: {sn: subgraph_nodes, sr: subgraph_rels} }
)
YIELD graphName AS graph, nodeQuery, nodeCount AS nodes, relationshipQuery, relationshipCount AS rels
RETURN graph, nodes, rels
Thanks.
It seems that the documentation is missing the description for the sourceNodes parameter, which would tell you how many walks will be created.
We don't know the default value, but we can use the parameter to set the source nodes that the walks should start from.
For example, you could treat all the nodes in the graph as source nodes (a random walk will start from each of them).
MATCH (n)
WITH collect(n) AS nodes
CALL gds.beta.randomWalk.stream(
'example',
{ sourceNodes:nodes,
walkLength: 2,
walksPerNode: 1,
randomSeed: 42,
concurrency: 1
}
)
YIELD nodeIds, path
RETURN nodeIds, [node IN nodes(path) | node.name ] AS event_name
This way you should get 161 walks as there are 161 nodes in your graph and the walksPerNode is set to 1, so a single random walk will start from every node in the graph. In essence, the number of source nodes times the walks per node will determine the number of random walks.
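In Python terms, that bookkeeping is a single multiplication. A trivial check using the numbers from this question (the values below simply mirror the projected graph above):

# Number of random walks = number of source nodes * walksPerNode.
# 161 source nodes and walksPerNode = 1 mirror the example above.
num_source_nodes = 161
walks_per_node = 1
num_walks = num_source_nodes * walks_per_node
print(num_walks)  # 161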

All path *lengths* from source to target in Directed Acyclic Graph

I have a graph with an adjacency matrix of shape adj_mat.shape = (4000, 4000). My current problem involves finding the list of path lengths (the sequence of nodes is not so important) for all paths that traverse from the source (row = 0) to the target (col = adj_mat.shape[0] - 1).
I am not interested in finding the path sequences; I am only interested in propagating the path lengths. As a result, this is different from finding all simple paths, which would be too slow (i.e. find all paths from source to target, then score each path). Is there a performant way to do this?
DFS is suggested as one possible strategy (noted here). My current implementation (below) is simply not optimal:
import networkx as nx
import numpy as np

# create graph
G = nx.from_numpy_matrix(adj_mat, create_using=nx.DiGraph())
# initialize nodes
for node in G.nodes:
    G.nodes[node]['cprob'] = []
# set starting node value
G.nodes[0]['cprob'] = [0]

def propagate_prob(G, node):
    # find incoming edges to node
    predecessors = list(G.predecessors(node))
    curr_node_arr = []
    for prev_node in predecessors:
        # get incoming edge weight
        edge_weight = G.get_edge_data(prev_node, node)['weight']
        # get predecessor node value (fill it recursively if not yet computed)
        if len(G.nodes[prev_node]['cprob']) == 0:
            G.nodes[prev_node]['cprob'] = propagate_prob(G, prev_node)
        prev_node_arr = G.nodes[prev_node]['cprob']
        # add incoming edge weight to prev_node arr
        curr_node_arr = np.concatenate([curr_node_arr, np.array(edge_weight) + np.array(prev_node_arr)])
    # update current node array
    G.nodes[node]['cprob'] = curr_node_arr
    return G.nodes[node]['cprob']

# calculate all path lengths from source to sink
# (the last node index is adj_mat.shape[0] - 1, i.e. 3999, not 4000)
part_func = propagate_prob(G, adj_mat.shape[0] - 1)
I don't have a large example at hand (e.g. > 300 nodes), but I found a non-recursive solution:
import networkx as nx

g = nx.DiGraph()
nx.add_path(g, range(7))
g.add_edge(0, 3)
g.add_edge(0, 5)
g.add_edge(1, 4)
g.add_edge(3, 6)

# first step: retrieve the topological sorting
sorted_nodes = nx.algorithms.topological_sort(g)
start = 0
target = 6
path_lengths = {start: [0]}
for node in sorted_nodes:
    if node == target:
        print(path_lengths[node])
        break
    if node not in path_lengths or g.out_degree(node) == 0:
        continue
    new_path_length = path_lengths[node]
    new_path_length = [i + 1 for i in new_path_length]
    for successor in g.successors(node):
        if successor in path_lengths:
            path_lengths[successor].extend(new_path_length)
        else:
            path_lengths[successor] = new_path_length.copy()
    if node != target:
        del path_lengths[node]
Output: [2, 4, 2, 4, 4, 6]
If you are only interested in the number of paths of each length, e.g. {2: 2, 4: 3, 6: 1} for the above example, you could even reduce the lists to dicts.
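A minimal sketch of that reduction, assuming the printed list from the example above and nothing but the standard library:

from collections import Counter

# the printed path lengths from the example above
lengths = [2, 4, 2, 4, 4, 6]
# collapse the list into {path_length: count}
print(dict(Counter(lengths)))  # {2: 2, 4: 3, 6: 1}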
Background
Some explanation of what I'm doing (and what I hope works for larger examples as well). The first step is to retrieve the topological sorting. Why? It tells me the "direction" in which the edges flow, so I can simply process the nodes in that order without missing any edge or doing any backtracking, as a recursive variant would. Afterwards, I initialise the start node with a list containing the current path length ([0]). This list is copied to all successors, while updating the path length (all elements +1). The goal is that in each iteration the path length from the starting node to all processed nodes is calculated and stored in the dict path_lengths. The loop stops after reaching the target node.
With igraph I can handle up to ~300 nodes in about 1 second. I also found that accessing the adjacency matrix directly (rather than calling igraph functions to retrieve edges/vertices) saves time. The two key bottlenecks are 1) appending to a long list efficiently (while also keeping memory in check) and 2) finding a way to parallelize. The runtime grows exponentially past ~300 nodes; I would love to see if someone has a faster solution (that also fits into memory).
import igraph
import numpy as np

# create graph from adjacency matrix
G = igraph.Graph.Adjacency((trans_mat_pad > 0).tolist())
# add edge weights
G.es['weight'] = trans_mat_pad[trans_mat_pad.nonzero()]
# initialize nodes
for node in range(trans_mat_pad.shape[0]):
    G.vs[node]['cprob'] = []
# set starting node value
G.vs[0]['cprob'] = [0]

def propagate_prob(G, node, trans_mat_pad):
    # find incoming edges to node
    predecessors = trans_mat_pad[:, node].nonzero()[0]  # G.get_adjlist(mode='IN')[node]
    curr_node_arr = []
    for prev_node in predecessors:
        # get incoming edge weight
        edge_weight = trans_mat_pad[prev_node, node]  # G.es[prev_node]['weight']
        # get predecessor node value (recurse if not yet computed)
        if len(G.vs[prev_node]['cprob']) == 0:
            curr_node_arr = np.concatenate([curr_node_arr, np.array(edge_weight) + propagate_prob(G, prev_node, trans_mat_pad)])
        else:
            curr_node_arr = np.concatenate([curr_node_arr, np.array(edge_weight) + np.array(G.vs[prev_node]['cprob'])])
    ## NB: if memory is a constraint, uncomment below to cap the size
    # if len(curr_node_arr) > 100:
    #     curr_node_arr = np.sort(curr_node_arr)[:100]
    # update current node array
    G.vs[node]['cprob'] = curr_node_arr
    return G.vs[node]['cprob']

# calculate path lengths
path_len = propagate_prob(G, trans_mat_pad.shape[0] - 1, trans_mat_pad)

Best way to count downstream with edge data

I have a NetworkX problem. I create a digraph from a pandas DataFrame and set data along the edges. I now need to count the number of unique sources for a node's descendants, accessing the edge attribute.
This is my code; it works for one node, but I need to pass many nodes to it and get unique counts.
graph = nx.from_pandas_edgelist(df, source="source", target="target",
                                edge_attr=["domain", "category"], create_using=nx.DiGraph)
downstream_nodes = list(nx.descendants(graph, node))
downstream_nodes.append(node)
subgraph = graph.subgraph(downstream_nodes).copy()
domain_sources = {}
for s, t, v in subgraph.edges(data=True):
    if v["domain"] in domain_sources:
        domain_sources[v["domain"]].append(s)
    else:
        domain_sources[v["domain"]] = [s]
down_count = {}
for k, v in domain_sources.items():
    down_count[k] = len(list(set(v)))
It works, but again: for one node the time is not a big deal, yet I'm feeding this routine at least 40 to 50 nodes. Is this the best way? Is there something else I can do that groups by an edge attribute and counts the nodes uniquely?
Two possible enhancements:
Remove copy() from the line creating the subgraph. You are not changing anything, so the copy is redundant.
Use a defaultdict with set as the default factory.
from collections import defaultdict
import networkx as nx

# missing part of df creation
graph = nx.from_pandas_edgelist(df, source="source", target="target",
                                edge_attr=["domain", "category"], create_using=nx.DiGraph)
downstream_nodes = list(nx.descendants(graph, node))
downstream_nodes.append(node)
subgraph = graph.subgraph(downstream_nodes)
domain_sources = defaultdict(set)
for s, t, v in subgraph.edges(data=True):
    domain_sources[v["domain"]].add(s)
down_count = {}
for k, v in domain_sources.items():
    down_count[k] = len(v)  # v is already a set, no need to deduplicate again
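To cover the 40 to 50 nodes mentioned in the question, the same logic can be wrapped in a function and called per node. A minimal sketch; the helper name downstream_domain_counts and the toy DataFrame are made up for illustration:

from collections import defaultdict
import networkx as nx
import pandas as pd

def downstream_domain_counts(graph, node):
    # count unique source nodes per edge 'domain' among the
    # descendants of `node` (plus the node itself)
    nodes = list(nx.descendants(graph, node))
    nodes.append(node)
    domain_sources = defaultdict(set)
    for s, t, v in graph.subgraph(nodes).edges(data=True):
        domain_sources[v["domain"]].add(s)
    return {k: len(v) for k, v in domain_sources.items()}

# toy data; the real DataFrame comes from the question's context
df = pd.DataFrame({
    "source": ["a", "a", "b"],
    "target": ["b", "c", "c"],
    "domain": ["x", "x", "y"],
    "category": ["c1", "c1", "c2"],
})
graph = nx.from_pandas_edgelist(df, source="source", target="target",
                                edge_attr=["domain", "category"],
                                create_using=nx.DiGraph)
print({n: downstream_domain_counts(graph, n) for n in ["a", "b"]})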

Neo4j: match with multiple relations in a timely manner

Consider the following nodes, which are connected to each other with 2 types of edges: direct and intersect. The query needs to discover all possible paths between 2 nodes that satisfy all of the following rules:
0..N direct edges
0..1 intersect edge
intersect edge can be between direct edges
These paths are considered valid between nodeA and nodeZ:
(nodeA)-[:direct]->(nodeB)-[:direct]->(nodeC)-[:direct]->(nodeZ)
(nodeA)-[:intersect]->(nodeB)-[:direct]->(nodeC)-[:direct]->(nodeZ)
(nodeA)-[:direct]->(nodeB)-[:intersect]->(nodeC)-[:direct]->(nodeZ)
(nodeA)-[:direct]->(nodeB)-[:direct]->(nodeC)-[:intersect]->(nodeZ)
Basically, an intersect edge can appear anywhere in the path, but only once.
My ideal Cypher query, in a non-existent Neo4j version, would be this:
MATCH (from)-[:direct*0..N|:intersect*0..1]->(to)
But Neo4j doesn't support multiple constraints on edge types :(.
UPDATE 23.04.16
There are 6,609 nodes (out of 550k total), 5,184 edges of type direct (out of 440k total) and 34,119 of type intersect (out of 37,289 total). Some circular references are expected (which Neo4j avoids, doesn't it?).
The query that looked promising but failed to finish in a matter of seconds:
MATCH p = (from {from: 1})-[:direct|intersect*0..]->(to {to: 99})
WHERE
123 < from.departureTS < 123 + 86400 //next day
AND REDUCE(s = 0, x IN RELATIONSHIPS(p) | CASE TYPE(x) WHEN 'intersect' THEN s + 1 ELSE s END) <= 1
return p;
Here is a query that conforms to the stated requirements:
MATCH p = (from)-[:direct|intersect*0..]->(to)
WHERE REDUCE(s = 0, x IN RELATIONSHIPS(p) |
CASE WHEN TYPE(x) = 'intersect' THEN s + 1 ELSE s END) <= 1
return p;
It returns all paths with 0 or more direct relationships and 0 or 1 intersect relationships.
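That REDUCE clause is just a fold that counts the relationships of type intersect along a path. A rough Python analogue of the predicate, purely illustrative (the type lists are made up):

# keep a path only if at most one of its relationship types is 'intersect'
def path_is_valid(relationship_types):
    return sum(1 for t in relationship_types if t == "intersect") <= 1

print(path_is_valid(["direct", "intersect", "direct"]))    # True
print(path_is_valid(["intersect", "direct", "intersect"])) # False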
This will do what you want:
// Cybersam's correction:
MATCH p = ((from)-[:direct*0..]->(middle)-[:intersect*0..1]->(middle2)-[:direct*0..]->(to))
return DISTINCT p;
Here's the test scenario I used:
create (a:nodeA {name: "A"})
create (b:nodeB {name: "B"})
create (c:nodeC {name: "C"})
create (z:nodeZ {name: "Z"})
merge (a)-[:direct {name: "D11"}]->(b)-[:direct {name: "D21"}]->(c)-[:direct {name: "D31"}]->(z)
merge (a)-[:intersect {name: "I12"}]->(b)-[:direct {name: "D22"}]->(c)-[:direct {name: "D32"}]->(z)
merge (a)-[:direct {name: "D13"}]->(b)-[:intersect {name: "I23"}]->(c)-[:direct {name: "D33"}]->(z)
merge (a)-[:direct {name: "D14"}]->(b)-[:direct {name: "D24"}]->(c)-[:intersect {name: "I34"}]->(z)
merge (a)-[:intersect {name: "I15"}]->(z)
// Cybersam's correction:
MATCH p = ((from)-[:direct*0..]->(middle)-[:intersect*0..1]->(middle2)-[:direct*0..]->(to))
return DISTINCT p;
I made the mistake of thinking the graph in the browser reflected the data returned in "p"; it did not. You have to look at the "rows" part of the report to get all the details.
This query will also return single nodes, which fits the requirements.

How do you find all vertices that have no incoming edges?

Below I'm trying to find all vertices that have no incoming edges, using a filter on the vertices. fullyQualifiedName is a unique index. I noticed some vertices that appeared to have incoming edges, so I added a step below to print them out if they existed. I would have expected no output, since I thought I had filtered these vertices above; however, I'm still seeing incoming edges displayed.
def g = BerkeleyGraphFactory.create()

def vertices = g.V.filter {
    it.inE('depends').count() == 0
}

Set<String> u = []
u.addAll(vertices.collect { v ->
    v.fullyQualifiedName
})

u.each {
    def focusIter = g.V('fullyQualifiedName', it)
    def vertex = focusIter.next()
    // this shouldn't print out anything since these vertices were filtered above
    vertex.inE('depends').each { e ->
        def classRefV = e.outV.next()
        println it + " is used by " + classRefV.name + " " + e.toString()
    }
}
I can't seem to recreate your problem. A rough simplification of your code here seems to show that things work as expected:
gremlin> g = TinkerGraphFactory.createTinkerGraph()
==>tinkergraph[vertices:6 edges:6]
gremlin> ids = g.V.filter{!it.inE('knows').hasNext()}.id.toList()
==>1
==>3
==>5
==>6
gremlin> ids.collect{g.v(it).inE('knows').toList()}
==>[]
==>[]
==>[]
==>[]
Perhaps you can try to convert your code to match the approach I took and see if that helps? I'm not sure what else to say, short of you providing some sample data for your specific case where the problem can be recreated.
