How can I optimize this query in Neo4j?

I have a unidirectional graph.
The structure is as follows:
There are about 20,000 nodes in the graph.
I run the simplest query: MATCH (b1)-[:NEXT_BAR*10]->(b2) RETURN b1.id, b2.id LIMIT 5
The query is processed quickly.
But if I increase the number of relationships in the pattern, the query takes much longer to process. In other words, the speed depends on the number of relationships.
This query takes longer than 5 minutes to complete: MATCH (b1)-[:NEXT_BAR*10000]->(b2) RETURN b1.id, b2.id LIMIT 5
This is still a simplified version. The query can involve more than two nodes, and the number of relationships can still be a range.
How can I optimize a query with a large number of relationships?
Perhaps there are other graph DBMSs where this problem does not exist?

Variable-length relationship queries have exponential time and memory complexity.
If R is the average number of suitable relationships per node, and D is the depth of the search, then the complexity is O(R^D). This complexity will exist in any DBMS.
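To see how quickly this blows up, here is a minimal sketch (plain Python; the values of R and D are made up for illustration):

# O(R^D): the number of candidate paths explodes with depth.
for R in (2, 3):
    for D in (10, 20, 30):
        print("R=%d, D=%d: about %s candidate paths" % (R, D, format(R ** D, ",")))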

The theory is simple here, but there are a couple of intricacies in the query execution.
-[:NEXT_BAR*10000]-> matches paths that are exactly 10000 edges long, so the query engine spends time finding precisely those paths. Another thing to mention is that in (b1)-[...]->(b2), b1 and b2 are not bound to specific nodes, which means the query engine has to scan all nodes as potential starting points. With a LIMIT, the scan can stop early once enough results are found, but the overall execution still depends on the efficiency of the variable-length path implementation.
Some of the following might help:
Is it feasible to start from a specific node?
If there are branches, the only hope is aggressive filtering because of exponential complexity (as cybersam well explained).
Use a smaller number in the variable-length expansion, or a range, e.g., [:NEXT_BAR*..10000]. In this case, the query engine will match any path up to 10000 edges long (different semantics, but maybe applicable).
* implies a DFS type of execution. On the other hand, BFS might be the right approach. Memgraph (DISCLAIMER: I'm the co-founder and CTO) also supports a BFS type of execution with a filtering lambda.
Here is a Python script I've used to generate and import data into Memgraph. By using small nodes_no you can quickly notice the execution patterns.
import mgclient

# Make a connection to the database.
connection = mgclient.connect(
    host='127.0.0.1',
    port=7687,
    sslmode=mgclient.MG_SSLMODE_REQUIRE)
connection.autocommit = True
cursor = connection.cursor()

# Clean and setup database instance.
cursor.execute("""MATCH (n) DETACH DELETE n;""")
cursor.execute("""CREATE INDEX ON :Node(id);""")

# Import dataset.
nodes_no = 10

# Create nodes.
for identifier in range(0, nodes_no):
    cursor.execute("""CREATE (:Node {id: "%s"});""" % identifier)

# Create edges.
for identifier in range(1, nodes_no):
    cursor.execute("""
        MATCH (start_node:Node {id: "%s"})
        MATCH (end_node:Node {id: "%s"})
        CREATE (start_node)-[:NEXT_BAR]->(end_node);
    """ % (identifier - 1, identifier))

Related

Efficient controlled random walks in gremlin

Goal
The objective is to efficiently generate random walks on a relatively large graph with uneven probabilities of going through edges depending on their type.
Configuration
Ubuntu VM, 23 GB RAM
JanusGraph 0.6.1 full
Local graph (default conf/remote.yaml file used)
~1.8m vertices (~28k will be start nodes for the random walks)
~21m relationships (they can all be used in the random walks)
What I am doing
I am currently generating random walks with the sample command:
g.V(<startnode_id>).
  repeat( local( both().sample(1) ) ).
  times(<desired_randomwalk_length>).
  path()
What I tried
I tried using a gremlinpython script to create a random walk generator that first gets all edges connected to the current node, then randomly picks an edge to follow, repeating this <desired_randomwalk_length> times.
from random import randint
from typing import List
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.structure.graph import Vertex

connection = DriverRemoteConnection(<URL>, "g")
g = traversal().withRemote(connection)

def get_next_node(start: Vertex) -> Vertex:
    # Fetch all neighbours of the current vertex, then pick one uniformly.
    next_vertices = g.V(start.id).both().fold().next()
    return next_vertices[randint(0, len(next_vertices) - 1)]

def get_random_walk(start: Vertex, length: int = 10) -> List[Vertex]:
    current_node = start
    random_walk = [current_node]
    for _ in range(length):
        current_node = get_next_node(current_node)
        random_walk.append(current_node)
    return random_walk
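A minimal usage sketch (the start vertex id is a placeholder, as above):

# Fetch a start vertex by id, then walk 10 steps from it.
start = g.V(<startnode_id>).next()
walk = get_random_walk(start, length=10)
print([vertex.id for vertex in walk])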
Issues
While testing on a subset of the total graph (400k vertices, 1.5m relationships), I got these results:
Sample query, <desired_randomwalk_length> of 10: 100k random walks in 1h10
Gremlinpython function, <desired_randomwalk_length> of 4: 2k random walks in 1h+
The sample command is really fast, but there are a few problems:
It doesn't seem to truly be a uniform pick amongst the edges (it looks like successive coin tosses), which could lead to certain paths being taken more often, and that diminishes the value of generating random walks. (I can't directly do what is recommended here, as the node ids aren't in a sequence, so I have to acquire them first.)
I haven't found a way to give different probabilities to different types of relationships.
Is there a better way to do random walks with Gremlin?
If there is none, is there a way to modify the sample query to assign probabilities to types of edges? Maybe even a way to get a better distribution in the sampling?
As a last resort, is there a way to improve the queries so I can do this "by hand" with a gremlinpython script?
Thanks to everyone reading/replying!
EDIT
Is there a way to do the following:
Given r_type1, r_type2, r_type3, ... the acceptable relationship types for this random walk
Given proba1, proba2, proba3, ... the probabilities of going through these relationship types
For each step
Sample a node for each relationship type r_type1, r_type2, r_type3, ...
Keep only one according to the probabilities proba1, proba2, proba3, ...
I think the second step could be done by sampling multiple nodes for each relationship type, in accordance with the probabilities (which could be done by using a gremlinpython script to build the query). This still leaves the question of how to sample over multiple relationship types from a single node, and how to randomly pick one of the sampled nodes.
I hope this is clear!
Thanks to Kelvin Lawrence's Practical Gremlin (especially the union section), I managed to do what I wanted (or close enough).
The Gremlin query I get is the following:
g.V(<vertex_id>).
  repeat(
    local(
      union(
        both(<relationship_type1>).sample(N1),
        both(<relationship_type2>).sample(N2),
        ...
      ).
      sample(1)
    )
  ).times(<walk_length>).
  path()
The N_ values are set independently of the node, such that the least probable transition yields exactly 1 sample. This also means that the probabilities are not exactly respected wherever the number of relationships of a given type is smaller than the corresponding N_ value.
The union part is built in python using gremlinpython (nb_samples is the dictionary storing the number of samples needed for each relationship type)
from gremlin_python.process.graph_traversal import __, GraphTraversal

next_node_traversal: GraphTraversal = __.union(
    *[
        __.both(key).sample(nb_samples[key])
        for key in nb_samples
    ]
).sample(1)
(Here we are using the * operator to unpack the list when passing it as an argument to the union method.)
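To tie it together, the fragment can be dropped into the repeat step of the walk. A minimal sketch, where vertex_id, walk_length, and the contents of nb_samples are assumptions:

# Full random-walk traversal built around next_node_traversal.
walk = (
    g.V(vertex_id)
     .repeat(__.local(next_node_traversal))
     .times(walk_length)
     .path()
     .next()
)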

Generate non-repeating random number using timestamp in MarkLogic (XQuery)?

I want to generate a non-repeating random number that has a timestamp in it. What could be the possible code for it?
I've tried using the sem:uuid-string() function, but it generates a 36-character string, which is very long.
I'd suggest taking a look at the ml-unique library. It provides 3 different methods for generating unique ids in MarkLogic, and explains the pros and cons of each. Maybe one of those fits your needs, or you can copy the code and adapt it as needed.
Note that a timestamp alone is not enough to guarantee uniqueness, particularly if generating multiple ids in one request, or when processing data in parallel.
The length of the uuid string is what makes the chance of collisions very small, by the way.
HTH!
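To illustrate why a timestamp needs extra entropy, here is a minimal sketch (plain Python rather than XQuery; the id format is an assumption, not what ml-unique does):

import time
from random import getrandbits

def timestamped_id() -> str:
    # Millisecond timestamp plus 32 random bits; the timestamp alone
    # would collide for ids generated within the same millisecond.
    return "%d-%08x" % (time.time_ns() // 1_000_000, getrandbits(32))

print(timestamped_id())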
It is not possible to generate non-repeating random numbers whose results fit into a finite size. If 36 bytes is too large, that further limits the theoretical maximum. The server itself uses 64-bit random numbers (effectively xdmp:random) for unique IDs. Attempting to do better with respect to collision probability is futile: no matter what URI you use, or how long it is, internally references are created as a 64-bit random number or as a hash value. The recommended methods will not produce a colliding URI with lower probability than the server itself will, given non-colliding URIs of any size. Most likely, attempts at more complex 'random' URI generation will produce much worse results, due to the subtleties of pseudo-random number algorithms.
The code below generates (with arbitrarily high probability) 10 different random numbers. Every iteration of the for loop inserts a newly generated random number into the MarkLogic database. The exception error((), 'BREAK') is thrown once 10 different numbers have been generated.
xquery version "1.0-ml";
xdmp:document-insert("/doc/random.xml", <root><a>{xdmp:random(100)}</a></root>);
try {
  for $i in (1 to 200) (: 200 can be replaced with a larger number to reduce the probability that 10 different random numbers are never selected :)
  return xdmp:invoke-function(
    function() as item()? {
      let $myrandom := xdmp:random(100),
          $last := count(doc("/doc/random.xml")/root/*)
      return
        if ($last lt 10) then (
          if (doc("/doc/random.xml")/root/a/text() = $myrandom)
          then ()
          else (xdmp:node-insert-after(doc("/doc/random.xml")/root/a[last()], <a>{$myrandom}</a>))
        )
        else (if ($last eq 10) then (error((), 'BREAK')) else ())
    },
    <options xmlns="xdmp:eval">
      <transaction-mode>update</transaction-mode>
      <transaction-mode>update-auto-commit</transaction-mode>
    </options>)
}
catch ($ex) {
  if ($ex/error:code eq 'BREAK')
  then ("10 different random numbers were generated")
  else xdmp:rethrow()
};

How could I calculate the number of recursions that a recursive rule does?

I am dealing with a problem: I want to calculate how many recursions a recursive rule in my code performs.
My program examines whether an object is a component of a computer's hardware or not (through the component(X,Y) predicate). E.g. component(computer,motherboard) -> true.
It even examines the case where an object is not directly a component but a subcomponent of another component. E.g. subcomponent(computer,ram) -> true (as ram is a component of motherboard, and motherboard is a component of computer).
Because my code is over 400 lines, I will present just some predicates of the form component(X,Y) and the rule subcomponent(X,Y).
So, some predicates are below:
component(computer,case).
component(computer,power_supply).
component(computer,motherboard).
component(computer,storage_devices).
component(computer,expansion_cards).
component(case,buttons).
component(case,fans).
component(case,ribbon_cables).
component(case,cables).
component(motherboard,cpu).
component(motherboard,chipset).
component(motherboard,ram).
component(motherboard,rom).
component(motherboard,heat_sink).
component(cpu,chip_carrier).
component(cpu,signal_pins).
component(cpu,control_pins).
component(cpu,voltage_pins).
component(cpu,capacitors).
component(cpu,resistors).
and so on....
My rule is:
subcomponent(X,Z):- component(X,Z).
subcomponent(X,Z):- component(X,Y),subcomponent(Y,Z).
Well, in order to calculate the number of components between a given component X and a given component Y, that is, the number of recursions that the recursive rule subcomponent(X,Y) performs, I have made some attempts, all of which failed. I present them below:
i)
number_of_components(X,Y,N,T):- T is N+1, subcomponent(X,Y).
number_of_components(X,Y,N,T):- M is N+1, subcomponent(X,Z), number_of_components(Z,Y,M,T).
In this case I get this error: "ERROR: is/2: Arguments are not sufficiently instantiated".
ii)
number_of_components(X,Y):- bagof(Y,subcomponent(X,Y),L),
                            length(L,N),
                            write(N).
In this case I get as a result either 1 or 11 and after this number true and that's all. No logic at all!
iii)
count_elems_acc([], Total, Total).
count_elems_acc([Head|Tail], Sum, Total) :-
    Count is Sum + 1,
    count_elems_acc(Tail, Count, Total).

number_of_components(X,Y):- bagof(Y,subcomponent(X,Y),L),
                            count_elems_acc(L,0,Total),
                            write(Total).
In this case I get numbers as results which are not right according to my knowledge base (or I am misreading them, because this approach does seem to have some logic).
So, what am I doing wrong and what should I write instead?
I am looking forward to reading your answers!
One thing you could do is iterative deepening with call_with_depth_limit/3. You call your predicate (in this case, subcomponent/2) and increase the limit until you get a result; when you do, the limit is the deepest recursion level used. See the documentation for details.
However, there is something easier you can do. Your database can be represented as an unweighted, directed, acyclic graph. So, stick your whole database in a directed graph, as implemented in library(ugraphs), and find its transitive closure. In the transitive closure, the neighbours of a component are all its subcomponents. Done!
To make the graph:
findall(C-S, component(C, S), Es),
vertices_edges_to_ugraph([], Es, Graph)
To find the transitive closure:
transitive_closure(Graph, Closure)
And to find subcomponents:
neighbours(Component, Closure, Subcomponents)
The Subcomponents will be a list, and you can just get its length with length/2.
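For illustration, here is the same reachability idea as a short Python sketch (the component facts are a hand-copied subset from above; this merely stands in for what library(ugraphs) does):

# Adjacency built from a few component/2 facts, then a depth-first
# traversal computing all reachable (sub)components.
component = {
    "computer": ["case", "power_supply", "motherboard"],
    "motherboard": ["cpu", "chipset", "ram"],
    "cpu": ["chip_carrier", "signal_pins"],
}

def subcomponents(part):
    seen, stack = set(), list(component.get(part, []))
    while stack:
        child = stack.pop()
        if child not in seen:
            seen.add(child)
            stack.extend(component.get(child, []))
    return seen

print(len(subcomponents("computer")))  # 8 for this subset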
EDIT
Some random thoughts: in your case, your database seems to describe a graph that is by definition both directed and acyclic (the component-subcomponent relationship goes strictly one way, right?). This is what makes it unnecessary to define your own walk through the graph, as for example nicely demonstrated in this great question and answers. So, you don't need to define your own recursive subcomponent predicate, etc.
One great thing about representing the database as a term when working with it, instead of keeping it as a flat table, is that it becomes trivial to write predicates that manipulate it: you get Prolog's backtracking for free. And since the S-representation of a graph that library(ugraphs) uses is well-suited for Prolog, you will most probably end up with a more efficient program, too.
The number of calls of a predicate can be a difficult concept. I would say, use the tools that your system makes available.
?- profile(number_of_components(computer,X)).
20
=====================================================================
Total time: 0.00 seconds
=====================================================================
Predicate                      Box Entries =   Calls+Redos    Time
=====================================================================
$new_findall_bag/0                       1 =       1+0        0.0%
$add_findall_bag/1                      20 =      20+0        0.0%
$free_variable_set/3                     1 =       1+0        0.0%
...
so:count_elems_acc/3                     1 =       1+0        0.0%
so:subcomponent/2                       22 =       1+21       0.0%
so:component/2                          74 =      42+32       0.0%
so:number_of_components/2                2 =       1+1        0.0%
On the other hand, what is of utmost importance is the relation among clause variables. This is the essence of Prolog. So, try to read - let's say, in plain English - your rules.
i) number_of_components(X,Y,N,T): what relation do N and T have to X? I cannot say. So
?- leash(-all),trace.
?- number_of_components(computer,Y,N,T).
Call: (7) so:number_of_components(computer, _G1931, _G1932, _G1933)
Call: (8) _G1933 is _G1932+1
ERROR: is/2: Arguments are not sufficiently instantiated
Exception: (8) _G1933 is _G1932+1 ?
ii) number_of_components(X,Y) would make much more sense here if Y were the number_of_components of X. Then,
number_of_components(X,Y):- bagof(S,subcomponent(X,S),L), length(L,Y).
that yields
?- number_of_components(computer,X).
X = 20.
or better
?- aggregate(count, S^subcomponent(computer,S), N).
N = 20.
Note the usage of S. It is 'existentially quantified' in the goal where it appears. That is, allowed to change while proving the goal.
iii) count_elems_acc/3 is, more or less, equivalent to length/2, so the printed outcome seems correct; but again, it's the relation between X and Y that your last clause fails to establish. Printing from clauses should be used only when the purpose is to perform side effects, for instance, debugging.

Traverse Graph With Directed Cycles using Relationship Properties as Filters

I have a Neo4j graph with directed cycles. I have had no issue finding all descendants of A assuming I don't care about loops using this Cypher query:
match (n:TEST{name:"A"})-[r:MOVEMENT*]->(m:TEST)
return n,m,last(r).movement_time
The relationships between my nodes have a timestamp property on them, movement_time. I've simulated that in my test data below using numbers that I've imported as floats. I would like to traverse the graph using the timestamp as a constraint. Only follow relationships that have a greater movement_time than the movement_time of the relationship that brought us to this node.
Here is the CSV sample data:
from,to,movement_time
A,B,0
B,C,1
B,D,1
B,E,1
B,X,2
E,A,3
Z,B,5
C,X,6
X,A,7
D,A,7
Here is what the graph looks like (image omitted):
I would like to calculate the descendants of every node in the graph and include the timestamp from the last relationship using Cypher; so I'd like my output data to look something like this:
Node:[{Descendant,Movement Time},...]
A:[{B,0},{C,1},{D,1},{E,1},{X,2}]
B:[{C,1},{D,1},{E,1},{X,2},{A,7}]
C:[{X,6},{A,7}]
D:[{A,7}]
E:[{A,3}]
X:[{A,7}]
Z:[{B,5}]
This non-Neo4J implementation looks similar to what I'm trying to do: Cycle enumeration of a directed graph with multi edges
This one is not 100% what you want, but very close:
MATCH (n:TEST)-[r:MOVEMENT*]->(m:TEST)
WITH n, m, r,
     [x IN range(0, length(r)-2) | (r[x+1]).movement_time - (r[x]).movement_time] AS deltas
WHERE ALL (x IN deltas WHERE x > 0)
RETURN n, collect(m), collect(last(r).movement_time)
ORDER BY n.name
We basically find all the paths between any of your nodes (beware: cartesian products get very expensive on non-trivial datasets). In the WITH we build a collection deltas that holds the differences between subsequent movement_time properties.
The WHERE applies an ALL predicate to filter out paths having any non-positive delta; in other words, we guarantee increasing values of movement_time along the path.
The RETURN then just assembles the results, though not as a map: one collection for the reachable nodes and one for the last values of movement_time.
The remaining issue is that we have duplicates, since e.g. there are multiple paths from B to A.
As a general note: this problem is solved much more elegantly and efficiently with the Java traversal API (http://neo4j.com/docs/stable/tutorial-traversal.html). There you would use a PathExpander that prunes paths with non-increasing movement_time early on, instead of collecting everything and filtering afterwards (as Cypher does).
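To make the early-pruning idea concrete, here is a small Python sketch over the CSV sample from the question (plain Python, not the Java traversal API; taking the maximum movement_time where several valid paths reach the same node is an assumption):

# Edges from the CSV sample: (from, to, movement_time).
edges = [("A","B",0), ("B","C",1), ("B","D",1), ("B","E",1), ("B","X",2),
         ("E","A",3), ("Z","B",5), ("C","X",6), ("X","A",7), ("D","A",7)]

adj = {}
for src, dst, t in edges:
    adj.setdefault(src, []).append((dst, t))

def descendants(start):
    # Traversal that only follows edges with strictly increasing
    # movement_time, pruning non-increasing branches up front instead
    # of filtering complete paths afterwards.
    results = {}
    frontier = [(start, -1)]  # any first edge is allowed
    while frontier:
        node, last_t = frontier.pop()
        for nxt, t in adj.get(node, []):
            if t > last_t:
                results[nxt] = max(t, results.get(nxt, t))
                frontier.append((nxt, t))
    return results

# Like the Cypher above, cycles with increasing timestamps can lead
# back to the start node itself.
print(descendants("A"))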

How to write this in Cypher

I have around 644 nodes in my graph database (Neo4j). I need to compute the distances between all these 644 nodes and display them visually in the GUI. I want to pre-compute and store the distances between every pair of nodes in the database itself, rather than retrieving the nodes onto the server, finding the distances between them on the fly, and then showing them in the GUI.
I want to understand how to write such a query in Cypher. Please let me know.
I think this can work:
// half cross product
match (a),(b)
where id(a) < id(b)
match p=shortestPath((a)-[*]-(b))
with a,b,length(p) as l
create (a)-[:DISTANCE {distance:l}]->(b)
Set 4950 properties, created 4950 relationships, returned 0 rows in 4328 ms
But the browser visualization will blow up with this, just so you know.
Regarding your distance measure (it won't be that fast but should work):
MATCH (a:User)-[:READ]->(book)<-[:READ]-(b:User)
WITH a, b, count(*) as common,
     length(a-[:READ]->()) as a_read,
     length(b-[:READ]->()) as b_read
// toFloat avoids integer division truncating the ratio to 0
CREATE UNIQUE (a)-[:DISTANCE {distance: toFloat(common)/(a_read+b_read-common)}]-(b)
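The distance expression is a Jaccard-style overlap of the two users' reading lists. A quick worked check (the counts are hypothetical):

# a read 5 books, b read 6 books, 3 of them shared.
common, a_read, b_read = 3, 5, 6
print(common / (a_read + b_read - common))  # 0.375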
