Referencing graph nodes by integer ID

As a bit of a learning project, I am working to replace a somewhat slow program in Perl with a Chapel implementation. I've got the algorithms down, but I'm struggling with the best way to reference the data in Chapel. I can do a direct translation, but it seems likely I'm missing a better way.
Details of existing program:
I have a graph with ~32000 nodes and ~2.1M edges. State is saved in data files, but the program runs as a daemon that keeps the data in memory.
Each node has a numeric ID (assigned by another system) and has a variety of other attributes defined by string, integer, and boolean values.
The edges are directional and have a couple of boolean values attributed to them.
I have an external system that interacts with this daemon that I cannot change. It makes requests, such as "Add node (int) with these attributes", "find shortest path from node (int) to node (int)", or "add edges from node (int) to node(s) (int, int, int)"
In Perl, the program uses hashes with common integer IDs for node and edge attributes. I can certainly replicate this in Chapel with associative arrays.
Is there a better way to bundle this all together? I've been trying to wrap my head around ways to have opaque node and edge types with each item's attributes defined, but I'm struggling with how to reference them by their integer IDs in an easy fashion.
If somebody can provide an ideal way to do the following, it would get me the push I need.
Create two nodes with xx attributes identified by integer ID.
Create an edge between the two with xx attributes
Respond to request "show me the xx attribute of node (int)"
Cheers, and thanks.

As you might expect, there are a number of ways to approach this in Chapel, though given your historical approach and your external system's interface, I think associative domains and arrays are definitely an appropriate way to go. Specifically, your desire to refer to nodes by integer IDs makes associative domains/arrays a natural match.
For Chapel newbies: associative domains are essentially sets of arbitrary values, like the set of integer node IDs in this case. Associative arrays are mappings from the indices of an associative domain to elements (variables) of a given type. Essentially, the domain represents the keys and the array the values in a key-value store or hash table.
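For instance, here's a tiny standalone sketch (the Ages/AgeOf names and values are made up for illustration, not part of the original question):
var Ages: domain(string),    // associative domain: the set of keys (here, strings)
    AgeOf: [Ages] int;       // associative array: maps each key in Ages to an int value
AgeOf["Alice"] = 22;         // assigning to a new index also adds "Alice" to Ages
writeln(AgeOf["Alice"]);     // prints 22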
To represent the nodes and edges themselves, I'm going to take the approach of using Chapel records. Here's my record for a node:
record node {
  var id: int;
  var str: string,
      i: int,
      flag: bool;
  var edges: [1..0] edge;
}
As you can see, it stores its id as an integer, arbitrary attribute fields of various types (a string str, an integer i, and a boolean flag — you can probably come up with better names for your program), and an array of edges which I'll return to in a second. Note that it may or may not be necessary for each node to store its ID... perhaps in any context where you'd have the node, you would already know its ID, in which case storing it could be redundant. Here I stored it just to show you could, not because you must.
Returning to the edges: In your question, it sounded as though edges might have their own integer IDs and get stored in the same pool as the nodes, but here I've taken a different approach: In my experience, given a node, I typically want the set of edges leading out of it, so I have each node store an array of its outgoing edges. Here, I'm using a dense 1D array of edges which is initially empty (1..0 is an empty range in Chapel since 1 > 0). You could also use an associative array of edges if you wanted to give them each a unique ID. Or you could remove the edges from the node data structure altogether and store them globally. Feel free to ask follow-up questions if you'd prefer a different approach.
Here's my record for representing an edge:
record edge {
  var from, to: int,
      flag1, flag2: bool;
}
The first two fields (from and to) indicate the nodes that the edge connects. As with the node ID above, it may be that the from field is redundant / unnecessary, but I've included it here for completeness. The two flag fields are intended to represent the data attributes you'd associate with an edge.
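As a hedged aside, the "store the edges globally" alternative mentioned above might look something like the following sketch (the EdgeIDs/Edges names and the edge ID of 1 are my own inventions for illustration, not part of the approach used in the rest of this answer):
var EdgeIDs: domain(int),                                     // set of integer edge IDs
    Edges: [EdgeIDs] edge;                                    // maps each edge ID to an edge
Edges[1] = new edge(from=1, to=2, flag1=true, flag2=false);   // hypothetical edge with ID 1
In that variant, the edges field could be dropped from the node record; for the rest of this answer, though, I'll stick with per-node edge arrays.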
Next, I'll create my associative domain and array to represent the set of node IDs and the nodes themselves:
var NodeIDs: domain(int),
    Nodes: [NodeIDs] node;
Here, NodeIDs is an associative domain (set) of integer IDs representing the nodes. Nodes is an associative array that maps from those integers to values of type node (the record we defined above).
Now, turning to your three operations:
Create two nodes with xx attributes identified by integer ID.
The following declaration creates a node variable named n1 with some arbitrary attributes using the default record constructor/initializer that Chapel provides for records that don't define their own:
var n1 = new node(id=1, "node 1", 42, flag=true);
I can then insert it into the array of nodes as follows:
Nodes[n1.id] = n1;
This assignment effectively adds n1.id to the NodeIDs domain and copies n1 into the corresponding array element in Nodes. Here's an assignment that creates a second anonymous node and adds it to the set:
Nodes[2] = new node(id=2, "node 2", i=133);
Note that in the code above, I've assumed that you want to choose the IDs for each node explicitly (e.g., perhaps your data file establishes the node IDs?). Another approach (not shown here) might be to have them be automatically determined as the nodes are created using a global counter (maybe an atomic counter if you're creating them in parallel).
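To sketch that alternative (hedged; the nextID counter and the createNode() helper are names I'm making up here, not part of the main example), something like the following would hand out IDs automatically:
var nextID: atomic int;                        // shared counter, starts at 0

proc createNode(str: string, i: int, flag: bool): int {
  const id = nextID.fetchAdd(1) + 1;           // claim the next unused ID (1, 2, 3, ...)
  Nodes[id] = new node(id, str, i, flag);      // adds id to NodeIDs and stores the node
  return id;
}
Because fetchAdd() increments the counter atomically, this remains safe if nodes are created from several tasks at once.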
Having populated our Nodes, we can then iterate over them serially or in parallel (here I'm doing it in parallel; replacing forall with for will make them serial):
writeln("Printing all node IDs (in an arbitrary order):");
forall nid in NodeIDs do
writeln("I have a node with ID ", nid);
writeln("Printing all nodes (in an arbitrary order):");
forall n in Nodes do
writeln(n);
The order in which these loops print the IDs and nodes is arbitrary for two reasons: (1) they're parallel loops; (2) associative domains and arrays store their elements in an arbitrary order.
Create an edge between the two with xx attributes
Since I associated the edges with nodes, I took the approach of creating a method on the node type that will add an edge to it:
proc node.addEdge(to: int, flag1: bool, flag2: bool) {
  edges.push_back(new edge(id, to, flag1, flag2));
}
This procedure takes the destination node ID and the attributes as its arguments, creates an edge from that information (supplying the originating node's ID as the from field), and uses the push_back() method on rectangular arrays to add it to the list of edges.
I then call this routine three times to create some edges for node 2 (including redundant and self-edges since I only have two nodes so far):
Nodes[2].addEdge(n1.id, true, false);
Nodes[2].addEdge(n1.id, false, true);
Nodes[2].addEdge(2, false, false);
And at this point, I can loop over all of the edges for a given node as follows:
writeln("Printing all edges for node 2: (in an arbitrary order):");
forall e in Nodes[2].edges do
writeln(e);
Here, the arbitrary printing order is only due to the use of the parallel loop. If I'd used a serial for loop, I'd traverse the edges in the order they were added due to the use of a 1D array to represent them.
Respond to request "show me the xx attribute of node (int)"
You've probably got this by now, but I can get at arbitrary attributes of a node simply by indexing into the Nodes array. For example, the expression:
...Nodes[2].str...
would give me the string attribute of node 2. Here's a little helper routine I wrote to get at (and print) various attributes:
proc showAttributes(id: int) {
  if (!NodeIDs.member(id)) {
    writeln("No such node ID: ", id);
    return;
  }
  writeln("Printing the complete attributes for node ", id);
  writeln(Nodes[id]);
  writeln("Printing its string field only:");
  writeln(Nodes[id].str);
}
And here are some calls to it:
showAttributes(n1.id);
showAttributes(2);
showAttributes(3);
I am working to replace a somewhat slow program in Perl with a Chapel implementation
Given that speed is one of your reasons for looking at Chapel, once your program is correct, re-compile it with the --fast flag to get it running quickly.
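For example (assuming your source file is named something like myGraph.chpl; substitute your actual file name):
chpl --fast myGraph.chpl -o myGraph
Without --fast, the compiler leaves execution-time safety checks on and skips back-end optimization, which is helpful while debugging but slow for production runs.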

Related

Use RocksDB to support key-key-value (RowKey->Containers) by splitting the container

Suppose I have a key/value pair where the value is a logical list of strings to which I can append strings. To avoid the situation where inserting a single string item into the queue causes a re-write of the entire list, I'd use multiple key-value pairs to represent it.
Key -> metadata of the value such as length and subkey format
Key-l1 -> value of item 1 in list
Key-l2 -> value of item 2 in list
Key-ln -> the latest value in the list
I'd override the key comparator in RocksDB so that keys of the form Key-ln are sorted by the Key part first and by ln second (i.e., group by Key, and within the same Key sort by ln). This way, all the list items, along with their root key and metadata, are grouped together in an SST during the initial bulk insert and during later SST compaction.
Appending a new list item becomes: (1) read the Key metadata to get the current list size n; (2) insert Key-l(n+1) with the new value. Deleting a list item works as it normally does in RocksDB: delete Key-ln and update the metadata.
To ensure consistency, (1) and (2) are done inside a RocksDB transaction.
Does this design seem OK?
Now, if I want to add another feature, a TTL for the entire key-value (list), I'd use the TTL support already in RocksDB. My understanding is that removing expired items via TTL happens during compaction. However, such compaction is not done under a transaction. RocksDB doesn't know that the Key metadata and the Key-ln entries are related. It is entirely possible that there is a time window where Key->metadata (the root node) has been deleted while the child entries (Key-ln) have not been deleted yet (or the reverse order). If someone reads or updates the list during this window, they will get an inconsistent view of the Key list. Any remedy for this?
Thanks
You should use a Merge Operator; it's designed for exactly this value-append use case. Your design is read-before-write, which carries a performance penalty and in general should be avoided if possible: see "What's read-before-write in NoSQL?".
Options options;
options.merge_operator.reset(new StringAppendOperator(','));
DB* db;
DB::Open(options, kDBPath, &db);
...
std::string result;
db->Merge(WriteOptions(), "key", "value1");
db->Merge(WriteOptions(), "key", "value2");
db->Get(ReadOptions(), "key", &result);  // result is now "value1,value2"
The above example uses the predefined StringAppendOperator, which simply appends new values at the end. You can define your own MergeOperator to customize the merge operation.
Under the hood, the merge operation is performed on the read path (and during compaction, to reduce the number of stacked versions); see Merge Operator Implementation for details.

Neo4j - how to include start node in my query?

I'm attempting to build a recommendation engine for a library system.
This is my db schema:
My starting point is a LoanerCard. The flow is then supposed to look like this: Get all copies -> get the material -> get all copies of the material (including the original) -> get LoanerCard from copy -> get all loaned copies -> return the material name of the copies + an aggregated count to indicate the strength of the recommendation.
My best attempt so far has resulted in this query:
MATCH (L:LoanerCard {Barcode:"10007"})-[:LOANED]->(myLoans)-[:COPY_OF]-(masterMaterial),
      (masterMaterial)<-[:COPY_OF]-(allCopies),
      (allCopies)<-[:LOANED]-(coLoaners),
      (coLoaners)-[r:LOANED]->(theirCopies),
      (theirCopies)-[:COPY_OF]-(materials)
RETURN materials.Title as Recommended, count(*) as Strength ORDER BY Strength DESC
My issue here is that when I traverse the graph, it doesn't include the original copy and the LoanerCards adjacent to it, so essentially it only traverses the area circled in red and never reaches LoanerCards 10817 and 10558.
How can I design my query so it includes these?
A MATCH clause automatically filters out duplicate relationships. Therefore, in order to traverse the same relationships twice, you need to split your MATCH clause in two.
Try this:
MATCH (:LoanerCard {Barcode:"10007"})-[:LOANED]->()-[:COPY_OF]-(masterMaterial)
MATCH (masterMaterial)<-[:COPY_OF]-()<-[:LOANED]-()-[:LOANED]->()-[:COPY_OF]-(materials)
RETURN materials.Title as Recommended, count(*) as Strength ORDER BY Strength DESC

Neo4j Cypher query to find nodes that are not connected too slow

Given we have the following Neo4j schema (simplified, but it shows the important point). There are two types of nodes, NODE and VERSION. VERSIONs are connected to NODEs via a VERSION_OF relationship. VERSION nodes have two properties, from and until, that denote the validity timespan; either or both can be NULL (nonexistent in Neo4j terms) to denote unlimited. NODEs can be connected via a HAS_CHILD relationship. Again, these relationships have two properties, from and until, that denote the validity timespan; either or both can be NULL (nonexistent in Neo4j terms) to denote unlimited.
EDIT: The validity dates on VERSION nodes and HAS_CHILD relations are independent (even though the example coincidentally shows them being aligned).
The example shows two NODEs A and B. A has two VERSIONs AV1 until 6/30/17 and AV2 starting from 7/1/17 while B only has one version BV1 that is unlimited. B is connected to A via a HAS_CHILD relationship until 6/30/17.
The challenge now is to query the graph for all nodes that aren't a child (that are root nodes) at one specific moment in time. Given the example above, the query should return just B if the query date is e.g. 6/1/17, but it should return B and A if the query date is e.g. 8/1/17 (because A isn't a child of B as of 7/1/17 any more).
The current query today is roughly similar to that one:
MATCH (n1:NODE)
OPTIONAL MATCH (n1)<-[c]-(n2:NODE), (n2)<-[:VERSION_OF]-(nv2:ITEM_VERSION)
WHERE (c.from <= {date} <= c.until)
AND (nv2.from <= {date} <= nv2.until)
WITH n1 WHERE c IS NULL
MATCH (n1)<-[:VERSION_OF]-(nv1:ITEM_VERSION)
WHERE nv1.from <= {date} <= nv1.until
RETURN n1, nv1
ORDER BY toLower(nv1.title) ASC
SKIP 0 LIMIT 15
This query works relatively fine in general, but it starts getting slow as hell when used on large datasets (comparable to real production datasets). With 20-30k NODEs (and about twice the number of VERSIONs) the (real) query takes roughly 500-700 ms on a small Docker container running on Mac OS X, which is acceptable. But with 1.5M NODEs (and about twice the number of VERSIONs) the (real) query takes a little more than 1 minute on a bare-metal server (running nothing other than Neo4j). This is not really acceptable.
Do we have any option to tune this query? Are there better ways to handle the versioning of NODEs (which I doubt is the performance problem here) or the validity of relationships? I know that relationship properties cannot be indexed, so there might be a better schema for handling the validity of these relationships.
Any help or even the slightest hint is greatly appreciated.
EDIT after answer from Michael Hunger:
Percentage of root nodes:
With the current example data set (1.5M nodes) the result set contains about 2k rows. That's less than 1%.
ITEM_VERSION node in first MATCH:
We're using the ITEM_VERSION nv2 to filter the result set to ITEM nodes that have no connection to other ITEM nodes at the given date. That means that either no relationship must exist that is valid for the given date, or the connected item must not have an ITEM_VERSION that is valid for the given date. I'm trying to illustrate this:
// date 6/1/17
// n1 returned because relationship not valid
(nv1 ...)->(n1)-[X_HAS_CHILD ...6/30/17]->(n2)<-(nv2 ...)
// n1 not returned because relationship and connected item n2 valid
(nv1 ...)->(n1)-[X_HAS_CHILD ...]->(n2)<-(nv2 ...)
// n1 returned because connected item n2 not valid even though relationship is valid
(nv1 ...)->(n1)-[X_HAS_CHILD ...]->(n2)<-(nv2 ...6/30/17)
No use of relationship-types:
The problem here is that the software features a user-defined schema, and ITEM nodes are connected by custom relationship types. As we can't have multiple types/labels on a relationship, the only common characteristic of these kinds of relationships is that they all start with X_. That's been left out of the simplified example here. Would searching with the predicate type(r) STARTS WITH 'X_' help here?
What Neo4j version are you using?
What percentage of your 1.5M nodes will be found as roots at your example date, and if you don't have the limit, how much data comes back? Perhaps the issue is not in the match so much as in the sorting at the end?
I'm not sure why you had the VERSION nodes in your first part; at least, you don't describe them as relevant for determining a root node.
You didn't use relationship-types.
MATCH (n1:NODE) // matches 1.5M nodes
// has to do 1.5M * degree optional matches
OPTIONAL MATCH (n1)<-[c:HAS_CHILD]-(n2) WHERE (c.from <= {date} <= c.until)
WITH n1 WHERE c IS NULL
// how many root nodes are left?
// # root nodes * version degree (1..2)
MATCH (n1)<-[:VERSION_OF]-(nv1:ITEM_VERSION)
WHERE nv1.from <= {date} <= nv1.until
// has to sort all those
WITH n1, nv1, toLower(nv1.title) as title
RETURN n1, nv1
ORDER BY title ASC
SKIP 0 LIMIT 15
I think a good start for improvement would be to match on nodes using an index so you can quickly get a smaller relevant subset of nodes to search. Your approach right now must inspect all your :NODEs and all their relationships and patterns off of them every single time, which, as you've found, won't scale with your data.
Right now the only nodes in your graph with date/time properties are your :ITEM_VERSION nodes, so let's start with those. You'll need an index on :ITEM_VERSION's from and until properties for fast lookup.
The nulls are going to be problematic for your lookups, as any inequality against a null value returns null, and most workarounds to working with nulls (using COALESCE() or several ANDs/ORs for null cases) seem to prevent usage of index lookups, which is the point of my particular suggestion.
I would encourage you to replace your nulls in from and until with min and max values, which should let you take advantage of finding nodes by index lookup:
MATCH (version:ITEM_VERSION)
WHERE version.from <= {date} <= version.until
MATCH (version)<-[:VERSION_OF]-(node:NODE)
...
That should at least provide quick access to a smaller subset of nodes at the start for continuing your query.

Riak inserting a list and querying a list

I was wondering if there is an efficient way of handling arrays/lists in Riak. Right now I'm storing the whole array as a string and searching the string to find out whether an element exists in the array.
ID (key) : int[] (Value)
Also, how do I write a map/reduce query that returns all the keys for which the value array contains a given element?
For example:
1 : 2,3,4
2 : 2,5
How would I write an M/R query to give me all the keys for which the value contains 2? The result would be 1, 2 in this case.
Any help is appreciated
If you are searching for a specific element in the list and are using the LevelDB backend, you could create a secondary index that will contain the values of the array. Secondary indexes in Riak may contain multiple values and can be searched for equality, which should allow you to search for single elements in the array without having to resort to MapReduce.
If you need to make more complicated queries based on either several elements in the list or other parameters, you could retrieve a subset of records based on the secondary index and then process them further on the client side or perhaps even through a MapReduce job.

Combination of List and Hash in Qt

I need a data structure in which each element has a specific index but can also be retrieved using a key.
I need this data structure for model-view programming in Qt.
On the one hand, the View asks for an element in a specific row.
On the other hand, the model wants to insert and modify elements with a given key.
Both operations should run in O(1).
Here is an example of what I want:
The View sees the following:
list[0]: "Alice", aged 22
list[1]: "Carol", aged 15
list[2]: "Bob", aged 23
The Model sees:
hash["Alice"]: "Alice", aged 22
hash["Bob"]: "Bob", aged 23
hash["Carol"]: "Carol", aged 15
My idea was the following: I have a QList<Value> and a QHash<Key, Value*>.
The hash points to the place in the list, where the corresponding element is stored.
This is the code to insert/edit values:
if (hash.contains(key))
    *hash[key] = value;
else
{
    int i = list.size();
    list << value;
    hash[key] = &list[i];
}
The problem is that this code does not always work. Sometimes it works as expected, but sometimes the data structure ends up in an inconsistent state.
I suspect this is because QList moves its contents around in memory when it allocates new space, or something like that.
Operations which are important (should run in expected O(1)):
Insert key/value pair (appends the value to the end of the list)
Look up and modify a value using a key
Look up and modify a value using an index
Other operations which have to be possible, but don't have to be possible in constant run time:
Delete an element by index
Delete an element by key
Insert in the middle of the array
Swap elements in the array / sort array
Get the index of a key
My two questions are:
Is there any data structure which does what I want?
If there is not, how could I fix this or is there a better approach?
Approach 1: Instead of the pointer, you can store the list index in the hash. Then you have one more indirection (from the hash, you get the index, then you retrieve from the list), but it is still O(1). The difference in speed should not be too much.
Approach 2: Both the list and the hash store pointers to the values. Then they will stay valid. However, deleting based on index or key will become O(n), as you have to find the object manually in the non-corresponding container.
I also wonder how you want to solve the issue of deletion by index or insertion in the middle anyway. In both cases, the hash will point to wrong entries (both in your approach and Approach 1). Here you would be forced to go with Approach 2.
