Suppose I have a person vertex with multiple edges, and I want to project properties from all of those traversals. What is the most efficient way to write such a query with the Cosmos DB Gremlin API?
I tried the following, but its performance is slow.
g.V().
hasLabel('person').
project('Name', 'Language', 'Address').
by('name').
by(out('speaks').values('language')).
by(out('residesAt').values('city'))
Also, I have multiple filters and sorting for each traversal.
I don't think you can write that specific traversal any more efficiently than it is already written, especially if you've added filters to the out('speaks') and out('residesAt') traversals to further limit those paths. Note that, as written, your example only returns the first "language" or "city" found, which is obviously faster than traversing all of the possible paths.
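If you actually need every language and city rather than just the first one found, you could fold() each child traversal into a list; a sketch of that variant, using the same schema as your query:
g.V().
  hasLabel('person').
  project('Name', 'Language', 'Address').
    by('name').
    by(out('speaks').values('language').fold()).
    by(out('residesAt').values('city').fold())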
It does stand out to me that you are trying to retrieve all the "person" vertices. You don't say whether you have additional filters there, but if you do not, the cost of this traversal could be steep if you have millions of "person" vertices coming back. Traversals that filter only on a vertex label are typically costly, as most graphs do not optimize those sorts of lookups. In the worst case, such a situation could mean a full graph scan just to get that initial set of vertices.
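If you can narrow that initial set with an indexed property, you should do so; a sketch using a hypothetical "department" property (in Cosmos DB, a filter on the partition key is the analogous win, since it avoids fanning the query out across partitions):
g.V().has('person', 'department', 'sales').
  project('Name', 'Language', 'Address').
  by('name').
  by(out('speaks').values('language')).
  by(out('residesAt').values('city'))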
Related
I have a graph model region (vertex) -> has_person (edge) -> person (vertex). I want to get the region vertices that have a person named Tom.
This query works fine:
g.V().hasLabel("person").has("name", "Tom").inE("has_person").outV().hasLabel("region").
But why do the following queries hang?
g.V().hasLabel("region").and(
__.hasLabel("person").has("name", "Tom").inE("has_person").outV().hasLabel("region")
)
g.V().and(
__.hasLabel("person").has("name", "Tom").inE("has_person").outV().hasLabel("region")
).hasLabel("region")
When writing graph traversals with Gremlin you need to think about how the graph database you are using will optimize your traversal (e.g., is a global index being used?).
You should consider the indexing capability of your graph database and examine the output of the profile() step; it will tell you if and where indices are being used. My guess is that the query that works "fine" uses an index to find "Tom" and is then able to quickly traverse from that one vertex to find the regions connected to him by "has_person" edges. Almost every graph will be capable of optimizing that sort of pattern. The queries that "hang" will typically not be optimized by most graphs to utilize an index, largely because the pattern you've chosen with the and() step isn't one that most optimizations look for. My guess is that both of those traversals are filtering almost completely in-memory.
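For example, appending profile() to the working query should show whether the initial has("name", "Tom") lookup is backed by an index:
g.V().hasLabel("person").has("name", "Tom").
  inE("has_person").outV().hasLabel("region").profile()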
FWIW, the query that works "fine" is, I think, the optimal way to write it given your stated desired output. Your first hanging query will never return results, because it requires a vertex to have a label that is both "region" and "person", which is not possible. The second hanging query doesn't seem to need the and() in the first place and double-filters on the "region" label.
Context:
I have a graph with about 2,000 vertices and 6,000 edges; over time this might grow to 10,000 vertices and 100,000 edges. Currently I am upserting the new vertices using the following traversal query:
Upserting Vertices & Edges
queryVertex = "g.V().has(label, name, foo).fold().coalesce(
unfold(), addV(label).property(name, foo).property(model, 2)
).property(model, 2)"
The intent here is to look for a vertex named foo; if it is found, update its model property, otherwise create a new vertex and set the model property. This is issued twice: once for the source vertex and then for the target vertex.
Once the two related vertices are created, another query is issued to create the edge between them:
queryEdge = "g.V('id_of_source_vertex').coalesce(
outE(edge_label).filter(inV().hasId('id_of_target_vertex')),
addE(edge_label).to(V('id_of_target_vertex'))
).property(model, 2)"
Here, if there is already an edge between the two vertices, the model property on the edge is updated; otherwise the edge is created between them.
And the pseudocode that does this, is something as follows:
for each edge in the list of new edges:
//upsert source and target vertices:
execute queryVertex for edge.source
execute queryVertex for edge.target
// upsert edge:
execute queryEdge
This works, but it is highly inefficient; for the mentioned graph size it takes several minutes to finish, and adding some in-app concurrency reduces the time by only a couple of minutes. Surely there must be a more efficient way of doing this for such a small graph size.
Question
* How can I make these upserts faster?
Bulk loading should typically be relegated to the provider-specific tools that are optimized to handle such tasks. Gremlin really doesn't provide abstractions to cover the diverse group of bulk-loader tools out there for the various graph database systems that implement TinkerPop. For Neptune, which is how you tagged your question, that would mean using the Neptune Bulk Loader.
Speaking specifically to your question, though, you might see some optimizations to the approach you described. From a Gremlin perspective, I imagine you would see some savings by submitting a single Gremlin request per edge, combining your existing traversals:
g.V().has(label, name, foo).fold().
coalesce(unfold(),
addV(label).property(name, foo)).
property(model, 2).as('source').
V().has(label, name, bar).fold().
coalesce(unfold(),
addV(label).property(name, bar)).
property(model, 2).as('target').
coalesce(inE(edge_label).where(outV().as('source')),
addE(edge_label).from('source').to('target')).
property(model, 2)
I think I got that right - untested, but hopefully you get the idea. Basically, we just reference the vertices already in memory via step labels so that we don't need to re-query them. If you continue with Gremlin-style bulk loading, you might try other tactics as well, like ordering your edges so that you can batch together more edge loads to reduce the number of vertex lookups, and submitting vertex/edge data in a more dynamic fashion as described here.
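As a rough illustration of the batching idea: if several new edges share the same source vertex, you could look it up once and add all of those edges in a single traversal. In this sketch, bar1 and bar2 are hypothetical target names, and the coalesce() upsert checks are omitted for brevity (it assumes all three vertices already exist):
g.V().has(label, name, foo).as('source').
  V().has(label, name, bar1).
  addE(edge_label).from('source').property(model, 2).
  V().has(label, name, bar2).
  addE(edge_label).from('source').property(model, 2)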
Is there any way to list all partition keys with the Gremlin API?
I know that I can create a PartitionStrategy when creating a graph. However, how can I find out which partitions already exist in a graph?
In the context of PartitionStrategy, there are only two ways to know what partitions are in the graph:
You know and keep track of the various partitions yourself as they are created.
You query the graph for the partitions by getting the unique list of partition names in the partition key: g.V().values("nameOfYourPartitionKey").dedup()
Obviously, the second approach listed above could be very expensive since it is a global traversal. For especially large graphs you may need to use an OLAP-style traversal with SparkGraphComputer.
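For reference, a minimal sketch of how a PartitionStrategy is typically built in TinkerPop, assuming a hypothetical partition key named "_partition"; the listing query from the second approach runs against the raw g, which sees all partitions:
strategy = PartitionStrategy.build().
             partitionKey("_partition").
             writePartition("a").
             readPartitions("a").create()
gA = g.withStrategies(strategy)      // traversals via gA only see partition "a"
g.V().values("_partition").dedup()   // the raw g lists every partition name in use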
I'm trying to produce a Gremlin query whereby I need to find vertices which have edges from specific other vertices. The less abstract version of this query: I have user vertices, and those are related to group vertices (i.e. subjects in a school, so students who are in "Year 6 Maths" and "Year 6 English", etc.). An extra difficulty is that subgroups can exist in this query.
The query needs to find those users who are in 2 or more groups specified by the user.
Currently I have a brief solution, but in production usage on Amazon Neptune this query performs far too poorly, even with a small amount of data. I'm sure there's a simpler way of achieving this :/
g.V()
.has('id', 'group_1')
.repeat(out("STUDENT", "SUBGROUP"))
.until(hasLabel("USER"))
.aggregate("q-1")
.V()
.has('id', 'group_2')
.repeat(out("STUDENT", "SUBGROUP"))
.until(hasLabel("USER"))
.where(within("q-1"))
.aggregate("q-2")
.V()
.hasLabel("USER")
.where(within("q-2"))
# We add some more filtering here, such as search terms
.dedup()
.range(0, 10)
.values("id")
.toList()
The first major change you can do is to not bother iterating all of V() again for "USER" - that's already the output from the prior steps, so collecting "q-2" just to use it for a filter doesn't seem necessary:
g.V().
has('id', 'group_1').
repeat(out("STUDENT", "SUBGROUP")).
until(hasLabel("USER")).
aggregate("q-1").
V().
has('id', 'group_2').
repeat(out("STUDENT", "SUBGROUP")).
until(hasLabel("USER")).
where(within("q-1")).
# We add some more filtering here, such as search terms
dedup().
range(0, 10).
values("id")
That should already be a huge savings to your query because that change avoids iterating the entire graph in memory (i.e. full scan of all vertices) as there was no index lookup there.
I don't know what your additional filters are here:
# We add some more filtering here, such as search terms
but I would definitely look to filter the users earlier in your query rather than later. Perhaps consider using emit() on your repeat() steps to filter better. You should probably also dedup() your "q-1" to reduce the size of that list.
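For instance, a sketch of the first aggregation with dedup() applied before the users are collected (untested, using the same labels as your query):
g.V().
  has('id', 'group_1').
  repeat(out("STUDENT", "SUBGROUP")).
  until(hasLabel("USER")).
  dedup().
  aggregate("q-1")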
I'd be curious to know how well just the initial change I suggested works, as that was probably the bulk of your query cost (unless you have a really deep/wide tree of students/subgroups, I guess). Perhaps there is more that could be tweaked here, but it would be nice to know that you at least have a traversal with satisfying performance at this point.
I have an HBase database that stores adjacency lists for a directed graph, with the edges in each direction stored in a pair of column families, where each row denotes a vertex. I am writing a MapReduce job which takes as its input all nodes that have an edge from the same vertices that have an edge pointing at some other vertex (nominated as the subject of the query). This is a little difficult to explain, but in the following diagram, the set of nodes taken as the input when querying on vertex 'A' would be {A, B, C}, by virtue of their all having edges from vertex '1':
To perform this query in HBase, I first look up the vertices with edges to 'A' in the reverse-edges column family, yielding {1}, and then, for every element in that set, look up the vertices with edges from that element in the forward-edges column family.
This should yield a set of key-value pairs: {1: {A,B,C}}.
Now, I would like to take the output of this set of queries and pass it to a Hadoop MapReduce job; however, I can't find a way of 'chaining' HBase queries together to provide the input to a TableMapper in the HBase MapReduce API. So far, my only idea has been to provide another initial mapper which takes the results of the first query (on the reverse-edges table), performs the query on the forward-edges table for each result, and yields the results to be passed to a second map job. However, performing I/O from within a map job makes me uneasy, as it seems rather counter to the MapReduce paradigm (and could lead to a bottleneck if several mappers are all trying to access HBase at once). Can anyone suggest an alternative strategy for performing this sort of query, or offer any advice about best practices for working with HBase and MapReduce in this way? I'd also be interested to know if there are any improvements to my database schema that could mitigate this problem.
Thanks,
Tim
Your problem does not flow well with the Map/Reduce paradigm. I've seen the shortest-path problem solved by many M/R jobs chained together. This is not very efficient, but it is needed to get the global view at the reducer level.
In your case, it seems that you could perform all the requests within your mapper by following the edges and keeping a list of seen nodes.
However, performing IO from within a map job makes me uneasy
You should not worry about that. Your data model is essentially random, and trying to achieve data locality will be extremely hard, so you don't have much choice but to query all of this data across the network. HBase is designed to handle large parallel query loads. Having multiple mappers query disjoint data will yield a good distribution of requests and high throughput.
Make sure to keep a small block size in your HBase tables to optimize your reads, and keep as few HFiles as possible for your regions. I'm assuming your data is quite static here, so running a major compaction will merge the HFiles together and reduce the number of files to read.