I have two questions:
How do I index this query?
g.V(vertexId).repeat(out().hasLabel('location')).emit().tree().next()
In the Titan 1.0 documentation, the only approaches shown add indexes to the graph after the data has already been inserted.
However, in the generate-modern.groovy file here, we see that indexing is done before the vertices are created, which seems reasonable. But when I try to do the same using buildMixedIndex, it throws:
IllegalArgumentException: Unknown external index backend: search
My approach was:
def location = mgmt.makeVertexLabel("location").make()
def displayName = mgmt.makePropertyKey("displayName").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def shortName = mgmt.makePropertyKey("shortName").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def description = mgmt.makePropertyKey("description").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def latitude = mgmt.makePropertyKey("latitude").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def longitude = mgmt.makePropertyKey("longitude").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def locationByName = mgmt.buildIndex("displayNameAndShortNameAndDescriptionAndLatitudeAndLongitude", Vertex.class).addKey(displayName).addKey(shortName).addKey(description)
.addKey(latitude).addKey(longitude).indexOnly(location).buildMixedIndex('search')
Where am I getting it wrong?
If that query is taking a long time, the problem is likely that it is visiting too many elements or it is stuck in an infinite loop. The existing JanusGraph/Titan indexes won't help for that. You already have a direct vertex lookup by id, g.V(vertexId), and the rest of the query is traversing the neighborhood from that vertex. I'd suggest using edge labels, i.e. out('friends'), to limit the number of edges you visit. You could also use simplePath() to eliminate cyclic paths. You could also use times() or until() to keep a limit on the number of times you loop with the repeat() step.
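For example, a sketch only, where the 'locatedIn' edge label and the loop limit of 10 are assumptions rather than values from your schema:
g.V(vertexId).
  repeat(out('locatedIn').hasLabel('location').simplePath()).   // follow one edge label, drop cyclic paths
  emit().
  times(10).                                                    // cap the number of loops
  tree().
  next()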
The configuration example you referenced only used composite indexes, which do not require an indexing backend.
Mixed indexes require configuring an indexing backend, either Elasticsearch, Lucene, or Solr. Pick one of these, then make sure you pass the correct configuration properties when you initialize your graph. You can find several examples in the distribution zip file in the conf directory. For example, in the janusgraph-cassandra-es.properties file, you'll find:
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
index.search.elasticsearch.client-only=true
where the X in index.X.backend (here "search") is the chosen index configuration name, and it is exactly that name you must pass to buildMixedIndex(X).
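As a rough sketch, assuming you picked Elasticsearch and start from a properties file like the one above (in Titan the factory is TitanFactory, in JanusGraph it is JanusGraphFactory), the name passed to buildMixedIndex must match that X:
// the file name is an example from the Titan 1.0 distribution; adjust to your setup
graph = TitanFactory.open('conf/titan-cassandra-es.properties')
mgmt = graph.openManagement()
location = mgmt.makeVertexLabel('location').make()
displayName = mgmt.makePropertyKey('displayName').dataType(String.class).cardinality(Cardinality.SINGLE).make()
mgmt.buildIndex('locationByDisplayName', Vertex.class).
     addKey(displayName).
     indexOnly(location).
     buildMixedIndex('search')   // 'search' matches index.search.backend in the properties file
mgmt.commit()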
One more note: both composite and mixed indexes only help the first-level Gremlin query (the initial vertex lookup), not the second level of the traversal. A vertex-centric index is required for the second-level query.
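For reference, a vertex-centric index is defined on an edge label; a minimal sketch, where the 'precedes' label and 'weight' sort key are hypothetical stand-ins for your own schema:
mgmt = graph.openManagement()
precedes = mgmt.makeEdgeLabel('precedes').make()
weight = mgmt.makePropertyKey('weight').dataType(Integer.class).make()
// index each vertex's outgoing 'precedes' edges, sorted by 'weight' descending
mgmt.buildEdgeIndex(precedes, 'precedesByWeight', Direction.OUT, Order.decr, weight)
mgmt.commit()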
Related
I want to create 1000+ edges in a single query.
Currently, I am using the AWS Neptune database and Gremlin.Net to create them.
The issue I am facing is speed: it takes a huge amount of time because each query is a separate HTTP request.
So I am planning to combine all of my queries into a single string and execute them in one shot.
_g.AddE("allow").From(_g.V().HasLabel('person').Has('name', 'name1')).To(_g.V().HasLabel('phone').Where(__.Out().Has('sensor', 'nfc'))).Next();
There is a chance that the "To" (target) vertex may not exist in the database, in which case this query fails as well. So I had to check that the vertex exists, using hasNext(), before executing this query.
As of now it works fine, but when combining all 1000+ edge creations at once, is it possible to write a query that doesn't break if the "To" (target) vertex is not found?
You should look at using the Element Existence pattern for each vertex as shown in the TinkerPop Recipes.
In your example you would replace this section of your query:
_g.V().HasLabel('person').Has('name', 'name1')
with something like this (I don't have a .NET environment to test the syntax):
__.V().Has('person', 'name', 'name1').Fold().
Coalesce(__.Unfold(), __.AddV('person').Property('name', 'name1'))
This will act as an upsert and either return the existing vertex or add a new one with the name property. The same pattern can then be used on your To step to ensure that vertex exists before the edge is created as well.
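Sketched in Gremlin-Groovy (the same steps exist in Gremlin.Net), with a hypothetical 'serial' property standing in for whatever uniquely identifies the phone, the combined query could look roughly like this:
g.V().has('person', 'name', 'name1').fold().
  coalesce(unfold(), addV('person').property('name', 'name1')).as('from').
  coalesce(V().has('phone', 'serial', 'phone1'),
           addV('phone').property('serial', 'phone1')).as('to').
  addE('allow').from('from').to('to')
Because both endpoints come out of a coalesce(), the addE() never fails on a missing vertex.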
I want to return a node that has a specific uuid as a property value, and I just want to return one of them (there could be several matches).
g.V().where('application_uuid', eq(application_uuid)).next()
Would the above query return all the nodes? How do I just return 1?
I also want to get the property map of this node. How would I do this?
You would just do:
g.V().has('application_uuid', application_uuid).next()
but even better would be the signature that includes the vertex label (if you can):
g.V().has('vlabel', 'application_uuid', application_uuid).next()
Perhaps going a bit further if you explicitly need just one you could:
g.V().has('vlabel', 'application_uuid', application_uuid).limit(1).next()
so that both the graph provider and/or Gremlin Server know your intent is to only next() back one result. In that way, you may save some extra network traffic/processing.
This is a very basic query, so you should read more about Gremlin. I can suggest the Practical Gremlin book.
As for your query, you can use has to filter by property, and limit to get specific number of results:
g.V().has('application_uuid', application_uuid).limit(1).next()
Running your query without the limit will also return a single result, since next() just pulls the first item from the result iterator. Using toList() will return all results in a list.
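The question also asked for the property map; valueMap() returns it (same hypothetical 'vlabel' as above; in recent TinkerPop versions elementMap() additionally includes the id and label):
g.V().has('vlabel', 'application_uuid', application_uuid).limit(1).valueMap().next()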
I'm new to Gremlin and still learning.
I'd like to include the starting vertex in the results of the following:
g.V('leafNode').repeat(out()).emit()
This gives me a collection of vertices starting from an arbitrary leaf node "upwards" to the root vertex. However, this collection excludes the V('leafNode') vertex itself.
How do I include the V('leafNode') in this collection?
Thanks
-John
There are two places the emit() can go in this statement: before the repeat() or after it. If it comes before the repeat(), it is evaluated before each iteration of the loop, which means the starting vertex is emitted too.
Source: http://tinkerpop.apache.org/docs/current/reference/#repeat-step
So the following should take care of your request.
g.V('leafNode').emit().repeat(out())
I need to insert about 1 million nodes into Neo4j. Each node has to be unique, so every time I insert a node I must check that the same node does not already exist. The relationships must also be unique.
I'm using Python and Cypher:
uq = 'CREATE CONSTRAINT ON (a:ipNode8) ASSERT a.ip IS UNIQUE'
...
queryProbe = 'MERGE (a:ipNode8 {ip:"' + prev + '"})'
...
queryUpdateRelationship= 'MATCH (a:ipNode8 {ip:"' + prev + '"}),(b:ipNode8 {ip:"' + next + '"}) MERGE (a)-[:precede]->(b)'
The problem is that after putting 40-50K nodes into Neo4j, the insertion speed slows down dramatically and I cannot insert anything else.
Your question is quite open-ended. In addition to @InverseFalcon's recommendations, here are some other things you can investigate to speed things up.
Read the Performance Tuning documentation, and follow the recommendations. In particular, you might be running into memory-related issues, so the Memory Tuning section may be very helpful.
Your Cypher query(ies) can probably be sped up. For instance, if it makes sense, you can try something like the following. The data parameter is expected to be a list of objects having the format {a: 123, b: 234}. You can make the list as long as appropriate (e.g., 20K) to avoid running out of memory on the server while it processes the list within a single transaction. (This query assumes that you also want to create b if it does not exist.)
UNWIND {data} AS d
MERGE (a:ipNode8 {ip: d.a})
MERGE (b:ipNode8 {ip: d.b})
MERGE (a)-[:precede]->(b)
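For instance, here is a minimal Python sketch of sending that query in batches with the official neo4j driver; the URI, credentials, and batch size are placeholders, and newer Neo4j versions write the parameter as $data rather than {data}:
from neo4j import GraphDatabase  # older driver versions import from neo4j.v1

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
UNWIND $data AS d
MERGE (a:ipNode8 {ip: d.a})
MERGE (b:ipNode8 {ip: d.b})
MERGE (a)-[:precede]->(b)
"""

def insert_pairs(pairs, batch_size=20000):
    # pairs is a list of (prev, next) ip tuples, as in the question
    with driver.session() as session:
        for i in range(0, len(pairs), batch_size):
            batch = [{"a": p, "b": n} for p, n in pairs[i:i + batch_size]]
            session.run(query, data=batch)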
There are also periodic execution APOC procedures that you might be able to use.
For mass inserts like this, it's best to use LOAD CSV with periodic commit or the import tool.
I believe it's also best practice to use a parameterized query instead of appending values into a string.
Also, you created a unique property constraint on :ipNode8, but not :ipNode, which is the first one you MERGE. Seems like you'll need a unique constraint for that one too.
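For example, the relationship MERGE above could be parameterized like this (a sketch that assumes a driver session as in the earlier snippet, with prev and next taken from the question):
# values are passed as query parameters instead of being concatenated into the string
session.run(
    "MATCH (a:ipNode8 {ip: $prev}), (b:ipNode8 {ip: $next}) "
    "MERGE (a)-[:precede]->(b)",
    prev=prev, next=next,
)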
I'm a little confused about this one. Several similar examples can be found throughout the documentation, such as:
g.V.has('name','hercules').next()
g.query().has("name",EQUAL,"hercules").vertices()
Could someone clarify what the difference in the process is between the two above?
Thanks
The first is gremlin-groovy syntax:
g.V.has('name','hercules').next()
and it iterates all vertices, looking for those that have a "name" property with a value of "hercules". In the event that "name" is indexed, Titan will utilize the index to avoid the linear scan to find such vertices.
The second is basically Java and the Titan API. The gremlin-groovy code above essentially compiles down to your second statement:
g.query().has("name",EQUAL,"hercules").vertices()
However, the second statement returns an iterable of all vertices that match the filter and doesn't just pop off the first one as the Gremlin statement does (given its use of next()).