Why is Gremlin Server / JanusGraph ignoring some of my requests?

I'm using the Gremlin Python library to perform traversals against a JanusGraph deployment of Gremlin Server (the same also happens with plain TinkerGraph). Some long traversals (with thousands of instructions) never get a response: no errors, no timeouts, no log entries on either the server or the client. Nothing.
The conditions that trigger this silent treatment aren't clear. The behaviour doesn't depend linearly on the number of bytes or instructions. For instance, this code will hang forever for me:
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

g = traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin', 't'))
g = g.inject("")
for i in range(0, 8000):
    g = g.constant("test")
print(f"submitting traversal with length={len(g.bytecode.step_instructions)}")
result = g.next()
print(f"done, got: {result}")  # this is never reached
It doesn't depend on just the number of bytes in the request message, since the number of instructions beyond which I get no response doesn't change even with very large constant values in place of just "test". For instance, injecting 7000 values containing many paragraphs of Lorem Ipsum works as expected and returns in a few milliseconds.
While it shouldn't matter (since I should be getting a proper error instead of nothing), I've already increased server-side maxHeaderSize, maxChunkSize, maxContentLength etc. to ridiculously high numbers. Changing the serialization format (e.g. from GraphSONMessageSerializerV3d0 to GraphBinaryMessageSerializerV1) doesn't help either.
Note: I know that very long traversals are an anti-pattern in Gremlin, but sometimes it's not possible or very inefficient to structure traversals such that they can use injected values instead.
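For reference, the injected-values pattern that note refers to looks roughly like this in gremlin-python. This is only a sketch: it assumes the same endpoint and traversal source as the example above, and the unfold()/count() tail is just there to force evaluation.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

g = traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin', 't'))

# one inject() step carrying all the values, instead of 8000 constant() steps
values = ["test"] * 8000
result = g.inject(values).unfold().count().next()
print(result)  # 8000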

I've answered this question on gremlin-users not realizing it was also asked here on StackOverflow. For completeness, I'll duplicate my response here.
The issue is less related to bytes and string lengths and more to the length of the traversal chain (i.e. the number of steps in your traversal). You end up hitting a JVM limit on the stack size on the server. You can increase the stack size by raising the -Xss value on the server JVM, which should allow a longer traversal. That will likely come with the need to re-examine other JVM settings like -Xmx and perhaps garbage collection options.
I do find it interesting that you don't get any error messages though - you should see a StackOverflowError somewhere, unless the server is just wholly bogged down by your request. I'd consider throwing more -Xmx at it to see if you can get it to respond with an error at least, or keep an eye on the server logs to see if it surfaces there.

Related

Neptune and Gremlin - wild CPU utilization

Issue
I'm seeing extremely high CPU utilization when making [what seems like] a fairly common ask to a graph database. In fact, the utilization is so large that Amazon Neptune seems to "tap out" when I execute multiple of the queries in rapid succession / simultaneously.
I've experimented with even the largest (db.r5d.24xlarge) instance, which costs $14k/month (😅), and I still see the general behavior -- memory is fine, CPU utilization goes wild.
Further, the query isn't exactly quick (~5 seconds), so I'm kind of getting the worst of both worlds...
Ask
Can someone help me understand what I can do to address this / review my query? My hope was, just as a relational database can handle thousands of concurrent requests, Amazon Neptune could do similar.
I suspect I might be using .limit() in the wrong spot. That said, the results produced are correct. If the limit() is moved up before the .and(), I'm concerned I'd end up with suboptimal results, since -- due to the until() clauses -- it could just be paths that stopped w/o making it to the toPortId.
Update: I’m wondering if it might make sense to add a timeLimit() clause to the repeat() clause … anecdotally, after some testing, it seems like I’m paying a significant time penalty to rigorously verify that there are, in fact, no routes to certain places.
Detail
I'm new to Gremlin / Neptune / graph databases in general, so it's possible/likely that I'm either doing something wrong (or have something incorrectly configured) -or- I overestimated the ability of these tools to handle queries.
My use-case requires that I "qualify" (or disqualify) a large number of candidate destinations -- that is:
Starting at ORIGIN (fromPortId) are there / what are the paths to CANDIDATE (toPortId) for which the duration properties sums to less than LIMIT (travelTimeBudget)?
Offending Query
const result = await g.withSack(INITIAL_CHECKIN_PENALTY)
  .V(fromPortId)
  .repeat(
    (__.outE().hasLabel('HAS_VOYAGE_TO').sack(operator.sum).by('duration'))
      .sack(operator.sum).by(__.constant(LAYEROVER_PENALTY))
      .inV()
      .simplePath()
  )
  .until(
    __.or(
      __.has('code', toPortId),
      __.sack().is(gte(travelTimeBudget)),
      __.loops().is(maxHops))
  )
  .and(
    __.has('code', toPortId),
    __.sack().is(lte(travelTimeBudget))
  )
  .limit(4)
  .order().by(__.sack(), order.asc)
  .local(
    __.union(
      __.path().by('code').by('duration'),
      __.sack()
    ).fold()
  )
  .local(
    __.unfold().unfold().fold()
  )
  .toList()
  .then(data => {
    return data
  })
  .catch(error => {
    console.log('ERROR', error);
    return false
  });
Execution
Database: Amazon Neptune (db.r5.large)
Querying from / runtime: AWS Lambda running Node.js
Graph
5,300 vertices (Port)
42,000 edges (HAS_VOYAGE_TO)
Data Model
Note: For this issue, I'm only executing the "upper plane" / top half.
To address the CPU discussion, perhaps a small explanation of the Neptune instance architecture will be beneficial. Each Neptune instance has a worker thread pool. The number of workers in the pool is twice the number of vCPU the instance has. That controls how many queries (maximum) can be running concurrently on a given instance. Each instance also has a query queue that will hold queries sent to the instance until a worker becomes available. For a somewhat complex query, against a small instance type (like the db.r5.large), seeing 20 to 30% CPU utilization is not unexpected. A significant portion of the instance memory is used to cache graph data locally in a most recently used fashion. The remaining memory is allocated to the worker threads for query execution. So larger instances have (a) more workers (b) more memory for caching graph data locally and (c) more memory for use during query execution.
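One practical consequence of that sizing: if a client fires more simultaneous traversals than the instance has workers, the extras just sit in the queue while CPU stays pegged. Below is a minimal sketch of capping client-side concurrency at the worker count, written with gremlin-python for brevity (the same idea applies to gremlin-javascript in Lambda); WORKER_COUNT, run_route_query and qualify_candidates are illustrative names, not Neptune APIs.
from concurrent.futures import ThreadPoolExecutor

WORKER_COUNT = 4  # e.g. a db.r5.large has 2 vCPU, so 2 x 2 = 4 query workers

def run_route_query(g, from_port_id, to_port_id):
    # placeholder for the sack()/repeat() traversal shown in the question
    return g.V(from_port_id).out('HAS_VOYAGE_TO').has('code', to_port_id).limit(4).toList()

def qualify_candidates(g, from_port_id, to_port_ids):
    # never keep more traversals in flight than the instance can actually execute
    with ThreadPoolExecutor(max_workers=WORKER_COUNT) as pool:
        return list(pool.map(lambda to_id: run_route_query(g, from_port_id, to_id), to_port_ids))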
Without having the data it's hard to attempt to modify your query and test it, but it may well be possible to optimize various parts of it to improve its overall efficiency.
As there are a lot of topics that we could discuss here, and I am more than happy to do that, it might be easiest, if you are able, to open an AWS support case and ask that the support engineer create a ticket and have it assigned to me. I will be happy to then get on the phone with you and we can spend some time discussing your use case if that would help.
If you are not able to open a support case, we can connect other ways. I'm easy to find on LinkedIn if you prefer to reach out that way.
I very much want to help you with this and your other questions, but I feel the most expeditious way might be for us to get on the phone.
I'm also happy to keep discussing here of course.

Gremlin console keeps returning "Connection to server is no longer active" error

I tried to run a Gremlin query adding a property to vertex through Gremlin console.
g.V().hasLabel("user").has("status", "valid").property(single, "type", "valid")
I constantly get this error:
org.apache.tinkerpop.gremlin.jsr223.console.RemoteException: Connection to server is no longer active
This error happens after the query has been running for one or two minutes.
I tried some simple queries like g.V().limit(10) and it works fine.
Since the affected vertex count is more than 4 million, I'm not sure if it is failing due to a timeout issue.
I also tried to split it into small batches:
g.V().hasLabel("user").has("status", "valid").hasNot("type").limit(200000).property(single, "type", "valid")
It succeeded for first few batches and started failing again.
Are there any recommendations for updating millions of vertices?
The precise approach you take may vary depending on the backend graph database and storage you are using as well as the capacity of the hardware being used.
The capacity of the hardware where Gremlin Server is running, in terms of the number of CPUs and, most importantly, memory, will also be a factor, as will the query timeout setting.
To do this in Gremlin, if you had a way to identify distinct ranges of vertices easily you could split this up into multiple threads each doing batches of updates. If the example you show is representative of your actual need then that is likely not possible in this case.
Likewise some graph databases provide a bulk load capability that is often a good way to do large batch updates but probably not an option here as you need to do essentially a conditional update based on looking at the current presence (or not) of a property.
Without more information about your data model and hardware etc. the best answer is probably to do two things:
Use smaller limits. Maybe try 5K or even just 1K at first and work up from there until you find a reliable sweet spot.
Increase the query timeout settings.
You may need to experiment to find the sweet spot for your environment as the capacity of the hardware will definitely play a role in situations like this as well as how you write your query.
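A rough sketch of the smaller-batches idea in gremlin-python (the labels and property names mirror the question; the endpoint and BATCH_SIZE are arbitrary starting points to tune):
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.traversal import Cardinality
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

BATCH_SIZE = 1000  # start small and work up to a reliable sweet spot

conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = traversal().withRemote(conn)

while True:
    # update at most BATCH_SIZE matching vertices and count how many were touched
    updated = (g.V().hasLabel('user').has('status', 'valid').hasNot('type')
                .limit(BATCH_SIZE)
                .property(Cardinality.single, 'type', 'valid')
                .count().next())
    if updated == 0:
        break

conn.close()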

gremlin.net with Cosmos DB: RequestRateTooLarge

I get RequestRateTooLarge exception when doing queries that involve around 60 vertices. The problem seems to be related to the number of vertices and edges involved in the query (it does not happen with "smaller" queries). Increasing the throughput does not solve the problem, it just happens less frequently.
Would it be useful to wait some time between retrievals of the results of the query? I.e. doing a Thread.Sleep() between calls to something like query.ExecuteNextAsync() of Graph API. I could not find an equivalent in gremlin.net so I haven't tried yet.
If this is not a solution, what can I do?
Introducing a wait period will address the RequestRateTooLarge exception you are experiencing.
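A sketch of that wait-and-retry idea, shown in Python for brevity (the same pattern can wrap Gremlin.Net's SubmitAsync). submit_query and the backoff numbers are illustrative, not part of any SDK; Cosmos DB also returns a suggested retry interval (x-ms-retry-after-ms) with throttled responses, which you can honour instead of a fixed backoff if it is available to you.
import time

def submit_with_backoff(submit_query, max_attempts=5, base_delay=0.5):
    # retry a Gremlin query when Cosmos DB throttles it (RequestRateTooLarge / 429)
    for attempt in range(max_attempts):
        try:
            return submit_query()
        except Exception as exc:  # in real code, inspect the error for the throttling status
            if 'RequestRateTooLarge' not in str(exc) or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff before retrying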
Another option is to increase the reserved throughput for the container. You can find additional information about exceeding the reserved throughput exception here.

How are the chances of getting "read-your-writes" consistency increased in Dynamo?

In Section 5 of Dynamo paper, there is the following content:
In particular, since each write usually follows a read operation, the coordinator for a write is chosen to be the node that replied fastest to the previous read operation which is stored in the context information of the request. This optimization enables us to pick the node that has the data that was read by the preceding read operation thereby increasing the chances of getting "read-your-writes" consistency.
How are the chances of getting "read-your-writes" consistency increased?
"read-your-writes" means that a read following a write gets the value set by the
write. The read and the write are performed by two different clients for this
context. The reason is that the choice of the write coordinator does not impact
on the chances of getting "read-your-writes" by the same client.
But the above text is talking about a write following a read. Here is my guess.
The read coordinator will try to do syntactic reconciliation if it is possible.
If syntactic reconciliation is impossible because of divergent versions, the
client need to do semantic reconciliation before doing a write. Either way, the
versions on all the nodes involved in the read operation is an ancestor of the
reconciled version. So the following write can be sent to any of them to get
applied. The earliest time for a write to be seen by a read is after the
following steps are finished:
The client contacts the write coordinator.
The write coordinator generates the version clock for the new version.
The write coordinator writes the new version locally.
The shorter the time to perform the above steps, the more likely a following read sees the new version. Since it is very likely that the node which replied fastest to the previous read can also perform these steps in a shorter time, such a node is chosen as the write coordinator.
Section 2.3 talks about performing the reconciliation at read time rather than write time.
Data versioning - "One can determine whether two versions of an object are on parallel branches or have a causal ordering, by examine their vector clocks."
This paragraph is from section 4. [emphasis mine]
In Dynamo, when a client wishes to update an object, it must specify which version it is updating. This is done by passing the context it obtained from an earlier read operation, which contains the vector clock information. Upon processing a read request, if Dynamo has access to multiple branches that cannot be syntactically reconciled, it will return all the objects at the leaves, with the corresponding version information in the context. An update using this context is considered to have reconciled the divergent versions and the branches are collapsed into a single new version.
So by performing the read first, you're effectively reconciling all divergent versions prior to writing. By writing to that same node, the version you've updated is marked with the context and vector clock of the most up to date version and all divergent branches can be collapsed. This is sent to the top N nodes (as you've stated) as fast as possible. But by removing the divergent branches - you reduce the chance that multiple values could be returned. You only need one of the N nodes read in the next read to get the reconciled write. ie - the node as part of the quorum of R reads says - "I am the reconciled version, and all others must bow to me". (and if that has already been distributed to another of the "R" nodes, then there's even greater chance of getting the reconciled version in the quorum)
But, if you wrote to a different node, one that you hadn't read from - the vector clock that is being updated may not necessarily be a reconciled version of the object. Therefore, you could still have divergent branches. The following read will try to reconcile them, but it's more probable that you could have multiple divergent pieces of data and no reconciliation.
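To make the vector-clock part of this concrete, here is a small illustrative comparison (my own sketch, not code from the paper): two versions are causally ordered when one clock dominates the other, and sit on parallel branches (needing reconciliation) otherwise.
def compare_vector_clocks(a, b):
    # a and b map node id -> counter, e.g. {'Sx': 2, 'Sy': 1}
    nodes = set(a) | set(b)
    a_le_b = all(a.get(n, 0) <= b.get(n, 0) for n in nodes)
    b_le_a = all(b.get(n, 0) <= a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return 'equal'
    if a_le_b:
        return 'ancestor'     # a happened before b, so b subsumes it syntactically
    if b_le_a:
        return 'descendant'
    return 'concurrent'       # parallel branches; semantic reconciliation needed

# divergent branches that a later write (carrying the merged context) would collapse
print(compare_vector_clocks({'Sx': 2, 'Sy': 1}, {'Sx': 2, 'Sz': 1}))  # concurrent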
If you've made it this far, I think the most interesting part is that per Section 6, client applications can dictate the values of N, R and W - ie - number of nodes that constitute the pool to draw from, and the number of nodes that must agree on a read or write for it to be successful.
Geez - my head hurts now.
I re-read the Dynamo paper. I have a new understanding of "read-your-writes" consistency. "read-your-writes" involves only one client. Imagine the following requests performed by one client on the same key:
read-1
write-1
read-2
"read-your-writes" means that read-2 sees write-1. The write coordinator has the best chance to have write-1. To ensure "read-your-writes", it is desired that the write coordinator replies fastest to read-2. It is highly possible that the node replies fastest to read-1 also reply fastest to read-2. So choose the node replies fastest to read-1 as the write coordinator.
And what is "the node that replied fastest to the previous read operation"? Such a node only makes sense if client-driven coordination is used. With server-side coordination, the coordinator node replies to the client and the other involved nodes reply to the coordinator node, so "replied fastest" is meaningless in that case.

implementing a download manager that supports resuming

I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download).
From the info I gathered so far, when sending the http request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns a http response with the data between those offsets.
So roughly what I have in mind is to split the file into the number of allowed connections per file and send an http request per split part with the appropriate "Range". So if I have a 4mb file and 4 allowed connections, I'd split the file into 4 parts and have 4 http requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those.
Is this the right way to do this?
What if the web server doesn't support resuming? (my guess is it will ignore the "Range" and just send the entire file)
When sending the http requests, should I specify the entire split size in the range? Or maybe ask for smaller pieces, say 1024k per request?
When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
Should I use a memory mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory wise? What if I have several downloads simultaneously?
If I'm not using a memory mapped file, should I open the file per allowed connection? Or when needing to write to the file simply seek? (if I did use a memory mapped file this would be really easy, since I could simply have several pointers).
Note: I'll probably be using Qt, but this is a general question so I left code out of it.
Regarding the request/response:
For a request with a Range header, you could get three different responses:
206 Partial Content - resuming supported and possible; check Content-Range header for size/range of response
200 OK - byte ranges ("resuming") not supported, whole resource ("file") follows
416 Requested Range Not Satisfiable - incorrect range (past EOF etc.)
Content-Range usually looks like this: Content-Range: bytes 21010-47000/47022, that is bytes start-end/total.
Check the HTTP spec for details, especially sections 14.5, 14.16 and 14.35.
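A minimal sketch of issuing a ranged request and branching on those three status codes, using Python's requests library for brevity (the URL is a placeholder):
import requests

def fetch_range(url, start, end):
    # request bytes start..end (inclusive) and see how the server handled it
    resp = requests.get(url, headers={'Range': f'bytes={start}-{end}'})
    if resp.status_code == 206:   # Partial Content: ranges supported
        print('got range:', resp.headers.get('Content-Range'))  # e.g. bytes 21010-47000/47022
        return resp.content
    if resp.status_code == 200:   # ranges not supported: the whole resource follows
        print('server ignored Range and sent everything')
        return resp.content
    if resp.status_code == 416:   # Requested Range Not Satisfiable (past EOF etc.)
        raise ValueError('requested range is not satisfiable')
    resp.raise_for_status()

chunk = fetch_range('http://example.com/big.bin', 0, 1024 * 1024 - 1)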
I am not an expert on C++; however, I once built a .NET application which needed similar functionality (download scheduling, resume support, prioritizing downloads).
I used the Microsoft BITS (Background Intelligent Transfer Service) component, which was developed in C. Windows Update uses BITS too. I went for this solution because I don't think I am a good enough programmer to write something of this level myself ;-)
Although I am not sure if you can get the code of BITS, I do think you should have a look at its documentation, which might help you understand how they implemented it: the architecture, interfaces, etc.
Here it is - http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx
I can't answer all your questions, but here is my take on two of them.
Chunk size
There are two things you should consider about chunk size:
The smaller they are, the more overhead you get from sending the HTTP requests.
With larger chunks you run the risk of re-downloading the same data twice, if one download fails.
I'd recommend you go with smaller chunks of data. You'll have to do some tests to see what size is best for your purpose though.
In memory vs. files
You should write the data chunks to an in-memory buffer, and then, when it is full, write it to disk. If you are going to download large files, keeping them entirely in memory can be troublesome for your users if they run out of RAM. If I remember correctly, IIS stores requests smaller than 256kb in memory and writes anything larger to disk; you may want to consider a similar approach.
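A tiny sketch of that buffer-then-flush idea in Python (the 256 KB threshold is just the figure mentioned above; tune it for your case):
class BufferedWriter:
    # accumulate downloaded chunks in memory, flush to disk once past a threshold
    def __init__(self, path, threshold=256 * 1024):
        self.file = open(path, 'ab')
        self.threshold = threshold
        self.buffer = bytearray()

    def write(self, chunk):
        self.buffer += chunk
        if len(self.buffer) >= self.threshold:
            self.flush()

    def flush(self):
        self.file.write(self.buffer)
        self.buffer.clear()

    def close(self):
        self.flush()
        self.file.close()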
Besides keeping track of the offsets marking the beginning of your segments and each segment's length (unless you want to compute that upon resume, which would involve sorting the offset list and calculating the distance between two of them), you will want to check the Accept-Ranges header of the HTTP response sent by the server to make sure it supports the usage of the Range header. The best way to specify the range is "Range: bytes=START_BYTE-END_BYTE", and the range you request includes both START_BYTE and END_BYTE, thus consisting of (END_BYTE-START_BYTE)+1 bytes.
Requesting micro chunks is something I'd advise against, as you might get blacklisted by a firewall rule meant to block HTTP floods. In general, I'd suggest you don't make chunks smaller than 1MB and don't create more than 10 chunks.
Depending on what control you plan to have on your download, if you've got socket-level control you can consider writing only once every 32K at least, or writing data asynchronously.
I couldn't comment on the MMF idea, but if the downloaded file is large that's not going to be a good idea as you'll eat up a lot of RAM and eventually even cause the system to swap, which is not efficient.
About handling the chunks, you could just create several files - one per segment - optionally preallocating the disk space by filling the file with as many \x00 as the size of the chunk (preallocating might save you some time while you write during the download, but will make starting the download slower), and then finally just write all of the chunks sequentially into the final file.
One thing you should beware of is that several servers have a limit on concurrent connections, and you don't get to know it in advance, so you should be prepared to handle HTTP errors/timeouts and to change the size of the chunks or to create a queue of chunks in case you created more chunks than the maximum number of connections.
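A rough sketch of the split-into-segments approach discussed in this thread, in Python for brevity: it checks Accept-Ranges, preallocates the output file, and lets each worker seek to its own offset. It assumes the server reports Content-Length; function and parameter names are illustrative.
import requests
from concurrent.futures import ThreadPoolExecutor

def download_segmented(url, dest, segments=4):
    head = requests.head(url)
    total = int(head.headers['Content-Length'])
    if head.headers.get('Accept-Ranges') != 'bytes':
        # server doesn't advertise range support: fall back to a single request
        with open(dest, 'wb') as f:
            f.write(requests.get(url).content)
        return

    # preallocate the file so each worker can seek to its own offset
    with open(dest, 'wb') as f:
        f.truncate(total)

    part_size = total // segments

    def fetch(part):
        start = part * part_size
        end = total - 1 if part == segments - 1 else start + part_size - 1
        data = requests.get(url, headers={'Range': f'bytes={start}-{end}'}).content
        with open(dest, 'r+b') as f:  # one handle per segment: seek, then write
            f.seek(start)
            f.write(data)

    with ThreadPoolExecutor(max_workers=segments) as pool:
        list(pool.map(fetch, range(segments)))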
Not really an answer to the original questions, but another thing worth mentioning is that a resumable downloader should also check the last modified date on a resource before trying to grab the next chunk of something that may have changed.
It seems to me you would want to limit the size per download chunk. Large chunks could force you to repeat a download of data if the connection aborts close to the end of the data part. This is especially an issue with slower connections.
For pause/resume support, look at this simple example:
Simple download manager in Qt with pause/resume support
