Titan+dynamodb traversal backend performance - amazon-dynamodb

We are trying to use Titan (version 1.0.0) with the DynamoDB backend as our recommendation engine. We have a huge database of users and their relationships: about 3.5 million users and about 2 billion relationships between them.
Here is the code we used to create the schema:
https://gist.github.com/angryTit/3b1a4125fc72bc8b9e9bb395892caf92
As you can see, we use one composite index to quickly find the starting point of a traversal, five edge labels, and some properties.
In our case users can have a really large number of edges; each could have tens of thousands.
Here is the code we use to serve recommendations online:
https://gist.github.com/angryTit/e0d1e18c0074cc8549b053709f63efdf
The problem is that the traversal is very slow.
This line
https://gist.github.com/angryTit/e0d1e18c0074cc8549b053709f63efdf#file-reco-L28
takes 20 to 30 seconds when a user has about 5,000 to 6,000 edges.
Our DynamoDB tables have enough read/write capacity (CloudWatch shows that consumed capacity stays more than 1,000 units below what is provisioned).
Here is our Titan configuration:
https://gist.github.com/angryTit/904609f0c90beca5f90e94accc7199e5
We tried running it inside Lambda functions with maximum memory and on a big instance (r3.8xlarge), but with the same results...
Are we doing something wrong, or is this normal for our case?
Thank you.

The general recommendation here would be to use vertex-centric indexes to speed up your traversals in Titan. Also note that Titan is a dead project; if you're looking for updates to the code, JanusGraph has forked Titan and continues to maintain it.
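For illustration, a vertex-centric index is built with the Titan management API; a sketch (the 'follows' edge label and 'since' property here are hypothetical names, not taken from the linked gist):

```groovy
// Sketch: a vertex-centric index turns per-vertex edge scans into index
// lookups, which matters when single vertices have tens of thousands of edges.
mgmt = graph.openManagement()
follows = mgmt.getEdgeLabel('follows')
since = mgmt.getPropertyKey('since')
mgmt.buildEdgeIndex(follows, 'followsBySince', Direction.BOTH, Order.decr, since)
mgmt.commit()
```

A traversal that then filters or orders incident edges by that property (e.g. `outE('follows').has('since', gt(t))`) can use the index instead of iterating all edges of the vertex.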

Related

Gremlin: OLAP vs dividing query

I have a query (link below) that I must execute once per day or once per week in my application to find groups of connected users. In the query I check all possible groups for each user of the application (not all users are evaluated, but it could be a lot). For the moment I'm only running performance tests locally against Gremlin Server, since my application is not live yet.
The problem is that when I test this query while simulating many users, it hits the request time limit configured in Gremlin Server by default. Another problem is that the query does not make full use of the CPU, since a single query seems designed to run on a single thread, or to use only a limited amount of CPU, in some way.
So I have 2 solutions in mind: divide the query into one chunk per user, or use OLAP.
Solution 1:
Send a query to get the users first, then send one query per user, and remove duplicates in the server code. This should work in my case, and since I can send all the queries at the same time I can use all available resources and bypass the time limit.
Solution 2:
Use OLAP. I assume OLAP does not have a time limit. The problem: my idea is to use Amazon Neptune, and as far as I know OLAP is not supported there.
In this question about it:
Gremlin OLAP queries on AWS Neptune
David says:
Update: Since GA (June 2018), Neptune supports multiple queries in a single request/transaction
What does "multiple queries in a single request" mean?
How does my solution 1 compare with OLAP?
Should I look for another database service that supports OLAP instead of Neptune? Which one could that be? I don't want an option that means learning to set up my own "Neptune-like" server; I have limited time.
My query in case you want to take a look:
https://gremlify.com/69cb606uzaj
This is a bit of a complicated question.
The problem is that when I test this query while simulating many users, it hits the request time limit configured in Gremlin Server by default.
I'll assume there is a reason you can't change the default value, but for those who might be reading this answer: the timeout is configurable both at the server (with evaluationTimeout in the server YAML) and per request, for both script and bytecode-based requests.
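As a sketch of the server-side setting (the value here is an arbitrary example; in older TinkerPop versions the key was named scriptEvaluationTimeout):

```yaml
# gremlin-server.yaml (fragment): allow requests to run up to 5 minutes
evaluationTimeout: 300000   # milliseconds
```

Per request, the same option can be supplied on a bytecode traversal, e.g. `g.with('evaluationTimeout', 300000L).V()...`.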
Another problem is that the query does not make full use of the CPU, since a single query seems designed to run on a single thread, or to use only a limited amount of CPU, in some way.
If you're testing with TinkerGraph in Gremlin Server, then know that TinkerGraph is really simple. It doesn't do anything internally to run any aspect of a traversal in parallel (aside from TinkerGraphComputer, which is OLAP-related).
So I have 2 solutions in mind: divide the query into one chunk per user, or use OLAP.
Either approach has the potential to work. The first solution you suggest is a form of poor man's OLAP, where you must devise your own methods for parallel processing (i.e. manage thread pools, synchronize state, etc.). I think this approach is a common first step folks take to deal with this sort of problem. I'd wonder whether you need to be as fine-grained as one user per request; I would think that sending several at a time would be acceptable, but only testing in your actual environment will answer that. The nice thing about this solution is that it will typically work on any graph system, including Neptune.
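The batching idea can be sketched with a thread pool that submits one request per batch of users and de-duplicates the merged results; run_query here is a stand-in for whatever client call you use to submit the per-batch traversal (it is not a real Gremlin driver call):

```python
from concurrent.futures import ThreadPoolExecutor

def run_query(user_batch):
    # Stand-in for a Gremlin client call that computes groups for a batch
    # of users; it just returns tagged results so the sketch is runnable.
    return [("group-of", u) for u in user_batch]

def chunks(items, size):
    """Split items into batches of at most `size` users."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def groups_for(users, batch_size=10, workers=8):
    # One request per batch, several batches in flight at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(run_query, chunks(users, batch_size))
    # Merge and de-duplicate on the application side.
    seen, merged = set(), []
    for batch_result in results:
        for group in batch_result:
            if group not in seen:
                seen.add(group)
                merged.append(group)
    return merged
```

Tuning batch_size trades request count against per-request runtime, which is exactly the knob the timeout discussion above is about.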
Using your second solution with OLAP is trickier. You have the obvious problem that Neptune does not directly support it, but going to a different provider that does will not instantly solve your problem. While OLAP rids you of having to worry about how to optimally parallelize your workload, it doesn't mean that you can instantly take that Gremlin query you want to run, throw it into Spark and get an instant win. For example, and I take this from the TinkerPop Reference Documentation:
In OLAP, where the atomic unit of computing is the vertex and its local
"star graph," it is important that the anonymous traversal does not leave the
confines of the vertex’s star graph. In other words, it can not traverse to an
adjacent vertex’s properties or edges.
In your query there are already a few places where you "leave the star graph", so you would immediately find problems to solve there. Usually that limitation can be worked around for OLAP purposes, but it's not as simple as adding withComputer() to your traversal and getting a win in this case.
Going further down this path of using OLAP with a graph other than Neptune, you would probably want to at least consider whether this complex traversal could be better written as a custom VertexProgram, which might bind your use case to the capabilities of BSP better than what the more generic TraversalVertexProgram does when processing arbitrary Gremlin. For that matter, a mix of Gremlin OLAP, a custom VertexProgram, and some standard map/reduce style processing might ultimately lead to the most elegant and efficient answer.
An idea I've been considering for graphs that don't support OLAP has been to subgraph() (with Java) the portion of the graph that is relevant to your algorithm and then execute it locally in TinkerGraph! I think that might make sense in some use cases where the algorithm has some limits that can be defined ahead of time to form the subgraph, where those limits can be easily filtered and where the resulting subgraph is not so large that it takes an obscene amount of time to construct. It would be even better if the subgraph had some use beyond a single algorithm - almost behaving like a cache graph. I have no idea if that is useful to you but it's a thought. Here's a recent blog post I wrote that talks about writing VertexPrograms. Perhaps you will find it interesting.
All that said about OLAP, I think that your first solution seems fine to start with. You don't have a multi-billion edge graph yet and can probably afford to take this approach for now.
What does "multiple queries in a single request" mean?
I believe that this just means that you can send a script like:
g.addV().iterate()
g.addV().iterate()
g.V()
where multiple Gremlin commands can be executed within the scope of a single transaction, with each command "separated by newline ('\n'), spaces (' '), semicolon ('; '), or nothing (for example: g.addV('person').next()g.V() is valid)". I believe only the last command returns a value. It doesn't seem like that particular feature would be helpful in your case; I would look more at batching users within a particular request where possible.
If you are looking for a native OLAP graph engine, perhaps take a look at AnzoGraphDB, which scales and performs much better for that style of more complex querying than anything else we know of. It's an MPP engine, so every core works on the query in parallel. Depending on how much data you need it to act on, the free version (single node only, RAM limited) may well be all you need, and it can be used commercially. You can find it in the AWS Marketplace or on Docker Hub.
Disclaimer: I work for Cambridge Semantics Inc.

Gremlin: ConcurrentModificationException and multithreading

My application is not live yet, so I'm testing the performance of my Gremlin queries before it goes into production.
To test, I'm using a query that adds edges from one vertex to 300 other vertices. It does more things, but that's the short description. I added this workload of 300 just for testing.
If I run the query 300 times, one after the other, it takes almost 3 minutes to finish and 90,000 edges are created (300 x 300).
I'm worried because if I have, say, 60,000 users using my application at the same time, they are probably going to be creating 90,000 edges in 2 minutes with this query, and 60,000 simultaneous users is not much in my case.
If I have 1 million users at the same time, I'm going to need many servers at full capacity, which is out of my budget.
Then I noticed that while my test is executing the CPU doesn't show much activity. I don't know why; I don't know how the DB works internally.
So I thought a more realistic scenario might be to run my queries all at the same time, because that is what's going to happen with real users. When I tried to test that, I got a ConcurrentModificationException.
As far as I understand, this error happens because an edge or vertex is being read or written by two queries at the same time. This could happen a lot in my application, because all the user vertices are changing connections to the same 4 vertices all the time, so these "collisions" are going to happen constantly.
I'm testing locally using Gremlin Server 3.4.8, connecting over sockets from Node.js. My plan is to use AWS Neptune as my database when it goes to production.
What can I do to recover hope? There must be very important things about this subject that I don't know because I don't know how graph databases work internally.
Edit
I implemented logic to retry query requests on error using the "exponential backoff" approach. It fixed the ConcurrentModificationException, but there are a lot of problems in Gremlin Server when sending multiple queries at the same time, which shows how multi-threading is unsupported and unstable in Gremlin Server; we should try multi-threading in other Gremlin-compatible databases, as the answer says. I experienced random inconsistencies in the data returned by the queries, and errors like NegativeArraySize and other random things coming from the database. Be warned of this, so you don't waste time thinking your code is bugged, as happened to me.
While TinkerPop and Gremlin try to provide a vendor agnostic experience, they really only do so at their interface level. So while you might be able to run the same query in JanusGraph, Neptune, CosmosDB, etc, you will likely find that there are differences in performance depending on the nature of the query and the degree to which the graph in question is capable of optimizing that query.
For your case, consider TinkerGraph, as that is where you are running your tests locally. TinkerGraph is an in-memory graph without transaction capability and is not proven thread-safe for writes. If you apply a heavy write workload to it, I could envision ConcurrentModificationException being easy to generate. Now consider JanusGraph: if you had tested your heavy write workload there, you might have found yourself hit with tons of TemporaryLockingException errors if your schema required a unique property key, and you would have had to modify your code to do transactional retries with exponential backoff.
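The transactional-retry pattern mentioned here (and implemented by the asker in the edit above) can be sketched as exponential backoff with full jitter around any query call; the exception types to catch would be whatever your driver actually raises:

```python
import random
import time

def with_backoff(op, retries=5, base=0.05, cap=2.0,
                 retriable=(RuntimeError,), sleep=time.sleep):
    """Run op(), retrying on retriable errors with exponential backoff.

    `sleep` is injectable so tests don't actually wait.
    """
    for attempt in range(retries):
        try:
            return op()
        except retriable:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            # Full jitter: sleep a random amount up to the current ceiling,
            # which doubles with every failed attempt (capped at `cap`).
            sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

Wrapping the whole transaction (not a single mutation) in the retry is the important part, so a failed attempt is re-run from a clean state.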
The point here is that if your target graph is Neptune and you have a traversal you've tested for correctness and now are concerned about performance, it's probably time to load test on Neptune to see if any problems arise there.
I'm worried because if I have like 60,000 users using my application at the same time they probably are going to be creating 90,000 edges in 2 minutes using this query, and 60,000 users at the same time is not much in my case. If I have 1 million users at the same time I'm going to need many servers at full capacity, which is out of my budget.
You will want to develop a realistic testing plan. Is 60,000 users all pressing "submit" to trigger this query at the exact same time really what's going to happen? Or is it more likely that you have 100,000 users, some doing reads, and perhaps every half second three of them happen to click "submit"?
Your graph growth rate seems fairly high and the expected usage you've described here will put your graph in the category of billions of edges quite quickly (not to mention whatever other writes you might have). Have you tested your read workloads on a graph with billions of edges? Have you tested that explicitly on Neptune? Have you thought about how you will maintain a multi-billion edge graph (e.g. changing the schema when a new feature is needed, ensuring that it is growing correctly, etc.)?
All of these questions are rhetorical and just designed to make you think about your direction. Good luck!
Although there is already an accepted answer I want to suggest another approach to deal with this problem.
The idea is to figure out whether you really need to do synchronous writes to the graph. I'm suggesting that when you receive a request, you just use the attributes in the request to fetch the sub-graph / neighbors, and continue with the business logic.
Simultaneously, put the event in SQS or something similar to have the writes handled by an asynchronous system, say AWS Lambda. With SQS + Lambda you can set the write concurrency to whatever your system is comfortable with (low enough that your queries do NOT cause the above exception).
A further suggestion: you have an abnormally high write blast radius; your query should NOT touch that many nodes while writing. You could try converting some of the edges into vertices to reduce the radius. Then, when inserting a node, you only have to create one edge to the vertex that was previously an edge, instead of creating hundreds of edges to all the vertices this node relates to. I hope this makes sense.
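The read-now, write-later split can be sketched locally with a queue drained by a single writer at a controlled rate; in production SQS and Lambda would play the roles of the queue and the worker, and write_batch is a stand-in for the actual graph mutation:

```python
import queue
import threading

def start_writer(events, write_batch, batch_size=100):
    """Drain `events` and apply writes in batches from one thread,
    so concurrent graph mutations never collide with each other."""
    done = []  # return values of write_batch, for observability

    def worker():
        batch = []
        while True:
            item = events.get()
            if item is None:            # sentinel: flush remainder and stop
                if batch:
                    done.append(write_batch(batch))
                return
            batch.append(item)
            if len(batch) >= batch_size:
                done.append(write_batch(batch))
                batch = []

    t = threading.Thread(target=worker)
    t.start()
    return t, done
```

Request handlers only enqueue events and return immediately; the write concurrency is whatever the worker side allows, which is the knob the answer recommends turning down.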

Elastic Cache vs DynamoDb DAX

I have a use case where I write data to DynamoDB in two tables, say t1 and t2, in a transaction. My app needs to read data from these tables many times (1 write to at least 4 reads). I am considering DAX vs ElastiCache. Does anyone have any suggestions?
Thanks in advance,
K
ElastiCache is not intended for use with DynamoDB.
DAX is good for read-heavy apps like yours. But be aware that DAX only serves eventually consistent reads, so don't use it for banking apps, etc., where the info always needs to be perfectly up to date. Without further info it's hard to say more; these are just two general points to consider.
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required. https://aws.amazon.com/dynamodb/dax/
AWS recommends DAX as the solution for this requirement.
ElastiCache is an older approach, and it is also used to store session state in addition to cache data.
DAX is used extensively for read-intensive, latency-sensitive applications via eventually consistent reads. DAX maintains two caches:
Item cache - populated with items based on GetItem results.
Query cache - keyed on the parameters used in Query or Scan calls.
Cheers!
I'd recommend using DAX with DynamoDB, provided you're making more read calls through item-level APIs (and NOT query-level APIs), such as the GetItem API.
Why? DAX has one odd behavior, as follows. From AWS:
"Every write to DAX alters the state of the item cache. However, writes to the item cache don't affect the query cache. (The DAX item cache and query cache serve different purposes, and operate independently from one another.)"
Hence, to elaborate: if a query operation is cached, and you then perform a write that affects the result of the previously cached query, and that cache entry has not yet expired, your query cache result will be outdated.
This out-of-sync issue is also discussed here.
I find DAX useful only for cached queries, PutItem, and GetItem. In general it is very difficult to find a use case for it.
DAX keeps queries and scans separate from CRUD on individual items. That means if you update an item and then do a query/scan, it will not reflect the change.
You can't invalidate the cache; entries are only invalidated when the TTL is reached or when a node's memory is full and old items are dropped.
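The item-cache/query-cache split described above can be illustrated with a toy model (this is a sketch of the behavior, not of DAX internals; TTL eviction is omitted):

```python
class DaxLikeCache:
    """Toy model of two independent caches: writes refresh the item
    cache but leave the query cache untouched until its TTL expires."""

    def __init__(self, table):
        self.table = table          # backing store: key -> value
        self.item_cache = {}
        self.query_cache = {}

    def get_item(self, key):
        # Item cache is write-through, so it is always fresh.
        return self.item_cache.setdefault(key, self.table[key])

    def put_item(self, key, value):
        self.table[key] = value
        self.item_cache[key] = value    # item cache updated on write...

    def query(self, name, predicate):
        # ...but the query cache is only refreshed when an entry expires.
        if name not in self.query_cache:
            self.query_cache[name] = [
                v for v in self.table.values() if predicate(v)]
        return self.query_cache[name]
```

A write that changes which items match a cached query leaves that query's cached result stale, which is exactly the out-of-sync issue discussed above.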
Takeaways:
Doing puts/updates and then queries: two separate caches, so they go out of sync.
Looking up a single item: you are left with only the primary key, the default index, and a GetItem request (no query with limit 1). You can't use any indexes for gets/updates/deletes.
Using the ConsistentRead option with a query to get the latest data works, but only on the primary index.
Writing through DAX is slower than writing directly to DynamoDB, since there is an extra hop in the middle.
X-Ray does not work with DAX.
Use cases:
You have queries where you don't really care that results may be out of date.
You do few PutItem/UpdateItem calls and a lot of GetItem calls.

Efficient way to Increment Vertex counter property in janusgraph

I am using JanusGraph 0.2.0 with the Cassandra backend and Elasticsearch.
I want to store a view count in a vertex property, and I need an efficient, scalable way to increment/store the count without impacting read performance.
Option 1: Read the views property while fetching the vertex, and update the new count in a separate query (doesn't impact read performance, but the counter is not synchronized):
g.V().has("key","keyId").valueMap(true);
g.V(id).property('views', 21);
Option 2: Use sack to hold the value 1 and add it to the views property:
g.withSack(0).V().has("key","keyId").
sack(assign).by("views").sack(sum).by(constant(1)).
property("views", sack())
Option 3: Use in-memory storage (e.g. Redis) to increment counters, and persist the updates to the graph periodically.
Is there any better approach?
Is there any way to use Cassandra's counter functionality in JanusGraph?
There is no way to use Cassandra counters with JanusGraph. Moreover, you can't even mix Cassandra counters into a regular Cassandra table: Cassandra counters are designed so that updating a counter doesn't require a lock, which is why you get a lot of limitations in exchange for great performance.
Counting views isn't an easy task. In short, my suggestion would be to go with option 3.
I would go with Redis and periodic updates to JanusGraph when you are in a single data center and a single master server can handle all requests (you can of course use a hash ring to split your counters among different Redis servers, but it will increase complexity and maintenance cost).
If you have multiple data centers, or a single master Redis server cannot handle all requests, I would go with Cassandra counters.
If you have such a huge volume of view events that even Cassandra counters (with their cache) cannot handle all requests, because the disk is accessed too many times and you cannot scale further because of cost, then the logic gets harder. I have never been in that situation, so this is only theoretical. In that case I would have the application servers cache and group views, and periodically send the cached data to RabbitMQ workers, which would update the Cassandra counters and then update the relevant vertex in JanusGraph with the total view count. That way views would very often be grouped, so you don't need to update the counter with +1 each time, but with +100 or +1000 views in a single update. It would greatly decrease disk usage, and you would have eventually consistent, fast counters. Again, this solution is only theoretical and should be tested; I believe other solutions also exist.
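The grouping described above (turning many +1 events into a single +N update) can be sketched as a small aggregator; flush here is a stand-in callback for the JanusGraph property update or Cassandra counter increment:

```python
from collections import Counter

class ViewAggregator:
    """Buffer +1 view events per vertex and flush them as one +N update."""

    def __init__(self, flush, threshold=100):
        self.pending = Counter()    # vertex id -> buffered view count
        self.flush = flush          # callback: flush(vertex_id, delta)
        self.threshold = threshold  # flush hot vertices early

    def record_view(self, vertex_id):
        self.pending[vertex_id] += 1
        if self.pending[vertex_id] >= self.threshold:
            self.flush_vertex(vertex_id)

    def flush_vertex(self, vertex_id):
        delta = self.pending.pop(vertex_id, 0)
        if delta:
            self.flush(vertex_id, delta)

    def flush_all(self):
        # Call periodically (e.g. on a timer) to persist cold vertices too.
        for vid in list(self.pending):
            self.flush_vertex(vid)
```

Frequently viewed vertices hit the threshold and flush as one big delta, so the backing store sees one write per N views instead of N writes.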

Which technology is best suited to store and query a huge readonly graph?

I have a huge directed graph: it consists of 1.6 million nodes and 30 million edges. I want users to be able to find all the shortest connections (including incoming and outgoing edges) between two nodes of the graph (via a web interface). At the moment I have the graph stored in a PostgreSQL database, but that solution is not very efficient or elegant; I basically need to store every edge of the graph twice (see my question PostgreSQL: How to optimize my database for storing and querying a huge graph).
It was suggested that I use a graph DB like Neo4j or AllegroGraph. However, the free version of AllegroGraph is limited to 50 million nodes, and it also has a very high-level API (RDF) that seems too powerful and complex for my problem. Neo4j, on the other hand, has only a very low-level API (and the Python interface is not mature yet). Both seem better suited to problems where nodes and edges are frequently added to or removed from a graph; for a simple search on a graph, these graph DBs seem too complex.
One idea I had would be to "misuse" a search engine like Lucene for the job, since I'm basically only searching for connections in a graph.
Another idea would be to have a server process store the whole graph (500 MB to 1 GB) in memory. Clients could then query the server process and traverse the graph very quickly, since it is held in memory. Is there an easy way to write such a server (preferably in Python) using some existing framework?
Which technology would you use to store and query such a huge readonly graph?
LinkedIn has to manage a sizeable graph. It may be instructive to check out this info on their architecture. Note particularly how they cache their entire graph in memory.
There is also OrientDB, an open-source document-graph DBMS with a commercially friendly license (Apache 2): simple API, SQL-like language, ACID transactions, and support for the Gremlin graph language.
Its SQL has extensions for trees and graphs. Example:
select from Account where friends traverse (1,7) (address.city.country.name = 'New Zealand')
This returns all the Accounts with at least one friend who lives in New Zealand, where "friend" is followed recursively up to 7 levels deep.
I have a directed graph for which I (mis)used Lucene.
Each edge was stored as a Document, with the nodes as Fields of the document that I could then search for.
It performs well enough, and query times for fetching inbound and outbound links from a node would be acceptable to a user of a web-based tool. But for computationally intensive batch calculations, where I am running many hundreds of thousands of queries, I am not satisfied with the query times I'm getting. I get the sense that I am definitely misusing Lucene, so I'm working on a second, Berkeley DB-based implementation so that I can do a side-by-side comparison of the two. If I get a chance to post the results here, I will.
However, my data requirements are much larger than yours, at more than 3 GB (more than could fit in my available memory). As a result, the Lucene index I used was on disk; but with Lucene you can use a "RAMDirectory" index, in which case the whole thing is stored in memory, and that may well suit your needs.
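The edge-as-document layout described above amounts to indexing each edge under both endpoints; a minimal in-memory sketch (plain dicts standing in for the Lucene index, so this is the data-structure idea, not Lucene itself):

```python
from collections import defaultdict

class EdgeIndex:
    """Index each directed edge under both endpoints, like storing one
    Lucene Document per edge with searchable 'src' and 'dst' fields."""

    def __init__(self):
        self.out_edges = defaultdict(set)   # src -> {dst, ...}
        self.in_edges = defaultdict(set)    # dst -> {src, ...}

    def add_edge(self, src, dst):
        self.out_edges[src].add(dst)
        self.in_edges[dst].add(src)

    def outbound(self, node):
        """All nodes reachable by one outgoing edge from `node`."""
        return sorted(self.out_edges[node])

    def inbound(self, node):
        """All nodes with an edge pointing at `node`."""
        return sorted(self.in_edges[node])
```

Both directions are answered in one lookup each, which is what the PostgreSQL solution in the question had to store edges twice to achieve.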
Correct me if I'm wrong, but since each node is a list of the linked nodes, it seems to me a DB with a schema is more of a burden than an advantage.
It also sounds like Google App Engine would be right up your alley:
It's optimized for reading, and there's memcached if you want it even faster.
It's distributed, so the size doesn't affect efficiency.
Of course, if you somehow rely on a relational DB to find the path, it won't work for you...
And I just noticed that the question is 4 months old.
So you have a graph as your data and want to perform a classic graph operation. I can't see what other technology could fit better than a graph database.
