I am impressed by RocksDB: the merge operator and column families. But I found that RocksDB runs on a single node; it is more like a library. In my use case, the KV data doesn't fit on one node, so I need a "management system", which I call the "control plane". The control plane manages a cluster of RocksDB nodes and presents the cluster as one big KV storage layer.
The control plane:
manages node membership and which partitions each node holds;
detects node failures and recovers from them;
detects hot partitions and splits them.
I could not find resources on building on top of RocksDB. It would be wonderful if there were open-source projects out there; my team is not big enough to build this on top of RocksDB on our own.
This is a poor quality question and will probably get closed.
But just to give you some direction - what you are asking for is the architecture of a distributed database.
You can start by combining Raft with RocksDB and incrementally add the features you require.
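To make that concrete, here is a minimal, hedged sketch of the replicated-state-machine shape, assuming the RocksJava bindings; the Raft side (the PutCommand, and the rule that apply() only runs after quorum commit) is a placeholder for whatever consensus library you put in front of it, e.g. Apache Ratis:

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

import java.nio.charset.StandardCharsets;

// Sketch: RocksDB as the per-node state machine behind a consensus log.
public class KvStateMachine {

    private final RocksDB db;

    public KvStateMachine(String path) throws RocksDBException {
        RocksDB.loadLibrary();
        Options options = new Options().setCreateIfMissing(true);
        this.db = RocksDB.open(options, path);
    }

    // Hypothetical replicated command, appended to the Raft log before apply.
    public static final class PutCommand {
        final byte[] key;
        final byte[] value;
        public PutCommand(String key, String value) {
            this.key = key.getBytes(StandardCharsets.UTF_8);
            this.value = value.getBytes(StandardCharsets.UTF_8);
        }
    }

    // The Raft layer calls this only after a quorum of nodes has committed the entry.
    public void apply(PutCommand cmd) throws RocksDBException {
        db.put(cmd.key, cmd.value);
    }

    public byte[] read(String key) throws RocksDBException {
        return db.get(key.getBytes(StandardCharsets.UTF_8));
    }
}

The control-plane features you list (membership, partition assignment, failure detection, splitting hot partitions) then live above this layer, typically with one Raft group per partition.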
Related
We have a process that involves loading a large block of data, applying some transformations to it, and then outputting what has changed. We currently run a web app where multiple instances of these large blocks of data are processed in the same CLR instance, and this leads to garbage collection thrashing and OOM errors.
We have proven that hosting some tracked state in a longer-running process works perfectly to solve our main problem. The issue we now face is that, as a stateful system, we need to host it and manage coordination with other parts of the system (other change-tracking instances).
I'm evaluating Actors in Service Fabric and Akka at the moment. There are a number of other options, but before I proceed, I would like people's thoughts on this approach, with the following considerations:
We have a natural partition point in our system (Authority) which means we can divide our top level data set easily. Each partition will be represented by a top level instance that needs to organise a few sub-actors in its own local cluster, but we would expect a single host machine to be able to run multiple clusters.
Each Authority Cluster of actors would ideally be hosted together on a single machine to benefit from local communication and some use of shared local resources to get around limits on message size.
The actors themselves should be separate processes on the same box (Akka seems to run local Actors in the same CLR instance, which would crash everything on OOM - is this true?). This will enable me to spin up a process, run the transformation through it, emit the results and tear it down without impacting the other instances' memory / GC. I appreciate hardware resource contention would still be a problem, but I expect this to be more memory than CPU intensive, so expect a RAM-heavy box.
Because the data model is quite large, and the messages can contain either model fragments or changes to model fragments, it's difficult to work with immutability. We do not want to clone every message payload into internal state and apply it to the model, so ideally any actor solution used would enable us to work with the original message payload. This may cause problems with restoring an actor state as it wants to save and replay these on wakeup, but as we have state tracking internally, we can just store the resulting output of this on sleep.
We need a coordinator that can spin up instances of an Authority Cluster. There needs to be some elasticity in the number of VMs/machines and the number of Authority Clusters hosted on them, and something needs to handle the creation and destruction of these.
We have a lot of .NET code; all our models, transformations and validation are defined in it and will need to be heavily reused. Whatever solution we choose will need to support .NET.
My questions then are:
While this feels like a good fit for Actors, I have reservations and wonder if there is something more appropriate? Everything I have tried has come back to hosted processes of some kind.
If actors are the right way to go, which tech stack would put me closest to what I am trying to achieve with the above concerns taken into account?
IMO (coming at this from a JVM Akka perspective, which is why I changed the akka tag to akka.net; I don't have great knowledge of the CLR side of things), there seems to be a mismatch between
We do not want to clone every message payload into internal state and apply it to the model, so ideally any actor solution used would enable us to work with the original message payload.
and
The actors themselves should be separate processes on the same box (Akka seems to run local Actors in the same CLR instance, which would crash everything on OOM - is this true?)
Assuming that you're talking about the same OS process, those are almost certainly mutually incompatible: exchanging messages strongly suggests serialization and is thus isomorphic to a copy operation. It's possible that something using shared memory between OS processes could work, but you may well have to make a choice about which is more important.
Likewise, the parent/child relationship in the "traditional" (Erlang/Akka) style actor model trivially gives you the local cluster of actors (which, since they're running in the same OS process allows the Akka optimization of not copying messages until you cross an OS process boundary), while "virtual actor" implementations as found in Service Fabric or Orleans (or, I'd argue Cloudstate or Lagom) basically assume distribution.
Semantically, the virtual actor models implicitly assume that actors are eternal (though their eternal essence may not always be incarnate). For your use-case, this doesn't necessarily seem to be the case.
I think a cluster of Akka.Net instances with sharded Authority actors spawning shorter-lived child actors best fits, assuming that you're getting OOM issues from trying to process multiple large blocks of data simultaneously. You would have to implement the instance scale-up/down logic yourself.
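To illustrate the parent/child shape being suggested, here is a minimal JVM Akka (classic) sketch, since that is the perspective of this answer; Akka.NET mirrors the same API closely. AuthorityActor, TransformWorker and ProcessBlock are hypothetical names, and wiring the AuthorityActor into Cluster Sharding is omitted:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;

// Hypothetical parent actor: one per Authority partition.
public class AuthorityActor extends AbstractActor {

    // Hypothetical message carrying a reference to a large data block.
    public static final class ProcessBlock {
        public final String blockId;
        public ProcessBlock(String blockId) { this.blockId = blockId; }
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(ProcessBlock.class, msg -> {
                // Spawn a short-lived child per transformation so its memory
                // can be reclaimed once it stops.
                ActorRef worker = getContext().actorOf(
                        Props.create(TransformWorker.class), "worker-" + msg.blockId);
                worker.forward(msg, getContext());
            })
            .build();
    }
}

class TransformWorker extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(AuthorityActor.ProcessBlock.class, msg -> {
                // ... load the block, apply the transformations, emit the diff ...
                getSender().tell("done:" + msg.blockId, getSelf());
                // Stop ourselves so the block becomes collectable.
                getContext().stop(getSelf());
            })
            .build();
    }
}

Note this keeps everything in one process (messages are passed by reference, no copying), which is exactly the trade-off discussed above; process-per-block isolation would mean crossing a process boundary and therefore serialization.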
I have not worked with Akka.net so I can't speak to that at all, but I'd be happy to speak to what you're talking about in a Service Fabric context.
Service Fabric has no issue with the concept of running multiple clusters. In its terminology, the whole of your system would be called an Application and would have a version when deployed to the SF cluster. If you wanted to create multiple instances of it, all you'd need to do is select what you wanted to call the deployed app instance and it'll stand up provisioning for you.
SF has a notion of placement constraints, metric balancing and custom rules that you can utilize if you think you can better balance the various resources than its automatic balancing (or you need to for network DMZ purposes). While I've never personally grouped things down to a single machine, I frequently limit access of services to single VM scale sets (we host in Azure).
To the last point though, you'll still have message size limits, but you can also override them to some degree. In your project containing service interfaces, just set the following attribute above your namespace:
[assembly:FabricTransportRemotingSettings(MaxMessageSize=<(long)new size in bytes>)] and you're good to go.
Services can be configured to run using a Shared or Exclusive process model.
Regarding your state requirement, it's not entirely clear to me what you're trying to do, but I think you're saying that it's not critical that your actors store any state since they can work from some centrally-provided model.
You might then look at volatile state persistence, as it means state is saved for the actors in memory; should you lose the replicas, nothing is written to disk, so it's all lost. Or, if you don't care and are OK just sending the model to the actors for any work, you can configure them to be stateless.
On the other hand, if you're still looking to retain state in the actors and are simply concerned about immutability, rest assured that actor state isn't immutable and can be updated trivially. There are simply order-of-operation concerns you need to keep in mind: e.g. if you retrieve the state, make a change and save it, 1) you must commit the transaction for it to take effect, and 2) if you modify the state but don't save it, it obviously won't persist, so pull a fresh copy in a new transaction for any modifications. There's a whole pile of guidelines here.
Assuming your coordinator is intended to save some sort of state, might I recommend a singleton stateful service. Presumably it's not receiving an inordinate amount of use so a single instance is sufficient and it can easily save state (without the annoyance of identifying which state is on which partition). As for spinning up services, I covered this in the first bullet, but use the ApplicationManager on the built-in FabricClient to set up new applications and the ServiceManager to create instances of necessary services within each.
Service Fabric supports .NET Core 3.1 through .NET 5 as of the latest 8.0 release, though note a minor serialization issue with .NET 5 that has an easy workaround.
If you have an Azure support subscription, I'd encourage you to write to the team under Development questions and share your concerns. Alternatively, on the third Thursday of each month at 10 AM PST, they also have a community call on Teams that you're welcome to join and you can find past calls here.
Again, I can't speak to whether this is a better fit than Akka.NET, but our stack is built atop Service Fabric. While it has some shortcomings (what framework doesn't?) it's an excellent platform for distributed software development.
We are experimenting with the Cosmos Gremlin API because we are building a large-scale knowledge management system, which is naturally suited to a graph DB. Knowledge items are highly interconnected, and therefore a graph is much better than a relational or a document-oriented (hierarchical) structure.
We need atomic write operations (not full transaction support, just atomic writes). E.g. we need to create several vertices and edges in one atomic write operation.
After carefully reading the documentation and extensively searching for solutions, our current state of knowledge is the following:
Cosmos Gremlin API stores vertices as documents and outgoing edges as part of the "outgoing document".
A Gremlin statement creating vertices and edges might be split up and executed in parallel.
There is no transaction support and there are no atomic write operations.
Write operations are not idempotent.
The last two facts taken together mean: if you execute a graph write operation and an error occurs somewhere along the traversal, you have no chance whatsoever to recover from it in a clean way. Let's say you add an edge, add some vertices, perform some side-effect steps and something goes wrong. Which vertices and edges are persisted and which are not? Since you cannot simply run the statement a second time (vertices with those ids already exist), you're kind of stuck. In addition, this is not something that can be solved at the end-user level in the UI.
Taking these points into account, it seems that the Cosmos Gremlin API is not ready for a production app. When you have a look at the Gremlin "data explorer" in the portal, that seems even more true: it looks like a prototype.
Since edges are stored on the "outgoing document", one should always traverse the graph using the outgoing edges, not the incoming.
This takes away a lot of the efficiency of working with a graph DB: being able to traverse in both directions efficiently.
It leads to workarounds: For each outgoing edge, create an "inverse edge" on the incoming vertex.
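To illustrate that workaround, here is a hedged Gremlin (Java) sketch; the labels, property values and the remote configuration file are assumptions:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;

// Every logical edge is written twice, so either direction can be walked via
// outE(), which Cosmos stores on the outgoing vertex's document.
public class InverseEdgeExample {
    public static void main(String[] args) throws Exception {
        // Assumed connection config; Cosmos is usually reached through the Gremlin driver.
        GraphTraversalSource g = traversal().withRemote("conf/remote-graph.properties");

        Vertex post = g.addV("post").property("id", "P1").next();
        Vertex user = g.addV("user").property("id", "U1").next();

        // The "real" edge ...
        g.addE("authoredBy").from(post).to(user).iterate();
        // ... plus its mirror, so user -> post is also an outgoing traversal.
        g.addE("authored").from(user).to(post).iterate();

        g.close();
    }
}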
So I'd like to ask the question: Should one use Cosmos Gremlin API in production? So far I haven't seen or read about anyone who does so.
Write operations are not idempotent.
It is possible to write queries in an idempotent way; however, the result isn't particularly readable or maintainable. See an idempotent Gremlin example here: https://spin.atomicobject.com/2021/08/10/idempotent-queries-in-gremlin/
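The pattern in that article boils down to the fold()/coalesce()/unfold() "get or create" idiom. A hedged Gremlin (Java) sketch, with a hypothetical "person" label and "id" property:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.addV;
import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.unfold;

public final class IdempotentUpsert {
    // Creates the vertex only if it does not already exist, so re-running the
    // statement after a partial failure is safe.
    public static void upsertPerson(GraphTraversalSource g, String id) {
        g.V().has("person", "id", id)
             .fold()
             .coalesce(unfold(),
                       addV("person").property("id", id))
             .next();
    }
}

Readable enough for a single vertex, but as the article shows, it gets unwieldy once one statement creates several vertices and edges.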
Taking these points into account, it seems that the Cosmos Gremlin API is not ready for a production app
This really depends on your application requirements; not all production applications require atomicity or transactions. Some systems can tolerate dropping data, or, if needed, you can do various things yourself to ensure data integrity, though this often puts more responsibility on the application developer.
So I'd like to ask the question: Should one use Cosmos Gremlin API in production? So far I haven't seen or read about anyone who does so.
Anecdotally, I haven't seen too many stories of it being used in production. CosmosDB looks relatively popular, but it's hard to tell what proportion of users are running which API.
I'm thinking about learning JanusGraph for my new big project, but I can't understand some things.
JanusGraph can be used like any database and supports "insert", "update" and "delete" operations, so it will write data into Cassandra or another database to store that data, right?
Where does JanusGraph store the nodes, edges, attributes, etc.? It will write these into the database, right?
Is this data loaded into memory by JanusGraph, or is it read from Cassandra all the time?
Does the data JanusGraph reads have to be loaded into JanusGraph on every query, or will it do selects in the database to retrieve only the data I need?
Is the data retrieved from the database only what I need, or will JanusGraph read all records in the database all the time?
Should I use JanusGraph in my project in production, or should I wait until it becomes production-ready?
I'm developing a kind of social network that needs to store friendships, posts, comments and user blocks, and also do some Elasticsearch queries. In this case, what database backend should I use?
JanusGraph will write data into Cassandra or another database to store that data, right?
Where does JanusGraph store the nodes, edges, attributes, etc.? It will write these into the database, right?
JanusGraph will write the data into whatever storage backend you configure it to use. This includes Cassandra. It writes this data into the underlying database using the data model roughly outlined here.
Is this data loaded into memory by JanusGraph, or is it read from Cassandra all the time?
Is the data retrieved from the database only what I need, or will JanusGraph read all records in the database all the time?
JanusGraph will only load into memory the vertices and edges you touch during a query/traversal. So if you do something like:
graph.traversal().V().hasLabel("My Amazing Label");
Janus will read and load into memory only the vertices with that label. So you don't need to worry about initializing a graph connection and then waiting for the entire graph to be serialised into memory before you can query. Janus is a lazy reader.
Should I use JanusGraph in my project in production, or should I wait until it becomes production-ready?
That is entirely up to you and your use case. Janus is already being used in production, as can be seen here at the bottom of the page. Janus was forked from, and improved on, TitanDB, which is also used in several production use cases. So if you're wondering "is it ready", then I would say yes, it's clearly ready given its existing uses.
what database backend should I use?
Again, that's entirely up to you. I use Cassandra because it can scale horizontally and I find it easier to work with. It also seems to suit all different sizes of data.
I have toyed with Google Big Table and that seems very powerful as well. However, it's only really suited for VERY big data, and it's only available in the cloud, whereas Cassandra can be hosted locally very easily.
I have not used Janus with HBase or BerkeleyDB so I can't comment there.
It's very simple to change between backends though (all you need to do is adjust some configs and check your dependencies are in place), so during development feel free to play around with the backends. You only really need to commit to a backend when you go to production or are more sure of each one.
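For example, switching backends mostly comes down to configuration keys. A hedged sketch; the hostnames and the choice of "search" as the index name are assumptions:

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class BackendConfigExample {
    // In-memory backend, handy for tests and experiments.
    static JanusGraph openInMemory() {
        return JanusGraphFactory.build()
                .set("storage.backend", "inmemory")
                .open();
    }

    // Cassandra (CQL) for storage plus Elasticsearch for mixed indexes.
    static JanusGraph openCassandraWithElasticsearch() {
        return JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.hostname", "127.0.0.1")
                .set("index.search.backend", "elasticsearch")
                .set("index.search.hostname", "127.0.0.1")
                .open();
    }
}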
When considering what storage backend to use for a new project, it's important to consider what tradeoffs you'd like to make. In my personal projects, I've enjoyed using NoSQL graph databases due to the following advantages over relational DBs:
Not needing to migrate schemas increases productivity when rapidly iterating on a new project
Traversing a heavily normalized data-model is not as expensive as with JOINs in an RDBMS
Most include in-memory configurations which are great for experimenting & testing.
Support for multi-machine clusters and Partition Tolerance.
Here are sample JanusGraph and Neo4j backends written in Kotlin:
https://github.com/pm-dev/janusgraph-exploration
https://github.com/pm-dev/neo4j-exploration
The main advantage with JanusGraph is the flexibility of plugging in whichever storage backend you'd like.
In the Riak documentation, there are often examples showing how you could model your e-commerce datastore in a certain way. But here it is written:
In a production Riak cluster being hit by lots and lots of concurrent writes, value conflicts are inevitable, and Riak Data Types are not perfect, particularly in that they do not guarantee strong consistency and in that you cannot specify the rules yourself.
From http://docs.basho.com/riak/latest/theory/concepts/crdts/#Riak-Data-Types-Under-the-Hood, last paragraph.
So, is it safe enough to use Riak as the primary datastore in an e-commerce app, or is it better to use another database with stronger consistency?
Riak out of the box
In my opinion out of the box Riak is not safe enough to use as the primary datastore in an e-commerce app. This is because of the eventual consistency nature of Riak (and a lot of the NoSQL solutions).
According to the CAP theorem, distributed datastores (Riak being one of them) can only guarantee at most two of:
Consistency (all nodes see the same data at the same time)
Availability (a guarantee that every request receives a response about whether it succeeded or failed)
Partition tolerance (the system continues to operate despite arbitrary partitioning due to network failures)
Riak specifically errs on the side of Availability and Partition tolerance by having eventual consistency of the data held in its datastore.
What Riak can do for an e-commerce app
Out of the box, Riak would be a good store for the content about the items being sold in your e-commerce app (content that is generally written once and read a lot is a great use case for Riak). However, maintaining:
the count of how many items are left
the money in a user's account
needs to be handled carefully in a distributed datastore.
Implementing consistency in an eventually consistent datastore
There are several methods you can use, they include:
Implement a serialization method when writing updates to values that need to be consistent (i.e. go through a single, controlled service that guarantees it will only update a single item sequentially); this would need to be done outside of Riak, in your API layer
Change the replication properties of your consistent buckets so that you can 'guarantee' you never retrieve out-of-date data:
At the bucket level, you can choose how many copies of data you want to store in your cluster (N, or n_val), how many copies you wish to read from at one time (R, or r), and how many copies must be written to be considered a success (W, or w).
The above method is similar to using the strong consistency model available in the latest versions of Riak.
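As a hedged illustration with the Riak Java client (2.x), here is per-request quorum tuning against a hypothetical "inventory" bucket. Conflict resolution and sibling handling are omitted; this narrows the window for stale reads but is still not a transaction:

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.cap.Quorum;
import com.basho.riak.client.api.commands.kv.FetchValue;
import com.basho.riak.client.api.commands.kv.StoreValue;
import com.basho.riak.client.core.query.Location;
import com.basho.riak.client.core.query.Namespace;
import com.basho.riak.client.core.query.RiakObject;
import com.basho.riak.client.core.util.BinaryValue;

public class QuorumExample {
    public static void main(String[] args) throws Exception {
        RiakClient client = RiakClient.newClient("127.0.0.1");
        Location itemCount = new Location(new Namespace("inventory"), "item-42");

        // Read from a quorum of replicas before answering.
        FetchValue fetch = new FetchValue.Builder(itemCount)
                .withOption(FetchValue.Option.R, Quorum.quorumQuorum())
                .build();
        RiakObject current = client.execute(fetch).getValue(RiakObject.class);
        System.out.println("current: " + (current == null ? null : current.getValue()));

        // The write must be acknowledged by a quorum of replicas.
        RiakObject updated = new RiakObject().setValue(BinaryValue.create("41"));
        StoreValue store = new StoreValue.Builder(updated)
                .withLocation(itemCount)
                .withOption(StoreValue.Option.W, Quorum.quorumQuorum())
                .build();
        client.execute(store);

        client.shutdown();
    }
}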
Important note: in all of these datastore systems (distributed or not) you will in general:
Read the current data
Make a decision based on the current value
Change the data (decrement the Item count)
If all three of the above actions cannot be done in an atomic way (either by locking, or by failing the third step if the value was changed by something else), an e-commerce app is open to abuse. This issue exists in traditional SQL storage solutions too (which is why you have SQL transactions).
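To make the read-decide-write point concrete, here is a hedged JDBC sketch against a hypothetical items(id, quantity) table; the row lock taken by SELECT ... FOR UPDATE inside a transaction (syntax varies by RDBMS) is what closes the race window:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DecrementStock {
    public static boolean reserveItem(String jdbcUrl, long itemId) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            conn.setAutoCommit(false);
            try (PreparedStatement read = conn.prepareStatement(
                         "SELECT quantity FROM items WHERE id = ? FOR UPDATE");
                 PreparedStatement write = conn.prepareStatement(
                         "UPDATE items SET quantity = quantity - 1 WHERE id = ?")) {
                // 1. Read the current data (and lock the row).
                read.setLong(1, itemId);
                try (ResultSet rs = read.executeQuery()) {
                    // 2. Make a decision based on the current value.
                    if (!rs.next() || rs.getInt("quantity") <= 0) {
                        conn.rollback();   // nothing left to sell
                        return false;
                    }
                }
                // 3. Change the data.
                write.setLong(1, itemId);
                write.executeUpdate();
                conn.commit();             // all three steps succeed or none do
                return true;
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}

An eventually consistent store without this kind of locking has to get the same guarantee some other way: serializing writers, CRDT counters, or the strong-consistency bucket option mentioned above.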
I'm new to non-PHP web applications and to NoSQL databases. I was looking for a smart solution matching my application requirements, and I was very surprised when I learned that graph-based DBs exist. I found Neo4j very nice and very suitable for my application, but as I've already written, I'm new to this and have some limitations in understanding how it works. I hope you can help me learn.
If I embed Neo4j in a servlet program, then the database access I create is shared among the different threads of that servlet, right? So I need to put database creation in the init() method and the shutdown in destroy(), right? And it will be thread-safe, right? But what if I want to create a database shared across the whole application?
I heard that graph databases in general rely on a relational low level. Is that true for Neo4j? If it is, then what I see is a high-level interface to the real persistence layer, so what is a Connection in this case? Are there techniques like connection pooling, or are these low-level things all managed by Neo4j?
In my application I need to join some objects to users and to many other classification objects. Each of these objects has a unique id (a String). So if someone asks to view something about the object with id=QW, I need to load the vertex associated with object QW. Is this an easy operation for graph databases?
If I need to manage authentication, I receive the pair (usr, pwd) and need to check whether this pair exists in my graph. Is this the same problem as before, or is there some good variation for managing authentication?
thanks
If you're coming from the PHP world, in most cases you're better off running Neo4j in server mode and accessing it either via REST directly or through a client driver like https://github.com/jadell/neo4jphp. If you still want to embed Neo4j in a servlet environment, the GraphDatabaseService is a shared component, perhaps stored in the ServletContext. On a per-request (and therefore per-thread) basis you start and commit transactions.
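A hedged sketch of that embedded setup (Neo4j 3.x API): one GraphDatabaseService per web application, stored in the ServletContext, with a transaction per request. The "Item" label, "id"/"name" properties and the database path are assumptions:

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import java.io.File;

public class Neo4jLifecycleListener implements ServletContextListener {

    public static final String DB_ATTRIBUTE = "neo4j";

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase(new File("data/graph.db"));
        sce.getServletContext().setAttribute(DB_ATTRIBUTE, db);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        GraphDatabaseService db =
                (GraphDatabaseService) sce.getServletContext().getAttribute(DB_ATTRIBUTE);
        if (db != null) {
            db.shutdown();
        }
    }

    // Inside a request handler: look up a vertex by its application-level id.
    static String findItemName(GraphDatabaseService db, String externalId) {
        try (Transaction tx = db.beginTx()) {
            Node item = db.findNode(Label.label("Item"), "id", externalId);
            String name = item == null ? null : (String) item.getProperty("name", null);
            tx.success();
            return name;
        }
    }
}

With a schema index on :Item(id), that findNode lookup answers the "load the vertex for id=QW" question above efficiently.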
Neo4j is a native graph database. The bare metal persistence layer is optimized for navigating from one node to its neighbors as fast as possible and written by the Neo4j devteam themselves. There are other graph databases out there reusing other persistence technologies for their underlying persistence.
Best thing is to run the Neo4j online course at http://www.neo4j.org/learn/online_course.
see SecurityRules
Since Neo4j is a NoSQL graph database, you have to handle generation of unique IDs yourself, e.g. using a GUID (with 3.x, an auto-incremented property is also supported for a particular label). Neo4j's default generated id is unique while the node exists, but it can be reallocated to another object once the first assigned object is deleted.
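A hedged sketch of that approach with the embedded Java API, assigning an application-level GUID property instead of exposing the internal node id; label and property names are assumptions:

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

import java.util.UUID;

public class GuidAssignment {
    static String createItem(GraphDatabaseService db, String name) {
        try (Transaction tx = db.beginTx()) {
            Node node = db.createNode(Label.label("Item"));
            String guid = UUID.randomUUID().toString();
            node.setProperty("guid", guid);    // stable, never reused
            node.setProperty("name", name);
            tx.success();
            return guid;   // hand this id to clients, never the internal node id
        }
    }
}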
I am a .NET developer, and in my project I used the Neo4j REST API; it works well and I suggest you go with it. It is implemented using the async-await programming pattern, so you can hand long-running operations to the DB and use your web server resources more effectively.