Does DynamoDB expose an API to query or detect when there is a conflict in merging item data - amazon-dynamodb

DynamoDB is an AP system based on the original Dynamo paper.
Is there any API to detect when a merge conflict has happened or been resolved?
Is there any API to provide a strategy for resolving a conflict if one happens?

Your question is based on a wrong premise. Although DynamoDB shares the name, and some goals and implementation details, with the original "Dynamo" paper, it is not very close, and the data model in particular is completely different.
Whereas in the Dynamo paper multiple clients could store multiple different values for an item concurrently - and later readers need to resolve the conflict - DynamoDB does things very differently:
If two clients replace an item, DynamoDB offers "last write wins": one of these writes will win, and you don't know or care which.
If two clients modify different attributes in the same item concurrently, both changes will be merged. I never found this explicitly promised, but it appears to work this way.
You also have a powerful conditional update feature, which can do a modification to a single item based on some condition on the old value of this item. These conditional updates are guaranteed to be isolated, so they can be used to ensure safe concurrent modification. For example, a conditional update can be used to implement so-called optimistic locking: An item has a version attribute among other attributes, a client reads the old item, decides what to change it to, and then does the write - with the condition that the version still hasn't changed. If the condition fails (because some other client raced us), the write fails and the client tries the whole process again (read again, apply a change, and write back).
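A minimal sketch of that optimistic-locking loop with boto3 (the table name, key, and attribute names here are made up for illustration):

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical table whose items all carry a numeric "version" attribute.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")

def set_status(order_id, new_status):
    while True:
        # Read the current item (assumed to exist) and remember its version.
        item = table.get_item(Key={"order_id": order_id})["Item"]
        expected_version = item["version"]
        try:
            # Write back only if nobody bumped the version in the meantime.
            table.update_item(
                Key={"order_id": order_id},
                UpdateExpression="SET #s = :status, #v = #v + :one",
                ConditionExpression="#v = :expected",
                ExpressionAttributeNames={"#s": "status", "#v": "version"},
                ExpressionAttributeValues={
                    ":status": new_status,
                    ":one": 1,
                    ":expected": expected_version,
                },
            )
            return
        except ClientError as e:
            if e.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise
            # Another client raced us: loop around, re-read, and try again.
```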
DynamoDB also has a new feature of full (multi-item) transactions. This feature did not exist in Dynamo at all.
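As a rough sketch of what those transactions look like through boto3's low-level client (the table, keys, and amounts are all invented), here is a transfer that either applies both updates or neither:

```python
import boto3

client = boto3.client("dynamodb")

# Move 100 units from account A to account B atomically: if the balance
# check on A fails, neither update is applied.
client.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"account_id": {"S": "A"}},
                "UpdateExpression": "SET #bal = #bal - :amt",
                "ConditionExpression": "#bal >= :amt",
                "ExpressionAttributeNames": {"#bal": "balance"},
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"account_id": {"S": "B"}},
                "UpdateExpression": "SET #bal = #bal + :amt",
                "ExpressionAttributeNames": {"#bal": "balance"},
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
    ]
)
```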

Related

DDD and uniqueness constraint

How would one validate a unique constraint using DDD? Let's say that an Entity has a property name that must be unique across the system, and there is a specific EntityRepository method nameExists(name): bool... This is what I found people suggest doing, because the repository is the abstraction of the collection of all the Entities and should be able to perform this check.
So before creating/adding the new Entity, the command / domain service could check for the existence of a newName against the repository, but I think that this will not always work because of concurrency.
In a concurrent scenario where two transactions are started simultaneously, the EntityRepository's nameExists method might return false for both transactions, and as a result of this two entries with the same name will be incorrectly inserted.
I am sure that I am missing something basic, but the answers I found all point to the repository exists method - TBH others say that a UNIQUE constraint should be put on the DB to catch the concurrency case, but what if one uses Event Sourcing or a persistence layer that does not have unique constraints?
Follow-up question:
What if the uniqueness constraint is to be applied in different levels of a hierarchy?
A Container's name must be unique in the system and then Child names must be unique inside a Container.
Let's say that a transactional DB takes care of the uniqueness at the lowest possible level, what about the domain?
Should I still express the uniqueness logic at the domain level, e.g. with a Domain Service for the system-level uniqueness and embedding Child entities inside the Container entity and having a business rule (and therefore making Container the aggregate root)?
Or should I not bother with "replicating" the uniqueness in the domain and (given there are no other rules to apply between the two) split Container and Child? Will the domain lack expressiveness then?
I am sure that I am missing something basic
Not something basic.
The term we normally use for enforcing a constraint, like uniqueness, across a set of entities is set validation. Greg Young calls your attention to a specific question:
What is the business impact of having a failure?
Most set constraints fall into one of two categories:
constraints that need to be true when the system reaches steady state, but may not hold while work is in progress. In business processes, these are often handled by detecting conflicts in the stored data, and then invoking various mitigation processes to resolve the conflict.
constraints that need to be true always.
The first category includes things like double booking a seat on an airplane; it's not necessarily a problem unless both people show up, and even then you can handle it by bumping someone to another seat, or another flight.
In these cases, you make a best effort - you look at a recent copy of the set, make sure there are no conflicts there, then hope for the best (accepting that some percentage of the time, you'll have missed a change).
See Memories, Guesses and Apologies (Pat Helland, 2007).
The second category is the hard one; to ensure the invariant holds, you have to lock the entire set so that races don't allow two different writers to insert conflicting information.
Relational databases tend to be really good at set validation - putting the entire set into a single database is going to be the right answer (note the assumption that the set is small enough to fit into a single database -- trying to lock two databases at the same time is hard).
Another possibility is to ensure that only one writer can update the set at any given time -- you don't have to worry about losing a race when you are the only one running in it.
Sometimes you can lock a smaller set -- imagine, for example, having a collection of locks with numbers, and the hash code for the name tells you which lock you have to grab.
The simplest version of this is when you can use the name itself as the aggregate identifier.
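A toy sketch of the lock-striping idea just described (the stripe count, in-memory set, and function name are all made up; this only helps when every writer shares one process, otherwise the same idea moves into a shared lock service or the storage layer's keys):

```python
import threading

NUM_STRIPES = 16
stripes = [threading.Lock() for _ in range(NUM_STRIPES)]
taken_names = set()  # stand-in for the repository's nameExists check

def register_entity(name):
    # All writers that could conflict on this name contend on the same lock,
    # so the check-then-insert below cannot race for a given name.
    lock = stripes[hash(name) % NUM_STRIPES]
    with lock:
        if name in taken_names:
            raise ValueError(f"name already in use: {name}")
        taken_names.add(name)
```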
if one uses Event Sourcing or a persistence layer that does not have unique constraints?
Sometimes, you introduce a persistent store dedicated to the set, just to ensure that you can maintain the invariant. See "microservices".
But if you can't change the database, and you can't use a database with the locking guarantees that you need, and the business absolutely has to have the set valid at all times... then you single thread that part of the work.
Everybody that wants to change a name puts a request into a queue, and the one thread responsible for managing the invariant certifies each and every change.
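A toy sketch of that single-certifier thread (in-memory stand-ins everywhere; a real system would put both the queue and the set in durable storage):

```python
import queue
import threading

requests = queue.Queue()   # (name, reply_queue) pairs from would-be writers
taken_names = set()        # stand-in for the persisted set of names

def certifier():
    # The only thread allowed to touch taken_names, so no two requests can
    # interleave between the existence check and the insert.
    while True:
        name, reply = requests.get()
        if name in taken_names:
            reply.put(False)
        else:
            taken_names.add(name)
            reply.put(True)

threading.Thread(target=certifier, daemon=True).start()

def try_claim(name):
    reply = queue.Queue(maxsize=1)
    requests.put((name, reply))
    return reply.get()
```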
There's no magic; just hard work and trade offs.

Doctrine 2 optimistic locking only on changed fields

Is anyone aware of any bundle, or of any plans for Doctrine to implement a new optimistic locking strategy like the one in Telerik framework?
http://docs.telerik.com/data-access/developers-guide/crud-operations/concurrency-control/data-access-tasks-define-model-concurrency-optimistic#checking_for_any_changes
This strategy is great because it's not using version numbers, or timestamps, it compares the old values of the inputs with new values, and only for the changed inputs.
So if I have two different forms updating the same entity, using the current strategies (timestamp or versioning), 2 users will get into conflict even if they do not update the same data.
I know this is a rather old question on the interwebs, so I'm going to go over how this can be done in theory.
A per-column lock implies there is a mechanism for detecting per-column changes - so either the row (or the columns we are concerned about) is hashed and that hash comparison is used to detect/track changes, or we track each column individually.
This tracking would then have to be applied on a per-request basis to determine what's changed, and when. This is where the normal #Version decorator logic would apply.
Telerik is able to handle the per-column changes because the information is compared (array comparisons, most likely) in the browser. This is an isolated state of the data that is not in sync with other browsers/the database until the update.
The easiest option here is to not allow the two forms to collide (in terms of data) and remove the lock - or to decouple the form data from the primary table and allow it to be updated separately (and merged with a SQL join).
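In SQL terms, the Telerik-style check boils down to a compare-and-swap UPDATE that only touches the columns the form actually changed, guarded by their old values. A hedged sketch with plain DB-API calls (sqlite3 just to keep it self-contained; nothing Doctrine-specific, and the table/column names are invented):

```python
import sqlite3

def update_changed_columns(conn, table, row_id, old_values, new_values):
    """Update only the columns that changed, guarded by their old values."""
    changed = {col: val for col, val in new_values.items() if old_values[col] != val}
    if not changed:
        return True  # nothing to do

    set_clause = ", ".join(f"{col} = ?" for col in changed)
    guard_clause = " AND ".join(f"{col} = ?" for col in changed)
    sql = f"UPDATE {table} SET {set_clause} WHERE id = ? AND {guard_clause}"
    params = list(changed.values()) + [row_id] + [old_values[col] for col in changed]

    cur = conn.execute(sql, params)
    conn.commit()
    # rowcount == 0 means someone else already changed one of *these* columns.
    return cur.rowcount == 1
```

A row count of zero tells the caller that another user modified one of the same columns, which is exactly the conflict you want to surface; concurrent edits to other columns slide through untouched.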

Firebase web - transaction on query

Can I run a transaction on a query referring to multiple locations?
In the doc I see that for example startAt returns a firebase.database.Query which has a ref property of type firebase.database.Reference which has the transaction method.
So can I do:
ref.startAt(ver).ref.transaction(transactionUpdate).then(... ?
Would the transaction then operate on multiple locations and update them correctly ?
What I'm trying to do is to get all locations since a particular version (key) and then mark them as 'read' so that a writing client will not update them. For that I need a transaction rather than a simple update.
Thx!
The answer is "no" to all questions.
The ref property of a Query gives you the reference of the node on which you set up the query. Consider how you built the query in the first place. In other words, ref.startAt(x).ref is equivalent to ref.
Manipulating a reference (navigating to children, adding query options, etc.) is completely independent of any query results. It's just local, trivial path manipulation, very similar to formatting a URL.
Transactions can only operate on a single node, by definition, using that node's value snapshots for incremental updates. They cannot "operate on multiple locations and update them correctly". These are not SQL transactions; the only thing they have in common is the name, which can unfortunately be confusing.
The starting node doesn't have to be a leaf node. But if you start a transaction on a "parent" node, the client will have to download every child to create a whole snapshot, potentially multiple times if any of them is modified by another client.
This is most certainly a very slow, fragile and expensive operation, both for the user and you, the owner of the database. In general, it's not recommended to run transactions if the node might grow unbounded.
I suggest revising the presented strategy. Updating "all children" just to store a "read" marker simply does not scale.
You could for example store the last read ID of the client in a single node, and write security rules to enforce that no data with an ID less than this may be modified.
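A hedged sketch of that single-node approach, shown with the Firebase Admin SDK for Python for brevity (the web SDK's ref.transaction() behaves the same way); the /lastReadId path, the numeric IDs, and the credentials are all assumptions about your setup:

```python
import firebase_admin
from firebase_admin import credentials, db

# Placeholder service account file and database URL.
firebase_admin.initialize_app(
    credentials.Certificate("service-account.json"),
    {"databaseURL": "https://example-project.firebaseio.com"},
)

def mark_read_up_to(new_last_id):
    # Transaction on ONE small node: move the marker forward, never back.
    def bump(current):
        current = current or 0
        return max(current, new_last_id)

    db.reference("lastReadId").transaction(bump)
```

Security rules can then compare incoming writes against this marker, so the writing client is rejected for anything already marked as read.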

DynamoDB: Conditional writes vs. the CAP theorem

Using DynamoDB, suppose two independent clients try to write to the same item at the same time, using conditional writes, and both try to change the value that the condition references. Obviously, one of these writes is doomed to fail the condition check; that's ok.
Suppose during the write operation, something bad happens, and some of the various DynamoDB nodes fail or lose connectivity to each other. What happens to my write operations?
Will they both block or fail (sacrifice of "A" in the CAP theorem)? Will they both appear to succeed and only later it turns out that one of them actually was ignored (sacrifice of "C")? Or will they somehow both work correctly due to some magic (consistent hashing?) going on in the DynamoDB system?
It just seems like a really hard problem, but I can't find anything discussing the possibility of availability issues with conditional writes (unlike with, for instance, consistent reads, where the possibility of availability reduction is explicit).
There is a lack of clear information in this area but we can make some pretty strong inferences. Many people assume that DynamoDB implements all of the ideas from its predecessor "Dynamo", but that doesn't seem to be the case and it is important to keep the two separated in your mind. The original Dynamo system was carefully described by Amazon in the Dynamo Paper. In thinking about these, it is also helpful if you are familiar with the distributed databases based on the Dynamo ideas, like Riak and Cassandra. In particular, Apache Cassandra which provides a full range of trade-offs with respect to CAP.
By comparing DynamoDB, which is clearly distributed, to the options available in Cassandra, I think we can see where it is placed in the CAP space. According to Amazon, "DynamoDB maintains multiple copies of each item to ensure durability. When you receive an 'operation successful' response to your write request, DynamoDB ensures that the write is durable on multiple servers. However, it takes time for the update to propagate to all copies." (Data Read and Consistency Considerations). Also, DynamoDB does not require the application to do conflict resolution the way Dynamo does.
Assuming they want to provide as much availability as possible, since they say they are writing to multiple servers, writes in DynamoDB are equivalent to Cassandra's QUORUM level. Also, it would seem DynamoDB does not support hinted handoff, because that can lead to situations requiring conflict resolution. For maximum availability, an inconsistent read would only have to be at the equivalent of Cassandra's ONE level. However, getting a consistent read given the quorum writes would require a QUORUM-level read (following R + W > N for consistency). For more information on levels in Cassandra, see About Data Consistency in Cassandra.
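To make the R + W > N arithmetic concrete, assuming the three replicas per item that Amazon's documentation describes (N = 3): quorum writes mean W = 2; a consistent read then needs R = 2, because 2 + 2 = 4 > 3 forces the read set to overlap the write set in at least one replica; an eventually consistent read at R = 1 gives 1 + 2 = 3, which is not greater than N, so it may land on the one replica the write has not reached yet.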
In summary, I conclude that:
Writes are "Quorum", so a majority of the nodes the row is replicated to must be available for the write to succeed
Inconsistent Reads are "One", so only a single node with the row need be available, but the data returned may be out of date
Consistent Reads are "Quorum", so a majority of the nodes the row is replicated to must be available for the read to succeed
So writes have the same availability as a consistent read.
To specifically address your question about two simultaneous conditional writes: one or both will fail depending on how many nodes are down. However, there will never be an inconsistency. The availability of the writes really has nothing to do with whether they are conditional or not, I think.

How to implement gapless, user-friendly IDs in NHibernate?

I'm designing an application where my Order objects need to have a sequential and user-friendly Id field. I'm avoiding the HiLo algorithm because of the rather large gaps it produces (see here). Naturally, Guid values would make my corporate users go bananas. I'm also avoiding Oracle sequences because of their major disadvantages:
(From: NHibernate POID Generators revealed)
Post insert generators, as the name suggest, assigns the id’s after the entity is stored in the database. A select statement is executed against database. They have many drawbacks, and in my opinion they must be used only on brownfield projects. Those generators are what WE DO NOT SUGGEST as NH Team.
Some of the drawbacks are the following:
Unit Of Work is broken with the use of those strategies. It doesn’t matter if you’re using FlushMode.Commit, each Save results in an insert statement against DB. As a best practice, we should defer insertions to the commit, but using a post insert generator makes it commit on save (which is what UoW doesn’t do).
Those strategies nullify batcher, you can’t take the advantage of sending multiple queries at once (as it must go to database at the time of Save).
Any ideas/experience on implementing user-friendly IDs without major gaps between them?
Edit:
User-friendly Id fields are ones my corporate users can memorize and even discuss and/or have phone conversations about, referring to a particular Order by its code, e.g. "I'm calling to know why order #1625 was denied."
The Id doesn't need to be strictly gapless, but I am worried that my users would get confused when they see gaps like 100, 201, 305. For my older projects, I currently implement NHibernate using Oracle sequences, which occasionally lose a few values when exceptions are thrown but still keep a rather tidy order. The downside is that they break the Unit of Work, which results in an additional hit to the database for every Save command, with or without Session.Flush.
One option would be to keep a key-table that simply stores an incrementing value. This can introduce a few problems, namely possible locking issues as well as additional hits to the database.
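A hedged sketch of that key-table approach (sqlite3 only so the snippet is self-contained; the table and column names are invented, and a real deployment would rely on your actual database and its locking semantics):

```python
import sqlite3

# isolation_level=None gives manual transaction control (autocommit otherwise).
conn = sqlite3.connect("orders.db", isolation_level=None)
conn.execute("CREATE TABLE IF NOT EXISTS order_key (next_id INTEGER NOT NULL)")
if conn.execute("SELECT COUNT(*) FROM order_key").fetchone()[0] == 0:
    conn.execute("INSERT INTO order_key (next_id) VALUES (1)")

def next_order_id(conn):
    # BEGIN IMMEDIATE takes the write lock up front, so two concurrent callers
    # cannot read the same value; this lock is also where the contention lives.
    conn.execute("BEGIN IMMEDIATE")
    (current,) = conn.execute("SELECT next_id FROM order_key").fetchone()
    conn.execute("UPDATE order_key SET next_id = next_id + 1")
    conn.execute("COMMIT")
    return current

print(f"ORD-{next_order_id(conn):06d}")  # e.g. ORD-000001, formatted for users
```

The returned number can then be formatted into whatever user-facing shape you like and stored in its own column, which leads into the next point.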
Another option might be to refine what you mean by "User-friendly Id". This could consist of a combination of a Date/Time and a customer-specific sequence (or including the customer id as well). Also, your order id does not necessarily have to be the actual key on the table. There is nothing to say that you can't use a surrogate key with a separate "calculated" column which represents the order id.
The bottom-line is that it sounds like you want to use a surrogate key, but have the benefits of a natural key. It can be very difficult to have it both ways and a lot comes down to how you actually plan on using the data, how users interpret the data, and personal preference.
