DynamoDB atomic counter for account balance - amazon-dynamodb

In DynamoDB, an atomic counter is a numeric attribute that is updated unconditionally, which avoids race conditions:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.AtomicCounters
What makes a number atomic, and can I add to/subtract from a float in non-unit amounts?
Currently I am doing: "SET balance = balance + :change"
(long version) I'm trying to use DynamoDB for user balances, so accuracy is paramount. The balance can be updated from multiple sources simultaneously. There is no need to pre-fetch the balance and we will never deny a transaction; I just care that when all the operations are finished we are left with the right balance. The operations can also be applied in any order, as long as the final result is correct.
From what I understand, this should be fine, but I haven't seen any atomic increment examples that change values by anything other than "1".
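For reference, this is roughly what I'm doing today with boto3 (table and key names simplified); as I understand it, DynamoDB numbers are exact decimals, so boto3 wants a Decimal rather than a Python float for non-unit amounts:

```python
import boto3
from decimal import Decimal

table = boto3.resource("dynamodb").Table("Accounts")  # placeholder table name

def apply_balance_change(account_id, change):
    # Unconditional, server-side read-modify-write of a single attribute.
    # DynamoDB numbers are exact decimals, so pass Decimal, not float.
    table.update_item(
        Key={"accountId": account_id},  # placeholder key name
        UpdateExpression="SET balance = balance + :change",
        ExpressionAttributeValues={":change": Decimal(str(change))},
    )
```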
My hesitation arises because questions like Amazon DynamoDB Conditional Writes and Atomic Counters suggest using conditional writes for a similar situation, which sounds like a terrible idea: if I fetch the balance, change it and do a conditional write, the write could fail if the value has changed in the meantime. However, balance is the definition of business critical, and I'm always nervous when ignoring documentation.
-Additional Info-
All writes will originate from a Lambda function, and I expect pretty much 100% success rates in writes. However, I also maintain a history of all changes, and in the event the balance is in an "unknown" state (e.g. a network timeout), I could lock the table and recalculate the correct balance from the history.
This, I think, gives the best "normal" operation. 99.999% of the time, all updates will work with a single write. Failure could be very costly, as we would need to scan a client's entire history to recreate the balance, but in terms of trade-offs that seems a pretty safe bet.

The documentation for atomic counters is pretty clear, and in my opinion it will not be safe for your use case.
The problem you are solving is pretty common, AWS recommends using optimistic locking in such scenarios.
Please refer to the following AWS documentation,
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptimisticLocking.html
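A minimal sketch of what optimistic locking looks like as a plain conditional write in boto3 (table, key and attribute names here are hypothetical): read the item along with its version, then write only if the version is unchanged, and retry on failure:

```python
import boto3
from decimal import Decimal
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Accounts")  # hypothetical table

def write_balance_if_unchanged(account_id, new_balance, expected_version):
    # The write succeeds only if nobody bumped the version since we read it.
    try:
        table.update_item(
            Key={"accountId": account_id},
            UpdateExpression="SET balance = :b, version = :next",
            ConditionExpression="version = :expected",
            ExpressionAttributeValues={
                ":b": Decimal(str(new_balance)),
                ":next": expected_version + 1,
                ":expected": expected_version,
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # lost the race: re-read the item and retry
        raise
```

This is essentially the hand-rolled equivalent of what the DynamoDBMapper version attribute in the linked documentation does for you in Java.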

It appears that this concept is workable, from an AWS staff reply:
Often application writers will use a combination of both approaches,
where you can have an atomic counter for real-time counting, and an
audit table for perfect accounting later on.
https://forums.aws.amazon.com/thread.jspa?messageID=470243&#470243
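For what it's worth, a sketch of that combined approach using TransactWriteItems (which postdates that thread; the table, key and attribute names are made up for the example). The counter update and the audit record either both happen or neither does:

```python
import uuid
import boto3

client = boto3.client("dynamodb")

def apply_change_with_audit(account_id, change):
    # ADD increments the balance atomically (creating it if missing);
    # the Put appends an audit row in the same all-or-nothing transaction.
    client.transact_write_items(
        TransactItems=[
            {
                "Update": {
                    "TableName": "Accounts",  # made-up table name
                    "Key": {"accountId": {"S": account_id}},
                    "UpdateExpression": "ADD balance :change",
                    "ExpressionAttributeValues": {":change": {"N": str(change)}},
                }
            },
            {
                "Put": {
                    "TableName": "BalanceHistory",  # made-up table name
                    "Item": {
                        "accountId": {"S": account_id},
                        "changeId": {"S": str(uuid.uuid4())},
                        "amount": {"N": str(change)},
                    },
                }
            },
        ]
    )
```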
There is also confirmation that the update will be atomic and that any update operation will be consistent:
All non-batch requests you send to DynamoDB get processed atomically
- there is no interleaving of any sort between requests. Write requests are also consistent, so any write request will update
the latest version of the item at the time the request is received.
https://forums.aws.amazon.com/thread.jspa?messageID=621994&#621994
In fact, every write to a given item is strongly consistent;
in DynamoDB, all operations against a given item are serialized.
https://forums.aws.amazon.com/thread.jspa?messageID=324353&#324353

Related

Strength of atomic update in Firestore increment

I'm a Firestore user recently diving into the concept of "atomic" updates, especially Firestore documents' increment update. There is a classic article on the Firestore increment in the context of atomic updates. And here comes my question.
Q: How strong is this atomic increment(number) update? Does this operation really have no limitations when it comes to operating truly atomically?
Let me explain the details with an example case. We know that Firestore has a write limit of 10,000 writes per second (up to 10 MiB per second) per database instance, and we also know that Firestore's increment method updates documents atomically. So, I would like to know whether the extreme example case below would work perfectly atomically.
This Firestore instance has only a single document, and numerous users (maybe 10,000 users maximum) update that single document using the increment method, each incrementing the same field value by a random double between 0 and 1, WITHIN a single second: 10,000 updates in 1 second.
The above case uses as much of Firestore's per-second write rate limit as possible, and all operations update a single field of the same document. If the increment method deals with update requests truly atomically, we might say all 10,000 increments will be applied correctly to that single field.
But this is only a theoretical, conceptual idea, and it seems really hard for Firestore (or any other database system) to handle such an extreme set of increment operations without exception while it also has to process other incoming operations; the Firestore instance would have to keep up with the incoming API requests. This is a real-world problem, actually. Let's say a post by a popular singer, Ariana Grande, has just been uploaded to Instagram. If we modelled that event with a Firestore document, we would have to deal with thousands of increment requests for likes in a single second.
So, I would like to know whether there are truly no limitations on the atomic increment method, even when a very large number of highly concurrent increment requests targets a very small number of documents. I hope this question reaches the Firebase gurus in the community! Comments are really welcome! Thanks in advance [:
I'm not sure I understand your question completely, but I'll try to help anyway by explaining how Firestore and its increment operation work.
Firestore's main write limits come from the fact that data needs to be synchronized between data centers for each write operation. This is not a quota-type limit, but a physical limit of how fast data can be pushed across the wires.
Since you talk about frequent writes to a single document, you're going to hit the soft limit of 1 sustained write per second per document much sooner. This is also caused by the physical nature of how the database works, and the need to synchronize the documents and indexes between servers/data centers.
While using the increment() operation means that no roundtrip is needed between the client and the server, it makes no difference to the data that needs to be read/written on the servers themselves. Therefore it makes no difference to the documented throughput limits.
If you need to perform counts beyond the documented throughput limits, have a look at the documentation on using a distributed counter.
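For reference, a distributed counter along those lines might look roughly like this with the Python client (the shard count and field names are just an example); each write lands on a random shard document, so no single document has to absorb more than about 1 sustained write per second:

```python
import random
from google.cloud import firestore

db = firestore.Client()
NUM_SHARDS = 10  # example value: roughly the peak writes/second you expect

def increment_counter(doc_ref, amount=1.0):
    # Spread writes across shard documents nested under the counter document.
    shard_id = str(random.randrange(NUM_SHARDS))
    shard_ref = doc_ref.collection("shards").document(shard_id)
    shard_ref.set({"count": firestore.Increment(amount)}, merge=True)

def read_counter(doc_ref):
    # Reading the total means summing every shard.
    return sum(snap.get("count") for snap in doc_ref.collection("shards").stream())
```

The trade-off is that reading the total now touches every shard instead of a single document.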

Firestore, atomic writes/updates on more than 500 documents

One of the main reasons for using Firestore batch writes is that they are atomic and ensure data consistency. However, they have a limit of 500 operations. Considering a large application, one may have denormalized user data in more than 500 documents. So when a user updates any of his/her profile details, I have to update them in all of those 500+ documents while maintaining data consistency (atomic updates) at the same time.
An intuitive solution would be maintaining an array of batches, and keeping track of those which fail, and then retry the failed batches manually.
However I want to ask that:
1) Are there any best practices, or some easier and more reliable methods, of achieving this? Considering the limit of 500 operations per batch, most commercial apps have to face the same issue.
2) Also, is there a smarter approach than just denormalizing data, so that through "that smart approach" this whole issue of data consistency (as stated above) can be avoided in the first place?
An intuitive solution would be maintaining an array of batches, and keeping track of those which fail, and then retry the failed batches manually.
That's a viable solution that you can go ahead with.
1) Are there any best practices, or some easier and more reliable methods, of achieving this? Considering the limit of 500 operations per batch, most commercial apps have to face the same issue.
I can tell you what I do. I usually create a counter variable and increment its value every time I add an update operation to the batch. Then I add an if statement: every time the counter is incremented, check whether it has reached 500. At that point, commit the current batch, reset the counter and start a new batch, picking up where you left off. Do this until you finish all the batch writes.
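Here is a rough sketch of that loop with the Python client (the function and field names are just for illustration); each committed chunk is atomic on its own, but the run as a whole is not, so the refs from failed chunks are returned for a retry:

```python
from google.cloud import firestore

db = firestore.Client()

def update_profile_everywhere(doc_refs, profile_fields, chunk_size=500):
    # Commit the updates in chunks of up to 500 operations each.
    failed = []
    for start in range(0, len(doc_refs), chunk_size):
        chunk = doc_refs[start:start + chunk_size]
        batch = db.batch()
        for ref in chunk:
            batch.update(ref, profile_fields)
        try:
            batch.commit()
        except Exception:
            failed.extend(chunk)  # caller can rebuild a batch from these refs
    return failed
```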
2) Also, is there a smarter approach than just denormalizing data, so that through "that smart approach" this whole issue of data consistency (as stated above) can be avoided in the first place?
The problem with batch writes cannot be solved with the help of denormalization. Duplicating data isn't a solution.

Aggregation on FireStore/CloudDatastore. Use Cloud Functions onCreate/Update?

I want to create an expense tracker and one of the things I want to find out is how much did I spend in each month per category.
How should I do this in FireStore/DataStore?
1. Pull down the required data and do the aggregation locally? Seems very slow.
2. Perform the aggregation every time a transaction is created/updated and save it in a table? But this may result in many invocations of the functions, which may be costly.
Is there a better way? It seems like option 2 is currently the best, but I wonder if there's any way I can reduce costs.
I note that I may not need the aggregated data in realtime, so is there a way to debounce the Cloud Function execution? At times I will batch-insert a bunch of transactions. I wonder if there's a way to disable the functions for certain queries and manually call them after the batch has finished, for example?
The two approaches you describe are indeed the most common.
The best approach mostly depends on the number of transactions you have. If you have few transactions, then it may be totally fine to do the aggregation on each client. But as you get more transactions, the overhead of downloading the data will become prohibitive and you're more likely to want to keep a running total in the database.
I'd normally recommend keeping the total up to date with any transaction. You can even do that with client-side code, by using transactions (to prevent multiple users overwriting each other's updates) and server-side security rules (to prevent malicious actors from writing an aggregate that doesn't match its transaction).
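A rough sketch of that pattern, shown here with the Python server client for brevity (the web/mobile SDKs have equivalent transaction APIs; collection and field names are invented). The expense and the running total are written in one transaction, so concurrent writers can't overwrite each other's totals:

```python
from google.cloud import firestore

db = firestore.Client()

def add_expense(expense, month_key):
    # expense is e.g. {"category": "food", "amount": 12.5, "createdAt": ...}
    expense_ref = db.collection("expenses").document()
    total_ref = (db.collection("totals").document(month_key)
                   .collection("categories").document(expense["category"]))
    transaction = db.transaction()

    @firestore.transactional
    def run(tx):
        snap = total_ref.get(transaction=tx)   # reads must come before writes
        current = (snap.to_dict() or {}).get("amount", 0)
        tx.set(expense_ref, expense)
        tx.set(total_ref, {"amount": current + expense["amount"]})

    run(transaction)
```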
If you want to aggregate in batches, you'll want to run code periodically, either in a server you control, or in Cloud Functions.
There is nothing built into Cloud Functions to debounce document writes. You could probably keep a debounce counter in Firestore, but that would then be reading/writing a document on each transaction.
It seems more reasonable to run a function on a timer, as described in this blog post and shown in this video. But you'll need to make sure your data structure in that case allows the code to detect which transactions it needs to aggregate.
One way to do this is to ensure the transactions can be ordered in some way, e.g. by giving them a timestamp, and having your aggregation code keep track (likely in the database) of the last timestamp it has aggregated already. Then whenever the aggregator runs, it:
reads the current aggregated value
queries the database for transactions that have been added since it last ran
loops over those transactions, updating the aggregated value
writes the aggregated value and the last timestamp back to the database in a transaction (to ensure either both are written, or neither is written)
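Those four steps could look roughly like this with the Python client (the collection, field and watermark names are made up); the query reads and both writes happen inside one transaction so the total and the watermark stay in sync:

```python
from google.cloud import firestore

db = firestore.Client()

def aggregate_category(category):
    agg_ref = db.collection("aggregates").document(category)
    transaction = db.transaction()

    @firestore.transactional
    def run(tx):
        # 1. read the current aggregated value (and the last timestamp seen)
        snap = agg_ref.get(transaction=tx)
        data = snap.to_dict() or {"total": 0, "lastSeen": None}

        # 2. query transactions added since the aggregator last ran
        query = (db.collection("transactions")
                   .where("category", "==", category)
                   .order_by("createdAt"))
        if data["lastSeen"] is not None:
            query = query.where("createdAt", ">", data["lastSeen"])

        # 3. loop over those transactions, updating the aggregated value
        total, last_seen = data["total"], data["lastSeen"]
        for doc in query.stream(transaction=tx):
            item = doc.to_dict()
            total += item["amount"]
            last_seen = item["createdAt"]

        # 4. write the total and the watermark back together (both or neither)
        tx.set(agg_ref, {"total": total, "lastSeen": last_seen})

    run(transaction)
```

Note that combining an equality filter with a range filter like this requires a composite index in Firestore.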

DynamoDB: Conditional writes vs. the CAP theorem

Using DynamoDB, suppose two independent clients try to write to the same item at the same time, using conditional writes, and both try to change the value that the condition references. Obviously, one of these writes is doomed to fail the condition check; that's ok.
Suppose during the write operation, something bad happens, and some of the various DynamoDB nodes fail or lose connectivity to each other. What happens to my write operations?
Will they both block or fail (sacrifice of "A" in the CAP theorem)? Will they both appear to succeed and only later it turns out that one of them actually was ignored (sacrifice of "C")? Or will they somehow both work correctly due to some magic (consistent hashing?) going on in the DynamoDB system?
It just seems like a really hard problem, but I can't find anything discussing the possibility of availability issues with conditional writes (unlike with, for instance, consistent reads, where the possibility of availability reduction is explicit).
There is a lack of clear information in this area, but we can make some pretty strong inferences. Many people assume that DynamoDB implements all of the ideas from its predecessor "Dynamo", but that doesn't seem to be the case, and it is important to keep the two separate in your mind. The original Dynamo system was carefully described by Amazon in the Dynamo Paper. In thinking about these questions, it is also helpful if you are familiar with the distributed databases based on the Dynamo ideas, like Riak and Cassandra; in particular Apache Cassandra, which provides a full range of trade-offs with respect to CAP.
By comparing DynamoDB, which is clearly distributed, to the options available in Cassandra, I think we can see where it is placed in the CAP space. According to Amazon, "DynamoDB maintains multiple copies of each item to ensure durability. When you receive an 'operation successful' response to your write request, DynamoDB ensures that the write is durable on multiple servers. However, it takes time for the update to propagate to all copies." (Data Read and Consistency Considerations). Also, DynamoDB does not require the application to do conflict resolution the way Dynamo does. Assuming they want to provide as much availability as possible, and since they say they are writing to multiple servers, writes in DynamoDB are equivalent to Cassandra's QUORUM level. It would also seem that DynamoDB does not support hinted handoff, because that can lead to situations requiring conflict resolution. For maximum availability, an inconsistent read would only have to be at the equivalent of Cassandra's ONE level. However, to get a consistent read given the quorum writes would require a QUORUM-level read, following the R + W > N rule for consistency (for example, with N = 3 replicas and quorum writes of W = 2, reads need R = 2 so that R + W = 4 > 3). For more information on levels in Cassandra see About Data Consistency in Cassandra.
In summary, I conclude that:
Writes are "Quorum", so a majority of the nodes the row is replicated to must be available for the write to succeed
Inconsistent Reads are "One", so only a single node with the row need be available, but the data returned may be out of date
Consistent Reads are "Quorum", so a majority of the nodes the row is replicated to must be available for the read to succeed
So writes have the same availability as a consistent read.
To specifically address your question about two simultaneous conditional writes: one or both will fail, depending on how many nodes are down. However, there will never be an inconsistency. I think the availability of the writes really has nothing to do with whether they are conditional or not.

Riak and time-sorted records

I'd like to sort some records, stored in Riak, by a function of each record's score and "age" (current time - creation date). What is the best way to do a "time-sensitive" query in Riak? Thus far, the options I'm aware of are:
Realtime mapreduce - Do the entire calculation in a mapreduce job, at query-time
ETL job - Periodically do the query in a background job, and store the result back into riak
Punt it to the app layer - Don't sort at all using riak, and instead use an application-level layer to sort and cache the records.
MapReduce seems the best on paper; however, I've read mixed reports about the real-world latency of Riak MapReduce.
MapReduce is quite an expensive operation and is not recommended as a real-time querying tool. It works best when run over a limited set of data in batch mode, where the number of concurrent MapReduce jobs can be controlled, and I would therefore not recommend the first option.
Having a process periodically process/aggregate data for a specific time slice as described in the second option could work and allow efficient access to the prepared data through direct key access. The aggregation process could, if you are using leveldb, be based around a secondary index holding a timestamp. One downside could however be that newly inserted records may not show up in the results immediately, which may or may not be a problem in your scenario.
If you need the computed records to be accurate and will perform a significant number of these queries, you may be better off updating the computed summary records as part of the writing and updating process.
In general it is a good idea to make sure that you can get the data you need as efficiently as possible, preferably through direct key access, and then perform filtering of unneeded data, as well as sorting and aggregation, on the application side.
