Firestore rules, atomic writes, and write limits

I have a two-part question regarding Firestore rule evaluation. The parts are related, which is why I'm asking them as a single question here...
Part I - Atomic write access
Let's say that I have written a rule such as
allow write: if resource.data.claimedBy == null && request.resource.data.claimedBy == request.auth.uid;
The idea is that any user can make a claim to this resource. But what if this resource is made available to 1000 users all at once, and they all jump to claim it, issuing the .update() call at the same time?
Will this be a first-write-wins scenario, where the field gets set by the first user (the winner) and everyone else's writes are rejected because the rule now fails due to the winner's value being present? Or is there any risk whatsoever that a race condition could result, so that for a moment the value was one thing and then became another?
I feel like the rules would prohibit a race condition, but I don't know for certain.
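For concreteness, here is roughly what each of those 1000 claim attempts would look like on the client. This is only a sketch using the Firebase v9 Web SDK; the items/some-item path is hypothetical.

    import { getFirestore, doc, updateDoc } from "firebase/firestore";
    import { getAuth } from "firebase/auth";

    // Every user races to set claimedBy on the same document.
    // If another user's claim commits first, this update should be
    // rejected by the rule above with a permission-denied error.
    const db = getFirestore();
    const uid = getAuth().currentUser!.uid;
    await updateDoc(doc(db, "items", "some-item"), { claimedBy: uid });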
Part II - Write limits
Ok, so this part builds off the first. Firestore has a soft limit of 1 sustained write per second for a single document. Let's assume Part I works how I hope and the other 999 users get a write rejection. Do these rejections count towards the 1 write/second limit because a write was initiated, or do they not count because the rules prohibited an actual write?
Obviously, having all these claim attempts at once count as 1000 writes would be bad for the 1 write/second limit.
I am assuming here, but I believe they would not count toward that limit, because my understanding is that the limit is imposed by the mechanics of the underlying storage layer, and a rules rejection prevents the request from ever reaching that layer. But again, I don't know for certain.
Part III - Bonus part
Do writes that are rejected by a rule still count as a "write" as far as billing is concerned? I know a query for a document that does not exist (no documents actually read) still counts as a single read, so I am wondering whether a write blocked by the rules works in a similar way and incurs a charge.
Thank you so much!

You should use a transaction to prevent concurrent writes. The transaction can check whether the document has already been claimed, then abort if it has.
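A minimal sketch of such a transaction (v9 Web SDK; the path and field names follow the question and are otherwise hypothetical):

    import { getFirestore, doc, runTransaction } from "firebase/firestore";

    const db = getFirestore();

    // Read the document inside the transaction and only claim it if
    // nobody got there first; Firestore retries on contention, so the
    // outcome is first-wins.
    async function claim(itemId: string, uid: string): Promise<void> {
      await runTransaction(db, async (tx) => {
        const ref = doc(db, "items", itemId);
        const snap = await tx.get(ref);
        if (snap.data()?.claimedBy != null) {
          throw new Error("Already claimed");
        }
        tx.update(ref, { claimedBy: uid });
      });
    }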
If a write was denied by a rule, it doesn't count toward any write limits or billing, as no data in the document was actually changed.

Related

Strength of atomic update in Firestore increment

I'm a Firestore user recently diving into the concept of "atomic" updates, especially the increment update on Firestore documents. There is a classic article on Firestore increment in the context of atomic updates. And here comes my question.
Q: How strong is this atomic increment(number) update? Does this operation really have no limitations when it comes to operating truly atomically?
Let me explain the details with an example case. We know that Firestore has a write limit of 10,000 writes per second (up to 10 MiB per second) per database instance, and we also know that Firestore's increment method updates documents atomically. So I'd like to know whether the extreme example below would work perfectly atomically.
This Firestore instance has only a single document, and numerous users (maybe 10,000 users at most) update that single document using the increment method, each incrementing the same field value by a random double between 0 and 1, WITHIN a single second: 10,000 updates in 1 second.
The above case uses as much of Firestore's per-second write rate limit as possible, and all operations update a single field of the same document. If the increment method handles update requests truly atomically, we could say all 10,000 deltas will be accumulated correctly into that single field.
But this is only a theoretical, conceptual idea, and it seems really hard for Firestore (or any other database system) to perform such an extreme set of increment operations without exception while it also has to keep processing other incoming operations. This is a real-world problem, actually. Say a beloved singer, Ariana Grande, has just uploaded an Instagram post. If we modeled that event with a Firestore document, we would have to handle thousands of increment requests for likes in a single second.
So, I hope to learn whether there are truly no limitations on the atomic increment method even when a very high number of extremely concurrent increment requests targets a very small number of documents. I hope this question reaches the Firebase gurus in the community! Comments are really welcome! Thanks in advance [:
I'm not sure I understand your question completely, but I'll try to help anyway by explaining how Firestore and its increment operation work.
Firestore's main write limits come from the fact that data needs to be synchronized between data centers for each write operation. This is not a quota-type limit, but a physical limit of how fast data can be pushed across the wires.
Since you're talking about frequent writes to a single document, you're going to hit the soft limit of 1 sustained write per second per document much sooner. This limit is likewise caused by the physical nature of how the database works and the need to synchronize documents and indexes between servers/data centers.
While using the increment() operation means that no roundtrip is needed between the client and the server, it makes no difference to the data that needs to be read/written on the servers themselves. Therefore it makes no difference to the documented throughput limits.
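For reference, an increment is issued as a single write with no prior read required. A sketch (v9 Web SDK; the posts/xyz path is hypothetical):

    import { getFirestore, doc, updateDoc, increment } from "firebase/firestore";

    // The delta is applied on the server against the current value;
    // any number works, not just 1.
    await updateDoc(doc(getFirestore(), "posts", "xyz"), {
      likes: increment(Math.random()),
    });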
If you need to perform counts beyond the documented throughput limits, have a look at the documentation on using a distributed counter.
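A minimal sketch of that distributed counter pattern, assuming a counters/{id}/shards subcollection whose shard documents were pre-created with count: 0 (names and shard count are illustrative):

    import {
      getFirestore, doc, collection, getDocs, updateDoc, increment,
    } from "firebase/firestore";

    const db = getFirestore();
    const NUM_SHARDS = 10; // raises the sustained limit to roughly 10 writes/sec

    // Writes are spread across shard documents instead of one hot document.
    async function incrementCounter(counterId: string, delta: number): Promise<void> {
      const shardId = String(Math.floor(Math.random() * NUM_SHARDS));
      await updateDoc(doc(db, "counters", counterId, "shards", shardId), {
        count: increment(delta),
      });
    }

    // Reading the total means summing all shards.
    async function getCount(counterId: string): Promise<number> {
      const snap = await getDocs(collection(db, "counters", counterId, "shards"));
      return snap.docs.reduce((sum, d) => sum + (d.data().count as number), 0);
    }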

Firebase - preventing malicious user from pulling entire database

I was watching the Firebase doc videos and noticed that in this video: https://www.youtube.com/watch?v=9sOT5VOflvQ&list=PLl-K7zZEsYLn8h1NyU_OV6dX8mBhH2s_L&index=4
at 6:39, Doug mentions that it is possible to limit the number of documents returned by one query, by doing something like:
allow list: if request.query.limit <= 20;
However, he mentions that, although this is beneficial because it prevents you from accidentally executing a very costly set of reads, it still won't prevent malicious users from reading everything in your database by making multiple requests and using pagination to sift through your database. I could envision some sort of infinite while loop in JavaScript that makes this very problematic and costly.
The only way that I could think to solve this problem is by somehow using timestamps perhaps, and saving some information associated with each user which informs the database of when they last made a request. Would it be possible to do this and then access those timestamps in the security rules? Something along the lines of (where the second condition is kind of pseudo-code):
allow list: if request.query.limit <= 20 && get(/databases/$(database)/documents/users/$(request.auth.uid)).data.last-time <= 100;
This seems to me the most feasible way but if anyone else has thoughts on this, they would be much appreciated!
The problem is that you can't update last-time when this query happens. And since you can't record the read, there is no way to restrict the number of reads a user can perform through this mechanism.
Because of this limitation it is possible to implement a write rate limit in security rules, but not a read rate limit.
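For completeness, a sketch of that write-rate-limit idea in the rules language (field names are illustrative): the client stamps the document with the server time, and the rule requires the previous write to be at least 60 seconds old.

    match /users/{uid} {
      // lastWrite must be written as a server timestamp, which rules
      // see as request.time; the previous value gates the new write.
      allow update: if request.auth.uid == uid
        && request.resource.data.lastWrite == request.time
        && request.time > resource.data.lastWrite + duration.value(60, 's');
    }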

Firestore pricing

There are several questions asked about this topic but I can't find one that answers my question. As described here, there is no clear explanation as to whether the minimum charge applies to query.get() calls or to real-time listeners as well. Quoted:
There is a minimum charge of one document read for each query that you perform, even if the query returns no results.
The reason I am asking this question, even though it may seem obvious to some, is the phrase *for each query that you perform* in that statement, which could mean a one-time trigger, e.g. with the get() method.
Scenario: If 10 users are listening to changes in a collection with queries, i.e. query.addSnapshotListener(), and a change occurs in one document which matches the query filter of only two users, are the other eight charged a cost of one read too?
Database used: Firestore
In this scenario I would say no, the other eight would not be charged reads, because the documents they are listening to have not been updated, added, or removed from their result sets based on their filters (query params). The reads aren't based on changes to the collection but rather on changes to the stream of documents you are specifically listening to. Because that one document change was not part of the documents the other eight users were listening to, there is no new read for them. However, if that one document change led to the document now matching the query filters of those other eight, then yes, there would be eight new reads for those users. Hope that makes sense.
It's also worth noting that having offline persistence enabled via the SDK, together with Firestore's caching, helps limit reads, as does using a singleton Observable that multiple parts of your app subscribe to, as opposed to opening multiple streams of the same query throughout your app. That doesn't apply to this question directly, but since it's in the same vein, it's worth noting.
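A sketch of that singleton idea (v9 Web SDK; the query and types are illustrative): one onSnapshot stream feeds every subscriber in the app.

    import {
      getFirestore, collection, query, where, onSnapshot, DocumentData,
    } from "firebase/firestore";

    type Callback = (docs: DocumentData[]) => void;
    const subscribers = new Set<Callback>();

    // A single listener for the whole app; consumers subscribe to it
    // instead of opening duplicate streams of the same query.
    const q = query(collection(getFirestore(), "items"), where("status", "==", "open"));
    onSnapshot(q, (snap) => {
      const docs = snap.docs.map((d) => d.data());
      subscribers.forEach((fn) => fn(docs));
    });

    export function subscribe(fn: Callback): () => void {
      subscribers.add(fn);
      return () => subscribers.delete(fn);
    }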

Is it possible to observe partial changes from an atomic Firestore write?

The Firestore docs say that both transactions and batched writes are atomic operations - either all changes are written or nothing is changed.
This question is about whether the changes of an atomic operation in Firestore can be partially observed, or whether the all or nothing guarantee applies to readers too?
Example:
Let's say that we have a Firestore database with at least two documents, X and Y.
Let's also say that there are at least two clients (A and B) connected to this database.
At some point client A executes a batched write that updates both document X and Y.
Later, client B reads document X and observes the change that client A made.
Now, if client B would read document Y too, is there a guarantee that the change made by A (in the same batched write operation) will be observed?
(Assuming that no other changes were made to those documents)
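For reference, client A's batched write in this example would look something like this (v9 Web SDK; paths and fields are illustrative):

    import { getFirestore, doc, writeBatch } from "firebase/firestore";

    const db = getFirestore();

    // Both updates commit atomically: either X and Y both change,
    // or neither does.
    const batch = writeBatch(db);
    batch.update(doc(db, "docs", "X"), { value: "new-x" });
    batch.update(doc(db, "docs", "Y"), { value: "new-y" });
    await batch.commit();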
I've tested it and I've never detected any inconsistencies. However, testing alone can't settle the matter. It comes down to the level of consistency provided by Firestore under all circumstances (high write frequency, large data sets, failover, etc.).
It might be the case that Firestore is allowed (for a limited amount of time) to expose the change of document X to client B but still not expose the change of document Y. Both changes will eventually be exposed.
The question is: will they be exposed as an atomic operation, or is this atomicity only provided for the write?
I've received an excellent response from Gil Gilbert in the Firebase Google Group.
In short: Firestore does guarantee that reads are consistent too. There are no partial observations of the kind I was worried about.
However, Gil mentions two cases where a client could observe this kind of inconsistency anyway, due to offline caching and session handling.
Please refer to Gil's response (link above) for details.

DynamoDB atomic counter for account balance

In DynamoDB an Atomic Counter is a number that avoids race conditions
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.AtomicCounters
What makes a number atomic, and can I add/subtract from a float in non-unit values?
Currently I am doing: "SET balance = balance + :change"
(long version) I'm trying to use DynamoDB for user balances, so accuracy is paramount. The balance can be updated from multiple sources simultaneously. There is no need to pre-fetch the balance, we will never deny a transaction, I just care that when all the operations are finished we are left with the right balance. The operations can also be applied in any order, as long as the final result is correct.
From what I understand, this should be fine, but I haven't seen any atomic increment examples that change values by anything other than "1".
My hesitation arises because questions like Amazon DynamoDB Conditional Writes and Atomic Counters suggest using conditional writes for similar situations, which sounds like a terrible idea. If I fetch the balance, change it, and do a conditional write, the write could fail if the value has changed in the meantime. However, the balance is the definition of business critical, and I'm always nervous when ignoring documentation.
-Additional Info-
All writes will originate from a Lambda function, and I expect pretty much 100% success rates on writes. However, I also maintain a history of all changes, and in the event the balance ends up in an "unknown" state (e.g. a network timeout), I could lock the table and recalculate the correct balance from the history.
This, I think, gives the best "normal" operation. 99.999% of the time, all updates will work with a single write. Failure could be very costly, as we would need to scan a client's entire history to recreate the balance, but as trade-offs go that seems a pretty safe bet.
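For reference, here is a sketch of what I'm doing, using the AWS SDK for JavaScript v3 (table and key names are hypothetical):

    import { DynamoDBClient, UpdateItemCommand } from "@aws-sdk/client-dynamodb";

    const client = new DynamoDBClient({});

    // Applies the delta atomically on the server; no prior read needed.
    // DynamoDB numbers are exact decimals (up to 38 significant digits),
    // so non-unit deltas like "-12.34" are applied exactly.
    // Note: SET fails if balance doesn't exist yet; "ADD balance :change"
    // would treat a missing attribute as 0 instead.
    async function applyDelta(userId: string, change: string): Promise<void> {
      await client.send(new UpdateItemCommand({
        TableName: "Balances",
        Key: { userId: { S: userId } },
        UpdateExpression: "SET balance = balance + :change",
        ExpressionAttributeValues: { ":change": { N: change } },
      }));
    }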
The documentation for atomic counters is pretty clear that they can over- or under-count when requests are retried, and in my opinion they will not be safe for your use case.
The problem you are solving is pretty common; AWS recommends using optimistic locking in such scenarios.
Please refer to the following AWS documentation,
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptimisticLocking.html
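The core of optimistic locking is a version attribute checked by a conditional write: if another writer got in first, the condition fails and you re-read and retry. A sketch under the same hypothetical table as above:

    import { DynamoDBClient, UpdateItemCommand } from "@aws-sdk/client-dynamodb";

    const db = new DynamoDBClient({});

    // Write the new balance only if the version we read is still current;
    // throws ConditionalCheckFailedException on conflict, so the caller
    // should re-read the item and retry.
    async function setBalance(userId: string, newBalance: string, readVersion: number): Promise<void> {
      await db.send(new UpdateItemCommand({
        TableName: "Balances",
        Key: { userId: { S: userId } },
        UpdateExpression: "SET balance = :b, version = :v",
        ConditionExpression: "version = :expected",
        ExpressionAttributeValues: {
          ":b": { N: newBalance },
          ":v": { N: String(readVersion + 1) },
          ":expected": { N: String(readVersion) },
        },
      }));
    }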
It appears that this concept is workable, from an AWS staff reply:
Often application writers will use a combination of both approaches, where you can have an atomic counter for real-time counting, and an audit table for perfect accounting later on.
https://forums.aws.amazon.com/thread.jspa?messageID=470243&#470243
There is also confirmation that the update will be atomic and that any update operation will be consistent:
All non-batch requests you send to DynamoDB get processed atomically - there is no interleaving involved of any sort between requests. Write requests are also consistent, so any write request will update the latest version of the item at the time the request is received.
https://forums.aws.amazon.com/thread.jspa?messageID=621994&#621994
In fact, every write to a given item is strongly consistent; in DynamoDB, all operations against a given item are serialized.
https://forums.aws.amazon.com/thread.jspa?messageID=324353&#324353
