I have this items node:
items
 |
 |---- item_id_1
          - name:
          - type:
          - price:
The idea is when one user gets an item, 2 things happen:
The item gets removed from the items node.
The item gets added to his/her my_items node.
These are items that only one user can get (obviously the user who requested them first), and then they get removed.
The problem:
If multiple users request the item at the same time, how do I handle who this item will go to?
Question:
How do I make sure that if multiple users request an item at the same time, only one of them gets it (and no one else)?
Are Security rules able to solve this?
I am aware of Firebase Transaction operations, but not sure if they help in my case.
Any advice is appreciated.
Thanks.
I am aware of Firebase Transaction operations, but not sure if they help in my case.
When it comes to updating a Realtime Database location (node) in a multi-user environment, a transaction is indeed required. However, a Realtime Database transaction can only read and update a single location (node); there is no way to perform multi-location transactions. Firestore, on the other hand, does support them:
Using the Cloud Firestore client libraries, you can group multiple operations into a single transaction.
So when using Firestore, you can update documents atomically regardless of whether they live in a collection or a sub-collection. In the Realtime Database, you can only safely update children under the location (node) you are using for the transaction.
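To make that concrete, here is a minimal sketch (Node.js Admin SDK) of claiming one of the items from the question in a single Firestore transaction. The users/{uid}/my_items path is only an assumed layout for where my_items might live in Firestore:

const admin = require("firebase-admin");
admin.initializeApp();
const db = admin.firestore();

// Atomically claim an item: delete it from the shared "items" collection and
// copy it into the claiming user's "my_items" sub-collection.
async function claimItem(itemId, uid) {
  const itemRef = db.collection("items").doc(itemId);
  const myItemRef = db.collection("users").doc(uid).collection("my_items").doc(itemId);

  return db.runTransaction(async (t) => {
    const itemSnap = await t.get(itemRef);
    if (!itemSnap.exists) {
      throw new Error("Item was already claimed by another user");
    }
    t.delete(itemRef);                   // only one concurrent claim can succeed
    t.set(myItemRef, itemSnap.data());
  });
}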
Are Security rules able to solve this?
Yes, as an alternative you can secure your multi-location update by writing security rules. Please see below an example:
Is the way the Firebase database quickstart handles counts secure?
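For reference, the client-side multi-location update that such rules would be validating looks roughly like this sketch (web SDK; the items and users/{uid}/my_items paths are just an assumed layout):

import { getDatabase, ref, update } from "firebase/database";

// Remove the item from the shared pool and add it to the current user's
// my_items in one atomic multi-location write; security rules can then
// require that both locations change together.
async function claimItem(itemId, itemData, uid) {
  const db = getDatabase();
  const updates = {};
  updates["items/" + itemId] = null;                          // delete from the pool
  updates["users/" + uid + "/my_items/" + itemId] = itemData; // add to the user
  return update(ref(db), updates);
}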
I know it's not possible to update an array within a document directly in Firestore by index, and you have to do it on the client side.
What happens when you have multiple concurrent users writing to the array of a particular document? How do we ensure that updates to the array are made on the latest version of the document?
For example, when user A queries the document on the client side to update the array, there might have been a new update to the array made by user B that took place shortly after the query. Now if user A updates that array, it will be based on an old version of the document, and if user A writes back to the database it will essentially overwrite user B's updates...
How can this be handled?
If you want to protect against concurrent writes, use a transaction. They protect against the scenario you describe by performing a compare-and-set (in the case of the client-side SDKs) or a lock (in the case of the Admin SDKs).
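As a rough sketch of what that looks like in code (Admin SDK shown here; the client SDKs expose an equivalent runTransaction API, and the document path and the items field are placeholders):

const admin = require("firebase-admin");
admin.initializeApp();
const db = admin.firestore();

// Append an element to an array field based on the latest document contents.
async function appendItem(docPath, newItem) {
  const docRef = db.doc(docPath);
  await db.runTransaction(async (t) => {
    const snap = await t.get(docRef);   // always reflects the current version
    const items = snap.get("items") || [];
    items.push(newItem);
    t.update(docRef, { items });        // guaranteed to be based on the data read above
  });
}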
I am currently working on an iOS App that uses Cloud Firestore from Firebase.
I was wondering: what is the best way (cost, efficiency and security-wise) to upload some data to multiple Firestore documents simultaneously (or almost simultaneously)?
* The data I have to upload consists of the following: there are two users (User A is the user currently using the app, User B is the one whose profile is currently being seen by User A). If User A saves User B's profile, I must upload User B's UID to User A's Firestore Document. Then, I have to increase a counter in User A's Firestore Document. Finally, I must add User A's UID to User B's Firestore Document. - Note that with Firestore Document I mean either a document Field or a document Subcollection.
The choices are:
Upload everything from the client: seems the best method, cost-wise: it doesn't require extra Cloud Functions usage. I would create a Batch Operation and upload all the data from there*. The downside is that the client must be able to access multiple unrelated collections and documents.
Update one document from the client, then update everything else from Cloud Functions: this method is the best one efficiency and security-wise; the client only uploads data to the user's document*, without accessing unrelated collections and documents. Also, the client only has to upload a fraction of the data that it had to upload in the previous method, saving bandwidth and cellular data / WiFi usage. The downside is that the usage of Cloud Functions would increase, eventually resulting in more costs.
Update one document from the client, update the counter* from the client and then update everything else from Cloud Functions: this method is somewhat a hybrid between the first two, as I think that updating the counter from the client is more secure (Cloud Functions' .onWrite trigger may happen twice or more, increasing the counter multiple times?).
My first thought was to go with method 2, as it's far more secure and efficient, but I would like to have someone else's advice too, before "wasting" too much time coding something wrong.
I hope this isn't any kind of duplicate, as I couldn't find anything that answered my question with enough specificity.
Any advice would be much appreciated. Thank you.
I would follow the third approach: update the current user's own data from the client (the saved_profiles collection and the counter field), which is private and accessible only by that user (configure Firestore Security Rules accordingly), and update the other user's collection (users_who_saved_my_profile) with a triggered Cloud Function. As these operations are not controlled by security rules, they can access any part of the database. This way no unnecessary permissions are granted to any user.
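The triggered Cloud Function for the other user's side could look something like this sketch, assuming the client writes to a users/{userId}/saved_profiles/{savedUid} sub-collection (the paths and field names are illustrative, not taken from the question):

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

// When user A saves user B's profile, mirror that on user B's side.
// The Admin SDK bypasses security rules, so the client never needs
// write access to the other user's document.
exports.onProfileSaved = functions.firestore
  .document("users/{userId}/saved_profiles/{savedUid}")
  .onCreate((snapshot, context) => {
    const { userId, savedUid } = context.params;
    return admin.firestore()
      .collection("users").doc(savedUid)
      .collection("users_who_saved_my_profile").doc(userId)
      .set({ savedAt: admin.firestore.FieldValue.serverTimestamp() });
  });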
Firebase Firestore: How to monitor read document count by collection?
So first of all, something similar to this question was already asked almost a year ago, so please don't mark it as a duplicate, because I need some suggestions in detail.
So here is the scenario:
Let's say I have some collections in the database, and in the future I might need to perform some ML on the DB based on how the documents are visited.
That is, how many times a specific document is visited and for how long.
I know the solution mentioned above indirectly suggests performing a read followed by a write operation to the database to bump the read count every time I visit a document. But it seems this needs to be done from the client side.
Now, let's say I have some documents and the client is only allowed to read them, without access for writing or updating. In this case I would either have to maintain a separate collection specifically for the counts (which would of course still be written from the client side), or expose a specific field in the parent document (the actual document whose data I show to clients) as write-enabled while the remaining fields stay protected.
But collecting this data from the client side raises alarms for a lot of reasons, especially because I want to collect it even if the client is not authenticated.
I went through the Cloud Functions documentation and it seems there is no trigger that works as a watchdog listening for when a document is fetched.
So I want some suggestions on how we can do this in GCP by creating a custom trigger or hook on a server.
Just a head start would be very useful.
You cannot keep track of read counts if you are using the Client SDKs. You would have to fetch data from Firestore with some secure env in the middle (Cloud Functions or your own server).
A callable function like this may be useful:
// Returns data for the path passed in the data object
exports.getData = functions.https.onCall(async (data, context) => {
  const snapshot = await admin.firestore().doc(data.path).get();

  // Increment the calling user's read count
  await admin.firestore().collection("users").doc(context.auth.uid).update({
    reads: admin.firestore.FieldValue.increment(1)
  });

  return snapshot.data();
});
Do note that you are using the Firebase Admin SDK in this case, which has complete access to all Firebase resources (it bypasses all security rules). So you'll need to authorize the user yourself. You can get the UID of the user calling the function via context.auth.uid, and then some simple if-else logic will help.
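For completeness, the client would call the function roughly like this (a sketch with the web SDK; the document path is just an example):

import { getFunctions, httpsCallable } from "firebase/functions";

// Fetch a document through the callable function instead of reading
// Firestore directly, so the read gets counted server-side.
async function fetchDocument(path) {
  const getData = httpsCallable(getFunctions(), "getData");
  const result = await getData({ path });
  return result.data;
}

// e.g. fetchDocument("articles/some-article-id")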
One solution would be to use a Cloud Function in order to read or write from/to Firestore instead of directly interacting with Firestore from your front-end (with one of the Client SDKs).
This way you can keep one or more counters of the number of reads as well as calculate and apply specific access rights and everything is done in the back-end, not in the front-end. With a Callable Cloud Function you can get the user ID of authenticated users out of the box.
Note that by going through a Cloud Function you will lose some advantages of the Client SDKs, for example the ability to use a listener for real-time updates or the possibility to declare access rights through standard security rules. The following article covers the advantages and drawbacks of such an approach.
My users can create documents (let's say tasks) in a subcollection with a bunch of security rules checking for authentication, permissions and data validity. They can even select multiple tasks and copy them in the same collection.
Now, a regular user will likely create at most a hundred tasks at once, but what if someone with bad intentions manages to obtain my database credentials, authenticates and tries to create a huge number of valid documents programmatically? Firestore will scale without problems, and I will get an unexpected surprise on my Firebase bill.
This is my first concern, but I'm also thinking about the possibility to limit a collection size for other reasons, and it would be at the same time a solution for the problem described.
I read about the techniques for counting documents in a collection described in the Firestore documentation, but I did not find a solution there.
Keeping a counter in a document field updated with a transaction in a Cloud Function would be inefficient in my case. Distributed counters increase the complexity of my data model a bit, and I also would not know how to properly read those counters in security rules on every task creation, or whether that would even be an efficient solution.
Does anyone have suggestions?
I believe the only ways for a person to gain read/write access to your database would be either to hack Google's servers, in which case no one is safe and it doesn't really matter what you do, or to guess the exact names of your collections and documents.
As for the latter case, what I have done in my project is that for each collection and document I have used the name I wanted plus a random 10-char string (including all kinds of chars and numbers, for example Users-x5NfaS1jCb), which kind of serves as an independent, separate password every step of the way. This at least makes it difficult to guess the names of the collections and documents.
(Just like mentioned in the question) If using authentication does not cause any complications for your project, you can use it to further raise the security of your database by limiting access to users authenticating through your app only.
I guess (I have never tried it) you can make use of Firebase Functions to limit the number of documents available in any given collection based on the criteria you want. This function will be invoked every time a document is created in the database.
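That idea could look roughly like the following sketch (the tasks path and the cap are made up, and note that counting by fetching the whole collection is itself a read cost at scale):

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

const MAX_TASKS = 1000; // illustrative cap

exports.capTasks = functions.firestore
  .document("projects/{projectId}/tasks/{taskId}")
  .onCreate(async (snapshot, context) => {
    const tasksRef = snapshot.ref.parent;
    const allTasks = await tasksRef.get();
    // If the collection has grown past the cap, drop the newly created doc.
    if (allTasks.size > MAX_TASKS) {
      await snapshot.ref.delete();
    }
  });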
If by "obtain my database credentials" you mean finding the username and password to your Firebase account, well, it doesn't really matter what you do in that case either. If they know what they are doing, they can take advantage in so many ways that this particular issue will be the least of your problems.
All in all, if you ask me, your database is safe unless either someone guesses your collection and document names, or gains access to your Firebase account.
These are the only things I can think of for now. I'll try to update my answer later.
Unfortunately, when using the amazing Firebase Realtime Database (i.e., traditional Firebase) and the Cloud Functions thereof, there's really no concept of locking available, other than the basic transaction concept. (Which is awesome as far as it goes.) For example, you can't do, say, a read, delete, insert sequence.
We haven't used the new Firestore in a project yet; I'm wondering if it solves that particular problem?
This would make it tremendously useful for things like, well, almost anything really: transactional game currencies, logic, etc.
Is this an advantage of Firestore?
Transactions in Firestore are more flexible than those in the Realtime Database. With Realtime Database transactions, you had to choose a single location for the transaction, and you could only modify children under that location. All clients also had to be using transactions to safely modify that location.
With Firestore transactions, you can transact using any arbitrary set of documents across any set of collections in your database, and you have atomicity on changes made to those documents. You're not obliged to choose just one collection or just one document.
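For example, a single Firestore transaction can read and update documents in two different collections, which is exactly the kind of read-check-write flow a transactional game currency needs (a sketch; the collection and field names are made up):

const admin = require("firebase-admin");
admin.initializeApp();
const db = admin.firestore();

// Move "coins" from one player to another atomically across two collections.
async function transferCoins(fromId, toId, amount) {
  const fromRef = db.collection("players").doc(fromId);
  const toRef = db.collection("wallets").doc(toId);

  await db.runTransaction(async (t) => {
    const fromSnap = await t.get(fromRef);
    if ((fromSnap.get("coins") || 0) < amount) {
      throw new Error("Insufficient balance");
    }
    t.update(fromRef, { coins: admin.firestore.FieldValue.increment(-amount) });
    t.update(toRef, { coins: admin.firestore.FieldValue.increment(amount) });
  });
}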
There is no such thing as a "lock" in either product. Locks are not provided because they're difficult to manage correctly (avoiding deadlock) while also being scalable to millions of concurrent writers.