Should I be running this client or server side? - firebase

I need to get a user profile document, which then needs to access two other documents in separate collections, before it returns. At the moment I have implemented this client side but it takes a while. Should I/Can I run this using Cloud Functions, so that I just call one GET and retrieve everything in one go, rather than calling separate get functions sequentially from within my app?

The database retrieval from separate collections would take a similar amount of time whether it's done from the client or Cloud Function.
Collection queries should be very fast on your indexed fields, so the problem is probably the way you are handling asynchronicity. Are you waiting for the result from the first collection before starting the second query? You could dispatch both queries at the same time to cut your waiting time.
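For example, here is a minimal sketch with the Web v9 SDK that dispatches the two dependent reads in parallel; the collection names and the settingsId/statsId fields are illustrative, not from the original question:

import { getFirestore, doc, getDoc } from "firebase/firestore";

async function loadProfile(uid) {
  const db = getFirestore();
  // The profile document must be read first, since it points at the others
  const profileSnap = await getDoc(doc(db, "profiles", uid));
  const profile = profileSnap.data();
  // Fire both dependent reads at once instead of awaiting them one by one
  const [settingsSnap, statsSnap] = await Promise.all([
    getDoc(doc(db, "settings", profile.settingsId)),
    getDoc(doc(db, "stats", profile.statsId)),
  ]);
  return { ...profile, settings: settingsSnap.data(), stats: statsSnap.data() };
}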

You can store all your documents in Firebase Storage, collect the file references, and download all the documents at the same time. You can also access them more quickly, because you can keep copies on the SD card or internal storage.
Then, if the documents need to be rewritten there is no problem: if you download them from Storage again, the new versions automatically replace the local copies and the user still has access to the documents. I tell you this because I'm doing something similar and it's working great!
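As a rough sketch of that refresh step with the Web v9 Storage SDK (the paths and the local cache are illustrative):

import { getStorage, ref, getBytes } from "firebase/storage";

async function refreshDocuments(paths) {
  const storage = getStorage();
  // Re-downloading replaces any stale local copies
  const copies = await Promise.all(paths.map((p) => getBytes(ref(storage, p))));
  return copies; // persist these wherever your local cache lives
}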
Edit: As Sujil says, first set up authentication between the user and the database structure with Firebase, so that only people logged in or authenticated in your app can read/write files.

Related

Best method to upload data to multiple documents in Firestore

I am currently working on an iOS App that uses Cloud Firestore from Firebase.
I was wondering: what is the best way (cost, efficiency and security-wise) to upload some data to multiple Firestore documents simultaneously (or almost simultaneously)?
* The data I have to upload consists of the following: there are two users (User A is the user currently using the app, User B is the one whose profile is currently being seen by User A). If User A saves User B's profile, I must upload User B's UID to User A's Firestore Document. Then, I have to increase a counter in User A's Firestore Document. Finally, I must add User A's UID to User B's Firestore Document. - Note that with Firestore Document I mean either a document Field or a document Subcollection.
The choices are:
Upload everything from the client: seems the best method, cost-wise: it doesn't require extra Cloud Functions usage. I would create a Batch Operation and upload all the data from there*. The downside is that the client must be able to access multiple unrelated collections and documents.
Update one document from the client, then update everything else from Cloud Functions: this method is the best one efficiency- and security-wise; the client only uploads data to the user's document*, without accessing unrelated collections and documents. Also, the client only has to upload a fraction of the data that it had to upload in the previous method, saving bandwidth and cellular data / WiFi usage. The downside is that the usage of Cloud Functions would increase, eventually resulting in more costs.
Update one document from the client, update the counter* from the client and then update everything else from Cloud Functions: this method is somewhat of a hybrid between the first two, as I think that updating the counter from the client is more secure (Cloud Functions' .onWrite trigger may fire twice or more, increasing the counter multiple times?).
My first thought was to go with method 2, as it's far more secure and efficient, but I would like to have someone else's advice too, before "wasting" too much time coding something wrong.
I hope this isn't any kind of duplicate, as I couldn't find anything that answered my question with enough specificity.
Any advice would be much appreciated. Thank you.
I would follow the third approach: update the current user's data from the client (the saved_profiles collection and the counter field), which is private and only accessible by this user (configure Firestore Security Rules), and update the other user's collection (users_who_saved_my_profile) with a triggered Cloud Function. As Cloud Functions are not controlled by security rules, they can access any part of the database. This way no unnecessary permissions are granted to any user.
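A minimal sketch of the triggered part, assuming saved profiles land at users/{uid}/saved_profiles/{otherUid}; the collection names follow the answer, but the exact document structure is illustrative:

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.onProfileSaved = functions.firestore
  .document("users/{uid}/saved_profiles/{otherUid}")
  .onCreate((snap, context) => {
    const { uid, otherUid } = context.params;
    // Mirror the save into the other user's collection, which
    // security rules keep closed to clients
    return admin.firestore()
      .doc(`users/${otherUid}/users_who_saved_my_profile/${uid}`)
      .set({ savedAt: admin.firestore.FieldValue.serverTimestamp() });
  });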

how to maintain read counts of documents in firebase

Firebase Firestore: How to monitor read document count by collection?
First off, something similar to this question was already asked almost a year ago, so don't mark it as a duplicate, because I need some suggestions in detail.
So here is the scenario,
let's say I have some collections in the database, and in the future I might need to perform some ML on the DB based on document visits,
that is, how many times a specific document is visited and for how long.
I know the solution mentioned above indirectly suggests performing a read followed by a write operation to append the read count every time I visit the database. But it seems this needs to be done from the client side.
Now let's say I have some documents, and the client is only allowed to read them, without access for writing or updating. In that case I will either have to maintain a separate collection specifically for the counts, again written from the client side, or else expose a specific write-enabled field in the parent document (the actual document whose data I am showing to clients) while keeping the remaining fields protected.
But collecting this data from the client side raises alarms for a lot of reasons, because I want to collect it even if the client is not authenticated.
I looked at the Cloud Functions documentation, and it seems there is no trigger that works as a watchdog listening for a document being fetched.
So I want some suggestions on how we can do this in GCP by creating our own custom trigger or hook on a server.
Just a head start would be very useful.
You cannot keep track of read counts if you are using the Client SDKs. You would have to fetch data from Firestore with some secure env in the middle (Cloud Functions or your own server).
A callable function like this may be useful:
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

// Returns data for the document at the path passed in the data object
exports.getData = functions.https.onCall(async (data, context) => {
  // Fetch the requested document
  const snapshot = await admin.firestore().doc(data.path).get();
  // Increment the calling user's read count
  await admin.firestore().collection("users").doc(context.auth.uid).update({
    reads: admin.firestore.FieldValue.increment(1),
  });
  return snapshot.data();
});
Do note that you are using the Firebase Admin SDK in this case, which has complete access to all Firebase resources (it bypasses all security rules). So you'll need to authorize the user yourself. You can get the UID of the user calling the function like this: context.auth.uid, and then maybe some simple if-else logic will help.
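For instance, a check like this at the top of the callable could work; the allowed-path rule here is purely illustrative:

if (!context.auth) {
  throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
}
// Example rule: only allow users to read documents under their own path
if (!data.path.startsWith(`users/${context.auth.uid}/`)) {
  throw new functions.https.HttpsError("permission-denied", "Not your data.");
}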
One solution would be to use a Cloud Function to read or write from/to Firestore instead of directly interacting with Firestore from your front-end (with one of the Client SDKs).
This way you can keep one or more counters of the number of reads as well as calculate and apply specific access rights and everything is done in the back-end, not in the front-end. With a Callable Cloud Function you can get the user ID of authenticated users out of the box.
Note that by going through a Cloud Function you will lose some advantages of the Client SDKs, for example the ability to use a listener for real-time updates or the possibility to declare access rights through standard security rules. The following article covers the advantages and drawbacks of such an approach.

Firestore read/write vs cloud function read/write

I'm using Firestore, and I have these questions regarding how user behavior will have an impact on app costs.
What is more cost-effective:
To use a realtime form that saves to the database while the user is typing in a web form
To save all the fields in the form at once using a Firebase Function
Questions:
Is it overkill to proxy with Cloud Functions (just to avoid costs)?
When the user types (realtime updates), is it considered a new write to the database every time?
What is more cost-effective:
To use a realtime form that saves to the database while the user is typing in a web form
This is going to cost you a write each time the form is saved in realtime.
To save all the fields in the form at once using a Firebase Function
This is going to cost you a single write.
The difference in cost between the two should be obvious - multiple writes vs. a single write.
Questions:
Is it overkill to proxy with Cloud Functions (just to avoid costs)?
If you're proxying for no other reason than to save costs, it's overkill. The function invocation will cost you money, in addition to the document write, which will cost the same no matter where it originates.
When the user types (realtime updates), is it considered a new write to the database every time?
As I said before, yes, it is.
The only real reason to send form submissions through a function is the ability to do deep, secure checking for validity of the form fields. Client-side checks are not secure. You could use security rules to perform checks, but those are limited. If you need to make sure the form fields have strictly checked values, a Cloud Function might be your best choice. But it's not possible to tell given the information in your question.
There's no particular reason you need to use a function to save all at the same time -- at whatever point you would call the function, instead call a single update to the database. Using a function here is going to be strictly more expensive (assuming it provides no functionality other than the database write), since you incur the cost of the write and you incur the cost of a function execution.
Of course, it's possible you have some other reason to call a Cloud Function to do the write beyond a simple proxy -- such as to ensure constraints that cannot be enforced by security rules alone. In that case, the cost may be worth the added functionality.
As for whether it is better to batch or write in real time, it will certainly be cheaper to write all at once, as you are charged for every document write to Firestore. More specifically, each set or update is charged as a single write. So, it's definitely going to be less expensive to write the document only once with many fields, as opposed to writing it in real time (or per field) as the user is entering data.
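To make the difference concrete, here is a minimal sketch with the Web v9 SDK; the document path and field names are illustrative:

import { getFirestore, doc, setDoc, updateDoc } from "firebase/firestore";

const db = getFirestore();

// One billed write: save all fields at submit time
async function saveFormOnce(uid, form) {
  await setDoc(doc(db, "profiles", uid), form);
}

// One billed write per call: saving on every keystroke multiplies the cost
async function saveFieldRealtime(uid, field, value) {
  await updateDoc(doc(db, "profiles", uid), { [field]: value });
}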

Logging user actions in Firebase JS

I want to log the following user actions on my Firebase app:
sign in/out
page in/out
timestamp of action
Right now, I use my own function to log actions to the database location "root > user-logs > [user's id]".
Each action is logged as
[time in milliseconds] : [action]
These logs put a lot of data in my database.
However, I won't be accessing data stored at the user-logs locations, so my belief is that this won't lower the speed of read operations at other locations in the database.
Question 1: Is the above belief true?
Question 2: Is there a better way to log customized user actions?
I first thought of creating a CSV file in Cloud Storage and appending user actions to the file, but then realized that in order to write to a CSV file, I would first have to download it, so I decided that writing to the database would be much faster (and easier).
Thanks.
If you write data to a location that you don't read any data from, then that write operation will not affect operations that read data from somewhere else in your database.
But storing data that you're never going to read is unlikely. Otherwise there probably wouldn't be a reason to store it. More likely you're going to want to read/query this data at some point.
Given the append-only, ever-growing nature of your log data, it is unlikely that Firebase will offer the query capabilities that you need at that point. Therefore I'd recommend storing your data in a system that is more tailored towards the use case: storing lots of data and querying it. A perfect example of such a system is Google's BigQuery.
A common way to get the data into BigQuery is to keep doing what you do now from the client: write it into the database. Then create a Cloud Function that triggers on incoming log data from your database, writes that data to BigQuery, and then deletes it from the database.
With this approach you're only using the Firebase Database for transient storage, and do the heavy lifting in BigQuery.
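A minimal sketch of that relay, assuming logs are written under user-logs/{uid}/{timestamp} as in the question, and that a BigQuery dataset app_logs with a table user_actions already exists (those BigQuery names are illustrative):

const functions = require("firebase-functions");
const { BigQuery } = require("@google-cloud/bigquery");
const bigquery = new BigQuery();

exports.relayLog = functions.database
  .ref("/user-logs/{uid}/{ts}")
  .onCreate(async (snapshot, context) => {
    // Stream the new log entry into BigQuery
    await bigquery.dataset("app_logs").table("user_actions").insert([{
      uid: context.params.uid,
      timestamp: Number(context.params.ts),
      action: snapshot.val(),
    }]);
    // Delete the transient copy from the database
    return snapshot.ref.remove();
  });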

Firebase and indexing/search

I am considering using Firebase for an application that should allow people to use full-text search over a collection of a few thousand objects. I like the idea of delivering a client-only application (not having to worry about hosting the data), but I am not sure how to handle search. The data will be static, so the indexing itself is not a big deal.
I assume I will need some additional service that runs queries and returns Firebase object handles. I can spin up such a service at some fixed location, but then I have to worry about its availability and scalability. Although I don't expect too much traffic for this app, it can peak at a couple of thousand concurrent users.
Architectural thoughts?
Long-term, Firebase may have more advanced querying, so hopefully it'll support this sort of thing directly without you having to do anything special. Until then, you have a few options:
Write server code to handle the searching. The easiest way would be to run some server code responsible for the indexing/searching, as you mentioned. Firebase has a Node.JS client, so that would be an easy way to interface the service into Firebase. All of the data transfer could still happen through Firebase, but you would write a Node.JS service that watches for client "search requests" at some designated location in Firebase and then "responds" by writing the result set back into Firebase, for the client to consume.
Store the index in Firebase with clients automatically updating it. If you want to get really clever, you could try implementing a server-less scheme where clients automatically index their data as they write it... So the index for the full-text search would be stored in Firebase, and when a client writes a new item to the collection, it would be responsible for also updating the index appropriately. And to do a search, the client would directly consume the index to build the result set. This actually makes a lot of sense for simple cases where you want to index one field of a complex object stored in Firebase, but for full-text-search, this would probably be pretty gnarly. :-)
Store the index in Firebase with server code updating it. You could try a hybrid approach where the index is stored in Firebase and is used directly by clients to do searches, but rather than have clients update the index, you'd have server code that updates the index whenever new items are added to the collection. This way, clients could still search for data when your server is down. They just might get stale results until your server catches up on the indexing.
Until Firebase has more advanced querying, #1 is probably your best bet if you're willing to run a little server code. :-)
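A minimal sketch of option #1 with today's Admin SDK; the search/requests and search/responses paths and the search() helper are hypothetical:

const admin = require("firebase-admin");
admin.initializeApp();
const db = admin.database();

// Watch for search requests written by clients at a designated location
db.ref("search/requests").on("child_added", async (snap) => {
  const { term } = snap.val();
  const results = search(term); // hypothetical full-text lookup over the index
  // Write the result set back where the requesting client is listening
  await db.ref(`search/responses/${snap.key}`).set(results);
  // Clean up the handled request
  await snap.ref.remove();
});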
Google's current method for full-text search seems to be syncing with either Algolia or BigQuery using Cloud Functions for Firebase.
Here's Firebase's Algolia full-text search integration example, and their BigQuery integration example, which could be extended to support full-text search.
