How can I keep objects stored in firebase cloud function RAM? - firebase

My application needs to build a couple of large hashmaps before processing a user's request. Ideally I want to store these hashmaps in-memory on the machine, which means it never has to do any expensive processing and can process any incoming requests quickly.
But this doesn't work on Firebase, because there's always a chance that a user's request spins up a new instance, which sets off the very time-consuming preprocessing step.
So I tried designing my application to use the Firebase database instead, fetching only the data it needs for each request rather than holding everything in memory. But since the Cloud Functions are downloading lots of data from the database, I have already racked up over 1.7 GB of downloads this month just from my own testing. This goes over the quota.
There must be something I'm missing; all I want is permanent in-memory storage of a few hashmaps, so that they are ready by the time the function is called with a request. It seems like such a simple requirement; how come there is no way to do this?

If you want to store data in the container that runs your Cloud Functions, you can use its local /tmp file system, which is actually backed by memory. But this will disappear when the container is recycled, which happens when your function hasn't been accessed for a while, so this local file system will have to be rebuilt whenever a container spins up.
If you want permanent storage of the values you generate, consider using Google Cloud Storage. It is probably a more cost-effective option, and definitely the most scalable one.
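For the in-memory case specifically, a common pattern is to build the hashmap lazily into a variable declared in global scope, so every later invocation on the same warm instance reuses it, and to keep the serialized data in Cloud Storage so a cold start only has to download and parse it rather than recompute it. A rough sketch, where the bucket name, object path, and firebase-functions v1 API usage are all assumptions:

```typescript
// A minimal sketch, not the only way to do this. It assumes the precomputed
// hashmap has been serialized to JSON at "precomputed/lookup.json" in a
// Cloud Storage bucket (both names are placeholders).
import * as functions from "firebase-functions";
import { Storage } from "@google-cloud/storage";

const storage = new Storage();

// Global scope survives between invocations on the same warm instance,
// so the download/parse cost is only paid once per cold start.
let lookup: Map<string, unknown> | null = null;

async function getLookup(): Promise<Map<string, unknown>> {
  if (lookup) return lookup;
  const [contents] = await storage
    .bucket("my-app-precomputed")        // placeholder bucket name
    .file("precomputed/lookup.json")     // placeholder object name
    .download();
  lookup = new Map(Object.entries(JSON.parse(contents.toString())));
  return lookup;
}

export const handleRequest = functions.https.onRequest(async (req, res) => {
  const map = await getLookup();
  res.json({ value: map.get(String(req.query.key)) ?? null });
});
```

A cold instance still pays the download-and-parse cost once, but that is usually far cheaper than recomputing the maps or re-reading them piecemeal from the database on every request.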

Related

Backing up Firestore data incrementally

I'm trying to think of the best (read: automated, cheap, and easy to use) way to back up Firestore data for a production app.
I'm aware I could automate exports through a scheduled Cloud Function and send them over to a Cloud Storage bucket. The problem I have with this approach is that it does not allow for "incremental updates of the new and updated documents" but only for backing up entire collections. This means that most of the data will be backed up every single time, even though it hasn't changed since the last backup, driving up the cost for no reason.
The approach that came to mind was having a cloud function in "my-app" project that would listen to each and every change in the Firestore, and perform the same change in the Firestore of the "my-app-backup" project.
This way, I only back up the changed data. Furthermore, backed up data would never become stale (as it's backed up in real-time), unlike the first approach where automated backups happen e.g. daily or weekly.
Is this even possible: having a single Cloud Function in the first Firebase project write data into another Firebase project? If not, could the data be written elsewhere (not in another Firebase project)? Does the approach even make sense, or do you have a better suggestion?
If you want to export only updated documents, you can store an updatedAt field and query documents with where("updatedAt", ">", lastExportTime). Then you can periodically run a Cloud Function to export these documents. This should only cost N reads (N = number of updated documents) every time the function runs.
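A minimal sketch of that scheduled export; the collection name, the updatedAt field, and the exportState/lastRun checkpoint document are all placeholder assumptions:

```typescript
// A rough sketch of the incremental approach above. Collection names,
// the "updatedAt" field and the "exportState/lastRun" doc are assumptions.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const exportChanged = functions.pubsub
  .schedule("every 24 hours")
  .onRun(async () => {
    const stateRef = db.doc("exportState/lastRun");
    const state = await stateRef.get();
    const lastExportTime = state.exists
      ? state.get("timestamp")
      : admin.firestore.Timestamp.fromMillis(0);

    // Only documents touched since the previous export are read here,
    // so the run costs roughly N reads for N changed documents.
    const changed = await db
      .collection("posts")                       // placeholder collection
      .where("updatedAt", ">", lastExportTime)
      .get();

    for (const doc of changed.docs) {
      // Write each changed document to wherever the backup lives
      // (a bucket, another project, etc.). Left abstract on purpose.
      functions.logger.info("would back up", doc.ref.path);
    }

    await stateRef.set({ timestamp: admin.firestore.Timestamp.now() });
  });
```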
Furthermore, backed up data would never become stale (as it's backed up in real-time)
This works too but can also get expensive if the document updates are too frequent.

Callback to cloud functions/cloud run if timeout happens?

I have some Cloud Run services and Cloud Functions that parse a large number of files that users upload. Sometimes users upload an exceedingly large number of files, and that causes these functions to time out even when I set them to their maximum runtime limits (15 minutes for Cloud Run and 9 minutes for Cloud Functions, respectively). I have a loading icon, backed by a database entry, that shows the progress of processing each batch of uploaded files. If the function times out, the loading icon currently gets stuck for that batch in perpetuity, because the database is not updated after the function is killed.
Is there a way for me to create, say, a callback for the Cloud Run/Functions to update the database and indicate that the parsing process failed if they timed out? There is currently no way for me to know a priori whether a batch of files is too large to process, and clearly I cannot use a simple try/catch here, as the execution environment itself will be killed.
One popular method is to have a public-facing API endpoint that you can invoke, passing on the remaining queued information. You should assume that this endpoint could be compromised, so some sort of OTP (one-time token) should be used. The details depend on factors such as how the files are uploaded and how the trigger is handled, which may require you to store that information in a database location so it can be retrieved later.
You can set a flag on the db before you start processing, then after processing, clear/delete the flag. Then have another function regularly check for the status.
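A rough sketch of that flag-plus-watchdog idea, assuming a batches collection with status and startedAt fields and a 20-minute cutoff (all of these are placeholders):

```typescript
// A sketch of the flag/watchdog idea. The "batches" collection, the
// "status"/"startedAt" fields and the 20-minute cutoff are all assumptions.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const reapStuckBatches = functions.pubsub
  .schedule("every 5 minutes")
  .onRun(async () => {
    // Anything still marked "processing" well past the maximum function
    // runtime almost certainly hit the timeout and was killed.
    const cutoff = admin.firestore.Timestamp.fromMillis(
      Date.now() - 20 * 60 * 1000
    );
    const stuck = await db
      .collection("batches")
      .where("status", "==", "processing")
      .where("startedAt", "<", cutoff)
      .get();

    const updates = stuck.docs.map((doc) =>
      doc.ref.update({ status: "failed", failedReason: "timeout" })
    );
    await Promise.all(updates);
  });
```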
No such callback functionality exists for either product.
Serverless products are generally not meant to be used for batch processing where the batches can easily be larger than the limits of the system. They are meant for small bits of discrete work, such as simple API calls.
If you want to process larger amounts of data, consider first uploading it to Cloud Storage (which will accept files of any size), then sending a Pub/Sub message upon completion to a backend compute product that can handle the processing requirements (such as Compute Engine).
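A sketch of that hand-off, assuming a placeholder upload bucket and Pub/Sub topic and the firebase-functions v1 Storage trigger:

```typescript
// The upload lands in Cloud Storage, and a lightweight function just
// announces it on a Pub/Sub topic for a longer-running worker (for
// example on Compute Engine) to pick up. Bucket and topic names are
// placeholders.
import * as functions from "firebase-functions";
import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();

export const announceUpload = functions.storage
  .bucket("my-app-uploads")                    // placeholder bucket
  .object()
  .onFinalize(async (object) => {
    await pubsub.topic("files-to-process").publishMessage({
      json: { bucket: object.bucket, name: object.name, size: object.size },
    });
  });
```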
Direct answer: you might be able to achieve this by creating a sink with a filter on the relevant Stackdriver (Cloud Logging) logs, where a Cloud Function timeout crash is recorded, so that matching log records are pushed into a Pub/Sub topic. On the other side of that topic you can have another Cloud Function, which implements the desired functionality.
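The receiving side of that sink could look roughly like the sketch below; the function-timeouts topic, the log filter, and the idea of carrying a batch id in the log entry's labels are all assumptions:

```typescript
// The receiving end of the log-sink idea above, sketched with made-up
// names. It assumes a Cloud Logging sink filters for timeout entries and
// routes them to the "function-timeouts" Pub/Sub topic.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

export const onFunctionTimeout = functions.pubsub
  .topic("function-timeouts")
  .onPublish(async (message) => {
    const entry = message.json; // the exported LogEntry
    functions.logger.warn("timeout detected", entry);

    // How you map the log entry back to a batch is application-specific;
    // here we pretend the batch id was logged as a label.
    const batchId = entry?.labels?.batchId;
    if (batchId) {
      await admin
        .firestore()
        .doc(`batches/${batchId}`)
        .update({ status: "failed", failedReason: "timeout" });
    }
  });
```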
Indirect answer: without context, scope, and requirement details it is difficult to give a good suggestion, but based on some guesses I am not sure the design is optimal. Serverless services are meant for handling independent and relatively small chunks of data. If you have something large, you might use a first Cloud Function to divide it into reasonably small chunks, so they can be processed independently by a second Cloud Function. In your case, can you have a Cloud Function per file, for example? If a file is too large (a few GB, or dozens of GB), can it be saved to Cloud Storage and read/processed in chunks, so that the Cloud Functions are triggered from Cloud Storage? And so on. That approach should help, but it has a drawback: complexity increases, because you have to coordinate and control how the process is going.

Firebase Functions, Cold Starts and Slow Response

I am developing a new React web site using Firebase hosting and firebase functions.
I am using a MySQL database (SQL required for heavy data reporting) in GCP Cloud SQL, and GCP Secret Manager to house the database username/password.
The Firebase functions are used to pull data from the database and send the results back to the React app.
In my local emulator everything works correctly and it's responsive.
When it's deployed to Firebase, I'm noticing that the 1st and sometimes the 2nd request to a function takes about 6 seconds to respond. After that they respond in less than 1 second. For the slow responses I can see in the logs that the database pool is being initialized.
So the slow responses are the first hit to the instance. I'm assuming that in my case two instances are being created.
Note that the functions that do not require a database respond quickly regardless of it being the 1st or 2nd call.
After about 15 minutes of not using a service I have the same problem. I'm assuming the instances are being reclaimed and a new instance is being created.
The problem is that each function will have its own independent db pool, so each function's first (and sometimes second) response will be slow.
The site will see low traffic meaning most users will experience this slow response.
By removing the reference to Secret Manager and hard-coding the username/password, the response time has dropped to less than 3 seconds. But this is still not acceptable.
Is there a way to:
Increase the time that a function is reclaimed if not used?
Tag an instance that it should not be reclaimed?
Is there a way to create a global db pool that does not get shutdown between recycles?
Is there an approach to work with db connections in Firebase Functions to avoid reinit of the db pool?
Is this the nature of functions and I'm limited to this behavior?
Since I am in early development, would moving to App Engine/Node.js (the flexible environment) resolve the recycling issues?
First of all, the issues you have been experiencing with the 1st and the 2nd requests taking the longest time are called cold starts.
This totally makes sense because new instances are spun up. You may have a cold start when:
Your function has been deployed but not yet triggered.
Your function has been idle (not processing requests) long enough that it has been recycled for resources.
Your function is auto-scaling to handle capacity and creating new instances.
I understand that your five questions are intended to work around the issue of Cloud Functions recycling the instances.
The straight answer to questions 1 to 4 is No, because Cloud Functions implement the serverless paradigm.
This means that one function invocation should not rely on in-memory state (such as a database pool) set by a previous invocation.
Now this does not mean that you cannot improve the cold start boot time.
Generally, the number one contributor to cold start boot time is the number of dependencies.
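Alongside trimming dependencies, a commonly recommended mitigation is to initialize heavy objects such as the database pool lazily in global scope, so the cost is paid once per warm instance instead of on every request. A rough sketch, with the connection settings as placeholders:

```typescript
// A sketch of lazy initialization. The pool is created on first use and
// then reused by every invocation that lands on the same warm instance.
// Connection settings are placeholders; the Secret Manager lookup would
// go inside getPool() as well so it also runs only once per instance.
import * as functions from "firebase-functions";
import * as mysql from "mysql2/promise";

let pool: mysql.Pool | null = null;

async function getPool(): Promise<mysql.Pool> {
  if (pool) return pool;
  // Creating the pool happens once per instance, not once per request,
  // which is what makes the later requests fast.
  pool = mysql.createPool({
    host: process.env.DB_HOST,       // placeholder configuration
    user: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: "reports",
    connectionLimit: 2,
  });
  return pool;
}

export const report = functions.https.onRequest(async (req, res) => {
  const db = await getPool();
  const [rows] = await db.query("SELECT NOW() AS now");
  res.json(rows);
});
```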
This video from the Google Cloud Tech channel goes over exactly the issue you have been experiencing and describes in more detail the practices used to tune Cloud Functions.
If, after going through the best practices from the video, your cold start times are still unacceptable, then, as you have already suggested, you would need to use a product that allows you to keep a minimum number of instances spun up, like App Engine Standard.
You can further improve the readiness of the App Engine Standard instances by implementing warmup requests later on.
Warmup requests load your app's code into a new instance before any live requests reach that instance. The linked documentation also covers loading requests, which are similar to cold starts: the time when your app's code is being loaded into a newly created instance.
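On App Engine standard for Node.js, a warmup handler is just a route for GET /_ah/warmup, enabled by listing warmup under inbound_services in app.yaml. A minimal Express sketch, with initDbPool() standing in for whatever is expensive to set up:

```typescript
// A sketch of an App Engine warmup handler. App Engine sends warmup
// requests to /_ah/warmup once "inbound_services: [warmup]" is enabled
// in app.yaml. initDbPool() is a placeholder for the real pool /
// Secret Manager initialization.
import express from "express";

const app = express();

async function initDbPool(): Promise<void> {
  // placeholder for the expensive initialization work
}

app.get("/_ah/warmup", async (_req, res) => {
  await initDbPool();
  res.status(200).send("warmed up");
});

app.get("/report", async (_req, res) => {
  res.json({ ok: true });
});

app.listen(Number(process.env.PORT) || 8080);
```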
I hope you find this useful.

Peaks in Load of Firebase RT Database cause my applications to slow down

I have a dashboard in which I use the Firebase Realtime Database. I also have a backend that publishes output to the front-end, and it runs in batches. Every time there is a batch, I notice that the load peaks at almost 100% (most of it is writing, but there is also some considerable loading). This is causing my front-end dashboard to slow down.
My question is what I could do to avoid this issue. Is there a way to scale the load up, so that it's less likely to approach 100%? What is the Firebase-recommended way to handle this?
This type of spiky load is indeed commonly caused by backend processes that run in batches.
The Firebase backend storage layer runs pretty much as a single-threaded process for each database, handling each (read or write) request from clients in turn. So while it is processing one request, any other requests are awaiting their turn.
This means that if you have a particularly large read or write request, it keeps other clients from getting their requests served. For this reason you'll want to take care to divide the interactions with the database (especially from the backend process) into small operations, so as not to interfere with the other clients.
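One way to do that, sketched with the Admin SDK and a guessed chunk size of 100, is to break one huge write into a series of smaller multi-path updates:

```typescript
// A sketch of breaking one huge write into smaller multi-path updates so
// other clients get a turn in between. The chunk size of 100 is a guess;
// tune it against your own latency numbers.
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.database();

async function writeInChunks(
  updates: Record<string, unknown>,
  chunkSize = 100
): Promise<void> {
  const entries = Object.entries(updates);
  for (let i = 0; i < entries.length; i += chunkSize) {
    const chunk = Object.fromEntries(entries.slice(i, i + chunkSize));
    // Each update() is a single, reasonably small request to the database.
    await db.ref().update(chunk);
  }
}

// e.g. await writeInChunks({ "results/a": 1, "results/b": 2 });
```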
If the backend process also needs to read a considerable part of the database for its work, consider whether you can make it read from a backup of the database instead of from the live database. The backups are made out-of-band and don't interfere with clients, so if you can use a backup as the source for the data, it will significantly reduce the read load that the backend process causes.

How to take druid segment data backup?

I am new to Druid. In our application we use Druid for time-series data, and this can get pretty large (10-20 TB).
Druid provides a deep storage facility. But if this deep storage crashes or is not reachable, it will result in data loss, which in turn affects the analytics the application is running.
I am thinking of taking incremental backups of Druid segment data to some secure location, like an FTP server. Then, if deep storage is unavailable, the data can be restored from this FTP server.
Is there any tool/utility available in Druid to incrementally back up/restore Druid segments?
In general it's important to take regular snapshots of the metadata storage as this is the "index" of what's in the Deep Storage. Maybe one snapshot per day, and store them for however long you like. It's good to store them for at least a couple of weeks, in case you need to roll back for some reason.
You also need to back up new segments in deep storage when they appear. It isn't important to take consistent snapshots, just to get every file eventually.
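As far as I know there is no built-in incremental backup utility, so this usually ends up as a small script. A very rough sketch, assuming local/NFS deep storage mounted on the machine running the script (S3 or HDFS deep storage would use the corresponding SDK instead), and leaving the upload to the FTP server as a separate step:

```typescript
// A very rough sketch of "get every new segment file eventually",
// assuming local/NFS deep storage mounted at DEEP_STORAGE and a backup
// target at BACKUP_DIR (all paths are placeholders).
import * as fs from "fs";
import * as path from "path";

const DEEP_STORAGE = "/mnt/druid/deep-storage";      // placeholder
const BACKUP_DIR = "/mnt/backup/druid-segments";     // placeholder
const STATE_FILE = "/var/lib/druid-backup/last-run"; // placeholder

function lastRun(): number {
  try {
    return Number(fs.readFileSync(STATE_FILE, "utf8"));
  } catch {
    return 0; // first run: copy everything
  }
}

function copyNewFiles(dir: string, since: number): void {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const src = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      copyNewFiles(src, since);
    } else if (fs.statSync(src).mtimeMs > since) {
      // Only files modified since the previous run are copied.
      const dest = path.join(BACKUP_DIR, path.relative(DEEP_STORAGE, src));
      fs.mkdirSync(path.dirname(dest), { recursive: true });
      fs.copyFileSync(src, dest);
    }
  }
}

const started = Date.now();
copyNewFiles(DEEP_STORAGE, lastRun());
fs.mkdirSync(path.dirname(STATE_FILE), { recursive: true });
fs.writeFileSync(STATE_FILE, String(started));
```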
Also see https://groups.google.com/g/druid-user/c/itfKT5vaDl8
One other note, since you mentioned data loss: Deep Storage is not queried directly - queries execute against the local segment cache in, for example, the Historical process. Deep Storage is written to at ingestion time, so while it is unavailable you might "lose" data that cannot be ingested, but you will continue to get analytics capability, as the already-loaded data is on the Historicals. Just a thought!
I hope that helps!