I'm wondering if it's okay to use setTimeout in Firebase Cloud Functions? It's kinda working for me locally, but it has a very weird behavior: unpredictable execution of the timeout callbacks.
Example: I set a timeout with a duration of 5 minutes, so my callback should execute after 5 minutes. Most of the time it does that correctly, but sometimes the callback gets executed a lot later than 5 minutes.
But it's only doing so on my local computer. Will this behavior also happen when I deploy my functions to Firebase?
Cloud Functions have a maximum time they can run, which is documented in time limits. If your timeout fires its callback after that time limit has expired, the function will likely already have been terminated. The way expiration happens may differ between the local emulator and the hosted environment.
In general I'd recommend against any setTimeout of more than a few seconds. In Cloud Functions you're billed for as long as your function is active. If you have a setTimeout of a few minutes, you're billed for all that time, even though all your code is doing is waiting for a clock to expire. It's likely more (cost) efficient to see if the service you're waiting for can call a webhook, or to use a cron job to check whether it has completed.
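For example, here's a minimal sketch of polling on a schedule instead of sleeping in a function (the checkPendingWork name, the pending collection, and its dueAt field are all made up for illustration):

```ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Runs every 5 minutes via Cloud Scheduler; nothing is billed between runs,
// unlike a function that sleeps on a setTimeout.
export const checkPendingWork = functions.pubsub
  .schedule("every 5 minutes")
  .onRun(async () => {
    const due = await admin
      .firestore()
      .collection("pending") // hypothetical collection of items to check
      .where("dueAt", "<=", admin.firestore.Timestamp.now())
      .get();

    await Promise.all(due.docs.map((doc) => doc.ref.update({ done: true })));
  });
```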
Related
This documentation page describes how to enable retries for asynchronous Firebase functions. It mentions that the maximum retry period is 7 days.
Cloud Functions guarantees at-least-once execution of an event-driven function for each event emitted by an event source. However, by default, if a function invocation terminates with an error, the function will not be invoked again, and the event will be dropped. When you enable retries on an event-driven function, Cloud Functions will retry a failed function invocation until it completes successfully or the retry window expires (by default, after 7 days).
Is there a way to reduce the retry period from the default value of 7 days to a few minutes?
Posting my comment as an answer:
"Unfortunately, the default Firebase Functions retry period of 7 days cannot be shortened to a few minutes. The longest possible retry period is specified by Google Cloud Functions and is 7 days. Making a new function that is activated by a timer could be a workaround to change the default Firebase Functions retry period from 7 days to a few minutes. This timer-triggered function can be used to monitor the performance of the original function and, if necessary, attempt it at predetermined intervals."
To minimize cold starts, I've set a minimum instance count for my Google Cloud Function. I set it with the firebase-functions SDK like this:
functions.runWith({ minInstances: 1 })
...and I can see it confirmed in the Google Cloud Console.
I'm noticing that after every deployment, I still encounter one cold start. I would have assumed that one instance would be primed and ready for the first request, but that doesn't seem to be the case. For example, here are the logs:
You can see that ~16 hours after deployment, the first request comes in. It's a cold start that takes 8139ms. The next request comes in about another hour later, but there's no cold start and the request takes 556ms, significantly faster than the first request.
So is this the expected behaviour? Do we still encounter one cold start even if minimum instances is set? Should I then be priming the cloud function after every deployment with a dummy request to prevent my users from encountering this first cold start?
Tl;dr: The first execution of a function that has minimum instances set is not technically a cold start, but probably will be slower than later executions of that instance.
Minimum instances of a function are "warmed up" immediately on deploy and sit in a warm but idle state, ready to respond to a request. However, the functions we write often need to do extra setup work when they're actually triggered for the first time.
For example, we may use dynamic imports to pull in a library or need to set up a connection to a remote DB. Even though the function instance is warm, the extra work that has to be done on the first execution means that it will probably be slower than later executions.
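As an illustration, that setup work is typically cached in module scope, so only the first execution of an instance pays for it. A sketch (the function and collection names are hypothetical):

```ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Module-scope cache: populated by the first execution of this instance,
// reused by every execution after that.
let db: admin.firestore.Firestore | undefined;

function getDb(): admin.firestore.Firestore {
  if (!db) {
    // One-time setup like this is why the first execution is slower,
    // even on a pre-warmed (min-instance) instance.
    db = admin.firestore();
  }
  return db;
}

export const api = functions
  .runWith({ minInstances: 1 })
  .https.onRequest(async (req, res) => {
    const snap = await getDb().collection("users").limit(1).get(); // hypothetical collection
    res.json({ docs: snap.size });
  });
```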
The benefit of the minimum instances setting is that later executions benefit from all the setup work done by the first execution, and can be much faster than if they were scaled back to zero and had to set themselves up all over again on the next request.
Update: Occasionally, an idle instance may be killed by the Cloud Functions backend. If this happens, another instance will be spun up immediately to meet the required minimum instances setting, but that new instance will need to go through its extra setup work again the first time it is triggered. However, this really shouldn't happen often.
The documentation does not make a hard guarantee about the behavior (emphasis mine):
To minimize the impact of cold starts, Cloud Functions attempts to keep function instances idle for an unspecified amount of time after handling a request.
So, there is only an attempt (no guarantee), and it kicks in after handling a request (not after deployment), and you don't know how long the instance will be kept around. As stated, it sounds like you might want to make a priming request, along with the expectation that it might still not always work exactly the way you want.
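If you do decide to prime it, the request can be as simple as a one-off script run after deploy (the URL is a placeholder for your function's endpoint):

```ts
// warmup.ts — run once after `firebase deploy`; the URL below is a
// placeholder for your HTTPS function's endpoint.
const url = "https://us-central1-<project-id>.cloudfunctions.net/api";

fetch(url)
  .then((res) => console.log(`warm-up request returned ${res.status}`))
  .catch((err) => console.error("warm-up request failed:", err));
```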
After deployment of a Cloud Function, running a Firestore transaction always takes about 5 seconds. The time is lost between calling runTransaction and getting invoked inside the given transaction function.
It does not matter if anything happens inside the transaction or not.
After running the Cloud Function two times, the 5-second delay disappears.
Is there any solution to this?
Most likely this time is spent loading the SDK, and establishing the first route/connection to the servers. I doubt there's much you can do in your code about this.
You could consider using the Firestore Lite SDK, which is a lot smaller and thus loads faster. It doesn't support a local disk cache nor realtime listeners, but in Cloud Functions those are unlikely to matter.
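For reference, a sketch of what the Lite SDK usage looks like (the project ID and document path are placeholders):

```ts
import { initializeApp } from "firebase/app";
import { getFirestore, runTransaction, doc } from "firebase/firestore/lite";

const app = initializeApp({ projectId: "<project-id>" }); // placeholder config
const db = getFirestore(app);

async function incrementCounter() {
  // Same transaction API, served from the much smaller Lite bundle.
  await runTransaction(db, async (tx) => {
    const ref = doc(db, "counters", "visits"); // example path
    const snap = await tx.get(ref);
    tx.set(ref, { count: (snap.data()?.count ?? 0) + 1 });
  });
}
```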
I have a Firebase realtime database structure that looks like this:
rooms/
  room_id/
    user_1/
      name: 'hey',
      connected: true
connected is a Boolean indicating whether the user is connected, and it will be set to false using the onDisconnect() handler that Firebase provides.
Now my question is: if I trigger a cloud function every time the connected property of a user changes, can I run a setTimeout() for 45 seconds? If the connected property is still false at the end of the setTimeout() (for which I read that particular connected value from the db), then I delete the node of the user (like the user_1 node above).
Will this setTimeout() pose a problem if many triggers fire simultaneously?
In short, Cloud Functions have a maximum time they can run.
If your timeout fires its callback after that time limit has expired, the function will already have been terminated.
Consider a cron job instead; it's a very simple and efficient way to call scheduled code in Cloud Functions.
If you use setTimeout() to delay the execution of a function for 45 seconds, that's probably not enough time to cause a problem. The default timeout for a function is 1 minute, and if you exceed that, the function will be forced to terminate immediately. If you are concerned about this, you should simply increase the timeout.
Bear in mind that you are paying for the entire time that a function executes, so even if you pause the function, you will be billed for that time. If you want a more efficient way of delaying some work for later, consider using Cloud Tasks to schedule that.
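A sketch of what deferring work with Cloud Tasks might look like (the project ID, queue name, and target URL are placeholders):

```ts
import { CloudTasksClient } from "@google-cloud/tasks";

const client = new CloudTasksClient();

// Enqueue an HTTP callback 45 seconds from now instead of sleeping inside
// the function. Project ID, location, queue, and URL are placeholders.
async function scheduleCleanup(userId: string) {
  const parent = client.queuePath("<project-id>", "us-central1", "cleanup");

  await client.createTask({
    parent,
    task: {
      scheduleTime: { seconds: Math.floor(Date.now() / 1000) + 45 },
      httpRequest: {
        httpMethod: "POST",
        url: "https://us-central1-<project-id>.cloudfunctions.net/cleanupUser",
        headers: { "Content-Type": "application/json" },
        body: Buffer.from(JSON.stringify({ userId })).toString("base64"),
      },
    },
  });
}
```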
From my understanding, your functionality is intended to monitor users as they connect to, and stay connected to, the Firebase Realtime Database from the Cloud Function. Is that correct?
For monitoring the Firebase Realtime Database, GCP provides tools to monitor the DB's performance and usage.
Or do you simply want to keep the connection alive?
If the requests to the Firebase Realtime DB are RESTful requests like GET and PUT, the connection is only kept open for the duration of each request; with a high volume of requests, this still costs more.
Normally, we suggest that clients use the native SDKs for their app's platform instead of the REST API. The SDKs maintain open connections, reducing the SSL encryption costs and database load that can add up with the REST API.
However, if you do use the REST API, consider using HTTP keep-alive to maintain an open connection, or use server-sent events with keep-alive set, which can reduce costs from SSL handshakes.
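For example, with Node's built-in https module you can reuse one TLS connection across REST calls (the database URL is a placeholder):

```ts
import https from "https";

// A keep-alive agent reuses TCP/TLS connections across requests,
// avoiding a fresh SSL handshake for every REST call.
const agent = new https.Agent({ keepAlive: true });

function getUser(userId: string): Promise<string> {
  const url = `https://<project-id>-default-rtdb.firebaseio.com/users/${userId}.json`;
  return new Promise((resolve, reject) => {
    https
      .get(url, { agent }, (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(body));
      })
      .on("error", reject);
  });
}
```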
I'm developing a cloud function that triggers on a new database object and needs to delete that object 8 hours later. Right now I'm using a setTimeout to schedule that operation, but I'm not comfortable with that method, as I know function execution should be fast (60 seconds max, I read somewhere).
Any idea on how to achieve this in a proper way?
The setTimeout() method is definitely not the way to go, in this case. There is no guarantee that the Cloud Function instance will still be running 8 hours later.
Google doesn't provide a scheduler for Cloud Functions yet, and your best bet would be to create a scheduling queue of some sort. When the object is created, add a task to the queue to delete it 8 hours later. Periodically (every minute, say) run a cron job via a cron service that triggers an HTTPS Cloud Function that reads the queue to see if there are any objects to be acted on.
Alternately, if the object has a create time associated with it, you could run an HTTPS Cloud Function periodically (triggered by an external cron job, again) that does a query for expired objects based on their create time and removes them.
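A sketch of that second approach as an HTTPS function you'd hit from an external cron service (the objects path and createTime field are assumptions about your data):

```ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

const EIGHT_HOURS_MS = 8 * 60 * 60 * 1000;

// Hypothetical HTTPS function, hit by an external cron service, that
// deletes objects whose createTime is more than 8 hours in the past.
export const cleanupExpired = functions.https.onRequest(async (req, res) => {
  const cutoff = Date.now() - EIGHT_HOURS_MS;
  const expired = await admin
    .database()
    .ref("objects") // hypothetical path to your created objects
    .orderByChild("createTime")
    .endAt(cutoff)
    .once("value");

  const deletions: Promise<void>[] = [];
  expired.forEach((snap) => {
    deletions.push(snap.ref.remove());
  });
  await Promise.all(deletions);

  res.send(`deleted ${deletions.length} expired objects`);
});
```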