We have a cloud function set up with pub/sub triggers.
The function is invoked via topic(NAME).onPublish().
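Roughly, the setup looks like this (a minimal sketch; the export name is a placeholder and NAME is the topic):

const functions = require('firebase-functions');

exports.handleMessage = functions.pubsub.topic('NAME').onPublish((message, context) => {
  console.log('Received message from pub sub');
  // ... process the message, returning a value or promise when the work is done
  return null;
});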
If the function is invoked when it is cold, it always runs twice. The log entries below are newest first:
Function execution took 284 ms, finished with status: 'ok' METHOD_NAME METHOD_ID
Received message from pub sub METHOD_NAME METHOD_ID
Function execution started METHOD_NAME METHOD_ID
Function execution took 24271 ms, finished with status: 'ok' METHOD_NAME METHOD_ID
Received message from pub sub METHOD_NAME METHOD_ID
Function execution started METHOD_NAME METHOD_ID
After that all future messages only run once, until the function goes cold again.
Is this because it takes a long time for the first invocation to complete and the timeout causes it to be run again? Any way to prevent this?
Startup time is almost certainly the issue. To verify this, try the following:
comment out portions of the function until it runs fast, to see if the problem goes away (time it in your local terminal if you can, using the timeit module)
increase the Acknowledgement Deadline on the subscription; it defaults to 10 seconds, so it could easily be the problem; try 20, 40, etc. (see the example command after this list)
ensure that on its first run the function takes less time than the function's Timeout value (defaults to 60 seconds, so not likely to be the problem)
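For the acknowledgement deadline, you can raise it on the function's Pub/Sub subscription with gcloud, for example (the subscription name is a placeholder; for a deployed function it is the auto-created subscription on that topic):

gcloud pubsub subscriptions update SUBSCRIPTION_NAME --ack-deadline=40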
Related
In Firebase console, the last Event message in Functions/Log section is "Function execution took 60006 ms, finished with status: 'timeout'". Is "timeout" the status the function is supposed to finish with? Or did I miss something in the code that would say "that's the end"?
You should send a response, like res.send(200), at the end for it to terminate properly.
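For an HTTPS function that would look something like this (a sketch; doWork is a hypothetical promise-returning helper):

exports.myEndpoint = functions.https.onRequest((req, res) => {
  doWork()                                               // hypothetical async work
    .then(() => res.status(200).send('done'))            // response sent: the function terminates normally
    .catch((err) => res.status(500).send(String(err)));  // still send a response so it doesn't hit the 60 s timeout
});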
I want to know what happens if the connection drops while a Cloud Function is executing its tasks.
cron-job.org says this:
You should design your scripts in a way that they send as little data as possible, ideally just a short status message at the end of the execution, e.g. "OK" — or simply nothing. In case your script is written in PHP and needs more than 30 seconds of run-time, you can use the following trick to let it continue to execute in the background: Use the PHP function ignore_user_abort(true) to tell PHP to continue the script execution after disconnection.
Let's say the task is something like querying a database with a certain condition and deleting the matched data.
If there is too much data and the task does not finish within 30 seconds, what will happen?
I'm puzzled at what we see when running this setup:
FuncA: Google Cloud Function trigger-http
FuncB: Google Cloud Function trigger-topic
FuncA is called by an HTTP client. Upon being called, FuncA does some light work setting up a JSON object describing a task to perform, and stores this JSON in Google Cloud Storage. When the file has been written, FuncA publishes a message to a topic with a pointer to the gs:// file. At this point FuncA responds to the client and exits. Duration typically 1-2 seconds.
FuncB is informed that a message has been published to the topic and is invoked. The task JSON is picked up and work begins. After processing, FuncB stores the result in the Firebase Realtime Database. FuncB exits at this point. Duration typically 10-20 seconds.
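A rough sketch of the two functions as described (bucket, topic and export names are placeholders; it assumes the @google-cloud/storage and a recent @google-cloud/pubsub client library):

const functions = require('firebase-functions');
const { Storage } = require('@google-cloud/storage');
const { PubSub } = require('@google-cloud/pubsub');

// FuncA: HTTP trigger. Write the task JSON to GCS, publish a pointer to it, respond, exit.
exports.funcA = functions.https.onRequest(async (req, res) => {
  const file = new Storage().bucket('task-bucket').file(`tasks/${Date.now()}.json`);
  await file.save(JSON.stringify(req.body));
  await new PubSub().topic('task-topic').publishMessage({ data: Buffer.from(`gs://task-bucket/${file.name}`) });
  res.status(200).send('task queued');
});

// FuncB: topic trigger. Pick up the task JSON, do the heavy work, store the result.
exports.funcB = functions.pubsub.topic('task-topic').onPublish(async (message) => {
  const gsPath = Buffer.from(message.data, 'base64').toString();
  // ... download the JSON behind gsPath, process it (10-20 s), write the result to the Realtime Database
});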
As FuncA and FuncB are in no way associated, each live out their own process lifecycles under different function names (and triggers), and only communicate through pub/sub topic message passing (one direction, from A to B), we would expect that FuncA can run again and again, publishing messages at any rate. FuncB should be triggered and fan out to scale with whatever pace FuncA is called at.
This is however not what happens.
In the logs we see results following this pattern:
10:00:00.000 FuncA: Function execution started
10:00:02.000 FuncA: Function execution took 2000 ms, finished with status: 'ok'
10:00:02.500 FuncB: Function execution started
10:00:17.500 FuncB: Function execution took 15000 ms, finished with status: 'ok'
10:00:18.000 FuncA: Function execution started
10:00:20.000 FuncA: Function execution took 2000 ms, finished with status: 'ok'
10:00:20.500 FuncB: Function execution started
10:00:35.500 FuncB: Function execution took 15000 ms, finished with status: 'ok'
...
The client calling FuncA clearly gets to wait for both FuncA and FuncB to finish before being let through with the next request. It is expected that FuncA would finish and allow a new call in immediately, at whatever pace the calling client can "throw at it".
Beefing the client up with more threads only repeats this pattern, such that "paired" calls to FuncA->FuncB always wait for each other.
Discuss, clarify, ... Stack Overflow, do your magic! :-)
Thanks in advance.
I read here that endpoint spin-up is supposed to be transparent, which I assume means cold start times should not differ from regular execution times. Is this still the case? We are getting extremely slow and unusable cold start times, around 16 seconds, across all endpoints.
Cold start:
Function execution took 16172 ms, finished with status code: 200
After: Function execution took 1002 ms, finished with status code: 304
Is this expected behaviour and what could be causing it?
UPDATE: The cold start times seem to no longer be an issue with node 8, at least for me. I'll leave my answer below for any individuals curious about keeping their functions warm with a cron task via App Engine. However, there is also a new cron method available that may keep them warm more easily. See the firebase blog for more details about cron and Firebase.
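For reference, the newer approach looks roughly like this (a sketch assuming a recent firebase-functions release with scheduled functions; the export name, interval and document path are placeholders):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.keepWarm = functions.pubsub.schedule('every 5 minutes').onRun((context) => {
  // Do a little real work so the warm instance exercises the same code paths as real requests
  return admin.firestore().doc('meta/ping').get();
});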
My cold start times have been ridiculous, to the point where the browser will time out waiting for a request (e.g. if it's waiting for a Firestore API call to complete).
Example
A function that creates a new user account (auth.user().onCreate trigger), then sets up a user profile in Firestore.
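That trigger is roughly the following (a sketch; the collection and field names are placeholders):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.createProfile = functions.auth.user().onCreate((user) => {
  // Set up the Firestore profile document for the new account
  return admin.firestore().collection('profiles').doc(user.uid).set({
    email: user.email || null,
    createdAt: admin.firestore.FieldValue.serverTimestamp(),
  });
});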
First Start After Deploy: consistently between 30 and 60 seconds; frequently gives me a "connection error" on the first try when cold (this is after waiting several seconds once the Firebase CLI says "Deploy Complete!").
Cold Start: 10 - 20 seconds
When Warm: All of this completes in approximately 400ms.
As you can imagine, not many users will sit around waiting more than a few seconds for an account to be set up. I can't just let this happen in the background either, because it's part of an application process that needs a profile set up to store input data.
My solution was to add a "ping" function to all of my APIs, and to create a cron-like scheduler task that pings each of my functions every minute, using App Engine.
Ensure the ping function does something real, like accessing a Firestore document or setting up a new user account, and does not just respond to the HTTP request.
See this tutorial for app engine scheduling: https://cloud.google.com/appengine/docs/flexible/nodejs/scheduling-jobs-with-cron-yaml
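The ping function itself can be something along these lines (a sketch; the App Engine cron job from the tutorial is then pointed at it every minute, e.g. via a small App Engine handler that forwards the request):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.ping = functions.https.onRequest(async (req, res) => {
  // Touch Firestore so the warm instance has already exercised the real dependencies
  await admin.firestore().doc('meta/ping').get();
  res.status(200).send('pong');
});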
Well, it is about resource usage of Cloud Functions, I guess; I was there too. While your functions are idle, Cloud Functions releases their resources; at the first call it reassigns those resources, and by the second call you are fine. I cannot say whether that is good or not, but that is the case.
If you try to return a value from an async function, nothing is left pending in the main function definition (functions.https.onCall), so GCP will think that the function has finished and try to remove resources from it.
Previous breaking code (taking 16+ seconds):
functions.https.onCall((data, context) => {
  return asyncFunction();
});
After returning a promise in the function definition, the function times are a lot faster (milliseconds), as the function waits for the promise to resolve before removing resources.
functions.https.onCall((data, context) => {
  return new Promise((resolve) => {
    asyncFunction()
      .then((message) => {
        resolve(message);
      }).catch((error) => {
        resolve(error);
      });
  });
});
I have a Firebase Cloud Function that is triggered by a PubSub message. The function should either consume the message or wait to consume it at a later time.
Is there a way to return from this function without acknowledging the message, so that it will be re-delivered at a later time?
For example, can I return a Message from the cloud function? The docs seem to indicate this is possible, if I'm reading them right:
Returns: non-null functions.CloudFunction containing non-null functions.pubsub.Message (a Cloud Function which you can export).
When Pub/Sub triggers a function (Firebase or Cloud Function), if the function ends correctly the message is acknowledged. But if the function crashes or raises an exception (in summary, an abnormal termination), the message is not acked and is redelivered immediately.
This retry loop continues until the message is acked or the message expires (default and maximum retention of 7 days, minimum of 10 minutes; you can customize the messageRetentionDuration on the subscription).
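So a "consume later" pattern can be sketched by deliberately failing the execution (this assumes retries are enabled for the function, e.g. the "Retry on failure" option; readyToProcess and processMessage are hypothetical helpers):

const functions = require('firebase-functions');

exports.consume = functions.pubsub.topic('my-topic').onPublish((message) => {
  if (!readyToProcess(message)) {   // hypothetical readiness check
    // Failing the execution means the message is not acked, so Pub/Sub redelivers it later.
    throw new Error('not ready yet, retry later');
  }
  return processMessage(message);   // hypothetical processing step
});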