I'd like to create a Cloud Function which sends an e-mail based on a change in my database. I use Postmark, but that's not relevant for this function. I looked at the firebase-examples.
My question is: what if the mail service returns an error, or if the mail service is temporarily down? I don't see any form of error handling in the examples.
My 'solution' would be to try again in, for example, 5 minutes. Is that possible and advisable in Cloud Functions?
If you throw an exception when sending the email fails, Cloud Functions should retry the function for up to 7 days. To enable that:
Open the detailed usage stats for your function in the Firebase console
Edit the function
Click the link to configure retry
Enable "Retry on failure"
I haven't tried it myself for your use case yet, but it works for my storage-triggered function when it fails.
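For illustration, here is a minimal sketch of what that looks like in code, assuming a Firestore-triggered function and a hypothetical sendWithPostmark() helper wrapping the Postmark client. The point is simply that the error must escape the function (thrown, or as a rejected promise) for the retry policy configured above to apply.

const functions = require('firebase-functions');

exports.sendMailOnCreate = functions.firestore
  .document('mail/{mailId}')
  .onCreate(async (snap) => {
    const mail = snap.data();
    try {
      await sendWithPostmark(mail); // hypothetical helper; may throw if the mail service is down
    } catch (err) {
      console.error('Mail service failed, rethrowing so the event is retried', err);
      throw err; // with "Retry on failure" enabled, Cloud Functions will retry this event
    }
  });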
Really bizarre that Firebase doesn't seem to work quite like a typical Express app. Whatever I write in Express and copy-paste into Firebase Functions typically throws an error. There is one issue I can't figure out on my own, though.
This endpoint is designed to start a job and live long enough to finish an even longer task. The request is a webhook (send docs, we will transform them and ping you at another specified webhook when it's done). A very simplified example below:
router.post('/', (req, res) => {
  try {
    generateZipWithDocuments(data) // on purpose it's not async so request can return freely
    res.sendStatus(201)
  } catch (error) {
    res.send({ error })
  }
})
On my local machine it works (both as a pure Express app and in locally emulated Firebase Functions), but in the cloud it has problems, and even though I put in a cavalcade of console.log() calls I don't get much information. No error from Firebase.
If generateZipWithDocuments() is asynchronous but not awaited, res.sendStatus() will be executed immediately after it is called, and the Cloud Function will be terminated (so the job done by generateZipWithDocuments() will not be completed). See the doc here for more details.
You have two possibilities:
You make it asynchronous and you wait until its job is completed before sending the response. You would typically use async/await for that (see the sketch after this list). Note that the maximum execution time for a Cloud Function is 9 minutes.
You delegate the long-running job to another Cloud Function and then send the response. For delegating the job to another Cloud Function, you should use Pub/Sub. See Pub/Sub triggers, the sample quickstart, and this SO thread for more details on how to implement that. In the Pub/Sub-triggered function, when the job is done you can inform the user via an email, a notification, the update of a Firestore document on which you have set a listener, etc. If generateZipWithDocuments() takes a long time, this is clearly the most user-friendly option.
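To make the first option concrete, here is a minimal sketch of the handler from the question, assuming generateZipWithDocuments() returns a Promise; everything else is a placeholder.

router.post('/', async (req, res) => {
  try {
    await generateZipWithDocuments(data); // assumed to return a Promise; waited on before responding
    res.sendStatus(201);
  } catch (error) {
    res.status(500).send({ error: error.message });
  }
});

For the second option, the handler would instead publish a Pub/Sub message describing the job and return immediately, leaving the heavy lifting to the Pub/Sub-triggered function.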
Does anyone know if there is an easy way to trigger a function every time I re-deploy some function to Firebase?
I have a specific Firebase function which I define inside GCP (this way, when I do "firebase deploy" it doesn't re-deploy, uninstall, or touch my current function in any form),
but sometimes I might update this function manually on GCP and I would like to trigger an inner function of its code every time that happens... Is that possible?
ex:
exports.decrementAction = (req, res) => {
  /* do stuff */
  res.status(200).send("ok");
};

function auxiliary() {
  // to be called on re-deploy
}
Unfortunately, there isn't an easy way for you to trigger a function from within the code that is being redeployed. Since that code is only being deployed at that moment, this can't be done automatically.
The alternative would be to keep this function separate from the "root" function at deploy time and use triggers to run this other Cloud Function when the first one is redeployed. This way, it would be possible to run it based on the deployment of the other.
You can get more information on the triggers available for Cloud Functions here: Calling Cloud Functions. With them, you should be able to configure the timing of the execution.
Besides that, it might be worth raising a Feature Request with Google to verify the possibility of adding this in future releases.
Let me know if this information helps!
I think there is a way.
With Pub/Sub you can capture logs from Stackdriver (docs). Those services allow you to export only the logs related to the deployment of a Cloud Function.
The store could be, for instance, Cloud Firestore. As you probably know, a trigger is available for Cloud Firestore events.
Finally, every time a log entry related to a function's deployment is generated, it will be stored and will trigger a function attached to that event. In that function, you can parse or filter the logs.
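As a rough sketch of that chain: assume a Logs Router sink exports the deployment-related audit log entries to a Pub/Sub topic (the topic name below is hypothetical). The sketch skips the intermediate Firestore store described above and reacts to the Pub/Sub message directly.

const functions = require('firebase-functions');

// React to deployment-related log entries exported by a Logs Router sink
// to the (hypothetical) "function-deployments" Pub/Sub topic.
exports.onFunctionDeployed = functions.pubsub
  .topic('function-deployments')
  .onPublish((message) => {
    const logEntry = message.json; // the exported log entry, parsed from the message payload
    console.log('Deployment detected:', logEntry.resource && logEntry.resource.labels);
    // Filter on the function name here and call your auxiliary() logic as needed.
  });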
I want to call a function every 5 minutes to check that my site is operational, and be emailed if the server doesn't respond.
I have decided to use cron-job.org, as it was recommended to me by a colleague and has the features that I need. My function is an https.onCall function.
I have set it up with the above configuration, following the rules outlined here:
https://firebase.google.com/docs/functions/callable-reference
but my Cloud Function is saying this in the console: "Function execution took 10 ms, finished with status code: 400". Can anyone point me in the right direction? I can't see what I am doing wrong. I can see that error code 400 means a badly formatted request, but why?
The function is set to just return if there are no params.
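For reference, the callable protocol linked above expects a POST with Content-Type: application/json and the payload wrapped in a "data" field; a request without that shape is rejected with a 400 before your code runs. A sketch of a request that should pass the protocol check (the URL and function name are placeholders):

// Sketch of the request shape an https.onCall function expects.
fetch('https://us-central1-<project-id>.cloudfunctions.net/healthCheck', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ data: {} }), // even with no params, a "data" field must be present
})
  .then((res) => res.json())
  .then((body) => console.log(body));

If cron-job.org is sending a GET, or a POST without that JSON body, that alone would explain the 400.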
I saw the following error in Cloud Function logs:
Error: function terminated. Recommended action: inspect logs for
termination reason. Function invocation was interrupted.
But I'm not getting any logs in Stackdriver or in Cloud Functions for Firebase logs.
When looking at Stackdriver, make sure to enable all logs in the filter, or use this advanced filter:
resource.type="cloud_function"
resource.labels.function_name="<function_name>"
resource.labels.region="<function_region>"
Uncaught exceptions produced by your function will appear in Stackdriver Error Reporting, but if they don't, try adding a catch clause and manually writing the error to stdout.
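A minimal sketch of that catch-and-log approach, assuming an HTTPS function and a hypothetical doWork() helper:

const functions = require('firebase-functions');

exports.myFunction = functions.https.onRequest(async (req, res) => {
  try {
    await doWork(req.body); // hypothetical helper that may throw
    res.sendStatus(200);
  } catch (err) {
    // Writing the error yourself guarantees it shows up in the function's logs,
    // even if Error Reporting misses it.
    console.error('doWork failed:', err);
    res.sendStatus(500);
  }
});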
If for some reason you find an error that is not being logged to Stackdriver, you can reach out to GCP support so they can check whether additional logging is available for that particular function termination occurrence.
I have a Firebase Function that subscribes to a Cloud PubSub topic. App is initialized very simply like this:
import * as admin from 'firebase-admin';
admin.initializeApp();
I'm getting this error:
"Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
at GoogleAuth.getApplicationDefaultAsync (/srv/functions/node_modules/google-auth-library/build/src/auth/googleauth.js:161:19)
at process._tickCallback (internal/process/next_tick.js:68:7)"
Here's the weird thing. It typically works. In other words, if I trigger it a second time it works. And a third time. Most often it seems to fail the first time it runs after a new firebase deploy and possibly on a "cold start."
Not sure what I'm doing wrong and why it would fail only on the first run.
SOLVED! This answer helped:
Error: Could not load the default credentials (Firebase function to firestore)
From within a Firebase Function for an API call, I was publishing to a Cloud PubSub topic like this:
pubsub.topic(topicName).publish(dataBuffer, customAttributes)
I was not awaiting the response and was immediately sending the 2XX HTTP response back to the client. The execution seemed to continue fine, but obviously it did not behave as intended.
Sometimes the API call itself would fail (and the message would never be published), but sometimes not. In other cases, the publish would succeed but the Firebase Function subscribing to the topic would fail!
In all cases, this seemed to resolve itself after running the script a second time. For this reason, I still believe it had something to do with a cold start.
But since I changed it to await like this:
await pubsub.topic(topicName).publish(dataBuffer, customAttributes)
I have not seen this problem happen again.
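For completeness, a sketch of the corrected handler, assuming the @google-cloud/pubsub client and a placeholder topic name; the only relevant change is that the publish is awaited before the HTTP response is sent.

const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();
const topicName = 'my-topic'; // placeholder

exports.api = functions.https.onRequest(async (req, res) => {
  const dataBuffer = Buffer.from(JSON.stringify(req.body));
  const customAttributes = { source: 'api' }; // placeholder attributes
  // Await the publish so the function instance isn't frozen or terminated
  // before the message is actually sent.
  await pubsub.topic(topicName).publish(dataBuffer, customAttributes);
  res.status(200).send('queued');
});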