Execute code if a Firebase function times out

Is there a built-in method to execute code in a Firebase function if it times out?
I would also like to be able to execute code when the function ends (no matter how it ends). I'm thinking this would behave like catch and finally in a try block.
Is this possible in a Firebase function or must I implement my own timer?

There is no mechanism provided for this. Furthermore, implementing your own timer is prone to failure because Cloud Functions will forcibly terminate any code left running after a timeout or crash. You really have no guarantee that any function code will reliably complete after it has been started.
Your best bet is to understand and enable retries, then write code in your function to determine if your fallback code should run during a retry execution. Your function should strive to complete without any errors in order to tell Cloud Functions to stop retrying a background event that would trigger the function.

Related

Apache Flink: Is there a way to know the retry count in asyncInvoke() api while using async retry strategies?

I am a bit stuck on how to get the current retry count in the async I/O operator when using async retry strategies.
Also, which API gets invoked when the retry count exceeds the set limit, and can it be overridden so that we can hook in some custom logic?
There is a doc that describes the retry strategies here. I think at the moment there is no way to get the attempt count for a particular operator.
I think that once the job times out or fails, AsyncFunction.timeout() is called, so you can hook your code up there to make sure some default result is produced.

Multiple Firestore changes with batch vs cloud functions

In a chat app, if I add a new message to the messages collection, I also need to update that particular chat's document in another collection to show the last message and the time when it was sent. Right now I am triggering a cloud function every time a new message comes, in order to update the metadata for the chat. Am I doing the right thing or would it be more appropriate to use Batched writes instead?
There is a difference that you should be aware of when using one approach vs. the other. When using a batch write, according to the official documentation:
You can execute multiple write operations as a single batch that contains any combination of set(), update(), or delete() operations. A batch of writes completes atomically and can write to multiple documents.
This means the updates are made atomically: either all of them succeed or all of them fail.
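A sketch of the batched-write approach using the Firestore Node.js API shape (`batch`, `set`, `update`, `commit`). The collection names "messages" and "chats" are hypothetical, and `db` is passed in (it would be `admin.firestore()` on the server) so the function stays easy to test:

```javascript
function sendMessageAtomically(db, chatId, messageId, text, sentAt) {
  const batch = db.batch();
  // Write the message document and the chat metadata in one batch:
  // either both writes commit, or neither does.
  batch.set(db.collection('messages').doc(messageId), { chatId, text, sentAt });
  batch.update(db.collection('chats').doc(chatId), {
    lastMessage: text,
    lastMessageAt: sentAt,
  });
  return batch.commit();
}
```

With this shape there is no window in which the message exists but the chat metadata is stale.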
If you instead use a function that is triggered once a message is sent, you are performing two separate actions: the first is to send the message, and the second is to update the metadata once the message is successfully sent. In this case, the message can be sent but your function may still fail. According to the official documentation:
By default, without retries enabled, the semantics of executing a background function are "best effort." This means that while the goal is to execute the function exactly once, this is not guaranteed.
These are the reasons why background functions fail to complete:
On rare occasions, a function might exit prematurely due to an internal error, and by default the function might or might not be automatically retried.
More typically, a background function may fail to successfully complete due to errors thrown in the function code itself. Some of the reasons this might happen are as follows:
The function contains a bug and the runtime throws an exception.
The function cannot reach a service endpoint, or times out while trying to reach the endpoint.
The function intentionally throws an exception (for example, when a parameter fails validation).
The function (written in Node.js) returns a rejected promise or passes a non-null value to a callback.
The workaround in this case is to enable retry to handle transient errors.
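A sketch of the trigger-based alternative, with retries in mind. The actual trigger wiring (`functions.firestore.document('messages/{id}').onCreate(...)`) is omitted so the logic stays testable in isolation; all names are hypothetical:

```javascript
function chatMetadataUpdate(msg) {
  // Derived only from the message document itself, so re-running on a
  // retried delivery of the same event computes exactly the same update.
  return { lastMessage: msg.text, lastMessageAt: msg.sentAt };
}

function makeOnMessageCreate(db) {
  // The returned handler is what would be registered as the onCreate trigger.
  return (snap) => {
    const msg = snap.data();
    // set(..., { merge: true }) is idempotent here: applying it twice leaves
    // the chat document in the same state, which is what retries require.
    return db.collection('chats').doc(msg.chatId)
      .set(chatMetadataUpdate(msg), { merge: true });
  };
}
```

As long as the trigger's write is idempotent like this, enabling retries is safe: a duplicate delivery simply rewrites the same metadata.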

Trigger a Cloud Function and take action when function completes

I have an application where the web UI (React) writes to the database to trigger a Cloud Function; the function then runs and updates a completion value in the database.
How can I show a 'progress' indication and take it down when the Cloud Function has completed?
I initially thought I would do something like this pseudo code with Promises:
return updateDatabaseToTriggerFunctionExec()
  .then(() => listenForFunctionDoneEvent())
  .then(() => Promise.resolve());
However, I'm not sure how to know when the function has finished and updated a value. What is the recommended way to detect when a triggered Cloud Function has completed?
You'll have to implement something like a command-response model using the database as a relay: you push commands into one location, and the function pushes results into another location that the client that issued the command listens to. What makes this work is that the locations of the commands and responses are known to both the client and the server, and both share knowledge of the push id that was generated for the client's command.
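A client-side sketch of that relay using the Realtime Database API shape (`ref`, `push`, `on`, `off`). The paths `commands` and `responses` are hypothetical conventions shared with the function, and `db` is passed in (it would be `firebase.database()` in a real client) so the flow stays easy to test:

```javascript
function runCommand(db, payload) {
  // Issue the command; the generated push id links command and response.
  const cmdRef = db.ref('commands').push(payload);
  const responseRef = db.ref('responses/' + cmdRef.key);
  return new Promise((resolve) => {
    // The Cloud Function is expected to write its result under
    // responses/<push id>; resolve as soon as that value appears.
    responseRef.on('value', (snap) => {
      if (snap.val() !== null) {
        responseRef.off('value');
        resolve(snap.val());
      }
    });
  });
}
```

The UI can show its progress indicator when `runCommand` is called and take it down when the returned promise resolves.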
I go over this architecture a bit during my session at Google I/O 2017 where I build a turn-based game with Firebase.
An alternative is to use an HTTP function instead, which has a more clearly defined request-response cycle.

Difference between this.unblock and Meteor.defer?

In my methods, I use this.unblock and Meteor.defer interchangeably.
The idea, as I understand it, is to follow the good practice of letting other method calls from the same client start running, without waiting for a single method to complete.
So is there a difference between them?
Thanks.
this.unblock()
only runs on the server
allows the next method call from the client to run before this call has finished. This is important because if you have a long-running method you don't want it to block the UI until it finishes.
Meteor.defer()
can run on either the server or the client
basically says "run this code when there's no other code pending." It's similar to Meteor.setTimeout(func, 0).
If you deferred execution of a function on the server for example, it could still be running when the next method request came in and would block that request.
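The deferred-ordering behavior can be sketched in plain Node; `defer` here is a stand-in for Meteor.defer (which only exists inside Meteor), while `this.unblock()` has no plain-Node equivalent because it is about Meteor's per-client method queue:

```javascript
const order = [];

function defer(fn) {
  // Meteor.defer(func) behaves much like Meteor.setTimeout(func, 0):
  // the callback runs once no other code is pending.
  setTimeout(fn, 0);
}

defer(() => order.push('deferred'));
order.push('synchronous');
// On the next event-loop turn, order is ['synchronous', 'deferred'].
```

This also illustrates the answer's caveat: the deferred callback still occupies the server when it eventually runs, so deferring does not unblock subsequent method calls the way this.unblock() does.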

AWS Flow Framework, .get on Promises waits forever

I'm using the samples for the AWS Flow Framework, specifically helloworld and fileprocessing. I have followed all the setup instructions given here. All the client classes are successfully created with the aspect weaver. It all compiles and runs.
But trying to do .get on a Promise within an asynchronous method doesn't work. It waits forever and a result is never returned.
What am I doing wrong?
In particular, the helloworld sample has no asynchronous method, nor does it attempt to call .get on a Promise. Copied outright it works, and I can see the "hello world" message printed in the activities client. Yet if I create a stub asynchronous method that calls get on the Promise<Void> returned by printHello, the activities client is never called and the workflow waits forever. In fact, the example works if I assign the returned promise to a variable; the problem only arises if I call .get on it. The fileprocessing example, which does have asynchronous methods, doesn't work.
I see the workflows and the activity types being registered in my aws console.
I'm using the Java SDK 1.4.1 and Eclipse Juno.
My list of unsuccessful attempts:
Tried it with Eclipse Indigo in case the aspect weaver does different things.
Made all asynchronous methods private, as suggested by this question.
Calling .isReady() on the Promise always returns false, even when I call it after I see the "hello world" message printed (I made certain of the ordering by sleeping in between). This leads me to think that Promise.get blocks the caller until Promise.isReady is true, but because that somehow never happens, the client is not called and the workflow waits forever.
Tried different endpoints.
My bad: I had a misconfiguration in the aop.xml file, so the load-time AspectJ weaving for the remote calls was not working correctly.
