In a chat app, if I add a new message to the messages collection, I also need to update that particular chat's document in another collection to show the last message and the time when it was sent. Right now I am triggering a cloud function every time a new message comes, in order to update the metadata for the chat. Am I doing the right thing or would it be more appropriate to use Batched writes instead?
There is a difference between the two approaches that you should be aware of. When using a batch write, according to the official documentation:
You can execute multiple write operations as a single batch that contains any combination of set(), update(), or delete() operations. A batch of writes completes atomically and can write to multiple documents.
This means the writes are applied atomically: either all of them succeed or none of them do.
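A minimal sketch of the batched-write approach could look like the following. The Firestore instance is passed in as `db`; the collection names (`messages`, `chats`) and the metadata fields are assumptions based on the question, not a fixed schema.

```javascript
// Write the new message and the chat's metadata in one atomic batch.
// `db` is expected to follow the Firestore Admin SDK surface:
// db.batch(), db.collection().doc(), batch.set(), batch.update(), batch.commit().
function sendMessageWithMetadata(db, chatId, message) {
  const messageRef = db.collection("messages").doc(); // auto-generated ID
  const chatRef = db.collection("chats").doc(chatId);

  const batch = db.batch();
  batch.set(messageRef, message);        // 1) store the message itself
  batch.update(chatRef, {                // 2) update the chat's metadata
    lastMessage: message.text,
    lastMessageAt: message.sentAt,
  });
  return batch.commit();                 // atomic: all writes or none
}
```

Because the commit is atomic, there is no window in which the message exists but the chat metadata is stale.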
If you instead use a function that is triggered once a message is sent, you are performing two separate actions: sending the message, and updating the metadata after the message has been stored. The message can be written successfully and yet your function may still fail. According to the official documentation:
By default, without retries enabled, the semantics of executing a background function are "best effort." This means that while the goal is to execute the function exactly once, this is not guaranteed.
These are the reasons why background functions can fail to complete:
On rare occasions, a function might exit prematurely due to an internal error, and by default the function might or might not be automatically retried.
More typically, a background function may fail to successfully complete due to errors thrown in the function code itself. Some of the reasons this might happen are as follows:
The function contains a bug and the runtime throws an exception.
The function cannot reach a service endpoint, or times out while trying to reach the endpoint.
The function intentionally throws an exception (for example, when a parameter fails validation).
Functions written in Node.js return a rejected promise or pass a non-null value to a callback.
The workaround in this case is to enable retries to handle transient errors.
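With retries enabled on deployment, a common pattern is to decide in code which errors deserve a retry: rethrow transient errors so the event is redelivered, and swallow permanent ones so a bad event isn't retried forever. A hedged sketch, where `isTransient` and the error shapes are hypothetical:

```javascript
// Wrap a background-function handler with retry-aware error handling.
// Rethrowing signals Cloud Functions to retry the event (if retries are
// enabled); resolving normally means the event is considered handled.
function makeRetryAwareHandler(handler, isTransient) {
  return async (event, context) => {
    try {
      return await handler(event, context);
    } catch (err) {
      if (isTransient(err)) {
        throw err; // rethrow => the platform retries the event
      }
      console.error("permanent failure, dropping event", context.eventId, err);
      return null; // resolve => no retry
    }
  };
}
```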
Say you have an HTTP endpoint which, when triggered, publishes a PubSub message and then sends a response.
There is another Cloud Function which is subscribed to this topic; it performs what it needs to perform, and then ends.
How would you go about tracing the entire sequence of function executions triggered by an initial request (in this example, the first HTTP request)?
I see in the Google Cloud Platform logs that there is a function Execution ID, but this changes with each function that is triggered, so it's hard to follow the sequence of executions. Is there an automated way of doing this? Or does it need a custom implementation?
Thanks!
You will need a custom solution. If you want to trace this all the way back to the client request, you will need to generate some unique ID on the client, and pass that along to the HTTP function, which would then pass that along to the pubsub function via the message payload. And so on.
You might find it helpful to use StackDriver logging to collect the logs around that unique ID.
Most Firebase Cloud Functions trigger signatures include a context object which has an eventId property.
Looking at the documentation, this doesn't seem to be the case for HTTPS-triggers.
Is it safe to assume that calls to HTTP functions will only trigger once per request?
Jack's answer is mostly correct, but I'll clarify here.
The documentation on execution semantics is here. To be clear:
HTTP functions are invoked at most once. This is because of the synchronous nature of HTTP calls, and it means that any error on handling function invocation will be returned without retrying. The caller of an HTTP function is expected to handle the errors and retry if needed.
There is no guarantee that an HTTP function is executed exactly once; some executions may fail before they reach the function. This is different from all other (background) types of functions, which provide at-least-once execution:
Background functions are invoked at least once. This is because of the asynchronous nature of handling events, in which there is no caller that waits for the response and could retry on error. An emitted event invokes the function with potential retries on failure (if requested on function deployment) and sporadic duplicate invocations for other reasons (even if retries on failure were not requested).
So, for background functions to be 100% correct, they should be idempotent.
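A minimal sketch of that idempotency, deduplicating on `context.eventId`. The in-memory `Set` here is a stand-in for a durable store (e.g. a Firestore document per event ID), since function instances don't share memory:

```javascript
// Wrap a background-function handler so duplicate deliveries of the same
// event (same context.eventId) run the side effect only once.
function makeIdempotent(handler, store = new Set()) {
  return async (event, context) => {
    if (store.has(context.eventId)) {
      return null; // duplicate delivery: skip the side effect
    }
    const result = await handler(event, context);
    store.add(context.eventId); // mark processed only after success
    return result;
  };
}
```

Note the ordering: the event is marked processed only after the handler succeeds, so a failed attempt can still be retried.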
If you want to retry failed HTTP functions, the client will have to perform the retry, and in that case, you may want that HTTP function to be idempotent as well. The client will have to provide the unique key on retry, in that case.
Note that it's not possible to mark an HTTP function for internal retries. That's only possible for background functions.
HTTPS functions trigger at most once, compared to background functions, which have an at-least-once delivery guarantee.
(I can't find the docs where I read it. If I find them, I will update the question.)
I have an application where the web UI (React) updates a value in the database to trigger a Cloud Function; the Cloud Function then runs and updates a completion value in the database.
How can I show a 'progress' indication and take it down when the Cloud Function has completed?
I initially thought I would do something like this pseudo code with Promises:
return updateDatabaseToTriggerFunctionExec()
    .then(() => listenForFunctionDoneEvent())
    .then(() => Promise.resolve());
However, I'm not sure how to know when the function has finished and updated a value. What is the recommended way to detect when a triggered Cloud Function has completed?
You'll have to implement something like a command-response model, using the database as a relay: you push commands into one location, and the function pushes results to another location that the client listens to. This works because the client and server agree on the locations of the commands and responses, and share the push ID that was generated for the client's command.
I go over this architecture a bit during my session at Google I/O 2017 where I build a turn-based game with Firebase.
An alternative is to use an HTTP function instead, which has a more clearly defined request-response cycle.
Which uses less CPU: sending data to a client via publish/subscribe, or via method calls?
My answer is going to assume that you simply want to send data to the client:
It depends on what you want to do. If you want realtime updates, the subscription model is ideal compared to calling a server method every 5 seconds or so.
When you don't want reactive updates, simply pass the flag reactive:false in your find() query.
Methods are used when you want, for example, to return the results of an aggregation (because Meteor doesn't support reactivity for aggregations), to get updates for an unsupported operator ($where is not supported yet), etc.
Usually the bottlenecks exist in the application's design/architecture.
I'm using the samples for the AwsFlowFramework, specifically helloworld and fileprocessing. I have followed all the setup instructions given here. All the client classes are successfully created with the aspect weaver. It all compiles and runs.
But calling .get on a Promise within an asynchronous method doesn't work: it waits forever and a result is never returned.
What am I doing wrong?
In particular, the helloworld sample doesn't have any asynchronous method, nor does it attempt to call .get on a Promise. When copied outright it does work, and I can see the "hello world" message printed in the activities client. However, if I create a stub asynchronous method that calls get on the Promise<Void> returned by printHello, the activities client is never called and the workflow waits forever. In fact, the example works if I assign the returned Promise to a variable; the problem only arises when I call .get on it. The fileprocessing example, which does have asynchronous methods, doesn't work.
I see the workflows and the activity types being registered in my aws console.
I'm using the Java SDK 1.4.1 and Eclipse Juno.
My list of unsuccessful attempts:
Tried it with Eclipse Indigo in case the aspect weaver does different things.
Made all asynchronous methods private as suggested by this question.
If I call .isReady() on the Promise, it is always false, even when I call it after I see the "helloworld" message printed (made certain by sleeping in between). This leads me to think that Promise.get blocks the caller until Promise.isReady is true, but because this somehow never becomes true, the client is not called and the workflow waits forever.
Tried different endpoints.
My very bad: I had a misconfiguration in the aop.xml file, so the load-time AspectJ weaving for the remote calls was not working correctly.