How can I execute a callback after a set of asynchronous service calls complete? - apache-flex

In Flex, I'm making a set of asynchronous calls:
service.method1.send().addResponder(responder1);
service.method2.send().addResponder(responder2);
service.method3.send().addResponder(responder3);
I want to execute some code after all of these service calls have returned (either success or failure, I don't care which). How can I do this?

Implement a responder that watches the result (or fault) of each call and increments a shared counter each time one returns; when the counter hits three, execute your code.
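A minimal sketch of that pattern, assuming the stock mx.rpc.Responder class; service and responder1/2/3 are from the question, the other names are made up for illustration. The same handler is registered for both result and fault, so it counts either outcome:

import mx.rpc.AsyncToken;
import mx.rpc.Responder;

private var pending:int = 3;

private function callAll():void {
    var token:AsyncToken;

    token = service.method1.send();
    token.addResponder(responder1);
    token.addResponder(new Responder(callFinished, callFinished));

    token = service.method2.send();
    token.addResponder(responder2);
    token.addResponder(new Responder(callFinished, callFinished));

    token = service.method3.send();
    token.addResponder(responder3);
    token.addResponder(new Responder(callFinished, callFinished));
}

// Used as both the result and the fault handler, so it fires either way.
private function callFinished(data:Object):void {
    pending--;
    if (pending == 0) {
        onAllCallsComplete();
    }
}

private function onAllCallsComplete():void {
    // Runs exactly once, after all three calls have returned or faulted.
}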

Related

Will Spring KafkaContainerStoppingErrorHandler commit offsets for a batch listener?

I am working on a Spring Kafka implementation, and my use case is to consume messages from a Kafka topic as a batch (using a batch listener). When I consume the list of messages, I iterate over it and call a REST endpoint for message enrichment. In case the REST API fails with a runtime exception, I have implemented retry logic using Spring Retry. I want to stop the container after the retries are exhausted, so I am planning to use KafkaContainerStoppingErrorHandler to achieve this. Does the KafkaContainerStoppingErrorHandler commit the previously successful messages? Say we receive 10 messages, and for messages 1, 2, 3 and 4 the enrichment call succeeds, but for message 5 the enrichment API call fails. When we restart the container, will I get all 10 messages again, or will I receive messages 5-10?
Or is there another way to achieve this use case? I have looked into all the error handler types in Spring Kafka and need input on how to achieve this requirement.
You will get them all again.
You can use the DefaultErrorHandler (with a custom recoverer) and throw a BatchListenerFailedException to indicate which record in the batch failed.
The error handler will commit the offsets up to that record and call the recoverer with the failed record; in your custom recoverer you can stop the container (use the same logic as the container stopping error handler).
In versions before 2.8, this same functionality is provided by the RecoveringBatchErrorHandler.
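A rough sketch of the 2.8+ arrangement, assuming Spring Boot (which wires a CommonErrorHandler bean into the default container factory; otherwise set it on your factory yourself). The listener id, topic name and enrich() call are placeholders:

import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.BatchListenerFailedException;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class EnrichmentKafkaConfig {

    @Bean
    public DefaultErrorHandler stopContainerOnFailure(KafkaListenerEndpointRegistry registry) {
        return new DefaultErrorHandler((rec, ex) -> {
            // The error handler has already committed the offsets of the records that
            // preceded the failed one. Stop the container from another thread (as the
            // container-stopping error handlers do) so the failed record and the rest
            // of the batch are redelivered when the container is restarted.
            new SimpleAsyncTaskExecutor().execute(
                    () -> registry.getListenerContainer("enrichmentListener").stop());
        }, new FixedBackOff(0L, 0L)); // no in-memory retries before recovering
    }

    @KafkaListener(id = "enrichmentListener", topics = "my-topic", batch = "true")
    public void listen(List<ConsumerRecord<String, String>> records) {
        for (ConsumerRecord<String, String> record : records) {
            try {
                enrich(record); // your REST enrichment call
            } catch (RuntimeException e) {
                // Tells DefaultErrorHandler exactly which record in the batch failed.
                throw new BatchListenerFailedException("enrichment failed", e, record);
            }
        }
    }

    private void enrich(ConsumerRecord<String, String> record) {
        // placeholder for the REST call
    }
}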

grpc client completion queue not shutting down

My code performs the following:
1. Create a gRPC channel
2. Start monitoring the completion queue in a different thread
3. Issue a shutdown on the completion queue
After executing step 3, I expect cq.Next(&tag, &ok) to return false, since there are no pending events after the above three steps. But I observe that cq.Next(&tag, &ok) never returns false. Please let me know if I am missing something.
Thanks,
Ikshu
In order to get channel state notifications, a tag was being added to the queue, and that kept posting events, so cq.Next() never returned false. I fixed the issue by achieving the same functionality with the existing standard API for channel state, so I'm closing the bug.

Is there a way to abort a ScheduledActivity of a SchedulableState?

I would like to know if it is possible to cancel the execution of a ScheduledActivity.
Example:
A SchedulableState of type A is created, and its ScheduledActivity executes a flow that creates another SchedulableState of type A. This means the app will always execute the flow defined in the activity and create another state of type A.
How can I abort the execution of the activity?
How can I identify whether there is a ScheduledActivity waiting to be executed?
As of Corda 4.x, there is no API to cancel scheduled activities.
Instead, you'd have to connect to the node's database directly and drop the required rows from the node's NODE_SCHEDULED_STATES table.

How to have the same execution id in logs on Cloud Functions for Firebase

There are multiple executions happening interleaved/concurrently in Firebase. The execution id in the logs changes when a new execution starts, and the old execution id is forgotten; the execution id only moves forward. When the old function invocation resumes, its log entries use the new execution id. Is there a way to keep the old execution id for the old invocation and the new execution id for the new one?
Workflow:
Let's say Function1 and Function2 are different invocations of the same function.
1. Function1 does some db reads and makes HTTP requests, which return a promise. This takes some time (maybe a few ms). Let's assume its execution-id from the log is 154690519665944.
2. Function2 gets triggered while Function1 is waiting. Function2 gets execution-id 154690574405903. Function2 does the same thing and waits for its HTTP response.
3. Function1 resumes once it gets its HTTP response, but while logging it uses another execution-id, 154694739233261.
What happened to execution-id 154690519665944?
Since there are multiple triggers happening simultaneously, the only way to find out whether a function completed successfully is to check the logs. By using the execution-id as a filter, I could have found out whether the function executed successfully or not. But because Firebase changes the execution-id unpredictably, I guess I have to find another solution.
PS: There's an update call which will trigger the same function. Does that change the parent function's execution-id?
Without seeing your code or complete logs, I don't think a definitive answer can be provided.
However, it sounds like the question you're really asking is how you can keep data in memory across asynchronous transactions. Firebase or not, you basically have two options:
1. Commit that data to the database during the first part of the transaction and then retrieve it in the second part.
2. Pass the data from the first part to the second part so that it already has it.
You seem to be relying on the execution-id, so I would recommend taking the latter approach: pass that id along as part of the input to your HTTP request and have the server you're calling return it in its response.
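For example, here is a sketch of that second option, assuming the v1 firebase-functions Realtime Database trigger API (the path, endpoint URL and header name are made up): pick a correlation id yourself, send it with the outbound request, have the server echo it back, and log it at every step so you can filter the logs by it instead of by Firebase's execution id.

const functions = require('firebase-functions');
const fetch = require('node-fetch');

exports.enrich = functions.database.ref('/items/{id}').onWrite(async (change, context) => {
  // Your own id survives across the awaits, unlike the log's execution id.
  const correlationId = context.eventId;
  console.log(`[${correlationId}] starting enrichment`);

  const res = await fetch('https://example.com/enrich', {
    method: 'POST',
    headers: { 'X-Correlation-Id': correlationId },
    body: JSON.stringify(change.after.val()),
  });
  const body = await res.json(); // the server echoes the correlation id back

  console.log(`[${correlationId}] finished with status ${res.status}`);
  return body;
});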

In Meteor, are successive operations on the server synchronous?

If, as part of a single Meteor.call, I make two calls to the database on the server, will these happen synchronously or do I need to use a callback?
Meteor.methods({
  reset: function (id) {
    Players.remove({_id: id});
    // Will the remove definitely have finished before the find?
    Players.find();
    ...
  }
});
From the docs:
In Meteor, your server code runs in a single thread per request, not in the asynchronous callback style typical of Node. We find the linear execution model a better fit for the typical server code in a Meteor application.
If you read the docs at docs.meteor.com/#remove, you can find this:
On the server, if you don't provide a callback, then remove blocks until the database acknowledges the write and then returns the number of removed documents, or throws an exception if something went wrong. If you do provide a callback, remove returns immediately. Once the remove completes, the callback is called with a single error argument in the case of failure, or a second argument indicating the number of removed documents if the remove was successful.
On the client, remove never blocks. If you do not provide a callback and the remove fails on the server, then Meteor will log a warning to the console. If you provide a callback, Meteor will call that function with an error argument if there was an error, or a second argument indicating the number of removed documents if the remove was successful.
So on the server side you choose whether it runs synchronously or asynchronously: it depends on whether you pass a callback or not.
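For example, a sketch reusing the Players collection from the question (the method names here are made up); the callback signature follows the docs quoted above:

Meteor.methods({
  resetSync: function (id) {
    // No callback: on the server this blocks until the write is acknowledged
    // (or throws), so the find below definitely runs after the remove.
    var removed = Players.remove({_id: id});
    return Players.find().count();
  },
  resetAsync: function (id) {
    // With a callback: remove returns immediately and the callback fires
    // once the database has acknowledged the write.
    Players.remove({_id: id}, function (error, numRemoved) {
      if (error) {
        console.log('remove failed:', error);
      }
    });
  }
});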
