I have the following flow in my Mule app:
<flow name="UpdateStatusFlow">
    <vm:inbound-endpoint path="UpdateStatusFlow" exchange-pattern="request-response"/>
    <async processingStrategy="asynchronous">
        <request-reply>
            <vm:outbound-endpoint path="request"/>
            <vm:inbound-endpoint path="reply"/>
        </request-reply>
    </async>
    <custom-transformer class="com.example.EanTransformer"/>
</flow>
Inside the async scope I depend on what arrived at the vm:inbound-endpoint, and I assumed the scope gets a copy of that payload, while another copy is passed on to EanTransformer, which also modifies the payload. But it seems that no copy is made, because inside the async flow I already get the EAN code as modified by EanTransformer, which is not what I expected. If I add a small delay in EanTransformer everything is fine, which tells me that in that case the transformer simply hasn't modified the message yet.
So the question is: does the async scope really get a copy of the message (as stated here: http://www.mulesoft.org/documentation/display/current/Async+Scope+Reference), or does it work on the same message as the subsequent components? Or am I doing something wrong?
I am using Mule 3.3.1.
If you read the referenced page further, it states that:
Even though the Async scope receives a copy of the Mule message, the payload is not copied. The same payload object(s) will be referenced by both Mule messages: the one that continues down the original flow and the one processed by the Async scope.
In other words, if the payload of your message is a mutable object (for example a bean with different fields in it) and a message processor in your async scope changes the value of one of the fields, the message processors outside of the Async scope will see the changed values.
The async scope does not receive a copy of the payload - it works on the original payload object. So any changes to the payload should be done before the async scope (see the sketch after the quoted documentation below). Here it is clearly stated:
Even though the Async scope receives a copy of the Mule message, the payload is not copied. The same payload object(s) will be referenced by both Mule messages: the one that continues down the original flow and the one processed by the Async scope.
In other words, if the payload of your message is a mutable object (for example a bean with different fields in it) and a message processor in your async scope changes the value of one of the fields, the message processors outside of the Async scope will see the changed values.
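Applied to the flow from the question, following that advice would look roughly like this (just a sketch; if the async branch must instead see the untouched EAN code, you would need a step that deep-copies the mutable payload before the async scope instead):

<flow name="UpdateStatusFlow">
    <vm:inbound-endpoint path="UpdateStatusFlow" exchange-pattern="request-response"/>
    <!-- Transform first, so both the async branch and the rest of the flow
         see the same, already-transformed payload -->
    <custom-transformer class="com.example.EanTransformer"/>
    <async processingStrategy="asynchronous">
        <request-reply>
            <vm:outbound-endpoint path="request"/>
            <vm:inbound-endpoint path="reply"/>
        </request-reply>
    </async>
</flow>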
I am thinking of a way to manage failed messages in Rebus.
In my second-level retry strategy I want to save the message and exception details to the database so that I can later review the error details and decide whether to resend the message to be reprocessed, or to ignore and delete it.
In the handler I am capturing details as follows:
public async Task Handle(IFailed<StudentCreated> failedMessage)
{
    // Logic to defer message with rebus_defer_count not shown

    DictionarySerializer dictionarySerializer = new DictionarySerializer();
    ObjectSerializer objectSerializer = new ObjectSerializer();

    string headers = dictionarySerializer.SerializeToString(failedMessage.Headers);
    string message = objectSerializer.SerializeToString(failedMessage.Message);

    Exception lastException = failedMessage.Exceptions.Last();
    string exception = objectSerializer.SerializeToString(lastException);

    // Logic to save the message and error details in the database not shown
}
This will enable me to save the message and error details to the database, where I can build a dashboard to view the messages and resolve them as I wish, rather than leaving them in a broker queue such as RabbitMQ.
Now my question is: how can I return them to the handler where the error was raised, using the information provided in the headers?
What is the best way to do this with Rebus, given that I have all the details from the failed message as shown in my code snippet?
Regards
What you're trying to achieve will be much easier if you make a small change to your application. You see, Rebus already has a built-in service in place for handling failed messages called IErrorHandler.
You can register your own error handler like this:
Configure.With(...)
    .(...)
    .Options(o => o.Register<IErrorHandler>(c => new MyCustomErrorHandler()))
    .Start();
thus replacing the default error handler (which btw. is PoisonQueueErrorHandler)
The error handler gets to handle the message in the form of the raw TransportMessage (i.e. simply headers and a byte[]) when all retries have failed, so this is the perfect place to save the message to your database.
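A rough sketch of what such an error handler could look like (illustration only: the exact IErrorHandler method signature differs between Rebus versions, and SaveToDatabaseAsync is a hypothetical placeholder for your own persistence code):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Rebus.Messages;
using Rebus.Retry;
using Rebus.Transport;

public class MyCustomErrorHandler : IErrorHandler
{
    // Called by Rebus when all delivery attempts for a message have failed.
    // NOTE: the exact signature varies between Rebus versions (e.g. Exception
    // vs. ExceptionInfo as the last parameter) - adjust to the version you use.
    public async Task HandlePoisonMessage(TransportMessage transportMessage,
        ITransactionContext transactionContext, Exception exception)
    {
        var headers = transportMessage.Headers; // includes rbs2-source-queue
        var body = transportMessage.Body;       // the raw byte[] of the message

        // Hypothetical helper - replace with your own data access code
        await SaveToDatabaseAsync(headers, body, exception.ToString());
    }

    static Task SaveToDatabaseAsync(IDictionary<string, string> headers, byte[] body, string error)
        => Task.CompletedTask; // placeholder
}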
If you then look at how Rebus' default error handler forwards a failed message to the error queue, you can see that it adds its own queue name as the rbs2-source-queue header, meaning that the message can later be sent back to that queue.
With this information, it should be fairly easy to write some code that inspects the message for its source queue and sends a RabbitMQ message to that queue.
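For example, a small re-delivery routine could look roughly like this (a sketch only: it assumes you re-create the original message object from what you saved, and the RabbitMQ connection string is a placeholder):

using System.Collections.Generic;
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Config;
using Rebus.Messages;

public static class Redelivery
{
    public static async Task ResendAsync(Dictionary<string, string> storedHeaders, object message)
    {
        // The queue the message originally came from, recorded by Rebus in the
        // "rbs2-source-queue" header when the message was moved to the error queue
        var sourceQueue = storedHeaders[Headers.SourceQueue];

        using (var activator = new BuiltinHandlerActivator())
        {
            var bus = Configure.With(activator)
                // One-way client: it can send, but has no input queue of its own.
                // Placeholder connection string - point it at the broker your
                // Rebus endpoints are using.
                .Transport(t => t.UseRabbitMqAsOneWayClient("amqp://localhost"))
                .Start();

            // Explicitly route the re-created message back to its original queue
            await bus.Advanced.Routing.Send(sourceQueue, message);
        }
    }
}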
This will only work if the re-delivery service has access to the RabbitMQ instance where all of your Rebus endpoints are running, of course. It's less straightforward if you want to implement this in a general way: e.g. if you were using Fleet Manager, each Rebus instance would use a long-polling protocol to query the server for commands, which enables Fleet Manager to tell any Rebus instance to e.g. send a previously failed message to any queue it has access to.
In Corda, how can I make an asynchronous HTTP request from within a flow? Is there an API to suspend a flow while awaiting the response to the HTTP call, or a way to provide a callback?
Corda doesn't currently provide a mechanism for making an asynchronous HTTP request and then either suspending the flow while awaiting a response, or providing a callback that will be called when the response is received.
Instead, it is recommended that you make the HTTP request before initiating the flow, then pass in the response as an argument when instantiating the flow.
Sometimes, this isn't possible. For example, the HTTP request may be required by a response flow that is initiated automatically, or it may depend on the contents of a message received from a counterparty.
In this case, you can still support this kind of workflow as follows. Suppose you are writing a CorDapp for loan applications, which are represented as LoanApplicationStates. Without an HTTP call, the responder doesn't know whether to accept the loan application or not.
Instead of creating a LoanApplicationState directly, the flow would create an UnacceptedLoanApplicationState, which the responder would store on their ledger. Once the flow ends, the responder can make an HTTP call outside of the flow framework. Based on the result of the HTTP call, the responder will either kick off an approve flow that creates a transaction transforming the UnacceptedLoanApplicationState into a LoanApplicationState, or kick off a reject flow that consumes the UnacceptedLoanApplicationState without issuing an accepted LoanApplicationState.
Alternatively, you could add a status field to the LoanApplicationState, specifying whether the loan is approved or not. Initially, the loan state would have the field set to unapproved. Then, based on the result of the HTTP request, the responder would initiate one of two flows that update the LoanApplicationState with either an approved or a rejected status.
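For illustration only (the names and fields below are made up, not taken from an actual CorDapp), the status-field approach could be modelled roughly like this:

import net.corda.core.contracts.Amount
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.Party
import java.util.Currency

// Hypothetical status values for the loan application
enum class LoanStatus { UNAPPROVED, APPROVED, REJECTED }

// Sketch of a LoanApplicationState carrying an explicit status field. After the
// flow ends, the responder makes the HTTP call outside the flow framework and
// then runs a follow-up flow that issues a new version of this state with
// status set to APPROVED or REJECTED.
data class LoanApplicationState(
        val applicant: Party,
        val lender: Party,
        val amount: Amount<Currency>,
        val status: LoanStatus = LoanStatus.UNAPPROVED
) : ContractState {
    override val participants: List<AbstractParty> get() = listOf(applicant, lender)
}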
Corda uses Quasar fibers to make asynchronous calls look synchronous. Fortunately, Quasar has support for Java's Future and Guava's ListenableFuture.
Based on that, you could create a flow like this:
@InitiatingFlow
class ListenableFutureFlow<V>(val futureFn: () -> ListenableFuture<V>,
                              override val progressTracker: ProgressTracker) : FlowLogic<V>() {

    constructor(futureFn: () -> ListenableFuture<V>) : this(futureFn, tracker())

    companion object {
        object EXECUTING_FUTURE : ProgressTracker.Step("Executing future call inside fiber")

        fun tracker() = ProgressTracker(EXECUTING_FUTURE)
    }

    @Suspendable
    override fun call(): V {
        progressTracker.currentStep = EXECUTING_FUTURE
        return AsyncListenableFuture.get(futureFn())
    }
}
And you can use it like this:
val asyncResponse = subFlow(ListenableFutureFlow { myAsyncCall(param1, param2) })
It is not the best solution, but at least it works with Corda's infrastructure :)
Let me know if it works for you!
I wonder about the reason for assigning a token to an asynchronous task, as in the following example:
var ctSource = new CancellationTokenSource();
Task.Factory.StartNew(() => doSomething(), ctSource.Token);
The MSDN documentation insists on passing the token to the running method in addition to assigning it to the task, but to me this looks like unnatural duplication.
If a token is assigned to a task, does that mean that ctSource.Cancel() automatically triggers a TaskCanceledException for the task?
Is there a way to retrieve the assigned token from the task (other than by passing it as an argument)?
If neither of those, what is the reason for assigning a token to a task?
If a token is assigned to a task, does that mean that ctSource.Cancel() automatically triggers a TaskCanceledException for the task?
The task could start at any time, now or later. If the token already has a cancellation request pending before the task has started, the task infrastructure itself cancels the task (awaiting or waiting on it then throws a TaskCanceledException), and your action () => doSomething() is never invoked. So the token is really being handed to the factory/scheduler, not to your code; that is what StartNew(...) uses it for.
Is there a way to retrieve the assigned token from the task (other than by passing it as an argument)?
No. A Task does not expose the CancellationToken it was created with; only your own implementation knows about it. Tasks do not auto-cancel themselves: the function running within a task is responsible for observing the token and exiting when cancellation is requested.
You are the owner of the CancellationTokenSource, so pass the token to whoever needs it.
Task.Factory.StartNew(() => doSomething(ctSource.Token), ctSource.Token);
If you are not the owner of doSomething() (e.g. it comes from a third-party DLL), then you cannot cancel that operation unless it accepts a CancellationToken.
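To make the distinction concrete, here is a small sketch (DoSomething stands in for your own method) showing both roles of the token: the copy given to StartNew lets the task infrastructure cancel the task, while the copy passed into the delegate enables cooperative cancellation inside your code:

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void DoSomething(CancellationToken token)
    {
        for (var i = 0; i < 100; i++)
        {
            // Cooperative cancellation: throws an OperationCanceledException
            // associated with 'token' once cancellation has been requested.
            token.ThrowIfCancellationRequested();
            Thread.Sleep(50); // simulate work
        }
    }

    static async Task Main()
    {
        var ctSource = new CancellationTokenSource();

        // Token passed both to the factory (so the task can be cancelled before
        // or while it runs) and to the delegate (so our code can observe it).
        var task = Task.Factory.StartNew(() => DoSomething(ctSource.Token), ctSource.Token);

        ctSource.CancelAfter(100); // request cancellation shortly after start

        try
        {
            await task;
        }
        catch (OperationCanceledException)
        {
            // task.Status is now TaskStatus.Canceled, because the token observed by
            // ThrowIfCancellationRequested matches the one given to StartNew.
            Console.WriteLine($"Task status: {task.Status}");
        }
    }
}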
I am trying to use Skype's DBus API in order to retrieve the list of messages (message IDs) I've exchanged with a contact. However, both the SEARCH CHATMESSAGES <target> (protocol >= 3) and the SEARCH MESSAGES <target> (protocol < 3) commands return unexpectedly empty results.
Here is the trace of a few exchanges I had with the API. I used d-feet to send my requests, but the result is exactly the same when I send the request from my own program.
Bus name: com.Skype.API
Object: /com/Skype
Interface: com.Skype.API
Method used: Invoke(String request)
Trace:
-> NAME dfeet
<- OK
-> PROTOCOL 8
<- PROTOCOL 8
-> SEARCH CHATMESSAGES mycontact
<-
The same thing happens with two other SEARCH commands:
SEARCH MESSAGES <target> (with PROTOCOL 2).
SEARCH CHATS
I also get an empty result when I request a message list based on a chat ID: GET CHAT <chat_id> GETMESSAGES.
However, commands such as SEARCH FRIENDS, SEARCH CALLS, or SEARCH ACTIVECHATS work just fine, and return their lists of IDs (contacts IDs, calls IDs, or chat IDs) as expected.
It might also be worth noting that this happens for all contacts, regardless of how many messages I've exchanged with them (I thought at first that there might be too many messages involved, but the result is the same whether I've sent 3 or thousands of messages to the contact).
Is there anything that would explain why I get these empty responses through DBus, for these requests?
Skype will not use Invoke's return value when its reply is too heavy. As it happens, when Skype has too much data to prepare and transfer after a request, it simply returns an empty string to the Invoke call. The true, heavy reply is then prepared asynchronously by Skype, and the client program must be ready to receive it when it eventually arrives.
Whenever you are communicating with Skype over DBus, your application must act as both a client (calling Invoke) and a server (providing a DBus object for Skype to reach). This design was a little unexpected (I guess we could argue about its quality), but here is what it requires you to do:
Make your program a DBus "server" (providing objects to reach). Under your own bus name, register an object path called /com/Skype/Client implementing the com.Skype.API.Client interface.
Prepare a message handler for the only method of this interface: Notify(s). This is the method Skype will try to call to send you the heavy reply to one of your previous requests.
Program your own mechanism to match your Invoke request with the asynchronous Notify message coming in as an answer later on.
The creation of an object can be done through dbus_connection_register_object_path, the parameters for which are:
The DBusConnection structure representing your connection (and hence your bus name).
The object path you are registering, here /com/Skype/Client.
A table of message handlers (DBusObjectPathVTable) used to process all incoming requests.
Data to be sent to these handlers when they are called. This is additional data, not the actual message being received since you're just setting up the handler here.
For instance...
// Handler for incoming method calls on the registered object path;
// Skype delivers its asynchronous replies here via Notify(s).
DBusHandlerResult notify_handler(DBusConnection *connection,
                                 DBusMessage *message,
                                 void *user_data)
{
    // Extract and process the reply carried by the Notify call here.
    return DBUS_HANDLER_RESULT_HANDLED;
}

// Called when the object path is unregistered; nothing to clean up here.
void unregister_handler(DBusConnection *connection,
                        void *user_data)
{
}

DBusObjectPathVTable vtable = {
    unregister_handler,
    notify_handler,   /* must match the handler defined above */
    NULL
};

if (!dbus_connection_register_object_path(connection,
                                          "/com/Skype/Client",
                                          &vtable, NULL)) {
    // Error...
}
Note that this only defines and registers the object. In order to actually receive the Notify calls, you'll have to select() on a DBusWatch file descriptor and dispatch the incoming DBusMessages so that your message handler gets called.
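If you only need a simple blocking loop rather than integrating with your own select() loop via DBusWatch, libdbus's dbus_connection_read_write_dispatch can do the reading and dispatching for you. A minimal sketch:

#include <dbus/dbus.h>

/* Blocks reading from and writing to the connection, then dispatches any
 * complete incoming messages - which is what ends up calling notify_handler
 * for Skype's Notify(s) calls on /com/Skype/Client. */
void run_dispatch_loop(DBusConnection *connection)
{
    /* -1 means "block until something happens"; the call returns FALSE
     * once the connection has been closed. */
    while (dbus_connection_read_write_dispatch(connection, -1)) {
        /* All the work happens in the registered message handlers. */
    }
}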
If you are working with other bindings, you'll probably find much faster ways to set up objects and get your client application working. See:
GLib's g_dbus_connection_register_object
Exporting objects with dbus-python
QtDBus's QDBusConnection::registerObject
... (other bindings)
If, as part of a single Meteor.call, I make two calls to the database on the server, will these happen synchronously or do I need to use a callback?
Meteor.methods({
  reset: function(id) {
    Players.remove({_id: id});
    // Will the remove definitely have finished before the find?
    Players.find();
    ...
  }
});
From the docs:
In Meteor, your server code runs in a single thread per request, not in the asynchronous callback style typical of Node. We find the linear execution model a better fit for the typical server code in a Meteor application.
If you read the docs at docs.meteor.com/#remove, you can find this:
On the server, if you don't provide a callback, then remove blocks until the database acknowledges the write and then returns the number of removed documents, or throws an exception if something went wrong. If you do provide a callback, remove returns immediately. Once the remove completes, the callback is called with a single error argument in the case of failure, or a second argument indicating the number of removed documents if the remove was successful.
On the client, remove never blocks. If you do not provide a callback and the remove fails on the server, then Meteor will log a warning to the console. If you provide a callback, Meteor will call that function with an error argument if there was an error, or a second argument indicating the number of removed documents if the remove was successful.
So on the server side you choose whether it runs synchronously or asynchronously, depending on whether you pass a callback or not.
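A small sketch of the two styles inside server methods (resetAsync is just a made-up name for illustration):

Meteor.methods({
  // Synchronous style: remove blocks until the write is acknowledged,
  // so the find() below is guaranteed to run after the remove completes.
  reset: function (id) {
    var removed = Players.remove({ _id: id });
    return Players.find().fetch();
  },

  // Asynchronous style: passing a callback makes remove return immediately;
  // the error or removed-count arrives in the callback instead.
  resetAsync: function (id) {
    Players.remove({ _id: id }, function (error, count) {
      if (error) {
        console.log('remove failed:', error);
      }
    });
  }
});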