Is it possible to restore a 1:1 conversation?
The Conversation object in the Skype SDK seems to have such functionality: you should be able to restore a conversation by passing an href to it. But when I pass an href string as a parameter to createConversation, it throws the following error:
Error: ResourceNotFound
at Error (native)
at Exception (http://.../SkypeSDK.js:3346:31)
at UCWA.get (http://.../SkypeSDK.js:15141:31)
at init (http://.../SkypeSDK.js:40672:50)
at new Conversation (http://.../SkypeSDK.js:41826:25)
at createConversationModel (http://.../SkypeSDK.js:41963:36)
at BaseModel.createConversation (http://.../SkypeSDK.js:42037:48)
The line numbers may be slightly off, because I modified the createConversation method to pass the href to Conversation.
The href string has this format:
/ucwa/oauth/v1/applications/xxxxxxxxxxxx/communication/conversations/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
We have the following situation:
One site has the normal SDK and waits for incoming calls. If you accept the call you should be redirected to a site with the SDK+CC and answer the call. Now we are stuck at how to pass the call. We also tried it with getConversation, but it doesn't return the last incoming conversation.
Once you accept a call on one endpoint you can transfer it to another endpoint. However, you cannot accept a call and then re-answer it on a different endpoint. Also, answering the call starts the process of connecting media, so that endpoint has effectively picked up the call.
The href of each conversation is unique per application, and in your scenario you will have one for each site. These cannot be shared between applications.
I am thinking of a way to manage failed messages in Rebus.
In my second-level retry strategy I want to save the message and exception details into the database so that I can later review the error details and decide whether to resend the message to be reprocessed, or ignore and delete it.
In the handler I am capturing details as follows:
public async Task Handle(IFailed<StudentCreated> failedMessage)
{
    // Logic to defer the message with rebus_defer_count not shown

    DictionarySerializer dictionarySerializer = new DictionarySerializer();
    ObjectSerializer objectSerializer = new ObjectSerializer();

    string headers = dictionarySerializer.SerializeToString(failedMessage.Headers);
    string message = objectSerializer.SerializeToString(failedMessage.Message);

    Exception lastException = failedMessage.Exceptions.Last();
    string exception = objectSerializer.SerializeToString(lastException);

    // Logic to save the message and error details in the database not shown
}
This will enable me to save the message and error details in a database, where I can build a dashboard to view the failed messages and resolve them as I see fit, rather than leaving them in the broker's error queue (RabbitMQ in my case).
Now my question is: how can I send the messages back to the handler where the error was raised, using the information provided in the headers?
What is the best way to do this with Rebus, given that I have all the details from the failed message as shown in my code snippet?
Regards
What you're trying to achieve will be much easier if you make a small change to your application. Rebus already has a built-in service for handling failed messages: IErrorHandler.
You can register your own error handler like this:
Configure.With(...)
    .(...)
    .Options(o => o.Register<IErrorHandler>(c => new MyCustomErrorHandler()))
    .Start();
thus replacing the default error handler (which, by the way, is PoisonQueueErrorHandler).
The error handler gets to handle the message in the form of the raw TransportMessage (i.e. simply headers and a byte[]) when all retries have failed, so this is the perfect place to save the message to your database.
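As a rough sketch (not the one true implementation), a custom error handler could look something like the following. The exact namespaces and the IErrorHandler signature have varied between Rebus versions (newer versions pass an ExceptionInfo instead of an Exception), and SaveToDatabaseAsync is a hypothetical placeholder standing in for your own persistence code:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Rebus.Messages;
using Rebus.Retry;
using Rebus.Transport;

public class MyCustomErrorHandler : IErrorHandler
{
    // Called by Rebus when all delivery attempts for a message have failed.
    public async Task HandlePoisonMessage(TransportMessage transportMessage,
        ITransactionContext transactionContext, Exception exception)
    {
        IDictionary<string, string> headers = transportMessage.Headers; // raw headers
        byte[] body = transportMessage.Body;                            // raw payload

        // Hypothetical placeholder - replace with your own data access code.
        await SaveToDatabaseAsync(headers, body, exception.ToString());
    }

    static Task SaveToDatabaseAsync(IDictionary<string, string> headers, byte[] body, string errorDetails)
    {
        // ... write headers, body and error details to your dashboard database ...
        return Task.CompletedTask;
    }
}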
If you then look here, you can see how Rebus' default error handler adds its own queue name as the rbs2-source-queue header, meaning that the message can later be sent back to that queue.
With this information, it should be fairly easy to write some code that inspects the message for its source queue and sends a RabbitMQ message to that queue.
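For example, a minimal re-delivery sketch could use a one-way Rebus client over the same RabbitMQ instance. This assumes you stored the message object and its headers (as in your handler), that the rbs2-source-queue header is among them (the default error handler adds it; with a custom IErrorHandler you may need to record your own input queue name yourself), and that the connection string below is illustrative:

using System.Collections.Generic;
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Config;

public static class FailedMessageRedelivery
{
    // Sends a previously failed message back to the queue it originally came from.
    public static async Task ResendAsync(Dictionary<string, string> headers, object message)
    {
        var sourceQueue = headers["rbs2-source-queue"];

        using (var activator = new BuiltinHandlerActivator())
        {
            var bus = Configure.With(activator)
                .Transport(t => t.UseRabbitMqAsOneWayClient("amqp://localhost")) // illustrative
                .Start();

            // Explicitly route the message back to the queue where it failed.
            // You may want to strip error/defer-related headers before resending.
            await bus.Advanced.Routing.Send(sourceQueue, message, headers);
        }
    }
}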
This will only work if the re-delivery service has access to the RabbitMQ instance where all of your Rebus endpoints are running, of course. It's less straightforward if you want to implement this in a general way: e.g. if you were using Fleet Manager, each Rebus instance would use a long-polling protocol to query the server for commands, which enables Fleet Manager to tell any Rebus instance to e.g. send a previously failed message to any queue it has access to.
My problem seems simple, but I haven't found a way to solve it yet...
I have a legacy system that works and a new system that will replace it. Both are just REST web service calls, so I'm using a simple bridge endpoint on the HTTP service.
To verify that they behave identically, I want to put them behind a Camel route that dispatches each message to both systems, returns only the legacy system's response, and logs the responses of both systems so I can check that they run the same way...
I created this route:
from("servlet:proxy?matchOnUriPrefix=true")
.streamCaching()
.setHeader("CamelHttpMethod", header("CamelHttpMethod"))
.to("log:com.mylog?showAll=true&multiline=true&showStreams=true")
.multicast()
.to(urlServer1 + "?bridgeEndpoint=true")
.to(urlServer2 + "?bridgeEndpoint=true")
.to("log:com.mylog?showAll=true&multiline=true&showStreams=true")
;
Calling each service and logging the messages works, but the responses are a mess...
If the first server doesn't respond, the second is not called; if the second responds with an error, only that error is sent back to the client...
Any idea?
You can find some more details in the multicast docs: http://camel.apache.org/multicast.html
The default behaviour of multicast (your case) is:
parallelProcessing is false, so the endpoints are called one by one
To implement your case correctly, you probably need to:
add error handling for each external service call, so that an exception does not stop the rest of the processing
configure or implement an aggregation strategy and put it in the strategyRef, so you can combine the results of all calls into a single multicast result (see the sketch after this list)
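A minimal sketch of how that could look in the Java DSL, assuming Camel 2.x, that urlServer1 is the legacy system whose reply should go back to the caller, and that throwExceptionOnFailure=false is acceptable so an HTTP error status from one system does not abort the multicast (the endpoint URIs are illustrative):

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class ProxyRoute extends RouteBuilder {

    private final String urlServer1 = "http://legacy-host/service";  // illustrative
    private final String urlServer2 = "http://new-host/service";     // illustrative

    @Override
    public void configure() {
        // Keep the legacy reply as the multicast result and stash the new
        // system's reply in an exchange property for logging/comparison.
        AggregationStrategy keepLegacyReply = new AggregationStrategy() {
            @Override
            public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
                if (oldExchange == null) {
                    return newExchange;  // reply from the first (legacy) endpoint
                }
                oldExchange.setProperty("newSystemReply",
                        newExchange.getIn().getBody(String.class));
                return oldExchange;      // the legacy reply wins
            }
        };

        from("servlet:proxy?matchOnUriPrefix=true")
            .streamCaching()
            .to("log:com.mylog?showAll=true&multiline=true&showStreams=true")
            .multicast(keepLegacyReply)
                // throwExceptionOnFailure=false turns HTTP error statuses into
                // normal replies instead of exceptions, so both calls always run
                .to(urlServer1 + "?bridgeEndpoint=true&throwExceptionOnFailure=false")
                .to(urlServer2 + "?bridgeEndpoint=true&throwExceptionOnFailure=false")
            .end()
            .to("log:com.mylog?showAll=true&multiline=true&showStreams=true");
    }
}

Note that connection-level failures (e.g. the first server not responding at all) will still raise exceptions, so you may additionally want an onException(...).continued(true) block or a per-call doTry/doCatch, as mentioned above.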
I am trying to use Skype's DBus API in order to retrieve the list of messages (message IDs) I've exchanged with a contact. However, both the SEARCH CHATMESSAGES <target> (protocol >= 3) and the SEARCH MESSAGES <target> (protocol < 3) commands return unexpectedly empty results.
Here is the trace of a few exchanges I had with the API. I used d-feet to send my requests, but the result is exactly the same when I send the request from my own program.
Bus name: com.Skype.API
Object: /com/Skype
Interface: com.Skype.API
Method used: Invoke(String request)
Trace:
-> NAME dfeet
<- OK
-> PROTOCOL 8
<- PROTOCOL 8
-> SEARCH CHATMESSAGES mycontact
<-
The same thing happens with two other SEARCH commands:
SEARCH MESSAGES <target> (with PROTOCOL 2).
SEARCH CHATS
Additionally, I also get an empty result when I try to request a message list based on a chat ID: GET CHAT <chat_id> GETMESSAGES.
However, commands such as SEARCH FRIENDS, SEARCH CALLS, or SEARCH ACTIVECHATS work just fine, and return their lists of IDs (contacts IDs, calls IDs, or chat IDs) as expected.
It might also be worth noting that this happens for all contacts, regardless of how many messages I've exchanged with them (I thought at first that there might be too many messages involved, but the result is the same, whether I've sent 3, or thousands of messages to the contact).
Is there anything that would explain why I get these empty responses through DBus, for these requests?
Skype will not use Invoke's return value when its reply is too heavy. As it so happens, when Skype has too much data to prepare and transfer after a request, it simply returns an empty string to the Invoke call. The true, heavy reply is then prepared asynchronously by Skype, and the client program must be ready to receive it when it eventually arrives.
Whenever you are communicating with Skype over DBus, your application must act as both a client (calling Invoke) and a server (providing a DBus object for Skype to reach). This design was a little unexpected (I guess we could argue about its quality), but here is what it requires you to do:
Make your program a DBus "server" (providing objects to reach). On the connection you use to talk to Skype, register an object path called /com/Skype/Client implementing the com.Skype.API.Client interface.
Prepare a message handler for the only method of this interface: Notify(s). This is the method Skype will try to call to send you the heavy reply to one of your previous requests.
Program your own mechanism to match your Invoke request with the asynchronous Notify message coming in as an answer later on.
The creation of an object can be done through dbus_connection_register_object_path, the parameters for which are:
The DBusConnection structure representing your connection to the bus.
The object path you are registering, here /com/Skype/Client.
A table of message handlers (DBusObjectPathVTable) used to process all incoming requests.
Data to be sent to these handlers when they are called. This is additional data, not the actual message being received since you're just setting up the handler here.
For instance...
DBusHandlerResult notify_handler(DBusConnection *connection,
                                 DBusMessage *message,
                                 void *user_data){
    // Handle the Notify(s) call from Skype here (read the string argument,
    // match it to the pending request, etc.)
    return DBUS_HANDLER_RESULT_HANDLED;
}

void unregister_handler(DBusConnection *connection,
                        void *user_data){}

DBusObjectPathVTable vtable = {
    unregister_handler,
    notify_handler,   // message handler invoked for calls on this object path
    NULL
};

if(!dbus_connection_register_object_path(connection,
                                         "/com/Skype/Client",
                                         &vtable, NULL)){
    // Error...
}
Note that this is just an object's definition. In order to actually hook on the Notify calls, you'll have to select() on a DBusWatch file descriptor, and dispatch the incoming DBusMessage in order to have your message handler called.
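As a simpler (though blocking) alternative to handling the DBusWatch descriptors yourself, libdbus also offers dbus_connection_read_write_dispatch, which reads incoming traffic and dispatches it to the handlers registered above; a minimal loop might look like this:

// Blocks, reads incoming messages and dispatches them to the registered
// object-path handlers until the connection is closed.
while (dbus_connection_read_write_dispatch(connection, -1)) {
    // nothing to do here; Notify calls arrive in notify_handler
}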
If you are working with other bindings, you'll probably find much faster ways to set up objects and start working as a client application. See:
GLib's g_dbus_connection_register_object
Exporting objects with dbus-python
QtDBus's QDBusConnection::registerObject
... (other bindings)
Within my BizTalk 2010 solution I have the following orchestration that is started by the receipt of a courier update message. It then makes a couple of calls to AX's WCF AIF via two solicit-response ports: a Find request and an Update request.
For this application we are meeting audit requirements through use of the tracking database to store the message body. We are able to link to this from references provided in BAM when we use the TPE. The result for the customer is nice, they get a web portal from which they can view BAM data of message timings etc. but they can also click a link to pull up a copy of the message payloads from the tracking db. Although this works well and makes use of out-of-box functionality for payload storage it has led to relatively complex jobs for the archiving of the tracking db (but that's another story!).
My problem relates to continuation. I have created the following Tracking Profile:
I have associated the first of the orchestration's two solicit-response ports with the continuation RcvToOdx, based on the interchange Id, and this works; I get the following single record written to the Completed activity table:
So, in this case we can assume that an entry was first written on receipt of the inbound message, with the TimeReceivedIntoBts column populated by the physical file receive port. The FindRequestToAx column was then populated by the physical WCF send port. Because this was bound to the RcvToOdx continuation Id and used the same interchange Id as the previously mentioned file receive message, the update was made to the same activity. Notification of the resulting response was also updated to the same activity record - into the FindResponseFromAx column.
My problem is that I would also like BAM to record a timestamp for the subsequent UpdateRequestToAx. Because this request will have the same interchange Id as the previous messages I would expect to be able to solve this problem by simply binding the AxUpdate send port (both send and receive parts of it) to the same continuation id, as seen in the following screen grab:
I also map the UpdateRequestToAx milestone to the physical Ax_TrackAndTraceUpdate_SendPort (Send) and the OrchestrationCompleted milestone to Ax_TrackAndTraceUpdate_SendPort (Receive)
Unfortunately, when I try this I get the following result:
Two problems can be seen from the above db screen grab:
1. Date for the update send port was inserted into a new activity record
2. The record was never completed
I was surprised by this because I'd thought that, since the update port was enlisted to use the same continuation and the single InterchangeId was being used by all ports for the continuation Id, all the data milestones would be applied to a single activity.
In looking for a solution to this problem I came across the following post on Stack Overflow suggesting that the continuation must be closed from the BAM API: BAM Continuation issue with TPE. So, I tried this by calling the following method from an expression shape in my orchestration:
public static void EndBAMContinuation(string continuationId)
{
OrchestrationEventStream.EndActivity(CARRIER_ORDER_ACTIVITY_NAME, continuationId);
}
I can be sure the method was called ok because I wrapped the call with a log entry from the CAT framework which I could see in debug view:
I checked the RcvToOdx{867… continuation Id against the non-closed activity and confirmed they matched:
This would suggest that perhaps the request to end the continuation is being processed before the milestone of the received message from the UpdateAx call?
When I query the Relationships tables I get the following results:
Could anyone please advise why the UpdateToAx activity is not being completed?
I realise that I may be able to solve the problem using only the BAM API but I really want to exhaust any possibility of the TPE being fit for purpose first since the TPE is widely used in other BizTalk solutions of the organisation.
To solve this problem I created a 2nd continuation in the TPE.
"RcvToOdx" continuation for the Find and "OdxToUpdate" continuation for the update - source is InterchangeId on the initial receive port - UPS_TrackAndTrace (same as for other "RcvToOdx" continuation), dest Id is the InterchangeId mapped to update send port.
I'll try to provide as much information as possible:
No error message.
The instance stays in the "ready service instances".
The receive location has the same parameters (except URI, the three polling queries, user account/pw and receive pipeline) as another receive location that points to another database/table which works.
The pipeline is waiting for the correct schema.
The port surface and receive location are both waiting for the correct schema.
In my test example, there are only 10 lines being returned.
The message, which contains those 10 lines, validates against the schema.
I tried leaving the instance alone for 30+ minutes, to no avail - no change in its condition.
I had also tried suspending and then resuming it which then places the instance in the "dehydrated orchestrations" list. Again, with no error message.
I'm able to get the message by looking at the body of the message that's in the "ready to run" service. (This is the message that validates versus the schema I use in Visual Studio.)
How might something like this arise?
Stupid question, but I have to ask... Is the corresponding host instance running?