I'm on Flash Builder 4.5, using a RemoteObject with AMFPHP, and when I call two methods (method1 and method2) at the same time, the response for method2 always arrives after method1's response, even though method2 is much faster to return its result.
Here's the scenario:
I set up a RemoteObject that refers to a remote PHP class "Newsletter", which contains the sendNewsletter and getProgress methods.
Here's what the two methods do:
-sendNewsletter() reads the email archive and sends the newsletter. After each email is sent, it writes a log entry to the database.
-getProgress() reads the log written by sendNewsletter(), counts how many emails have been sent, compares that with the total number of emails to be sent, and returns the progress percentage.
From the Flex interface the user selects a newsletter to send and clicks a "send" button, which calls a function that calls sendNewsletter() and then starts a loop of calls to getProgress() (when getProgress() returns something, it calls setProgress(), which updates a progress bar and calls getProgress() again until the progress percentage reaches 100%).
So right after I call sendNewsletter() I call getProgress() on the same RemoteObject.
sendNewsletter() can take several minutes to complete (in my tests, sending 4 emails takes about 4 seconds, so I think sending thousands of emails will take much longer!), and the trouble I'm encountering is that getProgress()'s result arrives only after sendNewsletter() finishes its execution, while what I would like to achieve is:
-call sendNewsletter()
-while sendNewsletter() does its stuff, call getProgress() several times in order to get the progress percentage:
What I've got now:
call to sendNewsletter()----------------------->response
call to getProgress()------------------------------------->response after sendNewsletter()
What I want to achieve:
sendNewsletter()------------------------------------------------------------------>response()
getProgress()--->response, getProgress() again--->response-->getProgress()-->response-->etc...
I read many posts on how to work around this problem, but no solution worked for me.
I tried to "emulate" two different channels by creating two remote objects with the endpoint set once to gateway.php?parallel=0 and once to gateway.php?parallel=1, but Flash Builder still sends everything in one big request and gets the response in one big HTTP packet (I need two different packets, since sendNewsletter takes ages to complete compared to getProgress).
I also tried delaying the call to getProgress() after sendNewsletter() with a 500ms Timer; Flash Builder then makes two different calls (I can see them in Firebug), but the getProgress() call still returns only after sendNewsletter() anyway.
I also tried to call sendNewsletter this way
this.myNewsletter.getOperation("sendNewsletter").send(idNewsletter)
this.myNewsletter.getOperation("sendNewsletter").cancel()
in order to make Flash Builder forget about the response, but no way!!!
So far the only workaround I've found is creating a plain HTTPService that refers to a PHP script which instantiates the Newsletter class and calls the getProgress() method.
By using two different channels I can call the getProgress HTTPService while sendNewsletter() is being executed. It works, but I don't like it, and I don't want to create an HTTPService for each method I need to call in the background, so I want to achieve this with remote objects only.
Anyone has addressed the same problem?
You Flash builder guru, I know you're around, please help me!!!!!
Thanks in advance!!!
Bye,
Luke
P.S.
Sorry if this post is a little long, but the situation is quite complicated.
I don't know exactly what you want to do...
But when working with a RemoteObject, it is a best practice to use a Responder to handle responses that arrive in parallel from a single RemoteObject.
So try adding a responder to your service calls, like:
remoteObject.methodCall().addResponder(new YourResponder(resultEvent, faultEvent));
When a specific response comes back, it will be handled by its own custom responder.
That way you will be able to handle each response separately.
I'm maintaining an event-sourced application that went far off the road I'm afraid.
In one case a command is received by an aggregate root that publishes an event that is handled by an event handler that needs to do 2 things:
send a command (cmd1) to another aggregate root that will publish an event that will create a number of sagas, each firing off some commands that are eventually handled by a number of aggregates
send a second command (cmd2) that will also lead to all sorts of command/event/command sequences.
In schematic form:
cmd0 -> AR0 -> evt0 -> evtHandler -> cmd1 -> AR1 -> evt1 -> saga stuff and more cmds and evts
                                 |-> cmd2 -> AR2 -> evt2 -> more saga stuff, cmds and evts
Everything happens in the same thread and everything happens in 1 transaction started at the first command handling.
Now the goal: all events, sagas and aggregate calls originating from the first command (cmd1) should happen first, and only then should all events, sagas and aggregate calls originating from the second command (cmd2) happen.
Here's the observation: cmd1 reaches AR1, which publishes evt1, but after that cmd2 reaches AR2, publishing evt2. All other events and commands originating from cmd1 are mingled with those from cmd2.
First I thought I could get away with it using the UnitOfWork but even explicitly creating a separate unit of work for handling cmd1 didn't solve the problem. Looking at the implementation in AbstractEventBus I see that the events are simply merged in the parent unit of work and thus end up being merged with the ones originating from cmd2.
Here's the question: Is there a way to first call cmd1 and wait until all effects originating from that command are handled before calling cmd2 while still preserving the transactional atomicity that I currently have?
To be completely honest with you Jan, the best would be if the components within your application didn't rely too much on that order.
It essentially means you have distinct message handling components, which in essence could be different microservices, but they are all tied together because the order is important.
Ideally, you'd set up your components to work on their own.
So, aggregates handle a command and publish the result, done.
Sagas react to events, regardless of where they come from, and react on them with actions (e.g. command dispatching).
Embracing eventual consistency would help here, as it drops the entire requirement of waiting for one process to complete.
From a theoretical stance, that would be my response.
From a more pragmatic corner, looking at your question, I'd like to point out that this sounds like a rabbit hole you are going into. You don't only want cmd1 handling to be done, you want event handling on all sagas to be resolved, including the commands coming out of those too, correct? Who's to tell what the number of sagas is? Or how many of the commands those sagas dispatch need to be taken into account? These criteria will likely change over time, adding more and more stuff which needs to happen "in a single transaction".
Well, yes, there are ways to wait for processing from some parts, to pull them all into a single transaction. But to be honest with you, I wouldn't recommend taking that route, as it will only make using such a message-based system more and more complex.
The crux is what "all effects" means. From the point of dispatching that command, you should only care whether that exact command handles successfully, yes or no, and that's where the concerns should end.
I know this does not give you a simple programmatic solution, as you need to adjust the design. But I think decoupling is the only right way to go here.
That's my two cents to the situation, hope this helps you further in any way Jan.
Message Anticipation explanation update
In essence, the messages you'd use in an Axon application form a boundary, a boundary after which the components essentially don't have a clue what is going to handle those messages. The behaviour per message type differs a little, but this might clarify what options you have, too:
Commands - Commands are consistently routed to a single handler, on a single instance. Furthermore, you can anticipate a response, in the form of an OK or a NOK. An OK means the handler returned void or the identifier of a created entity (like the aggregate itself). A NOK is typically an exception thrown from your command handling method, signalling that something went wrong or that the command simply couldn't be executed, and that the dispatching end should be told about it.
Events - Events are broadcast to any component which has subscribed itself to the EventBus as being capable of handling a given event. Note that event handling is segregated in time from the actual publication point of the event. This means there is no way results from event handling could (or should) be returned to the dispatcher of an event.
Queries - Query messages can be routed in several forms. A query can go to the single component best suited to answer it (a point-to-point query), or it can be dispatched to several handlers whose results are aggregated (a scatter-gather query). Lastly, you can subscribe to query models by doing a "subscription query", which is essentially a point-to-point query followed up by a Flux of updates. Clearly, query dispatching means you receive a result from some component; it's just that you have freedom in the type of query you do. Any assurance about the "up-to-date"-ness of a query response should be part of the implementation of the query being sent and of how it is handled by the @QueryHandler annotated method.
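To make the command and query sides a bit more concrete, here is a minimal Java sketch of what anticipating those results looks like from the dispatching side, assuming Axon 4's CommandGateway and QueryGateway. The Cmd1, Cmd2, SomeQuery and SomeSummary classes are hypothetical placeholders, not types from your application or from Axon:

import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.messaging.responsetypes.ResponseTypes;
import org.axonframework.queryhandling.QueryGateway;

import java.util.List;
import java.util.concurrent.CompletableFuture;

public class DispatchingExample {

    private final CommandGateway commandGateway;
    private final QueryGateway queryGateway;

    public DispatchingExample(CommandGateway commandGateway, QueryGateway queryGateway) {
        this.commandGateway = commandGateway;
        this.queryGateway = queryGateway;
    }

    public void dispatch() {
        // Command, blocking style: returns once the command handler has finished.
        // An exception thrown by the handler surfaces here as the "NOK" case.
        Object createdId = commandGateway.sendAndWait(new Cmd1("some-id"));
        System.out.println("cmd1 handled, result: " + createdId);

        // Command, async style: the future completes with the handler's result or exception.
        // Note this only covers the direct handling of cmd2, not any downstream sagas or events.
        CompletableFuture<Object> cmd2Result = commandGateway.send(new Cmd2("some-id"));
        cmd2Result.whenComplete((ok, nok) -> {
            if (nok != null) {
                System.err.println("cmd2 handling failed: " + nok.getMessage());
            }
        });

        // Point-to-point query against a (hypothetical) projection; scatter-gather and
        // subscription queries are dispatched through the same gateway.
        CompletableFuture<List<SomeSummary>> view = queryGateway.query(
                new SomeQuery(),
                ResponseTypes.multipleInstancesOf(SomeSummary.class));
        view.thenAccept(list -> System.out.println("Projection rows: " + list.size()));
    }

    // Hypothetical message and read-model classes, only here to keep the sketch self-contained.
    public static class Cmd1 { public final String id; public Cmd1(String id) { this.id = id; } }
    public static class Cmd2 { public final String id; public Cmd2(String id) { this.id = id; } }
    public static class SomeQuery { }
    public static class SomeSummary { }
}

The point of the sketch is that the dispatching side only gets feedback about the direct handling of each message; everything triggered further downstream stays behind the message boundary.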
Hope this provides some additional clarity at what each of the messages do in an Axon application!
I am working on an application which requires video frame capture from different frame-grabber cards. I am using the DirectShow ISampleGrabberCB::SampleCB callback to receive a pointer to each new frame. Now I want to know when exactly this callback gets called. Is it guaranteed that it will automatically be called every time the frame-grabber receives a new frame?
I was trying 120Hz signals with various resolutions, but this callback is only getting called 50-55 times per second. So there is a possibility that my frame-grabber is not able to capture at that rate (although theoretically it's capable). I want to find out whether this callback is the bottleneck or the frame-grabber card is.
Thank You
SampleCB is called immediately from the streaming thread, and you get one call for every frame. While in the callback, you block further streaming; that is, you need to return control from your callback in order for streaming to resume (in particular, if your callback is "slow", it can reduce the effective FPS).
I have a Java program which connects to the internet and sends files (emails with attachments, SSL, JavaMail).
It sends only one email at a time.
Is there a way that my program could track the network traffic it itself is generating?
That way I could track progress of emails being sent...
It would also be nice if it was cross-platform solution...
Here's another approach that only works for sending messages...
The data for a message to be sent is produced by the Message.writeTo method and filtered through various streams that send it directly out the socket. You could subclass MimeMessage, override the writeTo method, wrap the OutputStream with your own OutputStream that counts the data flowing through it (similar to my other suggestion), and reports that to your program. In code...
public class MyMessage extends MimeMessage {
    ...

    public void writeTo(OutputStream os, String[] ignoreList)
            throws IOException, MessagingException {
        super.writeTo(new MyCountingStream(os), ignoreList);
    }
}
If you want percent completion, you could first use Message.writeTo to write the message to a stream that does nothing but count the amount of data being written, while throwing away the data. Then you know how big the message really is, so while the message is being sent you can tell what percentage of it has gone out.
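To make that concrete, here's a rough sketch of what such a MyCountingStream could look like, including the "measure the total size first" trick. The class and its measureSize helper are illustrative names for this answer, not JavaMail API:

import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import javax.mail.MessagingException;
import javax.mail.internet.MimeMessage;

// Counting wrapper: counts every byte written through it, so progress can be polled while sending.
public class MyCountingStream extends FilterOutputStream {

    private volatile long count; // not strictly atomic, but good enough for progress reporting

    public MyCountingStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);   // pass the chunk straight through, then count it
        count += len;
    }

    /** Bytes written so far; poll this from your UI code to update a progress bar. */
    public long getCount() {
        return count;
    }

    // The "measure first" trick from above: write the message into a counting stream
    // that throws the data away, so you know the total size and can compute a percentage.
    public static long measureSize(MimeMessage message) throws IOException, MessagingException {
        MyCountingStream sizer = new MyCountingStream(new OutputStream() {
            @Override
            public void write(int b) { /* discard */ }

            @Override
            public void write(byte[] b, int off, int len) { /* discard */ }
        });
        message.writeTo(sizer);
        return sizer.getCount();
    }
}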
Hope that helps...
Another user's approach is here:
Using JProgressBar with Java Mail ( knowing the progress after transport.send() )
At a lower level, if you want to monitor how many bytes are being sent, you should be able to write your own SocketFactory that produces Sockets that produce wrapped InputStreams and OutputStreams that monitor the amount of data passing through them. It's a bit of work, and perhaps lower level than you really want, but it's another approach.
I've been meaning to do this myself for some time, but I'm still waiting for that round tuit... :-)
Anyway, here's just a bit more detail. There might be gotchas I'm not aware of once you get into it...
You need to create your own SocketFactory class. There's a trivial example in the JavaMail SSLNOTES.txt file that delegates to another factory to do the work. Instead of factory.createSocket(...), you need to use "new MySocket(factory.createSocket(...))", where MySocket is a class you write that overrides all the methods to delegate to the Socket passed in the constructor, except the getInputStream and getOutputStream methods, which have to use a similar approach to wrap the returned streams with stream classes you create yourself.

Those stream classes then have to override all the read and write methods to keep track of how much data is being transferred, and make that information available however you want to the code that monitors progress. Before you do an operation that you want to monitor, you reset the count; then as the operation progresses, the count is updated. What it won't give you is a "percent completion" measure, since you have no idea how much low-level data needs to be sent to complete the operation.
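For what it's worth, here is a minimal, untested sketch of the shape of that. All class names are mine; a complete version would need to delegate every remaining Socket method and be wired into the JavaMail session the way SSLNOTES.txt describes:

import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicLong;

import javax.net.SocketFactory;

// Hypothetical byte-counting socket factory: wraps the sockets created by a delegate
// factory so that everything written to them is counted.
public class CountingSocketFactory extends SocketFactory {

    public static final AtomicLong BYTES_SENT = new AtomicLong();

    // For SSL you would delegate to javax.net.ssl.SSLSocketFactory.getDefault() instead.
    private final SocketFactory delegate = SocketFactory.getDefault();

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return new CountingSocket(delegate.createSocket(host, port));
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localHost, int localPort)
            throws IOException {
        return new CountingSocket(delegate.createSocket(host, port, localHost, localPort));
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return new CountingSocket(delegate.createSocket(host, port));
    }

    @Override
    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort)
            throws IOException {
        return new CountingSocket(delegate.createSocket(address, port, localAddress, localPort));
    }

    // Only the interesting methods are shown; a real wrapper must delegate every other
    // Socket method (connect, setSoTimeout, isConnected, ...) to 'inner' as well.
    private static final class CountingSocket extends Socket {
        private final Socket inner;

        CountingSocket(Socket inner) {
            this.inner = inner;
        }

        @Override
        public OutputStream getOutputStream() throws IOException {
            return new FilterOutputStream(inner.getOutputStream()) {
                @Override
                public void write(int b) throws IOException {
                    out.write(b);
                    BYTES_SENT.incrementAndGet();
                }

                @Override
                public void write(byte[] b, int off, int len) throws IOException {
                    out.write(b, off, len);
                    BYTES_SENT.addAndGet(len);
                }
            };
        }

        @Override
        public InputStream getInputStream() throws IOException {
            return inner.getInputStream(); // wrap similarly if you also want incoming bytes
        }

        @Override
        public synchronized void close() throws IOException {
            inner.close();
        }
    }
}

Resetting BYTES_SENT to zero just before calling Transport.send and polling it from another thread gives a rough "bytes sent so far" figure; as noted above, it can't give you a percentage by itself.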
I have a workflow that contains a Pick activity. Each PickBranch is triggered by a WCF request. The triggered branch then sends a response to the request and performs an Action activity. But the behaviour I'm seeing indicates that the response is not sent until the Action activity is complete, which causes the original request to time out, depending on how long the Action activity takes.
In the PickBranch above, I'm adding work orders to a mobile database. Each work order takes up to 16 seconds to be added to the database, so the more work orders there are, the more likely it is that the original request will time out. What am I doing wrong?
Ok, I think I have a resolution for this. As per Maurice's answer here, I added a Delay activity following the SendReplyToReceive and the workflow then started behaving as expected.
Just tested this and it works fine. If I have a Pick with a send and receive inside a trigger and a delay inside the action, the reply is received immediately.
Are you sure the Request property on your SendReply activity is set correctly?
Patrick is still right: you should implement your database activity as an AsyncCodeActivity, but that would not be the reason for your reply being delayed.
In my experience, setting PersistBeforeSend on the SendReplyToReceive to True fixes this problem. Putting a Persist block after the SendReplyToReceive also helps.
This is working as intended. If the operations take such a long time, would you be better served by calling them asynchronously? Check out AsyncCodeActivity here:
http://msdn.microsoft.com/en-us/library/system.activities.asynccodeactivity.aspx
And by unresponsive I mean that after the first three successful connections, the fourth connection is initiated and nothing happens: no crashes, no delegate functions called, no data sent out (according to Wireshark)... it just sits there?!
I've been beating my head against this for a day and a half...
iOS 4.3.3
latest Xcode; it happens the same way on a real device as in the simulator.
I've read all the NSURLConnection posts in the Developer Forums... I'm at a loss.
From my application delegate, I kick off an async NSURLConnection according to Apple docs, using the App Delegate as the delegate for the NSURLConnection.
From my applicationDidFinishLaunching... I trigger the initial two queries, which successfully return XML that I then pass off to an NSOperationQueue to be parsed.
I can even loop, repeating these queries with no issues; I repeated them 10 times and they worked just fine.
The next series of five queries is triggered via user input. The first query runs successfully and returns the correct response; then the next query is created and, when used to create an NSURLConnection (just like all the others), it just sits there?!
The normal delegate calls I see on all the other queries are never seen.
Nothing goes over the wire according to Wireshark?
I've reordered the queries and, regardless of which query it is, after the first one the next one fails (fails as in does nothing: no errors or aborts, it just sits there).
It's obviously in my code, but I am blind to it.
So what other tools can I use to debug the async NSURLConnection... how can I tell what it's doing, if anything?
Any suggestions for debugging an NSURLConnection, or other ways to accomplish the same thing an NSURLConnection does?
Thanks for any help you can offer...
OK tracked it down...
I was watching the stack dump in each thread as I was about to kick off each NSURLConnection. The first three were all in the main thread as expected... the fourth one ended up in a new thread?! In one of my NSOperation threads?!?!
As it turns out, I inadvertently added logic(?) that started one of my NSURLConnections in the last NSOperation's call to didFinishParsing:, so the NSURLConnection was started asynchronously and then the NSOperation terminated... >.<
So I'll move the NSURLConnection out of didFinishParsing: so it stays on the main run loop, and I should be good!