Problem consuming ActiveMQ messages from Flex client

I am unable to consume messages sent via ActiveMQ from my Flex client. Sending messages via the Producer seems to work, and I can see from the properties on the Consumer object that the Flex client is connected and subscribed. However, the "message" event on the Consumer is never fired, so it seems the messages are not received.
When I look in the ActiveMQ console, I can see the number of subscribers, the number of messages sent and the number of messages received. The strange thing is that the received-messages counter increments and I can also trace the log statements in the Tomcat console, but still no messages arrive in the Flex client.
Any ideas?

After rebuilding my app from scratch with a fresh install of Tomcat, everything seems to work. Maybe this was caused by the fact that I was using the BlazeDS Turnkey version that contains a preconfigured instance of Tomcat.
BTW: This is a great tutorial: http://mmartinsoftware.blogspot.com/2008/05/simplified-blazeds-and-jms.html

Related

Rebus retry policy when RabbitMQ is temporarily down

I have a dockerized microservice architecture where I am using Rebus with RabbitMQ as the message bus.
One container is running RabbitMQ. Other containers are running services that communicate with each other via Rebus/RabbitMQ.
I want my solution to be resilient to container restarts, so if, for example, the RabbitMQ container restarts, I expect the other services to be unaffected by that.
I expect that messages sent while RabbitMQ is down are queued up for delivery by Rebus in the sending service, and that they are delivered when the RabbitMQ connection is restored.
To verify this, I ran the following test scenario:
Service A sends a message to service B via Rebus and RabbitMQ. That works fine.
I stop the RabbitMQ container.
Service A sends a message to service B via Rebus and RabbitMQ. That fails because RabbitMQ is unavailable.
I start the RabbitMQ container again.
I can see that Rebus in my services automatically reconnects to RabbitMQ when it is up again. That is as expected.
Now that the RabbitMQ connection is restored I would expect that Rebus sends the pending message from Service A to service B, but it does not.
Is this not expected behaviour of Rebus? If not, can I enable this feature?
I have read this topic https://github.com/rebus-org/Rebus/wiki/Automatic-retries-and-error-handling
and tried to configure Rebus like this:
Configure.With(...)
.Options(b => b.SimpleRetryStrategy(maxDeliveryAttempts: 10))
.(...)
but with no luck.
The "delivery attempts" you're configuring is how you configure how many Rebus should try to consume a received message before giving up (i.e. moving it to the error queue).
If Rebus loses its connection to the broker, it will not be able to receive anything for the entire duration of the outage, so stopping RabbitMQ should effectively pause all message processing (possibly with exceptions thrown for messages that were being handled at the instant RabbitMQ went away).
Since no Rebus handlers will be running while RabbitMQ is down, the outgoing messages you have to deal with are those sent from other places, e.g. messages sent/published from a web request.
(...) I expect that messages sent while RabbitMQ is down are queued up for delivery by Rebus (...)
...but Rebus cannot queue anything up, because RabbitMQ is down(*).
The natural thing to do for Rebus in this situation is to give you, the caller, the responsibility of deciding what to do about the problem.
In .NET, that usually means throwing an exception back at you. 🙂
This leaves you with the option of:
performing some alternative action, or
retrying some more times, or
whatever makes sense in that particular situation
A simple approach to building some resilience into your system in this case would be to use something like Polly to try sending outgoing messages multiple times in case the send fails.
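For illustration only (this sketch is mine, not from the answer or the Rebus wiki), a Polly-based retry wrapper around an outgoing send might look roughly like this; bus and message are hypothetical placeholders for your own IBus instance and message type:

// Hedged sketch: retry an outgoing Rebus send with exponential back-off so a
// short RabbitMQ outage does not immediately fail the caller (e.g. a web request).
using System;
using System.Threading.Tasks;
using Polly;
using Rebus.Bus;

public static class ResilientSend
{
    public static Task SendWithRetryAsync(IBus bus, object message)
    {
        // Retry up to 5 times, waiting 2, 4, 8, 16 and 32 seconds between attempts,
        // whenever the send throws (for example because the broker is unreachable).
        var retryPolicy = Policy
            .Handle<Exception>()
            .WaitAndRetryAsync(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        return retryPolicy.ExecuteAsync(() => bus.Send(message));
    }
}

If all attempts fail, the final exception still reaches the caller, so you keep the ability to decide what to do about the problem.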
I hope that makes sense. Please let me know if anything needs to be elaborated on. 🙂
(*) Of course Rebus could have "cheated" and queued outgoing messages up in memory, but that would make it very hard for you to write resilient code, because you would not know whether an outgoing message had been safely delivered to the broker, or whether it was just sitting in memory waiting to be saved somewhere.

Will the Spring Kafka API automatically attempt reconnection in case of broker failure?

I have a question regarding the Spring Kafka broker failover mechanism. I was testing by deliberately bringing the brokers down, and I kept getting "Connection to node -1 could not be established. Brokers may not be available" warnings continuously as soon as the brokers went down. I understand that this is because of the broker unavailability. Is there a support document that confirms whether reconnection happens automatically in the API itself?
This is all handled by the underlying kafka-clients code; it is outside of Spring's responsibility; Spring is not informed of the situation.
The client will reconnect when the brokers come back up.

SQL to resume suspended messages in order

We have an upcoming deploy for a system that processes a lot of messages through BizTalk. Since those messages are cumulative updates, they need to be queued up during the deployment outage and then processed in order when the deploy is finished. Since there may be a large number of them, it's difficult to do this manually.
One possible solution is to leave the send port stopped and let the messages suspend. We can then resume them in order when the deployment is completed.
Is it possible to run a SQL script (or a tool) against the BizTalk messagebox database that will resume suspended messages, for a specific port, in order of receipt?
If you have an ordered requirement (you either do or don't), then the Send Port should be marked for Ordered Delivery.
If so, then when you Start a Stopped Send Port, the messages will be processed in the same order they were submitted.
If you stop the port (but leave it subscribed) and start it again afterwards, it should resume the messages itself; if not, it is simple enough to go into the Administration Console and batch-resume them.
However, if the response messages of the send port are subscribed to by running Orchestrations, you will not be able to un-deploy the Orchestrations until they have all completed, so stopping the send port would not work in this scenario.
One option, if the initiating port is a one-way receive, is to stop the receive location and let everything complete. You can then stop the application, redeploy and restart it, and the send port will pick up all the waiting messages to process.
If the above is not possible, you may want to look at doing a side-by-side deployment, where you increment the version numbers of all the assemblies in the solution so that both versions can be deployed at the same time. You can then allow the old version to finish running while the new version processes any new messages.
The better option is to send the messages to MSMQ; usually there is no extra coding required for this. You can just route messages to MSMQ using the MSMQ adapter and then, after deployment, receive them in order, since the MSMQ adapter allows ordered receive. Just make sure you do a small test in your QA environment before doing it in production.

Reliable WCF Service with MSMQ + Order processing web application. One way calls delivery

I am trying to implement Reliable WCF Service with MSMQ based on this architecture (http://www.devx.com/enterprise/Article/39015)
A message may be lost if the queue is not available (even a cluster doesn't provide zero downtime).
Take a look at this simple order-processing workflow:
A user enters credit card details and makes a payment
The application receives a success result from the payment gateway
The application sends a message as a "fire and forget"/"one way" call to a backend service via the WCF MSMQ binding
The user is redirected to the "success" page
The message is stored in a REMOTE transactional queue (Windows cluster)
The backend service dequeues and processes the message, completes the complex order-processing workflow and, as a result, sends an email confirmation to the user
Everything looks fine, as expected.
What I cannot understand is how we can guarantee that all "one way" calls will be delivered to the queue.
Duplex communication is not an option, because the user should be redirected to the result web page ASAP.
Imagine the case where a user receives the "success" page with wording like "… Your payment was made, your order has started processing, and you will receive email notifications later …" but the message itself is lost.
How can durability be implemented for step 3?
One possible solution that I can see is:
3a. Create a database record with the transaction details marked as incomplete, just to have some record of the transaction. This record can be used as a starting point to process the lost message in case the message is not saved in the queue (sketched below).
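As a rough illustration of 3a (my sketch, not part of the original question; the IOrderBackend contract, the netMsmqBinding endpoint behind it and the data-access helper are all hypothetical):

// Sketch: persist a "pending" record before the one-way send, so a background
// reconciliation job can detect orders whose queued message never arrived.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderBackend
{
    [OperationContract(IsOneWay = true)]
    void ProcessOrder(Guid orderId, decimal amount);
}

public class OrderSubmitter
{
    private readonly IOrderBackend _backend;

    public OrderSubmitter(IOrderBackend backend)
    {
        _backend = backend;
    }

    public void Submit(Guid orderId, decimal amount)
    {
        // 1. Write the transaction to the web application's own database first,
        //    marked as incomplete, so it is never known only to MSMQ.
        SaveOrderAsPending(orderId, amount);

        // 2. Fire-and-forget send to the backend over the MSMQ binding. If the
        //    message is lost, a reconciliation job can find stale pending
        //    records and retry the send or raise an alert.
        _backend.ProcessOrder(orderId, amount);
    }

    private void SaveOrderAsPending(Guid orderId, decimal amount)
    {
        // Placeholder for your data access, e.g.
        // INSERT INTO Orders (Id, Amount, Status) VALUES (@id, @amount, 'Pending')
    }
}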
I read this post
The main thing to understand about transactional MSMQ is that there are three distinct transactions involved in a transactional send to a remote queue.
The sender writes the message to a local queue.
The queue manager on the sender's machine transmits the message across the wire to the queue manager on the recipient's machine
The receiver service processes the queue message and then removes the message from the queue.
But it doesn't solve the described issue - as far as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one.
But it doesn't solve the described issue - as far as I know, WCF netMsmqBinding doesn't use a local queue to send messages to a remote one.
Actually this is not correct. MSMQ always sends to a remote queue via a local queue, regardless of whether you are using WCF or not.
If you send a message to a remote queue and then look at Message Queuing in Server Management, you will see under Outbound queues that a queue has been created with the address of the remote queue. This is a temporary queue which is automatically created for you. If the remote queue were for some reason unavailable, the message would sit in the local queue until it became available, and then it would be transmitted.
So durability is provided because of the three-phase commit (the first step is sketched below):
transactionally write message locally
transactionally transmit message
transactionally receive and process message
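As a hedged illustration of that first step (my sketch, not from the answer; the machine name and queue name are made up), a transactional send with System.Messaging hands the message to the local queue manager only when the ambient transaction commits, and the queue manager then stores and forwards it to the remote queue:

// Minimal sketch: transactionally write a message destined for a remote queue.
using System.Messaging;
using System.Transactions;

class Sender
{
    static void Main()
    {
        // FormatName address of a remote transactional queue. MSMQ places the
        // message in a local outgoing queue and transmits it to the remote
        // queue manager whenever that machine is reachable.
        var queue = new MessageQueue(@"FormatName:DIRECT=OS:remotehost\private$\orders");

        using (var scope = new TransactionScope())
        {
            // Enlists in the ambient TransactionScope; the message is handed to
            // MSMQ only if the transaction commits.
            queue.Send("order payload", MessageQueueTransactionType.Automatic);
            scope.Complete();
        }
    }
}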
There are instances where you may drop messages, for example if your message processing happens outside the scope of the dequeue transaction. There are also instances where it is not possible to know whether processing was successful (e.g. a back-end web service call times out), and of course you could have a badly formed message which will never be processed successfully, but in all cases it should be possible to design for these.
If you're using public queues in a clustered environment then I think there may be more scope for failure, as clustering MSMQ introduces complexity (I have not really used it, so I don't know), so try to avoid it if possible.

BlazeDS+ActiveMQ: non-graceful disconnection of Flex client from a durable topic does not remove it from ActiveMQ

I'm trying to make a Flex-based desktop application consume messages from an ActiveMQ topic with a durable subscription, using the JMS bridge of BlazeDS. The basic scenario is as follows:
Messages are produced by other producers in the topic to which the Flex client is subscribed.
The Flex client may go offline from time to time, but it must receive all the messages it has missed while being offline when it connects to BlazeDS again. (Of course the Flex client connects with the same client ID every time).
It cannot be guaranteed that the Flex client is shut down gracefully.
Everything works fine if I explicitly disconnect my consumer on the Flex side by calling disconnect() - I do it in the exit handler of the application. However, due to #3 above, it is not guaranteed that disconnect() is called all the time. When the Flex client shuts down without calling disconnect(), it seems that the subscription of the "proxy JMS client" that BlazeDS creates and associates to the Flex client stays active towards ActiveMQ, so ActiveMQ still thinks that the client is logged in. When the Flex app starts up the next time, it is unable to log in to BlazeDS because ActiveMQ refuses its subscription, claiming that the client ID is already taken. Why is it so and what can I do here to ensure that BlazeDS makes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly?
More detailed information: some debugging revealed the following:
BlazeDS becomes aware of the termination of the Flex client because it prints a few exceptions to the console when in debug mode. The messages are as follows:
[BlazeDS]23:18:13.688 [WARN] Endpoint with id 'my-streaming-amf' is closing the streaming connection to FlexClient with id '71E6466F-D91F-201C-F60A-A6CB52F95D9F' because endpoint encountered a socket write error, possibly due to an unresponsive FlexClient.
ClientAbortException: java.net.SocketException: Broken pipe
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:319)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at org.apache.catalina.connector.Response.flushBuffer(Response.java:542)
at org.apache.catalina.connector.ResponseFacade.flushBuffer(ResponseFacade.java:279)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.handleFlexClientStreamingOpenRequest(BaseStreamingHTTPEndpoint.java:818)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.serviceStreamingRequest(BaseStreamingHTTPEndpoint.java:1055)
at flex.messaging.endpoints.BaseStreamingHTTPEndpoint.service(BaseStreamingHTTPEndpoint.java:460)
at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:353)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:737)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:299)
at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:963)
at org.apache.coyote.Response.action(Response.java:183)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:314)
... 20 more
[BlazeDS]23:18:13.689 [DEBUG] Streaming thread 'http-8400-1' for endpoint with id 'my-streaming-amf' is releasing connection and returning to the request handler pool.
[BlazeDS]23:18:13.689 [INFO] Number of streaming clients for FlexSession with id '5BC5E8D604A361BCA673B05AC624CCC1' is 0.
[BlazeDS]23:18:13.689 [DEBUG] Number of streaming clients for endpoint with id 'my-streaming-amf' is 0.
At this stage, the subscriptions are still shown on the ActiveMQ web admin interface as being active.
Killing BlazeDS (more precisely, the Tomcat server that hosts it) with kill -9 from the console makes ActiveMQ realize immediately that the "proxy JMS client" is gone, and it becomes offline on the ActiveMQ web admin interface. This made me conclude that BlazeDS is explicitly keeping the proxy JMS client alive, since kill -9 gives BlazeDS no chance to unsubscribe the client, yet the client still goes offline in ActiveMQ.
So, the question once again: What can I do here to ensure that BlazeDS makes the "proxy JMS client" offline in ActiveMQ when its real Flex counterpart terminates unexpectedly? Is this a bug in BlazeDS or am I just missing some hidden configuration setting that would make it work?
Version information: BlazeDS 4.0, ActiveMQ 5.5.0, both freshly downloaded today. I'm using the Tomcat server in the BlazeDS turnkey but ActiveMQ is installed separately because the BlazeDS turnkey ships with ActiveMQ 4.1.1 only. By the way, that version of ActiveMQ has the same issue.
The problem is that there is no way for BlazeDS to detect that your Flex client was shut down; you will have to implement your own mechanism. My suggestion is to use a heartbeat implemented with messaging: if no message is received from the client within a given interval, you can assume that the Flex client is gone and do the disconnect (or you can use the session timeout mechanism on the server and do the disconnect on session expiry).
What you have seen (the exception caught when the streaming channel is closed) is not enough to say with 100% certainty that the Flex client is gone. The streaming is implemented using an HTTP connection kept open forever (used to send server messages) and periodic HTTP POST calls (initiated by the client to send messages). In some networks the firewall can decide to kill the HTTP connection after a couple of seconds, and you will receive the same error as the one you posted. However, that does not mean that the Flex client was killed - the Flex client can use a fallback strategy and switch to short/long polling in this case. Actually, it would be a bug if BlazeDS automatically did the JMS disconnect in this case.
