I want to set a retry policy for an HTTP call to handle occasional network failures, so I configured the following:
<http:connector name="HTTP_Retry" cookieSpec="netscape" validateConnections="true" sendBufferSize="0" receiveBufferSize="0" receiveBacklog="0" clientSoTimeout="10000" serverSoTimeout="10000" socketSoLinger="0" doc:name="HTTP\HTTPS">
<reconnect frequency="1000" count="3"/>
</http:connector>
....
<http:outbound-endpoint address="http://localhost:18081/mule/TheCreditAgencyService" doc:name="HTTP" exchange-pattern="request-response" method="POST" connector-ref="HTTP_Retry"/>
But the retry policy is not applied. I even configured a custom retry policy; I debugged the application and set a breakpoint, but execution never enters my custom class.
I read the documentation, but there is only an example for JMS.
Any tips? Did I misconfigure something?
Thanks in advance!
The ill-named retry policies take care of reconnecting connectors, not resending messages in case of failure.
On a disconnected connector like the HTTP one, a retry policy has no effect. It's useful on connectors like JMS, where a permanent connection is maintained towards a broker, a connection that needs reconnecting in case of failure.
What you are after is the until-successful routing message processor.
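As a minimal sketch (assuming Mule 3.x; the retryStore bean, which would live at the top level of the configuration, and the retry values are illustrative), wrapping the outbound endpoint in until-successful could look like this:

<spring:bean id="retryStore" class="org.mule.util.store.SimpleMemoryObjectStore"/>

<until-successful objectStore-ref="retryStore" maxRetries="3" secondsBetweenRetries="1">
    <!-- the whole HTTP call is retried, not just the connector's connection -->
    <http:outbound-endpoint address="http://localhost:18081/mule/TheCreditAgencyService"
                            exchange-pattern="request-response" method="POST"
                            connector-ref="HTTP_Retry"/>
</until-successful>

Note that in older Mule 3 versions until-successful processes the message asynchronously, so the caller does not see the response of the retried call.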
Related
I have a dockerized microservice architecture where I am using Rebus with RabbitMQ as message bus.
One container is running RabbitMQ. Other containers are running services that communicate with each other via Rebus/RabbitMQ.
I want my solution to be resilient to container restarts so if for example the RabbitMQ container restarts I expect the other services to be unaffected by that.
I expect that messages sent while RabbitMQ is down are queued up for delivery by Rebus
in the sending service and that they are delivered when the RabbitMQ connection is restored.
To verify that I run this test scenario:
Service A sends a message to service B via Rebus and RabbitMQ. That works fine.
I stop the RabbitMQ container.
Service A sends a message to service B via Rebus and RabbitMQ. That fails because RabbitMQ is unavailable.
I start the RabbitMQ container again.
I can see that Rebus in my services automatically reconnects to RabbitMQ when it is up. That is as expected.
Now that the RabbitMQ connection is restored I would expect that Rebus sends the pending message from Service A to service B, but it does not.
Is this not the expected behaviour of Rebus? If not, can I enable this feature?
I have read this topic: https://github.com/rebus-org/Rebus/wiki/Automatic-retries-and-error-handling
and tried to configure Rebus like this:
Configure.With(...)
.Options(b => b.SimpleRetryStrategy(maxDeliveryAttempts: 10))
.(...)
but with no luck.
The "delivery attempts" you're configuring is how you configure how many Rebus should try to consume a received message before giving up (i.e. moving it to the error queue).
If Rebus loses its connection to the broker, it will not be able to receive anything for the entire duration of the outage, so stopping RabbitMQ should effectively pause all message processing (possibly with some exceptions in all messages being handled at the instant where RabbitMQ goes away).
Since no Rebus handlers will be running then, while RabbitMQ is down, you will have to deal with outgoing messages sent from other places, e.g. like messages sent/published from a web request.
(...) I expect that messages sent while RabbitMQ is down are queued up for delivery by Rebus (...)
...but Rebus cannot queue anything up, because RabbitMQ is down(*).
The natural thing to do for Rebus in this situation is to give you, the caller, the responsibility of deciding what to do about the problem.
In .NET, you usually do that by throwing an exception back at the caller. 🙂
This leaves you with the option of
performing some alternative action, or
retrying some more times, or
whatever makes sense in that particular situation
A simple approach to building some resilience into your system in this case would be to use something like Polly to try sending outgoing messages multiple times in cases where it could fail.
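As a minimal sketch (assuming Polly v7 and an injected Rebus IBus; ResilientSender is a made-up wrapper, and the retry count and backoff are illustrative), that could look like this:

using System;
using System.Threading.Tasks;
using Polly;
using Rebus.Bus;

public class ResilientSender
{
    private readonly IBus _bus;

    public ResilientSender(IBus bus)
    {
        _bus = bus;
    }

    public Task SendWithRetryAsync(object message)
    {
        // Retry the send up to 5 times with exponential backoff; if all
        // attempts fail, the last exception propagates to the caller.
        var retryPolicy = Policy
            .Handle<Exception>() // in practice, narrow this to the transport's connection exceptions
            .WaitAndRetryAsync(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        // Each attempt calls bus.Send again
        return retryPolicy.ExecuteAsync(() => _bus.Send(message));
    }
}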
I hope that makes sense. Please let me know if anything needs to be elaborated on. 🙂
(*) Of course Rebus could have "cheated" and queued outgoing messages up in memory, but that would make it very hard for you to write resilient code, because you would not know whether an outgoing message had been safely delivered to the broker, or whether it was just sitting in memory waiting to be saved somewhere.
Currently we have a Spring Integration application which accepts HL7 messages. The flow is as follows.
There is a message-driven JMS inbound adapter which accepts the messages through an ActiveMQ queue.
Then the message goes through a series of transformations and finally ends up in a service activator component that performs the necessary business logic.
So far everything looks good. Recently the client requested an acknowledgement for each message, including its status. There can be two scenarios for a received message:
Message executes successfully
Message fails with an exception if the required criteria are not satisfied.
So we are thinking of implementing an acknowledgement mechanism that sends the acknowledgement back to the client, either through the above-mentioned ActiveMQ queue or via a TCP port.
Are there any proven ways/patterns of doing this kind of acknowledgement? Are there any techniques Spring Integration provides to achieve this kind of scenario?
Appreciate your kind reply
Regards,
Keth
See the inbound gateway.
If the sender sets a replyTo header, the reply will be sent there; if not, you can configure a default replyTo destination.
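As a rough sketch (assuming the int-jms namespace; the queue, channel, and bean names are illustrative, not from your configuration), replacing the message-driven adapter with a JMS inbound gateway could look like this:

<!-- The gateway sends the service activator's return value back as the
     acknowledgement: to the sender's replyTo destination if set,
     otherwise to the default reply queue configured here. -->
<int-jms:inbound-gateway id="hl7Gateway"
                         connection-factory="connectionFactory"
                         request-destination-name="hl7.requests"
                         default-reply-queue-name="hl7.acks"
                         request-channel="hl7InChannel"/>

<!-- ... existing transformers ... -->

<int:service-activator input-channel="hl7InChannel"
                       ref="hl7Service" method="process"/>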
If HTTP is connectionless, how does the ASP.NET response property HttpResponse.IsClientConnected detect whether the client is connected or not?
HTTP is not "connection-less" - you still need a connection to receive data from the server; more correctly, HTTP is stateless. Applications running on-top of HTTP will most likely actually be stateful, but HTTP itself is not.
"Connectionless" can also refer to a system using UDP as the transport instead of TCP. HTTP primarily runs over TCP and pretty much every real webserver expects, and returns, TCP messages instead of UDP. You might see HTTP-like traffic in UDP-based protocols like UPnP, but because you want your webpage to be delivered reliably, TCP will always be used instead of UDP.
As for IsClientConnected: when you access that property, it calls into the current HttpWorkerRequest, which is an abstract class implemented by the current host environment.
IIS 7+ implements it such that if a TCP disconnect message was previously received (which sets a field), the property returns false.
The ISAPI implementation (IIS 6) instead calls into a function within IIS that informs the caller if the TCP client on the current request/response context is still connected, though presumably it works on the same basis: when the webserver receives a TCP timeout, disconnect or connection-reset message it sets a flag and lets execution continue instead of terminating the response-generator thread.
Here's the relevant source code:
HttpResponse.IsClientConnected: http://referencesource.microsoft.com/#System.Web/HttpResponse.cs,80335a4fb70ac25f
IIS7WorkerRequest.IsClientConnected: http://referencesource.microsoft.com/#System.Web/Hosting/IIS7WorkerRequest.cs,1aed87249b1e3ac9
ISAPIWorkerRequest.IsClientConnected: http://referencesource.microsoft.com/#System.Web/Hosting/ISAPIWorkerRequest.cs,f3e25666672e90e8
It all starts with an HTTP request. Inside it, you can, for example, spawn worker threads that can outlive the request itself. This is where IsClientConnected comes in handy: it lets the worker thread know whether the client has already received the response and disconnected.
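For illustration, here is a minimal sketch (StreamingHandler is a hypothetical System.Web handler; the payload and interval are made up) of a long-running response loop that stops once the client disconnects:

using System.Threading;
using System.Web;

public class StreamingHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        var response = context.Response;
        response.BufferOutput = false;

        // Keep pushing data for as long as the host still believes
        // the TCP client is connected.
        while (response.IsClientConnected)
        {
            response.Write("tick\n");
            response.Flush();
            Thread.Sleep(1000);
        }
        // IsClientConnected returned false: the worker request recorded
        // a TCP disconnect, so stop generating output.
    }
}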
I have an ESB from which I make a web service call. It works fine most of the time, but sometimes I get the exception below:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
The weird thing is that after I get this exception, sometimes the HTTP outbound call still succeeds and sometimes it does not.
Why is this not consistent?
Is there a chance that some configuration on the Mule HTTP connector can make this exception scenario behave consistently?
All I am asking is: how do I stop the HTTP outbound request from getting processed after a read timed out exception is thrown?
The flow looks like the code shown below:
<queued-asynchronous-processing-strategy name="allow2Threads" maxThreads="2"/>
<flow name="TestFlow" processingStrategy="allow2Threads">
<vm:inbound-endpoint path="performWebserviceLogic" exchange-pattern="one-way" />
.... some transformation logic
....
<http:outbound-endpoint address="http://localhost:8080/firstwebservicecall" responseTimeout="65000" exchange-pattern="request-response"/>
....
.... some transformation logic on response...
<http:outbound-endpoint address="http://localhost:8080/secondWeberviceCall" responseTimeout="20000" exchange-pattern="request-response"/>
......some transformation logic on response...
<catch-exception-strategy>
<choice>
<when expression="#[groovy:message.getExceptionPayload().getRootException.getMessage().equals('Read timed out') and message.getSessionProperty('typeOfCall').equals('firstWeberviceCall')]">
.... unreliable ...result... as firstWeberviceCall may succeed even after the control comes here
and if we process http://localhost:8080/firstwebservicecall .. the transaction takes place twice... as already it succeeded above even after an exception is thrown
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException.getMessage().equals('Read timed out') and message.getSessionProperty('typeOfCall').equals('secondWeberviceCall')]">
..... reliable ... if control comes here and if we process http://localhost:8080/secondWeberviceCall .. the transaction takes place only once
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException.getMessage().equals('Connect timed out') and message.getSessionProperty('typeOfCall').equals('firstWeberviceCall')]">
....reliable
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException.getMessage().equals('Connect timed out') and message.getSessionProperty('typeOfCall').equals('secondWeberviceCall')]">
....reliable
</when>
</choice>
</catch-exception-strategy>
</flow>
You can configure, thus increase, the time-outs of the HTTP transport in different places:
Response time out on the endpoints,
Connection and socket timeouts on the connector.
This is just pushing the problem further though: increasing the time-outs may solve your issue temporarily but you're still exposed to the failure.
To handle it properly, I think you should strictly check the response status code after each HTTP outbound endpoint, perhaps using a filter to break the flow if the status code is not what you expect.
Also, it's quite possible that you get a response time-out after the HTTP request has been received by the server but before the response gets back to Mule. In that case, as far as Mule is concerned, the call has failed and must be retried, which means the remote service must be idempotent, i.e. the client should be able to safely retry any operation that failed (or that it thinks has failed).
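For instance, a minimal sketch (assuming the Mule 3 HTTP transport, which sets the http.status inbound property on the response message) of breaking the flow on a non-200 response:

<http:outbound-endpoint address="http://localhost:8080/firstwebservicecall"
                        responseTimeout="65000" exchange-pattern="request-response"/>
<!-- stop the flow (and hit the exception strategy) unless the call returned 200 -->
<message-filter throwOnUnaccepted="true">
    <expression-filter expression="#[message.inboundProperties['http.status'] == '200']"/>
</message-filter>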
Check the server SO_TIMEOUT on the HTTP connector and set it to 0.
See https://www.mulesoft.org/jira/browse/MULE-6331
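On the connector shown in the question, that would presumably mean changing the serverSoTimeout attribute, e.g.:

<!-- serverSoTimeout="0" disables the server-side socket read time-out -->
<http:connector name="HTTP_Retry" clientSoTimeout="10000" serverSoTimeout="0"
                doc:name="HTTP\HTTPS"/>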
In my mochiweb application, I am using a long-held HTTP request. I wanted to detect when the connection with the user died, and I figured out how to do that with:
Socket = Req:get(socket),
inet:setopts(Socket, [{active, once}]),
receive
    {tcp_closed, Socket} ->
        % handle clean up
        ok;
    Data ->
        % do something with Data
        ok
end.
This works when the user closes the tab/browser or refreshes the page. However, when the internet connection dies suddenly (say the Wi-Fi signal is lost all of a sudden), or when the browser crashes abnormally, I am not able to detect a TCP close.
Am I missing something, or is there any other way to achieve this?
There is a TCP keepalive protocol, and it can be enabled with inet:setopts/2 using the option {keepalive, Boolean}.
I would suggest that you don't use it. The keep-alive timeout and max-retries tend to be system-wide, and it is optional after all. Using timeouts at the protocol level is better.
The HTTP protocol has the status code Request Timeout which you can send to the client if it seems dead.
Check out the after clause in receive blocks that you can use to timeout waiting for data, or use the timer module, or use erlang:start_timer/3. They all have different performance characteristics and resource costs.
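For example, a minimal sketch using the after clause (the 30-second timeout is illustrative):

Socket = Req:get(socket),
inet:setopts(Socket, [{active, once}]),
receive
    {tcp_closed, Socket} ->
        % the client closed the connection cleanly
        client_gone;
    {tcp, Socket, Data} ->
        % got data: handle it, then re-arm {active, once} and loop
        {ok, Data}
after 30000 ->
    % no traffic for 30 seconds: treat it as a silent failure
    client_gone
end.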
TCP keep-alive is not enabled by default (though it can be enabled if supported): if a connection fault occurs while no data is being exchanged, it results in a "silent failure". You would need to account for this type of failure yourself, e.g. implement some form of connection probing.
How does this affect HTTP? HTTP is a stateless protocol - this means that every request is independent of every other. The "keep alive" functionality of HTTP doesn't change that, i.e. a "silent failure" can still occur.
Only when data is exchanged can this condition be detected (or when TCP keep-alive is enabled).
I would suggest sending application-level keep-alive messages over HTTP chunked encoding. Make your client/server smart enough to understand the keep-alive messages: ignore them when they arrive on time, or close and re-establish the connection when they don't.
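In mochiweb terms, a rough sketch of the server side might look like this (handle_request is a hypothetical entry point; the interval and payload are made up, and the client would be expected to drop and reconnect if the chunks stop arriving):

handle_request(Req) ->
    %% Start a chunked response; each write fails once the connection is
    %% gone, which is how a silent failure eventually surfaces server-side.
    Resp = Req:respond({200, [{"Content-Type", "text/plain"}], chunked}),
    keepalive_loop(Resp).

keepalive_loop(Resp) ->
    Resp:write_chunk(<<"keepalive\n">>),
    timer:sleep(15000),
    keepalive_loop(Resp).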