I have an ESB from which I make a web service call. It works fine most of the time, but sometimes I get the exception below:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
The weird thing is that after I get this exception, the HTTP outbound call sometimes still succeeds and sometimes does not.
Why is this not consistent?
Is there some configuration on the Mule HTTP connector that can make this exception scenario behave consistently?
All I am asking is: how do I stop the HTTP outbound request from being processed after a read timeout exception is thrown?
The flow looks like the code shown below:
<queued-asynchronous-processing-strategy name="allow2Threads" maxThreads="2"/>
<flow name="TestFlow" processingStrategy="allow2Threads">
<vm:inbound-endpoint path="performWebserviceLogic" exchange-pattern="one-way" />
.... some transformation logic
....
<http:outbound-endpoint address="http://localhost:8080/firstwebservicecall" responseTimeout="65000" exchange-pattern="request-response"/>
....
.... some transformation logic on response...
<http:outbound-endpoint address="http://localhost:8080/secondWeberviceCall" responseTimeout="20000" exchange-pattern="request-response"/>
......some transformation logic on response...
<catch-exception-strategy>
<choice>
<when expression="#[groovy:message.getExceptionPayload().getRootException().getMessage().equals('Read timed out') and message.getSessionProperty('typeOfCall').equals('firstWeberviceCall')]">
.... unreliable result: firstWeberviceCall may have succeeded even though control reaches here,
and if we reprocess http://localhost:8080/firstwebservicecall the transaction takes place twice, since it already succeeded above even though an exception was thrown
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException().getMessage().equals('Read timed out') and message.getSessionProperty('typeOfCall').equals('secondWeberviceCall')]">
..... reliable ... if control comes here and if we process http://localhost:8080/secondWeberviceCall .. the transaction takes place only once
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException().getMessage().equals('Connect timed out') and message.getSessionProperty('typeOfCall').equals('firstWeberviceCall')]">
....reliable
</when>
<when expression="#[groovy:message.getExceptionPayload().getRootException().getMessage().equals('Connect timed out') and message.getSessionProperty('typeOfCall').equals('secondWeberviceCall')]">
....reliable
</when>
</choice>
</catch-exception-strategy>
</flow>
You can configure (and thus increase) the timeouts of the HTTP transport in different places:
Response time out on the endpoints,
Connection and socket timeouts on the connector.
This is just pushing the problem further though: increasing the time-outs may solve your issue temporarily but you're still exposed to the failure.
To handle it properly, I think you should strictly check the response status code after each HTTP outbound endpoint, perhaps using a filter to break the flow if the status code is not what you expect.
Also, it's entirely possible that you get a response timeout after the HTTP request has been received by the server but before the response gets back to Mule. In that case, as far as Mule is concerned, the call has failed and must be retried, which means the remote service must be idempotent, i.e. the client should be able to safely retry any operation that failed (or that it thinks has failed).
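For example, a status-check filter right after the outbound endpoint could look roughly like this (a sketch against the Mule 3 HTTP transport; the exact property access may differ between Mule versions):

```xml
<http:outbound-endpoint address="http://localhost:8080/firstwebservicecall"
                        responseTimeout="65000" exchange-pattern="request-response"/>
<!-- Break the flow unless the remote service answered 200 OK;
     http.status is an inbound property set by the HTTP transport -->
<expression-filter expression="#[message.inboundProperties['http.status'] == '200']"/>
```

If the filter stops the flow, the second outbound endpoint is never reached with a payload from a failed first call.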
Check the server SO_TIMEOUT on the HTTP connector and try setting it to 0.
See https://www.mulesoft.org/jira/browse/MULE-6331
I stumbled upon a case where a request to an endpoint might take more than 60 seconds (let's say that's the timeout value), in which case the server sends a response and continues processing the request in the background. There are also cases where the same request would be processed before it times out and a successful response would be sent from the server to the client.
What would be the best HTTP status code to use in that first case? I read "HTTP server timeout. When should it be sent", which suggests 503 or 504, and "HTTP status code for 'Loading'", which mentions that the request can be deemed successful and return 200. But I'm not yet convinced by any of those suggestions more than the others.
No
The HTTP protocol doesn't work that way.
A server receives a request, processes it, and sends a reply. The cycle ends there.
HTTP was never intended to send multi-stage replies with different states. You need a custom protocol built on top of HTTP if you want to do that.
Sending a timeout error as an indication of an unfinished response is an anti-pattern. If your server takes more time than usual to process a request, you should send a success response with an ID that can be used to poll the state of the initial request and fetch the results.
So to summarize from your question and comments: you have an HTTP API that takes a command and executes it, and sends a callback-reply through a webhook. If the execution takes longer than a minute, you have to send some form of reply that indicates the request is still being processed.
There are various problems with executing long-running work in an HTTP request handler. For starters, you tie up HTTP server resources (threads, sockets) while processing non-HTTP work, you can't restart the HTTP server without losing work, and so on.
So I would opt for a queuing mechanism that takes in the work, replies 200 OK or 201 Created immediately, and then schedules the work for processing on a background thread or even a different service. When finished, you execute the webhook callback.
Any error response to the initial call will leave the caller confused: they won't know whether their requested work will finish, unless you use an "exotic" status code that actually differs from real error conditions, and document that they can expect that.
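The accept-then-process pattern can be sketched like this (illustrative names; the HTTP handler would call accept(), reply 202 Accepted with the job id immediately, and a real implementation would fire the webhook when the future completes):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the accept-then-process pattern: the HTTP handler only enqueues
// the work and replies immediately; a background worker does the real job.
public class WorkQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Called from the HTTP request handler: returns a job id right away,
    // which the handler can send back in a "202 Accepted" response body.
    public String accept(Callable<String> work) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, worker.submit(work)); // real code would invoke the webhook on completion
        return id;
    }

    // Lets a status endpoint report whether the background job has finished.
    public boolean isDone(String id) {
        Future<String> f = jobs.get(id);
        return f != null && f.isDone();
    }
}
```

This keeps HTTP threads free while the work runs, and the server can be drained or restarted around the queue rather than around in-flight request handlers.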
Charlie and CodeCaster suggested returning 200 or 201, so I took a look at the other 2xx codes and found 202 Accepted:
From https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
The HyperText Transfer Protocol (HTTP) 202 Accepted response status code indicates that the request has been accepted for processing, but the processing has not been completed; in fact, processing may not have started yet. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
202 is non-committal, meaning that there is no way for the HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.
I wonder if this would fit best.
When a TCP connection gets cancelled by the client while making a HTTP request, I'd like to stop doing any work on the server and return an empty response. What HTTP status code should such a response return?
To be consistent, I would suggest 400 Bad Request. If your backend apps are capable of identifying when the client has disconnected, or if you reject or close the connection yourself, you could return one of Nginx's non-standard codes, 499 or 444:
499 Client Closed Request
Used when the client has closed the request before the server could send a response.
444 No Response
Used to indicate that the server has returned no information to the client and closed the connection.
HTTP (1.0/1.1) doesn't have a means to cancel a request. All that a client can do if it no longer wants the response is to close the connection and hope that the server contains an optimization to stop working on a response that can no longer be delivered. Since the connection is now closed, no response nor status code can actually be delivered to the client and so any code you "return" is only for your own satisfaction. I'd personally pick something in the 4xx range [1] since the "fault" - the reason you can no longer deliver a response - is due to the client.
HTTP 2.0 does allow an endpoint to issue END_STREAM or RST_STREAM to indicate that they are no longer interested in one stream without tearing down the whole connection. However, they're meant to just ignore any further HEADERS or DATA sent on that stream and so even though you may theoretically deliver a status code, the client is still going to completely ignore it.
[1] Probably 400 itself, since I can't identify a more specific error that seems entirely appropriate.
There are just a few plausible choices (aside from 500, of course):
202 Accepted
You haven't finished processing, and you never will.
This is appropriate only if, in your application domain, the original requestor "expects" that not all requests will be satisfied.
409 Conflict
…between making and cancelling the request.
This is only weakly justified: your situation does not involve one client making a request based on out of date information, since the cancellation had not yet occurred.
503 Service Unavailable
The service is in fact unavailable for this one request (because it was cancelled!).
The general argument of "report an error as an error" favors 409 or 503. So 503 it is by default.
There really is little to do. Quoting from RFC 7230, section 6.5:
A client, server, or proxy MAY close the transport connection at any time.
That is happening at TCP-, not HTTP-level. Just stop processing the connection. A status code will confer little meaning here as the intent of an incomplete/broken request is mere speculation. Besides, there will be no means to transport it across to the client.
There exists a known race condition in the HTTP keepalive mechanism:
HTTP KeepAlive connection closed by server but client had sent a request in the mean time
https://github.com/mikem23/keepalive-race
As I understand, I need my HTTP client either to have a shorter timeout than my HTTP server, or retry when getting TCP-FIN or TCP-RST.
My question is, how do today's web-browsers, that use the HTTP keepalive feature, handle this race condition. Do they retry?
I'll be happy for references, a google search hasn't come up with anything.
According to the RFC, in these cases a server should respond with a 408 error code, signalling to the client that the connection has already been closed on its side. As the RFC states:
If the client has an outstanding request in transit, the client MAY
repeat that request on a new connection.
This means that it's up to the client (i.e., each browser) to decide how a 408 response will be handled. There are two alternatives:
handle it gracefully: retrying the remaining requests in a new connection automatically, so that the user stays completely unaware of the underlying failure that happened
fail-fast: showing the user a failure with the appropriate 408 error message
For example, it seems that Chrome in the past followed the second approach, until people started considering this "buggy" behaviour and it switched to the first one. You can find the thread for the related Chromium bug here and the associated code change here.
Note: as you can read in the final emails in the linked thread, Chrome performs these retries only when some requests have already succeeded on this connection. As a result, if you try to reproduce this with a single request returning a 408 response, you'll notice that Chrome probably won't retry.
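The client-side handling of this race can be sketched generically (an illustrative helper, not any browser's actual implementation): if a request sent on a reused keepalive connection dies with a connection-level error, repeat it once on a fresh connection. This is only safe for idempotent requests.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// "Retry once on a fresh connection" helper, mirroring what browsers do for
// the keepalive race: if the first attempt fails with a connection-level
// IOException (the server closed the idle connection under us), the request
// is repeated once. Safe only for idempotent requests.
public class RetryOnce {
    public static <T> T call(Callable<T> request) throws Exception {
        try {
            return request.call();
        } catch (IOException firstFailure) {
            // The keepalive connection was likely closed; retry on a new one.
            return request.call();
        }
    }
}
```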
I want to set a retry policy for an HTTP call, in case of occasional network failure, so I configured the following:
<http:connector name="HTTP_Retry" cookieSpec="netscape" validateConnections="true" sendBufferSize="0" receiveBufferSize="0" receiveBacklog="0" clientSoTimeout="10000" serverSoTimeout="10000" socketSoLinger="0" doc:name="HTTP\HTTPS">
<reconnect frequency="1000" count="3"/>
</http:connector>
....
<http:outbound-endpoint address="http://localhost:18081/mule/TheCreditAgencyService" doc:name="HTTP" exchange-pattern="request-response" method="POST" connector-ref="HTTP_Retry"/>
But the retry policy is not applied. I even configured a custom retry policy: I debugged the application and set a breakpoint, but the program never runs into my custom class.
I read the documentation, but there is only an example for JMS.
Any tips? Have I misconfigured something?
Thanks in advance!
The ill-named retry policies take care of reconnecting connectors, not resending messages in case of failure.
On a disconnected connector like the HTTP one, a retry policy has no effect. It's useful on connectors like JMS, where a permanent connection is maintained towards a broker, a connection that needs re-establishing in case of failure.
What you are after is the until-successful routing message processor.
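For instance, wrapping the outbound endpoint in until-successful makes Mule itself resend the message on failure (a sketch for Mule 3; myObjectStore is an assumed ListableObjectStore bean, and the retry counts are illustrative):

```xml
<until-successful objectStore-ref="myObjectStore" maxRetries="3" secondsBetweenRetries="2">
    <http:outbound-endpoint address="http://localhost:18081/mule/TheCreditAgencyService"
                            exchange-pattern="request-response" method="POST"/>
</until-successful>
```

Note that until-successful processes the message asynchronously, so it fits one-way delivery better than request-response flows that need the reply inline.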
Is there a way to find out if a HttpServletRequest is aborted?
I'm writing an instant browser application (some kind of chat): the client asks for new events in a loop using AJAX HTTP requests. The server (Tomcat) handles the requests in an HttpServlet. If there are no new events for this client, the server delays the reply until a new event arrives or a timeout occurs (30 s).
Now I want to identify clients that are no longer polling. Therefore, I start a kick timer at the end of each request, which is stopped when a new request arrives. If the client closes the browser window, the TCP connection is closed and the HTTP request is aborted.
Problem: the client never runs into the kick timeout, because the servlet is still handling the event request - sleeping and waiting for an event or timeout.
It would be great if I could somehow listen for connection abort events and then notify the waiting request in order to stop it. But I couldn't find anything like that in the HttpServletRequest or HttpServletResponse...
This probably won't help the OP any more, but it might help others trying to detect aborted HTTP connections in HttpServlet in general, as I was having a similar problem and finally found an answer.
The key is that when the client cancels the request, normally the only way for the server to find out is to send some data back to the client, which will fail in that case. I wanted to detect when a client stops waiting for a long computation on server, so I ended up periodically writing a single character to response body through HttpServletResponse's writer. To force sending the data to the client, you must call HttpServletResponse.flushBuffer(), which throws ClientAbortException if the connection is aborted.
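The idea can be sketched independently of the container (illustrative; in a real servlet the stream would come from HttpServletResponse.getWriter() or getOutputStream(), and on Tomcat the failure surfaces as ClientAbortException, a subclass of IOException):

```java
import java.io.IOException;
import java.io.OutputStream;

// Periodically push a heartbeat byte to the client; if the client has gone
// away, the flush fails with an IOException (ClientAbortException on Tomcat),
// which tells the server to stop the long-running work.
public class AbortDetector {
    // Returns true if the client stayed connected for all `beats` heartbeats,
    // false as soon as a write/flush to the aborted connection fails.
    public static boolean heartbeat(OutputStream out, int beats) {
        try {
            for (int i = 0; i < beats; i++) {
                out.write(' ');  // harmless padding character
                out.flush();     // forces delivery; fails if connection aborted
            }
            return true;
        } catch (IOException clientAborted) {
            return false;        // client closed the connection: stop working
        }
    }
}
```

The padding character must be something the client-side protocol tolerates (e.g. whitespace before a JSON body).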
You are probably using some sort of thread-notification (Semaphores or Object.wait) to hold and release the Servlet threads. How about adding a timeout (~10s) to the wait, then somehow checking whether the connection is still alive and then continuing the wait for another 10s, if the connection is still there.
I don't know whether there are reliable ways to poll the "liveness" of the connection (e.g. resp.getOutputStream not throwing an exception) and, if so, which way is best (most reliable, least CPU-intensive).
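That timed-wait loop can be sketched like this (the liveness probe is abstracted into a BooleanSupplier, since how to probe the connection is exactly the open question; all names are illustrative):

```java
import java.util.function.BooleanSupplier;

// Wait for an event in short slices, re-checking between slices whether the
// client connection still looks alive; gives up early when it does not.
public class SlicedWait {
    private final Object lock = new Object();
    private volatile boolean eventArrived = false;

    // Called when a new chat event is available for this client.
    public void signalEvent() {
        synchronized (lock) { eventArrived = true; lock.notifyAll(); }
    }

    // Returns true if the event arrived; false if the connection died
    // or the total timeout elapsed first.
    public boolean await(long totalMillis, long sliceMillis, BooleanSupplier alive)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + totalMillis;
        synchronized (lock) {
            while (!eventArrived && System.currentTimeMillis() < deadline) {
                if (!alive.getAsBoolean()) return false; // client gone: stop waiting
                long remaining = Math.max(1, deadline - System.currentTimeMillis());
                lock.wait(Math.min(sliceMillis, remaining));
            }
            return eventArrived;
        }
    }
}
```

With a 30 s total timeout and 10 s slices this checks liveness at most three times per poll, trading detection latency against probe cost.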
It seems like having waiting requests could degrade the performance of your system pretty quickly. The threads that respond to requests would get used up fast if requests are held open. You could try completing all requests (and returning "null" to your clients if there is no message), and having a thread on the back-end that keeps track of how long it's been since clients have polled. The thread could mark a client as being inactive.