Can an HTTP server detect that a client has cancelled their request?

My web app must process and serve a lot of data to display certain pages. Sometimes, the user closes or refreshes a page while the server is still busy processing it. This means the server will continue to process data for several minutes only to send it to a client who is no longer listening.
Is it possible to detect that the connection has been broken, and react to it?
In this particular project we're using Django behind Nginx or Apache. I assumed this is possible because the Django development server appears to react to cancelled requests by printing Broken Pipe exceptions. I'd love to have it raise an exception that my application code could catch. It appears JSP can do this, and so can node.js.
Alternatively, I could register an unload event handler on the page in question and have it fire a synchronous XHR asking the server to cancel this user's previous request, with some kind of inter-process communication to make that happen. Perhaps if the slow data processing were handed off to a separate process, I could more easily identify and kill it without killing the responding process...

While @Oded is correct that HTTP is stateless between requests, app servers can indeed detect when the underlying TCP/IP connection has broken for the request currently being processed. Why is this? Because TCP is a stateful protocol for reliable connections.
A common technique for .NET web apps processing a resource-intensive request is to check Response.IsClientConnected before starting the expensive work. There is no point in wasting CPU cycles to build a response for a client that isn't there anymore.
private void Page_Load(object sender, EventArgs e)
{
    // Check whether the browser remains
    // connected to the server.
    if (Response.IsClientConnected)
    {
        // If still connected, do the work.
        DoWork();
    }
    else
    {
        // If the browser is not connected,
        // stop all response processing.
        Response.End();
    }
}
Please reply with your target app server stack so I can provide a more relevant example.
Regarding your second alternative, using XHR to post client page-unload events to the server: @Oded's comment about HTTP being stateless between requests is spot on. This is unlikely to work reliably, especially in a farm with multiple servers.

HTTP is stateless, hence the only way to detect a disconnected client is via timeouts.
See the answers to this SO question (Java Servlet: How to detect browser closing?).
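In Java specifically, one timeout-based approach is the Servlet 3.0 async API: park the request, attach a timeout, and treat its expiry as the disconnect signal. A minimal sketch, assuming a Servlet 3.0+ container; the /events path and the 30-second value are illustrative choices, not anything prescribed by the question:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint: treat a timeout as a likely client disconnect.
@WebServlet(urlPatterns = "/events", asyncSupported = true)
public class EventServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000); // give up after 30 seconds (assumed value)
        ctx.addListener(new AsyncListener() {
            public void onTimeout(AsyncEvent e) {
                // There is no positive "client gone" signal here;
                // the timeout is the only cue. Clean up and finish.
                e.getAsyncContext().complete();
            }
            public void onComplete(AsyncEvent e) {}
            public void onError(AsyncEvent e) {}
            public void onStartAsync(AsyncEvent e) {}
        });
        // Hand off to a worker that calls ctx.complete() when done.
    }
}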

Related

how top-level async web requests handle the response

I have a fundamental question about how async requests work at the top level.
Imagine we have a top-level route called HomePage(). This route is async, and within it we call 10 different APIs before sending the response (imagine it takes about 5 seconds; remember, this is an example to understand the concept, and these numbers are for learning purposes). All of these API requests are awaited, so the request handler releases the thread handling this request and goes off to handle other requests until the responses from those APIs come back. Now let's add this constraint: our network card can handle only 1 connection, and that one is held open until the response for the request to HomePage() is ready. Therefore we cannot make any other requests to the server, so what's the difference from this whole thing being sync from the beginning? We cannot drop the connection to the first request to HomePage(), because then how would we ever send back the response for that request, and we cannot handle new requests because the connection is kept open.
I suspect that my problem is how the response is sent back on top-level async routes.
Can anybody give a deep-dive explanation of how these requests are handled so that the server can take more requests and still send back the response (because if it can send back a response, the connection HAS TO HAVE BEEN KEPT ALIVE)? Examples would be much appreciated.
Now let's add this constraint: our network card can handle only 1 connection
That constraint cannot exist. Network cards handle packets, not connections. Connections are a virtual construct that exists in the host computer.
Can anybody give a deep-dive explanation of how these requests are handled so that the server can take more requests and still send back the response (because if it can send back a response, the connection HAS TO HAVE BEEN KEPT ALIVE)?
Of course the connection is kept alive. The top-level async method will return the thread to the thread pool, where it is available to handle any other requests.
If you have some artificial constraint on your web app that prevents it from having more than one connection, then there won't be any other requests to handle, and the thread pool threads will do nothing.
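To make the thread-release point concrete, here is a minimal Java sketch of the same idea (the question reads as .NET, but the mechanics are analogous): several awaited API calls hold no thread while in flight, and the caller's connection simply stays open until the combined result is ready. HomePageHandler, homePage, and the URL list are illustrative names, not anything from the question:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class HomePageHandler {
    private static final HttpClient client = HttpClient.newHttpClient();

    // Kicks off all API calls without blocking a thread; the caller's
    // connection stays open, but the worker thread goes back to the
    // pool until the responses arrive and the callbacks run.
    static CompletableFuture<String> homePage(List<String> apiUrls) {
        List<CompletableFuture<String>> calls = apiUrls.stream()
                .map(url -> client.sendAsync(
                                HttpRequest.newBuilder(URI.create(url)).build(),
                                HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body))
                .collect(Collectors.toList());
        // Completes only when every call has finished.
        return CompletableFuture.allOf(calls.toArray(new CompletableFuture[0]))
                .thenApply(v -> calls.stream()
                        .map(CompletableFuture::join) // all done; join() won't block
                        .collect(Collectors.joining("\n")));
    }
}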

lost server replies/errors with netty's object decoder

I have a very simple netty app which serves as both a server and a client.
The client uses channel.writeAndFlush() to send a request to the server and then blocks on monitor.wait().
In the client's InboundAdapter, in channelRead(), I find the appropriate monitor and call monitor.notify() to let the requesting client thread proceed with the server's reply.
On the server, in the ChannelHandler's channelRead(), I do the following:
To limit the number of requests being processed, I submit the real work as a new task to the existing EventLoop: ctx.executor().submit(new Task()). In that task I do heavy IO operations, and after that I writeAndFlush() the results back to the client.
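In code, that submission pattern looks roughly like the following sketch; the handler name, pool size, and doHeavyIo stub are illustrative. Note it uses a dedicated EventExecutorGroup rather than ctx.executor(), since blocking work submitted to the channel's own event loop can stall reads and flushes for every channel on that loop:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class SearchHandler extends ChannelInboundHandlerAdapter {
    // Dedicated pool for blocking work, so the event loop stays free
    // to keep reading and flushing other channels.
    private static final EventExecutorGroup workers =
            new DefaultEventExecutorGroup(16); // pool size is an assumption

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        workers.submit(() -> {
            Object reply = doHeavyIo(msg); // placeholder for the real work
            ctx.writeAndFlush(reply);      // safe to call from any thread
        });
    }

    private Object doHeavyIo(Object request) {
        return request; // illustrative stub
    }
}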
Here is my pipeline setup:
new ObjectEncoder(),
new ObjectDecoder(LibConstants.Search.MAX_REQUIEST_SIZE, ClassResolvers.cacheDisabled(null))
Here is the bootstrap config:
new ServerBootstrap()
    .channel(NioServerSocketChannel.class)
    .option(ChannelOption.SO_BACKLOG, 1000)
    .option(ChannelOption.SO_KEEPALIVE, true)
I have 2 problems:
Rather often I get io.netty.handler.codec.DecoderException: java.io.UTFDataFormatException on the client when receiving a reply from the server. I cannot find any obvious reason for this, since my pipeline setup is so simple.
Sometimes a reply from the server just doesn't appear on the client. In the logs I see a successful flush on the server, but the reply never arrives at the client. This is very hard to deal with, since my app is very latency-sensitive; any timeout I set would hurt the user experience.
This all happens over a VPN, so there is a possibility that the VPN device misbehaves in some weird way, but I am hoping that TCP would handle any sort of packet loss or corruption that can happen in the channel.
Any advice or experience you can share will be very appreciated!

Client Reconnection

My understanding of the (JavaScript) hub client is that if a connection is lost, it enters a 'Reconnecting...' phase which attempts to reconnect. If it can't do so, it will enter a 'Disconnected' state which is where it'll stay until asked to start again.
How long is the 'Reconnecting...' phase meant to last before it gives up? I've read 40 seconds, but my client seems to take much less time - about 10, maybe less. [EDIT: Never mind this part; I had configured a 10-second disconnect timeout on the server as a test... and forgot. I understand this is set by the server during the negotiate step. Makes sense!] ... I'd prefer to have the client continually retry until it is told to abort - can this be done, and would it cause issues?
Another question: during the 'Reconnecting...' phase, if I attempt to call a hub method (again, in JS), it never seems to complete. I'm using the returned Deferred to check for 'done' and 'fail' events, but neither seems to get called. Is this by design?
Thanks.
You can definitely have it continually reconnect.
Handle the disconnected event on the client and call connection.start:
$.connection.hub.disconnected(function() {
    setTimeout(function() {
        $.connection.hub.start();
    }, 5000); // Restart the connection after 5 seconds
});
The only issue this could cause is that client machines might end up triggering endless requests to a server that is no longer there. This becomes even more troublesome when you introduce the mobile market into the situation (it drains the battery like crazy).
When you attempt to call a hub method while reconnecting, SignalR will try to send your command. Since there are two channels, one for receiving data and one for sending (for all transports except WebSockets), in some cases it can still be possible to send requests while you're offline. Therefore SignalR does not know that a request has failed until the browser tells it that the request could not be made successfully.
Hope this helps!
I might have a clue... Touching the Web.config triggers an appPool recycle, meaning that a new worker process is created for new requests while the existing process continues for a while until the remaining requests end or the timeout is reached. Requests that do not finish within the timeout period are terminated.
The SignalR client reconnects to the new process while the long-running task is still running in the old process, so when, inside the long-running task, you do
GlobalHost.ConnectionManager.GetHubContext<ForceHub>();
you actually get a reference to the "old" hub while the client is connected to the "new" hub.
That's why the test performed by Wasp worked: he was making a new request to publish on the SignalR hub, which was processed in the newly created worker process.
You could try to configure a SignalR backplane (https://www.asp.net/signalr/overview/performance/scaleout-in-signalr); it's really easy to set one up using SQL Server (https://www.asp.net/signalr/overview/performance/scaleout-with-sql-server). The backplane should be capable of connecting the two worker processes, and hopefully you will get the notification on the client.
If this is the problem, notifications generated by new requests will work even without the backplane. Note that the real purpose of the backplane is to scale out SignalR, that is, to connect a farm of web servers.
Also keep in mind that running long-running tasks inside IIS is hard to achieve, as, among other things, IIS does regular appPool recycles and has timeout limits for requests to execute. I recommend that you read the following post: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
“If you think you can just write a background task yourself, it's likely you'll get it wrong. I'm not impugning your skills, I'm just saying it's subtle. Plus, why should you have to?”
Hope this helps

BlazeDS: how to know the client has "disconnected"?

The BlazeDS server side doesn't know when the client side has disconnected, but it does seem to know when the client's network has gone down.
In my case I use the polling channel. I downloaded the BlazeDS source code and added some log output in the FlexClientOutboundQueueProcessor.flush(MessageClient messageClient, List<Message> outboundQueue) method.
Then I saw this: when a client subscribed, the server side invoked the FlexClientOutboundQueueProcessor.flush method every 3 seconds and printed what I added in the flush method. When I then shut down only the client's network, without closing the browser (client and server are on different networks), the server side didn't print anything, meaning it no longer invoked the flush method.
After I restored the client's network more than 30 minutes later, the server side resumed invoking the flush method (the client's session isn't destroyed; if I close the client's browser, the server side destroys the session after 30 minutes).
Now I have two questions:
How does the server side know the client's network has gone down? Is there a listener monitoring the client's network? If so, where is it? If not, how and where is this implemented?
It seems that the server side invokes the FlexClientOutboundQueueProcessor.flush method every 3 seconds; can this interval be configured? And where is the code that starts or stops this timed task?
Here is an answer to your first question: Detecting (on the server side) when a Flex client disconnects from BlazeDS destination
As for the configuration: you can set it in services-config.xml; see the sketch after the links below.
Example BlazeDS applications
Configuring channels with servlet-based endpoints
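For the interval question specifically, the polling cadence is driven by the channel definition in services-config.xml. A hedged sketch of what that can look like; the channel id, endpoint URL, and the 3-second value are assumptions, not taken from your deployment:

<channel-definition id="my-polling-amf"
                    class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfpolling"
              class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <polling-enabled>true</polling-enabled>
        <!-- Assumed value: a 3000 ms client poll interval would match
             the 3-second flush cadence you observed on the server. -->
        <polling-interval-millis>3000</polling-interval-millis>
    </properties>
</channel-definition>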

Detecting aborted requests in an HttpServlet

Is there a way to find out whether an HttpServletRequest has been aborted?
I'm writing an instant browser application (a kind of chat): the clients ask for new events in a loop using AJAX HTTP requests. The server (Tomcat) handles the requests in an HttpServlet. If there are no new events for a client, the server delays the reply until a new event arrives or a timeout occurs (30 seconds).
Now I want to identify clients that are no longer polling. Therefore, I start a kick timer at the end of each request, which is stopped when a new request arrives. If the client closes the browser window, the TCP connection is closed and the HTTP request is aborted.
Problem: the client never runs into the kick timeout, because the servlet is still handling its event request - sleeping and waiting for an event or the timeout.
It would be great if I could somehow listen for connection-abort events and then notify the waiting request in order to stop it. But I couldn't find anything like that in HttpServletRequest or HttpServletResponse...
This probably won't help the OP any more, but it might help others trying to detect aborted HTTP connections in HttpServlet in general, as I was having a similar problem and finally found an answer.
The key is that when the client cancels the request, normally the only way for the server to find out is to send some data back to the client, which will fail in that case. I wanted to detect when a client stops waiting for a long computation on the server, so I ended up periodically writing a single character to the response body through HttpServletResponse's writer. To force sending the data to the client, you must call HttpServletResponse.flushBuffer(), which throws ClientAbortException if the connection has been aborted.
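A minimal sketch of that heartbeat idea; the padding character, the helper name AbortProbe, and catching plain IOException (ClientAbortException is Tomcat-specific and extends it) are my choices, not the answerer's exact code:

import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

public final class AbortProbe {
    // Returns true while the client is still connected. Writes a single
    // space and forces it onto the wire; an aborted connection makes the
    // flush throw (ClientAbortException on Tomcat, an IOException subclass).
    static boolean clientStillConnected(HttpServletResponse resp) {
        try {
            resp.getWriter().write(' '); // harmless padding character
            resp.flushBuffer();          // push it to the client now
            return true;
        } catch (IOException e) {
            return false; // client is gone
        }
    }
}

Call this periodically during the long computation; note the padding characters become part of the response body, so the client must tolerate leading whitespace.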
You are probably using some sort of thread notification (semaphores or Object.wait) to hold and release the servlet threads. How about adding a timeout (~10 s) to the wait, then checking whether the connection is still alive and, if it is, continuing to wait for another 10 s?
I don't know whether there are reliable ways to poll the "liveness" of the connection (e.g. resp.getOutputStream not throwing an exception) and, if so, which way is best (most reliable, least CPU-intensive).
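Sketching this suggestion together with the heartbeat idea above, assuming the hypothetical AbortProbe helper from the previous sketch and a 10-second wait slice:

import javax.servlet.http.HttpServletResponse;

public class EventWait {
    private final Object monitor = new Object();
    private volatile boolean eventArrived;

    // Waits up to maxMillis for an event, re-checking the connection
    // every 10 seconds instead of sleeping through the whole window.
    boolean awaitEvent(HttpServletResponse resp, long maxMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxMillis;
        synchronized (monitor) {
            while (!eventArrived && System.currentTimeMillis() < deadline) {
                monitor.wait(10_000); // wake periodically (assumed slice)
                if (!AbortProbe.clientStillConnected(resp)) {
                    return false; // client went away; stop waiting
                }
            }
        }
        return eventArrived;
    }

    void signalEvent() {
        synchronized (monitor) {
            eventArrived = true;
            monitor.notifyAll();
        }
    }
}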
It seems like having waiting requests could degrade the performance of your system pretty quickly; the threads that respond to requests would get used up fast if requests are held open. Instead, you could complete all requests immediately (returning "null" to your clients if there is no message) and keep a thread on the back end that tracks how long it has been since each client last polled, marking clients as inactive when they stop; a sketch follows.
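A sketch of that bookkeeping approach; the client-ID keying, the 60-second inactivity threshold, and the 30-second sweep interval are all illustrative assumptions:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollTracker {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final ScheduledExecutorService sweeper =
            Executors.newSingleThreadScheduledExecutor();

    public PollTracker() {
        // Every 30s, drop clients that haven't polled in the last 60s.
        sweeper.scheduleAtFixedRate(this::sweep, 30, 30, TimeUnit.SECONDS);
    }

    // Call this from the servlet on every poll request.
    public void touch(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    private void sweep() {
        long cutoff = System.currentTimeMillis() - 60_000;
        lastSeen.entrySet().removeIf(e -> e.getValue() < cutoff);
        // Removal here doubles as "mark inactive"; hook cleanup as needed.
    }
}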
