I'm struggling to find the timeout configuration in GlassFish 4 to solve the below problem:
The application implements a kind of tunneling, serving content from a few connected portals to users. It opens an HTTP connection, passes the user's request on to one of the portals, receives the response from the portal, and then passes the response on to the browser.
However, the problem occurs when one of the connected portals takes very long to respond. When this happens, the application seems to give up waiting for the response and, after exactly 5 minutes, sends another request to the portal in question. This time it is a request for error/HTTP_BAD_GATEWAY.html.var.
Does anyone know how to increase this timeout?
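For what it's worth, here is a hypothetical sketch of the tunnelling step, assuming the outbound call to the portal is made with HttpURLConnection (the class, method and timeout values below are illustrative, not taken from the actual application). If the five-minute limit comes from a client-side read timeout it can be raised like this; if it is imposed by a front-end proxy instead (the request for error/HTTP_BAD_GATEWAY.html.var looks like an Apache-style error document), the proxy's timeout has to be raised there.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch of the tunnelling step with explicit timeouts.
public final class PortalTunnel {

    public static void copyPortalResponse(URL portalUrl, OutputStream toBrowser)
            throws Exception {
        HttpURLConnection portal = (HttpURLConnection) portalUrl.openConnection();
        portal.setConnectTimeout(30_000);        // 30 s to establish the connection
        portal.setReadTimeout(10 * 60 * 1000);   // wait up to 10 min for the portal's response
        try (InputStream in = portal.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                toBrowser.write(buf, 0, n);      // relay the portal's body to the browser
            }
        } finally {
            portal.disconnect();
        }
    }
}
```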
Related
In HTTP we have a method called CONNECT. I am trying to understand when it is used by the browser.
Is it used prior to every request, or only prior to HTTPS requests?
Further, as we know, HTTP keeps a persistent connection with the server, and this connection gets closed after a certain period of inactivity or a timeout. Does this influence when the browser uses CONNECT? That is, is CONNECT used prior to each persistent connection, or prior to each request within the persistent connection?
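For reference, here is a minimal sketch of the CONNECT handshake itself, written as a small Java client (the proxy host and port are made up). A browser sends this to its configured HTTP proxy when it needs to tunnel an HTTPS request, and it does so once per tunnel, i.e. once per TCP connection to the proxy, not once per request carried inside the tunnel.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Illustration of the CONNECT handshake a browser performs with an HTTP proxy
// (proxy.example:3128 is a made-up proxy) before sending HTTPS traffic.
public class ConnectTunnelDemo {
    public static void main(String[] args) throws IOException {
        try (Socket proxy = new Socket("proxy.example", 3128)) {
            Writer out = new OutputStreamWriter(proxy.getOutputStream(), StandardCharsets.US_ASCII);
            out.write("CONNECT example.com:443 HTTP/1.1\r\n");
            out.write("Host: example.com:443\r\n\r\n");
            out.flush();

            // The proxy answers "HTTP/1.1 200 Connection Established" and then
            // relays raw bytes; the TLS handshake and every HTTPS request that
            // follows travel through this single tunnelled connection.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(proxy.getInputStream(), StandardCharsets.US_ASCII));
            System.out.println(in.readLine());
        }
    }
}
```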
We've recently noticed a problem where some user agents would repeat the same POST request without the user actually physically triggering it twice.
After further study, we noticed this only happens when the request goes through our load balancer and when the server took a long time to process the request. A packet capture session eventually revealed that the load balancer drops the connection after a 5 minute timeout by sending a TCP Reset to the client; however, the client automatically resubmitted the request without user intervention.
We observed this behavior in Apache HTTP client for Java, Firefox and IE 8. (I cannot install other browsers to test.) This makes me think this behavior is part of the HTTP standard, but this is not very easy to google.
Also, it seems this only happens if the first request is submitted over a kept-alive (reused) TCP connection.
This is part of the HTTP/1.1 protocol's handling of connections that are closed prematurely by servers; see RFC 2616, section 8.2.4:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.4
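For the Apache HttpClient case specifically, here is a minimal sketch of switching that automatic resubmission off (an assumption: it presumes the 4.3+ builder API, and the exact retry defaults vary between 4.x versions), so that a long-running POST is reported as a failure instead of being silently repeated:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Sketch: build a client that never retries a request on its own.
public class NoRetryClient {
    public static CloseableHttpClient create() {
        // disableAutomaticRetries() installs a no-op retry handler, so a request
        // that fails because the load balancer reset the kept-alive connection
        // surfaces as an exception rather than being resubmitted. The explicit
        // equivalent is setRetryHandler(new DefaultHttpRequestRetryHandler(0, false)).
        return HttpClients.custom()
                .disableAutomaticRetries()
                .build();
    }
}
```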
I experienced a similar situation where the same form was posted a couple of times, only milliseconds apart.
A packet capture via Wireshark confirmed the retransmission by the browser, and the server logs showed the arrival of both requests.
Further investigation also revealed that load balancer vendors such as F5 have reported incidents of this retransmission behaviour, so it is worth checking with your load balancer vendor as well.
I'm seeing some very interesting behaviour from seemingly random clients on a website. What I see is that random POST requests to the server result in a bad request to the backend. I've tracked down why the request is bad, but I still don't know WHY this occurs.
1. The client connects to the web server over HTTP.
2. The client sends the headers of an ordinary POST request (but not the body).
3. Five seconds pass. The cache server passes the request on to its backend because the client has taken too long to complete the request.
4. The cache server replies to the client with an error message indicating that the request was bad.
5. The client sends the POST body, a few seconds after the reply has been received.
I have no problem accepting that the cache server can be reconfigured to wait longer. My problem is: what can be the reason for a client to wait several seconds between sending the headers and sending the POST body? I don't know of any case where this behaviour makes sense.
This is a fairly ordinary Magento eCommerce website, with a setup of HAProxy -> Varnish -> Nginx -> php5-fpm. Varnish is the component that ships the request to Nginx when five seconds of idling have passed.
I have verified with tcpdump/Wireshark that the server does not receive the POST body from the client within that time (in front of HAProxy as well).
I have verified that this occurs across user agents, and across kinds of requests (from ordinary login forms to AJAX callbacks).
Does anyone have any clever ideas?
NOTE: I wasn't sure whether this was a question for Stack Overflow or Server Fault, but I consider this an HTTP question that requires developer knowledge.
The server is buggy: you shouldn't forward partial requests from the front end to the backend. It's possible that the client is waiting for an HTTP 100 (Continue) response from the server before transmitting the POST body. It's also possible that the client is still generating the POST data and that this takes some time for some reason.
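If the 100-continue theory fits, the headers-then-pause pattern is easy to reproduce. A purely illustrative sketch with Apache HttpClient 4.x (an assumption; the URL and form data are made up, and this is not necessarily the client your visitors use): with Expect: 100-continue enabled, the client sends the request headers first and waits for the server's interim 100 Continue before sending the body, which is exactly the pause a cache with a short idle timeout would trip over.

```java
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Sketch: a POST that announces "Expect: 100-continue", so the body is held
// back until the server replies with "100 Continue" (or a short grace period
// expires). The URL and payload are placeholders.
public class ExpectContinuePost {
    public static void main(String[] args) throws Exception {
        RequestConfig cfg = RequestConfig.custom()
                .setExpectContinueEnabled(true)   // send headers first, then wait
                .build();
        HttpPost post = new HttpPost("http://shop.example/customer/account/loginPost/");
        post.setConfig(cfg);
        post.setEntity(new StringEntity("login[username]=user&login[password]=secret"));
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            client.execute(post).close();
        }
    }
}
```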
I'm working on Comet support for the CppCMS framework via long XMLHttpRequest polls. In many cases such a request is closed by the client before the server has given any response, for example when the page is closed, the user moves to another page, or the page is simply refreshed.
On the server side I expect to receive a notification that the connection has been dropped. I tested the application via three connectors: FastCGI, SCGI and a simple HTTP proxy.
Of the three major UNIX web servers (Apache 2, lighttpd and Nginx), only the last one closed the connection as expected, allowing my application to remove the request from the wait queue; this worked for both the FastCGI and HTTP proxy connectors. (Nginx does not have an SCGI module by default.)
The others, Apache and lighttpd, do not close the connection or inform the backend about disconnected clients; they proceed as if the client were still online. This happens for all three supported APIs: FastCGI, SCGI and HTTP proxy.
I have opened an issue for lighttpd, but what concerns me more is that Apache, a web server as mature and well supported as lighttpd, likewise does not disclose to the backend that the client has gone.
Questions:
1. Is this a bug or a feature? Is there any reason not to close the connection between the web server and the application backend?
2. Are there real-life Comet applications working behind these servers via FastCGI/SCGI/HTTP-proxy backends?
3. If so, how do they deal with this issue? I understand that I could time out all connections every 10 seconds, but I would like to keep them idle for as long as the client is listening, because that allows easier scaling up: each connection is very cheap, its only cost being the open socket.
Thanks!
(1) Feature. Or, more specifically, fallout from an implementation detail.
A TCP/IP connection does not involve a constant flow of traffic back and forth. Thus, there is no way to know that a client is gone without (a) the client telling you it is closing the connection or (b) a timeout.
(2) I'm not specifically familiar with Comet or CppCMS. But, yes, there are all kinds of CMS servers running behind the mentioned web servers and they all have to deal with this issue (and, yes, it is a pain).
(3) Timeouts are the only way, but you can mitigate the pain, so to speak. Have the client ping the server across the connection every N seconds when there is otherwise no activity. The ping doesn't have to do anything, and you can tack extra information onto the reply: notifications of concurrent edits or whatever you need.
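A minimal sketch of that idea, using the JDK's built-in com.sun.net.httpserver rather than CppCMS (so everything here is illustrative, not the framework's API): the handler parks the long-poll request and writes a tiny heartbeat every N seconds; once the client has gone, a subsequent write or flush fails and the request can be dropped from the wait queue instead of lingering forever.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;

// Sketch: a long-poll endpoint that detects dead clients via periodic heartbeats.
public class LongPollHeartbeat {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.setExecutor(Executors.newCachedThreadPool()); // one thread per parked request; fine for a sketch
        server.createContext("/poll", exchange -> {
            exchange.getResponseHeaders().set("Content-Type", "text/plain");
            exchange.sendResponseHeaders(200, 0);             // 0 = chunked, stream the body
            try (OutputStream body = exchange.getResponseBody()) {
                while (true) {
                    body.write("ping\n".getBytes(StandardCharsets.US_ASCII));
                    body.flush();                             // eventually fails once the peer is gone
                    Thread.sleep(10_000);                     // heartbeat interval
                }
            } catch (IOException | InterruptedException gone) {
                // The client disconnected (or we were interrupted):
                // this is the point to remove the request from the wait queue.
            }
        });
        server.start();
    }
}
```

Note that a single write to a dead connection may still appear to succeed; it is usually the next write, after the peer's reset has arrived, that raises the error, so detection is not instantaneous but is bounded by the heartbeat interval.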
You are correct in that it is surprising that mod_fastcgi doesn't support telling the backend that Apache has detected the disconnect or the connection timed out. And you aren't the first to be dismayed.
The second patch on this page should fix that particular issue:
http://osdir.com/ml/web.fastcgi.devel/2006-02/msg00015.html
http://ncannasse.fr/blog/tora_comet
I don't have any concrete information for you, but this article does mention that they can detect when the client has disconnected from Apache. See tora.Queue. And it sounds like the source is available in the neko CVS, so you might be able to find some clues there. Good luck.
Suppose I click on a link to website A on a page and, just before the current page gets replaced, I click on a different link to a different website, say B.
What happens to the request that was sent to website A? Does website A's server still reply, and does the browser just discard the HTTP reply?
There is no specific HTTP provision for canceling a request. I would expect this to happen at the socket level.
I would expect the associated TCP socket to be closed immediately when the request is cancelled. Since HTTP uses a single socket for the request, the server sees the close after the request. If the close is processed before the response data is generated, that data won't be sent to the client; otherwise the data is sent to the client and ignored, since the socket is already closed. There may be some wasted work, but a special HTTP "cancel" message would have the same effect.
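Here is a tiny illustration of that socket-level view, as a plain Java client rather than a browser (example.com stands in for site A): the request is written and the connection is closed before the response is read. Anything site A sends afterwards is discarded by the client's TCP stack or answered with a reset; HTTP itself never carries a "cancel" message.

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch: send a request and abandon the connection before reading the reply,
// which is roughly what "cancelling" a navigation looks like on the wire.
public class AbortedRequest {
    public static void main(String[] args) throws Exception {
        try (Socket site = new Socket("example.com", 80)) {
            OutputStream out = site.getOutputStream();
            out.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();
            // Closing here (via try-with-resources), without reading the response,
            // is the moral equivalent of the user navigating away to site B.
        }
    }
}
```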