I compiled Nginx 1.2.4 with --with-http_stub_status_module.
When I query Nginx regarding its current status, it returns 8 active connections, all of them of the Writing type:
Active connections: 8
server accepts handled requests
5011178 5011178 5011178
Reading: 0 Writing: 8 Waiting: 0
However, there are no established connections to my Nginx other than the curl request I'm performing to get the status.
Any idea why Nginx returns this number of Active connections?
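For reference, the numbers above can be fetched and parsed programmatically; a minimal sketch, assuming stub_status is exposed at /nginx_status on localhost (adjust the URL to match your location block):

    import re
    import urllib.request

    # Assumption: stub_status is served at this URL; change it to match
    # the "location" block in your nginx config.
    STATUS_URL = "http://127.0.0.1/nginx_status"

    with urllib.request.urlopen(STATUS_URL) as resp:
        text = resp.read().decode("ascii")

    # The output has exactly the shape shown above, so two regexes suffice.
    active = int(re.search(r"Active connections:\s*(\d+)", text).group(1))
    reading, writing, waiting = map(int, re.search(
        r"Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting:\s*(\d+)", text).groups())
    print(f"active={active} reading={reading} writing={writing} waiting={waiting}")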
Active connections don't represent the number of requests to your app. If your app serves HTML pages, then each element such as an image, CSS, or JS file (if it isn't cached in the browser or another client) will also be requested from your server. So one HTML page can produce many active connections.
But... as far as I know, those would show up as Reading or Waiting connections, and in your example they are Writing. If you are absolutely sure that the only request is the curl you are making, then check your nginx configuration (and that of any proxy in front of it), because it looks like client connections aren't being closed after POST requests or uploads.
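One way to double-check, if you are on Linux and nginx listens on port 80 (both assumptions), is to compare stub_status against the kernel's own count of established connections; a rough sketch:

    # Count ESTABLISHED sockets on the nginx listen port via /proc/net/tcp
    # (Linux only; port 80 is an assumption).
    PORT = 80
    established = 0
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_port = int(fields[1].split(":")[1], 16)
            state = fields[3]
            if local_port == PORT and state == "01":  # 01 == TCP_ESTABLISHED
                established += 1
    print("established sockets on port", PORT, "=", established)

If this count stays near 1 (your curl) while stub_status keeps reporting 8 Writing connections, that points at nginx holding on to connections internally rather than at real clients.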
Related
We are seeing 499 errors when we close the browser tab before getting a response to the request. We are using nginx in k8s.
I have tried configuring the "proxy_ignore_client_abort: on" property in the ingress configuration, but we still see the issue even after setting it. Please suggest a way to fix this.
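For reference, the 499 is easy to reproduce outside the browser; a minimal sketch (the host, port, and /slow path are assumptions; any slow endpoint behind nginx will do):

    # Open a request and hang up before the response arrives; nginx logs
    # this request with status 499 (client closed the connection).
    import socket

    s = socket.create_connection(("127.0.0.1", 80))
    s.sendall(b"GET /slow HTTP/1.1\r\nHost: localhost\r\n\r\n")
    s.close()  # client aborts without reading the response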
First, you should know that nginx logs a 499 when the client actively closes the connection. So if everything else is good, you may not need to pay much attention to it.
Nginx can be the server to the user and, at the same time, the client to a backend server such as Tomcat:
user -> nginx -> server (Tomcat)
In my case, I found that a server like Tomcat will abort the connection if it cannot keep up with the requests piling up in its accepted list (or is too slow to respond).
In TCP, a real server like Tomcat maintains two lists. The first is the SYN list, and the second is the accepted list. Let me elaborate:
1. The client first sends a SYN to the server.
2. The server puts it into the SYN list and returns SYN+ACK.
3. The client sends an ACK to the server.
4. Finally, the server establishes the connection, removing it from the SYN list and putting it into the accepted list.
In your case, if you close the tab before step 2, I think you needn't do anything at all.
If you close the tab before step 4, you can refactor your server's interface to be asynchronous, which greatly improves its response speed; a sketch of the accepted-list behavior follows.
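Here is a small sketch of that accepted list using a plain TCP socket; the port and the deliberately tiny backlog of 1 are assumptions chosen to make the effect visible:

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)  # tiny accepted list, like a backend with a small accept queue

    # Simulate a busy backend: it never calls srv.accept(), so completed
    # handshakes pile up and later clients stall or fail (exact behavior
    # varies by OS).
    for i in range(5):
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.settimeout(2)
        try:
            c.connect(("127.0.0.1", 8080))
            print("client", i, "completed the handshake")
        except OSError as exc:
            print("client", i, "stuck:", exc)

A proxy such as nginx sitting in front of a backend in this state is exactly where users give up, close the tab, and produce the 499.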
I just want to know how, or where, we can check that multiple requests/responses go over a single TCP connection in HTTP/2. I mean practically.
Thanks in advance
In Chrome Developer Tools, in the Network tab, you can add the Connection ID column; if the value is the same for two requests, they are using the same connection.
Alternatively if you run the site through WebPagetest there is a handy Connection View.
There you can see that Amazon uses only two connections for images-na.ssl.com, rather than the usual 6 connections per domain you get when forcing HTTP/1.1.
And the reason it uses two connections (and why you can see 12 connections for some domains under HTTP/1.1) is that anonymous CORS requests (which Amazon uses to download JavaScript via XHR) effectively count as a separate origin, and so go over a separate connection in HTTP/2, and over up to 6 more connections in HTTP/1.1.
My understanding so far is that when someone tries to access a web page, the following happens:
An HTTP request is formed
A new socket is opened
The HTTP request is sent
If everything went OK, the web browser accepts the HTTP response and builds a DOM tree out of the received HTML. If any resources are missing, a new HTTP request needs to be made for each one separately.
Each of those HTTP requests requires opening another socket (establishing a new virtual connection with the server).
Q: How is that efficient? I understand those resources could be located on another host (which would indeed require a new TCP connection), but if they are all on the same host, wouldn't it be far more efficient to transfer all the data within a single TCP connection?
Each of those HTTP requests requires opening another socket (establishing a new virtual connection with the server).
No, it doesn't. HTTP/1.1 uses persistent connections by default, and HTTP/1.0 before it had the unofficial Connection: keep-alive header, which accomplished the same thing nearly twenty years ago.
Q: How is that efficient?
It isn't, and that's why it doesn't happen.
I understand those resources could be located on another host (which would indeed require a new TCP connection), but if they are all on the same host, wouldn't it be far more efficient to transfer all the data within a single TCP connection?
Yes, and that is what happens by default.
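You can watch this happen with a few lines of Python's standard library (example.com is just a stand-in host); both requests below travel over the same TCP connection because neither side closes it between them:

    import http.client

    # One TCP connection, two HTTP/1.1 requests: HTTPConnection keeps the
    # underlying socket open across requests.
    conn = http.client.HTTPConnection("example.com")
    for path in ("/", "/index.html"):
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()  # drain the body before reusing the connection
        print(path, resp.status, len(body))
    conn.close()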
I am trying to solve an issue with uploads to our web infrastructure.
When a user uploads media to our site, it is proxied (via our web proxy tier) to a Java backend with a limited number of threads. When a user has a slow connection or a large upload, this holds one of the Java threads open for a long period of time, reducing overall capacity.
To mitigate this I'd like to implement an 'upload proxy' which will accept the entire HTTP POST data of the upload and, only once it has received all of the data, quickly proxy that POST to the Java backend, pushing the problem of the upload thread being held open onto an HTTP proxy.
Initially I found that Apache Traffic Server has a 'buffer_upload' plugin, but it seems a bit bleeding-edge and has no support for regexes in URLs, although it would solve most of my issues.
Does anyone know a proxy product that would be able to do what I am suggesting (aside from Apache Traffic Server)?
I see that Nginx has fairly detailed buffer settings for proxying, but from the docs and explanations it doesn't seem to wait for the whole POST before opening a backend connection/thread. Do I have this right?
Cheers,
Tim
Actually, nginx always buffers requests before opening a connection to the backend. You can turn off response buffering with the proxy_buffering directive, or control it per response by having the backend set an X-Accel-Buffering response header.
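As a sketch of that per-response control, a backend can opt a single response out of buffering by setting the header itself (the port here is an assumption):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Tells nginx not to buffer this particular response.
            self.send_header("X-Accel-Buffering", "no")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"streamed straight through\n")

    HTTPServer(("127.0.0.1", 8081), Handler).serve_forever()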
I'm working on Comet support for the CppCMS framework via long XMLHttpRequest polls. In many cases, such a request is closed by the client before any response from the server is given -- for example, the page is closed, the user moves to another page, or the page is just refreshed.
On the server side, I expect to receive a notification that the connection has been dropped. I tested the application via 3 connectors: FastCGI, SCGI, and a simple HTTP proxy.
Of the 3 major UNIX web servers (Apache2, Lighttpd, and Nginx), only the last one closed the connection as expected, allowing my application to remove the request from its wait queue; this worked for both the FastCGI and HTTP proxy connectors. (Nginx does not have an SCGI module by default.)
The others, Apache and Lighttpd, do not close the connection or inform the backend about disconnected clients; they proceed as if the client were still online. This happens with all 3 supported APIs: FastCGI, SCGI, and HTTP proxy.
I have opened an issue for Lighttpd, but what concerns me more is the fact that Apache, a web server as mature and well supported as Lighttpd, also does not disclose to the backend that the client has gone.
Questions:
Is this a bug, or is it a feature? Is there any reason not to close the connection between the web server and the application backend?
Are there real-life Comet applications running behind these servers via FastCGI/SCGI/HTTP-proxy backends?
If so, how do they deal with this issue? I understand that I could time out all connections every 10 seconds, but I would like to keep them idle for as long as the client is listening, because this allows easier scale-up: each connection is very cheap, and the only cost is the open socket.
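For what it's worth, my application detects the drop in the usual way once the web server actually closes the upstream connection: a read returns EOF or a write fails. A rough sketch of such a check (the helper name is mine):

    import socket

    def client_gone(sock: socket.socket) -> bool:
        """Peek at the socket to see whether the peer has hung up."""
        sock.setblocking(False)
        try:
            data = sock.recv(1, socket.MSG_PEEK)
            return data == b""      # orderly close (FIN) from the peer
        except BlockingIOError:
            return False            # no data yet, but still connected
        except ConnectionResetError:
            return True             # hard reset from the peer
        finally:
            sock.setblocking(True)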
Thanks!
(1) Feature. Or, more specifically, fallout from an implementation detail.
A TCP/IP connection does not involve a constant flow of traffic back and forth. Thus, there is no way to know that a client is gone without (a) the client telling you it is closing the connection or (b) a timeout.
(2) I'm not specifically familiar with Comet or CppCMS. But, yes, there are all kinds of CMS servers running behind the mentioned web servers and they all have to deal with this issue (and, yes, it is a pain).
(3) Timeouts are the only way, but you can mitigate the pain, so to speak. Have the client ping the server across the connection every N seconds when there is otherwise no activity. The ping doesn't have to do anything, and you can tack things onto the reply: notifications of concurrent edits or whatever else you need.
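A minimal sketch of that ping, with the client written in Python for brevity (in a Comet setup the real client would be browser-side JavaScript; the /comet/ping path and the 10-second interval are assumptions):

    import http.client
    import threading

    def start_ping(host: str, interval: float = 10.0) -> None:
        """Ping the server every `interval` seconds so it knows we're alive."""
        def ping():
            conn = http.client.HTTPConnection(host)
            conn.request("GET", "/comet/ping")  # assumed no-op endpoint
            conn.getresponse().read()
            conn.close()
            threading.Timer(interval, ping).start()
        ping()

    start_ping("127.0.0.1")

The server can then treat "no ping for 2*N seconds" as a dead client and drop the corresponding entry from its wait queue.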
You are correct that it is surprising mod_fastcgi doesn't support telling the backend that Apache has detected the disconnect or that the connection has timed out. And you aren't the first to be dismayed.
The second patch on this page should fix that particular issue:
http://osdir.com/ml/web.fastcgi.devel/2006-02/msg00015.html
http://ncannasse.fr/blog/tora_comet
I don't have any concrete information for you, but this article does mention that they can detect when the client has disconnected from Apache. See tora.Queue. And it sounds like the source is available in the neko CVS, so you might be able to find some clues there. Good luck.