IIS concurrent handling of requests is slow - asp.net

In IIS, I have set "Maximum worker processes" to 16 in the application pool settings. Now I have a request that takes a long time to load. If another request is submitted at the same time, it gets no response until the previous request has finished loading. Since I have already set the worker process count to 16, isn't IIS supposed to handle the requests concurrently across the available worker processes? As far as I can observe, the second request only gets a response after the first one has finished. Are the requests queued in a single worker process, which makes them wait? How can I make IIS handle the requests concurrently, without the waiting?

Related

Mule 3: HTTP listener persistent connections and connection idle timeout

I am new to Mule and trying to learn Mule 3 (some of our existing APIs in production use Mule 3).
A production application has an HTTP listener with 'Use persistent connection' enabled and 'Connection idle timeout' at its default value of 30000 (30 seconds).
My understanding is that if I call the API listening at this listener from Postman (the REST client) and the request takes more than 30 seconds, it should receive a timeout error (504).
We added a Thread.sleep in an expression to simulate this behavior.
<expression-component doc:name="Expression"><![CDATA[Thread.sleep(60000);]]></expression-component>
This makes the flow sleep for 1 minute, which is longer than the 30 seconds configured for the timeout.
However, the request waits for the thread to wake up after 1 minute and returns a successful response.
So I am not sure what the purpose of 'connection idle timeout' is.
Also, what does 'persistent connection' mean?
The documentation is vague.
HTTP persistent connections are a feature of the HTTP protocol that the connector implements. The connection idle timeout indicates how long a persistent connection will remain open if there is no activity. It is not related to a response timeout, which is a timeout on the client side and seems to be what you are expecting. In this case the HTTP listener is the server and Postman is the client.
A response timeout in the client doesn't produce an HTTP status, because the request is simply aborted. You can get a 504 status if the request goes through a proxy and the proxy has a client timeout against the backend; the proxy usually returns a 504 in that scenario.
The documentation for connectors assumes that you are familiar with the underlying protocol or backend concepts, in this case the HTTP protocol.
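For reference, here is a sketch of where these two settings live on a Mule 3 HTTP listener configuration. The attribute names (usePersistentConnections, connectionIdleTimeout) are what I recall from the Mule 3 HTTP connector, so verify them against your version; the point is that they govern idle keep-alive connections, not how long a flow may run.
<!-- Mule 3 HTTP listener configuration (sketch). usePersistentConnections and
     connectionIdleTimeout control how long an idle keep-alive connection stays
     open; they do not abort a slow flow such as the Thread.sleep above. -->
<http:listener-config name="HTTP_Listener_Configuration"
                      host="0.0.0.0"
                      port="8081"
                      usePersistentConnections="true"
                      connectionIdleTimeout="30000"/>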

Are http requests automatically retrying tcp connections?

I'm building a distributed system in which I use HTTP requests to communicate. I want the requests to be fault tolerant. The requests have no timeout; should I retry a request after some period if I get no response, or does the HTTP request automatically retry TCP connections? I used the async-http-client library in Java. Thanks
... or does the HTTP request automatically retry TCP connections?
An HTTP request is not a thing that can retry anything by itself; an HTTP request is just data. It is up to the application to retry the request if something goes wrong. Some libraries offer this, others do not. Most don't, since it is often not clear whether the request should be retried in the first place: it might have unintended side effects if the web application receives the request twice (it may have received the first one even though it returned no response).
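For illustration, a minimal retry loop in application code, sketched with the standard java.net.http client rather than the async-http-client library mentioned in the question; getWithRetry and its parameters are made-up names. Retrying only makes sense for requests that are safe to repeat, such as an idempotent GET.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RetryingGet {
    // Retry the same GET up to maxAttempts times; the retry logic lives in the
    // application, not in HTTP or TCP themselves.
    public static HttpResponse<String> getWithRetry(String url, int maxAttempts) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(10))  // per-attempt response timeout
                .GET()                            // GET is idempotent, so retrying is reasonable
                .build();
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return client.send(request, HttpResponse.BodyHandlers.ofString());
            } catch (Exception e) {
                last = e;                         // timeout or connection failure: try again
                Thread.sleep(1000L * attempt);    // simple linear backoff between attempts
            }
        }
        throw last;
    }
}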

Timeout for HTTP_BAD_GATEWAY.html.var in GlassFish 4?

I'm struggling to find the timeout configuration in GlassFish 4 to solve the below problem:
The application implements so-called tunneling, serving the contents of a few connected portals to users. It opens an HTTP connection, passes the user's request on to one of the portals, receives the response from the portal, and then passes the response on to the browser.
However, the problem occurs when one of the connected portals takes very long to respond. When this happens, the application seems to give up waiting for the response and, exactly after 5 minutes, sends another request to the portal in question, this time a request for error/HTTP_BAD_GATEWAY.html.var.
Does anyone know how to increase this timeout?
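As a point of comparison, here is a sketch of the tunneling step with explicit client-side timeouts on the outbound connection, using plain java.net.HttpURLConnection; the names and values are made up. Whether the 5-minute cutoff comes from timeouts set in code like this or from a server-level setting is exactly what would need to be checked.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PortalTunnel {
    // Hypothetical values; the observed 5-minute cutoff may instead come from a
    // server-level setting rather than from timeouts set in code like this.
    private static final int CONNECT_TIMEOUT_MS = 10_000;
    private static final int READ_TIMEOUT_MS = 10 * 60 * 1000; // wait up to 10 minutes for the slow portal

    public static byte[] fetchFromPortal(String portalUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(portalUrl).openConnection();
        conn.setConnectTimeout(CONNECT_TIMEOUT_MS);
        conn.setReadTimeout(READ_TIMEOUT_MS);     // how long to wait for the portal's response
        try (InputStream in = conn.getInputStream()) {
            return in.readAllBytes();             // the bytes are then passed on to the browser
        } finally {
            conn.disconnect();
        }
    }
}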

nginx - ungraceful worker termination after timeout

I plan to use nginx for proxying WebSockets. When performing an nginx reload / HUP, I understand that nginx waits for the old worker processes to stop processing all requests. With WebSocket connections, however, this may not happen for a long time because the connection is persistent. Is there an option / roadmap to forcibly kill old worker processes after a timeout on reload?
References:
http://nginx.org/en/docs/control.html
http://forum.nginx.org/read.php?21,247573,247651#msg-247651
Thanks
Unless you either raise the timeout (e.g. proxy_read_timeout 1d) or have ping messages keeping the connection alive, nginx closes idle proxied connections after 60 seconds by default. That default was chosen for a reason.
See what an nginx core developer says:
There is proxy_read_timeout (http://nginx.org/r/proxy_read_timeout)
which as well applies to WebSocket connections. You have to bump it
if your backend do not send anything for a long time. Alternatively,
you may configure your backend to send websocket ping frames
periodically to reset the timeout (and check if the connection is
still alive).
Having said that, nothing should stop you from using the USR2+QUIT signal combination that is usually used when you gracefully restart nginx during a binary upgrade. Nginx master/worker processes rarely consume more than 50 MB of memory, so keeping multiple masters around isn't that expensive. USR2 forks a new master and spawns its workers, after which the old workers and master are shut down gracefully (QUIT).
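Putting the proxy_read_timeout advice together, a minimal sketch of a WebSocket proxy location; the upstream name and path are placeholders.
# Sketch of a WebSocket proxy location; "backend" and /ws/ are placeholders.
location /ws/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # pass the WebSocket upgrade through
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 1d;                    # otherwise the proxied connection closes after 60s of inactivity
}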

HTTP 503 when calling long-running ASP.NET web service

We have an ASP.NET web service running on IIS 6 that has long-running methods (processing takes about 5 minutes).
When we call the web service from a Windows 2003 Server machine, our client gets an HTTP 503 error after waiting a couple of minutes for the response. So we never get the response data back to the client, even though the call actually completes on the server (our application logging shows that the whole method gets executed). So execution on the server side is not stopped; the client just stops waiting for the response.
However, when we call the same method with the same parameters and the same client from a Windows XP workstation, everything works as expected and we don't get any HTTP errors.
Does anyone have any ideas why this error happens only when calling from the server OS?
Is there some registry or other setting that controls how long the OS waits for HTTP responses?
