I have a .NET Core Web API deployed on Windows Server 2019/IIS; it receives transactions from a sender and sends back an acknowledgment.
I am experiencing an issue where the sender complains that they sometimes get a time-out, meaning the application is not sending the acknowledgment back to them.
The transactions that time out do not appear in my application log file (I record every incoming request), and there are no errors at the IIS level.
I am not sure whether these timed-out transactions are being dropped at the IIS level or are never reaching IIS at all.
I have checked the IIS logs and applied a lock in the method that receives the HTTP POST.
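One way to narrow this down is to log at the very start of the request pipeline, before routing or any lock in the controller, so you can tell whether the timed-out requests ever reach the application at all. A minimal sketch, assuming a recent ASP.NET Core project with the minimal hosting model (older projects would register the same middleware at the top of Startup.Configure):

```csharp
// Program.cs - log every request as the very first middleware in the pipeline.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
var app = builder.Build();

app.Use(async (context, next) =>
{
    // Runs before routing, model binding, and any controller-level lock.
    app.Logger.LogInformation("IN  {Method} {Path} from {Ip}",
        context.Request.Method, context.Request.Path, context.Connection.RemoteIpAddress);

    await next();

    app.Logger.LogInformation("OUT {Status} {Path}",
        context.Response.StatusCode, context.Request.Path);
});

app.MapControllers();
app.Run();
```

If a timed-out transaction never shows up even in this early log, the request is most likely being dropped before it reaches the worker process, and the HTTPERR log and IIS Failed Request Tracing are the next places to look.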
Related
I have a .NET application hosted in IIS 10 running on Windows Server 2019. Sometimes the website stops responding (usually when running E2E tests), and even restarting the website, application pool, IIS, or the machine doesn't help.
Looking in Event Viewer I see errors like these:
Forms authentication failed for the request. Reason: The ticket supplied has expired
Failed to stop a listening channel for protocol 'http' at allotted time from worker process serving application pool
A process serving application pool exceeded time limits during shut down
In HTTPERR files I see a lot of messages containing Connection_Abandoned_By_ReqQueue and Connection_Dropped
In the inetpub log files I can't see anything relevant, just the URL requests.
To add more information: we have SignalR installed, and sometimes errors appear in the event log with messages like:
The user identity cannot change during an active SignalR connection.
Any idea what might be causing this?
We have a web application running on several servers. Two of the servers are having issues where the application pool becomes disabled. The message in the Event Viewer System log is: Application pool 'xxxxx' is being automatically disabled due to a series of failures in the process(es) serving that application pool.
Just prior to this message there are several other 'Warning' messages: A process serving application pool 'xxxxx' suffered a fatal communication error with the Windows Process Activation Service. The process id was '1072'. The data field contains the error number. Or: A process serving application pool 'xxxxx' terminated unexpectedly. The process id was '3644'. The process exit code was '0x0'.
We are running IIS 7. The servers that are failing are running 2008 R2 with Service Pack 1, and the others are running 2008 R2 (no service pack).
In the HTTP log, right before the AppOffline message, there are several Connection_Abandoned_By_ReqQueue and Client_Reset messages.
I have read and reread many posts about changing the Rapid-Fail Protection settings from their defaults (failure interval of 5 minutes, maximum of 5 failures), since after five failures in five minutes the app pool is stopped. However, changing these values only alters how many failures are tolerated in a given window before the app pool is stopped; it does not address the root cause of the problem.
What is the correct method for determining why the application is failing?
Could the difference in service packs between the servers be the culprit?
Thanks.
We have an ASP.NET/WCF app hosted on Windows Server 2012 (IIS 7). We use basicHttpBinding. This ASP.NET/WCF application exposes two methods: one receives messages and the other receives a text file (1 MB) uploaded to the server.
On another server we have an ASP.NET application hosted on Windows Server 2012 (IIS 7), which is the client that consumes the methods mentioned above. This client application sends a message and uploads a text file at a high frequency. The communication between the ASP.NET/WCF application and the client application works fine for a few hours, until we get the following error on the ASP.NET/WCF side.
Application pool 'XXXXXXXXXX' is being automatically disabled due to a series of failures in the process(es) serving that application pool.
So, could you please shed some light on this issue that we are facing?
This is due to something called "Rapid Fail Protection." When your underlying application crashes a certain number of times in a certain time period, the application pool is automatically disabled.
The default settings are 5 crashes in 5 minutes, but you can configure this yourself. See this link for details.
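If you want to adjust these settings programmatically rather than through IIS Manager, here is a sketch using Microsoft.Web.Administration (the pool name is the placeholder from the error message; run elevated so the change can be written to applicationHost.config):

```csharp
using System;
using Microsoft.Web.Administration;

using (var serverManager = new ServerManager())
{
    var pool = serverManager.ApplicationPools["XXXXXXXXXX"]; // placeholder pool name

    pool.Failure.RapidFailProtection = true;                              // on by default
    pool.Failure.RapidFailProtectionMaxCrashes = 10;                      // default: 5 crashes
    pool.Failure.RapidFailProtectionInterval = TimeSpan.FromMinutes(10);  // default: 5 minutes

    serverManager.CommitChanges();
}
```

As the earlier question points out, raising these limits only delays the shutdown; the underlying crashes still need to be diagnosed (for example from the w3wp.exe exit codes and crash dumps).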
Is it possible to configure IIS in such a way that it can handle multiple HTTP requests that arrive on the same TCP socket in HTTP pipelining mode in parallel?
We have a problem where multiple requests are done by a web client in a single TCP socket, using HTTP pipelining. The client basically sends let's say 10 requests at once, and then the server sends 10 responses (in the same order as the requests). Our server takes quite some time for each request, mostly waiting for external IO. It would be much more efficient if IIS could start to work on all 10 requests in parallel, then serialize the responses in the correct order back to the client. Obviously, the server would need some way to cache responses if e.g. response 3 is available earlier than response 2.
Is that possible somehow? Maybe this is not possible in IIS, or I'm just searching for the wrong keywords... We are running IIS 7.5 and ASP.NET 4.5 on Windows Server 2008 R2.
We came across the same issue in IIS 7.5.
Our solution was to enable a "Web Garden"... and it really works well! The catch is that you can't have a "session"-based web site, because in-process session state isn't shared across worker processes. So if you have clients logging in, you will have to reconfigure how that works. (We used cookies to store an encrypted token, but that's beside the point.)
Go to:
Internet Information Service > Applications Pools
Select the Pool being used (you should have a pool per site)
Click Advanced Settings...
Find "Maximum Worker Processes" and crank that sucker!
The number of worker processes you can push it up to depends entirely on how much RAM your system has. You can of course monitor and control this yourself.
With a "Web Garden" enabled, you will notice (with Process Explorer or something similar) that IIS spawns a new instance of w3wp.exe for each request, up to the maximum number you specified. New requests simply get processed by the next available worker process, enabling true parallel request processing in IIS. If two requests come in within moments of each other and request 2 completes before request 1, request 2 sends its response first.
IIS uses the HTTP Server API (which in turn uses HTTP.sys), so I did a simple test:
wrote an HTTP server using this API,
wrote a Winsock client that opens a connection and sends two HTTP requests.
I observed that if I called HttpReceiveHttpRequest twice on the server (without sending the response for the first request), it does not receive the second request (basically, the second call blocks). This holds true for both PUT and GET requests.
It appears that HTTP.sys is in fact serializing requests to IIS on a single connection; I couldn't find any configuration on HTTP.sys that might modify this behavior.
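The original test was written against the native HTTP Server API; a rough approximation of the same experiment in C#, using HttpListener (which sits on the same HTTP.sys request queue), might look like the sketch below. The port, paths, and timings are arbitrary; per the observation above, the second pipelined request should not be delivered until the response to the first one has been sent.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class PipeliningProbe
{
    static async Task Main()
    {
        // HttpListener uses the HTTP Server API / HTTP.sys under the covers.
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/"); // may need admin rights or a urlacl reservation
        listener.Start();

        // Client: one TCP connection, two pipelined GET requests written back to back.
        var clientTask = Task.Run(async () =>
        {
            using var tcp = new TcpClient("localhost", 8080);
            var stream = tcp.GetStream();
            var payload =
                "GET /first HTTP/1.1\r\nHost: localhost:8080\r\n\r\n" +
                "GET /second HTTP/1.1\r\nHost: localhost:8080\r\nConnection: close\r\n\r\n";
            var bytes = Encoding.ASCII.GetBytes(payload);
            await stream.WriteAsync(bytes, 0, bytes.Length);
            await Task.Delay(5000); // keep the connection open while the server responds
        });

        var first = await listener.GetContextAsync();
        Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff} received {first.Request.Url.AbsolutePath}");

        // Ask for the next request *before* answering the first one.
        var secondTask = listener.GetContextAsync();
        await Task.Delay(2000); // hold the first response back for two seconds
        Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff} second request delivered yet? {secondTask.IsCompleted}");

        var body = Encoding.ASCII.GetBytes("ok");
        first.Response.ContentLength64 = body.Length;
        await first.Response.OutputStream.WriteAsync(body, 0, body.Length);
        first.Response.Close();

        var second = await secondTask;
        Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff} received {second.Request.Url.AbsolutePath}");
        second.Response.ContentLength64 = body.Length;
        await second.Response.OutputStream.WriteAsync(body, 0, body.Length);
        second.Response.Close();

        await clientTask;
        listener.Stop();
    }
}
```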
As you can see, while requests from all users all over the web just keep being added to the queue and building up (green), only a single request is executing (blue).
This doesn't really answer the question, but it's a vivid illustration of this disastrous situation.
We've implemented a "background service" in our ASP.NET web app that receives messages from MSMQ at random intervals without requiring an HTTP request to start up the application. We auto-start this web app using a serviceAutoStartProvider.
Everything works when IIS initially starts up, when the server is rebooted, and so on; we receive messages just fine. BUT if we just stop the site in IIS (without touching the application or app pool), the application stops receiving MSMQ messages. And when we start the web site again, the serviceAutoStartProvider is not called again, so our app does not start listening to MSMQ messages again!
If we issue an HTTP request against the web app after the IIS site has been stopped and started again, it starts listening to MSMQ messages again.
Shouldn't our "background service" web app continue to listen for MSMQ messages even if the IIS site is stopped? It won't get any requests, but I think it should continue to run.
What exactly happens in an ASP.NET application/app pool when the IIS site is stopped? Are any events fired that we can hook into? The app pool claims to be "started" in IIS Manager, but no code is running in it.
Why isn't our serviceAutoStartProvider called when the site is started again? I believe it is "by design", since the application isn't really stopped. But the application isn't running either; it has to be woken up by an actual HTTP request.
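For context, a serviceAutoStartProvider is a class implementing System.Web.Hosting.IProcessHostPreloadClient that is registered in applicationHost.config and referenced from the application's serviceAutoStartProvider attribute. A minimal sketch of what such a provider typically looks like (the class name and the listener helper are placeholders, not the poster's actual code):

```csharp
using System.Web.Hosting;

// Registered under <serviceAutoStartProviders> in applicationHost.config; requires the
// application pool to be auto-started and the application to have preloadEnabled="true".
public class MsmqAutoStartProvider : IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // IIS/WAS calls this once when the application is preloaded,
        // without waiting for a first HTTP request.
        MsmqListener.Start();
    }
}

// Placeholder for the app's "background service"; the real implementation
// would open the queue and begin receiving messages here.
internal static class MsmqListener
{
    public static void Start() { }
}
```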
When the IIS web app shuts down (e.g. because no new HTTP(S) requests arrived within the idle timeout), the .NET app domain (within the app pool worker process) completely closes and unloads. This includes all background threads, including those used by the .NET thread pool.
A web app can be configured with a longer (or no) idle timeout, and then background worker threads could continue to process work.
But it would be better to run such workers in a dedicated service process that is managed completely separately.
Or, even better, use IIS application hosting with WCF to create the MSMQ listener. I understand that in this case the integration of Windows Process Activation Service (WAS) with IIS would restart the web app if a new message arrived after it had been shut down.
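A sketch of the first option (a longer, or no, idle timeout) using Microsoft.Web.Administration; the pool name is a placeholder, and the same value can be set under Advanced Settings in IIS Manager:

```csharp
using System;
using Microsoft.Web.Administration;

using (var serverManager = new ServerManager()) // run elevated
{
    var pool = serverManager.ApplicationPools["MyAppPool"]; // placeholder pool name

    // TimeSpan.Zero = never shut the worker process down for being idle (default is 20 minutes).
    pool.ProcessModel.IdleTimeout = TimeSpan.Zero;

    // Optionally also set the pool's startMode to "AlwaysRunning" (IIS 7.5+), via
    // Advanced Settings or the startMode attribute in applicationHost.config, so the
    // worker process starts without waiting for a request.

    serverManager.CommitChanges();
}
```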
I would host the MSMQ listener in a Windows service. Why couple it to IIS?
UPDATE
Actually, what I mean is: why couple MSMQ and ASP.NET in the same app pool?
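A minimal sketch of that approach: the MSMQ listener hosted in a plain Windows service using System.Messaging (the queue path and the message handling are placeholders):

```csharp
using System.Messaging;
using System.ServiceProcess;

public class MsmqListenerService : ServiceBase
{
    private MessageQueue _queue;

    protected override void OnStart(string[] args)
    {
        // Placeholder queue path; a real service would read it from configuration.
        _queue = new MessageQueue(@".\private$\incoming-messages");
        _queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
        _queue.ReceiveCompleted += OnReceiveCompleted;
        _queue.BeginReceive();
    }

    private void OnReceiveCompleted(object sender, ReceiveCompletedEventArgs e)
    {
        var message = _queue.EndReceive(e.AsyncResult);
        // Process message.Body here (placeholder).
        _queue.BeginReceive(); // wait for the next message
    }

    protected override void OnStop()
    {
        _queue?.Dispose();
    }

    public static void Main() => Run(new MsmqListenerService());
}
```

The service then has its own lifetime, independent of IIS site stops and app pool recycles, which is the point of decoupling it from IIS.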
You can now use the "Application Initialization" feature of IIS 8 (and IIS 7.5); more information, including version availability and usage documentation, can be found at:
http://www.iis.net/learn/get-started/whats-new-in-iis-8/iis-80-application-initialization
This replaces the "Application Warm-Up" module, which is no longer supported, and gives us proper control over component/service initialization in an "always running" scenario.