I am encountering a problem with requests made to a WCF application hosted in IIS 8.5. The request fails after exactly 15 minutes (every time), but the thread keeps working behind the scenes and finishes the invoked action after those 15 minutes.
We have tested this both through a web application that connects to the WCF service and with WcfTestClient.exe; the result is the same.
We have checked all the configuration in IIS and nothing points to a 15-minute timeout. The service binding has more than 15 minutes configured for both receiveTimeout and sendTimeout.
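To illustrate, even with a client binding along the following lines, the call is still cut off at 15 minutes (this is only a minimal sketch; the BasicHttpBinding type, the address, the contract name, and the 30-minute values are stand-ins for our actual configuration):

```
using System;
using System.ServiceModel;

// Hypothetical service contract, for illustration only.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    void LongRunningOperation();
}

class Program
{
    static void Main()
    {
        // Client-side binding with every timeout well above 15 minutes.
        var binding = new BasicHttpBinding
        {
            SendTimeout    = TimeSpan.FromMinutes(30), // time allowed for the reply
            ReceiveTimeout = TimeSpan.FromMinutes(30), // idle time before the channel is dropped
            OpenTimeout    = TimeSpan.FromMinutes(1),
            CloseTimeout   = TimeSpan.FromMinutes(1)
        };

        // Placeholder address; the real service URL differs.
        var address = new EndpointAddress("http://myserver/MyService.svc");

        using (var factory = new ChannelFactory<IMyService>(binding, address))
        {
            IMyService client = factory.CreateChannel();
            client.LongRunningOperation(); // still fails at exactly 15 minutes
        }
    }
}
```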
Has anyone encountered this problem? We can't find the cause of this behavior.
Thank you
In the end it turned out to be the configuration of a proxy server sitting in front of the IIS server: it had a 15-minute timeout setting.
I have a .NET application hosted in IIS 10 running on Windows Server 2019. Sometimes the website stops responding (usually when running E2E tests), and even restarting the website, the application pool, IIS, or the whole machine doesn't bring it back.
Looking in Event Viewer I see errors like these:
Forms authentication failed for the request. Reason: The ticket supplied has expired
Failed to stop a listening channel for protocol 'http' at allotted time from worker process serving application pool
A process serving application pool exceeded time limits during shut down
In the HTTPERR files I see a lot of messages containing Connection_Abandoned_By_ReqQueue and Connection_Dropped.
In the inetpub log files I can't see anything relevant, just the URL requests.
To add more information, we have SignalR installed, and sometimes errors like the following appear in the event log:
The user identity cannot change during an active SignalR connection.
Any idea what might be causing this?
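In case it helps to correlate timestamps, this is roughly how I dump the recent errors and warnings from the Application log so they can be lined up with the moments the site stops responding (a quick sketch; the one-hour window and the severity levels are arbitrary choices):

```
using System;
using System.Diagnostics.Eventing.Reader;

class EventDump
{
    static void Main()
    {
        // Errors and warnings from the Application log over the last hour (3600000 ms).
        var query = new EventLogQuery(
            "Application",
            PathType.LogName,
            "*[System[(Level=1 or Level=2 or Level=3) and TimeCreated[timediff(@SystemTime) <= 3600000]]]");

        using (var reader = new EventLogReader(query))
        {
            for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
            {
                using (record)
                {
                    Console.WriteLine($"{record.TimeCreated:o}  {record.ProviderName}");
                    Console.WriteLine($"  {record.FormatDescription()}");
                }
            }
        }
    }
}
```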
Background:
I am working with an external vendor where requests for a file download go through a web service. I believe their service is hosted on Heroku, and recently we have been seeing connections get killed after exactly 30 seconds. 99.99% of our requests receive sub-second responses; occasionally one or two requests take 20+ seconds, and any that hit the 30-second mark hit the issue. So I only see this on about 1 in 10,000 requests (it happens once every few days).
I've done some looking around, and the only common thread I've found is that Heroku has a 30-second HTTP request timeout that a few people have had issues with, and on the server side it's apparently pretty easy to spot one of these. The problem is that we don't have access to the server-side logs and only get a generic error on the client side.
What I have tried:
In terms of debugging, I pointed the service endpoint at a local dummy web service that literally just sleeps for 3 minutes, and it doesn't time out until the 120-second mark (which is our server's default).
Error in the WebException message after 30 seconds against the external vendor's service: "The underlying connection was closed: An unexpected error occurred on a receive"
As a note on the above error message, TLS 1.2 is already being forced for these requests.
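For reference, the failing call boils down to roughly this (a simplified sketch; the vendor URL is a placeholder and the 120-second timeout mirrors our server default):

```
using System;
using System.Diagnostics;
using System.IO;
using System.Net;

class DownloadTest
{
    static void Main()
    {
        // TLS 1.2 is forced, as mentioned above.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

        // Placeholder endpoint; the real vendor URL differs.
        var request = (HttpWebRequest)WebRequest.Create("https://vendor.example.com/download");
        request.Timeout = 120000; // our 120-second default

        var watch = Stopwatch.StartNew();
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                reader.ReadToEnd();
            }
            Console.WriteLine($"Succeeded after {watch.Elapsed.TotalSeconds:F1}s");
        }
        catch (WebException ex)
        {
            // For the problem requests this fires at ~30 seconds with
            // "The underlying connection was closed: An unexpected error occurred on a receive",
            // well before our own 120-second timeout.
            Console.WriteLine($"Failed after {watch.Elapsed.TotalSeconds:F1}s: {ex.Message}");
        }
    }
}
```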
Actual Question:
Is it possible, given the above, that Heroku is actually killing this connection on the server side, and that this is what produces the generic error we see?
Sources:
Heroku HTTP request timeouts: https://devcenter.heroku.com/articles/request-timeout
Outsystems blaming these errors on remote server, not client: https://www.outsystems.com/forums/discussion/15641/tip-the-underlying-connection-was-closed-an-unexpected-error-occurred-on-a-rece/
I have an ASP.NET application running on IIS 7.5 on Windows Server 2008 R2, using an application pool in classic mode with framework version 4.
Sometimes I am running into the following problem:
The application can work for a few days, but then all of a sudden I receive an HTTP error 503 (Service Unavailable).
When I look at the application pool it appears to be running (I see it started), but it has actually frozen: every request to it gets a 503 response.
In the worker processes list (in IIS Manager) I see a lot of unhandled requests.
It's important to mention that other ASP.NET applications running under other application pools are working just fine, which means IIS itself is working and the problem is confined to this specific application pool.
When I looked into the HTTP error logs in the Windows\System32\LogFiles\HTTPERR folder I saw the following:
Under normal conditions, when everything works fine, I see "Timer_ConnectionIdle" records (a normal thing from what I have read).
At certain times, "Client_Reset" records start to appear.
About 15 minutes after the "Client_Reset" errors start, "QueueFull" records appear.
In order to get the application working again I do an iisreset (I guess recycling the pool would also be enough).
I will be happy to receive any help or suggestions.
EDIT:
It's important to mention that nothing related gets written to the IIS logs, or to the System and Application event logs; the error occurs before the request gets that far.
Without more information about your problem, the quickest fix would be to configure the recycling settings for the application pool. Since your problem is about request queues, you can choose the option "After reaching a number of requests".
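If you'd rather script the change than click through IIS Manager, something along these lines should work (a sketch using Microsoft.Web.Administration; the pool name and the request threshold are placeholders):

```
using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll, run elevated

class ConfigureRecycling
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // "MyAppPool" and the 100000-request threshold are placeholders.
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];
            pool.Recycling.PeriodicRestart.Requests = 100000; // recycle after this many requests
            serverManager.CommitChanges();
        }
    }
}
```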
When I encounter HTTP 500 errors, I find that enabling IIS Tracing is very helpful. Below is a link that describes the process of enabling tracing and then reviewing the trace. The first section of the site describes how to install IIS, so you probably want to skip to the section labeled "Enable Failed-Request Tracing."
Troubleshooting Failed Requests Using Tracing in IIS 7
Edit:
Since you're getting a QueueFull error, you may want to monitor the request queues. The easiest way to do this is using Perfmon. On the server with IIS, open Performance Monitor and add the appropriate counters under "HTTP Service Request Queue." In your case, "Current Request Queue" for the ailing Application Pool would likely be of value.
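If you want to log the queue length over time rather than watch it live in the Perfmon UI, a small polling loop also works (a rough sketch; double-check the exact category and counter names in Perfmon on your server, and replace "MyAppPool" with the ailing pool's name):

```
using System;
using System.Diagnostics;
using System.Threading;

class QueueMonitor
{
    static void Main()
    {
        // Category and counter names as they typically appear under "HTTP Service Request Queues";
        // verify them in Perfmon first, since they can differ between versions.
        using (var queue = new PerformanceCounter(
            "HTTP Service Request Queues", "CurrentQueueSize", "MyAppPool", readOnly: true))
        {
            while (true)
            {
                Console.WriteLine($"{DateTime.Now:HH:mm:ss}  queued requests: {queue.NextValue()}");
                Thread.Sleep(5000); // sample every 5 seconds
            }
        }
    }
}
```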
On one of our production servers, requests occasionally get stuck in AcquireRequestState in the session module. As it is an MVC request, it does not time out, so we sometimes get requests that run in the background for several hours.
We are using the standard ASP.NET session module on .NET 4 and IIS 7.5, with InProc session state.
Why would it get stuck?
Had the same problem when running with the ASP.NET State Server; restarting the service resolved it.
I've got a very simple Windows Forms app that hits an IIS 7 site about 2000 times in the space of a few seconds (using threads).
When I run that app on the server itself, using either localhost or the IP address, everything is totally fine.
However, when I run the app on my dev box, using the IP address, I get an error from the "GetResponse" method:
The operation has timed out
The App can definitely connect to the site, because it consistently either starts throwing the timeout error after 10 or so hits (no more than 11), or it throws the timeout error immediately.
What's going on?
It's hitting IIS 7 on a Windows Server 2008 VM (external box), Windows Firewall is OFF.
My App is running locally on my dev box as the admin.
cheers
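For context, the client boils down to roughly this (a simplified sketch rather than the actual Windows Forms code; the URL, request count, and the use of Parallel.For instead of raw threads are assumptions for illustration):

```
using System;
using System.Net;
using System.Threading.Tasks;

class LoadClient
{
    static void Main()
    {
        // Placeholder address; the real app points at the IIS 7 site by IP.
        const string url = "http://192.0.2.10/";

        Parallel.For(0, 2000, i =>
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine($"{i}: {(int)response.StatusCode}");
                }
            }
            catch (WebException ex)
            {
                // From the dev box this starts firing after roughly 10 requests:
                // "The operation has timed out"
                Console.WriteLine($"{i}: {ex.Message}");
            }
        });
    }
}
```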
I believe the default thread pool size for IIS is about 10 threads. You're overloading that single server.
Are you doing performance testing? Do you expect that many requests, that fast, in production?