I am building an ASP.NET application. I'm experiencing slow load times, so I inspected the traffic using Fiddler. It seems the page itself loads in around 3 seconds.
OK, that's kind of slow, but what baffles me is how long the js, css, and image assets take to load, even when they're being cached: it takes 3 seconds for the server to answer each HTTP request with a 304. Now, I've done my share of web development, and my understanding is that a 304 Not Modified response should not take 3 seconds.
My suspicion is that the server the app is hosted on is underpowered. It is a VM running Windows Server 2008 SP2 with about 2GB of memory, and the physical machine has at least one other VM running concurrently. Before I go and get myself a new server, does a lack of power in the machine sound like a plausible cause of this problem?
Note: Latency should not be a problem; I'm accessing it through an intranet.
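For reference, a minimal C# sketch to time a conditional GET by hand, outside of Fiddler (the asset URL below is a placeholder; note that HttpWebRequest reports a 304 as a WebException):

    using System;
    using System.Diagnostics;
    using System.Net;

    class Probe304
    {
        static void Main()
        {
            // Placeholder URL -- point this at one of the slow js/css/image assets.
            var request = (HttpWebRequest)WebRequest.Create("http://myserver/styles/site.css");
            // Send If-Modified-Since so an unchanged asset comes back as 304.
            request.IfModifiedSince = DateTime.UtcNow.AddDays(-1);

            var sw = Stopwatch.StartNew();
            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                    Console.WriteLine("{0} in {1} ms", (int)response.StatusCode, sw.ElapsedMilliseconds);
            }
            catch (WebException ex)
            {
                // HttpWebRequest surfaces 304 Not Modified as an exception.
                var r = ex.Response as HttpWebResponse;
                if (r != null && r.StatusCode == HttpStatusCode.NotModified)
                    Console.WriteLine("304 in {0} ms", sw.ElapsedMilliseconds);
                else
                    throw;
            }
        }
    }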
Related
We have a production server running IIS 10 (Windows Server 2016). The strange thing is that the response time of an HTTP request (even for the main page) becomes slower over time.
Our first assumption was that the ASP.NET server / WCF services had a problem, but even doing an IISRESET /Stop, waiting 10 seconds, then IISRESET /Start does not solve the problem.
Only a full restart of the server solves the problem.
The small red spikes in the picture show our attempts with IISRESET.
Do you have any suggestions on this?
This problem can be solved by improving the configuration of the server, or by setting the application pool's idle timeout.
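For the idle-timeout part, a minimal sketch using the Microsoft.Web.Administration API (run with admin rights; "MyAppPool" is a placeholder for your pool's name, and the same setting is available in IIS Manager under the pool's Advanced Settings):

    using System;
    using Microsoft.Web.Administration;

    class DisableIdleTimeout
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                // "MyAppPool" is a placeholder -- use your application pool's name.
                var pool = serverManager.ApplicationPools["MyAppPool"];

                // TimeSpan.Zero disables the idle timeout, so the worker process
                // is not shut down after the default 20 minutes of inactivity.
                pool.ProcessModel.IdleTimeout = TimeSpan.Zero;
                serverManager.CommitChanges();
            }
        }
    }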
Please check CPU and memory usage. If you find that the website has high CPU usage, use the DebugDiag Collection tool to collect the relevant information.
We have a webserver (Windows Server 2019) running a few ASP.NET 4.8 websites with heavy traffic.
Each site runs in its own 64-bit Application Pool (Integrated Pipeline), .NET CLR version v4.0.
One of the websites apparently gets stuck and stops responding.
Sometimes this lasts a few minutes; sometimes it can remain stuck for an hour.
During this period our own application logs are empty.
Moreover, the IIS log of incoming requests is empty as well: requests arriving during this period aren't being logged at all.
On the client side, the browser doesn't get any error; it just keeps waiting for a response.
Since it happens with a single site only, we've ruled out server hardware as the cause. Server resource usage never goes above 50%.
On the other hand, we have no idea what could cause an Application Pool to get stuck in this way.
Please advise.
How we can investigate this issue?
Is there any tool that allows "debugging" what happens inside IIS at a low level?
Thanks in advance
Last night one of the websites (.NET 4.0 Web Forms) hosted on my Windows 2008 R2 (IIS 7.5) server started to time out, throwing the following error for all connected users.
TYPE System.Web.HttpException
MESSAGE Request timed out.
DETAIL System.Web.HttpException (0x80004005): Request timed out.
The outage was confined to just one website within IIS, the others continued to work fine.
Unfortunately I was unable to identify why the website was timing out. Here are the steps I took:
The first thing I did was look at Task Manager, which revealed normal CPU and memory usage. Network activity was also moderate.
I then opened IIS to look at the live connections under 'Worker Processes'. There were about 60 live connections, so it didn't look like anything DDoS-related.
Checked database connectivity (hosted on a separate server), all fine!
I then restarted the website in IIS. That didn't work.
I then tried a complete iisreset... still no luck. :(
In the end (and under some duress) the only thing I could think to do to resolve this was to restart the server.
Restarting the server worked, but I am nervous not knowing why this happened in the first place. Can anyone recommend any checks that I failed to carry out? Is there an official checklist for working through these sorts of IIS problems? I have reviewed the IIS logs but don't see anything unusual in the run-up to the outage.
Any pointers or links to useful resources to help me understand and mitigate against this in future will be much appreciated.
EDIT
The only time I logged into the server that day was to add an additional web handler component (for remote deploy) to IIS Web Deploy. I'm doubtful this caused the outage, as the server worked fine for 6 hours afterwards.
Because iisreset didn't help and you had to restart the whole machine, I would suspect a global resource shortage, with the most used (or most resource-hungry) website impacted first. It could be a lack of available RAM, or network connection congestion due to malfunctioning calls (for example, a lot of CLOSE_WAIT sockets exhausting the connection pool; we've seen that in production because of a malfunctioning external service). It could also have been one specific client's problem; that client was disconnected by the machine restart, so the problem eventually disappeared.
I would start with:
Historical analysis
review Event Viewer for any errors/warnings from that period of time,
although you have already looked into the IIS logs, I would do it once more with the help of Log Parser Lizard to compute statistics such as the number of requests per client, network bandwidth per client, average response time per client, and so on.
Monitoring
continuously monitor Performance Counters (a minimal polling sketch follows after this list):
\Processor(_Total)\% Processor Time,
\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec,
\Memory\Available MBytes,
\Web Service(Default Web Site)\Current Connections (one instance per site name),
\ASP.NET v4.0.30319\Request Wait Time,
\ASP.NET v4.0.30319\Requests Current,
\ASP.NET v4.0.30319\Requests Queued,
\Process(XXX)\Working Set,
\Process(XXX)\% Processor Time (XXX for each w3wp process),
\Network Interface(XXX)\Bytes Total/sec
run the Performance Analysis of Logs (PAL) tool over counter data from the time of failure for a very detailed analysis,
run netstat -ano to analyze network traffic (or, even better, the TCPView tool); see the CLOSE_WAIT check sketched below
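For the CLOSE_WAIT scenario mentioned above, a minimal C# check that counts TCP connections by state, roughly what you would otherwise eyeball in netstat or TCPView:

    using System;
    using System.Linq;
    using System.Net.NetworkInformation;

    class SocketStateCheck
    {
        static void Main()
        {
            // Snapshot of all active TCP connections on this machine.
            var connections = IPGlobalProperties.GetIPGlobalProperties().GetActiveTcpConnections();

            // Group by state; a large CLOSE_WAIT bucket hints at leaked connections.
            foreach (var group in connections.GroupBy(c => c.State).OrderByDescending(g => g.Count()))
                Console.WriteLine("{0}: {1}", group.Key, group.Count());
        }
    }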
If all this does not lead you to any conclusion, create a Debug Diagnostic rule that produces a memory dump of the process for long-running requests, and analyze the dump with WinDbg and the PSSCor extension for .NET debugging.
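And a minimal C# polling sketch for a few of the counters listed above (category and counter names as listed; add the Process(...) counters for your own w3wp instances as needed):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterPoller
    {
        static void Main()
        {
            var counters = new[]
            {
                new PerformanceCounter("Processor", "% Processor Time", "_Total"),
                new PerformanceCounter("Memory", "Available MBytes"),
                new PerformanceCounter("ASP.NET v4.0.30319", "Requests Current"),
                new PerformanceCounter("ASP.NET v4.0.30319", "Requests Queued"),
            };

            while (true)
            {
                // The first NextValue() on a rate counter returns 0;
                // samples become meaningful from the second iteration on.
                foreach (var c in counters)
                    Console.WriteLine("{0}\\{1}: {2:F1}", c.CategoryName, c.CounterName, c.NextValue());

                Console.WriteLine("---");
                Thread.Sleep(5000);
            }
        }
    }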
My environment: Windows Server 2008, IIS 7.0, ASP.NET
I developed a Silverlight client.
This client gets updates from the ASP.NET host through a WCF web service.
We get 100% CPU usage and connection drops when we have a very low number of users (~50).
The server should clearly be able to handle a lot more than that.
I ran some tests on our DEV server and indeed 100 requests / s maxes out the CPU.
What's odd is that even if the service is replaced by a dummy sending back hardcoded data, the service still maxes out the CPU.
The thread count looked very low (about 20), so I thought there was contention somewhere.
I changed every configuration option I could find to increase the number of worker threads (processModel, httpRuntime, and the MaxRequestsPerCPU registry entry).
Nothing changed.
Then I stopped the IIS server and ran the web service as a console app (removing all the ASP.NET authentication references).
The service maxed out the CPU as well.
Then comes the magic part: I killed the console app and restarted IIS, and now the service runs at 5-60% CPU with 100 requests / s, and I can see 50+ worker threads.
I did the same thing on our preprod machine and had the same magic effect.
Rebooting the machines preserves the good behaviour.
So my question is: what happened to fix my IIS server? I really can't understand what fixed it.
Cheers.
Find out the root cause of the high CPU usage, and then you can find a fix:
http://support.microsoft.com/kb/919791
Say a website on my localhost takes about 3 seconds to serve each request. This is fine and as expected (as it is doing some fancy networking behind the scenes).
However, if I open the same URL in several tabs (in Firefox), then reload them all at the same time, each page appears to load sequentially rather than all at once. What is this all about?
I have tried it on Windows Server 2008 IIS and Windows 7 IIS.
It really depends on the web browser you are using and how tab support in it has been programmed.
It is probably using a single thread to load each tab in turn, which would explain your observation.
Edit:
As others have mentioned, it is also a very real possibility that the webserver running on your localhost is single-threaded.
If I remember correctly, the HTTP/1.1 spec recommended limiting the number of concurrent connections to the same host to 2 (modern browsers allow more). This is one reason high-load websites spread their assets across multiple hosts and CDNs (content delivery networks).
network.http.max-connections 60
network.http.max-connections-per-server 30
The above two values determine how many connections Firefox makes to a server. If the limit is reached, further requests are queued until a connection frees up.
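The same kind of per-host cap exists in .NET Framework's own HTTP stack, which makes the effect easy to see from code (a minimal sketch; in a console app the default limit is 2):

    using System;
    using System.Net;

    class ConnectionLimitDemo
    {
        static void Main()
        {
            // Default per-host connection limit for a .NET Framework client app.
            Console.WriteLine(ServicePointManager.DefaultConnectionLimit); // typically 2

            // Raising the limit lets more requests to one host run truly in parallel;
            // below it, extra requests queue up and appear to load sequentially.
            ServicePointManager.DefaultConnectionLimit = 10;
        }
    }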
Each browser implements this in its own way, scheduling requests to maximize performance. It also depends on the server (your localhost, which is slower).
Your local web server configuration might allow only one thread, so each new request waits for the previous one to finish.