My app pool is using around 180 MB to 220 MB at any given time.
It sometimes drops to 80 MB but climbs back to 180 MB within a few minutes.
Is this behaviour normal? If the memory usage is high, how can I reduce it?
We have about 500 employees, of whom at least 200 are working on that particular website at any given time.
I am using IIS 7.0, Windows Server 2008, and ASP.NET 3.5.
Any help is greatly appreciated.
Abhi
It depends entirely on your site. 180-220 MB is nothing. On 32-bit Windows you only need to start worrying around 600 MB; on 64-bit Windows it can be much higher.
Right-click your app pool in IIS and choose Advanced Settings..., then scroll down and look for Private Memory Limit (KB) and Virtual Memory Limit (KB) near the very bottom. However, as BNL suggests, your usage is nothing to really be concerned about.
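For reference, those two limits map to the app pool's recycling settings in applicationHost.config. A minimal sketch, assuming a pool named MyAppPool and a roughly 600 MB private memory cap (the pool name and values are placeholders; values are in KB, 0 means no limit):

    <applicationPools>
        <add name="MyAppPool">
            <recycling>
                <!-- recycle the worker process if private bytes exceed ~600 MB -->
                <periodicRestart privateMemory="614400" memory="0" />
            </recycling>
        </add>
    </applicationPools>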
Yes, the behavior you describe sounds normal. Garbage collection, among other things, can cause periodic fluctuations in memory use.
Unless your server is starting to page excessively, I wouldn't restrict the memory available to the AppPools. Internal ASP.NET features such as caching work better when they have plenty of memory available.
If you're concerned that the memory use is higher than it should be, then consider running your apps through a memory profiler, to find out how the memory is being used.
We have the following setup:
Virtual server, Intel Xeon X5650 @ 2.67 GHz (4 processors)
8 GB RAM
Windows Server 2008 Standard 64-bit
SQL Server Express
IIS 7.5
Our database is only 200 MB. We are running an ASP.NET app. We recently ran into some performance issues: ~200 concurrent connections were causing 100% CPU usage (mostly consumed by IIS) and pushing the response time to around 20 seconds! After some tweaks to our code we were able to run a load test from loader.io with 1500 concurrent users over 1 minute; our response time at the end was around 5 seconds, CPU was around 95% (again consumed mainly by IIS), and memory was sitting at around 4 GB. However, we are expecting bigger spikes than 1500, anywhere up to around 4000 users in a short amount of time.
My questions are the following:
1) Is this normal performance for our current setup? Our site is quite intensive on the database and we are using Entity Framework.
2) Would upgrading to SQL Server Web Edition have any benefit, seeing as our database is so small?
3) Do you think that this type of setup could handle 4000 users?
4) Any suggestions on what we could do to handle this load?
I know this is somewhat subjective, but any answers are much appreciated.
Is this normal performance for our current setup?
Depends on your code. Did you profile the code to make sure you don't have anything stupid in there?
Our site is quite intensive on the database and we are using Entity Framework.
Again, did you profile to figure out whether you actually spend a lot of time in Entity Framework? It is slow, but the question is what "intensive" means. This is what profilers are for.
Would upgrading to Sql Web edition have any benefit seeing as though our Database is so small?
Help, my pizza arrives too late. Would upgrading to a larger car help? You say yourself that you spend the time in IIS, not SQL Server.
Do you think that this type of setup could handle 4000 users?
You think my car is big enough? Note that I don't tell you what I need it for. Without looking at usage patterns and your code - no idea. THAT SAID: the server is pathetic compared to what you can buy today. As such, this is an irrelevant question - just upgrade if you have to.
Any suggestions on what we could do to handle this load?
Load test + profiler, optimize code. Get a bigger server. Realize that we don't have crystal balls to figure out how good / bad / stupid your code is.
The number one question arising here is: did you deploy RELEASE or DEBUG compiled binaries of your project?
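If in doubt, double-check the compilation element in web.config on the production server. A minimal sketch of what it should look like with debugging off:

    <system.web>
        <!-- debug="true" disables optimizations and increases memory use; keep it false in production -->
        <compilation debug="false" />
    </system.web>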
Upgrading to Web Edition will not solve any problem here, since the difference between the editions is very simple: Web Edition is just throttled in the internal scheduler and similar limits, so you will be just fine with the Standard edition.
My experience is that the most crucial aspect of handling concurrent requests is the amount of server memory and how much of it your code consumes.
As physical memory is exhausted, the server starts to swap to virtual memory, which slows processing dramatically and leads to the symptoms you describe.
I would start by putting another 8 GB of RAM into the server. In the meantime, try to optimize your code so that less data is processed during requests or less memory is used. Also, move SQL Server to a separate machine so that there is no competition between IIS and SQL Server when it comes to memory availability.
With your current machine, I doubt the problem is IIS itself; it is more likely related to the way your app is designed and/or uses its frameworks. I personally learned just recently that IIS requests including multiple round trips to the database can be measured in hundreds of microseconds, not hundreds of milliseconds... A single locking bug or unbalanced queuing can limit your application's scalability regardless of your hardware specs [https://twitter.com/michaelzino/status/454512110165184512].
Entity Framework is known for validating your models against the database schema on the first few calls. I would suggest profiling your app layer by layer, starting from the data access layer or the underlying database calls and working up.
I have a website application running in its own application pool on IIS 7.0. The application is an ASP.NET MVC 3 website.
I have noticed that the memory usage of this application's corresponding w3wp IIS worker process is quite high (800 MB, with some fluctuation).
I am trying to diagnose the problem and have tried the following:
I have disabled output page caching for the website at IIS level and then recycled the application pool. This causes the w3wp process to restart. The memory usage for this process then slowly creeps up to around 800 MB; it takes around 30 seconds to do so. There are no page requests being handled at this time. When I restart the website from IIS, the memory size of the process does not change.
I have tried running a debug copy of the application from VS 2010; there are no problems with memory usage there.
Some ideas I have/questions are:
Is this problem related to the website's code? Given that the memory rockets up before any page requests have been sent/handled, I would assume this is NOT a code problem?
The application built in MVC has no handling of caching written into it.
The website uses real-time displaying of data, it uses ajax requests periodically, and is generally left 'open' for long periods of time.
Why does the memory usage rocket up after the application is recycled when no user requests are being sent? Is this because it is loading old cache information into its memory from disk?
The application does NOT crash; I'm just concerned about the memory usage. It is not that big a website...
Any ideas/help with getting to the bottom of this problem would be appreciated.
The best thing to do, if you can afford to use a debugger, is to install the Windows Debugging Tools and use something like WinDbg and SOS.dll to figure out exactly what is in memory.
Once you've installed the tools, you can:
Launch Windbg.exe running elevated (as Administrator)
Use "File->Attach To Process" and choose w3wp.exe for the app you are trying to figure out. If you have many you can use Task Manager and add the command-line column to see the PID or use IIS Manager->Worker Processes to figure it out, and then choose that process in WinDBG.
run:
.loadby sos clr
!dumpheap -stat
At that point you should be able to see all types sorted by memory consumption, so you can start with the ones at the bottom. (I would recommend excluding String and Object, since those are usually a side effect and not the cause.)
Use "!dumpheap -type type-here" to find the instances and use !gcroot on those to figure out why they are in memory; a static field, a leaked event handler, or undisposed WCF channels are common sources.
I just looked at my server and my pools use 900-1000 MB virtual size and 380 MB working set. My sites have run smoothly without problems for some years now, and I have checked them from all sides. My pools never recycle and the server runs continuously until the next update, with a stable 40% of physical memory free.
If your memory is not continuously growing, then this memory is your code plus the data you hold as statics, constants, strings, and any cache inside your application.
You can use Process Explorer to see the working set and virtual size of the process.
You can also consider running a profiler against your code to see whether you have a memory leak or other issue. Find one via Google: https://www.google.com/search?hl=en&q=asp.net+memory+profiler.
It probably doesn't apply here, but I thought I would throw it in for good measure. Recently I had a problem where my memory would climb and max out when it really could have cleaned up 80% of it. Problem: the process thought it had about 2 GB more than it actually did, so the GC was quite lazy. (It was due to a VMware bug: Windows was reporting 8 GB, but physically there was only 6.4.) See this blog post: http://www.worthalook.net/2014/01/give-back-memory/
Something that might help: if you "rewrite" (open/save) the web.config, your application will restart; monitor the memory usage from that point. If it keeps growing during use, this could mean a memory leak or excessive caching. You might be able to identify which actions on your site lead to the memory increase. Over a long period, the memory usage of an application should be stable.
I'd like to describe a strange issue I've noticed while analyzing my ASP.NET application in production and ask for some advice or opinions on the following matter.
The application usually runs with a memory footprint of some 80-90 MB. This seems stable, since no memory leaks have been detected so far: no slight increase in memory usage over time. Yet a problem occurs when the application pool recycles (I'm on shared hosting, and judging by the logs it happens either when the app is idle for 20 minutes or roughly every 30 hours). The issue is that used memory almost doubles for some period after a recycle: it goes to some 160-170 MB without any explanation. This is confusing, since the common claim is that recycling should purge memory and all other resources - at least that is how I understand it. The system holds this amount of memory for some 7-8 hours and then usage drops to its usual level of 90-100 MB, again for no apparent reason (at least none known to me). All the while, the application seems to work well: no significant delays or trouble with site availability; to the users everything seems OK, and there have been no complaints so far. Looking at memory consumption over time, the graph looks almost like a step function.
The important thing is that I haven't been able to reproduce this sort of behavior in my testing environment. Occasionally I've been getting notes from the provider's administrators that my app is using more resources than allowed, and this really bugs me.
So, what I would like to know: is there any possible scenario where application pool recycling does not release all memory resources? Is there any advice or guideline on what I should focus on? I'm not an expert in this area, but I've been reading about things like overlapping recycling, serialization problems on recycle, and a couple more issues... Any ideas? Similar experiences?
Thanks
This post provides a pretty good overview of what happens when your site's app pool is recycled: http://blogs.msdn.com/b/tess/archive/2006/08/02/asp-net-case-study-lost-session-variables-and-appdomain-recycles.aspx
My speculation is that your memory usage is increasing due to the JIT-compilation that follows every recycle of the app pool. My guess is that your shared host has different configuration and environmental settings than your development server.
IMHO, if you're using ~100 megs of memory on a shared host, you are asking for trouble if it's a host like DiscountASP.NET or GoDaddy. If you care at all about that website, go get a VPS or some more configurable hosting where you can pay a premium for a higher memory limit.
I'm getting:
aspnet_wp.exe (PID: 988) was recycled because memory consumption exceeded the 148 MB (60 percent of available RAM).
Any suggestions for web.config or other optimizations? I already did all the debug/release compile stuff, but it still takes too much memory too quickly. The machine has 256 MB of RAM; with 512 MB it runs smoothly. I want to squeeze it down as much as possible. In the code I have also done as much as I can to keep memory low, and it only holds around 50 MB of data. This must be possible - or does the framework really need that much?
You can try de-configuring all of the HttpModules that you don't use. Windows Authentication, for example, is a common one.
You might also take a look at any static variables that you're allocating.
Disabling ViewState on as many pages as possible might help a little.
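As a rough sketch, the HttpModules and ViewState suggestions above translate to web.config entries along these lines (WindowsAuthentication is just an example of a module registered by default; remove only what your app does not use):

    <system.web>
        <httpModules>
            <!-- drop modules the application does not use -->
            <remove name="WindowsAuthentication" />
        </httpModules>
        <!-- turn ViewState off globally; re-enable it per page or control where needed -->
        <pages enableViewState="false" />
    </system.web>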
Minimize or eliminate any extra DLLs that get loaded (from your bin directory or the GAC).
However, it's unlikely that you'll save much memory by hunting-and-pecking like that. If you're serious about chasing it down, you'll need a memory profiler tool, such as .NET Memory Profiler from SciTech. They have a free two week trial.
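Separately, the "60 percent of available RAM" figure in that error message comes from the memoryLimit attribute of the processModel element in machine.config (this applies to the old aspnet_wp process model, not IIS 6 application pools). A hedged sketch, in case you need to tune when the recycle kicks in rather than shrink the app:

    <system.web>
        <!-- memoryLimit is a percentage of total physical memory; 60 is the default -->
        <processModel memoryLimit="60" />
    </system.web>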
Well, it should fit, but it depends on what you're doing with it, and which version of IIS you're running.
Optimizing Memory Usage (IIS 6.0)
Servers running IIS 6.0 benefit from ample physical memory. Generally, the more memory that you add, the more the servers use and the better they perform. IIS 6.0 requires a minimum of 128 MB of memory; at least 256 MB is recommended. If you are running memory-intensive applications, your server might require much more memory to run optimally — more than the recommended 256 MB of memory.
IIS 7.0 however has somewhat larger requirements:
Minimum: 512 MB
Recommended: 2 GB or more
That article has a few other recommendations on optimisations you can carry out. I'd also recommend Tess Ferrandez's blog, especially her post ".NET Memory usage - A restaurant analogy", which explains memory allocation nicely, and her other posts on debugging memory usage, which start in a similar place to RickNZ's suggestions.
What are you putting in memory?
256 MB is way below what a server would normally have; even if it were a shared server, you'd likely have more space to work with than that.
Are you caching?
What OS are you running?
Like RickNZ said, trimming out as much fat as you can might save you some memory.
Honestly I think you simply have an unrealistic memory target.
We're running a custom application on our intranet and we have found a problem after upgrading it recently where IIS hangs with 100% CPU usage, requiring a reset.
Rather than subject users to the hangs, we've rolled back to the previous release while we determine a solution. The first step is to reproduce the problem -- but we can't.
Here's some background:
Prod has a single virtualized (VMware) web server with two CPUs and 2 GB of RAM. The database server has 4 GB and 2 CPUs as well. It's also on VMware, but on separate physical hardware.
During normal usage the application runs fine. The w3wp.exe process normally uses between 5-20% CPU and around 200 MB of RAM. CPU and RAM fluctuate slightly under normal use, but nothing unusual.
However, when we start running into problems, the RAM climbs dramatically and the CPU pegs at 98% (or as much as it can get). The site becomes unresponsive, necessitating an IIS restart. Resetting the app pool does nothing in this situation; a full IIS restart is required.
It does not happen during the night (no usage). It happens more when the site is under load, but it has also happened under non-peak periods.
The first step to solving this problem is reproducing it. To simulate the load, we started using JMeter, with a load script based on actual usage around the time of the crash. Using JMeter, we can ramp the usage up quite high (2-3 times the load during the crash) but the site behaves fine. CPU is up high, and the site does become sluggish, but memory usage is reasonable and nothing is hanging.
Does anyone have any tips on how to reproduce a problem like this in a non-production environment? We'd really like to reproduce the error, determine a solution, then test again to make sure we've resolved it. During the process we've found a number of small things that we've improved that might solve the problem, but I'd really feel a lot more confident if we could reproduce the problem and test the improved version.
Any tools, techniques or theories much appreciated!
You can find some information about troubleshooting this kind of problem at this blog entry. Her blog is generally a good debugging resource.
I have an article about debugging ASP.NET in production which may provide some pointers.
Is your test environment really the same as live?
i.e.:
Two separate VM instances on two physical servers, with the same network connections and account types?
Are there any other instances on the database server?
Are there any other web applications in IIS?
Is the .NET config right?
Is the app pool config right for the service accounts?
Try looking at this - MS article on IIS 6 Optimising for Performance
Lots of tricks.