I have an MVC4 web application that, when volume is put through it, consumes ~400MB RAM in all environments excluding the production environment. When a similar volume of load is put through it on a production server (hosted externally), the memory utilisation trebles to ~1.2GB and the memory isn't released even when the application is idle. The IIS configuration across all environments is the same.
It's also worth noting that the application, when idle, releases some of that memory in my test environments, but doesn't do the same in production. The RAM gradually increases and tops out at 1.2-1.3GB, but never drops below that, even if traffic is completely routed away from the server.
I have not been able to recreate this issue in any environment other than my third-party hosting platform, but before I conclusively blame the infrastructure and get the hosting company on the case, I wondered:
a) Is this a common problem, and why does it happen?
b) How can I see what is using the memory?
c) Would you expect the same code to consume significantly different levels of system resources depending on the platform? (I know my host may have monitoring etc. in production which will perhaps inflate things a little.)
Any help on this is appreciated.
This is a common problem when working across different environments, because system configuration, Windows version and so on differ from machine to machine.
In this particular case the difference is large, so it is likely that something in the code is holding on to memory (for example in loops or caches) and it is not being freed at regular intervals.
A few steps:
Try to get to the root of the problem, i.e. which methods are taking the time and doing the allocating. Use a logger such as NLog (a minimal sketch follows below).
If you are using SQL Server, try its profiler to rule out the database.
Thirdly, use a profiler such as ANTS Performance Profiler.
It also depends on the number of users hitting the site and on possible deadlock conditions.
There can be numerous reasons for this.
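To make the logging suggestion concrete, here is a minimal sketch of the kind of instrumentation meant above, assuming the NLog package is installed and configured via NLog.config; the class and method names are just placeholders for whatever code path you suspect:

```csharp
using System;
using System.Diagnostics;
using NLog;

public class ReportService
{
    // NLog logger for this class; targets and levels come from NLog.config.
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    public void GenerateReport()
    {
        var stopwatch = Stopwatch.StartNew();
        long heapBefore = GC.GetTotalMemory(forceFullCollection: false);

        try
        {
            // ... the code path you suspect of taking time or holding on to memory ...
        }
        finally
        {
            stopwatch.Stop();
            long heapAfter = GC.GetTotalMemory(forceFullCollection: false);
            long workingSet = Process.GetCurrentProcess().WorkingSet64;

            // Log elapsed time, managed-heap delta and total working set so the
            // numbers can be compared between environments.
            Log.Info("GenerateReport: {0} ms, heap delta {1} KB, working set {2} MB",
                stopwatch.ElapsedMilliseconds,
                (heapAfter - heapBefore) / 1024,
                workingSet / (1024 * 1024));
        }
    }
}
```

Comparing these numbers between your test environment and production should show whether a particular code path allocates far more, or runs far longer, on the production box.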
I have a Windows Server 2012 with IIS 8.0. It is hosting many small websites with a low user base which are not mission critical in any way. By small website I mean that the application code and memory footprint are quite low, but due to the loaded libraries, like EntityFramework, the memory consumption of the applications is about 140MB when freshly started and idle.
In general that’s not a big deal for a full-blown webserver, but I only have a VPS with 4GB of RAM which also runs several other applications (databases, BIND, hMail, etc.). I’m using it basically as development server to play with many different technologies. Therefore, I’m running out of RAM quickly while serving dozens of ~140MB w3wp’s.
Besides suspending the applications when idle, I'd like to reduce the memory consumption while still being able to use any framework or library I'd like – that's the purpose of the whole thing, actually.
Long story short: as the applications not only share the same .NET version but also some libraries like EF or MVC, would it make more sense to run multiple sites in one app_pool so that they can share the libs? Or would each site load its own copy anyway (due to the different application domains, as discussed here)?
Bonus question: when considering a hardware upgrade, 1GB of RAM is $20/month but putting the whole server on SSDs is $10/month. While I do know that reading from the page file is always much slower than reading from RAM, I'm thinking about using a big pagefile on the SSD instead of buying 1GB of additional RAM for twice the price – again, the speed of the websites isn't critical, they should just work. Would that make any sense at all?
Looking at a w3wp Process (hosting multiple sites) in Process Explorer shows that it hosts several different application domains with different instances of the same assemblies loaded into memory. So moving the sites into a single AppPool may not help much.
But there is another option. In IIS 8+ you can share common assemblies across AppPools. If certain assemblies are used by multiple AppPools, they are loaded into memory just once and then aliased by the different processes.
Have a look at this bit from asp.net and this TechNet blog post.
You have to do a little bit of setup work, but then it seems to work quite well.
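If you want to check what actually ends up in memory, a small diagnostic sketch like the one below can help (LoadedAssemblyReport is just a made-up helper you could call from a test page, not part of the setup above). It lists the assemblies loaded into the current AppDomain together with the paths they were loaded from, so you can compare two sites and see whether a given DLL resolves to one shared location or to separate copies:

```csharp
using System;
using System.Linq;

public static class LoadedAssemblyReport
{
    // Returns one line per loaded assembly: name, version and the path it was loaded from.
    // Dynamic assemblies have no file on disk, so they are reported as such.
    public static string Build()
    {
        var lines = AppDomain.CurrentDomain.GetAssemblies()
            .OrderBy(a => a.GetName().Name)
            .Select(a => string.Format("{0} {1} -> {2}",
                a.GetName().Name,
                a.GetName().Version,
                a.IsDynamic ? "(dynamic, no file)" : a.Location));

        return string.Join(Environment.NewLine, lines);
    }
}
```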
I am trying to migrate an application onto a WebLogic server which already has an application deployed on it. Please advise whether having two EARs on the same WebLogic server is a feasible design.
It is perfectly feasible and standard; however, there are one or two reasons why you might not want to do this.
One is file descriptor exhaustion. If one of the applications (EARs) runs out of file descriptors, it will probably crash or render the entire process inoperable, i.e. the entire WebLogic server.
Another is heap memory exhaustion; much the same problem occurs if one of the applications exhausts the maximum available heap memory.
Application servers try to isolate applications from each other, but cannot completely succeed at this due to the limitations of the JVM. Operating systems and virtual machine hypervisors are actually able to do a better job of isolating applications from each other.
I'd like to describe a strange issue I've noticed while analyzing my ASP.NET application in production and ask for some advice or opinions on the following matter.
The application usually runs with a memory footprint of some 80-90 MB. This seems stable, since no memory leaks have been detected so far - no gradual increase in memory usage over time. Yet a problem occurs when the application pool recycles (I'm using shared hosting and, judging by the logs, it occurs either when the app is idle for 20 minutes or every ~30 hours - something like that). The issue is that the used memory almost doubles for some period after a recycle - it goes to some 160-170 MB without any explanation. This is confusing, since the common claim is that recycling should purge memory and all other resources - at least that's how I understand it.

The system holds this amount of memory for some 7-8 hours and then memory usage drops to its usual level of 90-100 MB, again with no apparent reason (at least none known to me). All the time, the application seems to work well - no significant delays or trouble with site availability - to the users everything seems OK, no complaints so far. When looking at memory consumption over time, the graph looks almost like a step function.
The important thing is that I haven't been able to reproduce this sort of behavior in my testing environment. Occasionally, I've been getting notes from provider administrators that my app is using more resources than allowed and this really bugs me.
So, what I would like to know: is there any possible scenario where application pool recycling does not release all memory resources? Is there any advice or guideline on what I should focus on? I'm not an expert in this area, but I've been reading about things like overlapping recycling, serialization problems on recycle and a couple more issues... Any ideas? Similar experience?
Thanks
This post provides a pretty good overview of what happens when your site's app pool is recycled: http://blogs.msdn.com/b/tess/archive/2006/08/02/asp-net-case-study-lost-session-variables-and-appdomain-recycles.aspx
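In the same spirit, one way to see from your own logs when and why the AppDomain was torn down is to record HostingEnvironment.ShutdownReason in Application_End. This is only a sketch, assuming you have a Global.asax to edit and some trace listener or logger to receive the output:

```csharp
using System;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_End()
    {
        // Why ASP.NET is shutting the AppDomain down: IdleTimeout,
        // ConfigurationChange, HostingEnvironment, BinDirChangeOrDirectoryRename, ...
        ApplicationShutdownReason reason = HostingEnvironment.ShutdownReason;

        // Swap this for whatever logging you already use.
        System.Diagnostics.Trace.WriteLine(
            string.Format("AppDomain shutdown at {0:u}, reason: {1}", DateTime.UtcNow, reason));
    }
}
```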
My speculation is that your memory usage is increasing due to the JIT-compilation that follows every recycle of the app pool. My guess is that your shared host has different configuration and environmental settings than your development server.
IMHO, if you're using ~100 megs of memory on a shared host, you are asking for trouble if it's a host like DiscountASP.NET or GoDaddy. If you care at all about that website, go get a VPS or some more configurable hosting where you can pay a premium for a higher memory limit.
It's one of those things I see a lot but never really think about. For the purpose of web application development (specifically ASP.NET WebForms/MVC), do you think it's advantageous to do such a thing, and if so, what kind of advantages come out of it?
By virtualization I mean using products like Hyper-V to separate server roles, such as your SQL and web servers, etc.
First question is, virtualization of what? Do you mean server virtualization? Do you mean running VMWare on each dev's laptop with multiple OSes? Do you mean moving everything to the cloud?
Virtualization of servers, in web app context, is not really different from that in general IT - most of the servers on the Internet, including StackOverload's, are bought to handle peak loads and spend most of the time idling away the cycles, so virtualizing them makes sense when you have more than a certain amount.
VMWare on the desktop (or other parallels on other operating systems) is superb because a) your devs can run a full instance of your server environment, including multiple virtual servers connected in a virtual network - this is about as close to the real thing as you can get, minus hardware costs and minus devs messing with each other's servers. For clients, you can use Linux and multiple Windows installs to test various browsers, font sizes, etc. quickly - also a big win.
Moving everything to the cloud makes sense in many cases, but is probably a topic for a separate full-sized question :)
One big advantage I see is that every developer can have his/her own sandbox to work in. If someone messes up his/her sandbox, he/she can take a clean image and all is OK again. So I guess that means there is room to experiment without losing valuable time getting back to the normal setup; you can simply do a rollback.
I'm a bit in doubt about whether you should use virtualisation for production environments. It depends on the application, of course.
The only time I would use a virtual machine for ASP.NET development would be if the app required a specific setup, such as relying on installed software, weird settings or particular shares. Every developer has their own webserver and can run their own database, so if it's a "basic" webapp I don't see much value in virtuals; it's pretty hard to break anything with a basic web app deployment :)
With a virtual server, you can test your code in a production-like environment. It is also possible to quickly revert back to the original setup. For many applications, it is useful in that time period just after you write the code, but before it goes to production.
I'm a fan of virtualization and use it in testing and production (VMWare and Hyper-V), but over the last year I find it less important on a dev machine. TFS provides me with all the backup/rollback ability that I need, multiple versions of .NET can now exist on the same machine, and VS2008 can target all those versions.
In a development environment, a virtual environment is useful for putting several different servers on one box: you can have an instance for your web app, one for your services, one for the database, etc. That way it mimics your production environment if you are using separate servers.
One of the benefits of using virtualization in production is that your application is not tied to a specific machine. If you wanted to move your web server instance to another box, it is trivial to do so. You don't need to install or configure things on the new server and hope that everything is set up properly.
One problem I have had though in testing virtual instances is that it can run slower for some applications, specifically engineering apps that like running the CPU at 100%. So test before you leap.
We're running a custom application on our intranet and we have found a problem after upgrading it recently where IIS hangs with 100% CPU usage, requiring a reset.
Rather than subject users to the hangs, we've rolled back to the previous release while we determine a solution. The first step is to reproduce the problem -- but we can't.
Here's some background:
Prod has a single virtualized (VMware) web server with two CPUs and 2 GB of RAM. The database server has 4GB and 2 CPUs as well. It's also on VMware, but on separate physical hardware.
During normal usage the application runs fine. The w3wp.exe process normally uses between 5-20% CPU and around 200MB of RAM. CPU and RAM fluctuate slightly under normal use, but nothing unusual.
However, when we start running into problems, the RAM climbs dramatically and the CPU pegs at 98% (or as much as it can get). The site becomes unresponsive, necessitating an IIS restart. Resetting the app pool does nothing in this situation; a full IIS restart is required.
It does not happen during the night (no usage). It happens more when the site is under load, but it has also happened under non-peak periods.
The first step to solving this problem is reproducing it. To simulate the load, we started using JMeter. Our load script is based on actual usage around the time of the crash. Using JMeter, we can ramp the usage up quite high (2-3 times the load during the crash) but the site behaves fine. CPU is up high, and the site does become sluggish, but memory usage is reasonable and nothing is hanging.
Does anyone have any tips on how to reproduce a problem like this in a non-production environment? We'd really like to reproduce the error, determine a solution, then test again to make sure we've resolved it. During the process we've found a number of small things that we've improved that might solve the problem, but I'd really feel a lot more confident if we could reproduce the problem and test the improved version.
Any tools, techniques or theories much appreciated!
You can find some information about troubleshooting this kind of problem at this blog entry. Her blog is generally a good debugging resource.
I have an article about debugging ASP.NET in production which may provide some pointers.
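If it helps to put comparable numbers on the production incidents and your JMeter runs, a rough monitoring sketch like this can be left running on the web server. The counter category and names are the standard English ones, and the instance name "w3wp" is an assumption: with several application pools the instances show up as w3wp#1, w3wp#2, and so on, and would need to be resolved first.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class W3wpMonitor
{
    static void Main()
    {
        // "w3wp" assumes a single worker process; with multiple app pools the
        // instance names become w3wp#1, w3wp#2, ... and must be looked up.
        var cpu = new PerformanceCounter("Process", "% Processor Time", "w3wp");
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", "w3wp");

        cpu.NextValue(); // the first sample of a rate counter is always 0, so prime it

        while (true)
        {
            Thread.Sleep(1000);
            Console.WriteLine("{0:u}  CPU: {1,6:F1}%  Private bytes: {2,8:F0} MB",
                DateTime.UtcNow,
                cpu.NextValue() / Environment.ProcessorCount,
                privateBytes.NextValue() / (1024 * 1024));
        }
    }
}
```

Logging private bytes alongside CPU makes it easier to tell whether the hang coincides with memory growth or is purely CPU-bound, and gives you a baseline to compare against when you replay the load in the test environment.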
Is your test environment really the same as live?
i.e. two separate VM instances on two physical servers, with the same network connections and account types?
Are there any other instances on the database server?
Are there any other web applications in IIS?
Is the .NET config right?
Is the app pool config right for the service accounts?
Try looking at this: MS Article on IIS 6 Optimising for Performance.
Lots of tricks.