Due to some memory leaks, memory is not released except when an IISReset is performed.
I found some code where I have a class of properties and methods in which only about 10% of the methods are specific to the class; the other 90% could be moved to another class.
How bad is that? Is this affecting my memory, given that I am instantiating this class for every user of the application?
It is an ASP.NET application running on IIS 6.
I suppose if the methods require no instance state but require an instance to call (i.e. they are instance methods that never reference 'this'), they could be made into static methods and save you one object allocation. However, if you have to allocate the object anyway (as it sounds like you do, for the other "10% of functionality" you mention) that doesn't sound like it will fix the issue.
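For example (a minimal sketch with made-up names): an instance method that never touches instance state can simply become static, so callers no longer need to allocate the object just to call it.

    public class OrderFormatter
    {
        // Before: an instance method, forcing "new OrderFormatter()" per call.
        // public string FormatTotal(decimal total) { return total.ToString("C"); }

        // After: a static method, no allocation needed just to call it.
        public static string FormatTotal(decimal total)
        {
            return total.ToString("C");
        }
    }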
You should probably analyze your application's memory usage in the debugger. Rico Mariani has a pretty good blog post about tracking down managed memory leaks here: http://blogs.msdn.com/b/ricom/archive/2004/12/10/279612.aspx . It is old but still relevant. (Note: if you happen to be using .NET 4.0, you'll need to do ".loadby sos clr" instead of ".loadby sos mscorwks" to load the SOS debugger extension in WinDBG.)
Related
Are singletons in ASP.NET shared between users/sessions? And if they are, are there any safety considerations? Think serializing/deserializing vulnerabilities, thread safety etc.
Is using one the way to go for settings loaded from the database that are the same for all users?
Hand crafting the anti-pattern called "Singleton" in C# code is a really bad idea in general, ASP.NET or not.
The singleton lifetime that is supported in the dependency injection framework is a good idea if it does what you need.
I would advise you to use it only for read-only data, like settings, though. This is not an old-style desktop application: your application might be recycled on the fly, or stretched across multiple nodes in a server farm. So suddenly your "singleton" is only a singleton as long as a single instance of your program is running. Building your application so that this becomes a problem (i.e. the framework would support scaling out, but your own code is built to fail if you actually do so) is not a smart way to go about this.
So to summarize: singleton lifetime in your dependency injection container? Might be okay, depending on your use case. An actual hand-rolled "Singleton" pattern in your code? Bad. Very bad. It tells me you don't actually do any unit testing and that nothing is planned to take this application beyond a few thousand hobbyist users who don't care if your app is down every time you deploy.
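As a rough sketch of the container-managed approach (assuming the Microsoft.Extensions.DependencyInjection container; ISettingsProvider, DatabaseSettingsProvider and the loading code are made-up names for illustration):

    using System.Collections.Generic;
    using Microsoft.Extensions.DependencyInjection;

    public interface ISettingsProvider
    {
        string Get(string key);
    }

    // Read-only after construction, so sharing one instance across requests is safe.
    public class DatabaseSettingsProvider : ISettingsProvider
    {
        private readonly IReadOnlyDictionary<string, string> _settings;

        public DatabaseSettingsProvider(IReadOnlyDictionary<string, string> settings)
        {
            _settings = settings;
        }

        public string Get(string key)
        {
            return _settings[key];
        }
    }

    public static class SettingsRegistration
    {
        // The container owns the lifetime; there is no static Instance property anywhere.
        public static void Register(IServiceCollection services, IReadOnlyDictionary<string, string> settingsLoadedFromDb)
        {
            services.AddSingleton<ISettingsProvider>(new DatabaseSettingsProvider(settingsLoadedFromDb));
        }
    }

Consumers just take an ISettingsProvider constructor parameter, which also keeps them unit-testable.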
TL;DR:
Refactored for performance; the website got slower. Ran the Concurrency Visualizer; the graph looks like the lock convoy pattern described on MSDN.
Context
I’m helping refactor an ASP.NET website to switch user controls from performing business logic on datasets to performing presentation logic on business objects, and also to reduce the number of database calls made from the user controls.
The issue
We have noticed a significant performance drop (hangs/blocking) after introducing changes that we thought would be performance improvements in multiple areas.
We’re using Lean Sentry to monitor our websites’ performance. According to the hang diagnostics, the thread pool was running out of threads, and (according to the descriptions on the diagnostics page) when the GC runs it prevents more threads from being created. According to the memory diagnostics, the GC heap and Gen 0 were consuming a lot of memory (~9 GB).
What I have done so far
I used the memory profiler in Visual Studio and identified issues with our excessive DataAdapter and DataTable usage. Memory consumption dropped to 3 GB, but that only helped with the GC blocking. The site is still slower than it was before we introduced the changes, and under high load I still see blocking caused by functions like CompilationLock.GetLock() and BuildManager.GetBuildResultFromCacheInternal(). Googling them didn’t return anything useful.
This website relies on dynamic (JIT) compilation. I assumed the CompilationLock issue might be caused by that, so I wanted to run the website precompiled, but one of our global Utilities classes caused an ambiguity with some other Utilities class/namespace that I don’t know of. I found out that there is a Microsoft.Build.Utilities namespace, but it isn’t referenced in our website, and I can’t reproduce the ambiguity in my own environment when I reference Microsoft.Build myself, so I couldn’t get the website running in precompiled mode on the staging server to test this theory.
I made additional changes to memory allocation and to the number of database calls, using Visual Studio’s memory allocation and instrumentation profilers as a measure, but I didn’t notice any improvement in performance.
I used a concurrency profiler to gather more information on thread utilization. I haven’t used this tool before, so I’m not sure about my interpretations here. There are multiple threads contending on each handle, and on one handle I’m seeing 42% contention. I see that the DataAdapter.Fill and SqlHelper.ExecuteReader methods show up most when it’s set to “Show Just My Code”, and WaitForSingleObjectExImplementation shows up most when it’s set to “Show All Code”.
I encountered a SO question about ASP.NET websites’ performance issues and set EnableSessionState="ReadOnly" for each page, but I didn’t notice any difference with this change, either.
The Concurrency Visualizer and the Common Patterns for Poorly-Behaved Multithreaded Applications article helped me identify the issue. My graph doesn’t look like serial execution, but I see 80–90% synchronization, as shown in the Lock Convoys graph. I checked out a SO question on debugging lock convoys, too.
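For reference, this is roughly the shape of the lock convoy pattern I mean, as a simplified C# illustration (made-up names, not our actual code): slow work runs while a hot lock is held, so requests pile up behind it.

    using System;
    using System.Collections.Generic;
    using System.Data;

    public class CategoryCache
    {
        private static readonly object Sync = new object();
        private static readonly Dictionary<int, DataTable> Cache = new Dictionary<int, DataTable>();

        // Convoy-prone: the slow database call runs while the lock is held,
        // so every other request queues up behind it.
        public static DataTable GetConvoyProne(int categoryId)
        {
            lock (Sync)
            {
                DataTable table;
                if (!Cache.TryGetValue(categoryId, out table))
                {
                    table = LoadFromDatabase(categoryId);   // slow work inside the lock
                    Cache[categoryId] = table;
                }
                return table;
            }
        }

        // Less convoy-prone: do the slow work outside the lock and only lock
        // briefly to read or publish the result (two threads may occasionally
        // load the same item, which is usually cheaper than convoying).
        public static DataTable GetShortLock(int categoryId)
        {
            DataTable table;
            lock (Sync)
            {
                if (Cache.TryGetValue(categoryId, out table))
                    return table;
            }

            table = LoadFromDatabase(categoryId);           // slow work, no lock held

            lock (Sync)
            {
                Cache[categoryId] = table;
            }
            return table;
        }

        private static DataTable LoadFromDatabase(int categoryId)
        {
            // Placeholder for a real SqlDataAdapter.Fill call.
            return new DataTable("Category" + categoryId);
        }
    }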
Testing approach
I’m using Screaming Frog to crawl the website in order to reproduce the issues, and I’m recording requests per second and response times in both Screaming Frog and Lean Sentry as a performance measure. It might not be the best approach, but the difference is noticeable and reproducible, and it’s pretty much all I have at this point.
Architecture of the website
The website was originally written in VB.NET for .NET Framework 1.0 about 10 years ago and was upgraded to .NET Framework 4.6.1 by fixing some compatibility issues. There haven’t been any architectural changes so far. There is a shared SqlHelper class, a collection of shared (static) data access functions like ExecuteDataset or ExecuteDatareader that return a DataSet, a DataReader or a String value. These functions read the connection string from the web.config file and create new SqlConnection, SqlDataAdapter, SqlDataReader and SqlCommand objects to perform the database operations. The data access layer that consumes this shared class consists of classes for each module (shopping cart, category, product, etc.) that are instantiated in each user control, and their functions map to stored procedures in the database.
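For clarity, the helper functions are roughly this shape (a simplified sketch rather than the actual code; the connection string name "Main" is made up):

    using System.Configuration;
    using System.Data;
    using System.Data.SqlClient;

    public static class SqlHelper
    {
        public static DataSet ExecuteDataset(string storedProcName, params SqlParameter[] parameters)
        {
            // "Main" is a placeholder connection string name.
            string connStr = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;

            using (var connection = new SqlConnection(connStr))
            using (var command = new SqlCommand(storedProcName, connection))
            using (var adapter = new SqlDataAdapter(command))
            {
                command.CommandType = CommandType.StoredProcedure;
                if (parameters != null)
                    command.Parameters.AddRange(parameters);

                var result = new DataSet();
                adapter.Fill(result);   // Fill opens and closes the connection itself
                return result;
            }
        }
    }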
The refactoring
We have introduced some new objects that are instantiated either inside the Page_Load of the related user control or inside the OnItemDataBound event of repeaters, and they are attached to the child user controls’ public properties, which were refactored to use the object. However, there are still other child user controls that need multiple data tables, so we decided to store one of the data tables in one of the objects and pass it to the related user controls by assigning it to their public properties.
I suspect that we hurt performance by introducing these objects. Even though database calls and memory consumption seem to be reduced, I’m wondering if the objects are causing threads to block on synchronization all the time.
The graph before any refactoring:
The graph after applying all the refactoring I mentioned:
Will you help me identify the issue?
Your problem is rather complex. I think that you have two basic options to resolve your refactoring performance issues:
Revert changes to the code to a point where all or much of the refactoring hadn’t yet been done and when you had better performance than what you are currently experiencing. Then, proceed gradually with the addition of new classes for performance improvements. If a change does not improve performance, then undo it and try something else.
Replace some / much of the newly added classes with versions that support the interfaces but lack the performance overhead. Do this selectively to isolate where the performance issues exist. Perhaps, the website has tapped into an unknown performance bug that was not triggered by prior implementations of the added classes.
I would favor option 1, though it may seem counterproductive. It is a bit like punting in U.S. football. Sure, it is nice to just drive down the field. But sometimes the dominant strategy is to punt, get the ball back and try to score on another drive.
I've written a class in VB.NET that is consumed by an ASP.NET web application running on IIS 7, using .NET Framework 4.0. The class performs a REST request against an online service and retrieves an XML response containing strongly typed data.
The class also performs caching using a SQL Server database. It is compiled to a DLL and referenced by the web application. It works very well, and now I need to know how to make the class thread-safe.
I have no experience with making code thread-safe. I don't know where to begin in determining whether or not it is thread-safe. I'm assuming, because I didn't pay attention to this during development, that it is NOT thread-safe, and that since the web application will be used by many users at the same time, it must be paid attention to.
Can anyone point me to how to test for thread-safety? Are there any resources online that will give me some ideas? Are there any rules of thumb that will tip me off as to where my main concerns are?
The easiest thing to look out for is the use of "static" (C#) or "Shared" (VB.NET) variables. If these variables can be modified during the lifetime of the application, you will likely run into threading issues, which very often show up as "random-looking" problems.
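For example (made-up names, just to illustrate the shape to look for):

    using System.Collections.Generic;

    // Risky shape: a mutable static/Shared dictionary that every request reads and writes.
    public static class RateTableUnsafe
    {
        public static Dictionary<string, decimal> Rates = new Dictionary<string, decimal>();
    }

    // Safer shape: the shared state stays private and every access goes through a lock.
    public static class RateTable
    {
        private static readonly object Sync = new object();
        private static readonly Dictionary<string, decimal> Rates = new Dictionary<string, decimal>();

        public static void SetRate(string currency, decimal rate)
        {
            lock (Sync)
            {
                Rates[currency] = rate;
            }
        }

        public static bool TryGetRate(string currency, out decimal rate)
        {
            lock (Sync)
            {
                return Rates.TryGetValue(currency, out rate);
            }
        }
    }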
I would also be concerned about how you are doing the caching in your database, since multiple .NET threads hitting SQL Server (for the cache) could cause issues as well, depending on how it's designed.
Bottom line: you are likely going to need to learn more about threading if you want to be sure this won't have issues. Probably the best book I have ever read covering simple through very complex C# topics is C# 4.0 in a Nutshell. I would take a look at that book, especially the threading chapters. (Seriously, read the whole thing though.) If you get through it and have a good understanding of the concepts mentioned, you should be fine.
I have an ASP.NET website that seems to be using a lot of memory. I left it running for 7 hours on Sunday and it reached 3.2 GB. I thought .NET handled all of its own garbage collection and freed objects itself, so I am not really sure where to start looking for a solution.
The website uses XML heavily, so I thought this could be the issue, but I have implemented the global use of XmlSerializer to try and rule this out.
I also have a custom handler that deals with all the images: it resizes, caches, and then loads from the cache where it can. Could this be causing any issues?
Sorry to be so vague but the problem is that I don't know where to start with the issue really. Any help appreciated.
Server info:
.NET 2.0
Windows 2008 server
IIS7
Thanks in advance.
Best place to start is using a profiler. RedGate has the ANTS Memory Profiler which is really good and has a free trial. Product page here.
You run the application, attach the profiler then start using the page as normal. The profiler collects information about the objects in use and this should allow you to pinpoint the root cause of the problem.
Once, in my application, it turned out that we were accidentally creating an NHibernate SessionFactory for every single query we performed. These were all referenced internally by NHibernate, which meant that they were never freed, in addition to being horribly slow and inefficient to build. The profiler led us right to it; we never would have found it otherwise.
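For what it's worth, the usual fix looks roughly like this (a sketch assuming a standard hibernate.cfg.xml; the holder class name is made up): build the ISessionFactory once per application and only open cheap ISession instances per unit of work.

    using System;
    using NHibernate;
    using NHibernate.Cfg;

    public static class SessionFactoryHolder
    {
        // The factory is expensive and meant to be application-wide; build it once, lazily.
        private static readonly Lazy<ISessionFactory> Factory =
            new Lazy<ISessionFactory>(() => new Configuration().Configure().BuildSessionFactory());

        public static ISession OpenSession()
        {
            // One factory per application; a new session per unit of work.
            return Factory.Value.OpenSession();
        }
    }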
An alternative to RedGate is using adplus and WinDbg. Also read this blog:
http://blogs.msdn.com/tess/
This is an excellent source of help with debugging issues.
My sysadmin and I successfully used adplus and WinDbg to find a memory leak in an ASP.NET application. My developers' mistake was that they had accidentally used ASP.NET's cache without an expiration timeout.
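For illustration, the difference looks roughly like this with the System.Web cache (the cache key and class name are made up):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class ReportCache
    {
        public static void StoreLeaky(object report)
        {
            // No expiration: the entry stays in memory until the app recycles.
            HttpRuntime.Cache.Insert("monthly-report", report);
        }

        public static void StoreWithExpiration(object report)
        {
            // Absolute expiration: the entry is evicted after 20 minutes.
            HttpRuntime.Cache.Insert(
                "monthly-report",
                report,
                null,                                   // no cache dependency
                DateTime.UtcNow.AddMinutes(20),         // absolute expiration
                Cache.NoSlidingExpiration);
        }
    }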
Another bug was that a developer used the XmlSerializer constructor overloads that take attribute overrides. With those overloads, .NET will always create a new serializer helper assembly. Assemblies cannot be unloaded, so the application eats more and more memory.
When dealing with the file system, it's really important that you dispose of your readers and temporary objects.
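For example (a minimal sketch, names made up): wrap readers and streams in using blocks so they are disposed even when an exception is thrown, releasing file handles and buffers promptly.

    using System.IO;

    public static class FileHelper
    {
        public static string ReadAllContent(string path)
        {
            using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
            using (var reader = new StreamReader(stream))
            {
                return reader.ReadToEnd();
            }
        }
    }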
You say that you are using XmlSerializer a lot. This causes a memory leak if you are not using the default constructors XmlSerializer(type) and XmlSerializer(type, defaultNameSpace).
See this MSDN article for more info
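If you do need one of the other constructors (for example the overload that takes XmlAttributeOverrides), the usual workaround is to cache the serializer yourself; a rough sketch (class and key names made up):

    using System;
    using System.Collections.Generic;
    using System.Xml.Serialization;

    public static class SerializerCache
    {
        private static readonly object Sync = new object();
        private static readonly Dictionary<string, XmlSerializer> Cache =
            new Dictionary<string, XmlSerializer>();

        public static XmlSerializer Get(Type type, XmlAttributeOverrides overrides, string cacheKey)
        {
            lock (Sync)
            {
                XmlSerializer serializer;
                if (!Cache.TryGetValue(cacheKey, out serializer))
                {
                    // Expensive: this overload emits a new dynamic assembly each time it runs,
                    // so create it once per key and reuse it.
                    serializer = new XmlSerializer(type, overrides);
                    Cache[cacheKey] = serializer;
                }
                return serializer;
            }
        }
    }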
In which cases do you need to watch out for concurrency problems (and use lock, for instance) in ASP.NET?
Are there 'best practices' around on this topic?
Documentation?
Examples?
'worst practices...' or things you've seen that can cause a disaster...?
I'm curious about, for instance, singletons (even though they are considered bad practice - don't start a discussion on this), static functions (do you need to watch out here?), ...?
Since ASP.NET is a web framework and is mainly stateless, there are very few concurrency concerns that need to be addressed.
The only thing that I have ever had to deal with is managing application cache but this is easily done with a cache-management type that wraps the .NET caching mechanisms.
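As a rough sketch of what such a wrapper can look like (class, method and key names are made up; this assumes the classic System.Web cache): funnel all access through one get-or-add method so the "check, miss, load, insert" sequence happens under a lock instead of racing across requests.

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class AppCache
    {
        private static readonly object Sync = new object();

        public static T GetOrAdd<T>(string key, Func<T> load, TimeSpan lifetime) where T : class
        {
            var cached = HttpRuntime.Cache[key] as T;
            if (cached != null)
                return cached;

            lock (Sync)
            {
                // Re-check inside the lock in case another request loaded it first.
                cached = HttpRuntime.Cache[key] as T;
                if (cached != null)
                    return cached;

                cached = load();
                HttpRuntime.Cache.Insert(
                    key, cached, null,
                    DateTime.UtcNow.Add(lifetime),
                    Cache.NoSlidingExpiration);
                return cached;
            }
        }
    }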
One huge problem that caused us a lot of grief was using Modules vs. Classes in our main Web Service. This was before we really knew what we were doing and has since been fixed.
The big problem with using modules is that module-level variables behave like shared (static) state: by default, one copy is visible to every request handled by the worker process. We pass in multiple datasets, manipulate them, and then return them to the client. Because we were using modules, the variables holding these datasets were getting corrupted by multiple calls occurring at the same time.
This was not caught in testing and was difficult to reproduce until we figured out how to properly load-test our web services. It took something like 10-20 requests per second before we could reproduce it reliably.
In the end, we changed all the modules to classes and then used instances of those classes instead of calls to the modules. This cleared up the concurrency issue, as each instantiated class had its own copy of the dataset in memory.
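In C# terms (illustrative, made-up names): a VB.NET Module compiles to what is effectively a static class, so module-level variables behave like static fields shared by every concurrent request, while instance fields give each call its own copy.

    using System.Data;

    // Problematic shape: one static DataSet shared by every concurrent request.
    public static class OrderProcessingModule
    {
        private static DataSet _workingData;            // shared across all calls

        public static DataSet Process(DataSet input)
        {
            _workingData = input;                       // request B can overwrite request A here
            // ... manipulate _workingData ...
            return _workingData;
        }
    }

    // Safer shape: each caller creates an instance, so each call gets its own copy.
    public class OrderProcessor
    {
        private DataSet _workingData;                   // per-instance state

        public DataSet Process(DataSet input)
        {
            _workingData = input;
            // ... manipulate _workingData ...
            return _workingData;
        }
    }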