Combining Session and Cache - asp.net

To make my extranet web application even faster/more scalable, I'm thinking of using some of the caching mechanisms. For some of the pages we'll use HTML caching; please ignore that technique for this question.
E.g.: at some point in time, 2,500 managers will simultaneously log in to our application
(most of them with the same Account/Project).
I'm thinking of storing an Account cache key and a Project cache key in the user's Session and using those to get the items from the Cache.
I could simply store the Account in the session, but that would result in 2,500 copies of the same Account in memory.
Is there a better solution to this, or does it make sense :)?
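To make the idea concrete, here's a rough sketch of what I have in mind (the key format and the AccountRepository helper are made up for the example):

    // Store only the cache key in the user's Session; the Account itself
    // lives once in the shared ASP.NET Cache.
    // accountId comes from the login/project selection in this sketch.
    string accountCacheKey = "Account_" + accountId;        // hypothetical key format
    Session["AccountCacheKey"] = accountCacheKey;

    // Later, on any page:
    string key = (string)Session["AccountCacheKey"];
    Account account = HttpContext.Current.Cache[key] as Account;
    if (account == null)
    {
        // Cache miss: reload once and re-insert with a sliding expiration,
        // so the 2,500 managers all share a single cached copy.
        account = AccountRepository.GetById(accountId);      // placeholder data access call
        HttpContext.Current.Cache.Insert(key, account, null,
            System.Web.Caching.Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(20));
    }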

Generally, adding items to session is seen as having a negative impact on scalability. Depending on your technology, you may have a problem scaling out to more than one server when using session variables (e.g. in classic ASP).
Having said that, if performance is your top priority you could cache data in both session and application variables. I have always thought it's not worth the hassle for a dataset of this size, because SQL Server will almost certainly have this data cached in memory and all you are saving is a network round trip.
Lastly, look at code and hardware optimisation first for performance enhancements. Migrating to managed/compiled code, reducing the size of your HTML, optimising your images, minifying JavaScript, and of course the HTML caching you mentioned previously - these are all things I would consider first.

Related

How to "warm-up" Entity Framework? When does it get "cold"?

No, the answer to my second question is not the winter.
Preface:
I've been doing a lot of research on Entity Framework recently, and something that keeps bothering me is its performance when the queries are not warmed up, the so-called cold queries.
I went through the performance considerations article for Entity Framework 5.0. The authors introduced the concept of warm and cold queries and how they differ, which I had also noticed myself without knowing of their existence. Here it's probably worth mentioning that I only have six months of experience under my belt.
Now I know what additional topics I can research if I want to understand the framework better in terms of performance. Unfortunately, most of the information on the Internet is outdated or bloated with subjectivity, hence my inability to find any additional information on the warm vs. cold queries topic.
Basically, what I've noticed so far is that whenever I have to recompile or an application pool recycle hits, my initial queries get very slow. Any subsequent data read is fast (subjectively), as expected.
We'll be migrating to Windows Server 2012, IIS 8 and SQL Server 2012, and as a junior I actually won myself the opportunity to test them before the rest. I'm very happy they introduced a warming-up module that will get my application ready for that first request. However, I'm not sure how to proceed with warming up my Entity Framework.
What I already know is worth doing:
Generate my Views in advance as suggested.
Eventually move my models into a separate assembly.
What I'm considering doing, going by common sense (probably the wrong approach):
Doing dummy data reads at Application Start in order to warm things up, and to generate and validate the models.
Questions:
What would be the best approach to have high availability of my Entity Framework at any time?
In what cases does Entity Framework get "cold" again? (Recompilation, recycling, IIS restart, etc.)
What would be the best approach to have high availability of my Entity Framework at any time?
You can go for a mix of pregenerated views and static compiled queries.
Static CompiledQueries are good because they're quick and easy to write and help increase performance. However, with EF5 it isn't necessary to compile all your queries, since EF will auto-compile queries itself. The only problem is that these queries can get lost when the cache is swept. So you still want to hold references to your own compiled queries for those that occur only very rarely but are expensive. If you put those queries into static classes they will be compiled when they're first required. This may be too late for some queries, so you may want to force compilation of these queries during application startup.
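A minimal sketch of holding on to a compiled query (the context and entity names are placeholders, and note that CompiledQuery only works against an ObjectContext-derived context, not a DbContext):

    using System;
    using System.Data.Objects;   // EF 5 ObjectContext API
    using System.Linq;

    static class Queries
    {
        // Compiled once and referenced from a static field, so the compiled
        // form can't be lost when EF's internal query cache is swept.
        // "MyEntities" (an ObjectContext) and "Orders" are placeholder names.
        public static readonly Func<MyEntities, int, IQueryable<Order>> OrdersByCustomer =
            CompiledQuery.Compile((MyEntities ctx, int customerId) =>
                ctx.Orders.Where(o => o.CustomerId == customerId));
    }

Invoking Queries.OrdersByCustomer once during application startup forces the actual compilation early instead of on the first user request.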
Pregenerating views is the other possibility, as you mention - especially for those queries that take very long to compile and that don't change. That way you move the performance overhead from runtime to compile time, and it won't introduce any lag. But of course this change goes through to the database, so it's not as easy to deal with; code is more flexible.
Do not use a lot of TPT inheritance (that's a general performance issue in EF). Don't build your inheritance hierarchies too deep or too wide. Only 2-3 properties specific to some class may not be enough to require a type of its own; they could be handled as optional (nullable) properties on an existing type.
Don't hold on to a single context for a long time. Each context instance has its own first-level cache, which slows down performance as it grows larger. Context creation is cheap, but the state management of the entities cached inside the context may become expensive. The other caches (query plan and metadata) are shared between contexts and die together with the AppDomain.
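In practice that boils down to the usual short-lived context pattern; a sketch with placeholder names:

    // One context per unit of work, disposed promptly. The first-level cache
    // dies with the context, while the shared query plan and metadata caches
    // stay in the AppDomain and keep later contexts warm.
    using (var ctx = new MyDbContext())          // "MyDbContext"/"Customers" are placeholders
    {
        var customer = ctx.Customers.Find(customerId);
        customer.IsActive = true;
        ctx.SaveChanges();
    }   // disposed here: change tracker and first-level cache are released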
All in all, you should make sure to allocate contexts frequently and use them only for a short time, that you can start your application quickly, that you compile queries that are rarely used, and that you provide pregenerated views for queries that are performance critical and often used.
In what cases does Entity Framework get "cold" again? (Recompilation, recycling, IIS restart, etc.)
Basically, every time you lose your AppDomain. IIS performs an application pool recycle every 29 hours by default, so you can never guarantee that you'll have your instances around. Also, after some time without activity the AppDomain is shut down. You should attempt to come up quickly again. Maybe you can do some of the initialization asynchronously (but beware of multi-threading issues). You can use scheduled tasks that call dummy pages in your application during times when there are no requests to prevent the AppDomain from dying, but eventually it will die anyway.
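As an illustration (not something the linked article prescribes), asynchronous warm-up at startup could look roughly like this, with MyDbContext and Customers as placeholder names:

    // Global.asax.cs - fire a dummy query in the background right after the
    // AppDomain starts, so EF builds its metadata and first query plan before
    // the first real request arrives. Requires .NET 4.5 and System.Linq.
    protected void Application_Start()
    {
        System.Threading.Tasks.Task.Run(() =>
        {
            try
            {
                using (var ctx = new MyDbContext())
                {
                    var warmed = ctx.Customers.Any();   // any cheap query will do
                }
            }
            catch
            {
                // Warm-up is best effort; never let it take the application down.
            }
        });
    }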
I also assume when you change your config file or change the assemblies there's going to be a restart.
If you are looking for maximum performance across all calls, you should consider your architecture carefully. For instance, it might make sense to pre-cache often-used lookups in server RAM when the application loads, instead of using database calls on every request. This technique will ensure minimum application response times for commonly used data. However, you must be sure to have a well-behaved expiration policy, or always clear your cache whenever changes are made that affect the cached data, to avoid issues with concurrency.
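A small sketch of that pattern, assuming a hypothetical LoadCountriesFromDb data access call:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;

    public static class LookupCache
    {
        // Called once at application start to pre-cache a frequently used lookup.
        public static void WarmUp()
        {
            var countries = LoadCountriesFromDb();
            MemoryCache.Default.Set("Countries", countries, new CacheItemPolicy
            {
                // Expiration policy: refresh at most every 30 minutes; on top of
                // this, remove the entry explicitly whenever the table changes.
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(30)
            });
        }

        private static List<string> LoadCountriesFromDb()
        {
            // Placeholder for the real database call.
            return new List<string>();
        }
    }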
In general, you should strive to design distributed architectures to only require IO-based data requests when the locally cached information becomes stale or needs to be transactional. Any "over the wire" data request will normally take 10-1000 times longer to retrieve than a local, in-memory cache retrieval. This fact alone often makes discussions about "cold vs. warm data" inconsequential in comparison to the "local vs. remote" data issue.
General tips:
Perform rigorous logging, including what is accessed and request times.
Perform dummy requests when initializing your application to warm up the very slow requests that you identify in the previous step.
Don't bother optimizing unless it's a real problem; communicate with the consumers of the application and ask. Get comfortable having a continuous feedback loop, if only to figure out what needs optimization.
Now to explain why dummy requests are not the wrong approach.
Less Complexity - You are warming up the application in a manner that will work regardless of changes in the framework, and you don't need to figure out possibly funky APIs/framework internals to do it the right way.
Greater Coverage - You are warming up all layers of caching at once related to the slow request.
To explain when a cache gets "cold":
This happens at any layer in your framework that applies a cache; there is a good description at the top of the performance page.
Whenever a cache has to be invalidated after a potential change that makes the cache stale; this could be a timeout or something more intelligent (i.e. a change in the cached item).
When a cache item is evicted. The algorithm for doing this is described in the "Cache eviction algorithm" section of the performance article you linked, but in short:
LFRU (Least Frequently/Recently Used) cache based on hit count and age, with a limit of 800 items.
The other things you mentioned, specifically recompilation and restarting of IIS, clear either part or all of the in-memory caches.
As you have stated, use "pre-generated views"; that's really all you need to do.
Extracted from your link:
"When views are generated, they are also validated. From a performance standpoint, the vast majority of the cost of view generation is actually the validation of the views"
This means the performance hit will take place when you build your model assembly. Your context object will then skip the "cold query" and stay responsive for the duration of the context object's life cycle, as well as for subsequent new object contexts.
Executing irrelevant queries will serve no other purpose than to consume system resources.
The shortcut ...
Skip all that extra work of pre-generated views
Create your object context
Fire off that sweet irrelevant query
Then just keep a reference to your object context for the duration of your process
(not recommended).
I have no experience with this framework. But in other contexts, e.g. Solr, completely dummy reads will not be of much use unless you can cache the whole DB (or index).
A better approach would be to log the queries, extract the most common ones from the logs, and use those to warm up. Just be sure not to log the warm-up queries, or remove them from the logs before proceeding.

Caching large amounts of data

I have been reading that lots of people use Redis or another key-value store/NoSQL solution as a distributed cache for their website.
Maybe I'm not understanding completely, but it seems a solution like this only works for shared data. For example, if I have a website that requires a user to log in, and the queries they generate return data specific to only that user (in my case, banking/asset information) that can't be cached for all users, this type of solution doesn't work.
Unfortunately, the database is shared across all our applications, and when it gets bogged down, the website gets bogged down as well. Since each user has gigabytes of information, I obviously can't cache all of that, and each web page queries completely different information.
Is there some caching strategy that I can employ for this type of scenario?
A distributed cache like Velocity doesn't require that the data it stores be limited to "shared" data. But you do have to read the data from your DB and store it in the cache, which takes time.
A few alternatives:
Partition your data, so it's spread out among several DB servers
Add as much RAM as you can to each DB server, to allow SQL Server to cache what it can
There are many variations to the partitioning theme....
Is your web app load balanced? There are caching options at the web tier as well -- the ASP.NET object cache is a good place to start.
It's possible that your web clients are requesting the same data more than once (for a given user). So caching could give a benefit in that case.
But before you go implementing a huge caching solution, you really need to look at the queries that are particularly slow or executed a huge number of times and see if you can optimize them in any way.
Then look at upgrading your DB machine.
I read a nice article about the performance issues that MySpace had when they had a huge growth.
You can find the article here.
One quote from the article that stands out:
The addition of the cache servers is "something we should have done
from the beginning, but we were growing too fast and didn't have time
to sit down and do it," Benedetto adds
If the problem is in your database server, think about partitioning your data and making use of a database farm to spread the load. Also think about SSDs! They can really speed up your database access code.
Depending on how dynamic your data is, you could consider using fragment caching. This will cache the HTML of the page rather than the data, so if the volume of data is prohibitive to cache then this might work for you.
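For reference, fragment caching in Web Forms is just an OutputCache directive on a user control rather than on the page; the control name here is made up:

    <%-- FeaturedProducts.ascx: only this control's rendered HTML is cached
         for five minutes; the rest of the page stays fully dynamic. --%>
    <%@ Control Language="C#" %>
    <%@ OutputCache Duration="300" VaryByParam="none" %>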

Dealing with larger traffic on ASP.net web site

I have an ASP.NET web site for our company that handles about 1,000-2,000 users every day. Now the site will have about 4,000-5,000 users every day. We are putting it on two servers and placing them in a hardware load-balanced environment.
I am wondering if there is anything else I should do from the ASP.NET web site perspective to handle the larger number of users.
Thanks.
Some things I'd take into consideration:
Session state management - are you going to do it out-of-process? If so, make sure everything being stored in Session is serializable (a small sketch follows this list).
Do you have a large number of (or any, some may argue) UpdatePanels being used, or many standard server-side postbacks? If so, try to convert what you can to simple AJAX requests and marshal raw/JSON data back and forth. This will minimize the number of full page life cycles and data traffic on the server.
On the front-end/UI side, try to leverage CSS sprites, so that you go to the server for the images once and never again.
For database connectivity, make sure you leverage connection pooling.
You may also want to consider JS and CSS minification.
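Regarding the first point, a minimal sketch of out-of-process session state (the class name is invented; StateServer is just one of the out-of-process modes):

    // Anything stored in Session must be serializable once session state moves
    // out of process; non-serializable members fail at runtime.
    [Serializable]
    public class UserProfile
    {
        public int UserId { get; set; }
        public string DisplayName { get; set; }
    }

and the corresponding web.config entry:

    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=localhost:42424"
                  timeout="20" />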
Additionally, these pages have some good tips:
http://msdn.microsoft.com/en-us/magazine/cc163854.aspx (a bit outdated, but still somewhat relevant)
http://developer.yahoo.com/performance/rules.html
First of all, you should profile your application for bottlenecks - if there is anywhere in your code that makes your application slow, then adding new servers won't help. There are many profilers - I recommend JetBrains dotTrace (there is a free trial for a couple of days).
The second thing is OutputCache - the shortest explanation is "store the HTML that is sent to the users, instead of recreating it every time". There is a huge number of articles about OutputCache, so I don't think you need any link here.
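For completeness, the page-level form is a single directive; the values here are just an example:

    <%-- Cache the rendered HTML of this page for 60 seconds, with one cached
         copy per distinct combination of query-string/form parameters. --%>
    <%@ OutputCache Duration="60" VaryByParam="*" %>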
If the traffic is even bigger, you can think about using some solution for caching your responses around the world (read about Akamai, for example), but I don't suppose you will need it with a couple of thousand visitors daily.

ASP.NET object caching - how much is too much?

This is my first time really getting into caching with .NET, so I wanted to run a couple of scenarios by you.
Question 1: Many expensive objects
I've got some small objects (simple int/string properties) which are pretty expensive to instantiate. These are user statistic objects, of which each user may have 1-10. Is it good or bad practice to fill up the cache with these fellas?
Question 2: Few cheap regularly used objects
Also got a few objects (again small) which are used many times on every page load. Is the cache designed to be accessed so regularly?
Fanks!
stackoverflow: Cracking question suggestion tool btw.
1) I would cache them. You can always set CacheItemPriority.Low if you are worried about the cache 'filling up'.
2) Yes the cache is designed to be accessed regularly. It can lead to huge performance improvements.
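A quick sketch of that, with a made-up key format and a hypothetical stats object, from inside a page or handler:

    // Low priority means these entries are evicted first under memory pressure,
    // so per-user statistics can't crowd out more valuable cached data.
    Cache.Insert(
        "UserStats_" + userId,
        stats,
        null,                                           // no cache dependency
        System.Web.Caching.Cache.NoAbsoluteExpiration,
        TimeSpan.FromMinutes(15),                       // sliding expiration
        System.Web.Caching.CacheItemPriority.Low,
        null);                                          // no removal callback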
The answer to both of your questions is to cache them aggressively if you can.
Objects that are expensive to instantiate yet relatively static (that is unchanging) throughout the application's life ought to be cached. Even relatively inexpensive objects should be cached if you can improve performance by doing so.
You may find yourself running into problems when you need to invalidate the cache when any of these objects become stale or obsolete. Cache invalidation can be a difficult problem especially in a multi-server environment.
I don't think there is any problem with hitting the cache too frequently...
Overall, ASP.NET caching is fairly intelligent in terms of deciding what to keep and generally managing space. As a rule, though, I wouldn't depend on the cache to store information; only use it as an alternative to hitting disk or the DB. User objects may be better served by session state.
https://web.archive.org/web/20211020111559/https://aspnet.4guysfromrolla.com/articles/100902-1.aspx
is a great article explaining the built-in capabilities of .NET caching.
Let's address the question in your title - when caching is too much.
It's too much if you are putting so much in the cache that it is pushing other things away. If the web sites on the server are in total using more memory than there is physical memory, they will push each other out into virtual memory stored on disk. That practically means that you are caching some of the objects on disk instead of in memory, which is a lot slower.
It's too much if you are putting so many objects in the cache that they push each other out, so that you rarely get to use any of the objects in the cache before they go away.
So, generally you can cache a lot before you reach the limit where there is no point in putting anything more in the cache.
When determining what's most beneficial to cache, consider where the bottlenecks are. If, for example, you have a database server with a lot more capacity than the web server, caching the database results doesn't save that many resources. Getting data from the database takes time, but it doesn't use many resources on the web server while waiting, so it won't affect throughput much.

What to put in a session variable

I recently came across an ASP.NET 1.1 web application that put a whole heap of stuff in the session variable - including all the DB data objects and even the DB connection object. It ends up being huge. When the web session times out (four hours after the user has finished using the application), sometimes their database transactions get rolled back. I'm assuming this is because the DB connection is not being closed properly when IIS kills the session.
Anyway, my question is: what should be in the session variable? Clearly some things need to be in there. The user selects which plan they want to edit on the main screen, so the plan id goes into the session variable. Is it better to try to reduce the load on the DB by storing all the details about the user (and their manager, etc.) and the plan they are editing in the session variable, or should I try to minimise the stuff in the session variable and query the DB for everything I need in the Page_Load event?
This is pretty hard to answer because it's so application-specific, but here are a few guidelines I use:
Put as little as possible in the session.
User-specific selections that should only last during a given visit are a good choice.
Often, variables that need to be accessible to multiple pages throughout the user's visit to your site (to avoid passing them from page to page) are also good to put in the session.
From what little you've said about your application, I'd probably select your data from the db and try to find ways to minimize the impact of those queries instead of loading down the session.
Do not put database connection information in the session.
As far as caching, I'd avoid using the session for caching if possible -- you'll run into issues where someone else changes the data a user is using, plus you can't share the cached data between users. Use the ASP.NET Cache, or some other caching utility (like Memcached or Velocity).
As far as what should go in the session, anything that applies to all browser windows a user has open to your site (login, security settings, etc.) should be in the session. Things like what object is being viewed/edited should really be GET/POST variables passed around between the screens so a user can use multiple browser windows to work with your application (unless you'd like to prevent that).
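For example, a sketch of keeping the currently edited object out of the session so each browser window can work on a different plan (the parameter name is made up):

    // Link to the edit screen with the id in the query string rather than in Session:
    //   EditPlan.aspx?planId=42
    int planId;
    if (int.TryParse(Request.QueryString["planId"], out planId))
    {
        // Load the plan for this window only; Session never needs to know about it.
    }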
DO NOT put UI objects in session.
Beyond that, I'd say it varies. Too much in session can slow you down if you aren't using in-process session state, because you are going to be serializing a lot, plus there's the speed of the provider. Cache and Session should be used sparingly and carefully. Don't just put something in session because you can or because it's convenient. Sit down and analyze whether it makes sense.
Ideally, the session in ASP should store the least amount of data that you can get away with. Storing a reference to any object that is holding system resources open (particularly a database connection) is a definite scalability killer. Also, storing uncommitted data in a session variable is just a bad idea in most cases. Overall it sounds like the current implementation is abusively using session objects to try and simulate a stateful application in a supposedly stateless environment.
Although it is much maligned, the ASP.NET model of managing state automatically through hidden fields should really eliminate the majority of the need to keep anything in session variables.
My rule of thumb is that the more scalable (in terms of users/hits) that the app needs to be, the less you can get away with using session state. There is, however, a trade-off. For web applications where the user is repeatedly accessing the same data and typically has a fairly long session per use of the site, some caching (if necessary in session objects) can actually help scalability by reducing the load on the DB server. The idea here is that it is much cheaper and less complex to farm the presentation layer than the back-end DB. Of course, with all things, this advice should be taken in moderation and doesn't apply in all situations, but for a fairly simple in-house CRUD app, it should serve you well.
A very similar question was asked regarding PHP sessions earlier. Basically, sessions are a great place to store user-specific data that you need to access across several page loads. Sessions are NOT a great place to store database connection references; you'd be better off using some sort of connection pooling software or opening/closing your connection on each page load. As far as caching data in the session, this depends on how session data is being stored, how much security you need, and whether or not the data is specific to the user. A better bet would be to use something else for caching data.
Storing navigation cues in sessions is tricky. The same user can have multiple windows open, and then changes get propagated in a confusing manner. DB connections should definitely not be stored; ASP.NET maintains the connection pool for you, no need to resort to your own sorcery. If you need to cache stuff for short periods and the data set size is relatively small, look into ViewState as a possible option (at the cost of adding more bulk to the page size).
A: Data that is only relevant to one user, e.g. a username or a user ID; at most an object representing a user. Sometimes URL-relative data (like where to take somebody) or an error message stack are useful to push into the session.
If you want to share stuff potentially between different users, use the Application store or the Cache. They're far superior.
Stephen,
Do you work for a company that starts with "I", that has a website that starts with "BC"? That sounds exactly like what I did when I first started developing in .net (and was young and stupid) -- I crammed everything I could think of in session and application. Needless to say, that was double-plus ungood.
In general, eschew session as much as possible. Certainly, non-serializable objects shouldn't be stored there (database connections and such), but even big, serializable objects shouldn't be either. You just don't want the overhead.
I would always keep very little information in session. Sessions use server memory resources, which is expensive. Saving too many values in session increases the load on the server, and eventually the performance of the site will go down. When you use load-balanced servers, usage of session can run into problems. So what I do is use minimal or no sessions, use cookies if the information is not very critical, and make more use of hidden fields and database sessions.
