I have two mirrored servers behind a load balancer. I'd like to know the pros and cons of sticking with the ASP.NET application cache versus going with something like memcached. I'm interested in the various solutions, and especially in the kinds of errors or limitations I could run into by not synchronizing the caches.
To start the discussion, I'd hazard that using ASP.NET cache would be faster and simpler.
You are best advised to abstract the caching behind an interface, implement that interface in a number of ways, and test the different implementations.
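For instance, here is a minimal sketch of such an abstraction (the interface and class names are only illustrative); a memcached- or Redis-backed implementation can later be dropped in behind the same interface:

    using System;
    using System.Web;
    using System.Web.Caching;

    public interface ICacheProvider
    {
        T Get<T>(string key) where T : class;
        void Set<T>(string key, T value, TimeSpan expiry) where T : class;
        void Remove(string key);
    }

    // One possible implementation wrapping the in-process ASP.NET cache.
    public class AspNetCacheProvider : ICacheProvider
    {
        public T Get<T>(string key) where T : class
        {
            return HttpRuntime.Cache.Get(key) as T;
        }

        public void Set<T>(string key, T value, TimeSpan expiry) where T : class
        {
            HttpRuntime.Cache.Insert(key, value, null,
                DateTime.UtcNow.Add(expiry), Cache.NoSlidingExpiration);
        }

        public void Remove(string key)
        {
            HttpRuntime.Cache.Remove(key);
        }
    }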
As in many cases, it is a matter of looking at the data and how much it is shared between different users.
The ASP.NET cache would not necessarily be faster or simpler. It depends on how much you are caching and whether the web servers have the resources to handle it. In most reasonably sized apps, the answer to that is often no.
The main downside to not synchronizing between cache servers would be that in a load balanced environment, subsequent requests for the same data might go to different servers. This would just mean that the database gets hit twice some of the time. A way to mitigate this is to implement sticky sessions, where a given user is always sent to the same server and the load balancer only makes a balancing decision at the start of a user session.
We have an MVC 4 application which communicates with backend D365 entities.
The application makes a lot of CRM calls to get its data, so it was really slow and the user experience was very poor.
To improve performance, a cache layer was added: whenever the application gets data from CRM, it puts it into a Session variable.
That certainly helped with performance: within a user's session it avoids the trip to the server, and everything is served from session data. However, the application now has a lot of data syncing issues (data saved by one user is not reflected to others until they log out and log back in).
My first question: was this really a good way of handling the performance issue the application was having? In my opinion, rather than fixing the performance issue, a workaround was added which has become the cause of other issues.
Second question: is there a better architecture/design that can be put in place which will improve performance as well as resolve the data syncing issues the application is having? I am thinking of adding a distributed cache layer (likely Azure Redis) to replace the in-process Session layer, and optionally (if that makes sense) implementing a write-through strategy in Redis so that the front-end application only talks to the cache and lets the cache keep the data store up to date.
Any guidance or pointer is very much appreciated!
I think you're on the right path. As you've already experienced, adding a cache to your application introduces a new challenge: handling stale data. In your case the data is cached at the user level, which means each user has their own cache. This works well if each user works on their own slice of the data, for instance a banking app where each user sees only their own bank statements (and never those of others). However, this is not the case in your application: multiple users operate on the same data, and now you're running into synchronization issues. A quick fix could be to replace the Session cache with the Application cache, which is shared by all users.
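As a rough sketch of that quick fix (the Account type, key format, five-minute expiry and LoadAccountFromCrm are placeholders standing in for your real D365 query), the per-user Session lookup becomes a lookup in the shared HttpRuntime.Cache:

    using System;
    using System.Web;
    using System.Web.Caching;

    public class Account { public string Id; public string Name; }

    public static class AccountCache
    {
        public static Account GetAccount(string accountId)
        {
            string key = "account:" + accountId;
            var cached = HttpRuntime.Cache.Get(key) as Account;
            if (cached != null)
                return cached;                        // shared by all users on this server

            Account account = LoadAccountFromCrm(accountId);
            HttpRuntime.Cache.Insert(key, account, null,
                DateTime.UtcNow.AddMinutes(5),        // short TTL bounds how stale data can get
                Cache.NoSlidingExpiration);
            return account;
        }

        private static Account LoadAccountFromCrm(string accountId)
        {
            // placeholder for the real CRM/D365 call
            throw new NotImplementedException();
        }
    }

Note that this is still per web server; with multiple load-balanced servers each keeps its own copy, which is usually acceptable if the expiry is short.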
Externalizing your cache (e.g. Redis or Memcached) is another solution and offers many advantages (distribution, scaling, synchronization, etc.), but it also increases the complexity of your application. Your application now depends on another piece of infrastructure with its own behavior.
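If you do go the Azure Redis route, a minimal cache-aside sketch using the StackExchange.Redis client might look like the following (the connection string, key format and expiry are placeholders, Account is the same placeholder type as in the previous sketch, and I've assumed CRM stays the system of record: a save writes to CRM first and then refreshes the cached copy, rather than a true write-through where the cache itself persists the data):

    using System;
    using Newtonsoft.Json;
    using StackExchange.Redis;

    public class RedisAccountCache
    {
        private static readonly ConnectionMultiplexer Redis =
            ConnectionMultiplexer.Connect("your-cache.redis.cache.windows.net:6380,ssl=true,password=<key>");

        public Account GetAccount(string accountId)
        {
            IDatabase db = Redis.GetDatabase();
            string key = "account:" + accountId;

            RedisValue cached = db.StringGet(key);
            if (cached.HasValue)
                return JsonConvert.DeserializeObject<Account>(cached);

            Account account = LoadAccountFromCrm(accountId);           // real D365 query goes here
            db.StringSet(key, JsonConvert.SerializeObject(account), TimeSpan.FromMinutes(5));
            return account;
        }

        public void SaveAccount(Account account)
        {
            WriteAccountToCrm(account);                                // CRM remains the source of truth
            Redis.GetDatabase().StringSet("account:" + account.Id,
                JsonConvert.SerializeObject(account), TimeSpan.FromMinutes(5));
        }

        private Account LoadAccountFromCrm(string accountId) { throw new NotImplementedException(); }
        private void WriteAccountToCrm(Account account) { throw new NotImplementedException(); }
    }

Because every web server talks to the same Redis instance, a save by one user is visible to all users on the next read, which addresses the syncing issue described in the question.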
I am just getting into the more intricate parts of web development, so this may not be the best place to ask. However, when is it best to introduce load balancing for a web project? I understand that how many users a site can take before performance really suffers depends on good versus bad design. However, I am planning to code a new project that could potentially have a lot of users, and I wondered if I should be thinking about load balancing right off the bat. Opinions welcome; thanks in advance!
I should note also that the project will most likely be ASP.NET (WebForms or MVC, not yet decided) with a backend of MongoDB or PostgreSQL (again, still deciding).
Load balancing can also be a form of high availability. What if your web server goes down? It can take a long time to replace it.
Generally, by the time you need to think about throughput you are already rich, because you have an enormous number of users.
Stack Overflow serves around 10 million unique users a month with a few servers (6 or so). Think about how many requests per day you would have if you were constantly generating 10 HTTP responses per second for 8 busy hours: 10*3600*8 = 288,000 page impressions per day. You won't have that many users any time soon.
And if you do, you optimize your app to 20 requests per second per CPU core, which means you get 80 requests per second out of a four-core commodity server. That is a lot.
Adding a load balancer later is usually easy. Load balancers can tag each user with a cookie so they get pinned to one particular target. Your app will not notice the difference. Usually.
Is this for an e-commerce site? If so, then the real question to ask is "for every hour that the site is down, how much money are you losing?" If that number is substantial, then I would make load balancing a priority.
One of the more important architecture decisions that I have seen affect this is the use of session variables. You need to be able to provide a seamless experience if your user ends up on different servers during their visit. Session variables won't transfer from server to server, so I would avoid using them.
I support a solution like this at work. We run four (used to be eight) .NET e-commerce websites on three Windows 2k8 servers (backed by two primary/secondary SQL Server 2008 databases), taking somewhere around 1300 (combined) orders per day. Each site is load-balanced, and kept "in the farm" by a keep-alive. The nice thing about this, is that we can take one server down for maintenance without the users really noticing anything. When we bring it back, we re-enable our replication service and our changes get pushed out to the other two servers fairly quickly.
So yes, I would recommend giving a solution like that some thought.
The parameters here that can affect one another and slow down performance are:
Bandwidth
Processing
Synchronization
These have to do with how many users you have, together with the media you want to serve.
If you have to deliver a lot of video/files, you need many servers to deliver them. Let's say you do not; the next things to check are the users and the processing.
In my experience, what slows down processing is the locking of the session. So one big step to speed up processing is to write fully custom session handling, so that your pages do not lock one another and you can handle a large number of users without issues.
Now for the next step, let's say you have a database that keeps all the data. To gain from load balancing across many machines, the trick is to keep a local cache of what you are going to show.
So the first idea is to avoid locking that makes users wait on one another, and the second is to have a local cache on each machine that is kept up to date from the main database.
ref:
Web app blocked while processing another web app on sharing same session
Replacing ASP.Net's session entirely
call aspx page to return an image randomly slow
Always online
One more parameter: you can build a solution in a "one for all, and all for one" :) style, where the extra servers also serve as backups. If one server goes offline for any reason (e.g. for an update and restart), the rest can still work and serve.
As you said, it depends if and when load balancing should be introduced. It depends on performance and how many users you want to serve. LB also improves the reliability of your app - it will not stop when one system goes crashing down. If you can see your project growing to be really big and serving lots of users, I would suggest designing your application so it can be upgraded to LB, so do not do anything non-standard. Try to steer away from home-made solutions and always follow good practice. If later on you really need LB, it should not require changes to your app.
UPDATE
You may need to think ahead, but not at the cost of complicating your application too much. Do not go paranoid and prepare everything to work lightning fast 'just in case'. For example, do not worry about sessions - session management can easily be moved to SQL Server at any time, and that is the way to go with LB. Caching will also help if you hit bottlenecks in the future, but you do not need to implement it straight away; good design (stable interfaces), separation and decoupling will allow the cache to be added later on. So again: stick to good practices, do not close doors, but also do not open all of them straight away.
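For what it's worth, moving session state to SQL Server later is mostly a web.config change plus running the aspnet_regsql tool to create the session tables; something along these lines (the connection string is a placeholder):

    <system.web>
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=MySessionDb;Integrated Security=SSPI;"
                    cookieless="false"
                    timeout="20" />
    </system.web>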
You may find this article interesting.
I have an ASP.NET 2.0 web application which is an online survey system. At times it has a huge number of users and the application gets slow.
I want to set up load balancing for the web application, which currently runs on a single server.
Can anyone advise me on the following?
In what scenarios should I consider load balancing for my application?
What types of applications need load balancing?
What are the pros and cons of load balancing?
What are the guidelines for developing applications that target load balancing?
At most, how many concurrent users can access a web application without load balancing before performance suffers noticeably?
In my case the application is already developed. In what areas should I make changes to prepare it for load balancing?
Thanks in advance.
First you need to ensure that you know where the bottleneck to performance is.
If you can focus first on getting the total round trip time for each user's page load down then you will be able to handle more users.
In a case where you are bottlenecked on database calls you could setup more servers for load balancing your web application and get very little benefit.
Is there a bandwidth issue with your webserver? Are requests for images, css and javascript files slowing down just as much as web application page_load requests?
Ensure that you aren't storing too much data in session state. If you are storing lists or other large objects in memory you have to remember that you will be multiplying that memory usage by the number of users causing things to get out of control pretty quickly if you have 10,000 users with an active session. In some of these cases it may be preferable to move state information out to cookies that are stored by the user.
From my experience, the software load balancing options are limited, inconsistent and inflexible.
Of course, when developing the application, you need to ensure that your application can scale out for a web farm scenario - i.e, things like ensuring you are using a distributed cache provider, a state server etc.
The hardware based solutions will provide multiple methods of distributing loads and are very consistent. Consider options like using NLB/F5 Big-IP load balancers.
Have a look at this post from Scott as well - Old, but useful. http://www.hanselman.com/blog/LoadBalancingAndASPNET.aspx
Thanks, Anoop
In what scenarios should I consider load balancing for my application?
In any scenario where your current server can no longer handle the load by itself. There are two kinds of scaling: vertical (buy a better server) and horizontal (add more servers). Load balancing is a form of horizontal scaling, and typically gives you more bang for your buck, but it is also much harder to set up.
What types of applications need load balancing?
Pretty much the same answer here as for the first question.
What are the pros and cons of load balancing?
The pro is that it lets you scale. The con is that it introduces its own unique issues.
What are the guidelines for developing applications that target load balancing?
The big one is session. Session state is stored in-proc by default, which means it only exists on the one server. If the user could be bounced across different servers, they will lose their session. I would recommend reading up on it here (the way you probably want to go is either don't use session, or use a state server).
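As a sketch, switching to the ASP.NET state service is mostly a web.config change (the host name is a placeholder; 42424 is the state service's default port, and anything you then put in Session has to be serializable):

    <system.web>
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=stateserver.example.local:42424"
                    timeout="20" />
    </system.web>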
At most, how many concurrent users can access a web application without load balancing before performance suffers noticeably?
That completely depends on your server hardware and your application requirements. I believe ASP.NET limits the number of worker threads per CPU core (it is configurable), so that is one consideration right off the bat. There is also RAM usage to consider.
This is my first time really getting into caching with .NET, so I wanted to run a couple of scenarios by you.
Question 1: Many expensive objects
I've got some small objects (simple int/string properties) which are pretty expensive to instantiate. These are user statistic objects which each user may have 1 - 10 of. Is it good or bad practice to fill up the cache with these fellas?
Question 2: Few cheap regularly used objects
Also got a few objects (again small) which are used many times on every page load. Is the cache designed to be accessed so regularly?
Fanks!
stackoverflow: Cracking question suggestion tool btw.
1) I would cache them. You can always set CacheItemPriority.Low if you are worried about the cache 'filling up' (see the sketch below).
2) Yes the cache is designed to be accessed regularly. It can lead to huge performance improvements.
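For example, a minimal sketch of point 1 (the key format and one-hour expiration are arbitrary choices):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class UserStatsCache
    {
        // Cache an expensive-to-build per-user object, but mark it as the
        // first thing to evict if memory gets tight.
        public static void Store(string userId, object stats)
        {
            HttpRuntime.Cache.Insert(
                "user-stats:" + userId,
                stats,
                null,                          // no cache dependency
                DateTime.UtcNow.AddHours(1),   // absolute expiration
                Cache.NoSlidingExpiration,
                CacheItemPriority.Low,         // evicted first under memory pressure
                null);                         // no removal callback
        }
    }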
The answer to both of your questions is to cache them aggressively if you can.
Objects that are expensive to instantiate yet relatively static (that is unchanging) throughout the application's life ought to be cached. Even relatively inexpensive objects should be cached if you can improve performance by doing so.
You may find yourself running into problems when you need to invalidate the cache when any of these objects become stale or obsolete. Cache invalidation can be a difficult problem especially in a multi-server environment.
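On a single server, the simplest form of invalidation is to remove (or overwrite) the cached entry whenever the underlying data is written, so the next read repopulates it; a minimal sketch follows (the key scheme is hypothetical). In a multi-server farm each server has its own in-process cache, so you need a shared cache or some notification mechanism instead.

    using System.Web;

    public static class ProductCacheInvalidation
    {
        // Call this from whatever code saves a product, so the next read
        // falls through to the database and refreshes the cache.
        public static void OnProductSaved(int productId)
        {
            HttpRuntime.Cache.Remove("product:" + productId);
        }
    }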
I don't think there is any problem with hitting the cache too frequently...
Overall, ASP.NET caching is fairly intelligent in terms of deciding what to keep and generally managing space. As a rule, though, I wouldn't depend on the cache to store information; only use it as an alternative to hitting disk or the DB. User objects may be better served by session state.
https://web.archive.org/web/20211020111559/https://aspnet.4guysfromrolla.com/articles/100902-1.aspx is a great article explaining the built-in capabilities of .NET caching.
Let's address the question in your title - when caching is too much.
It's too much if you are putting so much in the cache that it is pushing other things out. If the web sites on the server are, in total, using more memory than there is physical memory, they will push each other out into virtual memory stored on disk. In practice that means you are caching some of the objects on disk instead of in memory, which is a lot slower.
It's too much if you are putting so many objects in the cache that they push each other out, so that you rarely get to use any of the objects in the cache before they go away.
So, generally you can cache a lot before you reach the limit where there is no point in putting anything more in the cache.
When determining what's most beneficial to cache, consider where the bottlenecks are. If, for example, you have a database server with a lot more capacity than the web server, caching the database results doesn't save that many resources. Getting data from the database takes time, but the web server uses few resources while waiting for it, so it will not affect throughput much.
First of all to give you a bit of background on the current environment. We have a number of ASP.NET applications, all of which use session for certain aspects. We are "Load Balanced" over multiple servers due to traffic levels, however, our load balancing is set to use "Sticky Sessions" as currently all web applications are set to use "InProc" for session state.
We are looking at being able to remove the "Sticky Sessions" configuration on our load balancer, as due to our traffic loads servers can and do get overloaded. We want to go with a more balanced approach, but must be able to use session.
I know that SqlServer for session state will work, but for reasons beyond our control, we cannot use SqlServer to store our state. In researching it seems that StateServer is our best bet. We have an additional server, with loads of memory sitting around. This server could be our StateServer for the entire Web Cluster. We just want to know the following things.
1.) Besides any potential serialization issues with the switch from InProc to StateServer, are there any major known issues with losing session objects or generating errors with the above listed environment?
2.) Aside from the single point of failure and slightly slower performance, are there any other gotchas that we need to be aware of when using StateServer?
3.) Are there any metrics that show the performance differences between the three types of state storage?
Here is a decent FAQ on asp.net state: http://www.eggheadcafe.com/articles/20021016.asp
From that Article, here is some information on StateServer:
In a web farm, make sure you have the same MachineKey in all your web servers. See KB 313091 on how to do it.
Also, make sure your objects are serializable. See KB 312112 for details.
For session state to be maintained across different web servers in the web farm, the Application Path of the website (For example \LM\W3SVC\2) in the IIS Metabase should be identical in all the web servers in the web farm. See KB 325056 for details
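Regarding the first point, here is a hedged example of what the shared machineKey looks like in web.config on every server in the farm (the key values are placeholders; generate your own rather than copying these):

    <system.web>
      <machineKey validationKey="...your-generated-validation-key..."
                  decryptionKey="...your-generated-decryption-key..."
                  validation="SHA1"
                  decryption="AES" />
    </system.web>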
I have only used SQL Server and in-proc session state, but these three tips that apply when using SQL Server apply here as well:
Avoid storing too much information in the session, as it adds cost both in serialization and in data transmitted over the network.
Make sure you don't have anything that depends on Session_OnEnd. It is simply not available for out-of-process sessions.
Turn off session state on pages that don't use it. This doesn't make a difference for in-process session state, but for out-of-process it will save you a lot.
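For example, at the top of an .aspx page that never touches session state:

    <%@ Page Language="C#" EnableSessionState="False" %>
    <%-- Use EnableSessionState="ReadOnly" on pages that only read session state;
         it avoids taking the exclusive session lock. --%>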
Make sure your server ETag IDs are synchronized across the web farm, otherwise caching at client browsers will be upset.
Have you reviewed your code in detail to make sure everything can be serialized out of process and across a LAN efficiently?
Are you solving the main performance problem within your system? I ask because the database is the typical source of contention.
My main motivation for moving away from sticky sessions was operational flexibility, i.e. being able to cycle down a problematic server or deploy a software upgrade. So once you have implemented a central session state service, make sure you take full advantage of it from an operational standpoint.
In our experience, the native state server, or even using SQL Server for sessions, is a very scary scenario, as both have issues (mainly performance). By the way, we are also using sticky sessions.
I think you can explore other products to achieve the absolute best result. A free option would be Velocity, but it has not been released yet.
Another comprehensive and proven (but very expensive) product is NCache. It will even help with your serialization at lower cost, and if you use their APIs you will get even better results.
Take a look and see which looks best for you.
As for SQL Server, your server will die very soon if you have enough hits coming in (I believe you already have some traffic that led you to consider a web farm, or you are doing it just for the sake of redundancy).
Bottom line: we are evaluating Velocity because NCache is really expensive. However, the advantages are huge.
We are using StateServer for a very small web farm with only two nodes for a few hundred users.
I'm not responsible for its operation but I remember only two issues in two years where the service had to be restarted because it crashed.
I would like to add one more point to the accepted answer:
Make sure the versions of the framework DLLs are the same on every server.
In my case the System.Web DLL versions were different because a few Windows updates had been skipped on one of the servers in the farm.