Where to store multi-tenant, client-specific cache? - symfony

I'm building a SaaS application on top of Symfony2.
Our system is simple: one node balancer, a few application servers, and a few database servers.
Every app server has exactly the same copy of the app and differs only in its parameters.
Where to store client-specific cache?
Where to store app/cache? Separately for every client, e.g. app/cache/clientN?
Where to store HTTP cache? On the app servers or on the node balancer?
What if every client has a different domain?
Where to store database query/result cache? On DB server in memory (redis/riak/memcached)?

All of these can be served by the same distributed, highly available (HA) cache cluster. There are many options; Redis and Hazelcast are two examples. You only need to take care of cache grouping/naming.
Where to store client-specific cache?
I do not understand what client-specific cache means here.
Where to store app/cache? Separately for every client, e.g. app/cache/clientN?
The same cache can be used; just ensure that cache names differ for different entities. Whether to keep a separate cache per client or a shared one is a design decision: it depends on the degree of separation you want, the size of each client's cache, how one client's cache impacts the others, and so on. This is similar to choosing between a shared table, a shared database, or a separate database server in a multi-tenant implementation.
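To illustrate the grouping/naming idea, here is a minimal sketch in Python using the redis-py client (the language and the host name are my own choices for brevity; the same pattern applies to any Symfony cache adapter): every key is prefixed with the client identifier, so all tenants can safely share one cluster.

```python
import redis  # assumes the redis-py client; any cache client works the same way

r = redis.Redis(host="cache.internal", port=6379)  # hypothetical shared HA cluster

def tenant_key(client_id: str, name: str) -> str:
    """Namespace every cache key by client so tenants never collide."""
    return f"tenant:{client_id}:{name}"

def cache_set(client_id: str, name: str, value: str, ttl: int = 300) -> None:
    r.set(tenant_key(client_id, name), value, ex=ttl)

def cache_get(client_id: str, name: str):
    return r.get(tenant_key(client_id, name))

# Usage: two clients cache the same logical entry without clashing.
cache_set("client1", "homepage:menu", "<ul>...</ul>")
cache_set("client2", "homepage:menu", "<ul>...</ul>")
```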
Where to store HTTP cache? On the app servers or on the node balancer? What if every client has a different domain?
Static content can be cached on the node balancer; load balancers such as nginx support this. The same cache cluster can be used as well. If every client has a different domain, make sure the HTTP cache key includes the request host so entries for different tenants never collide.
Where to store database query/result cache? On the DB server, in memory (redis/riak/memcached)?
Again, the same cache cluster can be used. Note that memcached is not replicated; custom application code is required for that.
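As a hedged illustration of query/result caching against that shared cluster (again Python with redis-py; `run_query` is a stand-in for your database layer, and the key layout is my own): the result is keyed on the client plus a hash of the SQL text and its parameters, with a short TTL.

```python
import hashlib
import json
import redis

r = redis.Redis(host="cache.internal", port=6379)  # hypothetical shared cluster

def cached_query(client_id, sql, params, run_query, ttl=60):
    """Return a cached result set if present, otherwise run the query and cache it."""
    digest = hashlib.sha256((sql + json.dumps(params, sort_keys=True)).encode()).hexdigest()
    key = f"tenant:{client_id}:query:{digest}"
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)            # cache hit: skip the database entirely
    rows = run_query(sql, params)         # cache miss: hit the database
    r.set(key, json.dumps(rows), ex=ttl)  # short TTL keeps results reasonably fresh
    return rows
```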

Related

Is it worth adding an Azure Cache for Redis instance Output Cache provider to a single instance Web App?

I have a single instance ASP.NET MVC website running on Azure. I'm working on improving its speed.
ASP.NET Output Caching was added for faster page loads, with good results.
I was reading up on the possibility of using an Azure Redis instance as the Output Cache.
My thinking is:
The default Output Cache is best for a single-instance app; it should be the fastest because it runs on the same machine.
Azure Redis Cache would most likely be slower, since it adds an extra cache-lookup round trip between the Web App and the Redis instance.
Is that correct?
Correct. Given that all of your requests are processed within the same application instance, it is sufficient to use in-memory caching.
Azure Redis Cache would be beneficial if you had multiple processes that all want to share the same cache, e.g. if your website were running across multiple containers.
It depends on what you are trying to achieve. An in-memory cache will be quicker than Redis, but if you restart your app the cache has to be refreshed. In cases where you are caching large reference data, that refresh can be a real overhead. You can use a combination of in-memory and Redis in such a case, which also acts as a fail-safe if something goes wrong.
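A rough sketch of that two-tier idea, in Python rather than ASP.NET purely to show the shape of it (the redis-py client and the host name are assumptions; a real Azure Redis connection needs its own hostname, SSL and key): look in the local in-process dictionary first, fall back to Redis, and warm the local tier on a hit.

```python
import json
import redis

local = {}  # in-process cache: fastest, but lost whenever the app restarts
remote = redis.Redis(host="redis.internal", port=6379)  # stand-in for the Redis instance

def get(key, load_from_source, ttl=300):
    """Two-tier lookup: local dict -> Redis -> the original data source."""
    if key in local:
        return local[key]                      # fastest path, same machine
    raw = remote.get(key)
    if raw is not None:
        value = json.loads(raw)
        local[key] = value                     # warm the local tier, e.g. after a restart
        return value
    value = load_from_source(key)              # e.g. a database call for reference data
    remote.set(key, json.dumps(value), ex=ttl)
    local[key] = value
    return value
```

On a restart the in-process dictionary is empty, but the large reference data is still in Redis, so only the cheap local tier has to be rebuilt.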

Multiple Azure Web App Instances - Inconsistent DB Queries / Data

I have an Azure Web App with autoscaling configured with a minimum of 2 instances. Database is SQL Azure.
A user makes a change to the data, e.g. edits a product's price. The change makes it to the database and I can see it in SSMS. However, after the user refreshes the page, the page may or may not show the updated data.
My current theory is that it has something to do with having multiple instances of the web app, because if I turn off autoscale and run just one instance, the issue goes away.
I haven't configured any sort of caching in Azure at all.
It sounds like the data may or may not appear because it is stored in memory on the worker server (at least temporarily). When you have multiple worker servers, a different one may serve the request, and that server will not have the updated value in memory. The solution is to make sure your application code re-fetches the value from the database in every case (a toy sketch of this is shown after this answer).
Azure Web Apps has some built-in protection against this, called the ARR affinity cookie. Essentially each request carries a cookie that keeps sessions "sticky", i.e. if a worker server is serving requests for a certain user, that user's subsequent requests should go to the same server. This is the default behavior, but you may have disabled it. See: https://azure.microsoft.com/en-us/blog/disabling-arrs-instance-affinity-in-windows-azure-web-sites/
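To make the failure mode concrete, here is a toy sketch (Python, not ASP.NET; the dictionary stands in for the database and the class for each web app instance): only the instance that handled the edit refreshes its in-process copy, so requests served by the other instance show stale data until every request re-reads from the database or the copies are shared/invalidated.

```python
# Anti-pattern: each web app instance keeps its own in-process copy of the data.
class AppInstance:
    def __init__(self, db):
        self.db = db
        self.prices = {}                        # per-instance memory, not shared

    def price(self, product_id):
        if product_id not in self.prices:       # cached forever once read
            self.prices[product_id] = self.db[product_id]
        return self.prices[product_id]

db = {"widget": 10}                             # stand-in for SQL Azure
a, b = AppInstance(db), AppInstance(db)         # two autoscaled instances
a.price("widget"); b.price("widget")            # both now hold 10 in memory
db["widget"] = 12                               # user edits the price via instance a
a.prices["widget"] = 12                         # only instance a refreshes its copy
print(a.price("widget"), b.price("widget"))     # 12 10 <- refreshes routed to b look stale

# Fix: read from the database on every request instead of the per-instance dict.
print(db["widget"], db["widget"])               # 12 12
```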

Application variable across load balanced servers (ASP.Net)

We have a website that runs on two load-balanced servers. We use an ASP.NET Application variable for application state ("online/offline") and for some important messages across the application.
When I update an Application variable, it is available on one server but not on the other.
How can I manage an Application variable across load-balanced servers?
What should I use? Keeping it as simple as possible, of course.
Are you using sticky sessions? How often does the data change? Is an application cache even necessary?
One option: each web server stores (and manages, refreshes, invalidates) its own application cache. But then you run the risk of the copies diverging.
Another option: a distributed cache such as memcached or NCache or something else.
Another option: read/write the data out to a shared disk.
Store that information in a database that all servers can access and read it from there.
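As a minimal sketch of that shared-database approach (Python, with the standard-library sqlite3 module standing in for a real shared SQL Server; the table and setting names are invented): every server reads and writes the same row, so the flag is consistent everywhere.

```python
import sqlite3

# In production this would be a shared SQL Server/MySQL database reachable by every
# web server; sqlite3 is used here only so the sketch is self-contained.
conn = sqlite3.connect("app_state.db")
conn.execute("CREATE TABLE IF NOT EXISTS app_settings (name TEXT PRIMARY KEY, value TEXT)")

def set_setting(name, value):
    # Upsert so any server can flip the flag.
    conn.execute("INSERT OR REPLACE INTO app_settings (name, value) VALUES (?, ?)", (name, value))
    conn.commit()

def get_setting(name, default=None):
    row = conn.execute("SELECT value FROM app_settings WHERE name = ?", (name,)).fetchone()
    return row[0] if row else default

# Any server sets the flag; every other server sees it on its next read.
set_setting("site_status", "offline")
print(get_setting("site_status"))  # offline
```

Servers can re-read the value on every request or cache it briefly; either way there is a single source of truth.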

What is database backed Cache and how does it work?

What is a database-backed cache and how does it work? Something along the lines of: when the app server goes down and the database is backed by a cache, no time is wasted repopulating an in-memory cache.
A database-backed cache is a method of storing data that is costly (in resources or time) to generate or derive. You could see one implemented for such things as:
improving web server performance by caching dynamic pages in a database as static HTML, so additional hits to the page do not incur the overhead of regenerating it (yes, this might seem counter-intuitive, since database access is often the bottleneck, though in some cases it is not);
improving query time against a slow (or off-site) directory server or database.
If I understand your example correctly, I believe you have it backwards: the database is backing some other primary location. For example, in an app server farm, suppose a security token is stored in a database-backed cache and the app server you are currently interacting with goes down; you could then be routed to a different service instance. The token cache checks its in-memory cache, which won't contain the token, so the token is retrieved from the database, deserialized, and added to the (new) local cache. The benefits are reduced network transport and improved resilience to failures.
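A hedged sketch of that token flow (Python; `db_fetch`, `db_store` and `issue_new_token` are invented stand-ins for the shared database table and the token service): check this server's in-memory cache first, then the database-backed copy, and only regenerate as a last resort.

```python
import json

local_tokens = {}  # this app server's in-memory cache

def get_token(user_id, db_fetch, db_store, issue_new_token):
    """Read-through, database-backed token cache."""
    token = local_tokens.get(user_id)
    if token is not None:
        return token                              # fast path: already in local memory
    raw = db_fetch(user_id)                       # another server may have stored it
    if raw is not None:
        token = json.loads(raw)                   # deserialize the cached copy
    else:
        token = issue_new_token(user_id)          # costly path: generate from scratch
        db_store(user_id, json.dumps(token))      # back it with the database
    local_tokens[user_id] = token                 # warm this (new) server's cache
    return token
```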
Hope this helps.

ASP.NET In a Web Farm

What issues do I need to be aware of when I am deploying an ASP.NET application as a web farm?
All session state information would need to be replicated across servers. The simplest way is to use the SQL Server session state provider, as noted elsewhere.
Any disk access, such as dynamic files uploaded by users, would need to be in an area available to all servers, for example some form of network-attached storage. Script files, images, HTML and so on would simply be replicated to each server.
Any attempt to store information in the Application object, or to load information on application startup, would need to be reviewed: those events fire each time a user hits a new machine in the farm.
Machine keys across the servers are a very big one, as others have suggested. You may also have problems if you are using SSL against an IP address rather than a domain.
You'll also have to consider which load-balancing strategy you are going to use, as this could change your approach.
Sessions are a big one: make sure you use SQL Server for managing sessions and that all servers point to the same SQL Server instance.
One of the big ones I've run across is issues with different machineKeys spread across the different servers. ASP.NET uses the machineKey for various encryption operations such as ViewState and FormsAuthentication tickets. If you have different machineKeys you can end up with servers not understanding postbacks from other servers. Take a look here if you want more information: http://msdn.microsoft.com/en-us/library/ms998288.aspx
1. Don't use sessions; use profiles instead. You can configure a SQL cluster to serve them. Sessions query your session database far too often, while profiles just load themselves once, and that's it.
2. Use a distributed caching store like memcached for caching data, and the ASP.NET cache for things you'll need a lot.
3. Use a SAN or an EMC to serve your static content.
4. Use S3 or something similar as a fallback for item 3.
5. Have a decent load balancer, so you can easily update server by server without ever needing to shut down the site.
HOW TO: Set Up Multi-Server ASP.NET Web Applications and Web Services
Log aggregation is easily overlooked: before processing HTTP logs, you might need to combine them into a single log that includes the requests handled by all of the servers.
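For instance, here is a quick sketch of merging per-server access logs into one chronological file (Python; the file naming and the common-log-format timestamp are assumptions, and IIS/W3C logs would need a different parse):

```python
import glob
import re
from datetime import datetime

# Matches a common-log-format timestamp, e.g. [10/Oct/2023:13:55:36 +0000]
TS = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}) ")

def timestamp(line):
    m = TS.search(line)
    return datetime.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S") if m else datetime.min

# Combine the per-server logs (access-web1.log, access-web2.log, ...) and sort by time.
lines = []
for path in glob.glob("access-web*.log"):
    with open(path) as f:
        lines.extend(f)

lines.sort(key=timestamp)
with open("access-combined.log", "w") as out:
    out.writelines(lines)
```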
