What is a database-backed cache and how does it work? - asp.net

What is a database-backed cache and how does it work? Something along the lines of: when the app server goes down and the database is backed by a cache, there is no time wasted repopulating an in-memory cache.

A database-backed cache is a method of storing data that is costly (in resources or time) to generate or derive. You could see one implemented for things such as:
- Improving web server performance by caching dynamic pages in a DB as static HTML, so additional hits to the page do not incur the overhead of regenerating it. Yes, this might be counter-intuitive, since database access is often the bottleneck, though in some cases it is not.
- Improving query time against a slow (or offsite) directory server or database.
If I understand your example correctly, I believe you might have it backwards: the database is backing some other primary location. For example, in an app-server farm, if a security token is stored in a DB-backed cache and the app server you are currently interacting with goes down, you could be routed to a different service instance. The token cache checks its in-memory cache, which won't contain the token, so the token is retrieved from the database, deserialized, and added to the (new) local cache. The benefits are minimized network transport and improved resilience to failures.
Hope this helps.
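The failover flow described above can be sketched in a few lines of Python. This is only an illustration: a plain dict stands in for the database table, and the class and key names are made up.

```python
class DbBackedCache:
    """Read-through cache: check local memory first, then fall back to
    the backing store (a dict stands in for the database here)."""

    def __init__(self, db):
        self.db = db          # backing store (stand-in for a DB table)
        self.local = {}       # this instance's in-memory cache

    def get(self, key):
        if key in self.local:             # fast path: local hit
            return self.local[key]
        value = self.db.get(key)          # miss: read from the database
        if value is not None:
            self.local[key] = value       # warm the local cache
        return value

# Simulate failover: instance A persisted a token, instance B starts cold
db = {"token:42": "abc123"}               # token saved by instance A
instance_b = DbBackedCache(db)
assert instance_b.get("token:42") == "abc123"   # recovered from the DB
assert "token:42" in instance_b.local           # now cached locally too
```

The second `get` for the same key would be served from memory, which is the whole point: the database makes the cache survivable, and the local dict keeps the hot path fast.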

Related

Is it worth adding an Azure Cache for Redis instance Output Cache provider to a single instance Web App?

I have a single instance ASP.NET MVC website running on Azure. I'm working on improving its speed.
ASP.NET Output Caching was added for faster page loads, with good results.
I was reading up on the possibility of using an Azure Redis instance as the Output Cache.
My thinking is:
The default Output Cache is best for a single-instance app; it should be the fastest because it runs on the same machine.
Azure Redis Cache would most likely be slower, since it adds an extra cache-lookup roundtrip between the Web App and the Redis instance.
Is that correct?
Correct, given that all of your requests are being processed within the same application it's sufficient to use in-memory caching.
Azure Redis Cache would be beneficial if you had multiple processes which all wanted to share the same cache, e.g. if your website was running across multiple containers.
It depends on what you are trying to achieve. An in-memory cache will be quicker than Redis, but say you restart your app: the cache would need to be refreshed. In cases where you are caching large reference data, this might be an overhead. You can use a combination of in-memory and Redis in such a case, which also acts as a fail-safe in case something goes wrong.
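The in-memory-plus-Redis combination suggested above can be sketched as a two-tier read path. In this illustration a dict stands in for the Redis instance, and all names are hypothetical:

```python
class TwoTierCache:
    """L1 in-process dict plus a shared L2 store (a dict stands in for
    Redis). Reads try L1, then L2, then the loader; an app restart only
    loses L1, so the cache does not need a full rebuild."""

    def __init__(self, l2):
        self.l1 = {}          # per-process memory, lost on restart
        self.l2 = l2          # shared remote cache, survives restarts

    def get(self, key, loader):
        if key in self.l1:
            return self.l1[key]
        if key in self.l2:                 # refill L1 from L2
            self.l1[key] = self.l2[key]
            return self.l1[key]
        value = loader(key)                # slow path: source of truth
        self.l1[key] = self.l2[key] = value
        return value

calls = []
def loader(key):
    calls.append(key)          # record expensive source reads
    return 9.99

shared_l2 = {}
app = TwoTierCache(shared_l2)
assert app.get("price:1", loader) == 9.99    # cold start: loader runs
app = TwoTierCache(shared_l2)                # simulate app restart (L1 lost)
assert app.get("price:1", loader) == 9.99    # refilled from L2, not the loader
assert calls == ["price:1"]                  # loader ran only once
```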

Multiple Azure Web App Instances - Inconsistent DB Queries / Data

I have an Azure Web App with autoscaling configured with a minimum of 2 instances. Database is SQL Azure.
User will make changes to the data e.g. edit a product's price. The change will make it to the database and I can see it in SSMS. However, after user refreshes the page, the data may or may not get updated.
My current theory is something to do with having multiple instances of the web app, because if I turn off autoscale and just have 1 instance, the issue is gone.
I haven't configured any sort of caching in Azure at all.
It sounds like what is happening is the data may or may not appear because it is stored in memory on the worker server (at least temporarily). When you have multiple worker servers, a different one may serve the request, in which case that server would not have the value in memory. The solution is to make sure that your application's code is re-fetching the value from the database in every case.
Azure Web Apps has some built-in protection against this, called the ARR affinity cookie. Essentially each request carries a cookie which keeps sessions "sticky", i.e. if a worker server has been serving a certain user's requests, that server should receive that user's subsequent requests as well. This is the default behavior, but you may have disabled it. See: https://azure.microsoft.com/en-us/blog/disabling-arrs-instance-affinity-in-windows-azure-web-sites/
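A minimal sketch of why per-instance memory causes this symptom, and why re-fetching fixes it. Plain Python, with dicts standing in for the worker servers and the SQL database; the names are illustrative:

```python
class WebInstance:
    """Each web app instance keeps its own in-memory copy; an instance
    that missed the write serves stale data unless it re-fetches."""

    def __init__(self, db):
        self.db = db
        self.memory = {}

    def read_cached(self, key):           # stale-prone: memory wins
        return self.memory.setdefault(key, self.db[key])

    def read_fresh(self, key):            # correct: always re-fetch
        self.memory[key] = self.db[key]
        return self.memory[key]

db = {"product:1": 10.0}
a, b = WebInstance(db), WebInstance(db)   # two autoscaled instances
a.read_cached("product:1")
b.read_cached("product:1")
db["product:1"] = 12.0                    # user edits the price via instance a
assert b.read_cached("product:1") == 10.0 # b still serves the old price
assert b.read_fresh("product:1") == 12.0  # re-fetching from the DB fixes it
```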

Where to store multi-tenant user specified cache?

I'm building SaaS application on top of Symfony2.
Our system is simple: one node balancer, a few application servers, and a few database servers.
Every app server has exactly the same copy of the app and differs only in parameters.
Where to store client specified cache?
Where to store app/cache? Separate for every client app/cache/clientN?
Where to store HTTP cache? On app servers or node balancer?
What if every client has different domain?
Where to store database query/result cache? On DB server in memory (redis/riak/memcached)?
All of that can be served by the same cluster of distributed HA cache. There are many. Redis/Hazelcast are examples of HA cache. You only need to take care of cache grouping/naming.
Where to store client specified cache?
I do not understand what a client-specified cache is.
Where to store app/cache? Separate for every client app/cache/clientN?
The same cache can be used; ensure that cache naming is different for different entities. Whether to use a separate cache or the same one depends on the extent of separation desired, on the size of each cache, on how it impacts other clients, etc. This is similar to choosing between a shared table, a shared DB, or a separate DB server in a multi-tenant implementation.
Where to store HTTP cache? On app servers or node balancer? What if every client has a different domain?
Static content can be cached on node balancer. Load balancers like nginx support this capability. The same cache server can be used too.
Where to store database query/result cache? On DB server in memory (redis/riak/memcached)?
Again, the same cache cluster can be used. Note memcached is not replicated. Custom app code is required for that.
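The cache grouping/naming mentioned above usually comes down to prefixing keys with a tenant id. A sketch under that assumption, with a dict standing in for the shared Redis/Hazelcast cluster and all names invented for illustration:

```python
def tenant_key(tenant_id, namespace, key):
    """Namespace cache entries per tenant so clients sharing one cache
    cluster can never collide on a key."""
    return f"{tenant_id}:{namespace}:{key}"

shared_cache = {}   # stands in for a Redis/Hazelcast cluster

def put(tenant, ns, key, value):
    shared_cache[tenant_key(tenant, ns, key)] = value

def get(tenant, ns, key):
    return shared_cache.get(tenant_key(tenant, ns, key))

# Two tenants cache the same logical page without interfering
put("client1", "http", "/home", "<html>A</html>")
put("client2", "http", "/home", "<html>B</html>")
assert get("client1", "http", "/home") == "<html>A</html>"
assert get("client2", "http", "/home") == "<html>B</html>"
```

The same prefix scheme covers the app cache, the HTTP cache, and the query/result cache: the namespace segment ("http", "query", "app") keeps the groups apart inside one cluster.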

How to synchronize server operation in multiple instances web application on Azure?

I have a client-server web application - the client is HTML/JS and the server is ASP.NET - hosted as an Azure Web Role.
In this application the client can save a document on the server by calling a web service method on the server side.
After calling the save method, the client might save the document again while the server is still processing the previous save request. In this case I want the newly initiated save to be queued until the previous save operation is completed.
If I had a single web role instance, it would be easy for me to implement this by thread synchronization, but since the client request might be handled by different web role instances, the synchronization is problematic.
My question is: how can I implement such a synchronization mechanism, or is there a better way to get the same result that I'm not aware of?
Thanks.
I would consider a combination of Storage or Service Bus queues to queue up the requests to process the documents, AND using blob leases to mark work as in progress.
Queuing would be important, since a request might be delayed in processing if a previous request for the same job is still ongoing.
BLOB leasing is a way to put a centralized lock in storage. Once you start processing a request, you can put a blob with a lease on it and release the lease once you're done. Requests for the same work would first check whether the lease is available before kicking off; otherwise, they just wait. More info on leases here: http://blog.smarx.com/posts/leasing-windows-azure-blobs-using-the-storage-client-library
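A rough sketch of the lease idea described above. This is not the actual Azure Storage API: a dict stands in for blob storage, the blob names are hypothetical, and the timeout behavior is simplified.

```python
import time
import uuid

class LeaseStore:
    """Time-limited exclusive locks, modeled loosely on blob leases.
    Only the holder of the lease id can release it; an expired lease
    can be re-acquired by anyone."""

    def __init__(self):
        self.leases = {}   # blob name -> (lease_id, expiry)

    def acquire(self, name, duration=15.0):
        now = time.monotonic()
        current = self.leases.get(name)
        if current and current[1] > now:
            return None                       # someone else holds the lease
        lease_id = str(uuid.uuid4())
        self.leases[name] = (lease_id, now + duration)
        return lease_id

    def release(self, name, lease_id):
        if self.leases.get(name, (None,))[0] == lease_id:
            del self.leases[name]

store = LeaseStore()
lease = store.acquire("doc-42.lock")
assert lease is not None                      # first worker wins
assert store.acquire("doc-42.lock") is None   # second worker must wait
store.release("doc-42.lock", lease)
assert store.acquire("doc-42.lock") is not None  # free again after release
```

A worker that pulls a save request from the queue would try to acquire the lease for that document; on failure it re-queues or delays the message, which gives you the "queue until the previous save completes" behavior across instances.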
Have you looked into using the Windows Azure Cache - Co-Located on your roles?
This is a shared caching layer that can use excess memory on your roles (or have its own worker role if your web roles are already close to capacity) to create a key/value store which can be accessed by any role in the same deployment.
You could use the cache to store a value indicating that a document is currently being processed, and block until the document has finished uploading. As it is a shared caching layer, the value will be visible across your instances (though the cache will not persist through an upgrade deployment).
Here's a good introductory article on using caching in Azure, with configuration examples and sample code.

How can I use caching to improve performance?

My scenario is : WebApp -> WCF Service -> EDMX -> Oracle DB
When I want to bind the grid, I fetch records from the Oracle DB using the EDMX, i.e. a LINQ query. But this degrades performance, as multiple layers sit between the WebApp and the Oracle DB. Can I use a caching mechanism to improve performance? As far as I know, the cache is shared across the whole application, so if I update the cache, other users might receive wrong information. Can we use caching per user? Or is there any other way to improve the performance of the application?
Yes, you can definitely use caching techniques to improve performance. Generally speaking, caching is “application wide” (or it should be) and the same data is available to all users. But this really depends on your scenario and implementation. I don't see how adding the extra caching layer will degrade performance, it's a sound architecture and well worth the extra complexity.
ASP.NET Caching has a concept of "cache dependencies" which is a method to notify the caching mechanism that the underlying source has changed, and the cached data should be flushed and reloaded on the next request. ASP.NET has a built-in cache dependency for SQL Server, and a quick Google search revealed there’s probably also something you can use with Oracle.
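The cache-dependency idea - flush dependent entries when the underlying source signals a change - can be sketched generically. This is not the ASP.NET SqlCacheDependency API, just the pattern behind it, with invented names throughout:

```python
class DependentCache:
    """Cache-dependency sketch: entries register against a source name;
    when the source signals a change, its dependent entries are flushed
    so they get reloaded on the next request."""

    def __init__(self):
        self.entries = {}       # cache key -> value
        self.deps = {}          # source name -> set of dependent keys

    def add(self, key, value, depends_on):
        self.entries[key] = value
        self.deps.setdefault(depends_on, set()).add(key)

    def notify_changed(self, source):
        # In ASP.NET this notification would come from the database;
        # here the caller fires it by hand.
        for key in self.deps.pop(source, set()):
            self.entries.pop(key, None)

cache = DependentCache()
cache.add("grid:page1", [("widget", 9.99)], depends_on="PRODUCTS")
cache.notify_changed("PRODUCTS")          # the table changed underneath
assert "grid:page1" not in cache.entries  # flushed; next request reloads
```

This is what keeps a shared, application-wide cache from serving stale grid data: correctness comes from invalidation, not from giving each user a private copy.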
As Jakob mentioned, application-wide caching is a great way to improve performance. Generally user context-agnostic data (eg reference data) can be cached at the application level.
You can also cache user-context data by storing it in the user's session when they log in. The data is then cached for the duration of that user's session (HttpContext.Session).
Session data can be configured to be stored in the web application process memory, in a state server (a special WCF service) or in a SQL Server database, depending on the architecture and infrastructure.
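Per-user caching as described above amounts to keying the cache by session id. A minimal sketch (the names are illustrative, not the ASP.NET session API):

```python
class SessionCache:
    """Per-user caching: each session id gets its own bucket, so one
    user's cached data never leaks to another. HttpContext.Session
    gives you the same isolation in ASP.NET."""

    def __init__(self):
        self.sessions = {}   # session id -> that user's private bucket

    def get(self, session_id, key, loader):
        bucket = self.sessions.setdefault(session_id, {})
        if key not in bucket:
            bucket[key] = loader()   # fetch once per user, e.g. via the WCF service
        return bucket[key]

cache = SessionCache()
alice = cache.get("sess-a", "orders", lambda: ["order-1"])
bob = cache.get("sess-b", "orders", lambda: ["order-7"])
assert alice == ["order-1"] and bob == ["order-7"]   # isolated per user
```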