How can .NET applications running in a cluster communicate with each other? - asp.net

I have a single .NET web app running in an ARR cluster (IIS) with multiple machines.
Each machine must keep a cache for user access permissions in memory. That is, when the app must determine whether the user has permission to access a resource, it queries a memory cache to avoid database access (there's a lot of queries per user request).
The problem is that, in certain situations, this cache must be invalidated. But, as there are multiple machines, when one decides the cache must be invalidated, it has to be propagated to the other machines.
What is the best practice to solve this problem? We have an ASP.NET MVC 3 app running on a IIS ARR cluster.

Message queues are the normal solution.
You can have a queue that is subscribed to by all your nodes; when you need to invalidate a cache, you send a message to the queue - the other nodes see this message and invalidate their caches.
MSMQ is Microsoft's message queue, but there are many third-party ones. You may want to take a look at NServiceBus as an alternative.

Do you want to invalidate the entire permissions cache, or only the relevant parts (e.g. a particular user, a particular role, etc.)?
Either way, I'm thinking along the lines of a pub/sub pattern, e.g. http://redis.io/topics/pubsub - a server that the apps subscribe to (sub), and through which any app can request cache invalidation by publishing a message to all subscribers (pub).
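As a rough sketch of that pub/sub approach in C# using the StackExchange.Redis client (the channel name and the shape of the in-memory cache are my own invention for illustration):

```csharp
using System.Collections.Concurrent;
using StackExchange.Redis;

public static class PermissionCache
{
    // Per-machine in-memory cache: userId -> permission keys (illustrative).
    private static readonly ConcurrentDictionary<string, string[]> Cache =
        new ConcurrentDictionary<string, string[]>();

    private const string Channel = "perm-cache-invalidate"; // hypothetical channel name

    // Called once at app start on every node in the cluster.
    public static void Subscribe(ConnectionMultiplexer redis)
    {
        // When any node publishes a user id, every node drops
        // that user's cached permissions.
        redis.GetSubscriber().Subscribe(Channel, (channel, userId) =>
            Cache.TryRemove((string)userId, out _));
    }

    // Called by whichever node decides the cached permissions are stale.
    public static void Invalidate(ConnectionMultiplexer redis, string userId)
    {
        Cache.TryRemove(userId, out _);                  // drop it locally
        redis.GetSubscriber().Publish(Channel, userId);  // tell every other node
    }
}
```

The same pattern works with MSMQ or NServiceBus; Redis pub/sub just happens to need the least ceremony for fire-and-forget invalidation messages.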

Related

Azure Scale Out WebApp Connection Constantly Switches Between Servers

We have an ASP.NET WebForms website running in an Azure WebApp with automatic "Scale Out" enabled. I can see there are currently two instances running. We have a test page with the following code:
Request.ServerVariables["LOCAL_ADDR"]
If we constantly refresh this page, the IP randomly switches between two different values (presumably for the two different website instances that are running). This is breaking functionality that relies on Sessions, as the Sessions are server-specific.
Is there a way to keep users on the same server instead of connecting them to a random server for each request? (I understand this wouldn't fully solve the problem, so my next question is...)
Is it not viable to use Session objects with the "Scale Out" feature enabled? If not, what are some other options? If we use cookies, I'm concerned about reaching data limits since we occasionally use Sessions to preserve large data sets for short periods. If we use something like a Redis cache, it adds to our operating costs. Is there a better way to do this?
In Azure App Service we need to enable ARR affinity to keep the session active on one server.
Application Request Routing identifies the user and assigns an affinity cookie. The client establishes its session with the current instance and stays pinned to that instance until the session expires.
ARR affinity will not help when we scale out the App Service: when we scale out, new instances of the app service are created, and affinity fails if a request is routed to a new server.
Thanks @ajkuma-MSFT
If our application is stateful, scaling up is the best option; if our application is stateless, scaling out gives greater flexibility and better scalability potential.
Instead of scaling out, we can scale up the App Service plan by increasing the size and SKU of our existing plan to a higher tier with more compute and features, and then enable ARR affinity, which keeps sessions active and persistent on one server.
If we use something like a Redis cache, it adds to our operating costs.
Thanks @Praveen Kumar Sreeram
When you configure the load balancer with the auto-scaling capability, Sessions won't work as planned in Azure App Service.
Another option is to use Redis Cache.
Currently I am using the Standard tier S1.
With an auto-scale rule, we can scale back down when the extra capacity is not required.
Scale up to Standard S3.
One affinity cookie ties the client to one app server, even across repeated requests; this keeps the session persistent and active.
And as no new instance of the app server is created, the application session will remain active thanks to ARR affinity.
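If sticky sessions are not acceptable, the other route mentioned above is moving session state out of process so that any instance can serve any request. A hedged sketch of the web.config for the Microsoft.Web.RedisSessionStateProvider NuGet package (host name and key are placeholders):

```xml
<system.web>
  <sessionState mode="Custom" customProvider="RedisSessionProvider">
    <providers>
      <add name="RedisSessionProvider"
           type="Microsoft.Web.Redis.RedisSessionStateProvider"
           host="your-cache.redis.cache.windows.net"
           port="6380"
           accessKey="…"
           ssl="true" />
    </providers>
  </sessionState>
</system.web>
```

This does add an Azure Cache for Redis to the bill, as the question anticipates, but it removes the scale-out restriction entirely.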
Reference taken from Microsoft Q&A

Difference between web and desktop applications in database access

I have a somewhat theoretical question.
Web applications differ from desktop applications, which have a working, persistent connection to the database. So I'm curious whether there is a solution that provides more desktop-like database access, e.g. transactions spanning asynchronous requests from the client (web browser)?
edit:
So I figured out that a transaction could span asynchronous requests from the client. Is there a solution that provides this in web apps?
e.g. I have an asynchronous AJAX call which consists of multiple operations, and I want to process them as a transaction. If everything is okay, all the operations are committed. But if one of them fails, everything is rolled back, like in a DB. Is that possible?
edit2: Maybe I'm wrong and the issue is not about AJAX but about web applications as a whole; still, I don't see another way to make an asynchronous request from a web client.
A transaction needs a continuous connection to the database. To make this work in a web application you need a platform that lets the application run continuously, independent of client requests. A Java servlet is the best fit; PHP is a no-no. So I assume you will use a Java servlet.
In a servlet, you can create a DB transaction, create an id for it, and store both in a static variable or in the provided application-wide object, the context. Then return the id to the client.
When the client wants to send another request, have it include the id. The application can then locate the transaction by its id. As long as the application hasn't restarted between the two requests, the transaction is still there and active.
Because a web application doesn't know when the user leaves, you must create a mechanism that checks the open transactions periodically and rolls back any that have been left idle for a specified time period.
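Since the rest of this page is about ASP.NET, here is the same idea translated to C# (a minimal sketch, assuming SQL Server; the store, timeout, and method shapes are all made up):

```csharp
using System;
using System.Collections.Concurrent;
using System.Data.SqlClient;

public static class TransactionStore
{
    private sealed class Entry
    {
        public SqlConnection Connection;
        public SqlTransaction Transaction;
        public DateTime LastTouched;
    }

    private static readonly ConcurrentDictionary<Guid, Entry> Open =
        new ConcurrentDictionary<Guid, Entry>();

    // First AJAX call: open a transaction and hand its id back to the client.
    public static Guid Begin(string connectionString)
    {
        var conn = new SqlConnection(connectionString);
        conn.Open();
        var id = Guid.NewGuid();
        Open[id] = new Entry
        {
            Connection = conn,
            Transaction = conn.BeginTransaction(),
            LastTouched = DateTime.UtcNow
        };
        return id;
    }

    // Later AJAX calls: look the transaction up by id and run more work in it.
    public static SqlTransaction Get(Guid id)
    {
        var entry = Open[id];
        entry.LastTouched = DateTime.UtcNow;
        return entry.Transaction;
    }

    // Called from a background timer: roll back transactions the user abandoned.
    public static void SweepOlderThan(TimeSpan idle)
    {
        foreach (var pair in Open)
        {
            if (DateTime.UtcNow - pair.Value.LastTouched > idle
                && Open.TryRemove(pair.Key, out var entry))
            {
                entry.Transaction.Rollback();
                entry.Connection.Dispose();
            }
        }
    }
}
```

Note the caveat from the answer still applies: this only works while requests land on the same process, so on a multi-instance cluster it would need sticky routing.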
The database has no knowledge of who is connected outside of authentication.

Multiple Azure Web App Instances - Inconsistent DB Queries / Data

I have an Azure Web App with autoscaling configured with a minimum of 2 instances. Database is SQL Azure.
User will make changes to the data e.g. edit a product's price. The change will make it to the database and I can see it in SSMS. However, after user refreshes the page, the data may or may not get updated.
My current theory is something to do with having multiple instances of the web app, because if I turn off autoscale and just have 1 instance, the issue is gone.
I haven't configured any sort of caching in Azure at all.
It sounds like what is happening is the data may or may not appear because it is stored in memory on the worker server (at least temporarily). When you have multiple worker servers, a different one may serve the request, in which case that server would not have the value in memory. The solution is to make sure that your application's code is re-fetching the value from the database in every case.
Azure Web Apps has some built-in protection against this, called the ARR affinity cookie. Essentially each request carries a cookie which keeps sessions "sticky": if a worker server is serving requests for a certain user, that user's subsequent requests should go to the same server as well. This is the default behavior, but you may have disabled it. See: https://azure.microsoft.com/en-us/blog/disabling-arrs-instance-affinity-in-windows-azure-web-sites/
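To make the first answer's point concrete in C# (names are illustrative): the pattern that causes the symptom is a per-instance static cache, and the fix is to go back to the database, or to a cache shared by all instances, on every read:

```csharp
using System;

public class ProductService
{
    // Anti-pattern with 2+ instances: each server keeps its own copy,
    // so a price edited via instance A still shows the old value on instance B.
    // private static decimal _cachedPrice;

    private readonly Func<int, decimal> _loadPriceFromDb;

    public ProductService(Func<int, decimal> loadPriceFromDb)
    {
        _loadPriceFromDb = loadPriceFromDb;
    }

    public decimal GetPrice(int productId)
    {
        // Re-fetch from SQL Azure (or a shared cache such as Redis)
        // on every request, so every instance sees the same data.
        return _loadPriceFromDb(productId);
    }
}
```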

How to Design a Database Monitoring Application

I'm designing a database monitoring application. Basically, the database will be hosted in the cloud and record-level access to it will be provided via custom-written clients for Windows, iOS, Android etc. The basic scenario can be implemented via web services (ASP.NET Web API). For example, the client will make a GET request to the web service to fetch an entry. However, one of the requirements is that the client should automatically refresh its UI if another user (using a different instance of the client) updates the same record, and the auto-refresh needs to happen within a second of the record being updated, so that the info is always up to date.
Polling could be an option, but the active clients could number in the hundreds of thousands, so I'm looking for a more robust solution that is lightweight on the server. I'm versed in .NET and C++/Windows and could roll out a complete solution in C++/Windows using I/O completion ports, but that feels like overkill and would require too much development time. I looked into ASP.NET Web API, but not being able to send out notifications is its limitation. Are there any frameworks/technologies in the Windows ecosystem that can address this scenario and scale easily as well? Any good options outside the Windows ecosystem, e.g. node.js?
You did not specify a database, so if you are able to use MS SQL Server, you may want to look up the SQL Dependency feature. If configured and used correctly, you will be notified of any changes in the database.
Pair this with SignalR or any real-time front-end framework of your choice and you'll have real-time updates as you described.
One catch though is that SQL Dependency only tells you that something changed; whatever it was, you are responsible for tracking down which record it was. That adds an extra layer of difficulty, but it is much better than polling.
You may want to search through the sqldependency tag here at SO to go from here to where you want your app to be.
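A rough sketch of wiring SqlDependency to a SignalR hub, as the answer suggests (classic ASP.NET SignalR 2; the hub name, query, and client method are made up):

```csharp
using System.Data.SqlClient;
using Microsoft.AspNet.SignalR;

public class ProductWatcher
{
    private readonly string _connectionString;

    public ProductWatcher(string connectionString)
    {
        _connectionString = connectionString;
        // Must be called once per app, e.g. in Application_Start.
        SqlDependency.Start(_connectionString);
        Watch();
    }

    private void Watch()
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT ProductId, Price FROM dbo.Products", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // SqlDependency fires once and only says "something changed":
                // re-subscribe, work out what changed, then push to clients.
                Watch();
                GlobalHost.ConnectionManager.GetHubContext<ProductHub>()
                          .Clients.All.refresh();
            };
            conn.Open();
            // The query must actually execute for the subscription to register.
            using (var reader = cmd.ExecuteReader()) { while (reader.Read()) { } }
        }
    }
}

public class ProductHub : Hub { } // clients connect and listen here
```

Note that Service Broker must be enabled on the database, and the query has to follow SQL Server's notification rules (two-part table names, no `SELECT *`).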
My first thought was a web service call that "stays alive", or the HTML5 protocol called WebSockets. You can maintain lots of connections, but hundreds of thousands seems too large. Therefore the web service needs a way to contact the clients over stateless connections: build a web service in the client that the server can communicate with. This may be an issue due to firewalls.
If firewalls are not an issue, then you may not need a web service in the client; you can instead implement a server socket on the client.
For mobile clients, if implementing a server socket is not a possibility then use push notifications. Perhaps look at https://stackoverflow.com/a/6676586/4350148 for a similar issue.
Finally you may want to consider a content delivery network.
One last point is that hopefully you don't need to contact all 100000 users within 1 second. I am assuming that with so many users you have quite a few servers.
Take a look at Maximum concurrent Socket.IO connections regarding the maximum number of open WebSocket connections.
Also consider whether your estimate of on the order of 100,000 simultaneous users is accurate.

How to synchronize server operation in multiple instances web application on Azure?

I have a client-server web application - the client is HTML/JS and the server is ASP.NET, hosted in an Azure web role.
In this application the client can save a document on the server by calling a web service method on the server side.
After calling the save method, the client might save the document again while the server is still processing the previous save request. In this case I want the new save to be queued until the previous save operation completes.
If I had a single web role instance, it would be easy for me to implement this by thread synchronization, but since the client request might be handled by different web role instances, the synchronization is problematic.
My question is: how can I implement such a synchronization mechanism, or is there a better way to get the same result that I'm not aware of?
Thanks.
I would consider combination of storage or service bus queues to queue up the requests to process the documents AND using BLOB leases to mark the work as in progress.
Queuing would be important since the requests might be delayed in processing if there is a previous request for the same job that's on going.
BLOB leasing is a way to take a centralized lock in storage. Once you start processing a request, you can write a blob with a lease on it and release the lease once you're done. Requests for the same work would first check whether the lease is available before kicking off; otherwise, they just wait. More info on leases here: http://blog.smarx.com/posts/leasing-windows-azure-blobs-using-the-storage-client-library
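A minimal lease sketch with the current Azure.Storage.Blobs SDK (the linked article uses the older storage client, but the idea is the same; blob naming and timings are my own choices):

```csharp
using System;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static class DocumentLock
{
    // Returns the lease id if we won the lock, or null if another
    // instance is already processing this document.
    public static string TryAcquire(BlobContainerClient container, string documentId)
    {
        // One lock blob per document (hypothetical naming scheme).
        var blob = container.GetBlobClient($"locks/{documentId}");
        if (!blob.Exists())
            blob.Upload(BinaryData.FromString("lock"), overwrite: true);

        try
        {
            // Leases run 15-60 seconds (or infinite); renew while working.
            return blob.GetBlobLeaseClient()
                       .Acquire(TimeSpan.FromSeconds(30)).Value.LeaseId;
        }
        catch (RequestFailedException)
        {
            return null; // lease held elsewhere: leave the request queued, retry later
        }
    }

    public static void Release(BlobContainerClient container, string documentId, string leaseId)
    {
        container.GetBlobClient($"locks/{documentId}")
                 .GetBlobLeaseClient(leaseId)
                 .Release();
    }
}
```

Combined with a storage or Service Bus queue, a worker would dequeue a save request, call `TryAcquire`, and re-queue the message with a delay if the lease is taken.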
Have you looked into using the Windows Azure Cache - Co-Located on your roles?
This is a shared caching layer that can use excess memory on your roles (or have its own worker role if your web roles are already close to capacity) to create a key/value store which can be accessed by any role in the same deployment.
You could use the cache to store a value indicating that a document is currently being processed, and block until the document has finished uploading. As it is a shared caching layer, the value is visible across your instances (though the cache will not persist through an upgrade deployment).
Here's a good introductory article on using Caching in Azure with configuration examples and sample code.
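A sketch of that cache-as-lock idea with the role-based caching (AppFabric) client this answer refers to; I'm assuming the `DataCache` API here, and the key scheme and TTL are invented:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

public static class DocumentGate
{
    private static readonly DataCache Cache =
        new DataCacheFactory().GetDefaultCache();

    // Mark the document as "being saved" in the shared cache.
    // DataCache.Add throws if the key already exists, giving us an
    // atomic test-and-set visible to every role instance.
    public static bool TryEnter(string documentId)
    {
        try
        {
            // TTL guards against a crashed instance holding the lock forever.
            Cache.Add("saving:" + documentId, true, TimeSpan.FromMinutes(1));
            return true;
        }
        catch (DataCacheException)
        {
            return false; // another instance is already saving this document
        }
    }

    public static void Exit(string documentId)
    {
        Cache.Remove("saving:" + documentId);
    }
}
```

A caller getting `false` would wait and retry rather than starting a second save; the blob-lease approach in the other answer is sturdier, since the cache can be wiped by a redeployment.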
