Azure Scale Out WebApp Connection Constantly Switches Between Servers - asp.net

We have an ASP.NET WebForms website running in an Azure WebApp with automatic "Scale Out" enabled. I can see there are currently two instances running. We have a test page with the following code:
Request.ServerVariables["LOCAL_ADDR"]
If we constantly refresh this page, the IP randomly switches between two different values (presumably for the two different website instances that are running). This is breaking functionality that relies on Sessions, as the Sessions are server-specific.
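For reference, the test page is essentially just this in the code-behind (a minimal sketch; the page name is illustrative):

    using System;
    using System.Web.UI;

    public partial class IpTest : Page // illustrative name
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Writes the private IP of the instance that served this request.
            Response.Write(Request.ServerVariables["LOCAL_ADDR"]);
        }
    }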
Is there a way to keep users on the same server instead of connecting them to a random server for each request? (I understand this wouldn't fully solve the problem, so my next question is...)
Is it not viable to use Session objects with the "Scale Out" feature enabled? If not, what are some other options? If we use cookies, I'm concerned about hitting cookie size limits, since we occasionally use Sessions to preserve large data sets for short periods. If we use something like a Redis cache, it adds to our operating costs. Is there a better way to do this?

In Azure App Service, we need to enable ARR Affinity to keep the session pinned to one server.
Application Request Routing (ARR) identifies the user and assigns an affinity cookie. The client establishes its session with the current instance and stays on that instance until the session expires.
ARR affinity will not work reliably when we scale out the App Service: when we scale out, new instances of our app service are created, and ARR affinity will fail if a request is routed to a new server.
Thanks @ajkuma-MSFT
If our application is stateful, scaling up is the best option; if our application is stateless, scaling out gives greater flexibility and better scalability potential.
Instead of scaling out, we can scale up the App Service plan by increasing the size and SKU of our existing plan to a higher tier with more compute and features, and then enable ARR affinity, which helps the sessions remain active and persistent on one server.
"If we use something like a Redis cache, it adds to our operating costs."
Thanks @Praveen Kumar Sreeram
When you configure the load balancer with the auto-scaling capability, Sessions won't work as planned in Azure App Service.
Another option is to use Redis Cache.
Currently I am using the Standard Tier S1.
With an autoscale rule, we can scale back down when the extra capacity is not required.
Scale up to Standard S3.
One affinity cookie ties the client to the same app server across repeated requests, which keeps the session persistent and active.
And since no new app server instance is created when scaling up, the application session remains active thanks to ARR affinity.
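To verify the behavior, here is a minimal sketch (the WEBSITE_INSTANCE_ID environment variable and the ARRAffinity cookie name are standard in App Service; the page name is illustrative):

    using System;
    using System.Web;
    using System.Web.UI;

    public partial class AffinityCheck : Page // illustrative name
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // WEBSITE_INSTANCE_ID is set by App Service on each instance.
            string instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
            // ARRAffinity is the cookie App Service uses for sticky sessions.
            HttpCookie affinity = Request.Cookies["ARRAffinity"];
            Response.Write("Instance: " + instanceId +
                           ", ARRAffinity cookie present: " + (affinity != null));
        }
    }

With ARR affinity enabled, the instance id should stay constant across refreshes from the same browser.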
Reference taken from Microsoft Q&A

Related

How to implement SignalR scale-out without using existing backplane options

I am using SignalR hosted on multiple servers behind a load balancer. I store the connection id and the user id in a custom database table in SQL Server. Each time, I need to send notifications to selected users. It works fine in a single-server environment. How do I scale the SignalR implementation with a custom database table, without using the existing backplane options?
I am not sure what your current implementation is, because your explanation mixes a few things. If you have multiple servers behind a load balancer, it means you have already applied some techniques (I think so!). But you said it works fine in a single-server environment and not on multiple servers. Let's review what is mandatory for multiple servers (scale out):
Communication between instances: any message on one instance must be available on all the other instances. The classic implementation is some type of queue; SignalR supports Redis, and you can use SQL Server, though the limitations of any SQL-based solution are clear. Azure offers Redis Cache as a PaaS.
In-memory storage: you normally use this on a single server, but with multiple servers shared storage is mandatory. Again, Redis has a shared-memory solution if you have a server available; there is no way to implement this without a solution like Redis.
Again, a lower-performance alternative would be a memory-storage implementation in SQL.
Authentication: the out-of-the-box security implementation uses a cookie to store the encrypted key. But once you have multiple servers, every server has its own unique key. To solve the problem you have to implement your own DataProtector, if that is the method you use.
Full examples are far beyond the scope of this explanation; even skeleton templates without the actual methods implemented would take several pages. I suggest you take a look at the three items that are mandatory to scale out your application; a sketch of the single-instance send path, and where it breaks, follows.
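As a rough illustration, here is a minimal sketch of sending to connection ids stored in a custom SQL table (classic ASP.NET SignalR 2.x; NotificationHub and IConnectionRepository are hypothetical names, not from the question):

    using System.Collections.Generic;
    using Microsoft.AspNet.SignalR;

    public class NotificationHub : Hub { } // hypothetical hub

    public interface IConnectionRepository // hypothetical: wraps the custom SQL table
    {
        IEnumerable<string> GetConnectionIds(string userId);
    }

    public class Notifier
    {
        private readonly IConnectionRepository repo;
        public Notifier(IConnectionRepository repo) { this.repo = repo; }

        public void NotifyUsers(IEnumerable<string> userIds, object payload)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
            foreach (var userId in userIds)
                foreach (var connectionId in repo.GetConnectionIds(userId))
                {
                    // Reaches only connections held by THIS instance; messages for
                    // connections living on other instances silently go nowhere,
                    // which is why a queue between instances is mandatory.
                    context.Clients.Client(connectionId).notify(payload);
                }
        }
    }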

Multiple Azure Web App Instances - Inconsistent DB Queries / Data

I have an Azure Web App with autoscaling configured with a minimum of 2 instances. Database is SQL Azure.
User will make changes to the data, e.g. edit a product's price. The change makes it to the database and I can see it in SSMS. However, after the user refreshes the page, the data may or may not be updated.
My current theory is that it has something to do with having multiple instances of the web app, because if I turn off autoscale and run just 1 instance, the issue is gone.
I haven't configured any sort of caching in Azure at all.
It sounds like the data may or may not appear because it is stored in memory on the worker server (at least temporarily). When you have multiple worker servers, a different one may serve the next request, in which case that server would not have the value in memory. The solution is to make sure that your application's code re-fetches the value from the database in every case (see the sketch below).
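For example, a sketch of re-reading the value on every request instead of trusting a per-instance copy (the table, column, and method names are illustrative):

    using System.Data.SqlClient;

    static decimal GetCurrentPrice(string connectionString, int productId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Price FROM dbo.Products WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", productId);
            conn.Open();
            // Always reads from the database, so every instance sees the same value.
            return (decimal)cmd.ExecuteScalar();
        }
    }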
Azure Web Apps has some built-in protection against this, called the ARR affinity cookie. Essentially, each request carries a cookie which keeps sessions "sticky": if a worker server is serving requests for a certain user, that user's subsequent requests should go to the same server as well. This is the default behavior, but you may have disabled it. See: https://azure.microsoft.com/en-us/blog/disabling-arrs-instance-affinity-in-windows-azure-web-sites/

How to Design a Database Monitoring Application

I'm designing a database monitoring application. Basically, the database will be hosted in the cloud, and record-level access to it will be provided via custom-written clients for Windows, iOS, Android, etc. The basic scenario can be implemented via web services (ASP.NET WebAPI). For example, the client will make a GET request to the web service to fetch an entry. However, one of the requirements is that the client should automatically refresh its UI in case another user (using a different instance of the client) updates the same record, AND the auto-refresh needs to happen within a second of the record being updated, so that the info is always up to date.
Polling could be an option, but the active clients could number in the hundreds of thousands, so I'm looking for a more robust solution that is lightweight on the server. I'm versed in .NET and C++/Windows, and I could roll out a complete solution in C++/Windows using I/O completion ports, but that feels like overkill and would require too much development time. I looked into ASP.NET WebAPI, but its limitation is that it cannot push notifications out to clients. Are there any frameworks/technologies in the Windows ecosystem that can address this scenario and scale easily as well? Any good options outside the Windows ecosystem, e.g. Node.js?
You did not specify which database you can use, so if you are able to use MS SQL Server, you may want to look up the SqlDependency feature. If configured and used correctly, you will be notified of any changes in the database.
Pair this with SignalR or any real-time front-end framework of your choice and you'll have real-time updates as you described.
One catch, though, is that SqlDependency only tells you that something changed. You are responsible for tracking which record it was. That adds an extra layer of difficulty, but it is much better than polling.
You may want to search through the sqldependency tag here at SO to go from here to where you want your app to be.
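A minimal SqlDependency sketch (assuming MS SQL Server with Service Broker enabled; the table, columns, and connection string are illustrative):

    using System;
    using System.Data.SqlClient;

    static void WatchProducts(string connectionString)
    {
        SqlDependency.Start(connectionString); // call once per application start

        using (var conn = new SqlConnection(connectionString))
        // Notification queries must list columns explicitly and use two-part table names.
        using (var cmd = new SqlCommand("SELECT Id, Price FROM dbo.Products", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once and only says that *something* changed: re-query,
                // diff against your snapshot, then push the record (e.g. via SignalR).
            };
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* build the baseline snapshot */ }
            }
        }
    }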
My first thought was a web service call that "stays alive", or the HTML5 protocol called WebSockets. You can maintain lots of connections, but hundreds of thousands seems too large. Therefore the web service needs a way to contact the clients over stateless connections, so build a web service into the client that the server can communicate with. This may be an issue due to firewalls, though.
If firewalls are not an issue, then you may not need a web service in the client; you can instead implement a server socket on the client.
For mobile clients, if implementing a server socket is not a possibility, then use push notifications. Perhaps look at https://stackoverflow.com/a/6676586/4350148 for a similar issue.
Finally you may want to consider a content delivery network.
One last point: hopefully you don't need to contact all 100,000 users within one second. I am assuming that with so many users you have quite a few servers.
Take a look at Maximum concurrent Socket.IO connections regarding the maximum number of open WebSocket connections. Also consider whether your estimate of on the order of 100,000 simultaneous users is accurate.

How to make .NET applications running in a cluster to communicate to each other?

I have a single .NET web app running in an ARR cluster (IIS) with multiple machines.
Each machine must keep a cache of user access permissions in memory. That is, when the app must determine whether the user has permission to access a resource, it queries a memory cache to avoid database access (there are many such queries per user request).
The problem is that, in certain situations, this cache must be invalidated. But, as there are multiple machines, when one decides the cache must be invalidated, it has to be propagated to the other machines.
What is the best practice to solve this problem? We have an ASP.NET MVC 3 app running on a IIS ARR cluster.
Message queues are the normal solution.
You can have a queue that all your nodes subscribe to, and when you need to invalidate a cache you send a message to the queue; the other nodes see the message and invalidate their caches.
MSMQ is Microsoft's message queue, but there are many third-party ones. You may want to take a look at NServiceBus as an alternative.
Do you need to invalidate the entire permissions cache, or only the relevant parts of it (e.g. a particular user, a particular role, etc.)?
Anyway, I'm thinking along the lines of a pub/sub pattern, e.g. http://redis.io/topics/pubsub: a server that the apps subscribe to (sub), where an app can request cache invalidation by publishing a request to all subscribers (pub). A sketch follows.
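A sketch of that pattern with StackExchange.Redis as one concrete option (the channel name and cache key scheme are illustrative; the local cache here is System.Runtime.Caching.MemoryCache):

    using System.Runtime.Caching;
    using StackExchange.Redis;

    static class PermissionCacheInvalidation
    {
        static readonly ConnectionMultiplexer Redis =
            ConnectionMultiplexer.Connect("localhost:6379");

        // Each node subscribes once at startup...
        public static void Listen()
        {
            Redis.GetSubscriber().Subscribe("permissions:invalidate", (channel, message) =>
            {
                // ...and drops the matching entry from its local in-memory cache.
                MemoryCache.Default.Remove("permissions:" + (string)message);
            });
        }

        // Any node can broadcast an invalidation to all subscribers.
        public static void Invalidate(string userId)
        {
            Redis.GetSubscriber().Publish("permissions:invalidate", userId);
        }
    }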

system.web.caching - At what level is the cache maintained?

I am looking at implementing caching in a .NET web app. Basically, I want to cache some data that is pulled in on every page but never changes in the database.
Is my Cache Element unique to each:
Session?
App Pool?
Server?
If it is per session, this could get out of hand if thousands of people are hitting my site and each cache is ~5 KB.
If it is per app pool, and I had several instances of one site running (say, each with a different DB backend, all on one server), then I'd need an individual app pool for each instance.
Any help would be appreciated... I think this data is probably out there; I just don't have the right Google combination to pull it up.
By default it is stored in memory on the server. This means that it will be shared among all users of the website. It also means that if you are running your site in a web farm, you will have to use an out-of-process cache store to ensure that all nodes of the farm share the same cache. There's an article on MSDN which discusses this.
"One instance of this class is created per application domain, and it remains valid as long as the application domain remains active" - MSDN
