Do I need to change to Redis Cache in Azure from Nov 30 2016? - asp.net

I have a single instance of a website hosted in Azure, which uses the in-role session cache. This uses some very basic calls to pass data between pages, such as Session("MustChangePassword") = "True"
Microsoft have emailed Azure customers saying that the in-role and managed caches are going to be retired, and that Azure Redis cache should be used instead:
Azure Managed Cache Service and Azure In-Role Cache to be retired November 30, 2016
As a reminder, Azure Managed Cache Service and Azure In-Role Cache service will remain available for existing customers until November 30, 2016. After this date, Managed Cache Service will be shut down, and In-Role Cache service will no longer be supported. We recommend that you migrate to Azure Redis Cache. For more information on migrating, please visit the Migrate from Managed Cache Service to Azure Redis Cache documentation webpage. For more information about the retirement, please visit the Azure Blog.
Is this still going to affect cloud services that use just one instance, or will Session data completely break after this change is made if I don't do anything?
If I do have to change to Redis cache, I see from the supplied links that I can download it as a NuGet package and make changes to the web.config file. However, I am then unsure as to whether I'd need to make changes to the code, or whether the calls to Session("Whatever") would still work without any further changes needed.
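From the migration documentation, the web.config change appears to be roughly the following (a sketch only; the provider type comes from the Microsoft.Web.RedisSessionStateProvider NuGet package, and the host and access key are placeholders for my own cache):

    <sessionState mode="Custom" customProvider="RedisSessionStateStore">
      <providers>
        <add name="RedisSessionStateStore"
             type="Microsoft.Web.Redis.RedisSessionStateProvider"
             host="mycache.redis.cache.windows.net"
             port="6380"
             accessKey="your-access-key"
             ssl="true" />
      </providers>
    </sessionState>

If I read it right, the existing Session("Whatever") calls would keep working unchanged, since a provider swap is meant to be transparent to the Session API, but I'd like confirmation.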
So in summary:
1) Do I need to change to the new cache?
2) If so, what code changes do I need to make over and above configuring the new cache?

This announcement is at least one year old, if not older.
So in summary:
Do I need to change to the new cache?
If so, what code changes do I need to make over and above configuring the new cache?
To answer your questions:
YES
Check out the documentation links you quoted.
And by the way, you cannot download Azure Redis Cache as a NuGet package. What you download is the client SDK/API for working with Azure Redis Cache. Azure Redis Cache is a separate service in Azure, which is also billed separately.

So it turns out that a session call such as Session("MustChangePassword") = "True" is absolutely fine to keep using on a single-instance machine.
It may not be supported, but it still works, and I have not had to add any other kind of session management to this project.
Everything is working exactly as it was before the announcement, and continued to work after the deadline passed.
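For reference, a single-instance site that does nothing special is effectively using the default in-process configuration, which in web.config terms is just (a sketch showing the defaults):

    <sessionState mode="InProc" timeout="20" />

In-process session lives in the worker process's own memory, which is exactly why it keeps working on one instance and why it cannot be shared across multiple instances.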
So in summary:
1) Do I need to change to the new cache?
2) If so, what code changes do I need to make over and above configuring the new cache?
The answers to the above questions were 1) No, and 2) No changes needed.

Related

How to atomically update and roll back a Firebase Hosting site + Cloud Run service?

Suppose we have a site on Google Firebase Hosting that routes some requests to a Google Cloud Run service. The service is considered entirely an implementation detail and its only client is the single website. The only reason for using a Cloud Run service is that it is the only suitable technical option within the Firebase platform.
Now, suppose that the API of the service may have a breaking change with every update, so the Firebase Hosting content must change too. How do you update or roll back both parts together so as to avoid incompatibilities?
Straightforwardly, we can update the service and the site content in separate steps, but that means some requests from the old revision of the site may reach the new revision of the service or the other way around, causing errors due to API mismatch. The same issues are present when rolling back the site content and the service at the same time.
One theoretical solution would be to deterministically route requests to different service revisions based on revision labels, but that is not supported on Cloud Run.
One realistic solution would be to create a new service for every update of the site content. However, that would result in unbounded accumulation of services which are not automatically deleted like service revisions are.
Another solution (proposed below) would be to maintain backwards compatibility in the service - it would support both the latest and the previous API version. However, this can be considered an unnecessary overhead. Since the two parts (static content and the service) have no real need to ever be updated independently, it would be very convenient to avoid the overhead of maintaining backwards compatibility in the service.
As far as I know, there is no way to make this update in a single transaction to avoid the behavior you mentioned, as Firebase Hosting and Cloud Run are separate products.
Also, a good practice in API design is to allow for service evolution: updating the API should not break the apps consuming it, and new versions of the app should be able to consume the current API.
When a new API version cannot remain backwards compatible, a common approach is to expose versioned endpoints; this is why some APIs use apiName/v1/method and apiName/v2/method. In that case, though, both versions of the API are deployed.

Is it worth adding an Azure Cache for Redis instance Output Cache provider to a single instance Web App?

I have a single instance ASP.NET MVC website running on Azure. I'm working on improving its speed.
ASP.NET Output Caching was added for faster page loads, with good results.
I was reading up on the possibility of using an Azure Redis instance as the Output Cache.
My thinking is:
The default Output Cache is best for single-instance apps; it should be the fastest because it runs on the same machine.
Azure Redis Cache would most likely be slower, since it would add an extra cache-lookup roundtrip between the Web App and the Redis instance.
Is that correct?
Correct. Given that all of your requests are being processed within the same application, it's sufficient to use in-memory caching.
Azure Redis Cache would be beneficial if you had multiple processes which all wanted to share the same cache, e.g. if your website was running across multiple containers.
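If the site does later scale out, moving the output cache to Redis is mostly a web.config provider swap, roughly along these lines (a sketch based on the Microsoft.Web.RedisOutputCacheProvider NuGet package; the host and key are placeholders):

    <caching>
      <outputCache defaultProvider="RedisOutputCache">
        <providers>
          <add name="RedisOutputCache"
               type="Microsoft.Web.Redis.RedisOutputCacheProvider"
               host="mycache.redis.cache.windows.net"
               accessKey="your-access-key"
               ssl="true" />
        </providers>
      </outputCache>
    </caching>

The [OutputCache] attributes on your actions stay the same; only the backing store changes.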
It depends on what you are trying to achieve. An in-memory cache will be quicker than Redis, but say you restart your app: the cache then needs to be refreshed. In cases where you are caching large reference data, that refresh can be an overhead. You can use a combination of in-memory and Redis caching in such a case, which will also act as a fail-safe in case something goes wrong.

Sessions randomly clear on Win2008 ASP.NET website

I couldn't find anything about this online so I thought I'd ask here. Do any of you have issues with sessions just randomly clearing on a Windows 2008 Server environment? This problem is completely random and very unpredictable. I have no code that clears sessions except on logout, and not quite sure what could be causing it (well, I have ideas...)
My host, who I've been with for many years (and never had a problem with) is telling me that Windows 2003 is better at managing session variables and that I will likely be rid of this session clearing issue if I were to move to a 2003 Server environment. Thing is, I'm already set up and running on IIS 7 with the URL Rewrite module and I'd rather not move or reconfigure URL rewriting. Tech support says the App Pool I am running on is configured properly. My session timeout is set to 60 minutes in Web.config and my host tells me that session timeout is set to 60 minutes for my domain.
I could optionally go with an Azure AppFabric Cache for sessions but I'd rather not pay an extra $50 a month--it's a pretty small and low income site. I'm currently using a SQL Azure database but from what I hear, database sessions are not ideal on SQL Azure.
Thoughts?
Are you modifying any files in the web site?
Changes to the folder or file structure of the web site often trigger an app pool recycle, resetting sessions. The work-around is to use a durable session store such as the SQL Server session state provider.
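A sketch of that work-around, assuming the session state database has already been created with aspnet_regsql.exe (the connection details are placeholders):

    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=mySqlServer;Integrated Security=SSPI;"
                  timeout="60" />

Because the session data lives in SQL Server rather than in the worker process, an app pool recycle no longer wipes it.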
The most likely answer is that your app pool is recycling on you for some reason or another, which will dump your in-process session every time. Proximate causes can be lots of things, especially if app pools are shared. An easy way to see if your app pool is getting dumped is to take advantage of ASP.NET health monitoring, which can be configured to email you when these events occur.
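A sketch of that health-monitoring configuration, assuming SMTP is already set up under <system.net>; the addresses are placeholders:

    <healthMonitoring enabled="true">
      <providers>
        <add name="MailProvider"
             type="System.Web.Management.SimpleMailWebEventProvider"
             from="site@example.com"
             to="admin@example.com"
             buffer="false" />
      </providers>
      <rules>
        <add name="Mail lifetime events"
             eventName="Application Lifetime Events"
             provider="MailProvider" />
      </rules>
    </healthMonitoring>

"Application Lifetime Events" covers application start and shutdown, so a mail arriving mid-day is a strong hint that the app pool recycled.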

How to deploy website to production with minimal impact to users

I'm trying to find the best server architecture solution for deploying monthly updates to an ASP.NET external public-facing website. What I'm looking for are ways to release a new version of a website with minimal impact to users. Besides deploying the standard way (i.e. stop IIS, copy the new website over the existing one, start IIS), what are some "better" deployment solutions out there? It would be nice if users kept their sessions and didn't have to see a "Website under maintenance" message during the update.
My server configuration
We have 2 IIS web servers (2003) and are trying to figure out the best way to use them for deployments. My first thought was to update the non-active web server with the latest release, then gracefully point the web traffic to that server with minimal impact to users (best case, the user doesn't lose his session). How would you go about "repointing" the web traffic from server 1 to server 2? Changing the firewall NAT? Changing DNS records? Some other way? We need to be able to test the live site immediately after we release the new changes (duh).
BTW, we are using nant and cruise control to automate the builds, and a custom web service to deploy the build to production. So it's all automated with the click of a button.
Could a better solution be achieved using a 3rd server? If so how?
The way we do it:
We have a NetScaler load balancer.
We take one web server out of the load balancer, do all deployments, do an iisreset, and put it back into the load balancer.
Then we do the same thing for server 2.
Finally, we invalidate the load balancer cache.
Well, there are a couple of things here:
First, consider using a load balancing solution. Windows Server 2003 ships with Windows Load Balancing (WLBS), though it's not the greatest product. It is, though, free. With that, you can point all traffic to one server, update the other, and then do the opposite.
Secondly, you may want to look at how you're working with sessions. HTTP is stateless, which means that as long as you can reconstruct a user's session on any page hit, you should be fine. One ideal step towards this is using ASP.NET Forms Authentication: the cookie it writes isn't tied to an ASP.NET session. Of course, this approach carries more risk; there is a chance users will get an error screen if they hit something just as you're copying files, and then there will be a delay while the app pool refreshes.
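As a sketch, the Forms Authentication piece is pure configuration, and its timeout is independent of session state (the login URL and timeout are placeholders):

    <authentication mode="Forms">
      <forms loginUrl="~/Login.aspx" timeout="60" slidingExpiration="true" />
    </authentication>

With this in place, an app pool recycle loses in-process session data, but the user stays logged in because the authentication ticket lives in the cookie.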
Overall, your better option is load balancing. Even with it, though, consider trying the second option as well - having sessions that can regenerate works well if users fail to be sticky to one of the servers in the pool.
Just wanted to add this for completeness. At my previous work, we achieved seamless deployments by using the following setup:
A load balancer pointed to the production ASP.NET web servers (two in your case, but we had three), and the web servers had their session state configured to pull from a third server dedicated to hosting out-of-process (OutOfProc) ASP.NET session state.
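In web.config terms, that session setup looks roughly like this (the host name is a placeholder; the ASP.NET State Service has to be running on that machine):

    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=sessionhost:42424"
                  timeout="60" />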
To deploy a site, we'd pull one of the servers out of the load balancer, update the files, fire it back up, and place it back into the load balancer pool. Repeat for the rest of the webservers.
Because each web server got its session data from the one central server, taking a web server out did not log out the users on that server.
If we had code changes that were incompatible with the existing session data, we'd wait till a scheduled maintenance window to deploy. Otherwise, users with that session data would get errors till they logged out.
Additionally, since this setup relies on the webserver being up, if you wanted to increase reliability, you could change the OutOfProc to SQL based session servers. You would need several servers that replicated the same session database and point the webservers to them. More complicated, but would reduce site downtime.

Windows Azure & ASP.NET session

I have ASP.NET web application which stores information on the session while user goes through pages. Will I have any problems if I deploy such application to Windows Azure?
As Nariman stated, you can't have server affinity - the load balancing is beyond your control. You can use either table storage or SQL Azure for session state. I don't really see much value in storing session state in blobs.
See this post on the SQL Azure blog, from August 2010, to see how to implement session storage in SQL Azure. This will allow you to manage session state across instances as you scale up.
EDIT 6/16/2014 - The Redis cache supports this. See Azure Redis Cache (Preview) ASP.NET Session State Provider
EDIT 5/23/2012 - Wow, lots has changed since I posted this. SQL Azure is fully supported as a session state provider (via Universal Providers, shipping since v1.4), as well as Windows Azure Cache. More details are provided in this StackOverflow answer.
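For example, with Universal Providers the session section points at a custom provider backed by a SQL connection string (a sketch; "DefaultConnection" is a placeholder for a connection string pointing at your SQL Azure database):

    <sessionState mode="Custom" customProvider="DefaultSessionProvider">
      <providers>
        <add name="DefaultSessionProvider"
             type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers"
             connectionStringName="DefaultConnection" />
      </providers>
    </sessionState>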
You won't notice a difference in a single-instance deployment that uses InProc, but you do need to rely on out-of-process Blob and Table storage if you plan on running multiple web roles. (There's no way to keep a user pegged to the same instance with Azure load balancing, AFAIK.)
With the October 2012 release of the Azure SDK, they provided a special session provider that can use "co-located role caching" as a back end for session state. It allows you to use the role-level caches rather than having to choose only between Table storage and SQL Azure. There are instructions on how to configure it here:
http://dotnetthread.com/articles/27-Setting-up-Windows-Azure-Caching-for-Session-State-Management.aspx
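The provider registration there looks roughly like this (a sketch based on the Windows Azure Caching NuGet package of that era; type and attribute names may vary by SDK version):

    <sessionState mode="Custom" customProvider="AFCacheSessionStateProvider">
      <providers>
        <add name="AFCacheSessionStateProvider"
             type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
             cacheName="default"
             dataCacheClientName="default" />
      </providers>
    </sessionState>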
