I have an existing ASP.NET website that I would like to port to Azure within my free trial.
I would like the migration to be as painless as possible. The application uses log4net and NHibernate, plus it needs to share data with an application supposed to run on a virtual server.
Two questions can be asked; the first is:
How do I configure paths in Web.config to access a shared drive?
I need to configure the paths where logs will be stored and, most importantly, I have to specify where the application will read the files written by the daemon that will run on my Azure Linux VM.
When both the app and the daemon ran on the same server (yes, I had Mono running fine), I just had to choose a shared local directory.
I'm not sure I'm totally understanding the scenario, but I'll try to give you a few options.
One - Windows Azure Web Sites (currently in Preview) could be a great option for your ASP.NET site. Of course, it depends on what needs your site has, but you can write your log4net files from a Web Site, and NHibernate works there too.
Two - Web roles work great for situations like this. You would likely have to change some code to use blob storage for persistent file storage. You could use Windows Azure drives as a way to get a persistent location for log files. Windows Azure drives don't have a pre-determined drive letter, so you'd want to use the API to find out where the drive is mounted. That may, or may not, be a good option for your situation. With web roles you could also write the log4net files to local storage and use Windows Azure diagnostics to transfer them periodically to blob storage. Just another way to persist the files.
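If it helps, here's roughly what that last option looks like in code, using the SDK 1.x-era diagnostics API; "LogStorage" is a hypothetical LocalStorage resource and the container name is made up:

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // "LogStorage" is a hypothetical LocalStorage resource declared in the service
            // definition; point log4net's FileAppender at this path.
            string logPath = RoleEnvironment.GetLocalResource("LogStorage").RootPath;

            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
            config.Directories.DataSources.Add(new DirectoryConfiguration
            {
                Path = logPath,
                Container = "wad-log4net",        // blob container the log files get copied into
                DirectoryQuotaInMB = 512
            });
            config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
            return base.OnStart();
        }
    }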
Three - Using Windows Azure Virtual Machines (currently in Preview) you could write the log files to a data disk, which is backed by blob storage.
In the end, if you have files you need to share across instances and/or roles, then leveraging blob storage is likely your best option.
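For example, with the storage client library the web app and the daemon on your VM could exchange files through a container (SDK 1.x-era method names, from memory; the container, blob, and paths are made up):

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class SharedFiles
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<your storage connection string>");
            var container = account.CreateCloudBlobClient().GetContainerReference("shared-data");
            container.CreateIfNotExist();

            var blob = container.GetBlobReference("daemon-output.xml");

            // Producer side: publish the file the daemon has written.
            blob.UploadFile(@"C:\temp\daemon-output.xml");

            // Consumer side: the web app pulls the latest copy down before reading it.
            blob.DownloadToFile(@"C:\inetpub\app\App_Data\daemon-output.xml");
        }
    }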
Related
There doesn't seem to be an obvious place, or any documentation from Microsoft, explaining where to install complex applications that write files into their own app folder and are shared by different users.
We have a complex .NET web application that contains 21 individual projects and includes the following components:
A main web site which runs a SaaS system and can be locally installed at customer sites
Applications to install, configure and diagnose
Applications which are installed as Windows services
Utility applications which are installed on the server, can also be installed on clients, and talk to the web app through web services
On Windows 2012 and earlier, you could quite happily deploy this application into Program Files, and the entire app would sit in one folder. All fine.
However, from Windows 2012 R2 and later, normal user accounts cannot write to Program Files. Our web app contains significant uploading ability, not only of occasional-use files, but also of a high volume of data files which are fed from customer LANs to our SaaS system. In addition, log files and session files etc. are written inside the folder that hosts the IIS web site itself (edit: and the web folder itself also exists inside our Program Files app folder).
From what I can gather, apps now reside in Program Files, common data exists somewhere inside c:\users\public, and user-specific data exists inside c:\users\username. That means the app is spread around the hard disk and is not easy to back up or administer. To follow those conventions we'd need significant rewriting, as a lot of the EXEs write relative to the exe path and the web backend also relies on being able to write files inside the web folder itself.
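For context, conforming would mean resolving every write from a single configurable data root instead of the exe path; a minimal sketch of what that would look like (the names here are invented, and the "DataRoot" setting is hypothetical):

    using System;
    using System.Configuration;
    using System.IO;

    static class AppPaths
    {
        // Resolve one writable data root at startup; every file write is built from it
        // rather than from the exe's own folder. "DataRoot" is a hypothetical appSetting;
        // the fallback is CommonApplicationData (C:\ProgramData\<app>), which the installer
        // would need to grant write access to for the service/app-pool accounts.
        public static readonly string DataRoot =
            ConfigurationManager.AppSettings["DataRoot"] ??
            Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                "OurApp");

        public static string For(string relativePath)
        {
            string full = Path.Combine(DataRoot, relativePath);
            Directory.CreateDirectory(Path.GetDirectoryName(full));
            return full;
        }
    }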
So is there a proper place in Windows where a complex app like this can be installed that writes to its own folder structures?
I've noticed on Windows 2012 R2 and Windows 2016 that I CAN create a folder in the root of the C: drive, whereas on Windows 10 I cannot. Would a standalone folder be the right solution to this? It would certainly be the easiest solution for us.
The Windows services and the IIS app pool would always run under different credentials, so they could no longer share settings if we had a per-user model for writing files.
The app will have to conform to DoD STIG standards for web and .net applications.
thanks
I have a .NET solution, with an MVC 4.5 web project, a C# server DLL, another web service layer DLL, etc.
I want to deploy it on an Azure virtual machine (xx.cloudapp.net:8080).
There are many guides on how to deploy a new website on Azure, but since this solution contains a lot of DLLs, I need a virtual machine.
I haven't found any guide on how to do this; can you please give me a link or something?
You don't have to use Virtual Machines to install DLLs - you can do this with Cloud Services (web/worker role) as well, via startup tasks. As long as these DLLs are easy to fetch (e.g. blob storage) and quick to install, you can take that route. Many do just that, since this allows you to work with stateless OS VMs (where you don't worry about maintaining the OS, or making copies of a VM when wanting to scale out to multiple instances).
That said: To install to a Virtual Machine, you'd typically copy files to your VM somehow (maybe fetching from a CI engine, possibly ftp'ing the files, whatever procedure you'd typically use with a Windows server). And you'd use RDP for gaining access to the desktop.
Once you have the VM set up just how you want it, you can then create an image of the VM and add it to your personal gallery, whereby you can then spin up additional VMs based on that image. Unlike Cloud Services, each Virtual Machine will then take on a life of its own (and live in its own VHD in its own blob), where you'd have to distribute both OS updates and app updates to each VM as the need arises.
I've created an Azure server instance. I've deployed a simple application to it. As part of the deployment process I enabled Remote Desktop Connections.
I have some standard ASP.NET applications that run on Windows. Is there anything to stop me deploying these applications manually to IIS using Remote Desktop? I've read so much about having to migrate standard ASP.NET apps to Azure. I don't want to do this, as we will have customers who will still use Windows Server 2003/2008, so I don't want to have to maintain two versions.
Well, as I understand it, in theory you could deploy stuff using remote desktop. But when the instance shuts down/restarts you'll lose it all (unless you've built it into your startup scripts) and have to re-load everything each time. The main reason they suggest you have at least two instances is so that when one shuts down for updates etc there is always at least one other running.
The "Windows Azure Accelerator for Web Roles" project allows you to create an Azure web role which then enables you to use web deploy for all your other web sites - I'm guessing that will be a whole lot better approach and is definitely worth a look. Also, I believe smarx.com is a good place to browse for info and ideas.
Using a startup task and the Azure Bootstrapper you can download, unzip, and install almost any kind of 3rd-party software that supports either xcopy deployment (just copy the files) or an unattended (silent) install.
Assuming you aren't using Azure storage or anything like that, there shouldn't be any difference with the IIS application. If you are using anything specific to Azure, you can use RoleEnvironment.IsAvailable to test whether you are running inside Azure or not. That will return true for the emulator as well. If you want to use Azure storage from both, you can add the settings to web.config and use those when not running in Azure.
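A rough sketch of that check (assuming the ServiceRuntime assembly is referenced; the setting names are invented):

    using System.Configuration;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class StorageSettings
    {
        // Prefer the Azure service configuration when running under the fabric or the
        // emulator, otherwise fall back to the ordinary web.config setting.
        public static string ConnectionString
        {
            get
            {
                if (RoleEnvironment.IsAvailable)
                    return RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString");

                return ConfigurationManager.AppSettings["StorageConnectionString"];
            }
        }
    }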
I currently work with a legacy asp.net web application and one of the requirements going forward is that it be deployable to windows azure.
I would like to know how difficult it will be to manage deployment to both Azure and a traditional IIS web server.
Azure seems to require a specific, customized version of a web application project. Is it possible to deploy the customized web application to a standard IIS instance once it has been converted?
EDIT:
It is an ASP.NET Web Application rather than a Web Site (it compiles everything into one DLL).
UPDATE:
In the end, due to the amount of work involved in converting the application to work in Azure, and the cost of Azure compared with other cloud solutions, it was decided to go with a traditional cloud-hosted virtual server.
And thank you for the really good answers.
Whether or not you can deploy your application to Azure almost as is depends a lot on how your application works. Azure pretty much requires your application be stateless. If it's a plain vanilla web application that stores data in the session or application cache only and saves data to a database only, then you can deploy it to Azure.
If you have stateful services running, like background threads (which is bad anyway), or if you save data to the file system (besides temporary caching), then you may have issues. Really, the issues in moving to Azure are the same as in moving to any multi-server, load-balanced solution. One caveat is permanent storage.
If you need to store data in a place other than the database, then you're best off working with Azure's storage solution, which has an API and client library for storing binary data, key/value data (they call it tables, but really, it's not tables), and queues. They also have a transparent blob-as-file-system option for compatibility. If you want to use these in an app that is also used outside of Azure, then you need to write an extra layer between your code and the Azure client library that supports both the Azure services and standard local services. The Azure SDK does include emulators for the Azure services, but they're definitely not meant for production use.
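As a rough illustration of that extra layer (all names invented), the app codes against a small interface and the deployment decides which implementation to wire up:

    using System.IO;

    public interface IFileStore
    {
        void Save(string name, Stream content);
        Stream Open(string name);
    }

    // Outside Azure: a plain directory on a local disk or network share.
    public class LocalFileStore : IFileStore
    {
        private readonly string _root;
        public LocalFileStore(string root) { _root = root; }

        public void Save(string name, Stream content)
        {
            using (var file = File.Create(Path.Combine(_root, name)))
                content.CopyTo(file);
        }

        public Stream Open(string name)
        {
            return File.OpenRead(Path.Combine(_root, name));
        }
    }

    // Inside Azure: a second implementation (say, BlobFileStore) satisfies the same
    // interface using the blob storage client library, so the rest of the app doesn't care.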
As far as the mechanics of Azure-specific projects, that is actually not that difficult. Yes, you need to create an Azure-specific project in your solution that defines the Web Role and what gets deployed, but it will reference your existing Web Application, not the other way around. You can deploy the Azure Web Role to Azure or you can continue to deploy the existing application to IIS normally and concurrently.
Web Site, Web Application, MVC, really doesn't make much of a difference. Actually doesn't have to be .NET either. Can be PHP or Java or whatever you want to put on your VM. It'll all work the same as far as Azure is concerned.
MS likes to push Azure as a Platform-as-a-Service (PaaS) solution, where they have a ton of services they offer and you run apps on their standard platform, and contrasts that with Amazon AWS, which they call Infrastructure-as-a-Service (IaaS), i.e. "just" a virtual machine. However, MS is really just as much an IaaS solution as AWS, perhaps even more so. The only difference between AWS and Azure is that AWS allows you to choose what to install on your VM, while with Azure you have to use Windows Server 2008 R2 as the basis for your VM (but you can customize the VM image to install custom software on top of Windows). With both Azure and AWS, the hosts offer additional PaaS services you can take advantage of for data storage and message routing. AWS also offers tons of extra services, like video streaming.
Also note that with Azure (and AWS, I think) you can use the services they offer even in a non-hosted application. If you want to use Azure's data storage from a non-Azure application, you can do that; it's just HTTP REST calls to get/put data. The only difference is that you pay for data in/out between the datacenter and your non-datacenter-hosted application, which would be free if the app were also inside the datacenter (only the data transfer is free in-datacenter; you still have storage and transaction fees).
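For example, reading a blob that sits in a container with public read access is a plain HTTP GET and needs no SDK at all (the account and container names below are made up); authenticated calls additionally need a SharedKey signature, which the storage client libraries compute for you:

    using System;
    using System.Net;

    class BlobRestExample
    {
        static void Main()
        {
            // Hypothetical storage account and container; anonymous read access
            // must be enabled on the container for this to work.
            const string url =
                "https://myaccount.blob.core.windows.net/public-config/policies.xml";

            using (var client = new WebClient())
            {
                string xml = client.DownloadString(url);   // plain REST GET against blob storage
                Console.WriteLine(xml.Length);
            }
        }
    }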
A few things:
Samuel Neff's answer mentioned mounting a file system in a blob (a Cloud Drive). Only one instance may lock this cloud drive for writing, so it does not behave like a network file share. You'll need to plan for this.
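If it helps, mounting one of those drives looked roughly like this with the SDK 1.x-era CloudDrive API (the method names below are from memory and the resource/blob names are made up, so treat this purely as a sketch):

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.StorageClient;   // CloudDrive ships in the separate CloudDrive assembly

    public static class SharedDrive
    {
        public static string Mount()
        {
            // "DriveCache" is a hypothetical LocalStorage resource used as the drive's local read cache.
            LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
            CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
            CloudDrive drive = account.CreateCloudDrive("drives/shared.vhd");
            drive.CreateIfNotExist(1024);   // page blob size in MB

            // Only one instance at a time gets the read/write mount; a second writer will fail.
            return drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);
        }
    }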
You'll need to integrate with the Windows Azure diagnostics subsystem, to gain visibility into your app's run state (e.g. performance counters, trace logs, etc.).
If there are 3rd-party apps that your web app depends on, you'll need to install these. These actually get installed as part of the role instance's boot process, either via your OnStart() event handler or as a startup task. The latter allows for admin-level installs (including registry changes, COM component installations, etc.). You'll need to carefully manage these installations, as they impact the boot time of the instance.
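A bare-bones illustration of the OnStart route (the installer name and URL are hypothetical; anything slow here delays the instance reporting itself ready, and registry/COM installs that need admin rights belong in an elevated startup task instead):

    using System.Diagnostics;
    using System.IO;
    using System.Net;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Pull a dependency (here from a public blob URL, for brevity) and install it silently.
            string local = Path.Combine(Path.GetTempPath(), "SomeDependency.msi");
            new WebClient().DownloadFile(
                "https://myaccount.blob.core.windows.net/installers/SomeDependency.msi", local);

            using (var installer = Process.Start("msiexec.exe", "/i \"" + local + "\" /qn"))
            {
                installer.WaitForExit();
            }

            return base.OnStart();
        }
    }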
For an ASP.NET app, you'll need to think about session state. In-proc session state won't work, because each instance will have its own state store in memory. The SQL Azure session state provider doesn't have background cleanup agents, so you'll need to build this into your web or worker role instance (see this blog post by the SQL Azure team for the implementation). The best option is to use the AppFabric Cache, a new service that just went into production. This cache-as-a-service provides a custom session state provider for ASP.NET as well. Note: As of today, the AppFabric Cache service is only accessible via a .NET interface; there's no REST interface for it (all other storage services - tables, blobs, queues - have a REST interface). .NET, Java, and PHP all have storage client libraries. Ruby has one from the open source community.
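For a sense of the cache API itself (the session state provider is then just wired up in web.config), here's a minimal sketch using the .NET client; the cache endpoint and authentication token are assumed to be configured in the dataCacheClient config section, and the key name is invented:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class CacheExample
    {
        static void Main()
        {
            // Reads the cache endpoint/credentials from the dataCacheClient section of app/web.config.
            DataCacheFactory factory = new DataCacheFactory();
            DataCache cache = factory.GetDefaultCache();

            cache.Put("greeting", "hello from the AppFabric cache", TimeSpan.FromMinutes(30));
            string value = (string)cache.Get("greeting");
            Console.WriteLine(value);
        }
    }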
You'll have to manage scaling out to more than one instance, when the need arises. This is not a built-in service today, but there are 3rd-party services such as ParaLeap's AzureWatch. There's also Microsoft's System Center Operations Manager, which now has Windows Azure monitoring support. You'll also need to handle scale-back situations, where you reduce the number of server instances.
I have some additional details in an answer for a similar StackOverflow question, here.
I have not tried Windows Azure Migration Scanner personally, but if it works as advertised, this would really come in handy.
I have been struggling with the best way to make sure that certain XML configuration files stay synchronized between multiple servers in a web farm. I am not necessarily concerned about the Web.config as much as I am concerned about some of the other configuration files that are present in the application.
For example, we store caching policies in an external XML file, where it has its own schema, and will soon have its own tool to maintain the values. Once the changes are applied, they should be migrated across the farm.
Some scenarios that I have considered so far:
RoboCopy, replication, or equivalent. This requires that work only ever be done on a particular node of the farm. (Push to application.)
Configuration Server. All external configurations and their tools are stored on a physical instance of IIS. The application will retrieve these configuration files on application start and periodically poll for changes. (Application polls.)
Team Build. We could host the tools in-house and set up a post-build process to deploy the files. (Application polls.)
Database Storage. Applications could read and poll database for configuration. (Application polls.)
All of these scenarios have their pros and cons. I am not sure what the best solution is, although I think having the application poll for changes might be the cleanest approach. Still, the question is: which approach would you consider best?
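For the "application polls" variants, here's a minimal sketch of what the polling side could look like (the file location, interval, and class name are all placeholders):

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;
    using System.Threading;
    using System.Xml.Linq;

    // Hypothetical poller: re-reads an external XML configuration file when its
    // contents change, checking once a minute.
    public class ConfigPoller : IDisposable
    {
        private readonly string _path;      // local path, UNC share, or a file fetched from a config server
        private readonly Timer _timer;
        private string _lastHash;

        public event Action<XDocument> ConfigChanged;

        public ConfigPoller(string path)
        {
            _path = path;
            _timer = new Timer(_ => Check(), null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
        }

        private void Check()
        {
            string xml = File.ReadAllText(_path);
            string hash;
            using (var sha = SHA1.Create())
                hash = Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(xml)));

            if (hash != _lastHash)
            {
                _lastHash = hash;
                var handler = ConfigChanged;
                if (handler != null) handler(XDocument.Parse(xml));
            }
        }

        public void Dispose() { _timer.Dispose(); }
    }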
I have a web cluster running Windows Server 2008. To keep everything synchronized I'm using a DFS share that is on each of the members. Any changes made to any of the members are replicated to the others. I'm also using the IIS "Shared Configuration" feature to store my metabase within this DFS share. That way all of the IIS settings are replicated as well.
On a project that I worked on, we used a product called ServerSync to replicate files across the farm. It works pretty well, and it is fast.