P/Invoke gives AccessViolationException in ASP.NET

My goal is to call a native DLL via P/Invoke from an ASP.NET application. So far, I can successfully call the DLL from a Console app, or even from an OWIN server running on HttpListener, hosted in an Azure WorkerRole.
Troubles arise when I try to host the exact same code in ASP.NET/IIS, either in a simple ASP.NET app or in an Azure WebRole. In such a context, the call to the DLL throws an AccessViolationException.
From my research, it looks like the issue could come from the native DLL not being thread-safe - and a test calling it from concurrent threads, even in a Console app, throws the AVE, which confirms that it's not thread-safe. So I'm checking with the DLL's author on that.
But in the meantime, I'm still wondering whether that's really the root cause of the crash on ASP.NET/IIS, because during my tests I only issue one request at a time. So while waiting for the thread-safety issue to be fixed, I wanted to know if you are aware of other specificities that could cause the P/Invoke to fail in ASP.NET/IIS.
UPDATE
Based on my numerous tests, it turned out that the crash was caused by the DLL trying to load external files. In a non-IIS application, placing these files at the same folder level as the DLL just works; but on my dev machine for example, the same code running on IIS tries to look for the files in "C:\Program Files (x86)\IIS Express".
So my question now is: is there a way to control the path where a simple File.Open looks, and if not, is there a way to get the default path so that I can copy the required files there at startup?
Thanks

What you observe simply indicates that you use relative paths. That means when running under IIS Express (not IIS) the process tries to search under C:\Program Files (x86)\IIS Express as it assumes that's the base directory to use when interpreting the relative paths.
You should always use absolute paths instead of relative paths, and then this issue won't happen.
http://msdn.microsoft.com/en-us/library/ms178116(v=vs.100).aspx
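If you do need to open files relative to the application itself, here is a minimal sketch (the helper and file names are illustrative, not from the original post):
using System;
using System.IO;

static class PathHelper
{
    // Resolve a path against the application's base directory (the folder
    // containing web.config under IIS/IIS Express) rather than the process's
    // current working directory.
    public static string ResolveAppRelative(string relativePath)
    {
        return Path.Combine(AppDomain.CurrentDomain.BaseDirectory, relativePath);
    }
}

// Usage: File.Open(PathHelper.ResolveAppRelative(@"bin\data.bin"), FileMode.Open);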

Please don't muck around with copying files on every app startup, that can only go south.
A better solution would be to get the provider of the DLL to add some code that checks the Windows Registry for a path value. If that value exists in the Registry, the DLL will attempt to load its associated libraries from the path in that value; otherwise it will fall back to attempting to load them from the DLL's current directory.
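Illustrated in C# for brevity (a native DLL would use the equivalent Win32 registry APIs), with hypothetical key and value names:
using System;
using System.IO;
using Microsoft.Win32;

static class NativeFileLocator
{
    public static string Locate(string fileName)
    {
        // Hypothetical registry location; the real key would be chosen by the DLL's author.
        using (var key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\MyVendor\MyNativeDll"))
        {
            var configuredPath = key != null ? key.GetValue("DataPath") as string : null;
            if (!string.IsNullOrEmpty(configuredPath))
                return Path.Combine(configuredPath, fileName);
        }
        // Fall back to the directory the DLL was loaded from.
        return Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fileName);
    }
}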

Related

How can I get the root of my project in ASP.NET and not the location of IIS Express using .CurrentDirectory()?

I have an ASP.NET Core project that I'm developing and I'm making use of LiteDB as a database solution. In order to instantiate my database I need to provide a URI so that the program knows where to create the database.
Usually I'd do something like System.Environment.CurrentDirectory to find the current directory I'm in and modify that; however, the result turns out to be C:\program files\IIS Express or something similar. Basically, the current directory points to the location of IIS Express.
Instead I want to get the root of my ASP.NET project, the location that contains my controllers folder, appsettings.json, bin and obj folders.
What command do I need to use to get a string representing this location? I don't want to use a hard coded string for obvious reasons.
If you don't have access to DI you could try:
Assembly.GetExecutingAssembly().Location
If you have access to DI, try this:
Inject the IHostingEnvironment and call environment.ContentRootPath
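A minimal sketch of the DI route (the class name and database file name are illustrative):
using System.IO;
using Microsoft.AspNetCore.Hosting;

public class DatabaseConfig
{
    public string DatabasePath { get; }

    // IHostingEnvironment is provided by ASP.NET Core's DI container.
    public DatabaseConfig(IHostingEnvironment env)
    {
        // ContentRootPath is the project root (the folder with appsettings.json),
        // not the IIS Express install directory.
        DatabasePath = Path.Combine(env.ContentRootPath, "mydata.db");
    }
}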
How to get the root directory of a project in ASP.NET Core? Directory.GetCurrentDirectory() doesn't seem to work correctly on a Mac
You may be better served by using a dedicated location (such as a network share) rather than the folder that contains the executable code. Having user data mixed into deployed code comes with a lot of headaches:
Deployments become more complex as they need to account for the presence of user data. This is especially true if you need to delete pre-existing code during a deployment.
If the user data is sensitive, developers may be denied access to read the deployed code. This can make troubleshooting issues much harder.
Backups of deployed code will contain user data, which means they will always appear to be different even if nothing (code-wise) has changed.
Of course, this all assumes that the production environment is configured differently from developers' local machines.

Web Deployment fails because 'SqlServerSpatial140.dll' file is in use (w3wp.exe) [duplicate]

I am using VS2013 Premium to publish a site to Windows Server 2012.
All files publish ok except these:
SqlServerTypes\x64\msvcr100.dll
SqlServerTypes\x64\SqlServerSpatial110.dll
SqlServerTypes\x86\msvcr100.dll
SqlServerTypes\x86\SqlServerSpatial110.dll
I get this kind of error for each of the above files when I try to publish:
Web deployment task failed. (The file 'msvcr100.dll' is in use. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE.)
Interestingly, these files were published the first time (when they were not on the server), but they are no longer overwritten afterwards. Tried with 2 different web servers.
I have followed the guide here:
http://blogs.msdn.com/b/webdev/archive/2013/10/30/web-publishing-updates-for-app-offline-and-usechecksum.aspx
...But it only managed to take the site offline (VS is placing the app_offline.htm); the publish still fails with the same error.
All other files publish perfectly.
Any ideas?
You can take your app offline during publishing, which hopefully should free up the lock on the file and allow you to update it.
I blogged about this a while back. The support outlined there shipped inside the Azure SDK and a Visual Studio update. I don't remember the exact releases, but I can find out if needed. Any update dating from around or after that blog post should be fine.
Prerequisites:
VS 2012 + VS update / VS 2013 + VS Update / VS2015
MSDeploy v3
Note: if you are publishing from a CI server the CI server will need the updates above as well
Edit the publish profile
In VS, when you create a Web Publish profile, the settings from the dialog are stored in Properties\PublishProfiles\ as files that end with .pubxml. Note: there is also a .pubxml.user file; that file should not be modified.
To take your app offline in the .pubxml file add the following property.
<EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
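For context, a minimal sketch of where the property sits in the .pubxml (the other elements shown are typical of what Visual Studio generates for an MSDeploy profile):
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <!-- Take the app offline (via app_offline.htm) before files are synced -->
    <EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
  </PropertyGroup>
</Project>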
Notes
ASP.NET Required
The way that this has been implemented on the MSDeploy side is that an app_offline.htm file is dropped in the root of the website/app. From there the ASP.NET runtime will detect it and take your app offline. Because of this, if your website/app doesn't have ASP.NET enabled this function will not work.
Cases where it may not work
The implementation makes it such that the app may not strictly be offline before the publish starts. First the app_offline.htm file is dropped, then MSDeploy starts publishing the files. It doesn't wait for ASP.NET to detect the file and actually take the app offline. Because of this you may still run into the file lock. By default VS enables retries, so usually the app goes offline during one of the retries and all is good. In some cases it may take longer for ASP.NET to respond; that is a bit more tricky.
If you add <EnableMSDeployAppOffline>true</EnableMSDeployAppOffline> and your app is not being taken offline soon enough, I suggest taking the app offline before the publish begins. There are several ways to do this remotely, depending on your setup. If you only have MSDeploy access you can try the following sequence:
Use msdeploy.exe to take your site offline by dropping app_offline.htm
Wait some amount of time
Publish the site (making sure the sync doesn't delete the app_offline.htm file)
Use msdeploy.exe to bring the app online by deleting app_offline.htm
I have blogged how you can do this at http://sedodream.com/2012/01/08/howtotakeyourwebappofflineduringpublishing.aspx. The only thing missing from that blog post is the delay to wait for the site to actually be taken offline. You can also create a script that just calls msdeploy.exe directly instead of integrating it into the project build/publish process.
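A rough sketch of such a script (the site name, paths and wait time are placeholders; remote deployments would also need computerName/credential arguments):
rem 1. Take the site offline by dropping app_offline.htm
msdeploy -verb:sync -source:contentPath="C:\deploy\app_offline.htm" -dest:contentPath="MySite/app_offline.htm"
rem 2. Wait for ASP.NET to notice the file and unload the app
timeout /t 30
rem 3. Publish, skipping app_offline.htm so the sync doesn't delete it
msdeploy -verb:sync -source:contentPath="C:\deploy\MySite" -dest:contentPath="MySite" -skip:objectName=filePath,absolutePath=app_offline\.htm
rem 4. Bring the app back online
msdeploy -verb:delete -dest:contentPath="MySite/app_offline.htm"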
I have found the reason why the solution at
http://blogs.msdn.com/b/webdev/archive/2013/10/30/web-publishing-updates-for-app-offline-and-usechecksum.aspx
did not work for the original poster, and I have a workaround.
The issue with the EnableMSDeployAppOffline approach is that it only recycles the app domain hosting the application. It does not recycle the app pool worker process (w3wp.exe) in which the app domain lives.
Tearing down and recreating the app domain will not affect the SQL Server Spatial dlls in question. Those dlls are unmanaged code, loaded manually via interop LoadLibrary calls, so they live outside the purview of the app domain.
In order to release the file locks which the app pool process puts on them, you need to either recycle the app pool or unload the dlls from memory manually.
The Microsoft.SqlServer.Types nuget package ships a class called SqlServerTypes.Utilities which is used to load the Spatial dlls. You can modify its LoadNativeAssemblies method to unload the unmanaged dlls when the app domain unloads. With this modification, when msdeploy copies app_offline.htm the app domain unloads and the unmanaged dlls are freed as well.
// Requires: using System; using System.IO; using System.Runtime.InteropServices;
[DllImport("kernel32.dll", SetLastError = true)]
internal extern static IntPtr LoadLibrary(string libname);

[DllImport("kernel32.dll", SetLastError = true)]
internal extern static bool FreeLibrary(IntPtr hModule);

private static IntPtr _msvcrPtr = IntPtr.Zero;
private static IntPtr _spatialPtr = IntPtr.Zero;

public static void LoadNativeAssemblies(string rootApplicationPath)
{
    if (_msvcrPtr != IntPtr.Zero || _spatialPtr != IntPtr.Zero)
        throw new Exception("LoadNativeAssemblies already called.");

    var nativeBinaryPath = IntPtr.Size > 4
        ? Path.Combine(rootApplicationPath, @"SqlServerTypes\x64\")
        : Path.Combine(rootApplicationPath, @"SqlServerTypes\x86\");

    _msvcrPtr = LoadNativeAssembly(nativeBinaryPath, "msvcr100.dll");
    _spatialPtr = LoadNativeAssembly(nativeBinaryPath, "SqlServerSpatial110.dll");

    // When the app domain unloads (e.g. because app_offline.htm was dropped),
    // free the native libraries so their file locks are released.
    AppDomain.CurrentDomain.DomainUnload += (sender, e) =>
    {
        if (_msvcrPtr != IntPtr.Zero)
        {
            FreeLibrary(_msvcrPtr);
            _msvcrPtr = IntPtr.Zero;
        }
        if (_spatialPtr != IntPtr.Zero)
        {
            FreeLibrary(_spatialPtr);
            _spatialPtr = IntPtr.Zero;
        }
    };
}

private static IntPtr LoadNativeAssembly(string nativeBinaryPath, string assemblyName)
{
    var path = Path.Combine(nativeBinaryPath, assemblyName);
    var ptr = LoadLibrary(path);
    if (ptr == IntPtr.Zero)
        throw new Exception(string.Format("Error loading {0} (ErrorCode: {1})",
            assemblyName, Marshal.GetLastWin32Error()));
    return ptr;
}
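For reference, the conventional call site for this loader (per the NuGet package's readme) is Application_Start, passing the web application's bin folder:
// In Global.asax.cs
protected void Application_Start()
{
    // Load the native dlls once, from the web application's bin folder.
    SqlServerTypes.Utilities.LoadNativeAssemblies(Server.MapPath("~/bin"));
}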
There is one caveat with this approach. It assumes your application is the only one running in the worker process that is using the Spatial dlls. Since app pools can host multiple applications the file locks will not be released if another application has also loaded them. This will prevent your deploy from working with the same file locked error.
There are known issues with IIS and file locks (why they aren't solved yet, I don't know).
The question I want to ask, however, is whether you even need to re-deploy these files.
I recognize the file names and recall them to be system files which should either already be present on the server or simply not need to be re-deployed.
I am not very experienced when it comes to IIS, but I have run into this problem before, and several of my more experienced co-workers have told me that this is, as I said, a known IIS issue. I believe the answer to your question is:
Avoid deploying unnecessary files.
Try again
Reset the website
Try again
Run iisreset
I think the easiest thing to do is to mark these dlls as Copy Local = true. I am assuming these dlls are pulled from the Program Files folder. Try marking them as Copy Local = true and do a deployment. Also try stopping any local IIS process running on your machine.
Watch out that you don't have one of those new-fangled cloud backup services running that takes file locks - and also that you don't have things open in Explorer or a DLL inspection tool.
I think it's kind of ridiculous that MS doesn't make better provisions for this problem. I find that 9 times out of 10 my deployment works just fine, but as our traffic increases that can drop to 1 in 10.
I am going to solve the problem with:
Two applications, MySite.A and MySite.B, where only one is running at a time.
I always then deploy to the dormant site.
If there's a problem during the deployment it will never cause the whole site to go down.
If there's a major problem after deployment you can revert back very easily.
Not quite sure how I'm implementing it, but I think this is what I need to do.

Automatically correct .vs/config/applicationhost.config

Visual Studio 2015 has moved the IIS Express configuration file from My Documents into its own hidden .vs directory and tells IISExpress.exe to use that configuration file.
While this may sound like a good idea when you have multiple web applications with conflicting configurations, in fact it's not when you have web applications with custom configurations at all, because this config file now lives in a transient directory that can be deleted when checking out code into a new location from source control, or when cleaning things up in general.
The old location in the Documents folder is a central location, and files there usually won't be deleted. If I need to make changes to this file, like allowing the application to use Windows authentication, I edit it once and I'm done. Now with the new location, I may need to make these changes multiple times. And since there are two config files (the old location still exists for me), it's even more confusing.
Also, you shouldn't commit anything from .vs into version control, so it's not even possible to share these customisations with other team members.
What is the recommended way to update the applicationhost.config file so that it provides the environment required by the web application? Is there a tool that can run in a pre/post build step? Is there a customisation XML file that can be merged into the default file from .vs?
Is it possible at all to just check out a web application from source control and let it work in a local IIS Express? Or will it always require manual corrections after getting the code and web.config?
PS: I've forgotten one thing: There's the <UseGlobalApplicationHostFile> element in the project file which can be set to true, but as soon as I do that, I get an access denied error message every time I want to start the web application. So that doesn't seem to work well.

ASP.NET File IO Issue - System.UnauthorizedAccessException - stranger than you think

I am getting a strange issue where I seem to have read access, because I can
1. Get a list of files from a directory (Directory.GetFiles())
2. Load an XML document using XmlDocument instance's Load() method
But I can't use File.ReadAllText() to load a text file into memory; it gives me a System.UnauthorizedAccessException. I am not even trying to read from a network directory, just a local one. I've also used System.Security.Principal.WindowsIdentity.GetCurrent().Name to check the current user, which is [CompanyDomain]/[MyUserName], and this user has full access to the directory I am using. I've also checked that the directory actually exists.
My environment
1. Windows Server 2003 Standard Edition
2. Visual Studio 2008
3. Just using the built-in web server that launches every time I run the project.
Note: I couldn't find the IUSR_MACHINENAME user on this machine.
Any idea what steps I should take next?
Cheers,
James
One thing to verify:
is the file that can be Load()-ed by XmlDocument the very same file that cannot be ReadAllText()-ed?
When things get odd like this, I've found that turning auditing on, at the level of the directory or even of the file, often ends up pointing me towards a proper diagnosis and hence resolving the issue.
Also: in looking at the online reference for ReadAllText() I noted that (oddly, I think) this exception can be caused by:
path specified a file that is read-only.
Not sure why write access should be sought by this apparently read-only operation, but maybe just try making the file readable and writable.
http://msdn.microsoft.com/en-us/library/72wdk8cc%28VS.71%29.aspx
Try enabling impersonation in web.config, so the worker process runs under your Windows identity:
<identity impersonate="true" />

How do you deploy your ASP.NET applications to live servers?

I am looking for different techniques/tools you use to deploy an ASP.NET web application project (NOT ASP.NET web site) to production?
I am particularly interested in the workflow happening between the time your Continuous Integration build server drops the binaries at some location and the time the first user request hits those binaries.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Feel free to complete the previous list.
And here's what we use to deploy our ASP.NET applications:
We add a Web Deployment Project to the solution and set it up to build the ASP.NET web application
We add a Setup Project (NOT Web Setup Project) to the solution and set it to take the output of the Web Deployment Project
We add a custom install action and in the OnInstall event we run a custom build .NET assembly that creates an App Pool and a Virtual Directory in IIS using System.DirectoryServices.DirectoryEntry (this task is performed only the first time an application is deployed; a sketch of the approach follows this list). We support multiple Web Sites in IIS, Authentication for Virtual Directories and setting identities for App Pools.
We add a custom task in TFS to build the Setup Project (TFS does not support Setup Projects so we had to use devenv.exe to build the MSI)
The MSI is installed on the live server (if there's a previous version of the MSI it is first uninstalled)
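For illustration, the IIS part of that custom action boils down to ADSI calls like these (the site number, names and paths are placeholders; this targets the IIS 6 metabase):
using System.DirectoryServices;

// Create the application pool.
using (var appPools = new DirectoryEntry("IIS://localhost/W3SVC/AppPools"))
using (var pool = appPools.Children.Add("MyAppPool", "IIsApplicationPool"))
{
    pool.CommitChanges();
}

// Create the virtual directory under web site 1 and bind it to the pool.
using (var root = new DirectoryEntry("IIS://localhost/W3SVC/1/Root"))
using (var vdir = root.Children.Add("MyApp", "IIsWebVirtualDir"))
{
    vdir.Properties["Path"].Value = @"C:\inetpub\MyApp";
    vdir.CommitChanges();
    // AppCreate3 arguments: 2 = run in an application pool, pool name, create pool if missing.
    vdir.Invoke("AppCreate3", new object[] { 2, "MyAppPool", true });
}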
We have all of our code deployed in MSIs using Setup Factory. If something has to change we redeploy the entire solution. This sounds like overkill for a css file, but it absolutely keeps all environments in sync, and we know exactly what is in production (we deploy to all test and uat environments the same way).
We do rolling deployment to the live servers, so we don't use installer projects; we have something more like CI:
"live" build-server builds from the approved source (not the "HEAD" of the repo)
(after it has taken a backup ;-p)
robocopy publishes to a staging server ("live", but not in the F5 cluster)
final validation done on the staging server, often with "hosts" hacks to emulate the entire thing as closely as possible
robocopy /L is used automatically to distribute a list of the changes in the next "push", to alert us of any goofs (a sketch of this step follows below)
as part of a scheduled process, the cluster is cycled, deploying to the nodes in the cluster via robocopy (while they are out of the cluster)
robocopy automatically ensures that only changes are deployed.
Re the App Pool etc; I would love this to be automated (see this question), but at the moment it is manual. I really want to change that, though.
(it probably helps that we have our own data-centre and server-farm "on-site", so we don't have to cross many hurdles)
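For illustration, that robocopy /L step amounts to something like this (the UNC paths are placeholders):
rem List what the next push would change, without copying anything (/L = list only)
robocopy \\build\drop\site \\webnode1\wwwroot /MIR /L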
Website
Deployer:
http://www.codeproject.com/KB/install/deployer.aspx
I publish the website to a local folder, zip it, then upload it over FTP. Deployer on the server then extracts the zip, replaces config values (in Web.Config and other files), and that's it.
Of course for the first run you need to connect to the server and set up the IIS website and database, but after that, publishing updates is a piece of cake.
Database
For keeping databases in sync I use http://www.red-gate.com/products/sql-development/sql-compare/
If the server is behind a bunch of routers and you can't connect to it directly (which SQL Compare requires), use https://secure.logmein.com/products/hamachi2/ to create a VPN.
I deploy mostly ASP.NET apps to Linux servers and redeploy everything for even the smallest change. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
Simple XCopy for ASP.NET. Zip it up, SFTP it to the server, extract into the right location. For the first deployment, manual setup of IIS.
Answering your questions:
XCopy
Manually
For static resources, we only deploy the changed resource.
For DLL's we deploy the changed DLL and ASPX pages.
Yes, and yes.
Keeping it nice and simple has saved us a lot of headaches so far.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
As a developer for BuildMaster, this is naturally what I use. All applications are built and packaged within the tool as artifacts, which are stored internally as ZIP files.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Manually - we create a change control within the tool that reminds us the exact steps to perform in future environments as the application moves through its testing environments. This could also be automated with a simple PowerShell script, but we do not add new applications very often so it's just as easy to spend the 1 minute it takes to create the site manually.
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
By default, the process of deploying artifacts is set-up such that only files that are modified are transferred to the target server - this includes everything from CSS files, JavaScript files, ASPX pages, and linked assemblies.
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Yes, BuildMaster handles all of this for us. Restoring is mostly as simple as re-executing an old build promotion, but sometimes database changes need to be manually restored, and data loss can occur. The basic rollback process is detailed here: http://inedo.com/support/tutorials/performing-a-deployment-rollback-with-buildmaster
web setup/install projects - so you can easily uninstall it if something goes wrong
Unfold is a capistrano-like deployment solution I wrote for .net applications. It is what we use on all of our projects and it's a very flexible solution. It solves most of the typical problems for .net applications as explained in this blog post by Rob Conery.
it comes with a good "default" behavior, in the sense that it does a lot of standard stuff for you: getting the code from source control, building, creating the application pool, setting up IIS, etc
releases based on what's in source control
it has task hooks, so the default behaviour can be easily extended or altered
it has rollback
it's all powershell, so there aren't any external dependencies
it uses powershell remoting to access remote machines
Here's an introduction and some other blog posts.
So to answer the questions above:
How is the application packaged (ZIP, MSI, ...)?
Git (or another SCM) is the default way to get the application onto the target machine. Alternatively you can perform a local build and copy the result over the PowerShell remoting connection.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Unfold configures the application pool and website application using Powershell's WebAdministration Module. It allows us (and you) to modify any aspect of the application pool or website
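Under the hood, that boils down to WebAdministration calls along these lines (pool, site and path names are placeholders):
# The WebAdministration module ships with IIS 7+
Import-Module WebAdministration
New-WebAppPool -Name "MyAppPool"
New-Website -Name "MySite" -Port 80 -PhysicalPath "C:\sites\MySite" -ApplicationPool "MyAppPool"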
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Yes, unfold does this; every deploy is installed next to the others, so we can easily roll back when something goes wrong. It also allows us to easily trace a deployed version back to a source control revision.
Do you keep track of all deployed versions for a given application?
Yes, unfold keeps old versions around. Not all versions, but a number of versions. It makes rolling back almost trivial.
We've been improving our release process for the past year and now we've got it down pat. I'm using Jenkins to manage all of our automated builds and releases, but I'm sure you could use TeamCity or CruiseControl.
So upon checkin, our "normal" build does the following:
Jenkins does a SVN update to fetch the latest version of the code
A NuGet package restore is done running against our own local NuGet repository
The application is compiled using MSBuild. Setting this up is an adventure, because you need to install the correct MSBuild and then the ASP.NET and MVC dlls on your build box. (As a side note, when I had <MvcBuildViews>true</MvcBuildViews> in my .csproj files to compile the views, MSBuild was randomly crashing, so I had to disable it)
Once the code is compiled, the unit tests are run (I'm using NUnit for this, but you can use anything you want)
If all the unit tests pass, I stop the IIS app pool, deploy the app locally (just a few basic XCOPY commands to copy over the necessary files) and then restart IIS (I've had problems with IIS locking files, and this solved it)
I have separate web.config files for each environment: dev, uat, prod. (I tried using the web transformation stuff with little success.) So the right web.config file is also copied across
I then use PhantomJS to execute a bunch of UI tests. It also takes a bunch of screenshots at different resolutions (mobile, desktop) and stamps each screenshot with some information (page title, resolution). Jenkins has great support for handling these screenshots and they are saved as part of the build
Once the integration UI tests pass the build is successful
If someone clicks "Deploy to UAT":
If the last build was successful, Jenkins does another SVN update
The application is compiled using a RELEASE configuration
A "www" directory is created and the application is copied into it
I then use WinSCP to synchronise the filesystem between the build box and UAT
I send an HTTP request to the UAT server and make sure I get back a 200
This revision is tagged in SVN as UAT-datetime
If we've got this far, build is successful!
When we click "Deploy to Prod":
The user selects a UAT Tag that was previously created
The tag is "switched" to
Code is compiled and synced with Prod server
Http request to Prod server
This revision is tagged in SVN as Prod-datetime
The release is zipped and stored
All up a full build to production takes about 30 secs which I'm very, very happy with.
Upsides to this solution:
It's fast
Unit tests should catch logic errors
When a UI bug gets into production, the screenshots will hopefully show which revision caused it
UAT and Prod are kept in sync
Jenkins shows you a great release history to UAT and Prod with all of the commit messages
UAT and Prod releases are all tagged automatically
You can see when releases happen and who did them
The main downsides to this solution are:
Whenever you do a release to Prod you need to do a release to UAT. This was a conscious decision we made because we wanted to ensure that UAT is always up to date with Prod. Still, it's a pain.
There are quite a few configuration files floating around. I've attempted to keep it all in Jenkins, but a few supporting batch files are needed as part of the process. (These are also checked in.)
DB upgrade and downgrade scripts are part of the app and run at app startup. It works (mostly), but it's a pain.
I'd love to hear any other possible improvements!
Back in 2009, where this answer hails from, we used CruiseControl.net for our Continuous Integration builds, which also outputted Release Media.
From there we used Smart Sync software to compare against a production server that was out of the load balanced pool, and moved the changes up.
Finally, after validating the release, we ran a DOS script that primarily used RoboCopy to sync the code over to the live servers, stopping/starting IIS as it went.
At the last company I worked for, we used to deploy using an rsync batch file to upload only the changes since the last upload. The beauty of rsync is that you can add exclude lists for specific files or filename patterns. So excluding all of our .cs files, solution and project files is really easy, for instance.
We were using TortoiseSVN for version control, and so it was nice to be able to write in several SVN commands to accomplish the following:
First off, check the user has the latest revision. If not, either prompt them to update or run the update right there and then.
Download a text file from the server called "synclog.txt" that details who the SVN user is, what revision number they are uploading and the date and time of the upload. Append a new line for the current upload and then send it back to the server along with the changed files. This makes it extremely easy to find out what version of the site to roll back to on the off chance that an upload causes problems.
In addition to this there is a second batch file that just checks for file differences on the live server. This can highlight the common problem where someone would upload but not commit their changes to SVN. Combined with the sync log mentioned above we could find out who the likely culprit was and ask them to commit their work.
And lastly, rsync allows you to take a backup of the files that were replaced during the upload. We had it move them into a backup folder, so if you suddenly realised that some files should not have been overwritten, you could find the last backed-up version of every file in that folder.
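A sketch of what such an rsync invocation could look like (host, paths, excludes and the backup folder are placeholders, not the original script):
:: Upload only the changes; keep replaced files in a backup folder for rollback
rsync -rvz --exclude "*.cs" --exclude "*.sln" --exclude "*.csproj" --backup --backup-dir=backup/ ./site/ deployuser@liveserver:/var/www/site/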
While the solution felt a little clunky at the time I have since come to appreciate it a whole lot more when working in environments where the upload method is a lot less elegant or easy (remote desktop, copy and paste the entire site, for instance).
I'd recommend NOT just overwriting existing application files, but instead creating a directory per version and repointing the IIS application to the new path.
This has several benefits:
Quick to revert if needed
No need to stop IIS or the app pool to avoid locking issues
No risk of old files causing problems
More or less zero downtime (usually just a pause as the new appdomain initialises)
The only issue we've had is resources being cached if you don't restart the app pool and rely on the automatic appdomain switch.
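A sketch of the repointing step using appcmd (the site name and release path are placeholders):
rem Deploy each version to its own folder, then repoint the IIS application
appcmd set vdir "Default Web Site/" /physicalPath:"C:\releases\v42"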
